For context:
I’m copying the same files to the same USB drive for comparison from Windows and from my Fedora 41 Workstation.
Around 10k photos.
Windows PC: Dual Core AMD Athlon from 2009, 4GB RAM, old HDD, takes around 40min to copy the files to USB
Linux PC: 5800X3D, 64GB RAM, NVMe SSD, takes around 3h to copy the same files to the same USB stick
I’ve tried changing from NTFS to exFAT, but I get the same result. What can I do to improve this? It’s really annoying.
I find that it’s around the same, except Linux waits to update the UI until all write buffers are flushed, whereas Windows does not.
I wish that were true here. But when I copy movies to a USB stick, the file manager (XFCE/Thunar) shows the copy as finished and closes the copy notification way, way before it’s even half done.
I use a fast USB 3 stick in a USB 3 port, and I don’t get anywhere near the write speed the stick manufacturer claims. So I always open a terminal and run sync to see when it’s actually finished. I absolutely hate it when systems don’t account for the write cache before claiming a copy is finished. It’s an ancient problem we’ve had since the 90s, and I find it embarrassing that it still exists on modern systems.
I’ve run sync and it has already exited 4 times, and the copy is still going.
Yes, that’s annoying too. I have no clue why it does that, but when sync comes back clean, I always wait a couple of seconds and run sync again a couple of times to see if it’s actually finished. Only THEN do I unmount the stick.
Copying to USB does not seem very solid on Linux, IMO. So I also ALWAYS buy sticks with an activity LED.
But even that can fool you: sometimes when I think a smaller copy is finished, because the LED stops blinking, it suddenly starts up again after having paused for about 1½ seconds?!?!
Try checking the progress here:
https://unix.stackexchange.com/questions/48235/can-i-watch-the-progress-of-a-sync-operation#48245
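The linked answer boils down to watching the kernel’s pending-writeback counters in /proc/meminfo. A minimal sketch (the refresh interval is just illustrative):

```shell
# Dirty     = data modified in the page cache but not yet queued to the device
# Writeback = data currently being written out to the device
# When both stay near zero after a copy, the buffers have actually been flushed.
grep -e Dirty: -e Writeback: /proc/meminfo

# To watch the counters live while a copy or sync runs, refresh every 2 seconds:
# watch -d grep -e Dirty: -e Writeback: /proc/meminfo
```

Note that small residual values are normal; what matters is that the numbers stop shrinking toward zero only after the copy really completed.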
That’s nice, but I managed to copy 300 GB of data from the Windows PC to my Linux PC in around 3 hours to make a backup while I reinstall the system, and now I’ve been stuck for half a day copying the data back to the old Windows PC and haven’t even finished 100 GB yet… I noticed this issue long ago, but I ignored it since I never really had to copy this much data. Now it’s just infuriating.
rsync -aP <source>/ <dest>
I find it faster and more reliable than most GUI file managers.
-avP
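For anyone unfamiliar with the invocation above: the trailing slash on the source matters (it copies the directory’s contents rather than the directory itself). A quick local sketch using throwaway directories — on a real stick the destination would be wherever it’s mounted, e.g. somewhere under /run/media:

```shell
# Create a throwaway source tree to stand in for the photo folder.
src=$(mktemp -d)
dst=$(mktemp -d)
echo "hello" > "$src/photo1.jpg"

# -a = archive mode (recursive, preserves times/permissions),
# -P = show progress and keep partially transferred files for resuming.
rsync -aP "$src"/ "$dst"/

# Because of the trailing slash, photo1.jpg lands directly inside $dst.
ls "$dst"
```

Without the trailing slash on "$src", rsync would instead create a subdirectory of the same name inside the destination.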
Random peripherals get tested against Windows a lot more than against Linux, and there are quirks that get worked around.
I would suggest an external SSD for any drive over 32 GB. Flash drives are kind of junk in general, and external SSDs have better controllers and thermals.
Out of curiosity, was the drive reformatted between runs, and was a Linux-native FS tried on the flash drive?
A Linux-native FS doesn’t help migrate the files between Windows and Linux, but it would be interesting to see exFAT or NTFS vs XFS/ext4/F2FS.
That’s just the state of things. I have experienced this as well, trying to copy a 160 GB USB stick to another one (my old iTunes library). Windows manages fine, but neither Linux nor macOS does it properly. They crawl, and in macOS’s case it gets much slower as time goes by; I had to stop the transfer.

Overall, it’s how these things are implemented. It’s OK for a few gigabytes, but not a good case for many small files (e.g. 3–5 MB each) with many subfolders and many GBs overall. It seems to me that some cache is overfilling, while Windows is more diligent about clearing that cache in time, before things slow to a crawl. Just a weak implementation on both Linux and macOS, IMHO, and while I’m a full-time Linux user, I’m not afraid to say it how I experienced it under Debian and Ubuntu.
Depends on the distro and desktop environment, but some will “transfer” files to a software buffer without actually writing the data immediately. That works for limiting unnecessary writes on flash memory, but not for USB sticks that are designed to be inserted and removed at short notice.
You can force Linux to commit pending writes using the sync command. Note that it won’t give you any feedback until the operation is finished (multiple minutes for a thumb drive writing GBs of data), so append & to the command (sync &) to start it as its own process and avoid locking up the terminal.
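Putting that together, a rough sketch of running sync in the background while keeping an eye on the kernel’s pending-write counters (purely illustrative; the 2-second interval is arbitrary):

```shell
# Kick off the flush in the background so the terminal stays usable.
sync &
sync_pid=$!

# Poll the pending-write counters until the background sync exits.
# kill -0 sends no signal; it only checks whether the process still exists.
while kill -0 "$sync_pid" 2>/dev/null; do
    grep -e Dirty: -e Writeback: /proc/meminfo
    sleep 2
done

echo "sync finished"
```

When the loop ends and the Dirty/Writeback numbers have settled near zero, it should be safe to unmount the stick.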
You can also watch the progress using the command from this Unix & Linux Stack Exchange question:
https://unix.stackexchange.com/questions/48235/can-i-watch-the-progress-of-a-sync-operation#48245
Side question, though: it seems there are faster options. How come we don’t use those in GUI file explorers if they’re faster?
As I’ve already mentioned, sync does absolutely nothing. The copy took so long that the sync command exited 4 times while the files were still transferring and were nowhere near finished. Regarding the watch -d grep -e Dirty: -e Writeback: /proc/meminfo command: I did not mention it in this thread, but I did try it, and yes, there was almost 900,000 kB of data in the Dirty buffer that went up and down constantly, even after I’d disabled the caching.
I haven’t had this problem. Could it be the filesystem you’re using? Sometimes Linux gets weird with Windows filesystems. Try formatting it to ext4.