For context:

For comparison, I’m copying the same files to the same USB drive from Windows and from my Fedora 41 Workstation.

Around 10k photos.

Windows PC: dual-core AMD Athlon from 2009, 4 GB RAM, old HDD; takes around 40 min to copy the files to the USB stick

Linux PC: 5800X3D, 64 GB RAM, NVMe SSD; takes around 3 h to copy the same files to the same USB stick

I’ve tried changing from NTFS to exFAT, but I get the same result. What can I do to improve this? It’s really annoying.

  • neidu3@sh.itjust.works · 1 month ago

    I find that it’s around the same, except Linux waits to update the UI until all write buffers are flushed, whereas Windows does not.
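
    If you’re curious, the kernel knobs that decide how much data can sit in the write cache before it gets flushed are readable with sysctl (just a sketch; your defaults may differ):

        # current dirty-page thresholds, as a percentage of RAM
        sysctl vm.dirty_background_ratio vm.dirty_ratio
        # how long dirty data may age before writeback kicks in (centiseconds)
        sysctl vm.dirty_expire_centisecs vm.dirty_writeback_centisecs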

    • Buffalox@lemmy.world · 1 month ago

      except Linux waits to update the UI until all write buffers are flushed, whereas Windows does not.

      I wish that were true here. But when I copy movies to a USB stick, the file manager (XFCE/Thunar) shows the copy as finished and closes the copy notification long before it’s even half done.
      I use a fast USB 3 stick in a USB 3 port, and I don’t get anywhere near the write speed the stick manufacturer claims. So I always open a terminal and run sync to see when it’s actually finished.
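
      What I usually do looks roughly like this (the path is just an example):

          # copy, then block until every buffered byte is actually on the stick
          cp ~/Videos/movie.mkv /run/media/$USER/stick/
          time sync    # the 'real' time here is roughly the hidden flush time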

      I absolutely hate it when systems don’t account for the write cache before claiming a copy is finished. It’s an ancient problem we’ve had since the ’90s, and I find it embarrassing that it still exists on modern systems.

    • WereCat@lemmy.world (OP) · 1 month ago

      That’s nice, but I managed to copy 300 GB of data from the Windows PC to my Linux PC in around 3 hours to make a backup while I reinstall the system, and now I’ve been stuck for half a day copying the data back to the old Windows PC and haven’t even finished 100 GB yet… I noticed this issue long ago but ignored it, since I never really had to copy this much data. Now it’s just infuriating.

  • jollyrogue@lemmy.ml · 1 month ago

    Random peripherals get tested against Windows a lot more than against Linux, and there are quirks that get worked around.

    I would suggest an external SSD for any drive over 32GB. Flash drives are kind of junk in general, and the external SSDs have better controllers and thermals.

    Out of curiosity, was the drive reformatted between runs, and was a Linux-native FS tried on the flash drive?

    A Linux-native FS wouldn’t help with migrating files between Windows and Linux, but it would be interesting to see exFAT or NTFS vs. XFS/ext4/F2FS. Something like the sketch below would do for testing.
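
    (Untested sketch; replace /dev/sdX1 with the stick’s actual partition, check with lsblk first, and note that this erases the drive:)

        lsblk -o NAME,SIZE,MODEL              # identify the USB stick
        sudo umount /dev/sdX1                 # make sure it isn't mounted
        sudo mkfs.ext4 -L usbtest /dev/sdX1   # or mkfs.xfs / mkfs.f2fs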

  • Eugenia@lemmy.ml · 1 month ago

    That’s just the state of things. I’ve experienced this as well, trying to copy a 160 GB USB stick to another one (my old iTunes library). Windows manages fine, but neither Linux nor macOS does it properly. They crawl, and in macOS’s case it gets much slower as time goes by; I had to stop the transfer. Overall, it’s how these things are implemented. It’s OK for a few gigabytes, but not for many small files (e.g. 3-5 MB each) in many subfolders adding up to many GBs. It seems to me that some cache is overfilling, while Windows is more diligent about clearing that cache in time, before things slow to a crawl. Just a weak implementation on both Linux and macOS IMHO, and while I’m a full-time Linux user, I’m not afraid to say it as I experienced it under Debian and Ubuntu.
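
    If the cache really is the culprit, a workaround I’ve seen suggested is to cap how much dirty data the kernel will buffer, so it flushes early and often instead of letting the cache balloon (the sizes here are illustrative, not tuned):

        sudo sysctl vm.dirty_background_bytes=16777216   # start background flushing at ~16 MiB
        sudo sysctl vm.dirty_bytes=50331648              # hard limit at ~48 MiB
        # to revert, restore the ratio-based defaults (20/10 on many distros)
        sudo sysctl vm.dirty_ratio=20 vm.dirty_background_ratio=10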

  • merthyr1831@lemmy.ml · 1 month ago

    Depends on the distro and desktop environment, but some will “transfer” files into a software buffer that doesn’t actually write the data immediately. That works for limiting unnecessary writes to internal flash storage, but not for USB sticks, which are designed to be inserted and removed at short notice.

    You can force Linux to commit pending writes using the ‘sync’ command. Note that it won’t give you any feedback until the operation is finished (which can take multiple minutes for a thumb drive writing GBs of data), so append & to the command (‘sync &’) to run it as its own process and avoid locking up the terminal.

    You can also watch the progress using the command from this Linux Stack Exchange question:

    https://unix.stackexchange.com/questions/48235/can-i-watch-the-progress-of-a-sync-operation#48245
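
    For reference, the command from that answer is essentially:

        # shows how much dirty data is still waiting to be written out
        watch -d grep -e Dirty: -e Writeback: /proc/meminfo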


    Side question, though: it seems there are faster options, so how come GUI file explorers don’t use them?

    • WereCat@lemmy.world (OP) · 1 month ago

      As I’ve already mentioned, sync does absolutely nothing here. The copy took so long that the sync command exited four times while the files were still transferring and nowhere near finished. As for the watch -d grep -e Dirty: -e Writeback: /proc/meminfo command, I didn’t mention it in this thread, but I did try it, and yes, there was almost 900,000 kB of data in the “Dirty” buffer, going up and down constantly even after I disabled caching.

  • zarkanian@sh.itjust.works · 1 month ago

    I haven’t had this problem. Could it be the filesystem you’re using? Sometimes Linux gets weird with Windows filesystems. Try formatting it to ext4.