  • Gotcha, thanks for the info! It looks like I would be fine with OCIS or OpenCloud, but since my main use case and pain points are document editing, which is Collabora, it probably wouldn’t change much besides simplifying the docker setup (I had to make a gross pile of nginx config stuff, pieced together from many forum help posts, to get the Nextcloud FPM container to work smoothly). But it already works, so unless it breaks there’s little incentive for me to change.


  • What are the apps that you would miss? I basically only use my NC as a Google Drive and Docs replacement, so all it has to do is store docx files and let me edit them on desktop or mobile without being glitchy. I’ve really wanted to consider OCIS or something similar.

    That second requirement seems hard to me because of how complex office suites are, but NC is driving me to my wit’s end with how slow and error-prone it is, and how glitchy the NC office UI is (like glitching when selecting text or randomly scrolling you back to the beginning).



  • Hmm, well it doesn’t seem to be a problem with the docker compose then, as best I can tell. I picked a random ext4 flash drive and replicated your setup with the UID and GID set, and it seems to work fine:

    # /etc/fstab
    /dev/sda1       /home/<me>/mount/ext_hdd_01  ext4    defaults 0 2
    
    ~/mount % ls -an
    total 12
    drwxr-xr-x  3 1000 1000 4096 Mar 27 16:22 .
    drwx------ 86 1000 1000 4096 Mar 27 16:31 ..
    drwxrwxrwx  3    0    0 4096 Mar 27 16:26 ext_hdd_01
    
    ~/mount/ext_hdd_01 % ls -an
    total 6521728
    drwxrwxrwx 3    0    0       4096 Mar 27 16:26 .
    drwxr-xr-x 3 1000 1000       4096 Mar 27 16:22 ..
    -rw-r--r-- 1 1000 1000 6678214224 May  5  2024 PXL_20240504_233345242.mp4
    drwxrwxrwx 2    0    0      16384 May  5  2024 lost+found
    -rwxr--r-- 1 1000 1000          5 Mar 27 16:27 test.txt
    
    # ~/samba/docker-compose.yml
    services:
      samba:
        image: dockurr/samba
        container_name: samba
        environment:
          NAME: "Data"
          USER: "user"
          PASS: "pass"
          UID: "1000"
          GID: "1000"
        ports:
          - 445:445
        volumes:
          - /home/<me>/mount:/storage
        restart: always
    

    I was able to play the PXL.mp4 video from my desktop and write back the test.txt file.

    Have you checked the logs with docker logs -f samba to see if there’s anything there?

    Also, you could try to access the HDD from within the container using docker exec -it samba bash, then cd into /storage and see what happens.
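
    In case the exact commands help, something like this (the container name “samba” is from the compose above; the image may only ship sh rather than bash):

    # follow the samba container logs for share/auth errors
    docker logs -f samba

    # list the mounted share from inside the container (numeric owners, like ls -an on the host)
    docker exec -it samba ls -an /storage

    # or open an interactive shell inside the container and poke around /storage
    docker exec -it samba sh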


  • I would suggest adding “UID” and “GID” environment variables to the container and setting them to the numeric user and group IDs that show up in place of your name when you run “ls -an” inside of the “mount” folder (they will probably be the same number).

    For example, if inside your mount folder you see:

    ls -an
    total 12
    drwx------ 2 1001 1001 4096 Mar 27 13:54 .
    drwxr-xr-x 3 1000 1000 4096 Mar 27 13:51 ..
    -rwx------ 1 1001 1001    0 Mar 27 13:54 hello.txt
    -rwx------ 1 1001 1001    4 Mar 27 13:54 test.txt
    

    Then set UID: 1001 and GID: 1001

    I get the same error as you when I copy your docker-compose and try to access a folder owned by my user. When I add my user’s UID and GID (1001 for me) to the docker-compose, the error goes away.
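
    If it’s easier, you can also read the numbers directly instead of eyeballing the ls output (the mount path here is just an example):

    # numeric user and group IDs of your login user
    id -u
    id -g

    # or the numeric owner of the mount folder itself
    stat -c '%u %g' ~/mount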


  • What did you set UID and GID to, and what is the output of “ls -an” when run inside the shared directory? You can remove the file names for privacy. I just tested the docker container and it seems to work between my Linux laptop and my Windows 11 desktop using this docker compose:

    services:
      samba:
        image: dockurr/samba
        container_name: samba
        environment:
          NAME: "Data"
          USER: "samba"
          PASS: "secret"
          UID: "1000"
          GID: "1000"
        ports:
          - 445:445
        volumes:
          - ./samba:/storage
        restart: always
    

    The files in my shared folder are owned by UID/GID 1000/1000, which is why I put those as my UID/GID. When I logged in from Windows, I entered samba as the username and secret as the password, and I was able to access and modify the files in the shared folder.
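
    For what it’s worth, you can also sanity-check the share from the Linux side before involving Windows at all (needs smbclient installed; the share name and credentials are the ones from the compose above):

    # list the shares the container exposes (you'll be prompted for the password)
    smbclient -L //localhost -U samba

    # connect to the "Data" share and list its contents
    smbclient //localhost/Data -U samba -c 'ls'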



  • BakedCatboy@lemmy.ml to Plex@lemmy.ml: Important 2025 Plex Updates

    Tl;Dr

    • Plex pass price increase ($6.99/mo, $69.99/yr)
    • Non-LAN streaming from a personal Plex server will require either the server owner or the user to have Plex pass or the new “remote watch” subscription tier
    • No more $5 mobile unlock fee to watch in the mobile app, but there will now be a $2/mo “remote watch” subscription tier that unlocks the remote streaming mentioned above

    I’m glad this won’t affect Plex pass users (lifetime for over a decade in my case) who are sharing their server with non-paying friends, but I also hope this spurs more development in Jellyfin. If Plex decided to make it so that my non-paying friends couldn’t stream easily from my paid-for Plex server, I’d need Jellyfin to be a good alternative, and it currently doesn’t appeal to any of the friends I share with, so something like that would probably push them back to paid streaming.


  • I use it to auto-update nginx and haproxy containers; since they adhere very well to semver, there is very little risk of breakage if you use the correct tag and not just :latest. I haven’t had a single issue in many years, and it’s nice to know that I’ll get critical security updates within 24h of images being pushed.
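
    To illustrate what I mean by the correct tag (the exact tags here are just examples):

    # tracking a minor-version tag limits automatic updates to patch releases of that series
    docker pull nginx:1.27
    docker pull haproxy:3.0

    # whereas :latest can jump across major versions and break your config syntax
    docker pull nginx:latest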


  • You could do something like that using point-to-point wireless links, or just cables slung between buildings, to connect boxes running a self-organizing mesh network protocol like yggdrasil. But there are too many challenges for me to go into depth here, ranging from getting buy-in from enough people located in close proximity, managing user expectations of speed, making services available over such an overlay network (or managing and paying for proxies that provide access to the regular Internet), dealing with geography, etc.

    You’d basically be looking at replicating Freifunk or NYC Mesh, or doing something along those lines. As far as I can tell, NYC Mesh operates more like an ISP, so I would expect this to be at least as hard as what they do.

    Imo, time is better invested in developing and advancing decentralized applications and protocols, such as building things on BitTorrent/DHT or I2P, which can just take advantage of the existing Internet.


  • Sure, no biggie, I keep pretty meticulous records so it’s easy to check. My old place in the Boston metro was a 4br and used 600-1200 kWh, peaking in the summer, with natural gas heat and central AC. Now we’re in a 2br in a complex and get more free heat from our neighbors, and it ranges from 800-1100 kWh, with central heat-pump heating and AC, but since the heat isn’t gas anymore, heating is included in that figure.



  • Immich has a setting that does automatic photo backup over WiFi; I use the Android app as a Google Photos replacement. You can choose as many folders on your phone as you want (I just do the camera roll), enable backup only over WiFi, and it backs up all the photos in original quality. I self-host the server on my Synology behind a reverse proxy (I can’t forward ports at my current place due to CGNAT), so I can access it from anywhere.

    I believe the app is cross-platform, so the iPhone version should be identical to the Android one.


  • I once had someone open an issue in my side project repo asking about a major release bump and whether it meant there were any breaking changes or major changes, and I was just like, idk, I just thought I’d added enough and felt like bumping the major version ¯⁠\⁠_⁠(⁠ツ⁠)⁠_⁠/⁠¯


  • Woah federation would be huge!

    Someday I would love to be able to share and receive shared photos/albums to and from users on different servers, especially if it lets me sync the original files so that I can keep a copy in case their server goes down. It would also be neat if you could enable ActivityPub so that your account could show up as a fediverse user that people can follow for public or approved-followers-only posts; Pixelfed compatibility would be super cool.






  • Keep in mind that if you set up RAID using zfs or btrfs (idk how it works with other systems, but that’s what I’ve used) then you also get scrubs, which detect and fix bit rot and unrecoverable read errors (the commands are sketched at the end of this comment). Without that or a similar system, those errors will go undetected and your backup system will back up the corrupted files as well.

    Personally, one of the main reasons I used zfs and now btrfs with redundancy is to protect irreplaceable files (family memories and stuff) from those kinds of errors, as I used to just keep stuff on a hard drive until I discovered that loads of my irreplaceable vacation photos were corrupted, including the backups, which had backed up the corruption.

    If your files can be reacquired, then I don’t think it’s a big deal. But if they can’t, then I think having scrubs or integrity checks with redundancy so that issues can be repaired, as well as backups with snapshots to prevent errors or mistakes from messing up your backups, is a necessity. But it just depends on how much you value your files.
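
    For reference, the scrubs I mean are just the built-in ones; a rough sketch (the pool name and mount point below are placeholders):

    # zfs: scrub a pool, then check how many errors were found/repaired
    zpool scrub tank
    zpool status tank

    # btrfs: scrub a mounted filesystem and check the result
    btrfs scrub start /mnt/data
    btrfs scrub status /mnt/data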