Hello! 😀
I want to share my thoughts on Docker and maybe discuss it!
A few months ago I started my homelab, and like any good “homelabbing guy” I absolutely loved using Docker. Simple to deploy and everything. Sadly, these days my mind is changing… I recently switched to LXC containers to make backups easier, and the experience is pretty great; the only downside is that not every piece of software is available natively outside of Docker 🙃
I also switched to get more control, as Docker can make it difficult to set up things the devs didn’t really plan for.
So here are my thoughts: I’m slowly going to leave Docker for a more old-school way of hosting services. Don’t get me wrong, Docker is awesome in some use cases; the main ones are that it’s really portable and simple to deploy, with no hundreds of dependencies, etc. Through this I think I’ve figured out where Docker is genuinely useful, and it isn’t every single homelab setup, and it isn’t mine.

Maybe I’m doing something wrong, but I’ll let you discuss it in the comments, thx.

  • SpazOut@lemmy.world · 4 days ago

    For me the power of Docker is its inherent immutability. I want to be able to move a service around without having to manually tinker, install packages, change permissions, etc. It’s repeatable and reliable. However, getting to the point of understanding it well enough to do this reliably can be a huge investment of time. As a daily user of Docker (and k8s) I would use it every day over a VM. I’ve lost count of the number of VMs I’ve set up following an installation guide and missed a single step, so machines that should be identical aren’t. I do understand the frustration with it when you first start, but IMO stick with it, as the benefits are huge.

    • foremanguy@lemmy.ml (OP) · 1 day ago

      Yeah, Docker is great for this and it’s really a pleasure to deploy apps so quickly, but the problems come later: if you want to really customize the service to your needs, you can’t, short of building your own image…

      • SpazOut@lemmy.world · 1 day ago

        In most cases you can get away with over-mounting configuration files inside the container. In extreme cases you can build your own image - but the steps for that are just the changes you would otherwise have applied manually on a VM. At least that image is repeatable, and you can bring it up somewhere else without having to reapply all those changes in a panic.
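
        For example, a compose file can bind-mount a locally edited config over the one shipped in the image (a minimal sketch; the nginx image and paths are just an illustration):

        services:
          nginx:
            image: nginx:1.27
            ports:
              - "8080:80"
            volumes:
              # over-mount the image's default config with a locally edited copy
              - ./nginx.conf:/etc/nginx/nginx.conf:ro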

  • Decq@lemmy.world · 4 days ago

    I’ve never really liked the convoluted Docker tooling. And I’ve been hit a few times with a Docker image update just breaking everything (looking at you, Nginx Proxy Manager…). Now I’ve converted everything to NixOS services/containers, and I couldn’t be happier with the ease of configuration and control. Backup is just a matter of pushing my flake to GitHub and I’m done.

  • beerclue@lemmy.world · 6 days ago

    I’m actually doing the opposite :)

    I’ve been using VMs, LXC containers and Docker for years. In the last 3 years or so, I’ve slowly moved to just Docker containers. I still have a few VMs, of course, but they only run Docker :)

    Containers are a breeze to update, there is no dependency hell, no separate VM for each app…
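
    To make “a breeze” concrete, the whole update routine with compose (v2 syntax) is just the two standard commands:

    docker compose pull   # fetch newer images for every service in the compose file
    docker compose up -d  # recreate only the containers whose images changed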

    More recently, I’ve been trying out Kubernetes. Mostly to learn and experiment, since I use it at work.

  • ikidd@lemmy.world · 5 days ago

    Are you using docker-compose and local bind mounts? If not, you’re making backing up much harder than it needs to be. It’s certainly easier than backing up LXCs, and a whole lot easier to restore.
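
    A minimal sketch of what I mean (the image name and paths are made up): keep everything the service writes in a bind mount next to the compose file, and a backup is just a copy of that one directory.

    services:
      app:
        image: ghcr.io/example/app:latest    # hypothetical image
        restart: unless-stopped
        volumes:
          # all state lives in ./data next to this compose file,
          # so backing up the service means archiving one directory
          - ./data:/var/lib/app

    Stop the stack, archive ./data together with the compose file, and you can restore on any other box.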

  • SanndyTheManndy@lemmy.world · 4 days ago

    I used Docker for my home server for several years, but managing everything with a single docker-compose file that I edit over SSH became too tiring, so I moved to Kubernetes using k3s. Painless setup, and far easier to control and monitor remotely. The learning curve is there, but I already use Kubernetes at work. For starters, it’s way easier to set up routing and storage with k3s than juggling volumes was with Docker.
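
    A rough sketch of why (resource names are made up): storage is a PersistentVolumeClaim that k3s’s bundled local-path provisioner satisfies, and routing is an Ingress picked up by the bundled Traefik.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: app
    spec:
      rules:
        - host: app.home.lan            # hypothetical hostname
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: app           # assumes a Service named "app" exists
                    port:
                      number: 80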

      • SanndyTheManndy@lemmy.world · 1 day ago

        Both are ways to manage containers, and both can use the same container runtime under the hood, IIRC. They differ in how they manage the containers: docker/docker-compose is suited to development or one-off services, while Kubernetes is more suitable for running and managing a bunch of containers in production, across machines, etc. Think of Kubernetes as the Pokémon evolution of Docker.

      • SanndyTheManndy@lemmy.world · 4 days ago

        Several services are interlinked, and I want to share configs across services. Docker doesn’t provide a clean interface for separating and bundling network interfaces, storage, and containers the way k8s does.

  • InnerScientist@lemmy.world · 6 days ago

    I use Podman with home-manager configs. I could run the services natively, but currently I have a user for each service that runs that service’s Podman containers. This way each service is securely isolated from the others and from the rest of the system. Maybe if/when NixOS supports good SELinux rules I’ll switch back to running things natively.

    • agile_squirrel@lemmy.ml · 5 days ago

      This sounds great! I’d love to see your config. I’m not using home-manager, but I have one non-root user for all Podman containers. One user per service seems like a great setup.

      • InnerScientist@lemmy.world · 5 days ago (edited)

        Yeah, it works great and is very secure, but every time I create a new service there’s a lot of copy-paste boilerplate. Maybe I’ll put most of that into a Nix function at some point, but until then here’s an example n8n config, as loaded from the main NixOS file.

        I wrote this last night for testing purposes and just added comments. The config works, but n8n uses SQLite and probably needs some other stuff that I haven’t had a chance to use yet, so keep that in mind.
        Podman support in home-manager is also really new and doesn’t support pods (multiple containers, one loopback) and some other stuff yet. Most of that can be compensated for with extraPodmanArgs, but before this existed I used pure file definitions to write quadlet/systemd configs, which was even more boilerplate, but also mostly copypasta.

        Gaze into the boilerplate
        { config, pkgs, lib, ... }:

        {
            users.users.n8n = {
                # calculate sub{u,g}id ranges from the uid
                subUidRanges = [{
                    startUid = 100000 + 65536 * (config.users.users.n8n.uid - 999);
                    count = 65536;
                }];
                subGidRanges = [{
                    startGid = 100000 + 65536 * (config.users.users.n8n.uid - 999);
                    count = 65536;
                }];
                isNormalUser = true;
                linger = true; # start user services on system start; the first start after `nixos-rebuild switch` still has to be done manually for some reason though
                openssh.authorizedKeys.keys = config.users.users.root.openssh.authorizedKeys.keys; # allow the ssh keys that can log in as root to log in as this user too
            };
            home-manager.users.n8n = { pkgs, ... }:
            let
                dir = config.users.users.n8n.home;
                data-dir = "${dir}/${config.users.users.n8n.name}-data"; # defines the path "/home/n8n/n8n-data" using evaluated home paths; could probably remove a lot of redundant n8n definitions
            in
            {
                home.stateVersion = "24.11";
                systemd.user.tmpfiles.rules =
                let
                    folders = [
                        "${data-dir}"
                        #"${data-dir}/data-volume-name-one"
                    ];
                    formatted_folders = map (folder: "d ${folder} - - - -") folders; # format each path for systemd-tmpfiles so it gets created as a folder
                in formatted_folders;

                services.podman = {
                    enable = true;
                    containers = {
                        n8n-app = { # define a container; the service name is "podman-n8n-app.service", in case you need to make multiple containers depend on and run after each other
                            image = "docker.n8n.io/n8nio/n8n";
                            ports = [
                                "${config.local.users.users.n8n.listenIp}:${toString config.local.users.users.n8n.listenPort}:5678" # I'm using a self-defined option to keep track of all ports and uids in a separate file; these values just map to "127.0.0.1:30023:5678", and a Caddy reverse proxy points there using the same port option.
                            ];
                            volumes = [
                                "${data-dir}:/home/node/.n8n" # the folder we created above
                            ];
                            userNS = "keep-id:uid=1000,gid=1000"; # n8n stores files as a non-root user inside the container, so outside they end up owned by some high uid that the user running these containers can't read. This maps uid 1000 inside the container to the uid of the user running podman. Generating the podman image takes a long time on the first run though, so make sure systemd doesn't time out.
                            environment = {
                                # MYHORSE = "amazing";
                            };
                            # there's also an environmentFile option for secret management, which works with sops if you set the owner of the secret/secret template
                            extraPodmanArgs = [
                                "--pull=newer" # always pull newer images when starting; I could make this declarative, but I haven't found a good way to automagically update the container hashes in my nix config at the push of a button.
                            ];
                            # a few more options exist that I didn't need here
                        };
                    };
                };
            };
        }
        
        
  • mesamune@lemmy.world · 6 days ago (edited)

    Honestly, after using Docker and containerization for more than a decade, my home setups are just YunoHost or bare metal (a small Pi) with some periodic backups. I care more about my own time now than my home setup, and I want things to just be stable. It’s been good for a couple of years now, with nothing needed beyond some quick updates. You don’t have to deal with infra changes on updates, you don’t have to deal with slowdowns; everything works pretty well.

    At work it’s different: Docker, Kubernetes, etc. are awesome because they can deal gracefully with dependencies, multiple deploys per day, and large infra. But I’ll be the first to admit that takes a bit more manpower, and monitoring systems that are much better than a small home setup’s.

    • foremanguy@lemmy.ml (OP) · 6 days ago

      Yeah, I think that in the end, even if it seems a bit “retro”, a “normal install” with periodic backups/updates on a plain VM (or even LXC containers) is the best option: the most stable and configurable.

      • mesamune@lemmy.world · 6 days ago

        Do you use any sort of RAID? Recently I’ve been using an old SSD, but 9-ish years ago I used to back up everything with a RAID system; it took too much time to keep up with, though.

  • markc@lemmy.world · 5 days ago

    Docker is a convoluted mess of overlays and truly weird network settings. I’ve found that I have no interest in application containers and would much prefer to set up multiple services in a system container (or VM) as if it were a bare-metal server. I deploy a small Proxmox cluster with Proxmox Backup Server in a CT on each node, and often use scripts from https://community-scripts.github.io/ProxmoxVE/. Everything is automatically backed up (and remote-synced twice) with a deduplication factor of 10. A Dockerless Homelab FTW!

    • foremanguy@lemmy.ml (OP) · 5 days ago

      Yeah, I share your point of view and I think I’m going that way. Those scripts are awesome, but I prefer writing my own, as I get more control over them.

  • huskypenguin@sh.itjust.works · 6 days ago

    I love Docker, and backups are a breeze if you’re using ZFS or BTRFS with volume sending. That is the bummer about Docker: it relies on you to back it up instead of having a native backup system.
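
    As a rough sketch of what that looks like (the dataset and host names are made up), “volume sending” is just snapshotting the dataset that holds the Docker volumes and replicating it:

    # snapshot the dataset holding the bind mounts / volumes
    zfs snapshot tank/docker-volumes@nightly
    # replicate it incrementally to another machine (assumes an earlier @lastnight snapshot exists on both sides)
    zfs send -i @lastnight tank/docker-volumes@nightly | ssh backup-host zfs recv backup/docker-volumes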

    • foremanguy@lemmy.ml (OP) · 6 days ago

      What are you hosting on Docker? Are you configuring your apps afterwards? Did you use the prebuilt images or build your own?

      • huskypenguin@sh.itjust.works · 6 days ago

        I use the *arr suite, a Project Zomboid server, a Foundry VTT server, Invoice Ninja, Immich, Nextcloud, qBittorrent, and Caddy.

        I pretty much only use prebuilt images; I run them like appliances. Anything custom I’d run in a VM with snapshots, as my Docker skills do not run that deep.

      • huskypenguin@sh.itjust.works · 6 days ago

        I should also say I use Portainer for some graphical hand-holding. And I run Watchtower for updates (although Portainer can monitor a Git repo and run updates based on merges).

        For simplicity I create all my volumes in the Portainer GUI, then specify the mount points in the docker compose file (Portainer calls this a “stack” for some reason).

        The volumes are looped into the base OS’s (TrueNAS SCALE) ZFS snapshots, so any restoration is dead simple. It keeps 1 yearly, 3 monthly, 4 weekly, and 1 daily snapshot.

        All media etc. is mounted via NFS shares (for applications like Immich or Plex).

        Restoring to a new machine should be as simple as pasting the compose file and restoring the Portainer volumes.

        • foremanguy@lemmy.ml (OP) · 5 days ago

          I don’t really like Portainer: first, their business model is not that good, and second, they do strange things with the compose files.

          • IrateAnteater@sh.itjust.works · 5 days ago

            I’m learning to hate it right now too. For some reason it’s refusing to upload a local image from my laptop, and the alert that comes up tells me exactly nothing useful.