Curious to hear about the experiences of those who are sticking to bare metal. I’d like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?

  • atzanteol@sh.itjust.works · 18 days ago

    Containers run on “bare metal” in exactly the same way other processes on your system do. You can even see them in your process list FFS. They’re just running in different cgroups that limit access to resources.

    Yes, I’ll die on this hill.
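
    For example, a quick way to see this for yourself with Docker (Podman behaves the same; nginx is just an arbitrary image):

      docker run -d --rm --name demo nginx
      ps aux | grep [n]ginx                        # the container's processes appear in the host process list
      pid=$(docker inspect -f '{{.State.Pid}}' demo)
      cat /proc/$pid/cgroup                        # ...and they sit in their own cgroup
      docker stop demo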

    • sylver_dragon@lemmy.world · 18 days ago

      But, but, docker, kubernetes, hyper-scale convergence and other buzzwords from the 2010s! These fancy words can’t just mean resource and namespace isolation!

      In all seriousness, the isolation provided by containers is significant enough that administration of containers is different from running everything in the same OS. That’s different in a good way though, I don’t miss the bad old days of everything on a single server in the same space. Anyone else remember the joys of Windows Small Business Server? Let’s run Active Directory, Exchange and MSSQL on the same box. No way that will lead to prob… oh shit, the RAM is on fire.

      • sugar_in_your_tea@sh.itjust.works · 18 days ago

        kubernetes

        Kubernetes isn’t just resource isolation, it encourages splitting services across hardware in a cluster. So you’ll get more latency than VMs, but you get to scale the hardware much more easily.

        Those terms do mean something, but they’re a lot simpler than execs claim they are.
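
        As a rough sketch of what that looks like in practice (run against any existing cluster; “myapp” is a made-up deployment name):

          kubectl get nodes -o wide                    # the cluster spans several machines
          kubectl scale deployment/myapp --replicas=5  # extra replicas get spread across those nodes
          kubectl get pods -o wide                     # the NODE column shows where each copy landed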

      • AtariDump@lemmy.world · 17 days ago

        …oh shit, the RAM is on fire.

        The RAM. The RAM. The 🐏 is on fire. We don’t need no water let the mothefuxker burn.

        Burn mothercucker, burn.

        (Thanks phone for the spelling mistakes that I’m leaving).

      • atzanteol@sh.itjust.works · 18 days ago

        Oh for sure - containers are fantastic. Even if you’re just using them as glorified chroot jails they provide a ton of benefit.
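
        Even the “glorified chroot jail” use is a one-liner (a throwaway Debian image as an arbitrary example):

          docker run --rm -it -v "$PWD":/work -w /work debian:stable-slim bash   # disposable userland, current directory mounted at /work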

  • nucleative@lemmy.world · 18 days ago

    I’ve been self-hosting since the '90s. I used to have an NT 3.51 server in my house. I had a dial-in BBS that worked because of an extensive collection of .bat files that would echo AT commands to my COM ports to reset the modems between calls. I remember when we had to compile the Slackware kernel from source to get peripherals to work.

    But in this last year I took the time to seriously learn docker/podman, and now I’m never going back to running stuff directly on the host OS.

    I love it because I can deploy instantly… Oftentimes in a single command line. Docker compose allows for quickly nuking and rebuilding, oftentimes saving your entire config to one or two files.

    And if you need to slap in a traefik, or a postgres, or some other service into your group of containers, now it can be done in seconds completely abstracted from any kind of local dependencies. Even more useful, if you need to move them from one VPS to another, or upgrade/downgrade core hardware, it’s now a process that takes minutes. Absolutely beautiful.
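
    That workflow, roughly, for anyone who hasn’t tried it (directory and host names are placeholders):

      # nuke and rebuild in place: the compose file plus the mounted data is the whole config
      docker compose down && docker compose pull && docker compose up -d

      # move to another VPS: copy the project directory, then bring it up over there
      rsync -a ~/myservice/ newbox:~/myservice/
      ssh newbox 'cd ~/myservice && docker compose up -d'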

    • roofuskit@lemmy.world · 18 days ago

      Hey, you made my post for me, though I’ve been using Docker for a few years now. Never looking back.

  • enumerator4829@sh.itjust.works · 18 days ago

    My NAS will stay on bare metal forever. That’s one place where I really don’t want any added complexity. Passthrough of drives/PCIe devices works fine for most things, but I won’t use it for ZFS.

    As for services, I really hate using Docker images with a burning passion. I’m not trusting anyone else to make sure the container images are secure. I want the security updates directly from my distribution’s repositories, I want them fully automated, and I want that inside any containers too. Having NixOS build and launch containers with systemd-nspawn solves some of it. The actual Docker daemon isn’t getting anywhere near my systems, but I do have one or two OCI images running. I’ll probably migrate to small per-service VMs once I get new hardware up and running.
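
    For anyone curious, the imperative equivalent of what systemd-nspawn provides looks roughly like this (their actual setup is declarative NixOS config, not manual commands; “web” and the Debian tree are placeholders, and debootstrap is assumed to be installed):

      sudo debootstrap stable /var/lib/machines/web          # build a minimal Debian userland
      sudo systemd-nspawn -D /var/lib/machines/web --boot    # boot it as a container under systemd
      machinectl list                                        # running nspawn containers show up here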

    Additionally, I never found a source of container images I feel like I can trust long term. When I grab a package from Debian or RHEL, I know that package will keep working without any major changes to functionality or config until I upgrade to the next major. A container? How long will it get updates? How frequently? Will the config format or environment variables or mount points change? Will a threat actor assume control of the image? (Oh look, all the distros actually enforce GPG signatures in their repos!)

    So, what keeps me on bare metal? Keeping my ZFS pools safe. And then just keeping away from the OCI ecosystem in general; the grass is far greener inside the normal package repositories.

  • fubarx@lemmy.world · 18 days ago

    Have done it both ways. Will never go back to bare metal. Dependency hell forced multiple clean reinstalls, right down to the bootloader.

    The only constant is change.

  • sylver_dragon@lemmy.world · 18 days ago

    I started self hosting in the days well before containers (early 2000s). Having been through that hell, I’m very happy to have containers.
    I like to tinker with new things, and with bare-metal installs that has a way of adding cruft to servers and slowly pushing the system into an unstable state. That’s my own fault, but I’m a simple person who likes simple solutions. There are also the classic issues with dependency hell and flat-out incompatible software. While these issues have gotten much better over the years, isolating applications avoids the problem completely. It also makes OS and hardware upgrades less likely to break stuff.

    These days, I run everything in containers. My wife and I play games like Valheim together, and I have a Dockerfile template I use to build self-hosted servers in a container. The Dockerfile usually just requires a few tweaks for AppId, exposed ports and mount points for save data. That, paired with a docker-compose.yaml (also built off a template), means I usually have a container up and running in fairly short order. The update process could probably be better, I currently just rebuild the image, but it gets the job done.
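
    The rebuild-to-update loop they describe is only a couple of commands (run from the directory holding the Dockerfile and docker-compose.yaml):

      docker compose build --pull   # rebuild the image, pulling a newer base image if there is one
      docker compose up -d          # recreate the container from the fresh image, keeping the mounted save data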

  • billwashere@lemmy.world · 15 days ago

    Ok, I’m arguing for containers/VMs, and granted I do this for a living… I’m a systems architect, so I build VMs and containers pretty much all the time at work… but having just one sorta beefy box at home that I can run lots of different things on is the way to go. Plus I like to tinker with things, so when I screw something up, I can get back to a known state so much easier.

    Just having all these things sandboxed makes it SO much easier.

  • kutsyk_alexander@lemmy.world · 18 days ago

    I use a Raspberry Pi 4 with a 16GB SD card. I simply don’t have enough memory and CPU power for 15 separate database containers, one for every service I want to use.

  • Strider@lemmy.world · 18 days ago

    Erm. I’d just say there’s no benefit in adding layers just for the sake of it.

    It’s just different needs. Say I have a machine that I run a dedicated database on: I’d install it just like that, because, as said, there’s no advantage in making it more complicated.

  • HiTekRedNek@lemmy.world · 17 days ago

    In my own experience, certain things should always be on their own dedicated machines.

    My primary router/firewall is on bare metal for this very reason.

    I do not want to worry about my home network being completely unusable by the rest of my family because I decided to tweak something on the server.

    I could quite easily run OPNsense in a VM, and I do that, too. I run Proxmox, and have OPNsense installed and configured to at least provide connectivity for most devices. (Long story short: I have several subnets in my home network, but my VM OPNsense setup does not, as I only had one extra interface on that equipment, so only devices on the primary network would work.)

    And tbh, that only exists because I did have a router die and installed OPNsense on my Proxmox server temporarily while awaiting new-to-me equipment.

    I didn’t see a point in removing it. So it’s there, just not automatically started.

    • AA5B@lemmy.world · 17 days ago

      Same here. In particular, I like small, cheap hardware to act as appliances, and I have several Raspberry Pis.

      My example is Home Assistant. Deploying on its own hardware means an officially supported management layer, which makes my life easier. It is actually running containers, but I don’t have to deal with that. It also needs to be always available, so I use efficient, “right-sized” hardware, and it works regardless of whether I’m futzing with my “lab”.

  • ZiemekZ@lemmy.world · 17 days ago

    I consider them unnecessary layers of abstraction. Why do I need to fiddle with Docker Compose to install Immich, Vaultwarden etc.? Wouldn’t it be simpler if I could just run sudo apt install immich vaultwarden, just like I can do sudo apt install qbittorrent-nox today? I don’t think there’s anything that prohibits them from running on the same bare metal, actually I think they’d both run as well as in Docker (if not better because of lack of overhead)!

  • neidu3@sh.itjust.works · 18 days ago

    I started hosting stuff before containers were common, so I got used to doing it the old-fashioned way and making sure everything played nice with each other.

    Beyond that, it’s mostly that I’m not very used to containers.

  • LifeInMultipleChoice@lemmy.world · 18 days ago

    For me it’s usually lack of understanding. I haven’t sat down and really learned what Docker is/does. And when I tried to use it once I ended up with errors (thankfully they all seemed contained by the container), but I just haven’t gotten around to looking into it more than seeing suggestions to install, say, Pihole in it. Pretty sure I installed Pihole outside of one. Jellyfin outside, copyparty outside, and something else I’m forgetting at the moment.

    I was thinking of installing a chat app in one, but I put off that project because I got busy at work and it’s not something I normally use.

    I guess I just haven’t been forced to see the upsides yet. But I’m always wanting to learn.

    • slazer2au@lemmy.world · 18 days ago

      Containerisation is to applications as virtual machines are to hardware.

      VMs share the same physical CPU, memory, and storage on the same host.
      Containers share the same kernel and binaries of an OS.
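
      An easy way to see the container side of that: a container reports the host’s kernel, because there is no separate OS underneath it, only its own binaries (alpine here is just an arbitrary small image):

        uname -r                         # kernel version on the host
        docker run --rm alpine uname -r  # same kernel, different userland/binaries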

      • LifeInMultipleChoice@lemmy.world · 18 days ago

        When you say binaries, do you mean locally stored directories, kind of like what Lutris or Steam would do for a Windows game? (Create a fake C:\.)

        • slazer2au@lemmy.world · 18 days ago

          Not so much a fake one; it overlays the actual directory with the specific files that container needs.

          Take the Linux lib directory. It exists on the host and has Python 3.12 installed. Your Docker container may need Python 3.14, so an overlay directory is created that redirects calls to /lib/python to /lib/python3.14 instead of the regular symlinked /lib/python3.12.
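
          A rough way to see that layering in action (python:3.14-slim is assumed to exist on Docker Hub; host paths and versions will differ):

            python3 --version                                   # whatever the host ships
            docker run --rm python:3.14-slim python3 --version  # the image brings its own Python in its own layers
            docker info --format '{{.Driver}}'                  # the storage driver doing the layering, usually "overlay2"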

          • LifeInMultipleChoice@lemmy.world · 18 days ago

            So let’s say I theoretically wanted to move a Docker container to another device, or maybe I were re-installing an OS or moving to another distro. Could I, in theory, drag my local Docker container to an external drive, throw my device in a lake, and pull that container off onto the new device? If so… what then? Do I link the startups, or is there a “docker config” where they are all able to be linked and I can tell it which ones to launch on OS launch, user launch, with a delay, or whatnot?

            • slazer2au@lemmy.world · 18 days ago

              For ease of moving containers between hosts I would use a docker-compose.yaml to set how you want storage shared, what ports to present to the host, and what environment variables your application wants. Using WordPress as an example, this would be your starting point:
              https://github.com/docker/awesome-compose/blob/master/wordpress-mysql/compose.yaml

              All the settings for the database are listed under the db heading. You would have your actual database files stored in /home/user/Wordpress/db_data, and you would link /home/user/Wordpress/db_data to /var/lib/mysql inside the container with the line

                volumes:
                  - ./db_data:/var/lib/mysql   # bind-mount the host db_data folder onto the MySQL data dir

              As the compose file will also be in /home/user/Wordpress/, you can drop the common path and use a relative one.

              That way, if you wanted to change hosts, you just copy the /home/user/Wordpress folder to the new server and run docker compose up -d and boom, your server is up. No need to faff about.

              Containers by design are supposed to be temporary, and the runtime data is recreated each time the container is launched. The persistent data is all you should care about.
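
              So under those assumptions, the whole move is roughly this (“newserver” is a placeholder host; old host first, then the new one):

                docker compose down                                              # stop cleanly so the db files aren't mid-write
                rsync -a /home/user/Wordpress/ newserver:/home/user/Wordpress/   # copy the compose file plus db_data
                ssh newserver 'cd /home/user/Wordpress && docker compose up -d'  # bring it up on the new host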

              • LifeInMultipleChoice@lemmy.world · 18 days ago

                “Containers by design are supposed to be temporary, and the runtime data is recreated each time the container is launched. The persistent data is all you should care about.”

                So that’s really why they should be good for Jellyfin/file servers, as the data doesn’t need to be stored in the container, just the runtime files. I suppose the config files as well.

                When I connect back into my network using WireGuard (set up on the Jellyfin server; I also think I have a RustDesk server on there), on the other hand, is it worth using a container, or is that just the same either way?

                I have shoved way too many things onto an old laptop, but I never have to touch it really, and the latest update Mint put out actually cured any issues I had. I used to have to reboot once a week or so to get everything back online when it came to my Pihole and shit. Since the latest update I ran on September 4th, I haven’t touched it for anything. The screen just stays closed in a corner of my desk with other shit stacked on top.

  • oortjunk@sh.itjust.works · 18 days ago

    I generally abstract to docker anything I don’t want to bother with and just have it work.

    If I’m working on something that requires lots of back and forth syncing between host and container, I’ll run that on bare metal and have it talk to things in docker.

    I.e.: working on an app or a website or something in the language of choice on the framework of choice, but Postgres and Redis are living in Docker. Just the app I’m messing with and its direct dependencies run outside.
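
    That split is basically two commands for the backing services (names, password and image tags here are just placeholders):

      docker run -d --name dev-postgres -e POSTGRES_PASSWORD=dev -p 5432:5432 postgres:16
      docker run -d --name dev-redis -p 6379:6379 redis:7
      # the app running on the host then just points at localhost:5432 and localhost:6379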

  • brucethemoose@lemmy.world · 18 days ago

    In my case it’s performance and sheer RAM need.

    GLM 4.5 needs like 112GB RAM and absolutely every megabyte of VRAM from the GPU, at least without the quantization getting too compressed to use. I’m already swapping a tiny bit and simply cannot afford the overhead.

    I think containers may slow down CPU<->GPU transfers slightly, but don’t quote me on that.

    • kiol@lemmy.world (OP) · 18 days ago

      Can anyone confirm whether containers would actually impact CPU-to-GPU transfers?

      • brucethemoose@lemmy.world · 18 days ago

        To be clear, VMs absolutely have overhead but Docker/Podman is the question. It might be negligible.

        And this is a particularly weird scenario (since prompt processing literally has to shuffle ~112GB over the PCIe bus for each batch). Most GPGPU apps aren’t so sensitive to transfer speed/latency.
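
        If anyone wants to test it, a starting point (assuming the NVIDIA container toolkit is installed; the CUDA image tag is just an example) is to confirm the GPU is passed straight through the runtime, then time the same workload inside and outside a container:

          docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi   # same GPU and driver the host sees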

  • Evotech@lemmy.world · 17 days ago

    It’s just another system to maintain, another link in the chain that can fail.

    I run all my services on my personal gaming PC.