Hey! I’ve been using Ansible to deploy Docker containers for a few services on my Raspberry Pis for a while now and it’s working great, but I want to learn MOAR and I need help…
Recently, I’ve been considering migrating to bare metal K3S for a few reasons:
- To learn and actually practice K8S.
- To have redundancy and to try HA.
- My RPis are all already running MicroOS, so it kind of makes sense to me to try other SUSE stuff (?)
- Maybe eventually being able to manage my two separate server locations with a neat k3s + Tailscale setup!
Here is my problem: I don’t understand how things are supposed to be done. All the examples I find feel wrong. More specifically:
- Am I really supposed to have a collection of small yaml files for everything, that I use with `kubectl apply -f`?? It feels wrong and way too “by hand”! Is there a more scripted way to do it? Should I stay with everything in Ansible??
- I see little to no examples of how to deploy the service containers I want (pihole, navidrome, etc.) to a cluster, unlike docker-compose examples that can be found everywhere. Am I looking for the wrong thing?
- Even the official docs seem broken. Am I really supposed to run many helm commands (some of which just fail) and try to get ssl certs just to have Rancher and its dashboard?!
I feel that having a K3S + Traefik + Longhorn + Rancher on MicroOS should be straightforward, but it’s really not.
It’s very much a noob question, but I really want to understand what I am doing wrong. I’m really looking for advice and especially configuration examples that I could try to copy, use and modify!
Thanks in advance,
Cheers!
K3s (and k8s for that matter) expects you to build a hierarchy of yaml configs, mostly because spinning up container instances is done in groups: certain traits apply to the whole organization, certain ones apply only to most groups but not all, and certain configs are special to particular services (e.g. http nodes added when demand is higher than x threshold).
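As a rough sketch of how that hierarchy is commonly expressed (not something from this thread, just one common pattern): kustomize lets you keep shared config in a base and per-group overrides in overlays, and `kubectl apply -k` understands it natively. File names and the `navidrome` name here are placeholders.

```yaml
# base/kustomization.yaml — config shared by everything
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/prod/kustomization.yaml — overrides for one group
# apply with: kubectl apply -k overlays/prod
resources:
  - ../../base
replicas:
  - name: navidrome   # hypothetical deployment name
    count: 2
```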
But I wonder why you want to cluster navidrome or pihole? Navidrome would require a significant load before load balancing becomes necessary (and it’s non-trivial to implement), and pihole can just be put behind a round-robin DNS forwarder — it would also be weird to run behind a load balancer.
My goal is to have a k3s cluster as a deployment env and try to run the services I’m already using. I don’t need any advanced load balancing, I just want pods to be restarted if one of my machines stops.
I’ve thought about k8s, but there is so much about Docker that I still don’t fully know.
Yeah - k8s has a bit of a steep learning curve. I recently-ish made the conversion from “a bunch of docker-compose files” to microk8s myself. So here are some thoughts for you (in no particular order).
I would avoid helm like the plague. Everybody is going to recommend it to you but it just puts a wrapper on a wrapper and is MUCH more complicated than what you’re going to need because you’re not spinning up hundreds of similar-but-different services. Making things into templates adds a ton of complexity and overhead. It’s something for a vendor to do, not a home-gamer. And you’re going to need to understand the basics before you can create helm charts anyway.
The yml files you need are actually relatively simple compared to a helm chart that needs to be parameterized and support a bazillion features.
So yes - you’re going to create a handful of yml files and `kubectl apply -f` them. But - you can do that with Ansible if you want, or you can combine them into a single yml (separate sections with `---`).

What I do is - for each service I create a directory. In it I have `name_deployment.yml`, `name_service.yml`, `name_ingress.yml` and `name_pvc.yml`. I just apply them when I change them, which isn’t frequent. Each application I deploy generally has its own namespace for all its resources. I’ll combine deployments into a NS if they’re closely related (e.g. prometheus and grafana are in the same NS).

Do yourself a favor and install `kubens`, which lets you easily see and change your namespace globally. Gawd I hate having to type out my namespace for everything. 99% of the time when you can’t find a thing with `kubectl get` you’re not looking in the right namespace.

You’re going to need to sort out your storage situation. I use NFS for long-term storage for my pods and have microk8s configured to automatically create space on my NFS server when pods request a PV (persistent volume). You can also use local directories, but that won’t cluster.
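To make that per-service directory concrete, here’s a hedged sketch of what a minimal `name_deployment.yml` and `name_service.yml` pair might look like for a single-replica service — the `navidrome` names, image, and ports are placeholders to adapt, not anything prescribed in this thread:

```yaml
# navidrome_deployment.yml — hypothetical example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: navidrome
  namespace: navidrome
spec:
  replicas: 1
  selector:
    matchLabels:
      app: navidrome
  template:
    metadata:
      labels:
        app: navidrome
    spec:
      containers:
        - name: navidrome
          image: deluan/navidrome:latest
          ports:
            - containerPort: 4533   # navidrome's default web port
---
# navidrome_service.yml — internal ClusterIP service in front of the pod
apiVersion: v1
kind: Service
metadata:
  name: navidrome
  namespace: navidrome
spec:
  selector:
    app: navidrome
  ports:
    - port: 80
      targetPort: 4533
```

With the files in one directory you can apply them all at once with `kubectl apply -f navidrome/`.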
There are two basic ways to expose things. The first is an Ingress: the cluster’s ingress controller acts like a hostname-based router for HTTP, sitting in front of your pods’ internal ClusterIP services. You can point your DNS entries at that one address and it will route to your pods based on the DNS name of the request. It’s easy to use and works very well - but it only works for HTTP traffic. The other is a Service of type LoadBalancer, which gives your pods an IP address on the network that you can connect to directly. The former only works for HTTP, the latter will let you use any ports (e.g. ssh for a forgejo instance).
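A hedged sketch of the two options side by side (hostnames, service names, and ports are all placeholders for illustration):

```yaml
# ingress.yaml — hostname-based HTTP routing to a ClusterIP service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: navidrome
spec:
  rules:
    - host: music.example.lan     # point your DNS entry here
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: navidrome   # the internal ClusterIP service
                port:
                  number: 80
---
# forgejo-ssh.yaml — LoadBalancer service for non-HTTP traffic (e.g. ssh)
apiVersion: v1
kind: Service
metadata:
  name: forgejo-ssh
spec:
  type: LoadBalancer   # gets its own IP on the network
  selector:
    app: forgejo
  ports:
    - port: 22
      targetPort: 2222
```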
I use Kube every day for work but I would recommend you not use it. It’s complicated in order to answer problems you don’t have. How about docker swarm, or podman services?
I disagree, it is great to use. Yes, some things are more difficult, but as OP mentioned, he wants to learn more, and running your own cluster for your services is an amazing way to learn k8s.
If possible: do that on company time. Let the boss pay for it.
The more I think about it the more I think you are right.