I’ve been interested in self-hosting a small variety of services, but I’m not sure where to start. What would you guys recommend for a server machine?
My main uses (and some of the services I think are appropriate for each) are:
- 1 TB photo/video storage, push/pull (Immich)
- 512 GB total, shared between downloaded music storage (Navidrome) and PDF/ebook storage (Calibre), all pull-only
- 1 TB movies/TV storage on a media server (Jellyfin)
- 512 GB storage for random junk or whatever, plus file transfer push/pull (Syncthing…? or Nextcloud?)
- potential basic bio website hosting (near future)
- potential email hosting (distant future)
Anyway, with all that said, I have a few questions:
- What server should I buy if I want to expand storage in the future? Should I just build a PC with something like 3x1 TB of storage, or 6x1 TB with redundancy? I’m totally confused about the concept of redundancy lol
- Any thoughts on the services I’m suggesting, especially for file transfer?
Edit: I’m willing to learn anything CLI-related; I already daily-drive Linux on my laptop and code in Neovim, if that provides any sort of reassurance lol
I have no idea what to recommend for hardware, but for the OS I suggest TrueNAS SCALE. It’s pretty easy to set up, has a web UI, and it’s intuitive. As for hardware, I’d say go with an HBA card for the disks and a GPU for Jellyfin transcoding and Immich’s AI features. For file sharing I use Samba, which is kind of clunky, but I haven’t found anything easier.
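For what it’s worth, a Samba share is only a few lines of config. A minimal sketch, assuming a share called [media] backed by /srv/media and a user alice (all placeholders; on TrueNAS SCALE you would normally set this up through the web UI rather than editing smb.conf by hand):

```shell
# Hypothetical minimal Samba share. The real config lives at
# /etc/samba/smb.conf; this writes to a demo path so nothing real is touched.
SMB_CONF="${SMB_CONF:-/tmp/smb.conf.demo}"
cat >> "$SMB_CONF" <<'EOF'
[media]
   path = /srv/media
   read only = yes
   guest ok = no
   valid users = alice
EOF
# After editing the real file, reload with: smbcontrol all reload-config
```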
Ditto on using TrueNAS SCALE! That’s how I got a friend started on self-hosting, and they love the built-in app store!
Used business desktop from eBay is what I run. With what you want to run, you’ll be fine with even 10-year-old hardware; I’m running a dozen services on 10-year-old basic business hardware with no issues. Regarding media, though: if you’re not getting a dedicated GPU, get an Intel 7xxx-series or later CPU so you have Quick Sync for transcoding.
I run Ubuntu Server on one, Proxmox on another. Both have their pros and cons; it depends on what you want to do. If your plan is just to run everything in containers (and it should be), Ubuntu with Docker is plenty. If you plan on playing around with VMs, go Proxmox.
As for which services: here’s a huge list of self-hostable services grouped by category/function: https://awesome-selfhosted.net/ Most have a demo site or a quick Docker install guide that makes it easy to try things out.
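To make the containers-on-Ubuntu route concrete, here’s a minimal Docker Compose sketch for two of the services OP mentioned. The image names are the public ones on Docker Hub; the host paths and ports are placeholder assumptions to adjust:

```shell
# Write a hypothetical compose file for Jellyfin + Navidrome.
mkdir -p "$HOME/stacks/media"
cat > "$HOME/stacks/media/docker-compose.yml" <<'EOF'
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"
    volumes:
      - /srv/media/movies:/media/movies:ro   # placeholder host path
    restart: unless-stopped
  navidrome:
    image: deluan/navidrome
    ports:
      - "4533:4533"
    volumes:
      - /srv/media/music:/music:ro           # placeholder host path
    restart: unless-stopped
EOF
# Bring the stack up (requires Docker plus the compose plugin):
# cd "$HOME/stacks/media" && docker compose up -d
```

Each service lives in its own container, so trying out or removing one never touches the others.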
Avoid self-hosted email if you can… it’s a whole different animal.
Great advice, and I’d like to add to it: if you need something easier to run for VMs, go with Fedora Server rather than Proxmox. If you find your use case needs a bit more complexity, then go Proxmox.
I personally found that Proxmox was overkill for what I wanted out of my (old-laptop) server.
What would you guys recommend for a server machine?
I would recommend buying fairly modern equipment, say from within the past 5 years or so. Desktops and workstations, with a few additions/adjustments, can make excellent, energy-efficient servers. As far as RAM goes, if your equipment takes DDR3, you will escape the ridiculous current price gouging; for RAM, I shop at MemoryStock. HDDs still make good storage units, though I go with an SSD for the OS and HDDs for everything else. I would stay far away from enterprise-type equipment, even though the prices may be tempting: the money you save buying cheap enterprise gear will be spent on your power bill.
Redundancy covers a lot of ground. You can have a redundant server to fall back on should the wheels fall off the main server. In the case of, say, a NAS, RAID gives you redundancy: if one drive fails, you can hot-swap in a fresh one and keep on rocking… pretty much. Redundancy can also apply to backups: I have a main daily backup, and the same data backed up to two different locations.
In addition to equipment selection, you will need to do some reading up on securely setting up a server, if you’ve never done so. Also start thinking about firewalls, WAFs, etc. I would recommend going through the Linux Upskill Challenge. Get your server set up and secured. Familiarize yourself with your server. Add a single service, and play around with that until things start to gel. Then you can think about slowly adding additional services.
Do not go for server hardware; used consumer hardware is good enough for your use cases. Basically any machine from the last 5-10 years is powerful enough to handle the load.
The most difficult decision is the GPU or transcoding hardware for Jellyfin. Do you want to be power-efficient? Then go with a modern but low-end Intel CPU; there you get Quick Sync as the transcoding engine. If not, I would go for a low-end NVIDIA GPU like the 1050 Ti or newer, paired with, for example, an older AMD CPU like the Ryzen 3600.
For storage, it also depends on budget. Having a backup of your data is much more important than having redundancy. You don’t need to back up your media, but do back up everything that is important to you, like the photos in Immich etc.
I would go SSD since you don’t need much storage: a separate 500 GB drive for the OS and a 4 TB one for the data. This is much more compact and reduces power consumption, and especially for read-heavy applications SSDs are much more durable, faster in operation, quieter, etc.
Of course, HDDs are good enough for your use case and cheaper (a factor of 2.5-3x cheaper here).
Probably 8-16 GB of RAM would be more than enough.
For any local redundancy or RAID, I would always go with ZFS.
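As a sketch of what ZFS redundancy looks like in practice, assuming two spare disks at the placeholder device names /dev/sdb and /dev/sdc (these commands need root and will wipe the named disks, so treat this as illustration, not a recipe):

```shell
# Create a mirrored pool: either disk can fail without data loss.
zpool create tank mirror /dev/sdb /dev/sdc
# One dataset per service keeps snapshots and quotas tidy.
zfs create tank/photos
zfs create tank/media
# Check pool health at any time.
zpool status tank
# Growing later: add another mirrored pair to the same pool.
# zpool add tank mirror /dev/sdd /dev/sde
```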
Machine-wise, anything will work. Give yourself a chassis with room to add more disks down the road, or build your storage setup in a way that gives you whatever flexibility you need (though that tends to come with sacrifices).
I use Nextcloud for general file syncing between devices and occasional small file sharing.
I’d say that a good starting point would be the smallest setup that serves a useful purpose. This is usually some sort of network storage, and it sounds like this might be a good starting point for you as well. Then you can add on and refine your setup however you see fit, provided your hardware is up to it.
Speaking of hardware: while it’s certainly possible to go all out with a purpose-built, rack-mounted 19" 4U server full of disks, the truth is that “any” machine will do. Servers generally don’t require much (depending on use case, of course), and you can get away with a second-hand regular desktop machine. The only caveat here is that for your (perceived) use cases you might want the ability to add a bunch of disks, so for now just go for a simple setup with as many disks as you see fit; you can expand with a JBOD cabinet later.
Tying this storage together depends on your tastes, but it generally comes down to two schools of thought, both of which are valid:
- Hardware RAID. I think I’m one of the few fans of this, as it does offer some advantages over software RAID. I suspect that the people who call hardware RAID unreliable have not been using proper RAID controllers. Proper RAID controllers with write cache are expensive, though.
- Software RAID. As above, except it’s done via software instead (duh), hence the name. There are many ways to approach this, but personally I like ZFS - Set up multiple disks as a storage pool, and add more drives as needed. This works really well with JBOD cabinets. The downside to ZFS is that it can be quite hungry when it comes to RAM. Either way, keep in mind that RAID, software or hardware, is not a backup.
Source: Hardware RAID at work, software RAID at home.
Now that we’ve got storage addressed, let’s look at specific services. The most basic use case is something like an NFS/SMB share that you can mount remotely. This allows you to archive a lot of the stuff you don’t need live. Just keep in mind, an archive is not a backup!
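As an example, serving such an archive over NFS is one line of server config plus a client-side mount (the subnet, paths, and server IP below are placeholders):

```shell
# Server side: export /srv/archive read-only to the LAN.
# Add this line to /etc/exports:
#   /srv/archive  192.168.1.0/24(ro,sync,no_subtree_check)
# Then apply the export table (needs root and an NFS server installed):
exportfs -ra
# Client side: mount the share (server IP is a placeholder).
mkdir -p /mnt/archive
mount -t nfs 192.168.1.10:/srv/archive /mnt/archive
```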
And just to be clear: An archive is mainly a manner of offloading chunks of data you don’t need accessible 100% of the time. For example older/completed projects, etc. An archive is well suited for storing on a large NAS, as you’ll still have access to it if needed, but it’s not something you need to spend disk space on on your daily driver. But an archive is not a backup, I cannot state this enough!
So, backups… well, this depends on how valuable your data is. A rule of thumb in a perfect world involves three copies: one online, one offline, and one offsite. This should keep your data safe in any reasonable contingency scenario. Which of these you implement, and how, is entirely up to you; it all comes down to a cost/benefit equation. Sometimes keeping the rule of thumb fully in effect is simply not viable, e.g. if you have data in the petabytes. Ask me how I know.
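A toy sketch of the “one online” copy of that rule: a dated tar archive of a source directory, which you would then push offsite with rsync or rclone (all paths here are placeholders):

```shell
# Placeholder source and destination; point these at real paths.
SRC="$HOME/demo-data"
DEST="$HOME/backups"
mkdir -p "$SRC" "$DEST"
echo "important stuff" > "$SRC/notes.txt"   # stand-in for real data
# One dated, compressed archive per run.
STAMP=$(date +%F)
tar -czf "$DEST/backup-$STAMP.tar.gz" -C "$SRC" .
# The offsite step would go here, e.g. rsync/rclone to another machine.
```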
But, to circle back to your immediate need: it sounds like you can start with something simple. Your storage requirement is pretty small, and adding some sort of hosting on top of that is pretty trivial. So I’d say that, as a starting point, any PC will do; just add a couple of hard drives to make sure you have enough for the foreseeable future.
If you are a real and total noob, try to get a Synology, UGREEN, or another reputable brand of NAS and start from there.
The point of having one of these is to avoid a big fuck-up resulting in data loss. And from there you will be able to build up what you need.
All the best on this journey!
I would absolutely discourage the use of Synology, and probably any other brand in the NAS realm.
Synology has pulled off some really scummy things in the last few years: their “certified SSD” whitelist, where only approved SSDs could be used in an array; pushing their own HDDs with warnings and messages designed to make the user worry that something is wrong; and retroactively removing transcoding capabilities from their systems.
Those systems are also quite limited for how expensive they are. They’re great for simple things, but with the list OP posted you would be heavily limited and would have to jump through hoops to have a well-functioning home lab/server.
I see your point, but in this world there are only two options: either you have the skills, the knowledge, and the time to do it yourself, or you need to outsource it.
Assuming that OP is a real noob, it’s clear that the first two prerequisites are missing, making that option unacceptable; then you can only buy something easy enough for the general public.
And on top of that, in a homelab the most sacred thing is the data. Not the service: the data. If you misconfigure a NAS or the automated backup system, it could lead to the worst scenario: the data is lost forever.
Weighing everything, I still recommend what I did. Although if, instead of Synology, you prefer UGREEN or ASUSTOR… well, that depends on your taste.
I see your point, but in this world there are only two options: either you have the skills, the knowledge, and the time to do it yourself, or you need to outsource it.
But you’re not outsourcing it! You just chose a proprietary provider for a docker compose file and some RAID configuration. Everything is still on you to fuck up.
Assuming that OP is a real noob, it’s clear that the first two prerequisites are missing, making that option unacceptable; then you can only buy something easy enough for the general public.
Reading OP’s post again, it’s clear that OP is genuinely interested in learning these things.
And on top of that, in a homelab the most sacred thing is the data. Not the service: the data. If you misconfigure a NAS or the automated backup system, it could lead to the worst scenario: the data is lost forever.
The exact same is true for your Synology NAS, plus the limitations of how Synology thinks you should do backups versus how it actually suits you.
I think you are missing the point of how easy it is to fuck things up in a console with TrueNAS when trying to activate deduplication or make a backup, versus the same thing in a user-friendly, already-tested proprietary solution. From a noob’s point of view, of course.
Installing TrueNAS when you have almost no idea about anything is cumbersome, dealing with the millions of options (some of them incompatible with each other) is frustrating, and cryptic error codes are discouraging…
You want people to jump in? Then make it easy for them and lower the entry barrier; if not, you will find yourself alone in your ivory tower.
The exact same is true for your Synology NAS, plus the limitations of how Synology thinks you should do backups versus how it actually suits you.
If you already know how to set up a proper backup system, balancing the pros and cons, with a robust and solid way to avoid data loss, then you don’t qualify as a noob.
If you don’t know any of that and still build your backup system, that’s a recipe for disaster, and there’s a real chance of losing data with no option to recover.
I think you are missing the point of how easy it is to fuck things up in a console
No, I think you are. Why should a beginner ever even touch the CLI? You can also SSH into the Synology and fuck things up.
Using a “friendly environment” like Synology is no guarantee against fucking things up.
Installing TrueNAS when you have almost no idea about anything is cumbersome, dealing with the millions of options (some of them incompatible with each other) is frustrating, and cryptic error codes are discouraging…
What millions of options? You select a drive and set a password, and you’re done? That’s one step fewer than on Synology.
You brought up TrueNAS. TrueNAS, for example, also gives you safe boundaries and suggestions for how to set things up, same as Synology. There is literally also a setup wizard for backups.
AND AGAIN, just because you follow the Synology wizards does not mean your data is safe either. You can always fuck things up if you want to.
Oh I see. Could you please point me to the system that:
- is free and not tied to any vendor
- is easy enough that my grandma could use it
- is properly tested by an active QA group
- has safe boundaries
- is production-ready
- offers total flexibility
- has a proper wizard/GUI that is self-explanatory and robust enough to make sure you don’t select contradicting options
If such a system exists, perhaps I’ll move my homelab, who knows…
That’s exactly my point: neither of them is. But you keep claiming Synology is, compared to the others.