Oh no, you!

  • 17 Posts
  • 253 Comments
Joined 1 year ago
Cake day: November 3rd, 2024


  • I’d say that a good starting point would be the smallest setup that serves a useful purpose. This is usually some sort of network storage, and it sounds like that might be a good starting point for you as well. From there you can add on and refine your setup however you see fit, provided your hardware is up to it.

    Speaking of hardware, while it’s certainly possible to go all out with a purpose-built 19" 4U rack-mounted server full of disks, the truth is that “any” machine will do. Servers generally don’t require much (depending on use case, of course), and you can get away with a second-hand regular desktop machine. The only caveat here is that for your (perceived) use cases, you might want the ability to add a bunch of disks, so for now, just go for a simple setup with as many disks as you see fit; you can expand with a JBOD cabinet later.

    Tying this storage together depends on your tastes, but it generally comes down to two schools of thought, both of which are valid:

    • Hardware RAID. I think I’m one of the few fans of this, as it does offer some advantages over software RAID. I suspect that the ones who are against hardware RAID and call it unreliable have not been using proper RAID controllers. Proper RAID controllers with write cache are expensive, though.
    • Software RAID. As above, except it’s done via software instead (duh), hence the name. There are many ways to approach this, but personally I like ZFS - Set up multiple disks as a storage pool, and add more drives as needed. This works really well with JBOD cabinets. The downside to ZFS is that it can be quite hungry when it comes to RAM. Either way, keep in mind that RAID, software or hardware, is not a backup.

    Source: Hardware RAID at work, software RAID at home.
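
    To make the ZFS route concrete, a pool setup might look something like this (the pool name “tank” and the short device names are made up for illustration; for a real setup you’d want the stable /dev/disk/by-id/ paths instead):

    ```shell
    # Create a pool with one raidz2 vdev (dual parity, roughly comparable to RAID 6).
    # Device names here are placeholders - check ls -l /dev/disk/by-id/ first.
    zpool create tank raidz2 sda sdb sdc sdd sde sdf

    # Later, grow the pool by adding another vdev, e.g. disks from a JBOD shelf
    zpool add tank raidz2 sdg sdh sdi sdj sdk sdl

    # Verify the layout and health
    zpool status tank
    ```

    Note that adding a vdev expands the pool, but existing data isn’t rebalanced onto the new disks automatically.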

    Now that we’ve got storage addressed, let’s look at specific services. The most basic use case is something like an NFS/SMB share that you can mount remotely. This allows you to archive a lot of the stuff you don’t need live. Just keep in mind, an archive is not a backup!
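
    As an illustration, mounting such a share on a client could look like this (the server name “nas” and the paths are placeholders, not anything from a real setup):

    ```shell
    # NFS: mount an export from a hypothetical NAS called "nas"
    sudo mkdir -p /mnt/archive
    sudo mount -t nfs nas:/export/archive /mnt/archive

    # SMB/CIFS equivalent (needs cifs-utils installed)
    sudo mount -t cifs //nas/archive /mnt/archive -o username=me

    # Or make it persistent via /etc/fstab:
    # nas:/export/archive  /mnt/archive  nfs  defaults,_netdev  0  0
    ```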

    And just to be clear: An archive is mainly a means of offloading chunks of data you don’t need accessible 100% of the time - older or completed projects, for example. An archive is well suited to a large NAS, as you’ll still have access to it if needed, but it’s not something you need to spend your daily driver’s disk space on. But an archive is not a backup, I cannot state this enough!

    So, backups… well, this depends on how valuable your data is. A rule of thumb in a perfect world involves three copies: one online, one offline, and one offsite. This should keep your data safe in any reasonable contingency scenario. Which of these you implement, and how, is entirely up to you; it all comes down to a cost/benefit equation. Sometimes keeping the full rule of thumb in place is simply not viable, for example if you have data in the petabytes. Ask me how I know.
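
    A minimal sketch of the three-copies idea using rsync (every path and the “offsite” host below are placeholders, not recommendations for your exact layout):

    ```shell
    # Hypothetical paths/host - adapt to your own setup
    SRC="$HOME/data/"

    rsync -a --delete "$SRC" /mnt/nas/backup/          # online copy (NAS)
    rsync -a --delete "$SRC" /mnt/usb-disk/backup/     # offline copy (unplug the disk afterwards)
    rsync -az --delete "$SRC" offsite:/backup/myhost/  # offsite copy over SSH
    ```

    The offline copy only counts as offline while it’s actually disconnected - a permanently attached USB disk dies with the same power surge as everything else.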

    But, to circle back to your immediate need, it sounds like you can start with something simple. Your storage requirement is pretty small, and adding some sort of hosting on top of that is pretty trivial. So I’d say that, as a starting point, any PC will do - just add a couple of hard drives to make sure you have enough for the foreseeable future.


  • Back in the day I used Nagios to get an overview of large systems, and it made it very obvious if something wasn’t working and where. But that was 20 years ago, I’m sure there are more modern approaches.

    Come to think of it, at work we have Grafana running, but I’m not sure exactly what scope it’s operating under.



  • They can be. Some motherboards come with one built in. But in most cases it refers to a dedicated PCIe card, such as one of the many LSI MegaRAID models.

    The advantage of this is that it can have a small capacitor bank (or a proper battery) to provide emergency power, so that if something stupid happens, such as a motherboard failure, the RAID controller can use this power to cleanly finish writing to the disks.

    EDIT: I just remembered one such stupid situation at work where a motherboard died and then the entire system blacked out, including power to the drives. I spoke with my vendor, since data loss and corruption carry a hefty price tag in my field. They told me not to worry - the data could sit in the buffer for ages, as the capacitor bank was there to handle things like this. Turned out that upon restoring power, once the array was online again, the write buffer was flushed to disk. No CPU or motherboard required - the controller took care of it. This was especially handy since it took a little longer to find a replacement board.


  • Ooh, I did this a while back, except it was hardware RAID 5 to RAID 6. Turns out one of the servers in a cluster was, for some reason, set up with 11 disks in RAID 5 plus a hot spare, instead of the RAID 6 used on every array on every other server. It took me embarrassingly long to realize why the storage space was as expected despite one disk being reported as not part of an array.

    storcli and a nice RAID controller make things like this easy, as long as you grab enough coffee and read the storcli syntax while taking notes to build the full command string.
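
    For illustration, the command string you end up building might look something like this - the controller number, virtual drive number, and enclosure:slot IDs are placeholders, so check your own storcli output before running anything:

    ```shell
    # Placeholders throughout: /c0, /v0 and 252:7 are not from a real system.

    # Inspect the controller and its virtual drives first
    storcli /c0 show
    storcli /c0/vall show

    # Migrate virtual drive 0 from RAID 5 to RAID 6, pulling in one extra disk
    storcli /c0/v0 start migrate type=raid6 option=add drives=252:7

    # Check migration progress
    storcli /c0/v0 show migrate
    ```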


  • bash setup/config/PS1 is your friend here. I frequently find myself with a myriad of terminals across a bunch of usernames and servers at work, and setting up a proper prompt is key to keeping track.

    My bashrc makes my prompt look like this:

    username@hostname:/absolute/path
    $ inputgoeshere

    … with color coding, of course. Yes, I use a multiline prompt. I somehow never saw that before using ParrotSec despite being a bash user for 25 years. I modified the ParrotSec default to suit my needs better, and I like it:

    • Obvious which user I am.
    • Obvious which host I’m on.
    • Obvious which path I’m in.
    • It’s easy to copy and paste a complete source/destination, for example into an rsync command.

    I pasted my PS1 config here: https://pastebin.com/ZcYwabfB

    Stick that line near the bottom of your ~/.bashrc file if you want to try it out.
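
    If you just want the bare idea without the full pastebin config, a stripped-down prompt in the same spirit looks like this (my own is fancier; this is just a minimal sketch):

    ```shell
    # Minimal two-line prompt: green user@host, blue absolute path,
    # then '$' on its own line so copied paths stay clean.
    PS1='\[\e[1;32m\]\u@\h\[\e[0m\]:\[\e[1;34m\]\w\[\e[0m\]\n\$ '
    ```

    The `\n` before `\$` is what makes it multiline, and the `\[ \]` pairs tell bash the color codes are zero-width so line editing and wrapping stay correct.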



  • I was considering something similar for my Volvo 940 around 2010. The idea was to install a touch screen as an infotainment system where I could see things like OBD2 data and navigation.

    While the speedometer wasn’t working for a little while (later fixed), I used my phone to show GPS speed with the screen flipped, so the speed reflected onto the windshield like the HUD in some modern cars. The plan was to integrate something similar into the home-brewed infotainment.

    It annoys me that I never went through with it, because so much of what I’d drawn up became standard in “fancy” cars later.


  • Anything that does the job is good enough. At its core, a server is just a regular PC with a dedicated purpose and software. Sure, there’s specialized, purpose-built hardware that’s better suited, but it’s not a requirement.

    I for one prefer 19" rackmount stuff with disk bays in the front, but that’s more of a convenience than anything.

    UPS is nice, but it’ll work without it.

    I’ve had to deal with the Brazilian computer market and how it’s ridiculously overpriced due to import fees, so in your situation I’d just get any hand-me-down computer. Servers generally don’t require much unless you’re doing something special or intensive.

    Get your hands on whatever you can find for free or dirt cheap (laptop or desktop, doesn’t matter), install Linux, and you have a basic setup you can work with. If your use case requires more, that’s something you can accommodate in the next iteration of your server.