I understand that people enter the world of self-hosting for various reasons. I am dipping my toes into this ocean to try to get away from privacy-offending centralised services such as Google, Cloudflare, and AWS.

As I spend more time here, I realise that it is practically impossible, especially for a newcomer, to set up any usable self-hosted web service without relying on these corporate behemoths.

I wanted to have my own little static website and run Immich alongside it, but I find that without Cloudflare, Google, and AWS I run the risk of getting DDoSed or hacked. And since the physical server will be hosted at my home (to avoid AWS), there is also a serious risk of infecting all the devices at home (I’m currently reading about VLANs to avoid this).

Am I correct in thinking that avoiding these corporations is impossible (and should I make peace with that), or are there ways to circumvent these giants and still have a good experience self-hosting and using web services, even as a newcomer (all without draining my pockets too much)?

Edit: I was working under a lot of misconceptions and still have a lot to learn. Thank you all for your answers.

  • hsdkfr734r@feddit.nl · 5 months ago

    One aspect is how interesting you are as a target. What would a possible attacker gain by getting access to your services or hosts?

    The danger of getting hacked is there, but you are not Microsoft, Amazon, or PayPal. Expect login attempts and port scans from actors who map out the internet. But I doubt someone would spend much effort breaking into your hosts if you don’t make it easy (like scripted automatic exploits and known-password login attempts easy).

    DDOS protection isn’t something a tiny self hosted instance would need (at least in my experience).

    Firewall your hosts, maybe use a reverse proxy and only expose the necessary services. Use secure passwords (different for each service), add fail2ban or the like if you’re paranoid. Maybe look into MFA. Use a DMZ (yes, VLANs could be involved here). Keep your software updated so that exploits don’t work. Have backups if something breaks or gets broken.
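
    A hedged sketch of a few of those steps on a Debian/Ubuntu-style host (the package names, ports, and tools here are assumptions - adapt them to your distro):

    ```sh
    # allow only SSH and the reverse proxy ports through the host firewall
    sudo ufw default deny incoming
    sudo ufw allow 22/tcp      # SSH - consider restricting by source IP
    sudo ufw allow 80,443/tcp  # HTTP/HTTPS to the reverse proxy
    sudo ufw enable

    # ban IPs that repeatedly fail logins (the default jail covers sshd)
    sudo apt install fail2ban
    sudo systemctl enable --now fail2ban
    ```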

    In my experience the biggest danger to my services is my laziness. It takes steady low-level effort to keep the instances updated and running. (Yes, there are automated update mechanisms - unattended-upgrades, for example - but there are also backwards-compatibility-breaking changes in the software which require manual intervention from me.)
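
    On Debian/Ubuntu, for instance, the unattended-upgrades mechanism mentioned above takes two commands to turn on:

    ```sh
    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure --priority=low unattended-upgrades  # enables the periodic security-update job
    ```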

    • mad_asshatter@lemmy.world · 5 months ago

      …maybe use a reverse proxy…

      +1 post.

      I would definitely suggest a reverse proxy. Caddy should be trivial in this use case.
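
      For illustration, a minimal Caddyfile for the OP’s use case - a static site plus Immich - could look something like this (the domains are placeholders; 2283 is Immich’s default port). Caddy obtains and renews the TLS certificates on its own:

      ```
      example.com {
          root * /srv/www
          file_server
      }

      photos.example.com {
          reverse_proxy localhost:2283
      }
      ```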

      cheers,

        • d_ohlin@lemmy.world · 5 months ago

          May not add security in and of itself, but it certainly adds the ability to have a little extra security. Put your reverse proxy in a DMZ, so that only it is directly facing the intergoogles. Use a firewall to expose only certain ports and destinations to your origins. Install a single wildcard cert and easily cover any subdomains you set up. There are even nginx configuration files out there that will block URLs based on regex pattern matches for suspicious strings. All of this (probably a lot more I’m missing) adds some level of layered security.
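
          As a sketch of the regex-blocking idea (the patterns below are illustrative examples, not a vetted blocklist; names, paths, and addresses are placeholders), an nginx server block can refuse matching requests before they ever reach the backend:

          ```nginx
          server {
              listen 443 ssl;
              server_name app.example.com;

              # one wildcard cert covering *.example.com
              ssl_certificate     /etc/ssl/certs/wildcard.example.com.pem;
              ssl_certificate_key /etc/ssl/private/wildcard.example.com.key;

              # 444 closes the connection without sending a response
              location ~* \.(php|aspx?)$  { return 444; }
              location ~ /\.(git|env)     { return 444; }

              location / {
                  proxy_pass http://10.0.10.5:8080;  # origin host behind the DMZ
              }
          }
          ```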

          • atzanteol@sh.itjust.works · 5 months ago

            Put your reverse proxy in a DMZ, so that only it is directly facing the intergoogles

            So what? I can still access your application through the rproxy. You’re not protecting the application by doing that.

            Install a single wildcard cert and easily cover any subdomains you set up

            This is a way to do it but not a necessary way to do it. The rproxy has not improved security here. It’s just convenient to have a single SSL endpoint.

            There are even nginx configuration files out there that will block URLs based on regex pattern matches for suspicious strings. All of this (probably a lot more I’m missing) adds some level of layered security.

            If you do that, sure. But that’s not the advice given in this forum, is it? It’s “install an rproxy!” as though that alone has done anything useful.

            For the most part, people in this forum seem to think that “direct access to my server” is unsafe, but that if you simply put a second hop in the chain you can sleep easily at night. And bonus points if that rproxy is on a VPS or in a separate subnet!

            The web browser doesn’t care if the application is behind one, two or three rproxies. If I can still get to your application and guess your password or exploit a known vulnerability in your application then it’s game over.

            • zingo@lemmy.ca · 5 months ago

              The web browser doesn’t care if the application is behind one, two or three rproxies. If I can still get to your application and guess your password or exploit a known vulnerability in your application then it’s game over.

              Right!?

              Your castle can have many walls of protection but if you leave the doors/ports open, people/traffic just passes through.

            • Auli@lemmy.ca · 5 months ago

              So I’ve always wondered this: how does a Cloudflare tunnel offer protection from the same thing?

        • ShellMonkey@lemmy.socdojo.com · 5 months ago

          I have a dozen services running on a myriad of ports. My reverse proxy setup allows me to map hostnames to those services and expose only 80/443 to the web, plus an entity now needs to know a hostname instead of just finding an exposed port. IPS signatures can help identify hostname scans, and the proxy can be configured to permit only designated sources. Reverse proxies are also commonly used for SSL offloading, to permit cleartext observation of traffic between the proxy and the backing host. There are plenty of other use cases for them out there too; don’t think of it as some one-trick on/off access-gateway tool.
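
          A hedged illustration of the source-restriction and SSL-offloading bits in nginx (the addresses, hostnames, and cert paths are all placeholders):

          ```nginx
          server {
              listen 443 ssl;
              server_name admin.example.com;
              ssl_certificate     /etc/ssl/certs/example.com.pem;
              ssl_certificate_key /etc/ssl/private/example.com.key;

              # permit only designated sources for this hostname
              allow 192.168.1.0/24;
              deny  all;

              location / {
                  # TLS terminates at the proxy (SSL offloading); the hop to the
                  # backing host is plain HTTP and can be observed on that segment
                  proxy_pass http://192.168.10.5:3000;
              }
          }
          ```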

          • atzanteol@sh.itjust.works · 5 months ago

            My reverse proxy setup allows me to map hostnames to those services and expose only 80/443 to the web,

            The mapping is helpful but not a security benefit. The latter can be done with a firewall.

            Paraphrasing: “there is a bunch of stuff you can also do with a reverse proxy”

            Yes. But that’s no longer just a reverse proxy. The reverse proxy isn’t itself a security tool.

            I see a lot of vacuous security advice in this forum. “Install a firewall”, “install a reverse proxy”, etc. This is mostly useless advice. Yes, do those things but they do not add any protection to the service you are exposing.

            A firewall only protects you from exposing services you didn’t want to expose (e.g. NFS or some other service running on the same system), and the rproxy just allows for host based routing. In both cases your service is still exposed to the internet. Directly or indirectly makes no significant difference.

            What we should be advising people to do is “use a valid ssl certificate, ensure you don’t use any application default passwords, use very good passwords where you do use them, and keep your services and servers up-to-date”.

            A firewall allowing port 443 in and an rproxy happily forwarding traffic to a vulnerable server is of no help.
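
            The “valid ssl certificate” part is cheap to get right these days; one common route (assuming certbot with an nginx frontend; the domain is a placeholder):

            ```sh
            sudo apt install certbot python3-certbot-nginx
            sudo certbot --nginx -d app.example.com  # fetches the cert and sets up auto-renewal
            ```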

          • Guadin@k.fe.derate.me · 5 months ago

            I don’t get why they say that. Sure, maybe the attackers don’t know that I’m on Ubuntu 21.2, but if they come across https://paperless.myproxy.com and the Paperless-NGX website opens, I’m pretty sure they know they just visited a Paperless install and can try the exploits they know. Yes, that last part was a bit snarky, but I am truly curious how it helps. I’ve looked at proxies multiple times for my self-hosted stuff, but I never saw really practical examples of what to do and how to set one up to add a safety/security layer, so I always fall back to my VPN and leave it at that.

          • atzanteol@sh.itjust.works · 5 months ago

            I’m positive that F5’s marketing department knows more than me about security and has no ulterior motive in making you think you’re more secure.

            Snark aside, they may do some sort of WAF in addition to being a proxy. Just “adding a proxy” does very little.

    • thirdBreakfast@lemmy.world · 5 months ago

      +1 for the main risk to my service reliability being me getting distracted by some other shiny thing and getting behind on maintenance.

    • Oisteink@feddit.nl · 5 months ago

      All reverse proxies I have used do rudimentary DDoS protection: rate limiting. Enough to keep your local script kiddie at bay, but not advanced stuff.

      You can protect your SSH instance with rate limiting too, but you’ll likely do this in the firewall rather than the proxy.
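
      A hedged sketch of both (all thresholds here are arbitrary examples): nginx’s limit_req caps per-IP request rates at the proxy, and a pair of iptables rules throttles new SSH connections at the firewall.

      ```nginx
      # in the http {} context: track client IPs, allow 10 requests/second each
      limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

      # in the server/location block that proxies the app
      limit_req zone=perip burst=20 nodelay;
      ```

      ```sh
      # drop a source that opens 4+ new SSH connections within 60 seconds
      sudo iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
          -m recent --set --name ssh
      sudo iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
          -m recent --update --seconds 60 --hitcount 4 --name ssh -j DROP
      ```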