• 0 Posts
  • 54 Comments
Joined 2 years ago
Cake day: June 9th, 2023


  • I saw that they are working on a big refactoring to use EF Core instead of direct SQL queries. I was actually surprised when they said that the migration will take days for some, and that you shouldn’t interrupt it.

    Not interrupting a database migration is really standard procedure. That it takes days is unfortunate, but what should the devs do? Spend weeks or months of testing on a migration process that can recover from an interruption, for the three people who run on slow hardware?

    Please do not get me wrong: that the database and everything related to it is slow and basically legacy code is not good. But exactly that is being worked on right now, instead of continuously pumping out new features. Complaining about the exact thing that is currently in the works feels very disingenuous.


  • Based on your screenshot of the NPM dashboard, something seems to be wrong. In the setup window you show that you forward the traffic with HTTP on port 80; in the dashboard screenshot you forward it with HTTPS on port 80.

    Just skip HTTP and self-signed certificates altogether. Modern browsers make it a pain to use non-HTTPS sites. A simple domain setup with a DNS ACME challenge is a bit of a hassle, but worth the hour(s) of invested time, especially with NPM, where it is a set-and-forget option.

    Does Pi-hole support wildcard DNS entries yet? To my knowledge the GUI only supports single entries, so you have to enter every subdomain you want forwarded into Pi-hole manually. A workaround is a dnsmasq config file (see the sketch below) or something else like AdGuard.
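
    The dnsmasq workaround is a one-liner. A minimal sketch, assuming the reverse proxy listens on 192.168.1.10 and home.example.com is your internal zone (both placeholders); Pi-hole picks up extra dnsmasq configs from this directory (newer versions may require enabling that first):

    ```
    # /etc/dnsmasq.d/99-wildcard.conf
    # Resolve home.example.com and every subdomain of it to the proxy.
    address=/home.example.com/192.168.1.10
    ```

    Run `pihole restartdns` afterwards so the entry is picked up.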


  • I know that the project is done by volunteers, but I was just wondering whether I should invest more time in trying to resolve the issues. Maybe my server specs are just not ideal for Jellyfin.

    Why do you think they do not?

    If you looked up what they are actually doing, you would realize that a lot of work is being done to improve the underlying code quality, to make major changes to core functionality easier. Quick-and-dirty fixes in the predecessor project, Emby, have led to a very shitty code base that makes changes hard.



  • Do you want to prevent brute forcing, or do you want to prevent the attacker getting in?

    If you want to prevent brute forcing, then software like fail2ban helps a little, but it is only an IP-based block, so with IPv6 it is not really helpful against a real attack, since rotating IP addresses is trivial. It can still slow the attacker down, though. Limiting the number of sessions and auth tries also slows the attacker down significantly.

    If you just want to not worry about it, set strong passwords, and when it is a multi-user system where other people might access it, configure public key auth so you can be sure the other users have strong passwords (or keys, in this case) to authenticate with (see the sketch below).

    With strong passwords or keys it is basically impossible to brute force your way in over SSH.
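
    A minimal sshd_config sketch covering those points (standard OpenSSH options; tune the numbers to taste):

    ```
    # /etc/ssh/sshd_config: key-only auth plus rate limiting
    PasswordAuthentication no   # keys only; drop this line if you still need passwords
    PubkeyAuthentication yes
    MaxAuthTries 3              # auth attempts allowed per connection
    MaxSessions 2               # concurrent sessions per connection
    LoginGraceTime 30           # drop unauthenticated connections after 30s
    ```

    Reload the daemon afterwards (systemctl reload sshd, or ssh depending on the distro) and keep an existing session open while testing so you do not lock yourself out.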


  • ShortN0te@lemmy.ml to Selfhosted@lemmy.world · Latest Watchtower fork?

    Just because there is no update does not mean there are security vulnerabilities to worry about. Or do you have a specific one that is not fixed?

    The attack surface seems very narrow to me. It checks the container registry, downloads the images and runs some Docker commands.

    It has no interface, so in order to attack it you would have to compromise either the container registry (but then it would be easier to compromise the images you download), the secure connection used to download them (HTTPS is quite robust), or something on the server side.

    Also, the project does not really look that abandoned to me.

    EDIT: I have not checked this, but Watchtower is probably using Docker for most steps anyway? So basically the only remaining attack path would be the notifications Watchtower sends?
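
    For context, a typical deployment is little more than this (a sketch based on the containrrr/watchtower image; the notification URL is a placeholder):

    ```
    # docker-compose.yml: roughly the whole deployment surface of Watchtower
    services:
      watchtower:
        image: containrrr/watchtower
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock  # full Docker control: the real risk
        environment:
          - WATCHTOWER_POLL_INTERVAL=86400             # check for new images once a day
          - WATCHTOWER_NOTIFICATION_URL=smtp://user:pass@mail.example.com:587/?from=wt@example.com&to=admin@example.com
    ```

    The socket mount is the part worth worrying about far more than the project's update cadence: anything that can talk to that socket effectively has root on the host.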




  • For the most critical infrastructure, like my mail, I subscribe to the release and blog RSS feeds. My OSs send me update notifications via mail (apticron); those I handle manually. Everything else auto-updates daily.

    You still need to check whether the software you use is still maintained and receives security updates. This is mostly done by choosing popular and community-driven options, since those are less likely to get abandoned.
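
    The apticron part is a two-line setup. A sketch, assuming a Debian-based system and a placeholder address:

    ```
    # apt install apticron
    # /etc/apticron/apticron.conf
    EMAIL="admin@example.com"   # where the pending-update mails go
    ```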




  • Just subscribe to the release channel. How that works varies from OS to OS and from software to software, but it is worth it.

    Use tools that are universal. For example, I have not used TrueNAS SCALE because it did not support native Docker at the time. OS-specific solutions are more likely to break than universal ones (TrueCharts vs. Docker).

    To get up and running again after a complete failure, I can just download the latest config and data from my backup, set up any distro that supports Docker, and my system is running again (see the sketch below).

    I do OS upgrades when they are available, usually within 1 or 2 days, and containers are updated daily with Watchtower.
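
    The restore path under those assumptions boils down to a few commands (paths and the backup tool are placeholders; restic shown here, but anything that can restore a directory works):

    ```
    # Fresh distro with Docker installed; everything lives in one data directory.
    restic restore latest --target /srv/apps   # pull config + data back from the backup
    cd /srv/apps && docker compose up -d       # compose files and volumes come back together
    ```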



  • Immich requires a server to function, but a lot (or even all) of its functions are things that could reasonably be done entirely on-device. Aves combined with some automatic backup solution such as Nextcloud gets you (from what I can tell) most of the functionality Immich offers.

    How would you back up Immich on-device?

    And if you back up to Nextcloud, then you already have a server?

    So you are arguing that having a file server is enough, and processing is done on the client side?

    That would be very inefficient in this case.

    1. You would need to have all the data on the client, or transfer all of it to the client every time you load it.
    2. Your device has to do all the processing, which would lead to lower battery life.
    3. How do you handle multiple users? By giving partial access to the filesystem?

    I could come up with other points, but this should give you an idea. Yes, for some use cases a client-server approach does not make sense, but for a dedicated photo backup and indexer it absolutely does.