• 1 Post
• 23 Comments
• Joined 4 years ago
• Cake day: January 21st, 2021


  • I switched to Immich recently and am very happy.

    The good:

    1. Immich’s face detection is much better and very rarely fails, especially for non-white faces. But even for white faces, PhotoPrism regularly needed me to review the unmatched faces. I also had to really turn up the “what is a face” threshold, because otherwise it would miss a ton of clear faces. (Then it only missed some, but it also produced tons of false positives.) Immich, on the other hand, just works.
    2. Immich’s UI is much nicer overall, with lots of small affordances. For example, the “view in timeline” menu item is worth switching for on its own. Also, good riddance to PhotoPrism’s persistent and buggy selection. Someone must have worked really hard on implementing it, but it was just a bad idea.
    3. Immich has an app with upload support, and it lets you view local and uploaded photos in one interface, which is a huge UX win. I couldn’t find a good Android app for uploading to PhotoPrism. You could set up import delays and the like, but you would still regularly get partially uploaded files imported and have to clean them up manually.
    4. Immich’s search by content is much better. For example searching for “cat with red and yellow ball” was useless on PhotoPrism, but I found tons of the results I was looking for on Immich.

    The bad:

    1. There is currently terrible jank in the Immich app which makes videos unusable and everything else painful. Apparently this is due to an album sync process running on the main thread. They are working on it. I can’t fathom how a few hundred albums cause this much lag, but 🤷. There is also even worse lag on the location view page, but at least that is just one page.
    2. The Immich app has far fewer features than the website. But the website works very well on mobile, so even just using the website (and the app for uploading) is better than PhotoPrism here. The fundamentals are good; it just needs more work.
    3. I liked PhotoPrism’s advanced filters. They were very limited but at least they were there.
    4. Not being able to sort search results by date is a huge usability issue. I often know roughly when the photo I want to find was taken and being able to order by date would be hugely helpful.
    5. You have to eagerly transcode all videos. There is no way to clean up old transcodes and re-transcode on the fly. To be fair, the PhotoPrism story also wasn’t great, because you had to wait for the full video to be transcoded before playback started, leading to a huge delay for videos longer than a few seconds, but at least I could save a few hundred gigs of disk space.

    Honestly, a lot of PhotoPrism feels like one developer has a weird workflow and optimized the product for it. Most of those features run counter to what I actually want to do (like automatic title and description generation, the review workflow, or the automatic quality rating). Immich is very clearly inspired by Google Photos and takes a lot of things directly from it, but that matches my use case much better. (I was pretty happy with Google Photos until they started refusing to give access to the originals.)


  • There are three parts to the whole push system.

    1. A push protocol. You get a URL and post a message to it. That message is E2EE and gets delivered to the application.
    2. A way to acquire that URL.
    3. A way to respond to those notifications.

    My point is that part 1 is the core, and it is already available across devices, including over Google’s push notification system; making custom push servers is also very easy. It would make sense to keep that interface but provide alternatives for parts 2 and 3. This way browsers can use the JS API for 2 and 3, while other apps can use a different API. The push server and the app server can remain identical across browsers, apps, and anything else. This provides compatibility with the currently reigning system, the ability to provide tiny shims for people who don’t want to self-host, and still maintains the option to fully self-host as desired.
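
    As a concrete illustration of part 1, here is a minimal sketch of an application server posting a message to a push endpoint using the Node web-push library. The keys and subscription values are placeholders, not anything from a real deployment:

    ```typescript
    import webpush from "web-push";

    // VAPID keys identify the application server; these are placeholders.
    webpush.setVapidDetails(
      "mailto:admin@example.com",
      "<vapid-public-key>",
      "<vapid-private-key>",
    );

    // This is what part 2 hands to the app server: the push URL plus the
    // client's key material, so the payload is end-to-end encrypted.
    const subscription = {
      endpoint: "https://push.example.com/send/abc123",
      keys: { p256dh: "<client-public-key>", auth: "<client-auth-secret>" },
    };

    // Part 1: POST an encrypted message to the URL; the push service
    // (Google's, Mozilla's, or a self-hosted one) delivers it to the device.
    await webpush.sendNotification(subscription, JSON.stringify({ title: "Hello" }));
    ```

    The same call works whether the endpoint points at Google’s push infrastructure, Mozilla’s, or a self-hosted server, which is the compatibility point above.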



  • IMHO UnifiedPush is just a poor re-implementation of WebPush, which is an open and distributed standard that supports E2EE (and requires it in the browser, so support is universal).

    UnifiedPush would be better as a framework for WebPush providers plus a client API, while using the same protocol and backends as WebPush. (Since acquiring a WebPush endpoint is defined as a JS API in browsers, that part would need to be adapted for apps.)
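
    For reference, this is roughly the browser JS API referred to above for acquiring a WebPush endpoint; a client framework for native apps would need to expose an equivalent call (the application server key here is a placeholder):

    ```typescript
    // Runs in a page that has registered a service worker.
    const registration = await navigator.serviceWorker.ready;

    const subscription = await registration.pushManager.subscribe({
      userVisibleOnly: true,
      applicationServerKey: "<vapid-public-key>", // placeholder
    });

    // subscription.endpoint is the URL the app server POSTs WebPush messages to;
    // subscription.toJSON().keys carries the E2EE key material.
    console.log(subscription.endpoint, subscription.toJSON().keys);
    ```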





  • The concern is that it would be nice if the UNIX users and LDAP were automatically in sync and managed from a version-controlled source. I guess the answer is to just build up a static LDAP database from my existing configs, as sketched below. It would be nice to have one authoritative system on the server, but as long as they are both built from one source of truth it shouldn’t be an issue.
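
    For example, a minimal sketch of that single-source-of-truth idea: a version-controlled user list rendered into LDIF that gets loaded into the LDAP server (the base DN, attributes, and layout here are assumptions, not an actual setup):

    ```typescript
    // Render a version-controlled user list into LDIF for a static LDAP database.
    interface User { uid: string; name: string; mail: string; uidNumber: number; }

    const users: User[] = [
      { uid: "alice", name: "Alice Example", mail: "alice@example.com", uidNumber: 1000 },
    ];

    const toLdif = (u: User) =>
      [
        `dn: uid=${u.uid},ou=people,dc=example,dc=com`,
        "objectClass: inetOrgPerson",
        "objectClass: posixAccount",
        `uid: ${u.uid}`,
        `cn: ${u.name}`,
        `sn: ${u.name.split(" ").pop()}`,
        `mail: ${u.mail}`,
        `uidNumber: ${u.uidNumber}`,
        `gidNumber: ${u.uidNumber}`,
        `homeDirectory: /home/${u.uid}`,
      ].join("\n");

    // The same user list could also feed the UNIX user config, keeping both in sync.
    console.log(users.map(toLdif).join("\n\n"));
    ```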


  • Yes, LDAP is a general tool. But many of the applications I’m interested in can use it for user information, and that is what I want to use it for. I’m not really interested in storing other data.

    I think you are sort of missing the goal of the question. I have a bunch of self-hosted services like Jellyfin, qBittorrent, PhotoPrism, Metabase … I want to avoid having to configure users in each one individually. I am considering LDAP because it is supported by many of these services. I’m not concerned about synchronizing UNIX users; I already have that solved. (If I need to move those to LDAP as well, that can be considered, but it isn’t a goal.)
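
    As a rough sketch of how these services typically use it (assuming the common “bind as the user” pattern, with a made-up host and DN layout rather than the actual config of any of the apps above):

    ```typescript
    import ldap from "ldapjs";

    // A service validates a login by binding to LDAP as that user:
    // if the bind succeeds, the password is correct.
    function checkLogin(username: string, password: string): Promise<boolean> {
      const client = ldap.createClient({ url: "ldap://ldap.example.com" });
      const dn = `uid=${username},ou=people,dc=example,dc=com`;
      return new Promise((resolve) => {
        client.bind(dn, password, (err) => {
          client.unbind();
          resolve(!err); // a real service would also distinguish connection errors
        });
      });
    }
    ```

    Every service that supports LDAP points at the same directory, so adding or removing a user happens in exactly one place.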




  • kevincox@lemmy.ml to Privacy@lemmy.ml · In search for a good VPN · 2 months ago

    I mean, it is always better to have more open source. But the point of the multi-hop system is that you don’t need to trust the server. Even if the server were open source:

    1. You wouldn’t know that we are running an unmodified version.
    2. If you need to trust the server then someone could compel us to tap it or monitor it.

    The open source client is enough to verify this and the security of the whole scheme.
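
    To illustrate why an audited client is enough, here is a generic sketch of the onion-style layering a multi-hop design uses. This is an illustration with throwaway symmetric keys, not the actual protocol of the product mentioned:

    ```typescript
    import { createCipheriv, randomBytes } from "node:crypto";

    // Seal a payload with AES-256-GCM: iv || auth tag || ciphertext.
    function seal(key: Buffer, plaintext: Buffer): Buffer {
      const iv = randomBytes(12);
      const cipher = createCipheriv("aes-256-gcm", key, iv);
      const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
      return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]);
    }

    // Keys the client shares with each hop (a real system would negotiate
    // these with an asymmetric key exchange such as X25519).
    const entryKey = randomBytes(32); // known only to the entry hop
    const exitKey = randomBytes(32);  // known only to the exit hop

    const packet = Buffer.from("the user's actual traffic");

    // Inner layer: only the exit hop can open it, and the exit hop only ever
    // sees traffic arriving from the entry hop, never the user's address.
    const inner = seal(exitKey, packet);

    // Outer layer: only the entry hop can open it, and all it learns is
    // "forward this opaque blob to the exit hop".
    const onion = seal(entryKey, inner);
    ```

    The entry hop knows who you are but sees only ciphertext; the exit hop sees the traffic but not who sent it. Since the client applies both layers itself, verifying the client verifies the property, no matter what the servers run.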




  • Here is the problem with crop quality:

    1. Most of the purchase decision is what is observable at the store.
      • Does it look good?
      • What is the price?
      • How is the smell, texture, weight…
    2. Some happens at home, and you might remember for next time.
      • How does it taste?
      • How long does it last?
      • Does it make you feel satisfied?
    3. It is basically impossible to know how good food was for you.
      • You eat a lot of food and the response is delayed.
      • Even if you have a response you probably don’t properly understand your body.
      • In the end, most of the “health” of food is just your beliefs and marketing.

    So there is basically no business pressure to have crops be nutritious.


  • Because these buckets probably don’t exist (citation needed on all of these, I don’t have access to data from a large online store).

    I suspect that this is actually a “good” recommendation in the face of many other facts.

    1. Any recommendation has a very low chance of success. Outside of search contexts (where there is clear intent) I suspect that the chance of a recommendation leading to a purchase is <1%.
    2. You usually make more money from bigger sales. So showing a $1k GPU with a 1% chance of purchase is better than showing a $20 pair of sunglasses with a 20% chance of purchase (and I doubt any recommendation hits a 20% purchase rate outside of clear search intent); the sketch after this list works out the expected values.
    3. People return things. Return rates are much higher than 1% on many platforms, and a good chunk of those buyers will want a similar product to replace the defective/bad/unsuitable one.
      • For Amazon this maybe isn’t a good excuse because they should be able to incorporate return information into the recommendations. But even then, lots of people may prefer to order a second one before going through with the return. Maybe they want to do a comparison to be sure that they like the new one more before sending the first back.
    4. People do have uses for multiple even for things that wouldn’t seem that way at first glance. If I just bought a GPU and am happy with it maybe my partner needs an upgrade (or gets a little jealous). Maybe I will see a similar or identical product recommended and get it for her. Maybe I like my new fridge and also want to replace my second basement fridge with it, or maybe the quietness of the new one made me realize how loud the other one is and I want to get a similar model to replace it.
    5. People recommend things to each other. Maybe I just bought a GPU and my buddy is asking if I like it. The next day I see a recommendation for a GPU that I think is a good option for them, so I send the link.

    Yes, all of these scenarios are unlikely, but I suspect their combined likelihood is still significantly higher than the baseline, and for the big items that people usually complain about, much more profitable. I suspect you see these ads because they work. Not in the sense that they are often right, but in that they have higher expected value than the other available ads.
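
    Working through the expected-revenue comparison from point 2 with the same illustrative numbers (made up for the argument, not real conversion data):

    ```typescript
    // Expected revenue per impression = price × chance the impression converts.
    const expected = (price: number, conversionRate: number) => price * conversionRate;

    console.log(expected(1000, 0.01)); // GPU: $10 expected per impression
    console.log(expected(20, 0.2));    // sunglasses: $4 expected per impression
    ```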


  • Yeah, I can’t believe how hard targeting other consoles is for basically no reason. I love this Godot page that accurately showcases the difference:

    https://docs.godotengine.org/en/stable/tutorials/platform/consoles.html

    Currently, the only console Godot officially supports is Steam Deck (through the official Linux export templates).

    The reasons other consoles are not officially supported are:

    • To develop for consoles, one must be licensed as a company. As an open source project, Godot has no legal structure to provide console ports.
    • Console SDKs are secret and covered by non-disclosure agreements. Even if we could get access to them, we could not publish the platform-specific code under an open source license.

    Who at these console companies thinks that making it hard to develop software for them is beneficial? It’s not like the SDK APIs are actually technologically interesting in any way (maybe some early consoles were; the last “interesting” hardware is probably the PS2). Even if the APIs were open source (the signatures, not the implementation), every console has DRM to prevent running unsigned games, so it wouldn’t allow people to distribute games outside of the console maker’s control (other than on modded systems).

    So to develop for the Steam Deck:

    1. Click export.
    2. Test a bit.

    To develop for Switch (or any other locked-down console):

    1. Select a third party that maintains a Godot port.
    2. Negotiate a contract.
      • If this falls through, go back to step 1.
    3. Integrate your code with their port.
    4. Click export.
    5. Test a bit.

    What it could be (after you register with Nintendo to get access to the SDK download):

    1. Download the SDK to wherever Godot expects it.
    2. Click export.
    3. Test a bit.

    All they need to do is grant an open source license on the API headers. All the rest is done for them and magically they have more games on their platform.


  • kevincox@lemmy.ml to Privacy@lemmy.ml · In search for a good VPN · 2 months ago

    Mullvad is one of the best options if you care about privacy. They take privacy seriously, both on their side and in pushing users towards private options. They also support fully anonymous payments, and their price is incredibly reasonable.

    I’m actually working on a VPN product as well. It is a multi-hop system so that we can’t track you. But it isn’t publicly available yet, so in the meantime I happily recommend Mullvad.