I hope they are using more than just docker for isolation 😅 Each user should be running in a different VM for security.
I’ve been using Restic to Backblaze B2.
I don’t really trust B2 that much (I think it is mostly a single-DC kind of storage) but it is reasonably priced and easy to use. Plus as long as their failures aren’t correlated with mine it should be fine.
Strongly reminds me of Old MacDonald Had a Barcode, E-I-E-I CAR. Basically they put the standard anti-virus test string (EICAR) into various sorts of barcodes and see what breaks.
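If you want to play with this yourself, here is a minimal sketch in Python using the third-party qrcode package (the package choice and output path are my own, not from the paper):

    import qrcode  # third-party package: pip install qrcode[pil]

    # The standard EICAR anti-virus test string (raw string because of the backslash).
    EICAR = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

    # Encode it as a QR code and save it; point a scanner app at the result
    # and see how the scanner (or whatever consumes the scan) reacts.
    img = qrcode.make(EICAR)
    img.save("eicar-qr.png")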
For me the biggest benefit is the ease of applying patches. For example in Nix I can easily take a patch that is either unreleased, or that I wrote myself, and apply it to my systems immediately. I don’t need to wait for it to be released upstream then packaged in my distro. This allows me to fix problems and get new features quickly without needing to mess with my system in any other way (no packages in other directories that need to be cleaned up, no extra steps after updates to remember, no cases where some packages are using different versions and no breaking due to library ABI breaks).
Another benefit that you are pointing at is changing build flags. Oftentimes I want to enable an optional feature that my distro doesn’t enable by default.
Lastly, building packages with different micro-architecture optimizations can be beneficial. I don’t do this often, but occasionally when I want to run some compute-heavy work it can be nice to get a small performance boost.
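To make the first two points concrete, here is roughly what this looks like as a nixpkgs overlay. This is a sketch only: the package name, the feature flag and the patch URL are all placeholders.

    final: prev: {
      # "somepkg" and "enableSomeFeature" are placeholders for a real
      # package and one of its optional build flags.
      somepkg = (prev.somepkg.override { enableSomeFeature = true; })
        .overrideAttrs (old: {
          # Apply a patch that isn't in any release yet.
          patches = (old.patches or [ ]) ++ [
            (prev.fetchpatch {
              url = "https://example.com/unreleased-fix.patch";
              hash = prev.lib.fakeHash;  # replace with the real hash
            })
          ];
        });
    }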
The others have made great points about how any amount adds up. Especially with compounding.
But the most important reason may just be making it a habit. If you are saving $50/month you have a place to put your savings and an investment strategy for that money. The next time you get a pay raise or get rid of some recurring spend it will be natural to start saving $60/month, then $100, and more and more. It is much easier to improve an existing habit than to start a new one. So as soon as you have the chance, start that good habit.
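To put rough numbers on the compounding point (the 7% annual return here is an assumption for illustration, not a promise):

    # Future value of saving $50/month for 30 years at a hypothetical 7%/year.
    monthly, yearly_rate, years = 50, 0.07, 30
    r, n = yearly_rate / 12, years * 12
    future_value = monthly * ((1 + r) ** n - 1) / r
    print(f"contributed:  ${monthly * n:,.0f}")   # ~$18,000
    print(f"ending value: ${future_value:,.0f}")  # ~$61,000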
Your Firefox install contains a file called omni.ja. For example, on many Linux machines it will be at /usr/lib/firefox/browser/omni.ja. This file is a ZIP archive and contains your places.xhtml as well as other browser files. The exact paths are not always obvious as there is some remapping taking place (see the .manifest files in the archive), but I think the vast majority of chrome:// paths come from this archive.
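You can poke around in it with any ZIP tool. A quick sketch in Python (the path is the Linux one from above; note the archive is slightly off-spec as ZIPs go, so some tools warn about it and unzip -l is a fallback):

    import zipfile

    # Adjust for your distro/OS.
    OMNI = "/usr/lib/firefox/browser/omni.ja"

    with zipfile.ZipFile(OMNI) as z:
        for name in z.namelist():
            # The .manifest files describe the chrome:// remapping.
            if name.endswith(".manifest") or name.endswith("places.xhtml"):
                print(name)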
We did it not because it was easy, but because we thought it would be easy.
I switched to Immich recently and am very happy.
The bad:
Honestly a lot of stuff in PhotoPrism feels like one developer has a weird workflow and they optimized it for that. Most of them are counter to what I actually want to do (like automatic title and description generation, or the review stuff, or auto quality rating). Immich is very clearly inspired by Google Photos and takes a lot of things directly from it, but that matches my use case way better. (I was pretty happy with Google Photos until they started refusing to give access to the originals.)
There are three parts to the whole push system:
1. The push server and the protocol the app server uses to send messages to it.
2. The API the client uses to create a subscription (in browsers, a JS API).
3. The API that delivers incoming messages to the client (in browsers, also a JS API, via service workers).
My point is that 1 is the core: it is already available across devices (including over Google’s push notification system) and making custom push servers is very easy. It would make sense to keep that interface, but provide alternatives to 2 and 3. This way browsers can use the JS API for 2 and 3, but other apps can use a different API. The push server and the app server can remain identical across browsers, apps and anything else. This provides compatibility with the currently reigning system, the ability to provide tiny shims for people who don’t want to self-host, and still maintains the option to fully self-host as desired.
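As an example of how simple part 1 is on the app-server side, here is a sketch using the third-party pywebpush package (all of the values are placeholders; the subscription info is whatever the client handed you when it subscribed):

    from pywebpush import webpush  # third-party package: pip install pywebpush

    # Endpoint and keys come from the client's subscription; placeholders here.
    subscription_info = {
        "endpoint": "https://push.example.com/endpoint/abc123",
        "keys": {"p256dh": "...", "auth": "..."},
    }

    webpush(
        subscription_info,
        data="hello from my app server",
        vapid_private_key="path/to/vapid_private_key.pem",
        vapid_claims={"sub": "mailto:admin@example.com"},
    )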
I don’t want the end executable to have to bundle these files and re-parse them each time it gets run.
No matter how you persist data, you will need to re-parse it. The question is really just whether the new format is more efficient to read than the old format. Some formats, such as FlatBuffers and Cap'n Proto, are designed to have very efficient loading processes.
(Well technically you could persist the process image to disk, but this tends to be much larger than serialized data would be and has issues such as defeating ASLR. This is very rarely done.)
Lots of people are talking about Pickle, but it isn’t particularly fast. That being said, with Python you can’t expect much to start with.
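It’s easy to measure on your own data; a quick sketch with just the standard library (the toy data shape is mine, and numbers will vary):

    import json, pickle, timeit

    data = {"items": [{"id": i, "name": f"item-{i}"} for i in range(10_000)]}
    as_json = json.dumps(data).encode()
    as_pickle = pickle.dumps(data, protocol=pickle.HIGHEST_PROTOCOL)

    # Compare pure deserialization cost; results depend heavily on data shape.
    print("json  :", timeit.timeit(lambda: json.loads(as_json), number=100))
    print("pickle:", timeit.timeit(lambda: pickle.loads(as_pickle), number=100))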
IMHO UnifiedPush is just a poor re-implementation of WebPush, which is an open and distributed standard that supports E2EE (and in the browser requires it, so support is universal).
UnifiedPush would be better as a framework for WebPush providers and a client API, but using the same protocol and backends as WebPush. (How to get a WebPush endpoint is defined as a JS API in browsers, so that part would need to be adapted.)
Why are these TypeScript + JSX rather than just SVGs? It seems that the paths are defined as SVG but they are using some JavaScript framework to define the animations rather than just using SVG or CSS animations.
Why WASM? It seems to me that the additional attack surface from WASM is negligible compared to JavaScript’s (and IIUC disabling JavaScript will also disable WASM).
Blocking third-party frames is definitely a good way to reduce your attack surface though. Ad embeds are often used to distribute exploits.
A few hundred a month is just a few per day. That is pretty low volume by most standards.
I would say in general if the SMTP server could be replaced by a single human writing and mailing snail-mail letters by hand it qualifies as low volume.
The concern is that it would be nice if the UNIX users and LDAP were automatically kept in sync and managed from a version-controlled source. I guess the answer is just to build up a static LDAP database from my existing configs. It would be nice to have one authoritative system on the server, but as long as they are both built from one source of truth it shouldn’t be an issue.
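For the “build a static LDAP database from existing configs” idea, this is the kind of thing I mean: generate LDIF from the local passwd database (the base DN, object classes and UID cutoff are assumptions to adapt):

    import pwd

    BASE_DN = "ou=people,dc=example,dc=com"  # placeholder base DN

    for u in pwd.getpwall():
        if u.pw_uid < 1000:  # skip system accounts (cutoff varies by distro)
            continue
        print(f"dn: uid={u.pw_name},{BASE_DN}")
        print("objectClass: inetOrgPerson")
        print("objectClass: posixAccount")
        print(f"uid: {u.pw_name}")
        print(f"cn: {u.pw_gecos or u.pw_name}")
        print(f"sn: {u.pw_name}")
        print(f"uidNumber: {u.pw_uid}")
        print(f"gidNumber: {u.pw_gid}")
        print(f"homeDirectory: {u.pw_dir}")
        print(f"loginShell: {u.pw_shell}")
        print()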
Yes, LDAP is a general tool. But many of the applications that I am interested in use it for user information. That is what I want to use it for. I’m not really interested in storing other data.
I think you are sort of missing the goal of the question. I have a bunch of self-hosted services like Jellyfin, qBittorrent, PhotoPrism, Metabase … I want to avoid having to configure users in each one individually. I am considering LDAP because it is supported by many of these services. I’m not concerned about synchronizing UNIX users, I already have that solved. (If I need to move those to LDAP as well that can be considered, but isn’t a goal).
But it does boil down to business pressures. The business prefers more and bigger produce over more nutritious produce.
Is that a bad thing? Maybe not. Maybe you can just eat more to get your nutrition since higher yield should reduce cost.
But the point still stands that there is very little business pressure to make a nutritious product.
But the problem is that most self-hosted apps don’t integrate well with these. qBittorrent, Jellyfin and Metabase are just a few examples.
The short answer is that Docker (and other containerization technologies) shares the Linux kernel with the host. The Linux kernel is very complicated and shouldn’t be trusted to be vulnerability-free. Exploitable bugs are regularly discovered in the Linux kernel (and Windows and Darwin). No serious company separates different tenants with just container technology. Look at GCP, AWS, DigitalOcean… they all use hardware virtualization, which is much simpler and much more likely to be secure (though even then bugs are found on occasion).
So in theory it is secure, but it is just too complex to rely on. I would say that Docker is good for “mostly trusted” isolation: different organizations in the same company, or different software that isn’t actively trying to be malicious. But it shouldn’t be used to separate untrusted parties.