

Looks like it’s the new default in OpenCloud.
Mama told me not to come.
She said, that ain’t the way to have fun.
It’s a problem. I have like 10:
I only use three:
I should probably give some up, but I’m a little attached to them.
I think it’s a chicken-and-egg problem. A FOSS Roku replacement needs apps to get popular, and manufacturers won’t port their apps until it’s popular. Basically, manufacturers need someone with a big marketing budget to help them feel comfortable investing in a platform, and that’s not going to happen with a niche FOSS platform.
Maybe if we collectively raise like $100M or something, we could put together a big enough marketing budget to convince some of the bigger names (Netflix, HBO, etc) to take the risk, and the rest will follow if it’s popular enough. Maybe.
If you control both sides, you can just use a non-standard port. If you only control the client, you just need to make sure wherever your data exits allows outgoing traffic on that port.
So yeah, that should work.
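
For example, if it’s SSH on both ends, it’s just one line on each side (the port and hostnames here are made up):

```
# server side, in /etc/ssh/sshd_config:
Port 2222

# client side, either one-off:
ssh -p 2222 user@myserver.example.com

# or saved in ~/.ssh/config:
Host myserver
    HostName myserver.example.com
    Port 2222
```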
If you turn off the battery optimization setting, does it get better?
I was missing a bunch of alerts from Signal because I downloaded it outside the GPlay store (the store version handles waking up the app better), and I got much more reliable notifications after disabling that setting. It uses a lot more battery though; maybe that’s worth it for you.
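
If you want to double-check from a computer, something like this should show whether Signal is exempt from battery optimization (assuming adb is set up; org.thoughtcrime.securesms is Signal’s package name):

```
# list apps exempt from battery optimization and look for Signal
adb shell dumpsys deviceidle whitelist | grep -i securesms
```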
Check out the POSIX driver in OCIS/OpenCloud. It should keep the responsiveness of Seafile, while having a sane disk format.
Or you can try out the Seafile FUSE layer.
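
If I remember right, the FUSE mount is just a script shipped with the server, something like this (the mount point is arbitrary, and the mount is read-only):

```
# run from the seafile-server directory
./seaf-fuse.sh start /mnt/seafile
ls /mnt/seafile     # libraries show up as plain directories
./seaf-fuse.sh stop
```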
I’m in a similar boat, and I’ve been testing out Seafile and ownCloud OCIS, and I think I prefer OCIS. I’ll probably switch to OpenCloud though, since it seems a lot of the OCIS devs went there due to issues w/ management.
Some things I didn’t like about Seafile:
But hey, if it works, it works, so don’t mess w/ it.
Yes, maybe. Or maybe not.
Here’s what I can verify:
For the last point, I don’t know if this particular person sucks more than others in the same community. I can see they claim that, but I can’t independently verify that. I hope the Matrix community can see the constructive criticism here and fix the underlying issues, regardless of who is “right” here.
Yeah, I would be very hesitant to hire anyone under 22, not because of ageism (I can’t legally ask that), but because they’re unlikely to have the experience needed to do the job.
Here’s my opinion:
Then upgrade anything that’s <1G on your LAN, and leave the rest as-is until you actually need it. Chances are you won’t, and it’s not worth spending the money. Prices for 2.5G and 10G (and higher) will eventually come down, so holding off until you actually need the speed will probably save you money in the long run.
In terms of what it takes, I think others gave good insight. Here’s my basic summary:
It’s going to be expensive to support anything over 2.5G across an entire network. Honestly, 1G is probably fine, and you can upgrade incrementally as you decide to improve speeds between specific endpoints (the big ones are anything that handles high-bitrate video).
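
Before spending money, it’s worth measuring what you actually get between the endpoints you care about, e.g. with iperf3 (the address here is just an example):

```
# on one machine (the "server" end):
iperf3 -s

# on the other, pointed at the first:
iperf3 -c 192.168.1.20
```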
If they don’t say it’s required, assume it’s not and ask them for details to run your own. IMO, you’ll be happier if you can control exactly what you’re running.
I’m guessing it’s closer to 5 people than 500. The Matrix development ecosystem can’t be that large.
Why use OpenCloud instead of ownCloud Infinite Scale, which it was forked from? What’s the value proposition?
But it doesn’t; they have instructions for bare-metal installs as well. Or you could use Podman if you want.
You can enable the POSIX driver on OCIS and get a more traditional filesystem layout.
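
From memory, it’s a couple of environment variables on the server; double-check the exact names against the storage docs for your version, since the driver is still marked experimental:

```
# enable the posix storage driver (variable names from memory, verify!)
STORAGE_USERS_DRIVER=posix
STORAGE_USERS_POSIX_ROOT=/srv/ocis/files   # example path; files land here as a normal tree
```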
The server is Apache 2.0 and the frontend is AGPL v3, which seems to match ownCloud OCIS, which they appear to have forked from.
Makes sense.
I’m more interested in cutting off-site backup costs, so my NAS has a RAID mirror to reduce the chance of total failure, and the off-site backup only stores important data. I don’t even back up the bulk of it (ripped movies and whatnot), just the important stuff.
Restore from backup looks like this for my NAS:
Personal devices are similar, but installing packages is manual (perhaps I’ll back up my explicitly installed package list or something to speed it up a little). Setup takes longer than your method, but I think it’s worth the reduced storage costs, since I’ve never actually needed to do it and a few hours of downtime is totally fine for me.
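
Roughly, the flow looks like this (tools and paths here are just examples, e.g. restic for the offsite part):

```
# 1. reinstall the OS from the installer
# 2. pull configs back down from version control (example remote)
git clone ssh://user@vps.example.com/~/nas-config.git /srv/config
# 3. restore only the important data from offsite
restic -r sftp:offsite.example.com:/backups restore latest --target /srv/data
```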
Your options are only as limited as your imagination and the complexity of your requirements.
If you’re only using it on your network, just use HTTP with mDNS (or set static DNS entries on your router or something, but you said you don’t want that) so you don’t have to remember IP addresses. If you want TLS, you can borrow someone else’s domain with a service like FreeDNS.afraid.org (5 free subdomains). Or, if you control the devices completely, you can make a root CA, add it to each device’s trusted CA list, and then sign your own certs and eliminate MITM attacks.
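
The root CA route sounds scarier than it is; it’s a few openssl commands (the names and paths here are examples):

```
# 1. make the root CA; install rootCA.crt on each device you control
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout rootCA.key -out rootCA.crt -subj "/CN=My Home CA"

# 2. make a key + CSR for the service
openssl req -newkey rsa:2048 -nodes -keyout nas.key -out nas.csr \
  -subj "/CN=nas.home.lan"

# 3. sign it; browsers require the SAN, not just the CN
openssl x509 -req -in nas.csr -CA rootCA.crt -CAkey rootCA.key \
  -CAcreateserial -days 825 -sha256 \
  -extfile <(printf "subjectAltName=DNS:nas.home.lan") -out nas.crt
```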
You have options, and most are overkill. The simplest, secure solution is HTTP on your local network or over a VPN you trust (if you have a publicly accessible IP, just host your own WireGuard server on/via your router).
Better yet, track your configs in version control so you can easily roll them back and back them up, all at the same time.
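
Something this simple goes a long way (the paths and remote are made up):

```
cd /srv/config                 # wherever your configs live
git init && git add . && git commit -m "baseline"

# bad change? roll it back:
git checkout -- docker-compose.yml

# backup is just a push to somewhere off-box:
git remote add backup ssh://user@nas.example.com/~/config.git
git push -u backup HEAD
```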
What do you mean by “separately be able to clear completed tasks”?
I just mean keep the list of completed tasks until I manually push clear, just like Google Keep does (cross them out), and only clear the completed tasks when I push a button.
Basically, I sometimes mark tasks done on accident, and sometimes I’ll carry the extra tasks on to the next trip.
Basically it’s the same thing as text notes, just with more formatting options.
It has a lot more formatting options:
You could get something pretty useful by just making a collaborative Markdown editor, but then it’s not really a Docs replacement, but more of an Etherpad replacement.
That’s fine, I guess I’m more concerned about scope creep ultimately killing the project.
there must always be a protocol behind it
Sure. I guess my point is that Matrix is targeting text, audio, and video chat with hundreds if not thousands of simultaneous users in one room, all with E2EE enabled.
A Google Keep replacement doesn’t even need to be real-time collaborative, and it certainly doesn’t need to support hundreds of simultaneous users on a given document. It’s like using a chainsaw to trim a bush: way overkill, and there’s a decent chance that changes to the protocol break stuff for you, since you don’t need most of the features.
The backend for this just needs to notify other clients of a change; real time isn’t necessary or even particularly helpful.
And you’d still need an application server to handle the storage and retrieval of the data, no? So all Matrix is buying you is synchronization, which is just a simple pub/sub.
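
To illustrate how little is needed, something like ntfy (just one example of a dead-simple pub/sub) already covers the sync part; the topic name here is made up:

```
# client A publishes "the list changed" after writing to the app server
curl -d "todo-updated" https://ntfy.sh/my-todo-sync-topic

# client B subscribes and re-fetches from the app server on each event
curl -s https://ntfy.sh/my-todo-sync-topic/json
```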
What’s the difference between chat and data?
You don’t really need a list of changes for a shared TODO app. The data is going to be small and going back in time isn’t that useful.
Maybe it makes sense for something with revision history, like a DIY git. But TODO lists are ephemeral, and I really don’t care about them after I’m done with my shopping trip.
the user X is currently typing
Seems like overkill to me.
Maybe it makes sense for something more fancy like an Etherpad or Confluence replacement, but not for a shopping list.
Build it however you like and prove me wrong, I’ll check it out if it solves my problem.
Why?
I have a similar setup, but to add to the problem, I’m also behind CGNAT. Here’s my setup:
To access my LAN from outside, I have a WireGuard tunnel to my VPS.
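
The home side of that tunnel looks roughly like this (keys and addresses are placeholders); PersistentKeepalive is what keeps the mapping open through CGNAT:

```
# /etc/wireguard/wg0.conf on the machine at home
[Interface]
PrivateKey = <home-private-key>
Address = 10.8.0.2/24

[Peer]
# the VPS, which has the public IP
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.8.0.0/24
PersistentKeepalive = 25   # keep the NAT/CGNAT mapping alive
```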
The address my public DNS resolves to is completely unrelated to any address my router understands. So to keep traffic to my locally hosted services from leaving my LAN, I need DNS to resolve to local addresses. I configured static DNS entries on my router pointing at the local addresses, and DHCP hands out my router as the primary DNS server with something else as a backup.
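
In dnsmasq-style syntax (what a lot of routers and Pi-hole use under the hood), the static entries are just this (names and addresses are examples):

```
# answer with LAN addresses for locally hosted names
address=/nas.example.com/192.168.1.20
address=/media.example.com/192.168.1.21
```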
This works really well, and TLS works as expected both on my LAN and from outside my LAN. The issue OP is seeing is probably with a non-configured device somewhere that’s not querying the local DNS server.