- 23 Posts
- 20 Comments
And I can’t find clients on F-Droid. Any variants recommended that don’t come from the Play Store?
Another key feature will be Keepass data import.
That is another problem I face when I have the app open on desktop and phone at the same time. It’s a nightmare.
trilobite@lemmy.mlOPto
Selfhosted@lemmy.world•Linkwarden downloaded the whole flipping Internet ...English
1·3 months agoThis is a good comment! I just discovered after your comment that Floccus has a setting to link up with Linkwarden, so that together they achieve most of my desired outcomes. It just becomes more involved to manage, as you now end up with two components to look after rather than one ;-)
trilobite@lemmy.mlOPto
Selfhosted@lemmy.world•Linkwarden downloaded the whole flipping Internet ...English
5·4 months agoMate, it was a sarcastic statement 😉
trilobite@lemmy.mlOPto
Selfhosted@lemmy.world•Linkwarden downloaded the whole flipping Internet ...English
2·4 months agoWell no. Initially I had the storage set on the VM where it’s running. I wasn’t expecting it to download all that data.
trilobite@lemmy.mlOPto
Selfhosted@lemmy.world•Linkwarden downloaded the whole flipping Internet ...English
2·4 months agoI was using Floccus, but what is the point of saving bookmarks twice, once in Linkwarden and once in the browser?
trilobite@lemmy.mlOPto
Selfhosted@lemmy.world•Getting old and would like a better way to track health the self hosted wayEnglish
6·4 months agoAbsolutely, none of that is going past my router.
trilobite@lemmy.mlOPto
Self Hosted - Self-hosting your services.@lemmy.ml•Moving docker image data between VMs
0·4 months agoInterestingly, I did something similar with Linkwarden, where I installed the datasets in /home/user/linkwarden/data. The damn thing caused my VM to run out of space because it started downloading pages for the 4000 bookmarks I had. It went into crisis mode so I stopped it. I then created a dataset on my TrueNAS Scale machine and NFS-exported it to the VM on the same server. I simply cp -R’d everything to the new NFS mountpoint, edited the yml file with the new paths and voila! It seems to be working. I know that some docker containers don’t like working off an NFS share, so we’ll see.

I wonder how well this will work when the VM is on a different machine, as there is then a network cable, a switch, etc. in between. If for any reason the NAS goes down, the docker containers on the Proxmox VM will be crying as they’ll lose the link to their volumes? Can anything be done about this? I guess it can never be as resilient as having VM and NAS on the same machine.
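One option I’m considering, while the data has to live on the NAS anyway: let Docker mount the NFS export itself as a named volume, instead of going through a host-side mountpoint. A minimal sketch, where the NAS address, export path, options and the Linkwarden mount path are all assumptions, not my real setup:

```yaml
# Hypothetical compose fragment: a named volume backed by NFS via the
# built-in "local" driver. "soft,timeo=50" makes I/O return errors instead
# of hanging forever if the NAS drops off the network; use "hard" instead
# if you would rather the container block and retry until it comes back.
volumes:
  linkwarden_data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.50,rw,soft,timeo=50"
      device: ":/mnt/tank/linkwarden"

services:
  linkwarden:
    image: ghcr.io/linkwarden/linkwarden:latest
    volumes:
      - linkwarden_data:/data/data
```

If the NAS disappears, a soft mount at least lets the container fail fast and restart cleanly rather than hang on stuck I/O.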
trilobite@lemmy.mlOPto
Self Hosted - Self-hosting your services.@lemmy.ml•Moving docker image data between VMs
0·4 months agoThe first rule of containers is that you do not store any data in containers.
Do you mean they should be bind mounts? From here, a bind mount should look like this:
```yaml
version: '3.8'

services:
  my_container:
    image: my_image:latest
    volumes:
      - /path/on/host:/path/in/container
```
So referring to my Firefly compose above, I should simply be able to copy over /var/www/html/storage/upload for the main app data, and the database stored in /var/lib/mysql can just be copied over? But then why does my local folder not have any storage/upload folders?

user@vm101:/var/www/html$ ls
index.html
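From what I can tell so far, that empty folder is expected: `firefly_iii_upload:` (no leading slash before the colon) is a *named volume*, so Docker keeps the data under its own directory at /var/lib/docker/volumes/, not at /var/www/html on the host. The two syntaxes side by side, with the bind-mount path just an example of my own:

```yaml
services:
  app:
    image: fireflyiii/core:latest
    volumes:
      # Named volume: no slash before the colon. The data lives under
      # /var/lib/docker/volumes/<project>_firefly_iii_upload/_data
      - firefly_iii_upload:/var/www/html/storage/upload
      # Bind mount variant: starts with / or ./ and puts the data at a
      # host path you choose.
      # - ./firefly/upload:/var/www/html/storage/upload

volumes:
  firefly_iii_upload:
```

So to copy the data, it’s the volume directory under /var/lib/docker/volumes/ that needs moving, not the container path echoed on the host.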
trilobite@lemmy.mlOPto
Self Hosted - Self-hosting your services.@lemmy.ml•Moving docker image data between VMs
0·4 months agoHere is my docker compose file. I think I used the standard file that the developer ships, simply because I was keen to get Firefly going without fully understanding the complexity of docker storage in volumes.
```yaml
# The Firefly III Data Importer will ask you for the Firefly III URL and a "Client ID".
# You can generate the Client ID at http://localhost/profile (after registering)
# The Firefly III URL is: http://app:8080/
#
# Other URL's will give 500 | Server Error
#
services:
  app:
    image: fireflyiii/core:latest
    hostname: app
    container_name: firefly_iii_core
    networks:
      - firefly_iii
    restart: always
    volumes:
      - firefly_iii_upload:/var/www/html/storage/upload
    env_file: .env
    ports:
      - '84:8080'
    depends_on:
      - db
  db:
    image: mariadb:lts
    hostname: db
    container_name: firefly_iii_db
    networks:
      - firefly_iii
    restart: always
    env_file: .db.env
    volumes:
      - firefly_iii_db:/var/lib/mysql
  importer:
    image: fireflyiii/data-importer:latest
    hostname: importer
    restart: always
    container_name: firefly_iii_importer
    networks:
      - firefly_iii
    ports:
      - '81:8080'
    depends_on:
      - app
    env_file: .importer.env
  cron:
    #
    # To make this work, set STATIC_CRON_TOKEN in your .env file or as an environment variable and replace REPLACEME below
    # The STATIC_CRON_TOKEN must be *exactly* 32 characters long
    #
    image: alpine
    container_name: firefly_iii_cron
    restart: always
    command: sh -c "echo \"0 3 * * * wget -qO- http://app:8080/api/v1/cron/XTrhfJh9crQGfGst0OxoU7BCRD9VepYb;echo\" | crontab - && crond -f -L /dev/stdout"
    networks:
      - firefly_iii

volumes:
  firefly_iii_upload:
  firefly_iii_db:

networks:
  firefly_iii:
    driver: bridge
```
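For actually moving the contents of those two named volumes between VMs, the trick I keep seeing recommended is a tar pipe run from a throwaway container, something like `docker run --rm -v firefly_iii_db:/from:ro -v /mnt/nfs/db:/to alpine sh -c 'cd /from && tar cf - . | tar xf - -C /to'` (the destination path there is made up). The tar pipe itself, shown on stand-in directories, matters because it preserves permissions and ownership, which a plain `cp -R` as a normal user may not:

```shell
# Stand-in directories for the /from and /to volume mounts.
mkdir -p /tmp/vol-from/mysql /tmp/vol-to
echo "db page" > /tmp/vol-from/mysql/ibdata.demo

# Pack everything under /tmp/vol-from and unpack it into /tmp/vol-to,
# preserving permissions, timestamps and (when run as root) ownership.
(cd /tmp/vol-from && tar cf - .) | (cd /tmp/vol-to && tar xf -)

ls /tmp/vol-to/mysql
```

For the mariadb volume in particular, stop the db container first so the copy is consistent.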
trilobite@lemmy.mlOPto
Selfhosted@lemmy.world•self hosted system for managing donations at museumEnglish
1·7 months agoThey are mostly cash. On average 5-10 a day over a 5-hour day.
trilobite@lemmy.mlOPto
Selfhosted@lemmy.world•My first seccam, now the Frigate mystery in LXCEnglish
1·8 months agoAh right. Docker seems to have gained more ground than LXC if this is the first time I’ve come across it. I hadn’t realised they were similar, especially after I discovered that people are running docker in LXC …
trilobite@lemmy.mlOPto
Selfhosted@lemmy.world•My first seccam, now the Frigate mystery in LXCEnglish
2·9 months agoOK, I should have been clearer. With “community LXC repository on github” I actually meant that I used the LXC scripts. It did go through a few questions at the start but nothing relating to storage and camera setup.
trilobite@lemmy.mlOPto
Self Hosted - Self-hosting your services.@lemmy.ml•[QUESTION] Running Frigate on VM that in turn runs on Proxmox
0·10 months agoLooks like I have two options for Proxmox + Frigate:
a) a full VM via QEMU that then runs Frigate as an app container (the Frigate website doesn’t recommend this approach, from what I understand)
b) a virtual environment (VE) through the “Proxmox Container Toolkit”, where Frigate runs as a system container (i.e. a docker container directly in the Proxmox environment), which eliminates the VM overhead. See here: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pct
Looks like someone has got it up and running in the PCT environment https://www.homeautomationguy.io/blog/running-frigate-on-proxmox
Also, I need to get my hands on a Micro desktop with a PCIe slot so that I can stick the Coral unit in it. Any thoughts for cheap solutions on eBay?
trilobite@lemmy.mlto
Selfhosted@lemmy.world•A collection of 150+ self-hosted alternatives to popular softwareEnglish
1·1 year agoI think vTiger community edition is still open source?
Really helpful, thanks.
trilobite@lemmy.mlOPto
Selfhosted@lemmy.world•ZFS snapshots of VM Truenas datasets - am I safe?English
3·1 year agomake me shake … brrr
I’m going to try and see if I can get a VM running on the second TrueNAS server using the replicated dataset. I only use the second machine to duplicate datasets in case the first machine fails and I have to rebuild it.
I’ve been asking myself the same question for a while. The container inside a VM is my setup too. It feels like the container-in-the-VM-in-the-OS is a bit of an onion approach, which has pros and cons. If you are on low-powered hardware, I suspect having too many onion layers just eats up the little resources you have. On the other hand, as Scott@lem.free.as suggests, it’s easier to run a system, update it and generally maintain it. It would be good to have other opinions on this. Note that not all those that have a home lab have powerful hardware. I’m still using two T110s (32GB ECC RAM) that are now quite dated but are sufficient for my uses. They have TrueNAS Scale installed and one VM running 6 containers. It’s not fast, but it’s reliable.
trilobite@lemmy.mlto
Selfhosted@lemmy.world•Looking for Self-hosted Bookmark ManagerEnglish
0·2 years agoI’m also looking into this a bit as I’m ditching Nextcloud and need a more modular approach to managing the three things I care about: calendars, files and bookmarks. I’ve sorted calendars with Radicale (superb) and files with Syncthing, but am now looking at bookmarks. This (https://github.com/awesome-selfhosted/awesome-selfhosted?tab=readme-ov-file#bookmarks-and-link-sharing) has several solutions proposed. linkding and Linkwarden seem to be good and reasonably active on GitHub. Anyone compared these?
I’m picking up on this because I’m getting a bit confused. I’ve run this through docker compose using the yaml below. I’ve done it as a normal user, “Fred” (added to the docker group), rather than root (using sudo makes no difference as I get the same outcome). I normally have a “docker” folder in my /home/fred/ folder, so it’s /home/fred/docker/vaultwarden in this instance (i.e. my data folder is in here).
I get the same issue highlighted here, which is all about the SSL_ERROR_RX_RECORD_TOO_LONG when trying to connect via https, whereas when I try to connect via http, I get a white page with the Vaultwarden logo in the top left corner and a spinning wheel in the center. I’ve got no proxy enabled, and I’m still not clear why I need one if I’m only accessing this via LAN. Is this something along the lines of a “you must use this through a proxy or it won’t work” thing? Although that’s not what I understood from the guidance. I’m clearly missing something, although not sure what exactly it is …
```yaml
services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: always
    environment:
      # DOMAIN: "https://vw.home.home/"
      SIGNUPS_ALLOWED: "true"
    volumes:
      - ./vw-data/:/data/
    ports:
      - 11001:80
```
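From what I’ve pieced together since, SSL_ERROR_RX_RECORD_TOO_LONG fits the browser speaking TLS to port 11001 while the container only serves plain HTTP there, so https:// can’t work without something terminating TLS in front of it, even on a LAN. A sketch of what that might look like with Caddy in front (untested; the hostname and paths are guesses, not a working config):

```yaml
services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: always
    volumes:
      - ./vw-data/:/data/
    # No published ports: only the proxy talks to it on the compose network.

  caddy:
    image: caddy:latest
    restart: always
    ports:
      - 443:443
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy-data:/data

# ./Caddyfile (Caddy issues its own self-signed cert for LAN-only use):
#   vw.home.home {
#     tls internal
#     reverse_proxy vaultwarden:80
#   }
```

The browser will warn about the self-signed certificate until its root is trusted, but the Bitwarden clients should then be happy talking https.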