

Type “TRaSH-guides” into your favourite search engine. Even if you don’t want to set up a *arr, the pros and cons of each file format are discussed there.
Prowlarr’s stats suggest Knaben, then TheRARBG, are my most successful sources of Linux ISOs.
By DMing me you consent for them to be shared with whomever I wish, whenever I wish, unless you specify otherwise
I thought Libation merely broke the ToS rather than violated the law (UK). Doesn’t matter; I’m on a different vendor now.
Update went fine on a bare metal install. Customising the webUI port is a little easier now: instead of editing lighttpd.conf, I think you can do it in the UI.
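And if the UI option proves elusive, there’s a CLI route: v6 replaced lighttpd with a web server embedded in pihole-FTL, so the port now lives in Pi-hole’s own config. A sketch from memory, so verify the key name on your install before trusting it:

```bash
# Hedged sketch: set the Pi-hole v6 webUI port via FTL's config
# (key name from memory; check your /etc/pihole/pihole.toml before relying on it)
sudo pihole-FTL --config webserver.port 8080
sudo systemctl restart pihole-FTL
```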
I struggled to find some settings; I looked for ages for the API token. Found it under All Settings with Expert mode on, then scroll half a mile down to the webUI/API section.
Also, I struggled with adding CNAMEs in bulk; I thought you could do that in the old UI. You might be able to in the new one, but I just “one by one’d” them.
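The bulk trick I knew from v5 was to skip the UI entirely and drop the records into a dnsmasq config file. I haven’t confirmed whether v6 still reads these (it supposedly moved them into pihole.toml), so treat this as a v5-era sketch with made-up hostnames:

```bash
# v5-era bulk CNAMEs: one dnsmasq cname= line per record (hostnames are examples)
sudo tee /etc/dnsmasq.d/05-pihole-custom-cname.conf >/dev/null <<'EOF'
cname=jellyfin.home.lan,server.home.lan
cname=immich.home.lan,server.home.lan
cname=paperless.home.lan,server.home.lan
EOF
sudo pihole restartdns
```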
Docker update went flawlessly.
I have an LXC still to go, which is a task for another day, unless TTeck’s updater beats me to it.
+1 for running Pi-hole in an LXC, and a redundant Pi-hole in a Docker container.
They never update at the same time, or in the same way, so it’s near-as-dammit constant uptime.
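For anyone wanting the Docker half of that pair, the compose file is tiny. A minimal sketch: ports, timezone, and password are placeholders, and the password variable is the v5-era one (newer v6 images take FTLCONF_-style settings instead, so check the image docs):

```yaml
# Minimal Pi-hole compose sketch; all values are placeholders
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"         # webUI on 8080 to avoid clashing with other services
    environment:
      TZ: "Europe/London"
      WEBPASSWORD: "changeme" # v5-era variable; v6 images use FTLCONF_* equivalents
    volumes:
      - ./etc-pihole:/etc/pihole
    restart: unless-stopped
```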
What does IR redshift into over cosmic distances? But it would be just as noticeable as a star suddenly dimming to [100% - optimal capture rate], if not less so.
The other “benefit” to the sphere is blacking out a star. Other life, should it exist, is less likely to find the structure. ITT people destroying my dreams of a big shelly boi
My main storage is a mirrored pair of HDDs; versioning is handled there.
It Syncthings an “important” folder to a local backup with only one HDD.
The local backup Syncthings to my parents’ house, which has a single SSD.
My setup could be better: if I put the versioning on my local backup it’d free space on my main storage, and I could migrate to dedicated backup software, Borg maybe, instead of Syncthing. But Syncthing is what I knew and understood when I was slapdashing this together. It’s a problem for future me.
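If future me ever does the Borg migration, the core of it is only three commands. A minimal sketch, with hypothetical paths rather than my actual mounts:

```bash
# One-time: create an encrypted repo on the backup disk (paths are hypothetical)
borg init --encryption=repokey /mnt/backup/borg

# Each run: snapshot the "important" folder, then thin out old archives
borg create --stats /mnt/backup/borg::'{hostname}-{now:%Y-%m-%d}' /mnt/main/important
borg prune --keep-daily=7 --keep-weekly=4 --keep-monthly=6 /mnt/backup/borg
```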
I’ve been seriously considering an EliteDesk G4 or Dell/Lenovo equivalent as a backup machine. Mirrored drives, and enough oomph to run the services using the “important” files (Immich, Paperless, etc.) with high availability.
My big problem is remote stuff. None of my users have aftermarket routers to easily manipulate their DNS, and one has an Android modem thing which is hot garbage. So far it’s a mix of workarounds: making their Pi their DHCP server, and one user is running on Avahi.
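The Pi-as-DHCP part is less exotic than it sounds: Pi-hole ships a DHCP server you can switch on in its settings, and under the hood it’s dnsmasq. The equivalent raw dnsmasq config looks roughly like this, with example addresses:

```
# dnsmasq sketch: hand out leases and advertise the Pi itself as DNS
# (addresses are examples; dhcp-option 6 = DNS server)
dhcp-range=192.168.1.100,192.168.1.200,24h
dhcp-option=6,192.168.1.2
```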
Chrome, the people’s browser of choice, really, really hates HTTP, so I’m putting them on my garbage ######.xyz domain. I had plans to deal with HTTPS one day, just not this day. Locally I only use the domain for Vaultwarden, so it didn’t matter. But if people are going to be using it, I’ll have to get a more memorable one.
System updates have been a faff. I’m SSHing over Tailscale. When Tailscale updates it kicks me out, naturally. Which interrupts the session, naturally. Which stops the update, naturally. Also, it fucks up dpkg beyond what --configure -a can repair. I’ll learn to update in the background one day, or include Tailscale in unattended-upgrades. Honestly, I should put everything into unattended-upgrades.
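The fix I keep meaning to adopt: run the upgrade somewhere the dropped SSH session can’t kill it, e.g. a detached tmux session. A sketch, assuming passwordless sudo or a root shell:

```bash
# Run the upgrade detached, so a Tailscale restart dropping SSH
# doesn't orphan dpkg mid-upgrade
tmux new-session -d -s upgrade 'sudo apt-get update && sudo apt-get -y full-upgrade'

# Reconnect later to check on it
tmux attach -t upgrade

# And for when it has already gone wrong:
sudo dpkg --configure -a
```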
Locally it works as intended though, so that’s nice. Everything also works remotely for my fiancée and me, all as intended, which is also nice. My big project is coalescing what I’ve got into something rational. I’m on the “make it good” part of the “make it work > make it good” cycle.
In that case: Homarr is awesome, no complaints.
I probably won’t apply this retroactively; my family aren’t going to explore, and it was more to keep them on their specific homepage and stop them getting lost. New users will be locked to their specific page; I don’t expect they’ll ever go exploring to find out.
+1 for Homarr. I didn’t need to learn how to write any configs. Everything can be set up in real time, in the GUI, and is immediately testable. Homarr brought a homepage down to my skill level.
My only wish is to lock homepages behind user permissions, but it’s fine; my family and friends don’t intend to explore, just to get where they’re going.
You overestimate my competence. I do intend to leave my ISP firewall up and intact, but I could build layers behind it.
I run everything on a mini PC (Beelink EQ12), which I intend to age into a network box (router, DNS, firewall) when I outgrow it as a server. That’s a couple of years and a few more users away yet, though.
I don’t think I’m ever opening up anything to the internet. It’s scary out there.
I don’t trust my competence, and even if I did, I don’t trust my attention to detail. That’s why I outsource my security: Pi-hole + Firebog for blocklists, my ISP for the firewall, and Tailscale for tunnels. I’m not claiming any of them are the best, but they’re all better than me.
That was my conclusion as well; however, I am at work and it’s not appropriate to be reading Docker documentation. Thank you for the write-up.
I am not the person to be asking; I am no Docker expert. It is my understanding that depends_on: defines starting order. Once a service is started, it’s started. If it has an internal health check, I believe Watchtower will restart unhealthy containers.
This is the blind leading the blind, though; I would check the documentation before relying on Watchtower. We should both go read the depends_on docs, as we both use it.
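For what it’s worth, the Compose spec does cover this: bare depends_on only orders startup, but if the dependency defines a healthcheck, the dependent service can gate on it with condition: service_healthy. A minimal sketch, service names invented:

```yaml
# Sketch: "app" waits for "db" to report healthy, not just started
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: "changeme"   # placeholder
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
  app:
    image: myapp:latest               # placeholder
    depends_on:
      db:
        condition: service_healthy
```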
It’s Watchtower that I had problems with, because of what you described. Watchtower will drop your microservice, say a database, to update it, and then not restart the things that depend on it. It can be great, just not in the ham-fisted way I used it. So instead I’m going to update the stack together: everything drops, updates, and comes back up in the correct order.
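In practice that’s only a handful of commands per stack, something like this (the stack path is an example):

```bash
# Update a whole stack in one shot: pull new images, drop everything,
# bring it back in dependency order, then clear out superseded images
cd /opt/stacks/arr       # example path
docker compose pull
docker compose down
docker compose up -d
docker image prune -f
```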
Uptime Kuma can alert you when a service goes down. I’m constantly on my Homarr homepage, which tells me if it can’t ping a service; then I go investigating.
I get that it’s scary, and after my Watchtower trauma I was hesitant to go automatic too. But I’m managing 5 machines now, and growing by getting more, so I have to think about scale.
I’ve encountered that before, with Watchtower updating parts of a service and breaking the whole stack. But automating a stack update, as opposed to a service update, should mitigate all of that. I’ll include a system prune in the script.
Most of my stacks are stable, so aside from breaking changes I should be fine. If I hit a breaking change, I keep backups; I’ll rebuild and update manually. I think that’ll be a net time save overall.
I keep two Docker LXCs, one for the *arrs and one for everything else. I might make a third LXC for things that currently require manual updates; Immich is my only one at the moment.
Release: stable
Keep the updates as hands-off as possible: Docker Compose, TTeck’s LXC updater, automatic upgrades.
I come through once a week or so to update the stacks (Dockge > stack > update), and once a month or so to update the machines (I have 5 total). Total time updating is about 3 hours a month. I could drop that a lot once I get around to writing some scripts to update Docker images; then I’d just have to “apt update && apt upgrade”.
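The script I keep not writing is basically a loop over stack folders plus the apt step. A sketch, assuming one compose file per directory (the layout is hypothetical):

```bash
#!/usr/bin/env bash
# Update every compose stack, then the OS.
# Assumes one compose.yaml per directory under /opt/stacks (hypothetical layout).
set -euo pipefail

for stack in /opt/stacks/*/; do
  echo "Updating ${stack}"
  (cd "$stack" && docker compose pull && docker compose up -d)
done

docker system prune -f          # reclaim superseded images
sudo apt update && sudo apt upgrade -y
```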
Minimise attack surface and outsource security. I have nothing at all open to the internet; I use Tailscale to create tunnels. I’m trusting my security to Tailscale, but they are much, much better at it than I am.
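Part of why outsourcing to them is such an easy sell: getting a node onto the tailnet is two commands (the installer URL is Tailscale’s official one):

```bash
# Official installer, then bring this machine onto the tailnet
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

# Sanity check: list peers and their tailnet IPs
tailscale status
```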
Fuck me for not wanting to die to industrial negligence I guess.
Hardware-wise I’d go AIO. A mini and a pair of mirrored USB drives is my setup. I have an off-site backup running: another mini + USB. Finally, I have an inherited laptop as a redundant network box/local backup/Immich compute. I have 5 households on my network, and aside from Immich spiking in resources (hence the laptop), I have overhead to spare.
An N100 mini (or N150, N200, whatever) is cheap enough and powerful enough for you to jump in and decide if you want to spend more later. They’re small, quiet, reasonable value for money, an easy Wife Acceptance Factor, and can age into a bunch of other devices if you decide self-hosting isn’t for you. I’d make a retro console out of any spare mini.
This way, when spending £x00s on a server, you’ll have some idea of what you actually need/want. The N100 can then age into a firewall/network box/local backup/etc. if/when you upgrade.
All that said, an AIO storage-compute box is where I’m headed. I now know I need a dedicated graphics card for the Immich demand. I now know I want a graphics card for generative AI hobby stuff. I know how much storage I need for everyone’s photos and favourite entertainment, the permanent stuff. I know how much storage I need for stuff being churned, the temporary stuff. I now know I don’t care about high availability/clusters. I now know… Finally, the ‘Wife’ has grown used to having a server in the house: it’s a thing I have, and do, which she benefits from. So a bigger, more expensive, and probably louder box is an easier sell.