

Wikipedia also offers some (limited) information: https://en.wikipedia.org/wiki/.zip_(top-level_domain)
A software developer and Linux nerd, living in Germany. I’m usually a chill dude but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt, I usually try to be nice and give good advice, though.
I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things as well.
I don’t think you need to worry about that too much. It’s a very uncommon character trait among constructive people, like the ones who manage to run a successful instance. Most of them are nice. And the very few who aren’t, or who are very agitated/argumentative, will inevitably run into issues with other people as well… So there isn’t much to lose.
I did some wardriving a long time ago but never used those internet connections. And I shared my connection before and had a Freifunk router. With the neighbours not so much. I’m mostly nice to them and ask before borrowing their stuff.
I think that’s a size where it’s a bit more than a good autocomplete. It could be part of a chain for retrieval-augmented generation (RAG). Maybe some specific tasks. And there are small machine learning models that can do translation or sentiment analysis, though I don’t think those are your regular LLM chatbots… And well, you can ask basic questions and write dialogue. Something like “What is an alpaca?” will work. But they don’t have much knowledge under 8B parameters, and they regularly struggle to apply their knowledge to a given task at smaller sizes. At least that’s my experience. They’ve become way better at smaller sizes during the last year or so. But they’re very limited.
I’m not sure what you intend to do. If you have some specific thing you’d like an LLM to do, you need to pick the correct one. If you don’t have any use-case… just run an arbitrary one and tinker around?
Thanks! I’ve updated the link. I always just use Batocera or something like that, which has Emulationstation and Kodi set up for me. So I don’t pay a lot of attention to the included projects and their development state…
I didn’t include this, since OP wasn’t mentioning retro-gaming. But Batocera, Recalbox, Lakka and RetroPie are quite nice. I picked one which includes both Kodi and Emulationstation, and I can switch between the interfaces with the game controller. I get all the TV and streaming stuff in Kodi, and Emulationstation launches the games. And I believe it can do Flatpaks and other applications as well.
https://plasma-bigscreen.org/ from KDE? I’m not sure if they’ve replaced that since. Wikipedia says it’s unmaintained. Depending on your use-case, you might want to have a look at Emulationstation, Steam Big Picture and Kodi Plugins, as well.
I think I’m fine. I’ll just search for some words in the title and that usually returns the correct post. And as long as it’s the Fediverse and not a closed forum with a login, or Discord, I can use Google, since it’s on the open internet. At least for Lemmy. Other than that it’s really hard. I don’t think any search engine can find me the article I skimmed on Friday evening where I just vaguely remember that it was about some YouTuber I know, and I have no other information. I sometimes want to find stuff and it’s impossible, with any search engine or method. Sometimes my browser history helps me with that. Or homing in on a timeframe and a rough place and then scrolling through things. But at least for me it tends to be one of two extremes: either the rudimentary tools are fine, or it’s really hard and a “better” search wouldn’t cut it either.
I think many people use it and it works. But sorry, no, I don’t have any first-hand experience. I’ve tested it for a bit and it looked fine. It has a lot of features and it should be as efficient as any other ggml/llama.cpp-based inference solution, at least for text. I myself use KoboldCpp for the few things I do with AI, and my computer lacks a GPU, so I don’t really do a lot of images with software like this. It’ll likely take you less time than the 15 minutes it takes me to generate an image on my unsuited machine.
Maybe LocalAI? It doesn’t do python code execution, but pretty much all of the rest.
I’d just set up the reverse proxy on the VPS and make it forward everything via IPv6. But you could also use a tunnel/VPN, everything from Tailscale to Wireguard or even an SSH tunnel would work. And there are dedicated services like Cloudflare, nohost, neutrinet, pagekite…
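For the first option, a minimal nginx sketch for the VPS side could look like this (the domain name, the backend’s IPv6 address and the port 8080 are all placeholder assumptions — substitute your own):

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name service.example.org;   # assumption: your public hostname

    location / {
        # Forward everything to the home server over IPv6
        # (2001:db8:: is the documentation prefix, used as a placeholder)
        proxy_pass http://[2001:db8::1]:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```

You’d still want TLS on top (e.g. via certbot), but the forwarding itself is just this one `proxy_pass` line.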
Try finding out if it received an IP address, if the driver is loaded, or if there are any error messages in dmesg. You might also want to give more information: which Ethernet card? Which version of Linux are you running? There seem to be some similar reports on Reddit and in some Linux forums, but I couldn’t find a solution. Maybe you just want to buy a cheap new network card.
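Concretely, those checks might look like this (the interface name `eth0` is an assumption — run `ls /sys/class/net` to find yours):

```shell
IF=eth0                                 # assumption: replace with your interface name
ls /sys/class/net/                      # list all network interfaces the kernel sees
cat /sys/class/net/"$IF"/operstate 2>/dev/null \
  || echo "interface $IF not found"     # link state: up / down / unknown
ip addr show "$IF" 2>/dev/null          # did it receive an IP address?
dmesg 2>/dev/null | grep -i "$IF" | tail -n 5 || true   # recent kernel/driver messages
```

If the interface doesn’t even show up under `/sys/class/net`, the driver probably isn’t loaded; if it shows up but `operstate` stays `down`, look at the dmesg output and the cable.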
You could run multiple mail servers. Or download from Sharehosters in parallel. Or download more Youtube videos before the rate limit stops you. Or use virtualization or containers to launch some more virtualized servers.
Sure, I have an old PC with an energy efficient mainboard and a PicoPSU and I wouldn’t want anything else. I believe it does somewhere around 20W-25W though. And I have lots of RAM, a decent (old) CPU and enough SATA ports… Well, I would go for a newer PC, they get more energy efficient all the time… But it’s a lot of effort to pick the components unless some PC magazine writes something or someone has a blog with recommendations.
You’ll want to look up the QNAP as well. I’ve seen reports with quite a bit of variation in power consumption. Depending on the exact model, it could be anywhere from 25W to 55W… So it could be less, or it could be the same. And have a look at the amount of RAM if you want to run services on it.
And I guess if you’re in front of the computer, you could just press the reset button or unplug it at that point (after it successfully synchronized the disks). No need to let it sit; there is no harm or data to be lost at that point.
I think Radicale, Baikal, SabreDAV or NextCloud are the most common choices. I read those names a lot.
But I believe only one of those isn’t written in PHP.
I’d really recommend digging into the “hacking”, though. Unless you learn from your specific mistakes and avoid them in the future, you might run into the exact same issue again. And I mean, it could be a security flaw in the program code of the WebDAV server. But it could just as well be a few dozen other reasons why your server wasn’t secure… (Missing updates, insecure passwords, missing fail2ban, a webserver or reverse proxy, unrelated other software… There are a lot of moving parts in a webserver and lots of things to consider.)
I can’t remember the exact details, but I believe the attackers also targeted instances? So it’s not just something that happens with certain problematic instances; anyone could have that uploaded to their media storage, and it can come from arbitrary places. I believe that adds to the problem. And it kind of requires shutting these things down for everyone. Or at least for everyone except a few excellent, hand-picked instances that cooperate closely and where the moderation tools actually work.
Yes, they’ve done an excellent job. I just wish they wouldn’t have to deal with these things.
(And I also think some of the child protection agencies should finally offer some open-source tool to scan content. Afaik there are still no image classifiers or hash tables I could use for my projects.)
Same, same. I can’t verify it and I probably don’t want to. But I’ve had people assure me that it happened.
I think they would need to find a way to address the problem first. Reportedly, these images have been a huge problem here on Lemmy. Several times now.
Fingers crossed, but we also know Lemmy might not be ready for that type of philosophy. I mean I still don’t know what exactly happened, but lemm.ee wasn’t successful in the end. And the underlying issues are still there. So the next admin team might face the same dynamics.