All the public Piped instances are getting blocked by YouTube, but do small self-hosted instances that are only used by a handful of users, or just yourself, still work? I'm thinking of just self-hosting it.

On a side note, if I do it, I'd also like to install the new EFY redesign, or is that branch too far behind?

Edit: As you can see in the replies, private instances still work. I also found the instructions for running the new EFY redesign here.

  • Lucy :3@feddit.org · 4 months ago

    I have dozens of services, and most of them start their own HTTP server, listening on localhost on a regular TCP port. As most of them are web services, I run out of standard ports pretty fast - 80, 8000, 8080, and then 8069, 8070 etc. Keeping track is a pain, and Docker just makes it worse. Also, all non-web services have standard ports - 25 and 465 for smtp/smtps - which nmap identifies. In my current state, an attacker could just open a random port on my server and I wouldn't notice.

    Unix sockets are basically just special files that HTTP traffic is written to and read from. So e.g. gitlab-puma creates the file /run/gitlab/gitlab.socket and piped-proxy creates /run/piped/proxy.socket, and my reverse proxy (nginx) talks to the service through that socket, just as it would through a regular TCP socket on localhost and a port. Except unix sockets are easily identifiable (they are named and placed in directories according to their service) and can be access-controlled much better: instead of anything on the whole network being able to reach the service (assuming no firewall on the host, which usually just sits behind a consumer-grade router), only members of the http group (nginx) or the service's own user can read from and write to the socket. Assuming nginx is safe and root, http and the service's user are not compromised, not even an attacker with access to the server can read any traffic: it's encrypted (https) up to nginx, and the socket file isn't readable by other users. It's also a bit more performant.

    The catch is: very few programs support unix sockets, and many of the ones that do implement them incorrectly. Usually I create a dedicated user for a service (or a sysusers.conf file does), the service runs under that user in systemd, and that user therefore owns the socket file. The http user is then added to the service user's group, or the socket file's group is set to http. With 770 (or 660) permissions - read and write for the owner and for everyone in the group, including http - everything would be fine. However, sockets are usually created with 755, so they're only actually writeable by the owner and not by the group, i.e. not by http, which makes communication impossible. And since pre-creating the file with the correct ownership and permissions just makes the service believe the socket is already in use, I usually have to patch the actual program itself. Maybe I can do something with systemd's ExecStartPost etc., though.
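    For concreteness, here's a rough sketch of what that setup could look like on both sides. The file paths, the piped-proxy service name, the http group and the placeholder domain are assumptions pulled from the description above, not copied from any real config:

    ```nginx
    # /etc/nginx/conf.d/piped-proxy.conf (hypothetical path)
    upstream piped_proxy {
        # nginx talks to the service through the socket file instead of localhost:PORT
        server unix:/run/piped/proxy.socket;
    }

    server {
        listen 443 ssl;
        server_name piped.example.org;                    # placeholder domain
        ssl_certificate     /etc/nginx/certs/piped.crt;   # placeholder cert
        ssl_certificate_key /etc/nginx/certs/piped.key;   # placeholder key

        location / {
            proxy_pass http://piped_proxy;
            proxy_set_header Host $host;
        }
    }
    ```

    And a sketch of the ExecStartPost idea, fixing the socket's group and mode after the service has created it. With Type=simple this can race against socket creation, hence the short wait loop; the leading "+" makes systemd run the command as root so chgrp is permitted:

    ```ini
    # /etc/systemd/system/piped-proxy.service (hypothetical unit, sketch only)
    [Service]
    User=piped
    Group=piped
    # RuntimeDirectory creates /run/piped owned by the service user at start
    RuntimeDirectory=piped
    ExecStart=/usr/local/bin/piped-proxy
    # Wait up to ~5 s for the socket to appear, then hand it to the http group.
    ExecStartPost=+/bin/sh -c 'for i in $(seq 1 5); do [ -S /run/piped/proxy.socket ] && break; sleep 1; done; chgrp http /run/piped/proxy.socket && chmod 660 /run/piped/proxy.socket'
    ```

    With something like that in place nothing listens on a localhost port at all, and only root, the service user and the http group can touch the socket.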

    And the library piped-backend uses does not support unix sockets at all - so I will need to extend the incredibly complicated library itself to get what I want. Damn.
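    For reference, the JDK itself has had unix socket support since Java 16 (java.net.UnixDomainSocketAddress), so the missing piece is really the HTTP server library exposing it. A self-contained sketch - not Piped code, and /tmp/demo.socket is just a throwaway path - of binding a listener to a socket file:

    ```java
    import java.net.StandardProtocolFamily;
    import java.net.UnixDomainSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class UnixSocketSketch {
        public static void main(String[] args) throws Exception {
            Path socketPath = Path.of("/tmp/demo.socket"); // throwaway path for the demo
            Files.deleteIfExists(socketPath);              // bind() fails if the file already exists

            try (ServerSocketChannel server = ServerSocketChannel.open(StandardProtocolFamily.UNIX)) {
                server.bind(UnixDomainSocketAddress.of(socketPath));
                // Answer one connection with a fixed HTTP response, just to show the
                // socket behaves like any stream socket a reverse proxy can talk to.
                try (SocketChannel client = server.accept()) {
                    String body = "hello over a unix socket\n";
                    String response = "HTTP/1.1 200 OK\r\n"
                            + "Content-Length: " + body.length() + "\r\n"
                            + "Connection: close\r\n\r\n" + body;
                    client.write(ByteBuffer.wrap(response.getBytes(StandardCharsets.UTF_8)));
                }
            } finally {
                Files.deleteIfExists(socketPath);
            }
        }
    }
    ```

    You can poke it with curl --unix-socket /tmp/demo.socket http://localhost/ - the hard part is threading a bind address like this through a whole server framework, not the socket itself.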

    • LainTrain@lemmy.dbzer0.com · 4 months ago (edited)

      So basically you're using Unix sockets between nginx and the services on the box, instead of LAN-reachable ports, for finer-grained access control and because you're running out of ports. That's really cool! I'll have to read into this myself.