



  • I think you’re referring to FlareSolverr. If so, I’m not aware of a direct replacement.

    Main issue is it’s heavy on resources (I have an rpi4b)

    FlareSolverr does add some memory overhead, but otherwise it’s fairly lightweight. On my system FlareSolverr has been up for 8 days and is using ~300MB:

    NAME           CPU %     MEM USAGE
    flaresolverr   0.01%     310.3MiB
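
    If you want to check this yourself, a one-shot snapshot like the one above is easy to get with docker stats (assuming the container is named flaresolverr):

    docker stats --no-stream flaresolverr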
    

    Note that any CPU usage introduced by FlareSolverr is unavoidable, because that’s how CloudFlare protection works. CloudFlare creates a workload in the client browser that’s trivial if you’re making a single request, but brings your system to a crawl if you’re trying to send many requests, e.g. DDoSing or scraping. You have to execute that browser-based work somewhere to get past those CloudFlare checks.

    If hosting the FlareSolverr container on your rpi4b would put it under memory or CPU pressure, you could run the Docker container on a different system. When setting up FlareSolverr in Prowlarr you create an indexer proxy with a tag. Any indexer with that tag sends its requests through the proxy instead of sending them directly to the tracker site. When FlareSolverr is running in a local Docker container, the Host address for the proxy is localhost, e.g. http://localhost:8191.
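
    For reference, a minimal local run of the container looks something like this (image name and port are the FlareSolverr project defaults at the time of writing; adjust as needed):

    # run FlareSolverr in the background and expose its API on port 8191
    docker run -d \
      --name flaresolverr \
      -p 8191:8191 \
      -e LOG_LEVEL=info \
      --restart unless-stopped \
      ghcr.io/flaresolverr/flaresolverr:latest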

    If you run FlareSolverr’s Docker container on another system that’s accessible to your rpi4b, you could create an indexer proxy whose Host is “http://<other_system_IP>:8191”. Keep security in mind when doing this: if you’ve got a VPN connection on your rpi4b with split tunneling enabled (i.e. connections to local network resources are allowed while the tunnel is up), then this setup would allow requests to those indexers to escape the VPN tunnel.
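
    Before wiring the remote instance into Prowlarr, you can sanity-check that it’s reachable from the rpi4b by hitting FlareSolverr’s API directly, something along these lines (the target URL is just an example):

    curl -X POST "http://<other_system_IP>:8191/v1" \
      -H "Content-Type: application/json" \
      -d '{"cmd": "request.get", "url": "https://example.com", "maxTimeout": 60000}'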

    On a side note, I’d strongly recommend trying out a Docker-based setup. Aside from FlareSolverr, I ran my servarr setup without containers for years and that was fine, but moving over to Docker made the configuration a lot easier. Before Docker I had a complex set of firewall rules to allow traffic to my local network and my VPN server but drop any other traffic that wasn’t using the VPN tunnel. All of that firewall complexity has now been replaced with a gluetun container, which is much easier to manage and probably more secure. You don’t have to switch everything over to Docker in one go; you can run a hybrid setup if need be.
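
    As a rough sketch of what the gluetun approach looks like (the provider name, key and qbittorrent container are just placeholders/examples, and gluetun needs different variables depending on your VPN provider and protocol, so check its docs):

    # gluetun owns the VPN tunnel and its network namespace
    docker run -d --name gluetun \
      --cap-add NET_ADMIN \
      -e VPN_SERVICE_PROVIDER=<your_provider> \
      -e WIREGUARD_PRIVATE_KEY=<your_key> \
      qmcgaw/gluetun
    # any container started with --network container:gluetun shares that namespace,
    # so all of its traffic is forced through the tunnel
    docker run -d --name qbittorrent \
      --network container:gluetun \
      lscr.io/linuxserver/qbittorrent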

    If you really don’t want to use Docker then you could attempt to install from source on the rpi4b. Be advised that you’re absolutely going off-road if you do this, as it’s not officially supported by the FlareSolverr devs. It requires installing an ARM-based Chromium browser, then setting some environment variables so that FlareSolverr uses that browser instead of trying to download its own. The exact steps are documented in this GitHub comment. I haven’t tested them, so YMMV. Honestly, I think this is a bad idea because the full browser will almost certainly require more memory. The browser included in the FlareSolverr container is stripped down to the bare minimum required to pass the CloudFlare checks.

    If you’re just strongly opposed to Docker for whatever reason then I think your best bet would be to combine the two approaches above. Host the FlareSolverr proxy on an x86-based system so you can install from source using the officially supported steps.



  • It’s likely CentOS 7.9, which was released in Nov. 2020 and shipped with kernel version 3.10.0-1160. It’s not completely ridiculous for a one-year-old POS system to have a four-year-old OS. Design for those systems probably started a few years ago, when CentOS 7.9 was relatively recent. For an embedded system the bias would have been toward an established, mature OS, and CentOS 8.x was likely considered “too new” at the time they were speccing these systems. Remotely upgrading between major releases would not be advisable in an embedded system. The RHEL/CentOS in-place upgrade story is… not great. There was zero support for in-place upgrades until RHEL/CentOS 7, and it’s still considered “at your own risk” (source).
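
    For anyone who wants to check a RHEL/CentOS box against this, the kernel and release strings are easy to confirm (the example output is what CentOS 7.9 reports):

    uname -r                   # 3.10.0-1160.el7.x86_64 on CentOS 7.9
    cat /etc/redhat-release    # CentOS Linux release 7.9.2009 (Core)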






  • It’s always been very difficult. It is possible to complete (I did one on Helldive with randoms a couple of days ago), but you need to approach the mission in a very specific way. Have you noticed that the map is really large for a 15-minute mission? I think the extra space is there for a reason. If most of the team engages bots away from the main objective, then the drops will mostly land off the main objective. One player can play ninja (Scout armor, smoke grenades/eagle/orbital) and run around pushing the buttons while the rest of the team draws heat away from the main objective and kites the reinforcements around.

    The other technique, which works when playing solo and may be better for duos, is to kite the reinforcements away from the pad until aggro drops. Once it does, run back to the pad and push buttons as much as you can until you start to get overrun again, then repeat. Edit: to be clear, even with a 4-man team it’s likely that the button-pusher will start receiving drops at some point, in which case they’ll need to kite off the pad too.

    I still think the mission design is not good, mostly because it’s never communicated to players that you have to approach this mission completely differently from any other mission type in the game. I have to imagine it’s intentional though; why else make the map so large?








  • CountVon@sh.itjust.works to Linux@lemmy.ml · Boot into NVMe drives from USB

    I’m sure there would be a way to do this with Debian, but I have to confess I don’t know it. I have successfully done this in the past with Clover Bootloader. You have to enable an NVMe driver, but once that’s done you should see an option to boot from your NVMe device. After you’ve booted from it once, Clover should remember and boot from that device automatically going forward. I used this method for years in a home theatre PC with an old motherboard and an NVMe drive on a PCIe adapter.
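
    Enabling the NVMe driver basically amounts to copying Clover’s NVMe DXE driver into the active drivers folder on the boot USB. The folder layout has changed between Clover releases, so treat these paths as approximate:

    # with the Clover USB's EFI partition mounted at /mnt/clover
    cp /mnt/clover/EFI/CLOVER/drivers/off/NvmExpressDxe.efi \
       /mnt/clover/EFI/CLOVER/drivers/UEFI/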