• casual_turtle_stew_enjoyer@sh.itjust.works · 8 months ago

    If I were to fully elaborate, I’d be typing for hours, so I’ll sum up:

    • pip - default behavior is to install to the system-wide site-packages. Multiple things can fuck up your ENV so that the python/pip binaries point at the system install while your terminal still shows you as in a venv, and in that state pip will upgrade/uninstall system packages without notice or consent unless you specify --require-virtualenv. Also, why TF would package metadata files need to be executable? Bad practice, -1/10 (see the first sketch after this list)
    • nix - they acknowledged years ago that they should probably have some kind of package signing and perhaps an SBOM or similar mechanism, but then did nothing to implement either and just said “oh well, guess we’re vulnerable to supply chain attacks, best not to think about it” (second sketch below)
    • brew - it installs packages in parallel to your system package manager, without containers. My chief complaint here is that brew is a secondary package manager that people might treat as “set and forget” for some packages, rarely updating them. So what happens when a standard library used by a brew package has a known vuln? A naive Linux user might update their system packages but totally forget to update brew. And when you do update brew, you can easily blow past max_open_file_descriptors because the whole thing is a kitchen sink (third sketch below)
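
    On the pip point, here’s a minimal sanity check, in plain Python rather than anything pip ships, for the “prompt says venv but the binaries disagree” failure mode. It relies on the standard guarantee that a venv makes sys.prefix differ from sys.base_prefix:

        import sys

        def in_real_venv() -> bool:
            # In a venv, sys.prefix points at the venv while sys.base_prefix
            # points at the base interpreter; outside a venv they are equal.
            return sys.prefix != sys.base_prefix

        if in_real_venv():
            print(f"OK: interpreter lives in the venv at {sys.prefix}")
        else:
            print("WARNING: this is the system interpreter; pip would touch "
                  "system-wide site-packages")

    Running pip config set global.require-virtualenv true makes the --require-virtualenv guard permanent instead of per-invocation.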
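
    On the nix point, one nuance: nix does pin content hashes for fetched sources, which proves you got the exact bytes you pinned but says nothing about who produced them; that’s the gap signing and SBOMs would close. A rough sketch of what hash pinning alone buys you, with a hypothetical placeholder digest:

        import hashlib
        import sys

        # Hypothetical placeholder, standing in for the sha256 values that
        # nix expressions pin for fetched sources.
        EXPECTED_SHA256 = "0" * 64

        def sha256_of(path: str) -> str:
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 16), b""):
                    h.update(chunk)
            return h.hexdigest()

        if __name__ == "__main__":
            if len(sys.argv) != 2:
                sys.exit("usage: verify.py FILE")
            if sha256_of(sys.argv[1]) != EXPECTED_SHA256:
                sys.exit(f"{sys.argv[1]}: digest mismatch, refusing to install")
            # A match only proves integrity of the pinned bytes, not who
            # built them; authorship would take a verifiable signature.
            print(f"{sys.argv[1]}: digest OK")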
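
    And on the file-descriptor complaint: the ceiling a big parallel update slams into is RLIMIT_NOFILE. A Unix-only sketch of checking and raising the soft limit before doing anything fd-hungry; the 4096 figure is an arbitrary illustration, not anything brew actually uses:

        import resource

        def ensure_fd_headroom(wanted: int = 4096) -> None:
            # The soft limit is what a process actually hits first; it can be
            # raised up to the hard limit without extra privileges.
            soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
            if soft >= wanted:
                return
            new_soft = wanted if hard == resource.RLIM_INFINITY else min(wanted, hard)
            resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))

        ensure_fd_headroom()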

    From there, it’s all extremely nit-picky and paranoia-fueled: basically, none of the package managers I mentioned are conducive, in my eyes at least, to a secure and intuitive compute environment.

    Unfortunately, there’s not much I can do about it except bang pots and pans and throw maintainers under the bus when an issue that has been present for years rears its ugly head. They are the only ones who can change this, and pressure is the only thing that might motivate them to do so.