• 0 Posts
  • 33 Comments
Joined 10 months ago
Cake day: February 10th, 2024

  • The detail that stands out the most about this is its screen resolution: 720x480 is both a perfect 3x integer scale of GBA’s 240x160 and a good 1x fit (with small black bars) for NTSC video.

    GBA games scale particularly poorly on the screens of most of Anbernic’s other devices. On a 640x480 screen, the closest integer scale is 2x, which fills only 50% of the screen area; alternatively, 2.667x scaling fills the width (letterboxing 11% of the height) at the cost of the blurriness that comes with non-integer scaling.
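
    To make the scaling arithmetic concrete, here’s a quick sketch (my own illustration, not taken from any emulator) of how the best integer scale fills each screen:

    GBA_W, GBA_H = 240, 160

    def best_integer_fit(screen_w: int, screen_h: int) -> tuple[int, float]:
        """Largest integer scale that fits, and the fraction of screen area it covers."""
        scale = min(screen_w // GBA_W, screen_h // GBA_H)
        covered = (GBA_W * scale) * (GBA_H * scale) / (screen_w * screen_h)
        return scale, covered

    for w, h in [(720, 480), (640, 480)]:
        scale, covered = best_integer_fit(w, h)
        print(f"{w}x{h}: {scale}x fills {covered:.0%} of the screen")
    # 720x480: 3x fills 100% of the screen
    # 640x480: 2x fills 50% of the screen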

    The last time Anbernic released a screen well-suited for GBA games was four years ago with the RG351P/RG351M. Their 480x320 screens are a great 2x fit for GBA but an awful fit for every other system. 720x480 is a lot better for NTSC content while still being an integer scale for GBA.

    Unfortunately the lack of analog sticks ruins the compatibility improvements it’d otherwise have over the RG351 thanks to the screen and newer CPU. Best to see this as a GBA that can also play SNES games and not much else. I’d rather have a system that includes the inputs it needs than one that imitates a classic system’s design.


  • Most of the “Is open source software safe?” section of this post seems to advocate for what’s conventionally called Security Through Obscurity, which is widely considered very ineffective at preventing exploitation and at best a minor hurdle.

    There are a lot of differences between Android and iOS in terms of security, attack surface, and exploitation, but attributing those differences to open vs. closed source completely misunderstands the subject. To give just two of the countless reasons: many of the worst vulnerabilities affecting Android devices are in closed-source proprietary Qualcomm firmware, and a platform being open in the sense of letting users install any application they want (like Windows, and Android to a limited extent) or closed off to prevent installation of unapproved software (iOS, PlayStation, Toyota cars, TiVo, etc.) is completely separate from whether that platform is open-source. GPLv3 has license terms that try to tie the two concepts together, but I deliberately chose examples that don’t use it at all. Also, iOS has public kernel source code.


  • I’ve been using single-disk btrfs for my rootfs on every system for almost a decade. It’s great for snapshots while still being an in-tree driver. I also like being able to use subvolumes to treat / and /home (and maybe others) like separate filesystems without them actually being separate partitions.

    I had used it for my NAS array too, with btrfs raid1 (on top of LUKS), but migrated that to ZFS a couple years ago because I wanted more usable storage space for the same money. btrfs raid5 is widely reported to be flawed and seemed stuck in a purgatory of never being fixed, so I moved to raidz1 instead.

    One thing I miss is heterogeneous arrays: with btrfs I can gradually upgrade my storage one disk at a time (without rewriting the filesystem) and it uses all of my space. For example, two 12TB drives, two 8TB drives, and one 4TB drive add up to 44TB, and raid1 cuts that in half to 22TB of effective space. ZFS doesn’t do that. Before I could migrate to ZFS I had to commit to buying a bunch of new drives (5x12TB, not counting the backup array) so that every drive would be the same size, and I had to feel confident that would be enough space to last me a long time, since growing it after the fact is a burden.
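
    To make the capacity math above concrete, here’s a rough sketch (my own simplification; the real allocator works chunk by chunk). btrfs raid1 stores two copies of every chunk on two different devices, so usable space is about half the raw total unless one drive is larger than all the others combined:

    def btrfs_raid1_usable(drive_sizes_tb: list[float]) -> float:
        total = sum(drive_sizes_tb)
        # Anything on the largest drive beyond the combined size of the
        # others can't get a second copy, so it would go unused.
        return min(total / 2, total - max(drive_sizes_tb))

    print(btrfs_raid1_usable([12, 12, 8, 8, 4]))  # 22.0 TB usable out of 44 TB raw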




  • A standard called SystemReady exists. For the systems that actually comply with it, you can have a single ARM OS installation image that you copy to a USB drive, boot through UEFI, and run with no problems on an Ampere server, an NXP device, an Nvidia Jetson system, and more.

    Unfortunately it’s a pretty new standard, introduced in 2020, and Qualcomm in particular is a major holdout that hasn’t been using it.

    Just like on x86, you still need the OS to have drivers for the particular device you’re installing on, but this standard at least lets you have a unified image, and many ARM vendors have been getting better about upstreaming open-source drivers into the Linux kernel.


  • To the contrary, I would expect the sample to skew toward people who have a heavily customized X session and strong opinions about window managers, while drastically underrepresenting average GNOME users who stick with the default Wayland session. Someone who likes their custom setup can still be waiting for a Wayland equivalent, while casual Ubuntu users have been defaulted to Wayland on new non-Nvidia installs since early 2021.




  • A ground-up overhaul of the copyright system would make things so much worse, not better, given who currently holds power over the process. In the US, for example, the MPA, RIAA, Entertainment Software Association, Association of American Publishers, and others wouldn’t want public libraries or the used market to exist at all; they would push to make every single transfer of “ownership” of any media involve a payment to the rights holder. Lawmakers are far more likely to accommodate those groups’ desires than the public good.

    The worst parts of the current copyright system are also its most recent. Both the DMCA and the extension of the US copyright term to 95 years took effect in 1998, and the early 2000s saw many other countries passing laws to bring their copyright systems closer to the US’s in various ways, such as the WIPO Copyright Treaty, which took effect in 2002, and the EU’s 2006 Copyright Directive. Just about the only positive news in US copyright law since then has been in temporary exemptions to the DMCA’s anti-circumvention rules (Section 1201), which are revisited every three years. Copyright law was far less hostile to consumers and the public before the 90s than it is now, and up until 1976 it was expected that most media someone consumed would enter the public domain within their lifetime.

    The digital era makes market relevance more ephemeral than ever, and yet the laws written for the digital era moved copyright in the opposite direction. Movie studios simultaneously judge whether a film succeeded almost exclusively on its first week of ticket sales and claim that withholding works from the public domain for 95 years is necessary. Nothing should be able to justify more than 20 years of copyright. Media formats don’t even last as long as copyright: CDs and DVDs rot, game cartridges die, servers shut down, and even books printed on today’s low-quality paper will fall apart.

    “Some of it is absurd to me, like the way something can be online but geographically restricted.”

    This is a consequence of contract terms more so than copyright. One copyright issue it does connect to, though, is that whether the rightsholder keeps a work reasonably available on the market has no impact on whether the work retains copyright protections. If copyright law did hypothetically include that limitation, providers would become far more likely to make all content available in all countries, though even then things could still vary in terms of which content is on which platform.


  • For years I’ve been using KeePassXC on desktop and Keepass2Android on mobile. Rather than syncing the kdbx file between my devices, I have each device access it over the network, via SFTP, SMB, or NFS. Regardless of protocol, I need to connect to my home VPN to access it when away from home, since I don’t directly expose any of those services to the outside world.

    I also used to keep a second copy of the website-tied passwords in Firefox Sync, but recently tried migrating that to Proton Pass because I thought the PIN feature might help. I ultimately decided to move away from that too and start using the KeePassXC-Browser extension instead. I considered Bitwarden as well but haven’t tried it yet; I was somewhat deterred by people saying its UI seems very outdated.


  • There’s only one case I’ve found where Wi-Fi use seems acceptable in IoT: ESPHome. It’s open-source firmware for microcontrollers that makes DIY IoT sensors and controls accessible over LAN without phoning home to a remote server, without trying to make anything accessible over the Internet, and without breaking in any way if the device has no route to the Internet.
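
    For a sense of what that looks like, here’s a minimal hedged sketch of an ESPHome config (device name, board, pin, and credentials are all my own placeholder choices):

    esphome:
      name: bedroom-sensor

    esp32:
      board: esp32dev

    wifi:
      ssid: "MyNetwork"
      password: "hunter2"

    # Local-network API only: a controller like Home Assistant connects
    # over the LAN, and nothing requires a route to the Internet.
    api:

    sensor:
      - platform: dht
        pin: GPIO4
        temperature:
          name: "Bedroom Temperature"
        humidity:
          name: "Bedroom Humidity"
        update_interval: 60s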

    I still wouldn’t call Wi-Fi use ideal even there; mesh can help in larger homes and Z-Wave/Zigbee radios tend to be more power efficient, though ESP32 isn’t exactly suited for a battery-powered device that’s expected to run 24/7 regardless.


  • Yes.

    My home server has dropbear-initramfs installed so that after a reboot I can access the LUKS decryption prompt over SSH. The one LUKS partition contains a btrfs filesystem with both rootfs and home as subvolumes. For all the other drives attached to that system, I use ZFS native encryption with a dataset that decrypts using a keyfile stored on that rootfs, and I keep backups of an encrypted copy of that keyfile.

    I don’t think there’s a substantial performance impact but I’ve never bothered benchmarking.



  • Stylus/handwriting-oriented note-taking. Stuff like Samsung Notes or Goodnotes (or OneNote, though it does a lot more) in the Android space, or e-ink options like reMarkable’s stock software.

    If I just want to use a keyboard for everything, I have great FOSS options like Joplin and Standard Notes, but when I want to use a pen instead, it feels like no freedom-respecting option even remotely approaches the usability of just sticking with real ink and Moleskine-like paper notebooks.

    Even someone willing to pay an upfront fee for proprietary apps will struggle to find good options that allow syncing and reading (let alone editing) their notes on other devices/platforms without resorting to a monthly subscription.


  • Something I’ve noticed that is related but tangential to your problem: in my experience with compose files, container names and volume names get assigned a shared prefix by default. I don’t use docker and instead prefer podman, but I would expect both to behave the same on this front. For example, when I have a file at nextcloud/compose.yml that looks like this:

    volumes:
      nextcloud:
      db:
    
    services:
      db:
        image: docker.io/mariadb:10.6
        ...
      app:
        image: docker.io/nextcloud
        ...
    

    I end up with volumes named nextcloud_nextcloud and nextcloud_db, and containers named nextcloud_db and nextcloud_app, as long as neither service overrides this behavior by specifying a container_name. I believe this prefix comes from the file-level name: if there is one, and from the parent directory’s name otherwise.
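
    For example (an illustrative addition, not something from the file above), a top-level name should override that prefix:

    # With this line added, the volumes above would instead be created as
    # mycloud_nextcloud and mycloud_db, and the containers as
    # mycloud_db and mycloud_app.
    name: mycloud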

    The reasons I adjust my own compose files away from the image maintainer’s recommendation include accommodating the differences between podman and docker, avoiding conflicts between exposed listen ports, choosing which host filesystem paths to mount in the container, and my own preferences. The only conflict I’ve had with other containers is the exposed port: zigbee2mqtt, nextcloud, and freshrss all suggest using port 8080, so I had to change at least two of them in order to run all three.
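
    As a sketch of the port change specifically (the host-side number is my own arbitrary pick), only the host half of the mapping needs to differ between the three; the containers themselves stay untouched:

    services:
      app:
        ports:
          - "8081:8080"   # host port 8081 -> the container's default 8080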


  • “if the featureset is not clear enough at first glance”

    My experience as someone who has barely dabbled in Matrix, tried comparing clients, and knows a lot of people who stick to Discord: a lot of Discord users heavily use custom emotes, voice chat, and screen sharing. It’s not even easy to figure out which Matrix clients support each of those features without installing everything and trying it out. There’s a client comparison on matrix.org that mentions VoIP but not stickers or video.

    For stickers alone:

    • Element is widely considered the go-to Matrix client but uses a strange integration system for predefined sticker packs instead of the MSC2545 stickers that more closely resemble what users coming from Discord would want.
    • Cinny seems to have the best support for stickers/emotes but its site doesn’t mention them at all. It supports uploading and managing sticker packs at either a channel or user level, provides a nice picker UI to send any picture from those packs as either a large “sticker” or a small inline “emoji”, and allows using them for reactions.
    • FluffyChat mentions stickers on its site and has the second-best sticker support, covering all of the above except reactions and a graphical picker for inline emoji (you need to type those as shortcodes).
    • SchildiChat, Nheko, and NeoChat have some sort of limited support for custom stickers/emoji. NeoChat is the only one of those that advertises stickers on its main site. Nheko mentions them in a GitHub readme.

    Being able to freely use custom emotes without paying for a Discord Nitro subscription or server boosts would be a great selling point, but it’s not something most users would be able to figure out before signing up. The limited client support isn’t great either; e.g. FluffyChat is the only Android client that supports sending custom stickers, but some people may dislike its chat-bubble-style UI.


  • I have configured custom Android kernel builds to enable more USB drivers, enable module support, and tweak various other things. For one tangible example of the result: I could plug in a USB Wi-Fi adapter and broadcast my own AP from it while the internal NIC simultaneously stayed connected to another Wi-Fi network. On an Android device, of all things. I have also adjusted kernel builds for SBCs (like Pi clones) to get things working at all.
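
    For a sense of what those tweaks look like, here’s an illustrative config fragment (these are real Kconfig symbols, but which drivers you actually need depends on the adapter):

    # Loadable module support plus a couple of USB Wi-Fi drivers as modules
    CONFIG_MODULES=y
    CONFIG_CFG80211=m
    CONFIG_MAC80211=m
    CONFIG_ATH9K_HTC=m   # Atheros USB 802.11n adapters
    CONFIG_RTL8XXXU=m    # common Realtek USB chipsets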

    I have never seen any reason to build a custom kernel for my own desktop/laptop systems. The default builds for the distros I’ve used have been fine for me; if I’m ever dissatisfied with anything, it’s the version number rather than the defconfig. The RHEL/Rocky kernel omits a few features I want (like btrfs), but I’d rather stick to other distros on personal systems than tweak a distro that isn’t even meant for tweaking.


  • I never had problems with Debian stable, especially on headless servers. But it’s not especially well-suited for brand-new desktop hardware; even Ubuntu LTS and RHEL focus more on hardware-enablement backports than Debian does.

    I’ve had a worse experience with Debian testing breaking my system with updates than with Arch. Add to that the freeze period (2012’s was the worst, lasting 11 months) and testing feels like the worst of both worlds between rolling and standard-release distros.


  • Not every work environment is the same.

    When I first started with my current employer I was given a system with RHEL preinstalled and I replaced it with Fedora on my first day. I was told to use LUKS and given a normal OpenVPN profile but otherwise they don’t control or monitor anything about my workstation. No matter how many years or decades I stay at this company, it’s extremely unlikely I’ll ever touch an OS that isn’t Linux-based during work time.

    Every previous job I’ve been at also had me use Linux for my primary workstation, because my field of work more or less requires it, but some have needed me to access a separate Windows system/server/VM on rare occasions.