• zenharbinger@lemmy.world · 34 points · 10 months ago

    Some partitions are useful. Keeping /var and /tmp separate can stop DoS attacks by not allowing logs to fill the entire drive, and a separate /home means you can wipe the / partition and keep user data.
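
    A minimal /etc/fstab sketch of that layout (device names, filesystem type, and sizes are hypothetical; adjust for your disk):

        /dev/sda2  /      ext4  defaults               0 1
        /dev/sda3  /var   ext4  defaults               0 2
        /dev/sda4  /tmp   ext4  defaults,nosuid,nodev  0 2
        /dev/sda5  /home  ext4  defaults               0 2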

    • limelight79@lemm.ee · 12 points · 10 months ago

      I’ve had a full /var partition cause all sorts of problems on a system. But I still think it’s good to have four partitions: /, /var, /tmp, and /home. At the very least, split out /home so you can format / without losing your stuff.

        • limelight79@lemm.ee · 2 points · 10 months ago

          I can definitely see doing that on a server many people are using. For my personal server, I used to do that, but in the end I couldn’t find much benefit, only headaches (“ahhhh, / is short on space because I forgot to clean up old kernels…”).

          • scratchandgame@lemmy.ml · 3 points · 10 months ago

            I think it could save you someday: when nothing is being written in /usr, a write to /home can’t cause much damage there. On a system with one huge root partition, an incomplete write might damage the whole filesystem.

            fsck would be faster, and newfs (mkfs) would be faster. I found NetBSD spent a long time running newfs on a 32 GB root partition (installing NetBSD in Hyper-V).

            Also, for the /tmp partition, we can use a memory filesystem (tmpfs) if we have 4 GB of RAM or more, instead of a physical disk, to store things that are cleaned on reboot.
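
            In /etc/fstab that is a single line (the size= cap is an assumption; tune it to your RAM):

                tmpfs  /tmp  tmpfs  defaults,nosuid,nodev,size=2G  0 0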

            • limelight79@lemm.ee · 2 points · 10 months ago

              I’m not saying it can’t happen, but I’ve been using Linux since the late 90s and have never had a problem with an incomplete write damaging the file system, or really anything else (except for a recent incident when a new motherboard decided to overwrite the partition tables on my RAID5 array, but that’s a different story). And I have UPSs on the server and desktop, and of course the laptop has a battery in it, so the risk of sudden power loss is extremely low.

              The /tmp thing in RAM is interesting. I was reconfiguring my server’s drive the other day, because I didn’t originally allocate enough space to /var - it worked fine for years until I started playing with plex, jellyfin, and Home Assistant (the latter due to the database size). I was shocked to find /tmp only had a few files in it, after running for years. I think I switched the server to Debian in 2018 or 2019, but that’s just a guess based on the file dates I’m seeing. Maybe Debian cleans the /tmp partition regularly.

    • emptyother@programming.dev · 7 points · 10 months ago

      Damn, I’ve always wanted Windows to have that: being able to put user folders on another partition, or even another drive, at install time, and being able to use “dynamic disks” (aka software RAID) to expand partitions across disks as storage requirements grow. I know it is possible to set up, but with a lot of workarounds and annoying problems.

      • Magickmaster@feddit.de · 20 points · 10 months ago

        Windows user folders are nearly unusable in my opinion; too many programs throw random folders and files everywhere. Especially the Documents folder: too many games put incoherent stuff in there.

        • emptyother@programming.dev · 11 points · 10 months ago

          Yup, useless folder. There’s one related thing I’ve complained a lot about lately, so I’m gonna complain some more about it:

          Microsoft got this “great” idea of trying to repeatedly trick me into uploading that Documents folder to the cloud. A folder filled with gigabytes of Battlefield and Assassin’s Creed cache files, Starfield mods, MS database files, etc. A lot of files that are in constant change, or locked for the entire session. Annoying as hell. I love Onedrive, but I don’t know why it’s so damn important for them to have those files.

          Sometimes I really wish I could switch to some Linux distro instead.

          • rtxn@lemmy.world · 10 points · 10 months ago

            It’s asinine that Onedrive doesn’t have an equivalent of the decades-old gitignore technology…

            There seems to be a workaround, though - archive link. It should work as long as the local and remote conflict remains unresolved, or until Microsoft decides to just push the remote onto the local machine and delete your files instead.

      • rtxn@lemmy.world · 8 points · 10 months ago

        I’m pretty sure you can just mount a volume to C:\Users.

        I definitely wouldn’t recommend changing the userdir paths in the system. Many of the office computers I work with are set up that way and it’s always a pain in the ass when an application expects the home path to be located on C:.

        • gravitas_deficiency@sh.itjust.works · 4 points · 10 months ago

          when an application expects the home path to be located on C:

          Clarification: does NTFS just suck at understanding that a directory-mapped storage device mounted under C: should be treated as if it were C: when within the mount dir?

          • rtxn@lemmy.world · 6 points · 10 months ago

            The second paragraph is about changing the path where Windows should look for the user files (analogous to running usermod -d /new/home user to change the user entry in the passwd file), not changing the filesystem. I don’t see any reason why a directory-mapped device would behave any differently than a regular directory… although in my brief time working with softlinks and directory junctions, I learned not to have expectations of Windows/NTFS.

            I think the issue is that Windows stores the home path in two environment variables – HOMEDRIVE contains the drive letter, and HOMEPATH contains the path relative to the drive’s root (no, I’m not willing to call it an absolute path). If an application only uses the HOMEPATH envvar, the full path will default to whichever drive letter the environment’s working directory belongs to, which is most likely C:. I don’t have a Windows machine to test it though, so I might be wrong.
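
            A quick way to see that pitfall without a Windows box is Python’s ntpath module: splitdrive() shows that a HOMEPATH-style value carries no drive letter, so it is drive-relative (the variable values below are hypothetical stand-ins for the two environment variables):

```python
import ntpath

# Hypothetical values of the two Windows environment variables:
homedrive = "D:"
homepath = r"\Users\rtxn"

# HOMEPATH alone has no drive component, so it is drive-relative:
drive, tail = ntpath.splitdrive(homepath)
print(repr(drive))  # '' -- resolves against whatever the current drive is

# Only the concatenation of the two variables is unambiguous:
full = homedrive + homepath
print(full)                        # D:\Users\rtxn
print(ntpath.splitdrive(full)[0])  # D:
```

            An application that reads only HOMEPATH ends up with the drive-relative form, which is exactly the failure mode described above.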

      • maxprime@lemmy.ml · 4 points · 10 months ago

        I remember doing this in macOS when I got my first SSD. I installed it, kept the OS on the SSD, and mapped my user directory to my HDD. It made upgrades and re-installs much easier, which was a plus because it was actually a hackintosh.

      • scratchandgame@lemmy.ml · 2 points · 10 months ago

        It isn’t possible :)

        Windows’ filesystem is different from Unix’s, and it is deeply flawed.