• 1 Post
  • 31 Comments
Joined 9 months ago
Cake day: March 26th, 2024


  • BTRFS is a damn good option too. I’m happy to hear how easy it is to use. I haven’t used it (yet); I went with ZFS because of its flexible architecture. On a desktop, BTRFS makes sense, but in a server? What is it like under a hypervisor?

    I’m working on standing up a CloudStack host as a hypervisor. I want this host to be able to run 5 Kubernetes VMs, so it needs quick access to the disks. I do not have a RAID card, only an HBA. In such a scenario I would typically use RAID 10, but a ZFS RAID 10 outperforms an mdraid 10 anyway (in terms of writing, not necessarily reading), so that is what I’ve decided on. It may not be a good idea, it may not even be feasible, but I’m heckin willing to give it a shot. The sketch below shows the kind of layout I mean.
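
    For reference, here is roughly what that striped-mirror layout looks like as a zpool command. This is only a sketch with made-up device names; on real hardware I’d use the /dev/disk/by-id/ paths instead.

    # Hypothetical 4-disk ZFS "RAID 10": a stripe of two 2-way mirrors.
    # ashift=12 assumes 4K-sector drives; device names are examples only.
    zpool create -o ashift=12 tank \
      mirror /dev/sda /dev/sdb \
      mirror /dev/sdc /dev/sdd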

    I’m actually jealous that you automatically have built-in kernel support, though. I am a little curious how (or if) BTRFS connects multiple disks; I’m simply uninformed.

    ZFS Performance Sauce

    Install Ubuntu 24.04 on ZFS RAID 10 - GitHub Repository

    Edit: There are a few drawbacks to using ZFS, lousy Docker performance being one that I’ve heard about. I’m curious how this will be affected if Docker is running inside a VM.
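
    If it does turn out to matter, one workaround I’ve seen suggested (untested on my end) is pointing Docker at its zfs storage driver instead of overlay2. This assumes /var/lib/docker lives on a ZFS dataset:

    # Assumes /var/lib/docker is on a ZFS dataset; back up any existing daemon.json first.
    echo '{ "storage-driver": "zfs" }' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker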


  • That’s fair. I chose ZFS because I’ve used it before and understand it fairly well already. I know nothing about BTRFS, so perhaps you could educate me a little. I’m working on setting up a CloudStack host using ZFS RAID 10. Does BTRFS have a flexible enough architecture that you could do something similar?

    Edit: Perhaps you could also inform me about BTRFS speeds. From what I understand, ZFS outperforms BTRFS on large datasets, but I don’t know where the cutoff is. For reference, it would need to run 12 x 10 TB HDDs.
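
    For what it’s worth, my untested guess at the rough BTRFS equivalent of a 12-disk RAID 10 would be something like the line below. I’ve never run this, so treat the profile flags as an assumption on my part:

    # Hypothetical 12-disk btrfs "RAID 10": raid10 profile for both data and metadata.
    # Device names are placeholders.
    mkfs.btrfs -d raid10 -m raid10 /dev/sd[b-m]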






  • Here is the exact issue that I’m having. I’ve included screenshots of the command I use to list HDDs on the live CD versus the same command run on installed Ubuntu 24.04. I don’t know what is causing this, so perhaps this is a time where someone else can assist. The benefit of using /dev/disk/by-id/ is that you can be more specific about the device, so you can be sure the pool is attached to the proper disk no matter what state your environment is in. That is something you need for a stable ZFS install, but if I can’t do it with SCSI disks, then the advantage is limited.

    Windows Terminal for the win, btw.

    Live CD:

    Ubuntu 24.04 Installed:
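
    (In case the screenshots don’t come through: the comparison is just a listing of the stable device symlinks on each system, roughly the command below. On the installed system the scsi-* entries for these disks simply aren’t there.)

    # List the persistent disk symlinks; compare live CD vs installed system.
    ls -l /dev/disk/by-id/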


  • Well… I have to admit my own mistake as well. I did assume it would have faster read and write speeds based on my RAID knowledge and didn’t actually look it up until I was questioned about it. So I appreciate being kept honest.

    While we have agreed on the read/write benefits of a ZFS RAID 10, there are a few disadvantages to a setup like this. For one, I do not have the same level of redundancy. A raidz2 can lose any two hard drives. A ZFS RAID 10 is only guaranteed to survive one failure; it can survive additional failures as long as no single mirror loses both of its drives. So overall, this setup is less redundant than raidz2. (Rough commands for both layouts are sketched below for comparison.)
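
    Purely for illustration, here is roughly how the two layouts differ at pool-creation time. Device names are made up, and only six disks are shown to keep it short:

    # ZFS "RAID 10": three 2-way mirrors striped together.
    zpool create tank \
      mirror /dev/sda /dev/sdb \
      mirror /dev/sdc /dev/sdd \
      mirror /dev/sde /dev/sdf

    # raidz2: a single vdev where any two of the six disks can fail.
    zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf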

    Another drawback is that, for some reason, Ubuntu 24.04 does not recognize SCSI drives except on the live CD. Perhaps someone can help me with this to provide everyone with a better solution. The same disks that were visible on the live CD are not visible once the system is installed. The pool still technically works, but zpool status rpool will show that it is using sdb3 instead of the SCSI by-id names. This is fine in my case since my HDDs are SATA anyway, so I just changed to the SATA entries, but if I could ensure that others don’t face this issue, it would result in a more reliable ZFS installation for them.
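
    If anyone hits the same thing and just wants zpool status to show stable names again, the usual trick (which I haven’t verified on this exact setup, and which is fiddlier for a root pool since it has to be done from a live environment) is to re-import the pool from the by-id directory:

    # Re-import the pool so vdevs are recorded by their stable by-id paths.
    zpool export rpool
    zpool import -d /dev/disk/by-id rpool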






  • Interesting… Though I know nothing about your particular setup or about migrating existing data, I have a similar project in the works: automatically setting up a ZFS RAID 10 on Ubuntu 24.04.

    If you are interested in seeing how I am doing it, I used the OpenZFS root-on-ZFS guides for Debian and Ubuntu.

    Debian

    Ubuntu

    For the code, take a look at this GitHub repo: https://github.com/Reddimes/ubuntu-zfsraid10/

    One thing to note is that this runs two zpools, one for / and one for /boot. It is also specifically UEFI; if you need legacy BIOS you need to change the partitioning a little bit (see init.sh). A rough idea of the difference is sketched below.
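
    Very roughly, and not the repo’s exact commands (check init.sh for the real values), the difference boils down to which tiny boot partition sgdisk creates on each disk:

    # UEFI: an EFI System Partition (type EF00).
    sgdisk -n1:1M:+512M -t1:EF00 "$DISK"

    # Legacy BIOS: a small BIOS boot partition (type EF02) instead.
    sgdisk -a1 -n1:24K:+1000K -t1:EF02 "$DISK"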

    BE WARNED THAT THIS SCRUBS ALL FILESYSTEMS AND DELETES ALL PARTITIONS

    To run it, load up an ubuntu-server live CD and run the following:

    git clone --depth 1 https://github.com/Reddimes/ubuntu-zfsraid10.git
    cd ubuntu-zfsraid10
    chmod +x *.sh
    vim init.sh    # Change all disks to be relevant to your setup.
    vim chroot.sh    # Same thing here.
    sudo ./init.sh
    

    On first login, there are a few things I have not scripted yet:

    apt update && apt full-upgrade
    dpkg-reconfigure grub-efi-amd64
    

    There are two options for automating this: either I need to create a runonce.d service (here), or I need to add a script to the user’s profile.d directory which deletes itself after it runs. I also need to include a proper netplan configuration. I’m simply not there yet. The profile.d idea would look roughly like the sketch below.
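
    Something like this is what I have in mind for the profile.d route. It is only a sketch, not code that exists in the repo yet (note that profile.d scripts are sourced, hence BASH_SOURCE instead of $0):

    # Hypothetical /etc/profile.d/zfs-firstboot.sh: finish setup once, then remove itself.
    sudo apt update && sudo apt full-upgrade -y
    sudo dpkg-reconfigure -f noninteractive grub-efi-amd64
    sudo rm -f -- "${BASH_SOURCE[0]}"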

    I imagine in your case you could start a new pool and use zfs send to copy over the data from the old pool, then remove the old pool entirely and add the old disks to the new pool. I certainly have never done this though, and I suspect there may be an issue. The other option you have (if you have room for one more drive) is to configure it into a ZFS RAID 10; then you don’t need to migrate the data, you just need to add an additional mirror vdev with the extra drive and resilver. Roughly what I mean by the send/receive route is sketched below.
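
    The send/receive route would look something like this; pool and dataset names are made up, and I’d definitely test it on something disposable first:

    # Snapshot the old pool recursively, then stream it into the new pool.
    zfs snapshot -r oldpool@migrate
    zfs send -R oldpool@migrate | zfs receive -F newpool/migrated

    # Once verified, retire the old pool and hand its disks to the new one.
    zpool destroy oldpool
    zpool add newpool mirror /dev/disk/by-id/OLD-DISK-1 /dev/disk/by-id/OLD-DISK-2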

    One thing I tried to do was make the scripts easily customizable. They’re not quite ready for that yet, though; for now you could simply change the zpool commands in init.sh. The kind of customization I’m aiming for is sketched below.
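
    As an example of what I mean, the goal would be to declare the disks once and build the mirror arguments from them, rather than editing every zpool line by hand. This is just a sketch of the idea, not how init.sh is written today:

    # Hypothetical customization: list the disks once, pair them into mirrors.
    DISKS=(
      /dev/disk/by-id/ata-EXAMPLE-1 /dev/disk/by-id/ata-EXAMPLE-2
      /dev/disk/by-id/ata-EXAMPLE-3 /dev/disk/by-id/ata-EXAMPLE-4
    )
    VDEVS=()
    for ((i = 0; i < ${#DISKS[@]}; i += 2)); do
      VDEVS+=(mirror "${DISKS[i]}" "${DISKS[i+1]}")
    done
    zpool create -o ashift=12 rpool "${VDEVS[@]}"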