I have a ZFS pool that I made on Proxmox, and I noticed an error today. I think the drives got renamed at some point and now the pool is confused. I have 5 NVMe drives in total: 4 are supposed to be in the ZFS array (the CT1000s), and the 5th, a Samsung drive, is the Proxmox system/install drive and not part of ZFS. It looks like the numbering got changed, so the drive that used to be in the array as nvme1n1p1 is now actually the Samsung drive, and the drive that is supposed to be in the array is now called nvme0n1.

root@pve:~# zpool status
  pool: zfspool1
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 00:07:38 with 0 errors on Sun Oct 13 00:31:39 2024
config:

        NAME                     STATE     READ WRITE CKSUM
        zfspool1                 DEGRADED     0     0     0
          raidz1-0               DEGRADED     0     0     0
            7987823070380178441  UNAVAIL      0     0     0  was /dev/nvme1n1p1
            nvme2n1p1            ONLINE       0     0     0
            nvme3n1p1            ONLINE       0     0     0
            nvme4n1p1            ONLINE       0     0     0

errors: No known data errors

Looking at the devices:

root@pve:~# nvme list
Node                  Generic               SN                   Model                                    Namespace Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme4n1          /dev/ng4n1            193xx6A         CT1000P1SSD8                             1           1.00  TB /   1.00  TB    512   B +  0 B   P3CR013
/dev/nvme3n1          /dev/ng3n1            1938xxFF         CT1000P1SSD8                             1           1.00  TB /   1.00  TB    512   B +  0 B   P3CR013
/dev/nvme2n1          /dev/ng2n1            192xx10         CT1000P1SSD8                             1           1.00  TB /   1.00  TB    512   B +  0 B   P3CR010
/dev/nvme1n1          /dev/ng1n1            S5xx3L      Samsung SSD 970 EVO Plus 1TB             1         289.03  GB /   1.00  TB    512   B +  0 B   2B2QEXM7
/dev/nvme0n1          /dev/ng0n1            19xxD6         CT1000P1SSD8                             1           1.00  TB /   1.00  TB    512   B +  0 B   P3CR013
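
For what it’s worth, the mapping between the kernel names and the stable names under /dev/disk/by-id/ can be checked with something like this (I’m only including the command here, not the output):

# each by-id symlink points at whichever nvmeXn1 node the drive currently has
ls -l /dev/disk/by-id/ | grep nvme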

Trying to use the zpool replace command gives this error:

root@pve:~# zpool replace zfspool1 7987823070380178441 nvme0n1p1
invalid vdev specification
use '-f' to override the following errors:
/dev/nvme0n1p1 is part of active pool 'zfspool1'

So it thinks nvme0n1p1 is still part of the pool, even though the zpool status output doesn’t show it as an active member.
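
I’m guessing the old ZFS label is still sitting on that partition, which might be why replace refuses without -f. Something like this should dump whatever label is there (I haven’t dug into the output myself):

# print the ZFS label(s) recorded on the partition; a label naming 'zfspool1'
# would explain the "is part of active pool" complaint
zdb -l /dev/nvme0n1p1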

Can anyone shed some light on what is going on here? I don’t want to mess with it too much since it does work right now, and I’d rather not have to start again from scratch from backups.

I ran smartctl -a against each drive (e.g. smartctl -a /dev/nvme0n1) and there don’t appear to be any SMART errors, so all the drives seem to be working well.

Any idea on how I can fix the array?

  • hendrik@palaver.p3x.de

    I don’t know anything about ZFS, but in the future you might want to address them by /dev/disk/by-uuid/… or by-id and not by /dev/nvme…

    • Shdwdrgn@mander.xyz

      That is definitely true of ZFS as well. In fact, I have never seen a guide that suggests anything other than using the names found under /dev/disk/by-id/ or /dev/disk/by-uuid/, and that is precisely to prevent this problem. If the proper convention is used, you can plug the drives in through any available interface, in any order, and ZFS will easily re-assemble the pool at boot.

      So now this raises the question… is Proxmox really creating pools using whatever device names the drives happen to boot up with???
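
      One way to check would be to have the pool print the full device paths it actually recorded; something along these lines should show them (I haven’t verified this on Proxmox specifically):

      # -P displays full vdev paths instead of only the last path component
      zpool status -P zfspool1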

      • qupada@fedia.io

        Generally, you just need to export the pool with zpool export zfspool1, then import it again with zpool import -d /dev/disk/by-id zfspool1.

        I believe it should stick after that.

        Whether that will apply in its current degraded state I couldn’t say.
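
        As a rough sketch of that sequence (untested on a degraded pool; make sure nothing is using the pool before exporting it):

        # export the pool, then re-import it while scanning /dev/disk/by-id
        zpool export zfspool1
        zpool import -d /dev/disk/by-id zfspool1
        # confirm the members now show up under their by-id names
        zpool status zfspool1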

        • Lem453@lemmy.ca (OP)

          Thanks, this worked. I made the ZFS array in the Proxmox GUI and it used the nvmeX names by default. Interestingly, when I ran zpool export, nothing seemed to happen. I then tried zpool import and it said there were no pools available to import, but when I ran zpool status it showed the array up and working, with all 4 drives healthy and now listed by device ID. Odd, but it seems to be working correctly now.

          root@pve:~# zpool status
            pool: zfspool1
           state: ONLINE
            scan: resilvered 8.15G in 00:00:21 with 0 errors on Thu Nov  7 12:51:45 2024
          config:
          
          		NAME                                                                                 STATE     READ WRITE CKSUM
          		zfspool1                                                                             ONLINE       0     0     0
          		  raidz1-0                                                                           ONLINE       0     0     0
          			nvme-eui.000000000000000100a07519e22028d6-part1                                  ONLINE       0     0     0
          			nvme-nvme.c0a9-313932384532313335343130-435431303030503153534438-00000001-part1  ONLINE       0     0     0
          			nvme-eui.000000000000000100a07519e21fffff-part1                                  ONLINE       0     0     0
          			nvme-eui.000000000000000100a07519e21e4b6a-part1                                  ONLINE       0     0     0
          
          errors: No known data errors
          
      • Shdwdrgn@mander.xyz

        OP – if your array is in good condition (and it looks like it is) you have an option to replace drives one by one, but this will take some time (probably over a period of days). The idea is to remove a disk from the pool by its old name, then re-add the disk under the corrected name, wait for the pool to rebuild, then do the process again with the next drive. Double-check, but I think this is the proper procedure…

        zpool offline poolname /dev/nvme1n1p1

        zpool replace poolname /dev/nvme1n1p1 /dev/disk/by-id/drivename

        Check zpool status to confirm when the drive is done rebuilding under the new name, then move on to the next drive. This is the process I use when replacing a failed drive in a pool, and since that one drive is technically in a failed state right now, this same process should work for you to transfer over to the safe names. Keep in mind that this will probably put a lot of strain on your drives since the contents have to be rebuilt (although there is a small possibility zfs may recognize the drive contents and just start working immediately?), so be prepared in case a drive does actually fail during the process.
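
        As a per-drive sketch of that (the by-id name below is a placeholder; substitute the real path from ls -l /dev/disk/by-id/):

        # take one member offline under its old kernel name
        zpool offline zfspool1 nvme2n1p1
        # re-add the same disk under its stable by-id name and let it resilver
        zpool replace zfspool1 nvme2n1p1 /dev/disk/by-id/nvme-eui.XXXX-part1
        # wait for the resilver to finish before moving on to the next drive
        zpool status zfspool1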

        • Lem453@lemmy.ca (OP)

          Thanks for this! Luckily the above suggestion to export and import worked right away so this was not needed.

          • Shdwdrgn@mander.xyz

            Yeah I figured there would be multiple answers for you. Just keep in mind that you DO want to get it fixed at some point to use the disk id instead of the local device name. That will allow you to change hardware or move the whole array to another computer.

    • Lem453@lemmy.ca (OP)

      Thanks! I got it set up by IDs now. I originally set it up via the Proxmox GUI and it defaulted to the NVMe device names.

        • Possibly linux@lemmy.zip

          I believe ZFS is smart enough to automatically find the disk on the system, since it looks at other information like the disk ID as well. It shouldn’t just lose a drive.

          zpool just shows the original path of the disk when it was added. Behind the scenes ZFS knows your drives. There is a chance I am totally wrong about this.

          What is the output of lsblk? Any missing drives?
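
          For example, something like this would show whether all five NVMe devices are visible (columns picked just for readability):

          # list block devices with model and serial so the Samsung vs. CT1000 drives are obvious
          lsblk -o NAME,MODEL,SERIAL,SIZE,MOUNTPOINT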