Did the BIOS update using:

./sum -i bmcIP -u bmcUser -p bmcPW -c UpdateBios --file \
    .././bios/BIOS_H12DSU-1B54_20240119_2.8_STDsp_UP/BIOS_H12DSU-1B54_20240119_2.8_STDsp.bin

After the power cycle the BMC UI still shows all NVMes, but the OS no longer
has a slot mapping for all of them:

> /lib/udev/bayLinksNvme.sh -l
Slot   Address      Bay
       0000:c1:00   16
       0000:c2:00   17
       0000:c3:00   18
       0000:c4:00   19
       0000:e1:00   12
       0000:e2:00   13
       0000:e3:00   14
       0000:e4:00   15
0      0000:61:00    0   nvme0
1      0000:62:00    1   nvme1
2      0000:63:00    2   nvme2
3      0000:64:00    3   nvme3
8      0000:01:00    4   nvme4
9      0000:02:00    5   nvme5
10     0000:03:00    6   nvme6
8-1    0000:04:00    7
9-1    0000:21:00    8
10-1   0000:22:00    9
11     0000:23:00   10
12     0000:24:00   11
21     0000:a1:00   20
22     0000:a2:00   21
23     0000:a3:00   22
11-1   0000:a4:00   23   nvme14

Previously it was:

Slot   Address      Bay
0      0000:61:00    0   nvme0
1      0000:62:00    1   nvme1
2      0000:63:00    2   nvme2
3      0000:64:00    3   nvme3
8      0000:01:00    4   nvme4
9-1    0000:02:00    5   nvme5
10-1   0000:03:00    6   nvme6
8-1    0000:04:00    7
9      0000:21:00    8
10     0000:22:00    9
11     0000:23:00   10
12     0000:24:00   11
0-1    0000:e1:00   12   nvme7
1-1    0000:e2:00   13   nvme8
1-2    0000:e3:00   14   nvme9
1-3    0000:e4:00   15   nvme10
0-2    0000:c1:00   16   nvme11
1-4    0000:c2:00   17   nvme12
2-1    0000:c3:00   18   nvme13
20     0000:c4:00   19
21     0000:a1:00   20
22     0000:a2:00   21
23     0000:a3:00   22
11-1   0000:a4:00   23   nvme14

Currently the script uses the following map:

typeset -r AS2124USTNRP='
# Device @ 0   CPLD @ 0   Slot @ 0 .. 11 : Drives 0..11
SYS:0000:61:00=0     # P1 0
SYS:0000:62:00=1     # P1 1
SYS:0000:63:00=2     # P1 2
SYS:0000:64:00=3     # P1 3
SYS:0000:01:00=4     # 2UR68G4-i4XTS: AOC-SLG4-4E4T (SXB3 - hilio: b)
SYS:0000:02:00=5     # --"--
SYS:0000:03:00=6     # --"--
SYS:0000:04:00=7     # --"--
SYS:0000:21:00=8     # AOC-SLG4-2E4T (SXB3 - himimili: e)
SYS:0000:22:00=9     # --"--
SYS:0000:23:00=10    # AOC-SLG4-2E4T (SXB3 - himio: c)
SYS:0000:24:00=11    # --"--

# Device @ 0   CPLD @ 1   Slot @ 1 .. 11 : Drives 0..11
SYS:0000:e1:00=12    # P2 0
SYS:0000:e2:00=13    # P2 1
SYS:0000:e3:00=14    # P2 2
SYS:0000:e4:00=15    # P2 3
SYS:0000:c1:00=16    # RSC-W2-66G4: AOC-SLG4-4E4T (SXB1 - hiliu: i)
SYS:0000:c2:00=17    # --"--
SYS:0000:c3:00=18    # --"--
SYS:0000:c4:00=19    # --"--
SYS:0000:a1:00=20    # RSC-WR-6, AOC-SLG4-4E4T (SXB2 - himiure: f)
SYS:0000:a2:00=21    # --"--
SYS:0000:a3:00=22    # --"--
SYS:0000:a4:00=23    # --"--
'
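In the first listing, the entries for 0000:c1..c4 and 0000:e1..e4 have lost both
their slot names and their nvme devices, which looks like the controllers moved
to different PCI addresses (or vanished) after the BIOS update. To check what
the kernel actually sees, the current controller addresses can be compared 1:1
against the SYS:<pci-addr>=<slot> entries above -- a minimal sketch, assuming
standard Linux sysfs and pciutils; nothing here is specific to bayLinksNvme.sh:

# PCI address of each NVMe controller as the kernel sees it right now
for n in /sys/class/nvme/nvme*; do
    [ -e "$n" ] || continue
    printf '%-8s %s\n' "${n##*/}" "$(cat "$n/address")"
done

# Same information straight from the PCI bus
# (class 0108 = Non-Volatile memory controller)
lspci -D | grep -i 'non-volatile'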
and drives are missing from the related zpool:

admin.ritter ~ > zpool status -P
  pool: pool1
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 00:00:01 with 0 errors on Sun Apr 14 00:24:02 2024
config:

        NAME                            STATE     READ WRITE CKSUM
        pool1                           DEGRADED     0     0     0
          mirror-0                      DEGRADED     0     0     0
            /dev/chassis/SYS/SSD0p1     ONLINE       0     0     0
            10384737669344333986        UNAVAIL      0     0     0  was /dev/chassis/SYS/SSD12p1
          mirror-1                      DEGRADED     0     0     0
            /dev/chassis/SYS/SSD1p1     ONLINE       0     0     0
            14837303515546542424        UNAVAIL      0     0     0  was /dev/chassis/SYS/SSD13p1
          mirror-2                      DEGRADED     0     0     0
            12609951237823867747        UNAVAIL      0     0     0  was /dev/chassis/SYS/SSD14p1
            /dev/chassis/SYS/SSD2p1     ONLINE       0     0     0
          mirror-3                      DEGRADED     0     0     0
            16426271462223262692        UNAVAIL      0     0     0  was /dev/chassis/SYS/SSD15p1
            /dev/chassis/SYS/SSD3p1     ONLINE       0     0     0
          mirror-4                      DEGRADED     0     0     0
            3632753568505998180         UNAVAIL      0     0     0  was /dev/chassis/SYS/SSD16p1
            /dev/chassis/SYS/SSD4p1     ONLINE       0     0     0
          mirror-5                      DEGRADED     0     0     0
            6241085848457138779         UNAVAIL      0     0     0  was /dev/chassis/SYS/SSD17p1
            /dev/chassis/SYS/SSD5p1     ONLINE       0     0     0
          mirror-6                      DEGRADED     0     0     0
            1034018542743916498         UNAVAIL      0     0     0  was /dev/chassis/SYS/SSD18p1
            /dev/chassis/SYS/SSD6p1     ONLINE       0     0     0
        spares
          /dev/chassis/SYS/SSD23p1      AVAIL

admin.ritter ~ > zpool status -L
  pool: pool1
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 00:00:01 with 0 errors on Sun Apr 14 00:24:02 2024
config:

        NAME                        STATE     READ WRITE CKSUM
        pool1                       DEGRADED     0     0     0
          mirror-0                  DEGRADED     0     0     0
            nvme0n1                 ONLINE       0     0     0
            10384737669344333986    UNAVAIL      0     0     0  was /dev/chassis/SYS/SSD12p1
          mirror-1                  DEGRADED     0     0     0
            nvme1n1                 ONLINE       0     0     0
            14837303515546542424    UNAVAIL      0     0     0  was /dev/chassis/SYS/SSD13p1
          mirror-2                  DEGRADED     0     0     0
            12609951237823867747    UNAVAIL      0     0     0  was /dev/chassis/SYS/SSD14p1
            nvme2n1                 ONLINE       0     0     0
          mirror-3                  DEGRADED     0     0     0
            16426271462223262692    UNAVAIL      0     0     0  was /dev/chassis/SYS/SSD15p1
            nvme3n1                 ONLINE       0     0     0
          mirror-4                  DEGRADED     0     0     0
            3632753568505998180     UNAVAIL      0     0     0  was /dev/chassis/SYS/SSD16p1
            nvme4n1                 ONLINE       0     0     0
          mirror-5                  DEGRADED     0     0     0
            6241085848457138779     UNAVAIL      0     0     0  was /dev/chassis/SYS/SSD17p1
            nvme5n1                 ONLINE       0     0     0
          mirror-6                  DEGRADED     0     0     0
            1034018542743916498     UNAVAIL      0     0     0  was /dev/chassis/SYS/SSD18p1
            nvme6n1                 ONLINE       0     0     0
        spares
          nvme14n1                  AVAIL

Trying to figure out what's going wrong ...
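My current suspicion is that only the udev-generated /dev/chassis/SYS/SSD*
links are gone or point elsewhere, while the data on the drives is intact. If
that turns out to be true, the pool should come back without a full rebuild
once the slot map matches the new PCI layout again -- a rough sketch of the
recovery, hedged and not a verified procedure (the GUID is the one zpool
status prints for the former SSD12p1):

# Re-scan the pool's devices via the chassis links:
zpool export pool1
zpool import -d /dev/chassis/SYS pool1

# Or re-open a single UNAVAIL vdev by the GUID shown in 'zpool status'
# and let it resilver the delta:
zpool online pool1 10384737669344333986
zpool status pool1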