RL 8.5 Install on RAID1 NVMe SSDs

Hello everyone

I’m trying to install RL 8.5 (DVD version) on a RAID1 config (2x PCIe NVMe SSDs). I can see the disks in the BIOS, but the RL installer does not see them at all. Is this a driver issue? How can I make RL see the SSDs and continue with the installation?

Thank you very much

RAID1 as in “Linux software RAID” or as in “motherboard fakeRAID”?

I have an NVMe SSD that is not detected by the kernel because the motherboard’s SATA controller is in “RAID mode” rather than in “AHCI mode”. (Bizarre that fakeRAID can affect NVMe like that.)
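For what it’s worth, a quick sanity check from any Linux shell (live media or rescue mode) is to ask whether the kernel enumerates an NVMe controller at all; the device names in the comments are just examples:

    # Does the kernel see an NVMe controller / namespace at all?
    dmesg | grep -i nvme
    ls /dev/nvme*    # e.g. /dev/nvme0 and /dev/nvme0n1 appear when detected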

It’s motherboard fakeRAID.
The only RAID options in my BIOS (HP 800 G4) are two boxes under System Devices.

  1. configure storage controller for RAID - ticked
  2. port prompt for RAID configuration - ticked.

I will untick those and check if RL can see the SSDs.

No luck :confused:

These SSDs are PCIe, not SATA. I’ve deleted the RAID1 volume and checked whether RL can see the disks individually, but still no luck.

OK, I had forgotten to untick the “configure storage controller for RAID” box. Now the RL installer can see my two SSDs, but I’m wondering why it doesn’t see the volume when they are configured as RAID1.

Like I said: bizarre.

Some speculation: ssd - NVMe Drive not Detected on Linux - Super User

I wonder whether RHEL 9’s kernel has the same issue. :thinking:

[EDIT] A CentOS Stream 9 installer (a couple of months old) does not see the NVMe either.

Thank you, jlehtone.
I’m sure it will be sorted in the future. I will switch to SATA SSDs for now.

All the best.

Are you saying the SATA controller settings can somehow affect the NVMe drive? That does sound odd; is there anything in the MB manual confirming this? Do you know if it’s just this motherboard, or does it affect others? I wonder if this is related to PCIe controllers sharing “lanes”.

I was not involved in the thread that I linked to. My system has a Gigabyte board, @goudeuk has an HP with an Intel chipset two generations newer, and that thread pointed to someone with an older Intel board.

I must have misunderstood the text in your forum post from Apr 21, 1:56 pm; since it looks like plain text, I didn’t realize it was a link.

I’m interested to know whether these disks are seen by Windows (by default), because in earlier Windows versions you had to “insert the driver disk” before it would recognize them.

In my case – a GB-Z170X-UD3 board – I have an Intel NVMe PCIe card in an x16 slot (x8 directly from the CPU). That is seen by CentOS 7, AlmaLinux 8, and Windows 10. It is the system disk.
Second, two SATA HDDs set as an Intel fakeRAID mirror. That is the data volume, used by both Linux and Windows.

Third, a later add-on: a Samsung SSD 970 EVO in the M.2 slot. NVMe, x4 link. This was immediately seen by Windows but remains undetected in Linux. (Tested with Alma and now CentOS Stream 9; I haven’t booted CentOS 7 after adding that disk.) As it happens, only the hog (Windows) needs the 970, so I’m not bothered, just baffled.

[EDIT] The motherboard’s “SATA Mode” has two options:

  • AHCI
  • Intel RST Premium

The board’s firmware does not see the Intel NVMe at all, except as a boot location.

In RST mode the board’s RST menu sees the RAID1 array and the Samsung.
In RST mode the Stream 9 installer sees the RAID1 array and the Intel NVMe.
In AHCI mode the Stream 9 installer sees both the Intel and Samsung NVMe’s. (The fakeRAID array is obviously missing, but /dev/sd[ab] are not there either.)

Apparently (per the MB manual) the Intel “RST” fakeRAID firmware can create a RAID array from M.2 PCIe (NVMe) SSDs, but only in UEFI mode, and one cannot mix SATA devices (M.2 or regular SATA) into such an array.

That is, Intel RST can be involved with both SATA and NVMe devices, and some of it lives on the motherboard. Perhaps RHEL kernels lack a suitable RST driver to get full access?
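A rough way to check which mode the controller is in, and which kernel driver (if any) has claimed it – only a sketch, the exact strings vary by board:

    # List the storage controllers; in RST/RAID mode the SATA controller usually
    # reports itself as "SATA Controller [RAID mode]" rather than "AHCI Controller"
    lspci -nn | grep -iE 'sata|raid|non-volatile'

    # Show which kernel driver is bound to each (look for "Kernel driver in use")
    lspci -k | grep -iEA3 'sata|raid|non-volatile'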

Intel RST is only supported for Windows by Intel Corporation.
So you should use “Linux software RAID” (mdraid/mdadm) instead of the “motherboard fake(soft_by_RST)RAID” in the BIOS. Of course, if you have a hardware RAID card, software RAID is unnecessary.

1. Modify the BIOS so that you can see each individual NVMe and HDD directly.
2. While installing RL, go to Custom Storage Configuration; after the mount point is added, modify the Volume Group: select the drives and set the RAID Level to your needs. You can follow the guide installation-with-mdraid to set all mount points (/, /boot, /boot/efi, etc.) to RAID (see the command-line sketch below).
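For reference, here is a minimal command-line sketch of the same idea with mdadm; the device names (nvme0n1p2, nvme1n1p2) and the single array are assumptions – the installer GUI or the linked guide does this per mount point:

    # Create a Linux software RAID1 mirror from two NVMe partitions (example names)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/nvme0n1p2 /dev/nvme1n1p2

    # Verify the mirror is building / healthy
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # Record the array so it is assembled on every boot
    mdadm --detail --scan >> /etc/mdadm.conf

One caveat if you also mirror /boot/efi: that array is typically created with metadata version 1.0 so the firmware can still read the ESP as a plain FAT partition; the linked guide covers this.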


Exactly. In @goudeuk’s case I would try that with the NVMe drives before purchasing SATA SSDs.

It is actually mdadm that activates the “fake(soft_by_RST)RAID” array in Linux too. Earlier, mdadm did only software RAID while fakeRAIDs had their own dmraid(?) utility; now those are merged into mdadm.
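As an illustration (a sketch, assuming the firmware wrote Intel IMSM metadata on the disks), mdadm can both inspect and assemble such an array:

    # Look for Intel RST (IMSM) metadata on the member disks (example device names)
    mdadm --examine /dev/nvme0n1 /dev/nvme1n1

    # Assemble whatever containers and arrays mdadm can find
    mdadm --assemble --scan

    # An IMSM setup shows up as a container (often md127) plus the actual RAID volume
    cat /proc/mdstat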

Hummmmm, this does not surprise me. I have the same problem even without using RAID1: the BIOS has no problem seeing the NVMe drive, but Rocky cannot see the very drive it is installed on.

SPECULATION: I think this is something that popped up when Red Hat began messing around with GRUB2. Red Hat changed things so that instead of listing the kernels in the GRUB2 config, it now uses entries under /boot/loader/entries. When I created a Grub Customizer menu, it did not find RL even though I had booted from the NVMe drive where RL is located. In short, the NVMe drive was INVISIBLE, though clearly the computer was able to find the drive in order to boot from it.
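For what it’s worth, the /boot/loader/entries part is easy to confirm on a RHEL 8 family system; grubby is the supported tool for listing those entries:

    # BootLoaderSpec (BLS) entry files, one per installed kernel
    ls /boot/loader/entries/

    # Show every configured boot entry with its kernel, initrd, and arguments
    grubby --info=ALL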

I then put openSUSE on a SATA drive and created a custom Grub Customizer menu. It not only found the SATA drive but also the NVMe drive [Corsair MP600 4th Gen., 1 TB] that held RL. I have since moved openSUSE Leap 15.3 to a second Corsair MP600 1 TB drive; same thing: openSUSE seems to have no problem seeing both NVMe drives, while the RL drive sees only the openSUSE NVMe drive, but not itself.

Here is one thing you could try: download a copy of KNOPPIX 9.1 – it is a great utility disk that should be in everyone’s kit. One of the tools on it is GParted, which will show you the drive is there and will let you delete partitions so the entire drive is empty. Likewise, running fdisk -l should reveal the disk’s presence. But WHY can’t the installer find the actual disk itself? The only way to tell is to use something that will cling to the invisible drive – just like the Invisible Man: if you throw water on him, the water shows you where he is moving.
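From that KNOPPIX (or any live) shell, a few commands that act as the “water” here, assuming the kernel enumerates the drive at all:

    # Block devices the running kernel knows about
    lsblk -o NAME,MODEL,SIZE,TYPE,MOUNTPOINT

    # Partition tables, including NVMe namespaces such as /dev/nvme0n1
    fdisk -l

    # NVMe-specific listing (only if the nvme-cli package is present)
    nvme list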

To prove that the problem is RL and NOT the motherboard, for FUN – and just because you can – see if you can install an OS like openSUSE Leap 15.3 on the NVMe RAID1 drive. Like I said, when I ran Grub Customizer on RL to create a custom menu, it found openSUSE on a separate drive but could not locate itself; reversing the process, openSUSE had no problem finding RL as well as itself. Like you, my BIOS had no problem showing the drive.

My suggestion is simply to run something like KNOPPIX 9.1 and start GParted; that should reveal all the drives present. Since this is an NVMe drive, look for something like /dev/nvme0n1, then during the install phase look for that particular drive and try to install to it there. While this might be a problem with NVMe drives, more likely than not it is a problem caused by Red Hat. Given that Rocky Linux, AlmaLinux, and… are bug-for-bug copies of RHEL, any problems that crop up in RHEL will end up in RL, AL, et al.
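You can also do that check from inside the Rocky installer itself: Anaconda keeps a shell on the second virtual console (Ctrl+Alt+F2) and writes its storage scan to /tmp, so you can see whether the installer’s kernel ever saw the drive (device names are examples):

    # From the Anaconda shell (tty2) during the install:
    lsblk                    # does /dev/nvme0n1 exist for the installer?
    less /tmp/storage.log    # Anaconda's storage detection log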

If you want to use RAID in Linux, you should either use a “real” RAID controller or disable RAID in your BIOS. Linux native software RAID (with mdadm etc.) works perfectly & more reliably than even many real RAID controllers. That way most SATA controllers should be recognized, as well as all the disks connected to them. But one gotcha remains: you need a distro that is not too old if your mainboard is modern. Otherwise your kernel or its modules may not be able to recognize your hardware, & then you might not be able to see any disks or configure them.

What I’m trying to get at here is that Red Hat Enterprise Linux, & therefore also Rocky, which is based on it, is a “stable” OS that doesn’t use the newest kernels or software. That means you don’t have all the support for the newest hardware, & so sometimes things might not work without a lot of hands-on work.

So if you are using a new system, it may be a better idea to use Fedora, which is more up-to-date, or another similar OS, & then run Rocky as a VM under that OS; the underlying hardware is then not that much of a problem.