HP DL Gen 7 install issue. Hardware support matrix?

Trying to install Rocky on an HP DL Gen 7 server, and when we pick the RAID controller’s logical volume disk the installer says no disk is selected.

Has anyone successfully installed it on an HP Gen 7 server? Is there a matrix of supported hardware published somewhere? I’ve been looking and can’t find anything. I have a large infrastructure and need to figure out whether Rocky will work across the spectrum of server manufacturers and models I have.


Really curious: are you using software RAID or the full-on HP hardware RAID?

Also, which DL server is it? Most Gen 7s went Retired (End of Sale) in 2013 and End of Service Life (EOSL) in 2018.

Using hardware RAID, and it’s a DL580 Gen 7.

The LUN size is something like 3.5T, so we’re destroying that and recreating a 2T LUN to see if that’s a boundary limitation.

Red Hat probably has something published for RHEL 8. Overall, RHEL does not include drivers for all devices, and in particular not for many older devices. There is only so much hardware that Red Hat is willing to support.

ELRepo has drivers (kernel modules) for many devices that are not included in RHEL. If you know the DeviceID (look it up with lspci -nn), then you can check whether ELRepo has a driver. They package both RPMs for normal install/upgrade and “driverdisk images” for the installer to use.
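For example (the exact wording of the line varies by controller; the bracketed pair at the end is what you search for):

lspci -nn | grep -i raid
# the [vendor:device] pair in brackets at the end of the matching line
# is the DeviceID to look up on elrepo.org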

The fascinating part in your case is that you see a volume but can’t select it. If there were no driver, the installer should not see/show the volume at all …


Yeah. That makes me think it might be a size limitation (feature) rather than a missing driver.

Red Hat’s support page is here:

In any event, as @jlehtone suggested, providing the device ID [xxxx:yyyy] should make the status clear.

Re lspci -nn: I’d need to get the OS booted to do that. I may install CentOS again to see what it is if the reduction in disk size doesn’t work.

When in the installer, there should be virtual consoles (switch with Ctrl-Alt-Fn), and at least one of them has a shell prompt.

Plan B: edit the kernel command line of the installer before it boots; add rescue.

(This assumes that the installer has lspci. Can’t remember whether it does.)
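A rough sketch of that edit, just to get a shell and run lspci (the exact line differs per boot medium, so leave the existing inst.stage2=… arguments as they are):

# at the installer boot menu, press ‘e’ (or Tab on legacy BIOS) and append to the kernel line:
linux /images/pxeboot/vmlinuz inst.stage2=... quiet rescue
# on EL8-era installers the option is also written as inst.rescue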

[EDIT]
You might be able to use GPT even if you boot in legacy mode; the boot-loader’s first stage in sector 0 (as in MBR) does not bother GPT.

Alternatively, use the hardware to create two volumes: a small one that you can boot from (with MBR) and a large one for data, which can definitely have GPT.
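If you go that route, the data volume can be given a GPT label after install with something like this (the device name /dev/sdb is an assumption; it is destructive, so double-check first):

parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart data xfs 1MiB 100%
mkfs.xfs /dev/sdb1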

Do you have access to the ILO, and the web interface to it?
You should be able to identify the ‘model’ of the RAID controller (i.e. P410i chip, Adaptec add-in card, or REM board).
I remember that there ‘were’ issues with specific RAID controllers (and/or the on-board chip) as provided by HP…

Thanks. Yes, I do. I’ll check that.

Also, while you’re at it, check the ILO and BIOS firmware versions… HP may be ‘hiding’ the updates now, but try to keep them both updated…

Updates did not help, and a 2TB or smaller LUN didn’t work either.
The DL580 is using an HP Smart Array P410i controller.

The ‘infamous’ P410i…
OK, so: do you have access to the ILO, and the ‘utility’ to configure the disk devices via the RAID controller?

If so, it’s time to experiment…

  1. Can you configure the 2 ‘primary’ (actually primary and secondary) disk devices as RAID ‘0’ (just a bunch o’ disks)?
  2. Can you do without the RAID controller, and connect the disk devices directly to the SATA connections on the motherboard itself?

If so, then you can attempt to install on the ‘primary’ disk device (if the RAID controller had written its label onto the disks, you may have to ‘dd’ over the initial tracks on the devices to remove it).
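A minimal sketch of that wipe, assuming the disk shows up as /dev/sda (destructive, so confirm the device name first):

# zero out the first few MB where old labels / partition tables live
dd if=/dev/zero of=/dev/sda bs=1M count=10
# or, less drastic: clear only the known filesystem/RAID signatures
wipefs -a /dev/sda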

If the installation now ‘sees’ the disk devices (and you can install on the primary device), then ‘I’ would assume that the P410i is the problem.

AND, if you can do without the RAID controller (I no longer use them myself), you can configure the 2 ‘primary’ disk devices using MD and LVM2 ‘software’ RAID mirroring…
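A generic sketch of the MD + LVM2 side (device names are assumptions, and the partitions are presumed to exist already; this is not anyone’s actual script):

# mirror two partitions with MD, then put LVM on top
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
pvcreate /dev/md0
vgcreate vg00 /dev/md0
lvcreate -L 20G -n root vg00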

I’m about to post ‘my’ script which does that ‘automatically’ (haven’t tested it past CentOS 7.9 yet).

I’ll give that a try. Thanks.

Update:

The manual install fails, and none of the previous suggestions made any difference. What “worked” is kickstarting the servers. The minimal package set works, but we’re having issues with add-on packages: the kickstart hangs. We’ll be adding one package at a time to the config and using tcpdump on the kickstart server to try to see what is hanging the kickstart.
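For the tcpdump side, something like this on the kickstart server (interface name and client address are placeholders, assuming the repo is served over HTTP):

tcpdump -nn -i eth0 host 10.0.0.50 and port 80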

I haven’t done a ‘manual’ install in over a decade now… There are a lot of ‘abilities’ that Kickstart provides over the other method.
What ‘device’ did you end up installing on (i.e. a single disk device)?

RAID controller LUN. I have about 500 Gen 7 HP DLs and was worried any move off CentOS would require a hardware change. I never came across a problem building manually, though I normally build via kickstart; it just seemed simpler for testing Rocky.
I’ll figure out what’s going on with the package installs and post back. I asked someone to add some debugging to the kickstart config.

Try adding something like repo --name="DD-1" --baseurl=http://lon.mirror.rackspace.com/elrepo/elrepo/el8/x86_64/ near the top of your kickstart file.

I have an issue with some older Dell servers, where the hardware RAID controller is no longer officially supported (or, at least the driver is not included by default) in RHEL and friends.

ELRepo maintains a repository of these drivers, which might help you on your way.

ELRepo builds modules for the hardware dropped from EL8 on HP and Dell servers. Go to Index of /linux/dud/el8/x86_64, pick your Rocky 8 version, and you get a driver update disk image you can feed to your kickstart. If you update the kernel, you will also need the updated kernel module installed before you reboot, or it will fail to find the disks again. For the controller you have, the driver is hpsa. Just put the image somewhere your kickstart can reach during the install and reference it in the kickstart.
Something like this:
driverdisk --source=nfs:imgsrv.nams.net:/sataraid/load/OS/Drivers/dl380g8.iso

Location of the kernel modules to install before you reboot, if you update to a stock 8.x kernel:
https://elrepo.org/linux/elrepo/el8/x86_64/RPMS/
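Putting those pieces together, a rough kickstart sketch (the DUD URL is a placeholder and the kmod filename is illustrative; take the real name from the RPMS directory above):

driverdisk --source=http://ks.example.com/dud/dd-hpsa-rocky8.iso

%post --log=/root/ks-post.log
# install the matching ELRepo hpsa kmod before the first reboot so an
# updated kernel can still find the Smart Array LUNs
dnf -y install https://elrepo.org/linux/elrepo/el8/x86_64/RPMS/kmod-hpsa-<version>.el8.elrepo.x86_64.rpm
%end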