Rocky 9.2: existing LVM2 devices are not visible

This topic is closed, but I want to report that I have observed the same problem many times. If you take a set of MDADM+LVM2 RAID disks and move them to another system, MDADM has no problem whatsoever: all the md* devices get reassembled without difficulty, and /proc/mdstat looks perfect.

However, LVM2 will not admit that there are any physical volumes, volume groups, or logical volumes on the system until you execute:

lvmdevices --adddev /dev/mdXX

for each of the MDADM RAIDs. Sadly, lvmdevices only accepts one device at a time, so you can't execute

lvmdevices --adddev /dev/md*

if you have more than one MDADM RAID. In my case, I have about 50, so that is annoying.
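If you do have dozens of arrays, a simple shell loop works around the one-device-at-a-time limitation. This is just a sketch; it assumes your reassembled arrays show up as /dev/md0, /dev/md1, and so on, so adjust the glob to match your naming:

```shell
# Add every assembled md array to LVM's devices file, one device per
# lvmdevices invocation, since the tool does not accept multiple devices.
for dev in /dev/md[0-9]*; do
    lvmdevices --adddev "$dev"
done
```

Run as root; after the loop, pvs/vgs/lvs should see the volumes again.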

I have observed this behavior very consistently going back and forth between RHEL 9.1, RHEL 9.2, and Rocky Linux 9.2 systems. It may occur with other distros as well.

I haven’t been able to find any documentation about this behavior of LVM2. Can anyone refer me to the appropriate reference? And when/why did this behavior change? I have only observed it happening over the last year or so.


It’s been some time since I was running a few mdadm arrays, but this sounds somewhat familiar…

I think I discovered pvscan was skipping my existing physical volumes, but only by running it with two or three extra verbose (-vvv) switches. I know you’ve already gotten things running again by now, but something for next time?

Also, next time, you might need to modify (or even delete?) /etc/lvm/devices/system.devices?
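For reference, the devices file can be inspected and repopulated with the lvm2 tools themselves. A sketch, assuming the stock RHEL/Rocky 9 layout (run as root):

```shell
# Show which devices LVM is currently willing to scan
lvmdevices

# The devices file itself lives here by default on RHEL/Rocky 9
cat /etc/lvm/devices/system.devices

# Repopulate the devices file from all visible volume groups in one go
vgimportdevices -a
```

If the file is missing or deleted entirely, LVM falls back to scanning all devices, which is why deleting it can also "fix" the symptom.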

Again, it’s been some time, but hoping this helps you even a little bit for the next upgrade. Congrats on your several otherwise-successful upgrades over the years on the same box!!! No small feat :grinning:

I just did the in-place upgrade from CentOS 8 to Rocky 8, and then did the unsupported in-place upgrade from Rocky 8 to 9.

I used Elevate to accomplish this.

I have 3 MDADM arrays:

1 x RAID 10 (4 drives)
1 x RAID 1 (2 drives)
1 x RAID 6 (7 drives) - this one has a single LV on it, made up of a number of PVs, all on this RAID set.

Although my MPTSAS module did not come across for my LSI controller (I was expecting this), once I installed the KMOD from ELRepo, all of my LVM setup came across with no issues.

I started with Fedora in the early days and worked my way up over the last 11 years through CentOS versions - all the while maintaining and growing the drive sets as money and requirements grew.

My most recent update prior to this was to replace the old Gigabyte motherboard with an ASUS motherboard, to enable me to have 10 Gb fibre back to my main switch.

So I'm not sure why you are seeing the behaviour that you are.


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.