Rocky 9 install : existing logical volumes are not visible

On my home server I have a disk for the OS and home folders, and a (software) RAID 5 array built from four 6 TB disks, with several logical volumes on the array.
After the OS installation the array is visible (/dev/md0 is present), but none of the LVs are visible any more.
I know that when I switched from CentOS 7 to CentOS 8 (and migrated to Rocky 8), all volumes were present after the OS installation.
If I run fdisk on /dev/md0, this is what I get:

The device contains 'LVM2_member' signature …

No partitions are visible (same with gdisk).
Does anyone have an idea how to get the LVs back?

Usually, if you install a new system and LVM volumes already exist, the easiest way is to activate them, so:

vgchange -ay

should activate all available LVM volume groups. After this, the normal vgs and lvs commands should show your volume groups and volumes.
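
For example, after the activation:

vgs
lvs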

Thanks, I’ve tried it, but no result.

Can you post the output for the following commands:

pvscan
vgscan
lvscan

We will need to diagnose and debug this, so copy and paste the results from the console (please do not post screenshots; they are horrible and difficult to read).

So you're saying you migrated from CentOS 7 to CentOS 8 and then to Rocky 8 and "all volumes were present", so at what point did they disappear?

The RAID array and logical volumes were created on CentOS 7. Later I installed CentOS 8 and it just recognized the existing array and volumes. I switched to Rocky 8 using the migration scripts, and all was working as expected. Now I have installed Rocky 9, and I see the array, but no logical volumes (or volume groups) any more. I was expecting that after the fresh installation of Rocky 9 I would be able to mount the volumes. During installation, the disks in the array were not touched.

Hypothesis: RL9 does assemble the RAID array correctly but does not scan it for LVM content.
Question: What can we manually see?
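
For example, does anything LVM-like show up stacked on top of md0? A generic check (adjust the device name if yours differs):

lsblk -o NAME,TYPE,FSTYPE,SIZE /dev/md0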

This is the output:

$ pvscan
PV /dev/sda2 VG rl lvm2 [<110.79 GiB / 0 free]
Total: 1 [<110.79 GiB] / in use: 1 [<110.79 GiB] / in no VG: 0 [0 ]
$ vgscan
Found volume group "rl" using metadata type lvm2
$ lvscan
ACTIVE '/dev/rl/swap' [7.35 GiB] inherit
ACTIVE '/dev/rl/home' [33.93 GiB] inherit
ACTIVE '/dev/rl/root' [69.50 GiB] inherit

The output only shows entries for the sda device, nothing related to the other devices (the RAID array).

Also please:

cat /proc/mdstat

and:

mdadm --detail --scan
mdadm --detail /dev/md0

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde1[4] sdb1[0] sdc1[1] sdd[5]
      17581166592 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/44 pages [0KB], 65536KB chunk

unused devices: <none>

$ mdadm --detail --scan
ARRAY /dev/md/0 metadata=1.2 name=faulhorn:0 UUID=10373e04:bf6b0e0c:544f5700:689d16f4

$ mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sat Dec 29 19:56:29 2018
        Raid Level : raid5
        Array Size : 17581166592 (16.37 TiB 18.00 TB)
     Used Dev Size : 5860388864 (5.46 TiB 6.00 TB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Sep 14 13:07:04 2022
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : faulhorn:0  (local to host faulhorn)
              UUID : 10373e04:bf6b0e0c:544f5700:689d16f4
            Events : 83351

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       5       8       48        2      active sync   /dev/sdd
       4       8       65        3      active sync   /dev/sde1

Well, I'm out of ideas for what to suggest, so I'm going to try to replicate this on my setup: first do everything on Rocky 8, then disconnect the disks from that machine and add them to a Rocky 9 one.

First, I've created the array, with just 3 x 20 GB disks to keep it quick.
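
Roughly with this create command (a sketch; the device names match the mdstat output below):

mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/vdb /dev/vdc /dev/vdd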

[root@rocky8 ~]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md0 : active raid5 vdd[3] vdc[1] vdb[0]
      41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      
unused devices: <none>

Then set up my LVM groups/volumes:
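
Roughly like this (a sketch; the names match the output below and the sizing flag is approximate):

pvcreate /dev/md0
vgcreate rocky-test /dev/md0
lvcreate -n lv-test -l 100%FREE rocky-test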

[root@rocky8 ~]# pvs
  PV         VG         Fmt  Attr PSize  PFree
  /dev/md0   rocky-test lvm2 a--  39.96g    0 

[root@rocky8 ~]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  rocky-test   1   1   0 wz--n- 39.96g    0 

[root@rocky8 ~]# lvs
  LV      VG         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv-test rocky-test -wi-a----- 39.96g  

Mounted it and put some data in it, just as a proof of concept:
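
Roughly these steps (a sketch; sizes chosen to match the listing below):

mkfs.xfs /dev/rocky-test/lv-test
mkdir -p /mnt/raid
mount /dev/rocky-test/lv-test /mnt/raid
dd if=/dev/zero of=/mnt/raid/test.img bs=1M count=1000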

[root@rocky8 raid]# mount | grep raid
/dev/mapper/rocky--test-lv--test on /mnt/raid type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,sunit=1024,swidth=2048,noquota)

[root@rocky8 raid]# ls -lha /mnt/raid/
total 1000M
drwxr-xr-x. 2 root root    22 Sep 15 21:52 .
drwxr-xr-x. 3 root root    18 Sep 15 21:51 ..
-rw-r--r--. 1 root root 1000M Sep 15 21:52 test.img

Then I deactivated the LVM volumes, stopped the array, and verified with the LVM commands:

[root@rocky8 ~]# vgchange -an rocky-test
  0 logical volume(s) in volume group "rocky-test" now active

[root@rocky8 ~]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0

[root@rocky8 ~]# vgs
[root@rocky8 ~]# lvs
[root@rocky8 ~]# pvs

So, now I connected all those disks to the Rocky 9 machine and installed mdadm and lvm2:

[root@rocky9 ~]# mdadm --assemble /dev/md0 /dev/vdb /dev/vdc /dev/vdd
mdadm: /dev/md0 has been started with 3 drives.

[root@rocky9 ~]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md0 : active raid5 vdb[0] vdd[3] vdc[1]
      41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      
unused devices: <none>

So, now we have md0 active.

[root@rocky9 ~]# pvs
  PV         VG         Fmt  Attr PSize  PFree
  /dev/md0   rocky-test lvm2 a--  39.96g    0 

[root@rocky9 ~]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  rocky-test   1   1   0 wz--n- 39.96g    0 

[root@rocky9 ~]# lvs
  LV      VG         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv-test rocky-test -wi------- 39.96g  

So on my Rocky 9 it has detected them perfectly fine. I would suggest you do something like:

mdadm --stop /dev/md0

and then do the assemble command similar to mine, but for yours it would be:

mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd /dev/sde1

and then, after that, see if you can see the LVM stuff using pvs, pvscan, vgs, vgscan, lvs, lvscan. You can also try, if you know exactly what the volume group is called, to do:

[root@rocky9 ~]# vgchange -ay rocky-test
  1 logical volume(s) in volume group "rocky-test" now active

I’ve used my volume group as an example. And for data verification:

[root@rocky9 ~]# mkdir /mnt/raid
[root@rocky9 ~]# mount /dev/rocky-test/lv-test /mnt/raid

[root@rocky9 ~]# ls -lha /mnt/raid/
total 1000M
drwxr-xr-x. 2 root root    22 Sep 15 21:52 .
drwxr-xr-x. 3 root root    18 Sep 15 22:22 ..
-rw-r--r--. 1 root root 1000M Sep 15 21:52 test.img

shows that all is intact.

There is every possibility, though, that even after this it does not show up; in that case it would seem the LVM is somehow damaged, perhaps the metadata. That is the only thing I can think of if it still does not work.

Thanks for your help.
I’ve tried it, but it didn’t work.
What I do see is this:

$ pvdisplay /dev/md0
Cannot use /dev/md0: device is not in devices file

Try some of the commands listed here; it goes into troubleshooting LVM, maybe even with failed devices: Chapter 17. Troubleshooting LVM, Red Hat Enterprise Linux 8 | Red Hat Customer Portal

I wouldn't particularly look at removing LVM things or restoring metadata at this point in time. But the commands for failed devices might help list what you have on that device. As I said, that assumes it's not irreparably damaged.

The section on LVM RAID also isn't applicable, since your RAID is done with mdadm and not via LVM.
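
If it helps, some generic LVM diagnostic commands in that spirit (a sketch, not a specific recipe):

pvs -a
lvs -a -o +devices
pvck /dev/md0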

A web search for lvm "device is not in devices file" does not find much, but there seems to be lvmdevices(8): main - device usage based on devices file - lvm2-commits - Fedora Mailing-Lists
The default seems to be to not use such a file:

$ grep use_devicesfile /etc/lvm/lvm.conf
	# Configuration option devices/use_devicesfile.
	# use_devicesfile = 0

Even though the file seems to be created at installation:

$ cat /etc/lvm/devices/system.devices
# LVM uses devices listed in this file.
# Created by LVM command lvmdevices pid 3281 at Fri Jul 22 12:26:01 2022
VERSION=1.1.1
IDTYPE=devname IDNAME=/dev/sda1 DEVNAME=/dev/sda1 PVID=3y..YN PART=1
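
If the devices file is actually in effect, LVM will ignore anything not listed in it. To check what is really being used (a sketch, untested here):

lvmconfig devices/use_devicesfile
lvmdevices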

Another hit describes how CentOS Stream 9 had an issue with LV on MD: https://bugzilla.redhat.com/show_bug.cgi?id=2002640
Alas, that (on a brief glance) had different symptoms.

Tried a lot; nothing seems to work.
I restored the Rocky 8 installation; the array is healthy and the data on the volumes is accessible. I had a recent backup of the data, so there was no stress about losing data. I think I will make a new backup and then have another go at it.
Would vgexport/vgimport be an option?

Whoops, I'm too late.
Just had some ideas:
Maybe the RAID is not named /dev/md0; I had something like this earlier on various Linux distros.
So if you run into the same thing, run lsblk to see whether it is perhaps named differently.

Also (as far as I remember) you might try to add the RAID's UUID to the kernel cmdline explicitly, like rd.md.uuid=<UUID>
Note: the RAID "UUID" has a different (unusual) format of 4 hex numbers separated by colons.

You can get the RAID UUID before upgrading, on Rocky 8, with the following command:

mdadm --detail /dev/mdX | grep UUID
(replace X by the actual device number)
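
On Rocky the parameter could be made persistent with grubby, for example (a sketch, using the UUID from the mdadm output earlier in the thread):

grubby --update-kernel=ALL --args="rd.md.uuid=10373e04:bf6b0e0c:544f5700:689d16f4"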

Oh, and I forgot to add: unless the underlying RAID is detected (which should be shown with lsblk), any LVM-related actions are useless.

@felfert, in this post: Rocky 9 install : existing logical volumes are not visible - #10 by BGO, it's confirmed as md0. The mdadm detail in that post shows it.

I stand corrected; I somehow overlooked that.

Did a fresh Rocky 8.6 install:

$ pvscan
PV /dev/sda2 VG rl lvm2 [110.79 GiB / 0 free]
PV /dev/md0 VG vg-datastore lvm2 [16.37 TiB / 4.66 TiB free]
Total: 2 [16.48 TiB] / in use: 2 [16.48 TiB] / in no VG: 0 [0 ]
$ vgscan
Found volume group "rl" using metadata type lvm2
Found volume group "vg-datastore" using metadata type lvm2

All volumes are also visible and mountable.
Going to try a Rocky 9 again.

Did a fresh Rocky 9 installation; the result was the same as after the previous 9 install: no volumes on the array were visible (but the array was active). The remark from @jlehtone gave me the idea, and this did the trick:

$ lvmdevices --adddev /dev/md0
$ vgchange -ay vg-datastore

After this, the volumes became visible and mountable!
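
The usual checks to confirm are along these lines (LV name and mount point are placeholders):

lvmdevices
lvs vg-datastore
mount /dev/vg-datastore/<lvname> /mnt/<mountpoint>
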
Is this what we can expect after a fresh install, or should the volumes be ready like after the Rocky 8 installation?