Mounting a volume with conflicting vg names on a host

This is a “how to” I’m sharing which arises out of another post but warrants its own (I think).

The details here relate to qcow2 images, but the vg conflict / renaming steps should apply to any volume mounted on the host, however it got there.

You can mount qcow2 images on a host as local nbd devices. This can make maintenance easier than working via a vm - for starters, the volumes are “offline”. That said, it is probably a little-used (and little-tested) facility, and I seem to recall issues with copying large files, possibly due to syncing/buffering - I can’t be sure; it could have been a number of things and may not have been qemu-related at all. Anyway, the usual “no warranties” disclaimer applies.

However, you won’t be able to access the files if the image contains LVM volume groups whose names are already in use on the host. You’ll have to rename them; here’s how:

In this example we have a vg on the host named “rl” and a conflicting vg on the img named “rl”. We will temporarily rename the img vg to “vd”.

modprobe nbd                                        # load kernel nbd drivers if not already loaded
qemu-nbd -c /dev/nbd0 -f qcow2 rl-live.qcow2        # connect img as a local nbd device (will not be available over network)
cryptsetup open /dev/nbd0p2 ncrypt                  # optional: open luks volume if you have it

vgs                                                 # show general vg info
vgs -o vg_name,vg_uuid,pv_uuid,pv_name              # interested in PV UUID of our VG for later

# Because there are 2 rl vgs, most commands just error out unless we filter - test that our filter works before using it ...

vgs --config 'devices{filter=["a|ncrypt.*|","r|.*|" ]}' -o vg_name,vg_uuid,pv_uuid,pv_name # should return our vg and only our vg. if your PV device is /dev/sdc try sdc instead of ncrypt etc.

vgcfgbackup --config 'devices{filter=["a|ncrypt.*|","r|.*|" ]}' -f rl.cfg rl # create a backup of the vg config

cat rl.cfg # check the data corresponds to the PV - it should contain the correct pv id which identifies the vg we are going to change

sed -i.bak -E 's/^rl(\s*\{)/vd\1/' rl.cfg # replace rl with vd ; change as appropriate/reqd
sdiff -s rl.cfg rl.cfg.bak # check we made the right change  
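If you want to sanity-check the substitution itself first, here is a throwaway demo on a made-up, minimal stand-in for rl.cfg (a real file holds the full vg metadata; the id value below is invented):

```shell
# Hypothetical stand-in for the top of rl.cfg - only the line where the
# vg name opens the top-level block should match the sed pattern.
printf 'rl {\nid = "c70c76-sSDO"\n}\n' > demo.cfg
sed -i.bak -E 's/^rl(\s*\{)/vd\1/' demo.cfg
head -n1 demo.cfg        # now reads: vd {
```

The id line is untouched because the pattern is anchored to the start of the line.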

vgcfgrestore -f rl.cfg vd # rename the vg

You get the following:
  WARNING: VG name rl is used by VGs 8up28R-g2qw-tzJ7-e14F-Zuum-B4RH-JsGU9p and c70c76-sSDO-EjZd-Zfuc-w3Hw-Da3Y-GM9lf6.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  Restored volume group vd.
  
Alarming yes, but nothing to worry about...

vgchange -ay vd # activate the vg

You should now be able to mount any file systems in the vg
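For example - the LV names below are guesses (check with lvs first), and /mnt/img is just an arbitrary mount point:

```shell
lvs vd                        # list the logical volumes in the renamed vg
mkdir -p /mnt/img
mount /dev/vd/root /mnt/img   # "root" is a typical LV name - use your own
ls /mnt/img                   # browse the guest filesystem "offline"
```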

Going back..... (which you must do, as the guest’s startup files will reference rl; unless you change them too, the VM won’t boot with the altered vg name)

umount <path>       # for each filesystem mounted from /dev/nbd0 or any of its partitions, luks volumes, vgs etc.
vgchange -an vd     # deactivate the vg
vgcfgrestore -f rl.cfg.bak rl # rename the vg back to rl - this one is REALLY SCARY ...
  Volume group rl has active volume: home.
  Volume group rl has active volume: swap.
  Volume group rl has active volume: root.
  WARNING: Found 3 active volume(s) in volume group "rl".
  Restoring VG with active LVs, may cause mismatch with its metadata.
Do you really want to proceed with restore of volume group "rl", while 3 volume(s) are active? [y/n]: y
  WARNING: VG name rl is used by VGs 8up28R-g2qw-tzJ7-e14F-Zuum-B4RH-JsGU9p and c70c76-sSDO-EjZd-Zfuc-w3Hw-Da3Y-GM9lf6.
  Fix duplicate VG names with vgrename uuid, a device filter, or system IDs.
  Restored volume group rl.

..but it’s OK ... it’s using the PV in rl.cfg.bak, which is the right one.

cryptsetup close ncrypt # close the luks volume

qemu-nbd -d /dev/nbd0 # disconnect img

I have opened an issue with Red Hat to provide a rename option that works under the above circumstances without all the hoopla, and to address the ambiguous/alarming warning messages.

EDIT

Alternative approach (simpler / no filters, same warnings as above)

vgs -o vg_name,vg_uuid,pv_uuid,pv_name              # interested in PV and VG UUIDs of our VG for later

vgrename <VG_UUID> vd   # vgrename will rename the inactive "rl" to "vd" when it is identified by VG_UUID

vgchange -ay vd # activate the vg

You should now be able to mount any file systems in the vg

Going back..... (vgrename will not rename "vd" back to "rl" while an existing "rl" is present - even with --force and "vd" inactive - so we have to use the config backup/restore method)

vgcfgbackup -f vd.cfg vd # create a backup of the vd config
cat vd.cfg # check the data corresponds to the PV - it should contain the correct pv id which identifies the vg
sed -i.bak -E 's/^vd(\s*\{)/rl\1/' vd.cfg # replace vd with rl ; change as appropriate/reqd
sdiff -s vd.cfg vd.cfg.bak # check we made the right change  
umount <path>       # for each filesystem mounted from /dev/nbd0 or any of its partitions, luks volumes, vgs etc.
vgchange -an vd     # deactivate the vg
vgcfgrestore -f vd.cfg rl # rename the vg back to rl

If it’s simple maintenance, e.g. not needing to copy files between the host and the VM, then you can do this:

virt-rescue -a my_vm.qcow2

and then you can do something like this:

The virt-rescue escape key is '^]'.  Type '^] h' for help.

------------------------------------------------------------

Welcome to virt-rescue, the libguestfs rescue shell.

Note: The contents of / (root) are the rescue appliance.
You have to mount the guest's partitions under /sysroot
before you can examine them.

><rescue> mount /dev/sda2 /sysroot

><rescue> ls /sysroot
bin   dev  home  lib32	libx32	    media  opt	 root  sbin  swap.img  tmp  var
boot  etc  lib	 lib64	lost+found  mnt    proc  run   srv   sys       usr

><rescue> chroot /sysroot /bin/bash
root@(none):/# 
root@(none):/# 
root@(none):/# ls
bin   dev  home  lib32	libx32	    media  opt	 root  sbin  swap.img  tmp  var
boot  etc  lib	 lib64	lost+found  mnt    proc  run   srv   sys       usr

so from my /sysroot directory I can now edit fstab, or whatever else I need to do. I’m now also chrooted, so I can also fix the bootloader (grub) if needed. Obviously I’m not using LVM here, but the same procedure would apply for mounting the appropriate LVM volumes, and then do whatever you need to do without having to change the LVM volume group names, etc.
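For an LVM guest, the equivalent inside virt-rescue would look roughly like this (the vg/lv names are guesses - list them first). Note there is no vg-name clash to work around here, because the rescue appliance is its own minimal environment:

```shell
# Run at the ><rescue> prompt; names are illustrative
vgchange -ay                  # activate any vgs found on the guest disk
lvs                           # see which logical volumes exist
mount /dev/rl/root /sysroot   # mount the guest root LV under /sysroot
chroot /sysroot /bin/bash     # then fix fstab, grub, etc.
```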

Obviously if you need to copy files between the machines, then your process can help, or you can just boot the VM from an ISO image like system-rescue-cd, enable networking, and then copy files across that way as well, without having to go through the potential minefield of changing LVM stuff that might stop your system from booting later. If it’s a simple case of fixing something that is failing during boot, then virt-rescue can make this a little simpler.


Cheers for that @iwalker

I wasn’t aware of virt-rescue, so that’ll be useful, and in hindsight launching a VM on an alternative system volume and using ssh is probably a better approach.
