Grub reboot problem

I am running into an issue after editing and writing out the boot loader configuration. Upon reboot, I get the GRUB menu, select RL 9.3, then the three dots, then end up in an emergency shell. The /run/initramfs/rdsosreport.txt shows the following:

About 5 seconds into boot:

Scanning devices sda3 for LVM logical volumes rl_localhost-live/root rl_localhost-live/swap
Volume group "rl_localhost-live" not found
Cannot process volume group rl_localhost-live

then, about 2 minutes later:

Warning: dracut-initqueue: timeout, still waiting for the following initqueue hooks:
Warning: /lib/dracut/hooks/initqueue/finished/devexists-\x2fdev\x2fmapper\ "if ! grep -q /run/systemd/generator/systemd-cryptsetup@*.service 2>/dev/null; then [ -e "/dev/mapper/rl-root" ]; fi"
Warning: /lib/dracut/hooks/initqueue/finished/devexists-\x2fdev\x2frl_localhost-live\ "[ -e "/dev/rl_localhost-live/root" ]"
Warning: /lib/dracut/hooks/initqueue/finished/devexists-\x2fdev\x2frl_localhost-live\ "[ -e "/dev/rl_localhost-live/swap" ]"
Warning: dracut-initqueue: starting timeout scripts

it then repeats those same warnings continually over the next minute,

finally ending on:

Warning: Could not boot
Starting Dracut emergency shell

FYI: I did follow the comments regarding writing the bootloader changes:

sudo grub2-mkconfig -o /boot/grub2/grub.cfg --update-bls-cmdline

Any help would be much appreciated. Obviously it isn't finding rl_localhost-live, but I'm not sure where to go to correct it from here.

This was moved from another area on this site; for correct context, I was following the instructions on installing Rocky Linux from here:

The Ultimate Rocky Linux Install Guide with NVIDIA Drivers

Good morning, ctozzi. In the GRUB menu, press e on the kernel entry you are booting and check what it shows for the root mount point. Is the volume group name “rl_localhost-live” actually correct for your system? That name looks like it belongs to the live disc, and your actual volume group's name might be something different.

If the logical volume used for root looks different from the real one, edit it, then press Ctrl+X to boot with your temporary changes. Once you at least have the root directory mounted, you can fix /etc/fstab, or look for the LVM backups in /etc/lvm and restore the volume group if that is the problem.
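As a concrete sketch of what that GRUB edit amounts to (the names here are hypothetical; substitute whatever lvs/vgs actually report on your system), the kernel command line's root= and rd.lvm.lv= arguments must all name the real volume group:

```shell
# Hypothetical example: the broken cmdline names a VG that does not exist
# ('rl_localhost-live'); suppose the real VG on this system is 'rl'.
cmdline='root=/dev/mapper/rl_localhost--live-root ro rd.lvm.lv=rl_localhost-live/root rd.lvm.lv=rl_localhost-live/swap rhgb quiet'

# Replace the wrong VG name everywhere it appears. Note the doubled
# hyphen in the /dev/mapper form is part of the same name, so both
# spellings have to be rewritten.
fixed=$(printf '%s' "$cmdline" | sed -e 's/rl_localhost--live/rl/g' -e 's/rl_localhost-live/rl/g')
echo "$fixed"
# -> root=/dev/mapper/rl-root ro rd.lvm.lv=rl/root rd.lvm.lv=rl/swap rhgb quiet
```

In the GRUB editor you make the same substitution by hand on the linux line before pressing Ctrl+X.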

As @alexia mentions, boot in rescue mode if that still works from GRUB, or boot a Rocky ISO in rescue mode so that it can find your LVM partitions, etc. Check the LVM volume group name as well as the logical volume names, and fix them from the point in the linked article shown below:

sudo nano /etc/default/grub

GRUB_CMDLINE_LINUX="resume=/dev/mapper/rl_localhost--live-swap crashkernel=auto rhgb quiet nouveau.modeset=0 rd.driver.blacklist=nouveau"

The line above is shown only for reference on what you need to find and edit. I stress that your line should not look like the above - the LVM names were unique to that system only.

What the article should have done was ask you to append nouveau.modeset=0 to your existing GRUB_CMDLINE_LINUX line, instead of saying “make your line look like the above”. Systems vary, including in the choice of leaving the default LVM group/volume names, so copying that whole line verbatim will break boot on any system whose names differ.
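Appending rather than overwriting can even be done mechanically. A sketch (working on a throwaway copy of the file; on a real system you would edit /etc/default/grub in place, with a backup, and then re-run grub2-mkconfig): the sed expression tacks the two nouveau options onto the end of whatever GRUB_CMDLINE_LINUX already contains, leaving the system-specific parts alone.

```shell
# Illustrative copy of a stock /etc/default/grub; the resume= value
# here is hypothetical and stays untouched.
cat > /tmp/grub.example <<'EOF'
GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX="resume=/dev/mapper/rl-swap crashkernel=auto rhgb quiet"
EOF

# Append the nouveau options inside the existing closing quote.
sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 nouveau.modeset=0 rd.driver.blacklist=nouveau"/' /tmp/grub.example

grep '^GRUB_CMDLINE_LINUX' /tmp/grub.example
# -> GRUB_CMDLINE_LINUX="resume=/dev/mapper/rl-swap crashkernel=auto rhgb quiet nouveau.modeset=0 rd.driver.blacklist=nouveau"
```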

The article/forum post has been removed from the forum due to the problems it was causing.

I’ve updated the video to state to append nouveau.modeset=0 as well. Sorry for the confusion this has caused.

Yes, like Ian said, if you are able to boot from a live CD, simply run sudo lvs to see your logical volumes and sudo vgscan to see your volume groups, then fix your /etc/fstab with the correct names for your device mappers:

/dev/mapper/myvolumegroup-root /
/dev/mapper/myvolumegroup-home /home

and so on.
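One detail worth knowing when writing these /dev/mapper paths: device-mapper escapes any hyphen that is part of a VG or LV name by doubling it, which is why the article's swap device appeared as rl_localhost--live-swap. A quick sketch of the rule, using bash string substitution and hypothetical names:

```shell
# device-mapper joins VG and LV with a single '-', and doubles any
# '-' that belongs to the names themselves.
vg='rl_localhost-live'
lv='root'
printf '/dev/mapper/%s-%s\n' "${vg//-/--}" "${lv//-/--}"
# -> /dev/mapper/rl_localhost--live-root
```

So a single hyphen in a mapper path is always the VG/LV separator, and a doubled one is a literal hyphen in the name.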

You will have to fix the resume entry as well, replacing the wrong volume group with your actual one. On Rocky that is the resume=/dev/mapper/... parameter on the kernel command line in /etc/default/grub (the /etc/initramfs-tools/conf.d/resume file is the Debian/Ubuntu equivalent).

And lastly, in /boot/grub2/grub.cfg (or /boot/grub/grub.cfg on other distros), find and replace all instances of the wrong volume group name with the good one - or better, regenerate the file with sudo grub2-mkconfig -o /boot/grub2/grub.cfg after fixing /etc/default/grub.

Save and reboot.

Thanks all for the replies. My confusion arose from the “rl_localhost-live” in the walkthrough, but on my system, the correct paths were rl/root and rl/swap etc. It took getting some sleep and looking at it with clear eyes to determine the issue, which in retrospect seems simple enough.

Unfortunately I’m still running into issues getting this to work with dual GPUs and dual monitors. I’m trying to migrate over to RHEL/RL as it is recommended as the Visual Effects Reference Platform by industry groups (and it’s what I work on when working at VFX companies), but admittedly I’ve always run into the roadblock of getting NVIDIA drivers and dual-monitor support working comfortably.

While I can hack away at Linux and keep trying, I’m not an IT Pro by any means, and while I like the challenge, it’s definitely humbling.

We’ve all been there :slight_smile: I started back in 2005 and have been using it every day since. Things were a lot more hit and miss back then, especially with hardware - wireless for one was a pain, manually configuring wpa_supplicant, etc. I never got an ATI Radeon working despite a Linux driver existing, though my NVIDIA cards back then pretty much always worked. The addition of the NVIDIA drivers to repositories has made installing them much easier nowadays in many cases. The problems sometimes come from figuring out which driver version is needed, be it the nvidia 340 series or whatever number. A laptop with Fedora upgraded the nvidia 340 driver to 490 (forgive me if the numbers are slightly off) and then it failed to work; reverting back to the 340 made everything good again.

You’ll definitely get there in the end - a little perseverance and all will be good :slight_smile:

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.