Boot process for Rocky Linux 9 taking a long time

Good evening. It’s England vs Senegal in the coming hours.

Before then, I would like to post an issue I am experiencing with my Rocky Linux 9.1 (Blue Onyx) running on an ASUS N56V, quite an old machine but an excellent performer. I forgot to mention that it has an NVIDIA GeForce 740M graphics card.

My issue is that from the time I press the power button to the time the OS loads the kernel, it takes more than 5 minutes. The boot process is very, very slow.

Could there be some tweaks to help speed up the process? Please advise.

Does it still have the original 5400 rpm hard disk in it? Whilst disabling services might help to a certain extent, I doubt very much it’s due to this. That said, you could press ESC while the system is booting so you can see all the services starting, and check whether it spends too much time trying to do something; if so, tell us which service.
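Once the machine is finally up, systemd itself can also report where startup time went. A minimal sketch, using a made-up sample of systemd-analyze blame output (the unit names and timings are hypothetical) so the filter runs self-contained; on a real system you would pipe the live output instead:

```shell
# Hypothetical sample of `systemd-analyze blame` output; on a real
# system replace the variable with:  blame="$(systemd-analyze blame)"
blame='1min 30.123s NetworkManager-wait-online.service
     12.004s firewalld.service
      3.210s sssd.service'

# Keep only units that took minutes, or more than ~10 seconds
echo "$blame" | awk '/min/ || $1 + 0 > 10 { print }'
```

systemd-analyze critical-chain is also worth a look: it shows the chain of units that actually gated boot completion, rather than everything that was merely slow.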

Changing your 5400 rpm disk for an SSD does wonders for old machines. Those 5400 rpm disks run like a dog (seriously slow). I had one in an MSI laptop, and replaced it with a Samsung EVO 860 1TB.

It might actually be good having a slow disk in this case: more chance of seeing what’s wrong as the text scrolls up the screen during boot.

That rather sounds like a process waiting for its timeout (whatever the cause might be) during startup. @Jil, can you hit the <Esc> key once grub has finished, so that there is a chance of seeing whether something’s going wrong?

Edit:
Sorry, I just noticed that @iwalker already asked for exactly the same. :grimacing:


Hello Team,

Been quite held up with an AC failure in our Data Center.

I will update you all as soon as I can breathe.

I have a very similar issue. I have deleted quiet and rhgb from the kernel command line in grub.cfg, so that I get progress updates as boot proceeds. But the interval from the moment the grub selection screen exits to the first progress report (which is something like “probing EDD”) is over 60 seconds, during which there is very occasional brief disc activity but otherwise nothing appears to be happening.

This started some years ago, I think when I switched to CentOS 8 - all was OK with CentOS 7. I’m sure that this is not due to having slow discs, as for most of the waiting time there is no disc activity at all. Any ideas on how to work out what the problem is would be most welcome.
Thanks, Roger.

But the title says Rocky 9, and you are saying CentOS 8, so which exact machine are we talking about, and when exactly did it start?

It first started when I upgraded from CentOS 7 to CentOS 8. Since then I’ve switched to CentOS 8.4, then Rocky 8.4, now Rocky 9.1 - and the issue has persisted ever since it first started.
Sorry if that wasn’t clear to start with.

Does this delay happen when booting the install media?

Assuming that RL9 is a fresh install did this delay happen on initial reboot after install or after you migrated settings from prior OS?

Can you install the system information tool “inxi”? You will need the EPEL and CRB repos enabled. Example:

 dnf install -y epel-release
 dnf config-manager --set-enabled crb

Once installed, issue the command inxi -MG, thus:

$ inxi -MG
Machine:
  Type: Desktop System: Gigabyte product: B450M DS3H v: N/A
    serial: <superuser required>
  Mobo: Gigabyte model: B450M DS3H-CF v: x.x serial: <superuser required>
    UEFI: American Megatrends v: F50 date: 11/27/2019
Graphics:
  Device-1: AMD Picasso/Raven 2 [Radeon Vega Series / Radeon Mobile Series]
    driver: amdgpu v: kernel
  Display: server: X.Org v: 1.20.14 driver: X: loaded: modesetting
    dri: radeonsi gpu: amdgpu resolution: 1: 1920x1200~60Hz 2: N/A
  API: OpenGL Message: Unable to show GL data. Required tool glxinfo
    missing.

This will give us a good idea of the hardware and its age.
Do you know if you are booting in UEFI mode or BIOS mode?

Are you booting from a spinner or an SSD? Is this a RAID setup?
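On the UEFI-versus-BIOS question, there is a quick check that avoids digging through firmware menus: the kernel populates the efi directory under sysfs only when it was booted via UEFI. A small sketch:

```shell
# /sys/firmware/efi is populated by the kernel only on UEFI boots
if [ -d /sys/firmware/efi ]; then
    echo "UEFI"
else
    echo "BIOS (legacy)"
fi
```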

Does this delay happen when booting the install media?

Not when booting the install media.

Assuming that RL9 is a fresh install did this delay happen on initial
reboot after install or after you migrated settings from prior OS?

As far as I remember, both.

Once installed, issue the command inxi -MG, thus:

Instead I ran

inxi -MGDz

which gave

Machine:
  Type: Server Mobo: Intel model: S5520UR v: E22554-751 serial:
  BIOS: Intel v: S5500.86B.01.00.0064.050520141428 date: 05/05/2014
Graphics:
  Device-1: Matrox Systems MGA G200e [Pilot] ServerEngines driver: mgag200
    v: kernel
  Display: x11 server: X.Org v: 1.20.11 with: Xwayland v: 21.1.3 driver: X:
    loaded: modesetting unloaded: fbdev dri: swrast gpu: mgag200
    resolution: 1440x900~60Hz
  API: OpenGL v: 4.5 Mesa 22.1.5 renderer: llvmpipe (LLVM 14.0.6 128 bits)
Drives:
  Local Storage: total: 464.73 GiB used: 378.58 GiB (81.5%)
  ID-1: /dev/sda model: MR9240-4i size: 464.73 GiB

This will give us a good idea of the hardware and its age.
Do you know if you are booting in UEFI mode or BIOS mode?

BIOS boot.

Are you booting from a spinner or an SSD? Is this a RAID setup?

Spinning disc. RAID setup. In particular, since the start of CentOS 8 I’ve had to use an ELRepo kernel module, because RHEL sadly removed the RAID driver I needed from the supported set. Specifically, this module is

kmod-megaraid_sas-07.719.03.00-2.el9_1.elrepo-x86_64.rpm

and of course the time this issue started coincided exactly with when I began needing this module. But this very long pause clearly isn’t a hard requirement, as with CentOS 7 and earlier there was no such delay.

Thanks, Roger.

So now I think we have got to the point where someone here (not me) can provide some meaningful help. I have no experience with RAID arrays, but from the symptoms described, the initial initramfs image is having trouble assembling the array, though it succeeds in the end. It could be that you need an extra kernel parameter, or that the image needs modification to include some module or path.
With this new and valuable information, another expert will have to take it from here.
And may this be the beginning of a happier new year.
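Since the suspicion is that the initramfs is missing (or struggling with) the out-of-tree megaraid_sas driver, one thing worth checking is whether the module actually made it into the current image, and rebuilding with it explicitly included if not. The sketch below only echoes the commands rather than executing them, since lsinitrd and dracut need root and a real /boot; drop the echo to run them for real:

```shell
# Print (not run) the check/rebuild commands for the running kernel.
# lsinitrd lists the initramfs contents; dracut --add-drivers forces
# the named module into a rebuilt image. Root required to run for real.
kver=$(uname -r)
echo lsinitrd "/boot/initramfs-${kver}.img" '| grep -i megaraid'
echo dracut --force --add-drivers megaraid_sas "/boot/initramfs-${kver}.img" "${kver}"
```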

I faced the same problem. For me, I edited this file.

sudo vi /etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service

Then find the row that contains

Environment=NM_ONLINE_TIMEOUT=60

And change it to

Environment=NM_ONLINE_TIMEOUT=5

And then reboot. It should be faster than before.
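A small aside on that edit: the file under /etc/systemd/system/network-online.target.wants/ is typically a symlink to the packaged unit, so an RPM update can silently overwrite the change. A drop-in override survives upgrades; a sketch, assuming the same shortened timeout as above:

```ini
# /etc/systemd/system/NetworkManager-wait-online.service.d/override.conf
# created interactively with:  systemctl edit NetworkManager-wait-online.service
[Service]
Environment=NM_ONLINE_TIMEOUT=5
```

followed by systemctl daemon-reload (systemctl edit does this for you).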

Lunthi,

Thank you for these comments.

However, NetworkManager waiting is not the problem: the excessive amount of time is spent not after the system boots while NetworkManager is starting up, but between the GRUB screen and the first line of the boot process, long before the second phase of booting even starts, let alone NetworkManager.

More than a minute passes immediately after the GRUB screen disappears before “Probing EDD…” appears (if “rhgb quiet” has been removed from the kernel command line in grub.cfg), after which the boot process starts.

If anybody can explain this I would really appreciate it.

It would also be useful if anybody else who is using the megaraid_sas kernel module could tell me whether they also observe this long delay, as I first noticed it after RHEL withdrew this module from their standard kernel and I had to install it separately.

Ian, in response to your post below (as the system won’t allow me to reply to you for some reason): I’ve tried your suggestion, and the only change I could see was that the “Probing EDD…” line no longer appears (incidentally confirming that I’d correctly followed your instructions). The one-minute delay between leaving the GRUB screen and starting the boot process proper is unchanged. Thank you for the suggestion though!
Thanks,
Roger.

You could try adding:

GRUB_CMDLINE_LINUX="edd=off"

to /etc/default/grub, updating the grub config using grub2-mkconfig (that is the tool’s name on Rocky), and then see if that helps? Just an idea.

If GRUB_CMDLINE_LINUX already has entries, just append to the existing list. Either that, or you can add it manually for a one-time-only test when you see the grub boot menu: if I remember right, you press e to edit the selected boot entry, append edd=off to its linux line, and boot it with CTRL-X.
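For the persistent route on Rocky/RHEL, grubby can append the parameter to every installed kernel without hand-editing anything. Again echoed rather than executed here, since both commands need root and a real bootloader:

```shell
# Print (not run) the two alternatives for adding edd=off persistently:
# grubby edits every installed kernel's command line directly, while
# grub2-mkconfig regenerates grub.cfg after /etc/default/grub is edited.
echo grubby --update-kernel=ALL --args=edd=off
echo grub2-mkconfig -o /boot/grub2/grub.cfg
```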

Just to get stuff in chronological order again, thank you Ian for your suggestion - please see the post above your reply for how it didn’t work.

Update: I eventually found, after a lot of work, that the problem appears to be down to an interaction between grub and the particular disks I was using. Although these disks worked fine and without excessive delays once the system was up and running, under grub they appear to need approx 0.5 revolutions extra per block read. On changing the disks - but not the RAID controller - this delay is totally eliminated.

Moreover, when both types of disk are present, whichever disk grub itself is booted from, doing a grub initrd command on an initramfs file on the slow-type disks takes more than a minute longer than if the same file is put on the fast-type disk - although copying the file into shared memory, once the system is up, takes the same amount of time from both disk types (substantially less than 1 second). Both disks were initially set up in exactly the same way.