Kernel 5.14.0-362.13.1.el9_3 broken?

The kernel above doesn’t start.
GRUB hangs with a shim signature error :frowning:

Yes, this is known. Please boot into the previous kernel and uninstall the new kernel. We’re looking into a fix.

Apologies for the inconvenience and the delay in messaging. It appears that non-SB-signed packages somehow made it into the BaseOS repository. We’ve pushed the corrected packages to our tier 0, and active mirrors should now have them synced. We’re investigating how the non-SB packages got into BaseOS/RT/NFV so we can hopefully avoid this in the future, and we may push out an updated kernel package version bump to alleviate any further issues that crop up.

If you are on an architecture other than x86_64, such as aarch64, the below does not apply to you; aarch64 does not currently have secure boot.

If a user on x86_64 has updated to 5.14.0-362.13.1.el9_3, they are recommended to verify and potentially reinstall the kernel packages (whether or not they’re using secure boot). Affected systems should be few and far between, as these updates were released only a few hours ago.
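A quick way to see whether Secure Boot is even active on a given machine is `mokutil`; a minimal sketch, assuming the `mokutil` package is installed (it is on most Rocky Linux installs):

```shell
# Report whether this machine booted with UEFI Secure Boot enabled.
# On a Secure Boot system this prints "SecureBoot enabled".
if command -v mokutil >/dev/null 2>&1; then
    mokutil --sb-state || echo "Not a UEFI boot (no EFI variables available)"
else
    echo "mokutil not installed"
fi
```

On a legacy-BIOS or non-UEFI boot, `mokutil --sb-state` reports that EFI variables are unsupported, which also tells you the shim signature issue cannot apply.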

We recommend verifying the build host of the kernel-core package first.

rpm -qi kernel-core-5.14.0-362.13.1.el9_3.x86_64 | grep "Build Host"
Build Host  :

If the build host does not match the above, you likely still have the broken packages installed. If so, you can attempt:

dnf clean all
dnf reinstall kernel*

After, you can verify the package as previously shown.

As far as I can see, 91 mirrors have the fixed packages. Some mirrors never even synced the erroneous packages, and some are only just now syncing the updates pushed throughout the week. As far as I can tell, those 91 are the active mirrors that will show up in a mirrorlist query.


The correct command on my system is:
rpm -qi kernel-core-5.14.0-362.13.1.el9_3.x86_64 | grep "Build Host"
because the field name contains a space character.
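The quoting matters because of how the shell splits arguments; a small demonstration (the sample line below is made up, since real build host values are redacted here):

```shell
# Simulate one line of `rpm -qi` output (hypothetical value).
line="Build Host  : example.invalid"

# Unquoted `grep Build Host` would use "Build" as the pattern and try to
# read a file named "Host", failing with "Host: No such file or directory".
# Quoting makes the whole field name a single pattern:
echo "$line" | grep "Build Host"
```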

The issue is still present, even with the correct RPM. It seems to depend on the host hardware: I have the issue on two PowerEdge R220s, but no issue on a Proxmox virtual machine.

If your physical systems are having issues, the issue will likely be with your system’s firmware, not with our packages. I would ensure that your physical servers are fully up to date.

Except that these systems booted properly with the previous kernel version. If you’re talking about the BIOS and HBA firmware, they’re all up to date. Now I will have to move to elrepo’s kernel-ml to keep the kernel up to date without breaking my systems every time I perform an update.

Nothing has changed in how we sign our secure boot packages: the same certificates, keys, and infrastructure. If Proxmox (and libvirt, in my case) is working fine, the issue is isolated to the systems that are not working.

You are recommended to check every installed kernel package and your system’s firmware. If you manipulated the firmware key store, it is on you to make corrections to it.

I should note that elrepo is not officially signed for secure boot, unless you are manually importing their keys.

Why are you talking about secure boot? I’m not even using it…
I’ve just switched from kernel-core-5.14.0-362.13.1.el9_3.x86_64 to kernel-ml-core-6.6.7-1.el9.elrepo.x86_64, and now it can boot properly, like it did on previous kernel-core release from Rocky repo.

This is precisely what this entire thread is about, a “shim signature error” which is a secure boot issue.

If you are having a completely different, non-secure boot issue, please open a new thread and/or open a bug report:

Well, I think multiple things got broken with this release. In my case, one host hung at “random: crng init done”, and the other looped with the LVM volume not being detected.

As for the “random: crng init done” hang: pressing Enter should give you a prompt to log in, since the system is waiting for input. Perhaps this works for you?

You can probably install haveged or rng-tools to feed entropy without having to generate any input yourself.
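You can check how much entropy the kernel currently reports before deciding whether a userspace feeder is worth it; a minimal sketch (the `dnf`/`systemctl` lines are the suggested follow-up, shown as comments since they need root):

```shell
# Entropy estimate available to the kernel RNG. On recent kernels this
# sits at 256 once the CRNG is initialized; a persistently low value at
# boot is what haveged/rng-tools are meant to address.
cat /proc/sys/kernel/random/entropy_avail

# Suggested follow-up if early-boot entropy really is the problem:
#   dnf install haveged
#   systemctl enable --now haveged
```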

As for the LVM issue, we would need more information than “it looped”; that alone isn’t much to go on.

I’m having problems with this kernel in a virtualization context. Since the update, booting requires manual intervention. After several tests, I concluded it is an environment-specific problem; the kernel itself seems unhealthy. My recommendation is therefore to prevent any unattended kernel updates.

What kind of problem? With the kernel inside a VM, or with the kernel on a server acting as a VM host, e.g. KVM/libvirt? I have a Rocky 9 VM running this kernel without problems.

For me the repacked kernel works fine on VMware and KVM.

Not on the hypervisor; in the VM. Anyway, the problem doesn’t seem to be coming back. I do an upgrade in each build, so the kernel running now may not be exactly the problematic one.

$ rpm -qi kernel-core-5.14.0-362.13.1.el9_3.x86_64 | grep 'Build Host'
Build Host :

There were issues with the first build, but from your post you have the new one now. The build host shown is the verification that it’s the correct one. I don’t know exactly what went wrong with the previous build or why; I believe there is a post on this forum about it.

No prompt, but I gave up and switched to elrepo ml kernel, no issue so far.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.