Rocky 9.4 not booting on Hyper-V when network interface connected

Hi team!
We are having a strange issue with the recently released Rocky 9.4 on a Hyper-V Gen2 VM.
Once the kernel is updated to version 5.14.0-427.16, the machine won't start after a reboot. It gets stuck at GRUB: it tries to boot, but instantly drops back to the GRUB menu. On the fourth try, the VM gets powered off.

  • Tried the previous kernel from Rocky 9.3; it works.
  • Tried enabling/disabling Secure Boot; no change.
  • Tried disconnecting the network adapter in the VM settings and, surprise, IT BOOTS up. We can reconnect it once the OS is running and it works perfectly. If we reboot, it won't boot again.
  • Tried changing the VLAN (so the network adapter is connected, but there is no network), and IT WORKS.

How is this possible? With Rocky 9.3 everything works fine, and no changes were made to the VM, just the kernel upgrade.

Edit: the same behaviour occurs when trying to install Rocky 9.4. If the network adapter is on (in our case on VLAN 258), the installer WON'T even start…


I’m interested in this part.

When you say it “goes back” to GRUB, do you think Rocky somehow stops and returns to GRUB on its own, or is it that Hyper-V decides something is wrong and reboots the VM, making the GRUB screen appear again?

Is there a setting in Hyper-V that says “reboot machine if it goes wrong”?

Having the same issue on a Hyper-V Gen2 VM after updating from 9.3 to 9.4: it keeps reloading back to GRUB, and you have to select the previous kernel to boot up. And yes, when the network is off it will boot into kernel 5.14.0-427.16.
Same thing when installing a fresh Rocky 9.4 on a Hyper-V Gen2 VM.

Hope this can be fixed soon.

Having the same issue on a Hyper-V Gen2 VM: after updating from 9.2 to 9.4 it keeps reloading back to GRUB, and I have to select the previous kernel to boot up.
When the network is off, it boots into kernel 5.14.0-427.16 normally.
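
Until a fixed kernel is available, selecting the previous kernel at every reboot can be avoided by making it the GRUB default. A minimal sketch using grubby; note that the 9.3-era kernel path below is an example, not necessarily the exact version on your system, so list your entries first:

```shell
# List boot entries and their kernel paths (output varies per system)
sudo grubby --info=ALL | grep -E '^(index|kernel)'

# Make the last-known-good 9.3 kernel the default
# (example path; substitute the 9.3 kernel actually installed on your VM)
sudo grubby --set-default /boot/vmlinuz-5.14.0-362.24.1.el9_3.x86_64

# Verify which kernel will boot next
sudo grubby --default-kernel
```

Remember to switch the default back once an updated 9.4 kernel boots cleanly.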

Hello All.

Any updates here?
We are using Hyper-V with Rocky VMs which were recently upgraded to 9.4, and that causes the Caps Lock and Scroll Lock LEDs to flap, and console access is useless. After some time the VM reboots itself… Looks like it might be related. Please advise.

P.S. I can confirm that the previous 9.3 kernel works with no issues.

I also ran into this issue today.

Hyper-V on Windows Server 2016 hosts. Gen2 VM, no Secure Boot.

Currently we have 4 Rocky Linux 9.x VMs on these Hyper-V hosts.

Two servers with iSCSI disks attached and the NFSv4 service active. Both servers have 3 network cards attached: one for the production LAN and the other two for iSCSI.

The other two VMs are normal clients with one network card each.

Now my findings:

The client VMs with one network card can reboot into the latest 9.4 kernel without any problem.

The server VMs already struggled with 9.3, but would boot after 2-3 “reboots”. Sometimes a VM boots but can't connect to the iSCSI targets, or it boots with no connection to the production LAN. Several reboots later everything works and stays stable until the next reboot.

Now with 9.4 I am unable to boot the latest kernel at all. GRUB shows me the menu, the screen briefly goes black, and then I see the GRUB menu again. After a few cycles the VM is powered off by Hyper-V.

Similar behavior reported by me. No root cause yet. I can add that I have only one network card attached.

The new kernel did not help. Still the same issue.

I checked some logs and now see this in the Hyper-V Worker log:

The guest operating system reported that it failed with the following error code: 0x1E. If the problem persists, contact Product Support for the guest operating system.

When I search for that, I land on this page:

Generation 2 RHEL 8 virtual machines sometimes fail to boot on Hyper-V Server 2016 hosts

When using RHEL 8 as the guest operating system on a virtual machine (VM) running on a Microsoft Hyper-V Server 2016 host, the VM in some cases fails to boot and returns to the GRUB boot menu. In addition, the following error is logged in the Hyper-V event log:

The guest operating system reported that it failed with the following error code: 0x1E

This error occurs due to a UEFI firmware bug on the Hyper-V host. To work around this problem, use Hyper-V Server 2019 or later as the host.

And I have Hyper-V Server 2016… And I guess RHEL 9 will fare no better when there was already a problem with RHEL 8…


I’m guessing this does not happen on Azure, so does that mean they don’t use Hyper-V (by default) on Azure?

I have now installed a new VM with Rocky Linux 9.4 and the latest kernel as a Gen1 VM, with 3 network cards active. So far I can reboot it every time. That doesn't help for existing VMs, but it is at least a way to proceed when the host is stuck on Hyper-V Server 2016, like ours.

Is this actually this bug: Rocky 9.4 AWS ami not booting after kernel update - #12 by hegyre? Although the post title says AWS, it actually affects all cloud images.

dnf -y update
grub2-mkconfig -o /boot/grub2/grub.cfg
shutdown --no-wall --reboot 1

And that should fix what?

At least it doesn't help on on-prem Hyper-V 2016.

Yes, I can confirm: after migrating my Rocky 8 and 9 VMs from Hyper-V on Windows Server 2016 to 2019, the boot issue is gone. I will migrate all my Rocky Linux VMs to Windows Server 2019. Thanks.

I have noticed that since updating to kernel version 5.14.0-427.22.1.el9_4.x86_64, this problem is no longer present. I have tested this on 4-5 Rocky Linux VMs running on Hyper-V 2016 and the results are promising. I believe the previous repo release (427.20) was still broken.

Would like to know if others have seen the same behaviour.
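
For anyone auditing several VMs, here is a small helper of my own (not from the thread) that classifies a kernel release string against the range reported here: 5.14.0-427.16 was the first broken release, 427.20 was still broken, and 427.22.1 works. The assumption that everything from .16 up to, but not including, .22 is affected is mine; only .16 and .20 were confirmed above.

```shell
#!/bin/sh
# Classify a kernel release string against the range reported in this thread.
# Assumption (mine): 5.14.0-427.16 .. 427.21 affected, 427.22.1 and later fixed.
is_affected_kernel() {
  case "$1" in
    5.14.0-427.*) ;;              # only the 427 line is in question here
    *) echo no; return ;;         # 9.3 (362.x) and other lines: not affected
  esac
  rel="${1#5.14.0-427.}"          # e.g. "22.1.el9_4.x86_64"
  minor="${rel%%.*}"              # first number after "427."
  if [ "$minor" -ge 16 ] && [ "$minor" -lt 22 ]; then echo yes; else echo no; fi
}

is_affected_kernel "5.14.0-427.16.1.el9_4.x86_64"   # yes
is_affected_kernel "5.14.0-427.22.1.el9_4.x86_64"   # no
is_affected_kernel "5.14.0-362.24.1.el9_3.x86_64"   # no (9.3 kernel)
```

On a live VM you would feed it `$(uname -r)` to decide whether the running kernel needs the update.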
