I am adapting a workflow in which I use Packer to generate an OVF image of CentOS 7 plus extra RPMs, so that it does the same for Rocky Linux 9 (with RPMs compiled for it, of course), fully scripted with a kickstart file so the build runs unattended. I run the build on an ESXi server, and the OVF imports fine into another ESXi server.
For local testing, I have been running VirtualBox on my machine and importing the OVF there (this is also something I know is done downstream from us), and this has been working fine. With the Rocky Linux 9 OVF, however, the kernel boots but is unable to find the LVM volume group: GRUB sees the hard disk and loads the kernel, but the kernel itself cannot find the hard disk.
I did a side-by-side comparison of the generated .ovf files, and except for the file names and sizes, the VM settings are identical.
Has anyone seen anything similar, or have any ideas?
Running the installer directly in VirtualBox generates a VM that works fine, so I know VirtualBox can run Rocky 9.
Yes, the rd.lvm.vg parameter is correct, so that's not it. From the emergency shell that eventually opens, I cannot see any disks whatsoever; it is as if the kernel doesn't load any drivers at all.
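For reference, these are the checks I ran from that emergency shell; a minimal sketch, assuming the usual tools are present in the initramfs:

```
# From the dracut emergency shell: check whether the kernel sees any
# block devices and whether any storage driver modules got loaded.
lsblk                       # should show the virtual disk; empty here
cat /proc/partitions        # the kernel's view of partitions
lsmod | grep -i -e ahci -e ata -e scsi -e mptspi -e virtio
dmesg | grep -i -e ahci -e scsi -e 'sd '    # any probe messages at all?
```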
I have tried experimenting with the controller type for the hard disk image, but I haven't found any combination that works yet.
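For anyone wanting to reproduce those experiments, the swaps look roughly like this; a sketch with placeholder VM, controller, and disk names:

```
# Remove the controller the OVF import created, re-add a different type,
# and reattach the disk image. Names here are hypothetical.
VBoxManage storagectl rocky9-test --name "SCSI Controller" --remove
VBoxManage storagectl rocky9-test --name "SATA" --add sata --controller IntelAhci
VBoxManage storageattach rocky9-test --storagectl "SATA" \
  --port 0 --device 0 --type hdd --medium rocky9-disk001.vmdk
```

The same pattern works for the other types (`--add scsi --controller LSILogic`, `--add virtio --controller VirtIO`, and so on).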
If I install directly in VirtualBox, the VM comes up just fine. The problem only appears when I build on ESXi, export as OVF, and then import that OVF into VirtualBox.
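To spell out the failing path, it is essentially this; a sketch with placeholder host, VM, and file names (I export via Packer, but the ovftool equivalent is the same idea):

```
# Export the built VM from the ESXi host, then import into VirtualBox.
ovftool 'vi://root@esxi.example.com/rocky9-build' rocky9.ovf
VBoxManage import rocky9.ovf --vsys 0 --vmname rocky9-test
```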
I am migrating from a CentOS 7-based setup, where this exact flow works just fine, but the Rocky 9 build is not playing along. Maybe the initramfs is indeed missing some modules that used to be included; I will have to look at whether I can have the kickstart configuration force it to include all modules.
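If that is the cause, my first attempt would be something along these lines in the kickstart %post section: a sketch, assuming the culprit is dracut's host-only mode, which trims the initramfs down to drivers for the hardware present at build time (here, the ESXi host). The config file name is arbitrary:

```
%post
# Assumption: the initramfs was built in host-only mode on ESXi and so
# lacks the storage drivers VirtualBox needs. Switch dracut to building
# a generic initramfs that keeps all drivers.
cat > /etc/dracut.conf.d/99-generic-initramfs.conf <<'EOF'
hostonly="no"
EOF

# Rebuild the initramfs for every installed kernel with the new setting.
dracut --force --regenerate-all
%end
```

The trade-off is a noticeably larger initramfs, but for an image meant to move between hypervisors that seems acceptable.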