Installing a Rocky 8.4 VirtualBox VM as a qcow2 OpenStack Image

I built a Rocky 8.4 VM on VirtualBox (VMDK format) and hardened it using the Red Hat Standard Linux profile, which requires a specific disk partitioning scheme. I then successfully converted the VMDK to qcow2 using the qemu-img tools for Windows.
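
For reference, the conversion itself is a one-liner (file names here are just examples):

# convert a VirtualBox VMDK disk to qcow2 (file names are examples)
qemu-img convert -f vmdk -O qcow2 rocky-8.4.vmdk rocky-8.4.qcow2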

When I deploy the qcow2 image on the OpenStack platform, it instead drops into the initramfs and hits a dracut-initqueue timeout, with an error that the root disk cannot be found. When I check for devices with the blkid command, no disks are found. But I know the disks are there, because the same image installs fine on VirtualBox on a colleague's laptop.
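
For context, these are the kinds of checks I ran from the dracut emergency shell (standard commands, though what is available inside the initramfs can vary):

# from the dracut emergency shell: look for any visible block devices
blkid
cat /proc/partitions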

Could this error be caused by the hardening profile?

Not sure what you did exactly, but did you do any of the steps outlined here? Example: CentOS image — Virtual Machine Image Guide documentation

You need to use the cloud-init tools to prepare an image for use with OpenStack. There are other steps in that link that should help as well, for example running virt-sysprep.
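
On a RHEL-family guest that typically means something along these lines (package names as in the EL8 repos; the image path is just an example):

# inside the VM, before the final shutdown
dnf install -y cloud-init cloud-utils-growpart

# on the build host afterwards (virt-sysprep comes with the libguestfs tools)
virt-sysprep -a rocky-8.4.qcow2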

If you did everything in the link then theoretically the image should work; if not, maybe it is because of SELinux or the hardening profile. I have made images before (Debian, though) and didn't use SELinux or any hardening profiles.

Thanks for your pointers. I will have a close look at the guide.

Oh sorry for the duplicate question. Actually I should not have hijacked someone else’s thread.

I think a lot of the problems will come from not running virt-sysprep and not having the cloud-init tools installed, both of which help prepare the image for use under OpenStack; this might be why you had issues booting. Generally, following guides like that one from the OpenStack docs should get you a working image. Failing that, I would make two images following that guide, one normal and one hardened, and if one works and the other doesn't, then at least we know it's because of the hardening.

Thank you very much for your feedback. It gives me comfort that your two-image suggestion regarding hardening confirms the approach I chose last week. I work in a production environment with at least 14 days' lead time for installing any VM image. The hardened VM image failed to install in December, so I created two normal images for installation in early February (one with automatic disk partitioning and one with custom partitioning similar to the hardening requirements, though not hardened). I will now wait for February to see what happens.

On another note, I am reading up on the virt-sysprep and cloud-init tools and will explore them, thank you. In the past I created my VMs (CentOS 6.x) on VirtualBox and installed them successfully. I did not know the tools you mention, though in one case I did add the cloud-init package to the VM, and upon installation the VM crashed. I also had issues with the CentOS 6.x VM when I used custom partitions. I now see my life would be much easier with the tools you point me to, so I will get working on them ASAP.

virt-sysprep will generally remove things like MAC addresses that would clog up udev, and possibly does the same for other things like partitions, although I haven't investigated everything it does too deeply. cloud-init is there to help the image have SSH keys injected into the VM when it is used on OpenStack, or to let you reset/configure passwords for the default user in the VM. For example, a lot of the publicly available images from CentOS, Ubuntu, Debian, etc. have a standard user, usually named after the distro (centos, rocky, debian, ubuntu), into which the SSH key can be injected, and/or whose password can be reset if you are not injecting an SSH key. That user should also be able to run sudo commands by default to get to the root user without requiring a password.
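
If you want to see exactly what it does, virt-sysprep can list its operations (a standard flag):

# list every cleanup operation virt-sysprep knows about, with descriptions
virt-sysprep --list-operations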

So when creating your own image, you'll want to ensure you have a default/standard user with sudo privileges that can run without requiring a password. Of course, if you are only using it yourself and set a standard password that you and/or your team would know, then it's not so important. I've tended to keep to the standards that the public images adhere to, and to follow the OpenStack docs for preparing images.
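
As an illustration (the user name is just an example), the passwordless sudo rule for such a user is a one-liner in a file under /etc/sudoers.d/:

# /etc/sudoers.d/rocky - example only; edit with visudo -f /etc/sudoers.d/rocky
rocky ALL=(ALL) NOPASSWD: ALL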

The only images I have generally had problems with are Windows-based ones; I haven't really had those work successfully from the password side of things. But then, I spend most of my time on Linux anyway, so Windows isn't all that important to me :slight_smile:

Hi,
Thanks again for holding my hand here. After removing the hardening, it seems clear that hardening is not the issue. What I pick up when running journalctl is that this time the swap partition is not found. This is a bit better than last time, though the screenshot below shows that the other volumes aren't present either. I believe at this point I need to run the virt-sysprep tool to prepare the image properly, so that I remove any faults I may be adding with my old method.
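
For reference, these are the sorts of queries I mean (generic examples rather than the exact output above):

# errors from the current boot
journalctl -b -p err
# messages mentioning swap from the current boot
journalctl -b | grep -i swap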

Most, if not all, OpenStack images don't have a swap partition. The disk is normally one single partition without swap, all of it allocated to the / partition.

Normally, with images I have downloaded that are already prepared for OpenStack, when I want swap on my machine I do this:

# create a 4 GiB swap file and restrict it to root
fallocate -l 4G /swapfile
chmod 600 /swapfile
# format it as swap and enable it immediately
mkswap /swapfile
swapon /swapfile

Change 4G to however much swap you want. You can then add /swapfile to /etc/fstab like this:

/swapfile swap swap defaults 0 0

and then when you reboot the machine in OpenStack you will have swap, albeit as a file on the disk rather than a physical partition. Not having a physical swap partition is normal behaviour for OpenStack images.
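
You can confirm it is active after the reboot with the standard tools:

# both should now show the swap file in use
swapon --show
free -h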

Thank you @iwalker. I agree with you: all running OpenStack images I have encountered have one single partition. I am referring in particular to the Glance catalogue images I have seen, for example on DigitalOcean. But the system I am working with is an adapted Huawei FusionSphere with no Glance catalogue.

On CentOS 6.x I had one big root disk (with /boot on the /dev/sda1 slice) when I built the image on VirtualBox, as shown below. Installing this VirtualBox image onto the OpenStack FusionSphere worked fine after converting to qcow2.

With Rocky 8.4, the partition structure is complex even when I try to configure one big single disk. When I allow autoconfig, I end up with a strange structure like the one below, with separate / and /home partitions instead. I also notice that with the Automatic partitioning option when installing Rocky, I cannot choose the filesystem, nor does it give me a single partition.

That is because during installation you should choose to make your own partitions, and then create one single / partition; going through the installer with the defaults is most likely why you are experiencing so many problems. Partitioned manually, /home and /boot simply live on the single / partition.

It's also ideal, when making that single / partition, to use ext4 as the filesystem. If the installer is forcing you to make EFI partitions, it's best to set the VM for legacy boot (MBR); that way it will allow a single / partition.
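
If you ever script the install instead of clicking through it, the equivalent layout in a kickstart file would be roughly this (a sketch, assuming a legacy/MBR VM; the size is illustrative):

# hypothetical kickstart partitioning for a single ext4 root, no LVM, no swap
zerombr
clearpart --all --initlabel
part / --fstype=ext4 --size=1024 --grow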

This is what my Rocky KVM virtual machine looks like because I specifically enabled EFI and a single LVM partition:

[root@rocky ~]# fdisk -l
Disk /dev/vda: 40 GiB, 42949672960 bytes, 83886080 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: FE808B7D-0C77-4ADA-823B-0DAAC57C8F13

Device       Start      End  Sectors  Size Type
/dev/vda1     2048  1050623  1048576  512M EFI System
/dev/vda2  1050624  2099199  1048576  512M Linux filesystem
/dev/vda3  2099200 83884031 81784832   39G Linux LVM

Disk /dev/mapper/vgrocky-root: 39 GiB, 41871736832 bytes, 81780736 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@rocky ~]# lvs
  LV   VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root vgrocky -wi-ao---- <39.00g      

See the images below for my partition configuration on a new install I am doing right now. In the first image, you see I choose custom:

Next I change from LVM to standard:

Then I click the + button to add a partition and specify its size; note / and 20 GiB:

I now make sure to choose ext4 as the filesystem:


I click Done and get a warning because of no swap, but that's OK, so I click Done again:

I now have to confirm the partition changes, and then I continue my installation as normal. After my system has booted, this is how the partitions look:

[root@rocky-mbr ~]# fdisk -l
Disk /dev/vda: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x91ac48db

Device     Boot Start      End  Sectors Size Id Type
/dev/vda1  *     2048 41943039 41940992  20G 83 Linux

As you can see, it's possible. You must make your own partitions and disable LVM by changing to standard partitions, since you will be preparing the image with cloud-init, and resizing partitions with OpenStack/cloud-init is done via growpart, which means LVM and all that isn't needed.
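
For what it's worth, the resize that cloud-init triggers at first boot amounts to roughly this (device names are illustrative):

# grow partition 1 of /dev/vda to fill the disk, then grow the filesystem
growpart /dev/vda 1
resize2fs /dev/vda1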

Once the image is made and prepped as per the OpenStack docs (sysprep, etc.), it can be uploaded to Glance, and you can then use it for creating OpenStack instances.
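
On a standard OpenStack deployment the upload would look something like this (assuming the openstack client; file and image names are examples):

# upload the prepared qcow2 to Glance
openstack image create --disk-format qcow2 --container-format bare \
  --file rocky-8.4.qcow2 rocky-8.4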

Thanks @iwalker, this helps me a lot in saving my situation. I have another installation window coming early next week, and after comparing what is installed on the FusionSphere with what I now have, the match is better. I also notice I didn't have a swap partition.

Thank you. I am more optimistic this time. And it looks like I have a lot of documentation to do, including sysprep, etc.

Thank you @iwalker for the advice. Initially I thought I had failed my install window this past Tuesday, as the OS failed to boot with the normal kernels. I tried booting via the rescue disk and this worked. From there, I could rebuild the initramfs, which allowed a normal boot.

I need to investigate why this initramfs had to be rebuilt.

As an update: I was able to boot the VM using the Rescue Kernel option. I used the dracut tool to repair the initramfs images and all is fine. Power-cycle tests are good.
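
For anyone following along, the repair I mean is along these lines (run from the booted rescue kernel; a standard dracut flag):

# rebuild the initramfs for every installed kernel
dracut -f --regenerate-all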

What was curious, though, was that to be able to boot the Rescue Kernel I had to issue CTRL+ALT+DEL from the console to trigger a reboot from the dracut emergency console. I then had to wait almost 40 minutes for the reboot, due to waiting on a disk activity that has no timeout value.