I’ve created a Rocky Linux 9.5 VM on VirtualBox 7.1.0 with a virtio-scsi storage controller and a virtio-net network adapter. I then used the qemu-img tool to convert the VirtualBox VDI to QCOW2 so I could import it into my Apache CloudStack cluster. However, it doesn’t boot successfully and drops to a dracut shell.
I can confirm that I selected the correct disk controller in CloudStack, which in this case is virtio. I also created an Ubuntu 24.04.1 QCOW2 through the same process, and it boots fine with the virtio storage controller. I don’t think the VirtIO driver is missing entirely, since the virtio network adapter works in CloudStack. Refer here: [Imgur screenshot]
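For reference, the conversion step looked roughly like this (filenames here are placeholders, not the actual paths I used):

```shell
# Convert the VirtualBox VDI to QCOW2 for CloudStack.
# "rocky95.vdi" / "rocky95.qcow2" are hypothetical names.
qemu-img convert -f vdi -O qcow2 rocky95.vdi rocky95.qcow2

# Sanity-check the result before registering it as a template:
qemu-img info rocky95.qcow2
```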
Further investigation found that selecting virtio as the rootDiskController loads the virtio_blk driver, while selecting scsi as the rootDiskController loads the virtio_scsi driver.
virtio as rootDiskController on Ubuntu: [Imgur screenshot]
scsi as rootDiskController on Rocky: [Imgur screenshot]
Only virtio_scsi works for Rocky Linux, not virtio_blk. Is this expected?
When you install Rocky Linux, only the drivers needed at install time are included in the initramfs. Ubuntu very likely doesn’t do this and ships everything in its initramfs. The controller VirtualBox implements is virtio-scsi, which exposes disks as /dev/sdX devices rather than /dev/vdX devices, so virtio_scsi is the driver that ended up in your initramfs.
virtio_blk not working for you is not a surprise in this case; your initramfs never included it, whereas Ubuntu appears to load everything.
Given the implementation VirtualBox uses, it is not a surprise you are running into this.
[root@xmpp01 ~]# lsmod | grep virtio
virtio_gpu 98304 0
virtio_dma_buf 12288 1 virtio_gpu
drm_shmem_helper 28672 1 virtio_gpu
drm_kms_helper 274432 2 virtio_gpu
virtio_balloon 28672 0
drm 782336 4 drm_kms_helper,drm_shmem_helper,virtio_gpu
virtio_net 81920 0
virtio_blk 32768 9
virtio_console 45056 1
net_failover 24576 1 virtio_net
My suggestion: before you convert the disk, add the drivers to the initramfs. See below for an example.
% cat /etc/dracut.conf.d/virtio.conf
add_drivers+=" virtio_blk "
% dracut -f
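After regenerating, you can confirm the module actually made it into the image with lsinitrd (shipped with dracut); a rough check along these lines, where the initramfs path is illustrative:

```shell
# List the contents of the current kernel's initramfs and look for the module:
lsinitrd /boot/initramfs-$(uname -r).img | grep -i virtio_blk
```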
Ah, I supposed so. Ubuntu might adopt dracut in the future, and this issue could affect them too (depending on how they build the initramfs). Just wondering: in terms of cloud images, I know there are pros and cons, but what’s the most common VirtIO driver used for disks, virtio_blk or virtio_scsi?
Also, do cloud images normally use LVM or just standard partitions?
The most common driver is virtio_blk, traditionally for performance reasons, though the difference may be fairly minor (and this assumes NVMe isn’t being used). virtio_blk also predates virtio_scsi. Which one to use will depend on your use case, and making sure both virtio_blk and virtio_scsi are available would be the right idea here.
Cloud images, in the majority of cases, use standard partitions. Even the images we provide are primarily in a non-LVM format, but we offer LVM images for those who prefer it.
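One practical way to see which driver a given guest ended up with: virtio_blk disks show up as /dev/vdX, while virtio-scsi disks come through the SCSI layer as /dev/sdX. A quick check from inside the guest (the vda path below is illustrative):

```shell
# vda/vdb names indicate virtio_blk; sda/sdb behind a virtio controller
# indicate virtio_scsi:
lsblk -o NAME,TYPE,SIZE

# Or ask the kernel directly which driver is bound to the disk:
readlink /sys/block/vda/device/driver
```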
I’m currently building cloud templates/images for our CloudStack deployment for customer use. I’m going to make sure both virtio_blk and virtio_scsi are available in the images, but default to virtio_blk to start.
In terms of LVM vs. standard partitions, I might go with LVM for the features and flexibility it offers.
Of course, all of this will be documented somewhere on our site for customers who want to use our provided images.
Thanks @nazunalika !