Persistent Disk naming not working, Rocky 9.1

When I add a disk to a Rocky 9.1 VMware VM, the new disk always takes over as /dev/sda. Obviously this ends up breaking things. Is there something wrong with systemd-udevd, or am I doing something wrong here?

Before adding the disk:

[myhost01 ~]# fdisk -l
Disk /dev/sda: 50 GiB, 53687091200 bytes, 104857600 sectors
Disk model: Virtual disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x80aaa989
Device     Boot    Start       End  Sectors Size Id Type
/dev/sda1  *        2048   2099199  2097152   1G 83 Linux
/dev/sda2        2099200  12584959 10485760   5G 82 Linux swap / Solaris
/dev/sda3       12584960 104857599 92272640  44G 8e Linux LVM
Disk /dev/mapper/vgroot-lv_root: 44 GiB, 47240445952 bytes, 92266496 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

After adding the disk:

[myhost01 ~]# fdisk -l
Disk /dev/sda: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk model: Virtual disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdb: 50 GiB, 53687091200 bytes, 104857600 sectors
Disk model: Virtual disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x80aaa989
Device     Boot    Start       End  Sectors Size Id Type
/dev/sdb1  *        2048   2099199  2097152   1G 83 Linux
/dev/sdb2        2099200  12584959 10485760   5G 82 Linux swap / Solaris
/dev/sdb3       12584960 104857599 92272640  44G 8e Linux LVM
Disk /dev/mapper/vgroot-lv_root: 44 GiB, 47240445952 bytes, 92266496 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Before I file this as a bug report, is there some new setting on el9 I'm missing here? systemd-udevd is running.

What exactly breaks? What you describe above is normal behaviour nowadays. Your services should use UUID or LABEL to identify the storage. To get an idea, run ls -1 /dev/disk/
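
For example, the stable symlinks live under /dev/disk and can be resolved back to whatever kernel name the disk got on this boot (the UUID below is only a placeholder, use one from your own system):

    # list the stable naming schemes udev maintains
    # (typically by-id, by-label, by-partuuid, by-path, by-uuid, ...)
    ls -1 /dev/disk/

    # resolve a stable name back to the current kernel name
    readlink -f /dev/disk/by-uuid/0a1b2c3d-ffff-4eee-9abc-0123456789ab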


If a new disk is showing up as sda instead of sdb on VMware, that will be how VMware is reporting its hardware information to the kernel. It won’t be something we can correct. This unfortunately isn’t the first time this sort of thing has happened with the VMware platform.

Echoing Ritov here: your disks should be using UUID or LABEL (preferably LABEL) to boot, so that they can be moved between different hypervisors. E.g., on Xen-like systems you have xvda instead of sda/vda.
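
As a rough illustration (the label name, device, and filesystem type here are just examples, not anything from this thread), a label is set once and then referenced in /etc/fstab no matter which sdX letter the disk lands on:

    # set a label on an existing filesystem - pick the tool for your filesystem
    xfs_admin -L data1 /dev/sdb1     # XFS (filesystem must be unmounted)
    e2label /dev/sdb1 data1          # ext4

    # /etc/fstab entry that survives sda/sdb/xvda renames
    LABEL=data1  /data  xfs  defaults  0 0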

Thanks for the response. I want to add that this does not happen with CentOS 7 or Rocky 8 on the same VMware server setup. Are you sure it's related to VMware and not to something else? I would expect the same behaviour on CentOS 7/Rocky 8 if that were the case. This is very consistent here: even when adding a second SCSI interface and attaching the disk to it, the new disk always takes the sda spot and the old one gets renamed. I will look further at VMware, but I wanted to add this because it does not seem correct to me.

"Breaking things" is subjective here: my automation scripts for a shared disk are broken, and I keep overwriting the wrong disk.

Or call blkid. (lsblk can show the same data too with suitable format options.)
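
For instance (adjust the column list to taste):

    # show UUIDs and LABELs for every block device
    blkid

    # the same data via lsblk, with explicit output columns
    lsblk -o NAME,SIZE,TYPE,FSTYPE,LABEL,UUID,MOUNTPOINT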

The filesystems that are specified in the installer do get referenced by UUID in /etc/fstab.
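
A typical installer-generated line looks roughly like this (the UUID is a placeholder); note that nothing in it refers to sda or sdb:

    UUID=0a1b2c3d-ffff-4eee-9abc-0123456789ab  /boot  xfs  defaults  0 0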


When one has a mirror (RAID1), both copies do have the same UUID (and LABEL), but that is not an issue as long as the array gets assembled.

If one “clones” a filesystem and leaves both copies attached, then one does have an issue (if one tries to mount one by UUID).

This also happens on bare metal. The kernel's enumeration depends on various conditions: bus type, storage type, interface type. I had a case where my phone, attached via USB while the system rebooted, got sda assigned. So try to keep your automation as flexible as possible …

I'm going with more flexible in this case; I've been using this automation since el5 without many updates at all. But I think this is half el9 and half VMware at this point. I'll test more to be sure, but it looks like it's el9 plus this specific version of VMware (an older 6.5.0). I'll verify against newer versions to be sure.

I can confirm this is happening on both VMware and bare-metal systems. We deploy hundreds of systems using Ansible, and we’ve recently started using Rocky 9. We never had this issue with Rocky 8, CentOS 8, CentOS 7, or CentOS 6.

Some of our automated Ansible configuration adds a 2nd disk. We look for the new device called “/dev/sdb” so that we can set up LVM, format the filesystem, and mount it.

However, because Disk 2 is sometimes named /dev/sda instead of /dev/sdb, the automation of adding the 2nd disk fails for us in Ansible.
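
A way around hard-coding /dev/sdb would be to pick the disk that is still completely blank. A simplified shell sketch of the idea only, assuming the new disk arrives with no partitions and no filesystem signature; the echo stands in for the real LVM/format/mount steps:

    # list whole disks, then keep only those with no children and no blkid signature
    for disk in $(lsblk -dno NAME,TYPE | awk '$2 == "disk" {print $1}'); do
        if [ "$(lsblk -n "/dev/$disk" | wc -l)" -eq 1 ] && ! blkid "/dev/$disk" >/dev/null 2>&1; then
            echo "/dev/$disk looks unused - candidate for the new volume"
        fi
    done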

I can confirm that reboots yield different disk device names. For example, I just rebooted a Rocky 9 Linux system…

This is how the disks are supposed to be presented (sda3 being on the primary disk, Disk 1, a 14.41 GB disk):

   PV         VG  Fmt  Attr PSize   PFree
   /dev/sda3  rl  lvm2 a--   14.41g    0 
   /dev/sdb   opt lvm2 a--  <50.00g    0

A reboot renamed the devices in reverse (sdb3 is now on the 1st disk, and sda is now the 2nd disk):

  /dev/sda   opt lvm2 a--  <50.00g    0 
  /dev/sdb3  rl  lvm2 a--   14.41g    0

I can continue rebooting and it will keep yielding different results. Is there a special setting or udev rule we can use to make this persistent?

I experienced this in both RL8 and RL9. It is the result of systemd mounting devices in parallel. I read through several chapters of RHEL 9 documentation on device mounting to see if I could achieve what you want here and was discouraged in the end. It may be possible to write a systemd unit file to do this, but it may be different for each machine you install to.
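
For what it's worth, one thing that does survive reboots is a per-machine udev rule that adds a stable symlink keyed on the disk's serial; the sdX name itself still cannot be pinned. A rough sketch, with the serial and the symlink name as placeholders you would take from your own machine:

    # read the serial udev sees for the disk
    udevadm info --query=property --name=/dev/sdb | grep ID_SERIAL

    # /etc/udev/rules.d/99-local-disks.rules  (file name is just an example)
    # creates /dev/datadisk pointing at whichever sdX this serial ends up as
    SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="36000c29xxxxxxxxxxxxxxxxxxxxxxxxx", SYMLINK+="datadisk"

Note that a VMware guest may not expose a usable serial/WWN at all unless the VM is configured for it (the disk.EnableUUID option, if I remember correctly).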

Can you gather the facts with Ansible and from them determine the transient name of the additional drive?
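
For example, the setup module alone already reports the size, model, by-id/uuid links, and existing partitions for every block device:

    # dump the block-device facts for one host (hostname is a placeholder)
    ansible myhost01 -m setup -a 'filter=ansible_devices'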

We already wrote an Ansible fix similar to what you described.

My point is that it is very strange how randomly the drives get assigned in Rocky 9. Is this happening in other Linux distros?

If I added 3 disks with the same sizes, how would I ever know whether Disk 1 is sda, or sdb, etc.? Logically this makes no sense.
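
With identical sizes, about the only thing that tells them apart is an identifier tied to the disk itself, e.g.:

    # NAME is incidental; SERIAL/WWN (when the hypervisor exposes them) identify the disk
    lsblk -dno NAME,SIZE,SERIAL,WWN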

It could be that kernel-5.* and/or systemd do things differently in el9.

Is anything in these persistent/predictable:

ls -l /dev/disk/*/
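
For example, comparing the symlink targets before and after a reboot should show the stable names staying put while the sdX targets move:

    # the names under by-id / by-path / by-uuid persist across reboots;
    # only the ../../sdX targets they point to move around
    ls -l /dev/disk/by-id/ /dev/disk/by-path/ /dev/disk/by-uuid/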