Odd boot problem after installation

Hello. I have installed Rocky 9.2 and I am experiencing a strange /boot partition problem. The installation is done via kickstart; after it completes, I power the server down and add a second disk.

Here is my basic kickstart config:

# Generated using Blivet version 3.6.0
ignoredisk --only-use=sda
# Partition clearing information
clearpart --none --initlabel
# Disk partitioning information
part pv.515 --fstype="lvmpv" --ondisk=sda --size=1571839
part /boot --fstype="xfs" --ondisk=sda --size=1024

When the install is complete, I power off, add the second disk, and boot the server back up; my boot partition then seems to have moved to /dev/sdb1. How can that be?

[root@mars ~]# df -h /boot
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1      1014M  273M  742M  27% /boot
[root@mars ~]# cat /etc/fstab | grep boot
UUID=79c66ebc-996a-4fe4-9569-0969b403bf1f /boot                   xfs     defaults        0 0

The install happens with only a single disk present. How does it manage to carve out a slice of the second disk? The second disk is strictly for data, so this makes it useless to me.

Here are my VM options for the server: (screenshot attached)

Anything obvious I am missing? Any help would be greatly appreciated.


With systemd/udev the assignment of /dev/sdX names is arbitrary and determined at boot. That is why you must use UUIDs for all your mount points. I have three disks, and it is common for sda to be assigned to a different disk than on the prior boot.

The installer does add UUID-based entries into fstab, except for LVM volumes,
which are referred to with /dev/mapper/vg-lv style paths.

I presume that LVM auto-detects PVs and assembles VGs and LVs based on
metadata within the PVs, so the device name is not critical.

One can run:

lsblk -f

and look into /dev/disk/by-* to see how the kernel sees things today.
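For scripting the same lookup, asking a tool for one device's UUID directly is usually sturdier than grepping `ls -l` output. A rough sketch, with guards so it also runs on systems that lack these paths; the device name `/dev/sdb1` is just an example, not from this thread:

```shell
#!/bin/sh
# Read-only inspection; the guards let this run even on systems (or
# containers) that lack these commands/paths, since this is only a sketch.
command -v lsblk >/dev/null 2>&1 && lsblk -f
[ -d /dev/disk/by-uuid ] && ls -l /dev/disk/by-uuid/
# In a script, ask blkid for one device's UUID directly instead of
# grepping ls -l output (hypothetical device name):
#   uuid=$(blkid -s UUID -o value /dev/sdb1)
done_inspecting=yes
```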

Thanks for the tips/pointers. I was unaware that /dev/sdX can be arbitrary; I had never seen that before.

With that in mind, I am now trying to rewrite some automation scripts. I can get the UUID, but it then seems to change during the script run.

First I determine which disk is the boot disk, no problem.

Then on the unformatted disk:

mkfs.xfs -f /dev/sda
meta-data=/dev/sda               isize=512    agcount=4, agsize=16777216 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1
data     =                       bsize=4096   blocks=67108864, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Discarding blocks...Done.

Then get the UUID of the disk, in this case /dev/sda:

uuid=$(ls -l  /dev/disk/by-uuid/ | grep sda | awk '{print $9}')

And make the entry into /etc/fstab:

UUID=12aa9e34-e0cc-4b45-97f9-63e6198dffce /data     xfs     defaults     0 0

Then I run systemctl daemon-reload and try to mount the partition:

mount: /data: can't find UUID=12aa9e34-e0cc-4b45-97f9-63e6198dffce.

What, why? It looks like the UUID has changed:

ls -l  /dev/disk/by-uuid/ | grep sda | awk '{print $9}'

I can then mount that manually and it is fine. Somewhere in the process I am missing something.

I’m going to guess that immediately after making the file system on sda, the information in /dev/ is stale, and the UUID you see first is that of sda prior to making the new file system. So you would need to rescan the devices before running your command to get the UUID and populate fstab.
Another thing that troubles me is that usually you create a file system on a partition of a device, not the device itself, even if the partition spans the entire device. So the command should be:
mkfs.xfs -f /dev/sda1
This could also be confusing the results of your script.
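A sketch of that ordering, putting the filesystem on a partition and reading the UUID only after udev has settled. The helper names, the `/data` mount point, and `/dev/sda1` are assumptions drawn from this thread, not a definitive recipe:

```shell
#!/bin/sh
# Build the fstab entry from a UUID (pure string work).
fstab_line() {
    printf 'UUID=%s /data xfs defaults 0 0\n' "$1"
}

# Hypothetical provisioning step; expects a partition (e.g. /dev/sda1,
# created beforehand with parted/fdisk), not the whole disk.
provision_data_disk() {
    dev=$1
    mkfs.xfs -f "$dev"
    udevadm settle                           # let udev re-read the new superblock
    uuid=$(blkid -s UUID -o value "$dev")    # current UUID, post-mkfs
    fstab_line "$uuid" >> /etc/fstab
    systemctl daemon-reload
    mount /data
}
```

Reading the UUID with `blkid` straight from the device, after `udevadm settle`, avoids the stale `/dev/disk/by-uuid/` symlink the original script was picking up.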

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.