Questions about partitions that were unexpectedly created after installing Rocky Linux 8.9

I am a beginner to Rocky Linux. I used Rufus 4.1 to create a USB boot drive containing the Rocky Linux 8.9 installer (GPT partition scheme) to install on a server. Before this, I had used Rufus with the same USB drive to write other operating system installers (Windows 11, Rocky Linux 8.5, and Rocky Linux 8.9 with the MBR partition scheme).

I configured the Rocky Linux 8.9 installer to create custom partitions and mountpoints, but not without some difficulties. Installation of Rocky Linux 8.9 was successful, and subsequent system restarts and general usage of Rocky Linux has been working without issue.

I used lsblk to check all the partitions and mount points, and I noticed that the device containing the rl-root volume had two other, unexpected partitions.

NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdc             8:32   0 223.5G  0 disk
├─sdc1          8:33   0   200M  0 part
├─sdc2          8:34   0     2G  0 part
├─sdc3          8:35   0     1G  0 part /boot/efi
├─sdc4          8:36   0     1G  0 part /boot
└─sdc5          8:37   0    74G  0 part
  ├─rl-root   253:0    0    70G  0 lvm  /
  └─rl-swap   253:1    0     4G  0 lvm  [SWAP]

I executed parted on the device, which showed the following output. Partitions 1 and 2 were the unexpected ones:

Disk /dev/sdc: 240GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name                  Flags
 1      1049kB  211MB   210MB   fat32        EFI system partition  boot, esp
 2      211MB   2358MB  2147MB  fat32        Basic data partition  msftdata
 3      2358MB  3432MB  1074MB  fat32        EFI System Partition  boot, esp
 4      3432MB  4506MB  1074MB  xfs
 5      4506MB  84.0GB  79.5GB                                     lvm

I have some questions.

  1. In the future, I intend to upgrade this server’s operating system from Rocky Linux 8.9 to the latest version of Rocky Linux 9. Does (or could) the existence of Partition 1 and Partition 2 complicate an in-place upgrade (using ELevate or a similar guide)?
  2. Is it recommended that Partition 1 and Partition 2 be removed?
  3. If yes to removing Partition 1 and Partition 2, what is the safest way to proceed?

Thank you for your consideration. Any insight is appreciated.

You mentioned you had difficulties doing the custom partitioning. Chances are that’s when these partitions were created. Either that, or they were already on the disk before you started creating additional partitions.

You could remove them, but together they only amount to about 2 GB, so you won’t gain much by deleting them. If it’s a fresh install, the simplest approach is to delete all partitions and start again from scratch.
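
If you do decide to delete them, a minimal sketch with parted (assuming the device really is /dev/sdc and the partition numbers match your output; double-check with print before removing anything):

parted /dev/sdc print
parted /dev/sdc rm 2
parted /dev/sdc rm 1

Removing a partition with parted takes effect immediately and is destructive, so confirm the partition numbers carefully first.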


Those two are fat32, so they can be mounted just like /boot/efi. Mount them somewhere and check whether they contain any files that could reveal more about those partitions. For example:

mkdir /mnt/test
mount -v /dev/sdc1 /mnt/test
ls -l /mnt/test

The parted output shows that both sdc1 and sdc3 are EFI system partitions.
Does efibootmgr -v show any entries that point to sdc1?

lsblk -f and blkid show the UUIDs too.
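
For example (device and partition names assumed from your lsblk output), you can cross-reference the UUIDs against the firmware boot entries:

lsblk -f /dev/sdc
blkid /dev/sdc1 /dev/sdc3
efibootmgr -v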

Microsoft does not recommend having two ESPs on the same drive.


Thank you for your suggestions. I wanted to add that this is a Dell server, as one of the outputs below indicates the hardware vendor.

I created directories to mount /dev/sdc1 and /dev/sdc2, and executed tree -a to view all directories and files.

$ mount -v /dev/sdc1 /test1
mount: /dev/sdc1 mounted on /test1.

$ tree -a /test1
/test1
└── EFI
    ├── BOOT
    │   └── bootx64.efi
    └── Dell
        └── BootOptionCache

4 directories, 1 file

$ mount -v /dev/sdc2 /test2
mount: /dev/sdc2 mounted on /test2.

$ tree -a /test2
/test2

0 directories, 0 files

I executed blkid followed by efibootmgr -v to look for any UUIDs matching /dev/sdc1. Only the PARTUUID of /dev/sdc1 was common to both outputs.

$ blkid
/dev/sdc1: LABEL="ESP" UUID="6294-A52D" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI system partition" PARTUUID="4c053948"

$ efibootmgr -v
Boot0001* EFI Fixed Disk Boot Device 1 PciRoot(0x0)/Pci(0x1d,0x0)/Pci(0x0,0x0)/SCSI(0,0)/HD(1,GPT,4c053948,0x800,0x64000)

I’m not exactly certain what these findings mean. The /dev/sdc1 partition is not mounted anywhere, but it still appears as a boot entry in the efibootmgr -v output.
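
If that Dell-created entry turns out to be stale, I assume it could be deleted from NVRAM with efibootmgr without touching the partition itself. This is only a sketch, and I would first confirm that the system actually boots from the Rocky Linux entry (check BootCurrent and BootOrder):

$ efibootmgr                 # check BootCurrent and BootOrder first
$ efibootmgr -b 0001 -B      # delete only the Boot0001 NVRAM entry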