I have plenty of experience with CentOS up to and including 7, but no experience with CentOS or Rocky Linux 8.
I want to install a new physical server using software RAID1. Does the installer support this? If I need a separate /boot partition, can this be on a RAID partition?
Any notes on the process of configuring RAID1 during installation?
Yes, the installer supports this, and /boot on a RAID partition is fine as long as it is RAID 1. Other RAID levels would not work for /boot.
Here’s my documentation about using RAID 1 with two disks on Rocky Linux 8. On a side note, I’m not a big fan of the graphical installer, so I boot into rescue mode, set up my RAID arrays manually using good old tools like mdadm, and then start the installer and use the preconfigured arrays.
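A minimal sketch of that manual setup from rescue mode, assuming two blank disks /dev/sda and /dev/sdb (the device names, partition sizes, and layout are illustrative, not from the original post):

```shell
# Partition the first disk (GPT): one partition for /boot, one for the rest,
# both typed as Linux RAID (fd00). sgdisk comes from the gdisk package.
sgdisk -n 1:0:+1G -t 1:fd00 /dev/sda   # will become the /boot mirror
sgdisk -n 2:0:0   -t 2:fd00 /dev/sda   # will become the root/LVM mirror
sgdisk -R /dev/sdb /dev/sda            # replicate the layout onto the second disk
sgdisk -G /dev/sdb                     # give the copy new, unique GUIDs

# Create the RAID 1 arrays; the installer then sees them as existing devices.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
```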
Oh yes. Even RAID 1 has more than one metadata format, and that can affect /boot too, as the “UEFI booting and RAID1” post on codeblog noticed:
Alas, the RHEL 8 documentation does not say anything about the metadata format. Perhaps the bootloader “understands” it?
(Relatively easy to test: do install and see if you can boot. If not, then reinstall and use a workaround.)
Not had that issue on mine with UEFI:
root@s01:~# cat /etc/fstab | grep -i boot
# /boot = /dev/md0
UUID=9f720a1f-7257-4a30-889d-8d06cfc02e74 /boot ext2 noatime 0 2
# /boot/efi = /dev/nvme0n1p1
UUID=1080-29D6 /boot/efi vfat umask=0077 0 1
root@s01:~# mdadm --detail /dev/md0
Version : 1.2
Creation Time : Mon Jun 7 09:57:23 2021
Raid Level : raid1
Array Size : 498688 (487.00 MiB 510.66 MB)
Used Dev Size : 498688 (487.00 MiB 510.66 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
That shows metadata 1.2. GRUB 2 seems to support it.
The biggest issue here is /boot/efi - it cannot be on RAID. So it will exist on only one disk, which would obviously cause problems if that disk disappeared. To be honest, I haven’t looked into resolving that, other than copying the contents of /boot/efi. And I haven’t even attempted to test how UEFI would behave during a disk failure.
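Keeping a copy of the ESP contents could be sketched like this, assuming a second FAT-formatted partition /dev/sdb1 already exists on the other disk (the device name and mount point are assumptions):

```shell
# Mount the spare ESP and mirror the live one onto it.
mkdir -p /mnt/efi-backup
mount /dev/sdb1 /mnt/efi-backup
rsync -a --delete /boot/efi/ /mnt/efi-backup/
umount /mnt/efi-backup
```

Re-running this (e.g. from a cron job or an Ansible task) after every bootloader update keeps the spare current.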
That said, I use Ansible anyway, so spinning up a new server, and restoring data if I needed to, wouldn’t be a big issue.
Can’t? I would have assumed that the EFI firmware reads the ESP the way GRUB reads /boot – unaware of RAID. If so, it sees a filesystem in the partition and is happy (which might require that 0.9 metadata).
Except, the EFI boot menu entry does specify a disk. If that entry is unique (not mirrored), then one needs a backup entry pointing to the other ESP.
Or, have a sufficient bootloader in /boot/efi/EFI/BOOT/, as that is the default location where EFI should look in the ESP.
All things that ansible can ensure to be there as plan B.
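Registering a backup boot entry for the second ESP might look like this (the disk, partition number, label, and loader path are assumptions for illustration):

```shell
# Add a UEFI boot entry pointing at the spare ESP on the other disk.
efibootmgr --create --disk /dev/sdb --part 1 \
    --label "Rocky Linux (backup ESP)" \
    --loader '\EFI\rocky\grubx64.efi'

# Verify the new entry and the boot order.
efibootmgr -v
```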
From searching now, I see a lot of mixed results, so without testing it I can’t confirm: EFI system partition - ArchWiki
This hints that /boot/efi on RAID is possible provided its metadata version is 1.0 - so that the metadata sits at the end of the partition.
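If that approach works, the array creation would look roughly like this (untested, and the partition names are illustrative):

```shell
# Metadata 1.0 puts the RAID superblock at the end of the device,
# so the firmware just sees a plain FAT filesystem at the start.
mdadm --create /dev/md127 --level=1 --raid-devices=2 --metadata=1.0 \
    /dev/sda1 /dev/sdb1
mkfs.vfat -F 32 /dev/md127
```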
I have seen other articles using dd between partitions and then adding the second partition with efibootmgr. Again, I have not tested it.
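The dd variant those articles describe would be roughly the following (again untested; the device names are assumptions, and both partitions must be the same size):

```shell
# Clone the live ESP block-for-block onto the matching partition on the second disk.
dd if=/dev/sda1 of=/dev/sdb1 bs=1M conv=fsync
```

Unlike a file-level copy, this duplicates the filesystem UUID as well, which some firmware may or may not tolerate.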
It’s been a while since I did this, so I would have to check in a lab environment. There is a chance that when I did mine it was defaulting to metadata 1.2, and hence why it wouldn’t work. So I opted for a standard partition without RAID for the ESP.
I ran a test installation using a VM so that I could familiarize myself with setting up RAID through the installer. I gave the guest 2 virtual hard drives.
This was successful, although removal of either one of the virtual drives results in the boot process hanging for a minute or two in a couple of places.
Assuming I create an EFI partition on both drives, can I use dd to copy the active EFI partition to the other drive?
I think you need metadata version 0.90 (or 1.0) to have the metadata at the end of the partition.
Looking at a similar system and the manual for the server I will be installing, I think that I won’t have an EFI partition; instead I will boot in legacy mode and will need a biosboot partition.
I suppose I can run grub2-install on both drives to make sure each biosboot partition is populated?
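That should work; on a BIOS/GPT layout with a biosboot partition on each disk, it would be something like (disk names assumed):

```shell
# Install GRUB's boot code into the biosboot partition of each disk,
# so the system can still boot from either one if the other fails.
grub2-install /dev/sda
grub2-install /dev/sdb
```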