I have plenty of experience with CentOS up to and including 7, but no experience with CentOS or Rocky Linux 8.
I want to install a new physical server using software RAID1. Does the installer support this? If I need a separate /boot partition, can this be on a RAID partition?
Any notes on the process of configuring RAID1 during installation?
Here’s my documentation about using RAID 1 with two disks on Rocky Linux 8. On a side note, I’m not a big fan of the graphical installer, so I boot into recovery mode, set up my RAID arrays manually using good old tools like fdisk, gdisk and mdadm, then start the installer and use the preconfigured arrays.
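A rough sketch of that manual pre-install setup, assuming two blank disks named /dev/sda and /dev/sdb (device names and partition layout are placeholders — adjust to your hardware):

```shell
# Partition the first disk interactively (e.g. a boot partition and a root
# partition, both with type FD00 "Linux RAID"), then replicate the table.
gdisk /dev/sda
sgdisk --replicate=/dev/sdb /dev/sda   # copy partition table sda -> sdb
sgdisk --randomize-guids /dev/sdb      # give the copy unique GUIDs

# Mirror each partition pair; the installer will then offer /dev/md0 and
# /dev/md1 as ready-made devices.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
```

These commands are destructive to the target disks, so only run them on a machine you are installing from scratch.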
root@s01:~# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Mon Jun  7 09:57:23 2021
        Raid Level : raid1
        Array Size : 498688 (487.00 MiB 510.66 MB)
     Used Dev Size : 498688 (487.00 MiB 510.66 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent
As you can see, the metadata version shows 1.2, and GRUB 2 seems to support that.
The biggest issue here is /boot/efi - it cannot be on RAID, so it will only exist on one disk, and that would obviously cause problems if that disk disappeared. TBH I haven’t looked into resolving that, other than copying the contents of /boot/efi, and I haven’t even attempted to test how UEFI would behave during a disk failure.
That said, I use ansible anyway, so spinning up a new server and restoring data wouldn’t be a big issue if I needed to.
Can’t? I would have assumed that the EFI firmware reads the ESP the way GRUB reads /boot – unaware of RAID. If so, then it sees a filesystem in the partition and is happy (which might require metadata 0.90, with the superblock at the end).
Except, the EFI boot menu entry does specify a disk. If that entry is unique (not mirrored), then one needs a backup entry pointing at the other ESP.
Or, have a sufficient bootloader in /boot/efi/EFI/BOOT/, as that is the default path the firmware should look for in an ESP.
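One way to provide that plan B, assuming the ESP is mounted at /boot/efi and Rocky installed its loaders under /boot/efi/EFI/rocky/ (paths are assumptions — check your own system):

```shell
# Copy the installed loaders into the removable-media fallback path, which
# the firmware tries when no NVRAM boot entry matches.
mkdir -p /boot/efi/EFI/BOOT
cp /boot/efi/EFI/rocky/shimx64.efi /boot/efi/EFI/BOOT/BOOTX64.EFI
cp /boot/efi/EFI/rocky/grubx64.efi /boot/efi/EFI/BOOT/
```

This is exactly the kind of thing ansible can re-apply after every bootloader update so the fallback copy never goes stale.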
All things that ansible can ensure to be there as plan B.
This hints that it is possible to have /boot/efi on RAID provided its metadata is version 1.0 - so that the superblock is at the end of the partition.
I have seen other articles using dd between partitions and then adding the second partition with efibootmgr. Again, I haven’t tested it.
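For reference, the dd-plus-efibootmgr approach those articles describe would look roughly like this, assuming /dev/sda1 is the active ESP and /dev/sdb1 is a same-sized twin on the second disk (untested sketch, device names and loader path are assumptions):

```shell
# Clone the active ESP block-for-block onto the second disk's ESP.
dd if=/dev/sda1 of=/dev/sdb1 bs=1M

# Register a firmware boot entry for the copy (disk 2, partition 1,
# same loader path as the original entry).
efibootmgr --create --disk /dev/sdb --part 1 \
    --label "Rocky (disk 2)" --loader '\EFI\rocky\shimx64.efi'
```

The clone has to be refreshed whenever the original ESP changes (e.g. after shim/grub package updates), which again is a job for ansible or a cron script.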
It’s been a while since I did this, so I would have to check in a lab environment. There is a chance that when I did mine, it was defaulting to metadata 1.2 and hence why it wouldn’t work. So I opted for a standard partition without RAID for UEFI.
I ran a test installation using a VM so that I could familiarize myself with setting up RAID through the installer. I gave the guest 2 virtual hard drives.
This was successful, although removing either one of the virtual drives results in the boot process hanging for a minute or two in a couple of places.
Assuming I create an EFI partition on both drives, can I use dd to copy the active EFI partition to the other drive?
Looking at a similar system and the manual for the server I will be installing, I think that I won’t have an EFI partition, but instead will boot in legacy mode and will need a BIOSBOOT partition.
I suppose I can run grub2-install (as the command is named on Rocky/RHEL) against both drives to make sure each BIOSBOOT partition is populated?
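For the legacy-BIOS case, that would be something like the following, assuming GPT disks /dev/sda and /dev/sdb (hypothetical names), each carrying a biosboot partition:

```shell
# grub2-install embeds GRUB's core.img into the biosboot partition of the
# given disk and writes the MBR boot code, so run it once per disk to make
# either disk bootable on its own.
grub2-install /dev/sda
grub2-install /dev/sdb
```

Since /boot itself is on the RAID 1 array, both copies of core.img will find the same grub.cfg regardless of which disk the BIOS boots from.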