Created a RAID with mdadm and now I can't boot to the desktop

I created a RAID 5 array on a 5-bay external enclosure and now the system won’t boot to the desktop. Any help would be greatly appreciated. The RAID was working fine while I was in the system.

Does not boot to desktop

Where does it stall? Can you boot to multi-user/single/rescue/emergency?
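
For example, with a systemd-based distro and GRUB you can pick a target for a single boot: press “e” on the boot entry, append one of the following to the linux line, and boot with Ctrl-x (your boot menu keys may differ):

systemd.unit=multi-user.target   (text login, no desktop)
systemd.unit=rescue.target       (single-user root shell)
systemd.unit=emergency.target    (most minimal; root may be read-only)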

It sounds like you created an additional data volume. If so, the system at large does not depend on that volume.
However, if the volume is listed in /etc/fstab as “must be checked and mounted on boot”, then yes,
it can block progress. You can comment that line out if you can get console access.


You can boot from the install media and select “Troubleshooting”. That gives you a console where you can mount the root (/) filesystem and edit it.
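
Roughly like this in the rescue console (the device name is just an example; check with lsblk first, and note that some rescue environments offer to mount the installed system for you, e.g. under /mnt/sysimage):

$ mkdir -p /mnt/sysroot
$ mount /dev/sda2 /mnt/sysroot    # the installed root filesystem
$ vi /mnt/sysroot/etc/fstab       # comment the RAID line out with “#”
$ umount /mnt/sysroot
$ reboot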

Thank you so much for the reply. I was able to delete the RAID in rescue mode; then I had to delete the lines in /etc/fstab, and it did boot. I had set it up to auto-mount at boot; could that have been the issue? This was my first time using mdadm and everything looked great until I tried to restart and it wouldn’t boot. It was stalling while looking for the RAID; that’s why I deleted it. I would love to get it working properly. I was using an external enclosure; not sure if that was the issue?

Two things:

  • In principle, when all is OK, the array is assembled and can be mounted on boot
  • “auto mount” more commonly refers to automount, which differs from “mount at boot”

Consider these two example fstab entries:

UUID=e9..f4 /opt   ext4  defaults        1 2
UUID=29..0e /sysC7 ext4  defaults,noauto,nofail,x-systemd.automount,x-systemd.idle-timeout=300 1 2

In the above, the first volume is mounted to /opt during boot. If anything about that mount fails, the boot stalls.

The second volume is (most likely) not mounted during boot. Instead, systemd will generate a unit file, a special “service”, that automatically mounts the volume if and when something tries to access /sysC7. It will also automatically unmount the volume after it has been unused for a while (idle-timeout=300 seconds). Therefore, a failure to mount it is less likely to freeze the whole system.
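
If you want to see what systemd generated from such an entry, something like this should show it (the unit names are derived from the mount point, here /sysC7):

$ systemctl list-units --type=automount
$ systemctl status sysC7.automount
$ systemctl status sysC7.mount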

An alternative to systemd’s automount is autofs.service. Both mount “on demand”.
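
For reference, autofs is configured with a master map plus a map file per mount root. A minimal sketch, with a made-up server name and paths:

/etc/auto.master.d/site.autofs:
/site  /etc/auto.site  --timeout=300

/etc/auto.site:
mirrors  -ro,nosuid,nodev  nas.localdomain:/mirrors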


I would create the array and filesystem, but not add anything to /etc/fstab yet. Then reboot.
If the array is assembled after the boot (see cat /proc/mdstat), check whether you can manually mount the filesystem in it; a rough outline follows below.
If the array is not automatically assembled, then resolve that first.
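
The outline, with example device names only; verify with lsblk before running anything destructive:

$ sudo mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sd[b-f]
$ sudo mkfs.ext4 /dev/md0
$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf   # Debian-likes use /etc/mdadm/mdadm.conf
$ reboot

…and after the reboot:

$ cat /proc/mdstat
$ sudo mount /dev/md0 /mnt

If assembly still fails during boot, the initramfs may need regenerating (e.g. sudo dracut -f on Fedora-likes) so that early boot knows about the array.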

This is awesome! Thank you so much for this information. I’m a little traumatized from the first go-around, as I need this system for work, but I’m going to save this information, do some backups, and give it another go. Thank you again!

Hey there, I have a question related to all of this. I am running an external 2-drive hardware RAID that mounts automatically at boot, but it performs poorly. If I unmount it and remount it, it performs as it should. I was thinking that adding a delay to the mount might be worth a try. Is that something you think might help? I thought it would be simple to find out how, but I can’t figure out how to do it.

I don’t know why a remount “helps”.


With the automounters, mounts do not happen at boot; only the automounters themselves are up:

$ findmnt -a | grep -E "fast1|site"
|-/mnt/fast1      systemd-1                autofs rw,relatime,...
|-/site           auto.site                autofs rw,relatime,...

Only after the paths have been visited are the mounts actually done:

$ findmnt -a | grep -E "fast1|site"
|-/mnt/fast1      systemd-1                autofs rw,relatime,...
| `-/mnt/fast1    nas.localdomain:/fast1   nfs4   rw,relatime,vers=4.1,rsize=65536,...
|-/site           auto.site                autofs rw,relatime,...
| `-/site/mirrors nas.localdomain:/mirrors nfs4   ro,nosuid,nodev,relatime,vers=4.1,...

So both of these (systemd’s automount and autofs.service) do “delay until use”.

Since the latter is a service with a systemd unit file, it should be possible to make it start later, after other units that start near the end of boot. Alternatively, a timer unit could trigger (start) the service at a later point in time.

As I wrote, fstab entries that contain x-systemd.automount generate mount units that systemd starts. One should be able to write and store such units explicitly, remove the corresponding entry from fstab, and then tune when the units are actually started, just as for services.

I have not ventured into that.
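
Still, an untested minimal sketch of such a pair (hypothetical names; per systemd.mount(5) the mount unit’s file name must match the mount point):

/etc/systemd/system/mnt-fast1.mount:

[Unit]
Description=Delayed mount of the external RAID

[Mount]
# fill in your volume’s UUID (see blkid)
What=/dev/disk/by-uuid/<UUID>
Where=/mnt/fast1
Type=ext4
Options=defaults

/etc/systemd/system/mnt-fast1.timer:

[Unit]
Description=Mount the external RAID two minutes after boot

[Timer]
OnBootSec=2min
Unit=mnt-fast1.mount

[Install]
WantedBy=timers.target

One would then enable only the timer (systemctl enable --now mnt-fast1.timer), not the mount unit, so the mount happens when the timer fires rather than at boot.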


A delay can help only if (a) the initial mount and the later remount occur in different circumstances, (b) that difference is the reason for the change in performance, and (c) the circumstances change even when no initial mount and unmount are performed.

Thank you again for the information! I’ll process this today and try to get to the bottom of it.
