Stop automounting second hard disk during boot before /etc/fstab is read

I have a computer named nexus with two hard drives, /dev/sdb0 and /dev/sdb1. My /etc/fstab is:

#
# /etc/fstab
# Created by anaconda on Sat Sep 30 15:29:58 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/rl-root     /                       xfs     defaults        0 0
UUID=2db34dbb-e0ee-41dd-9e8f-06aacb27ea08 /boot                   xfs     defaults        0 0
/dev/mapper/rl-home     /home                   xfs     defaults        0 0
/dev/mapper/rl-swap     none                    swap    defaults        0 0
/dev/sdb1 /disk2	ext4	defaults	1 2

The last line mounts my second disk as /disk2, and usually this works fine. However, every third or fourth time I boot, it fails because the system has already mounted /dev/sdb1 as /boot. The relevant /var/log/messages entries from the last time this happened are:

Feb  2 13:39:03 nexus systemd[1]: Reached target Preparation for Local File Systems.
Feb  2 13:39:03 nexus systemd[1]: Mounting /boot...
Feb  2 13:39:03 nexus systemd[1]: Mounting /home...
Feb  2 13:39:03 nexus systemd[1]: Starting File System Check on /dev/sdb1...
Feb  2 13:39:03 nexus kernel: XFS (dm-2): Mounting V5 Filesystem 3bededc8-ba90-4f2d-9458-80d4ca32fa9b
Feb  2 13:39:03 nexus kernel: XFS (sdb1): Mounting V5 Filesystem 2db34dbb-e0ee-41dd-9e8f-06aacb27ea08
Feb  2 13:39:04 nexus kernel: XFS (dm-2): Ending clean mount
Feb  2 13:39:04 nexus systemd[1]: Mounted /home.
Feb  2 13:39:04 nexus kernel: XFS (sdb1): Ending clean mount
Feb  2 13:39:04 nexus systemd[1]: Mounted /boot.
Feb  2 13:39:04 nexus systemd-fsck[833]: /dev/sdb1 is mounted.
Feb  2 13:39:04 nexus systemd-fsck[833]: e2fsck: Cannot continue, aborting.
Feb  2 13:39:04 nexus systemd-fsck[815]: fsck failed with exit status 8.
Feb  2 13:39:04 nexus systemd-fsck[815]: Ignoring error.
Feb  2 13:39:04 nexus systemd[1]: Finished File System Check on /dev/sdb1.
Feb  2 13:39:04 nexus systemd[1]: Mounting /disk2...
Feb  2 13:39:04 nexus kernel: /dev/sdb1: Can't open blockdev
Feb  2 13:39:04 nexus mount[834]: mount: /disk2: /dev/sdb1 already mounted on /boot.
Feb  2 13:39:04 nexus systemd[1]: disk2.mount: Mount process exited, code=exited, status=32/n/a
Feb  2 13:39:04 nexus systemd[1]: disk2.mount: Failed with result 'exit-code'.
Feb  2 13:39:04 nexus systemd[1]: Failed to mount /disk2.
Feb  2 13:39:04 nexus systemd[1]: Dependency failed for Local File Systems.
Feb  2 13:39:04 nexus systemd[1]: Dependency failed for Mark the need to relabel after reboot.
Feb  2 13:39:04 nexus systemd[1]: selinux-autorelabel-mark.service: Job selinux-autorelabel-mark.service/start failed with result 'dependency'.
Feb  2 13:39:04 nexus systemd[1]: local-fs.target: Job local-fs.target/start failed with result 'dependency'.

Does anyone know why this happens, and what I can do to prevent it from occurring in the future? I’m running Rocky Linux release 9.5 (Blue Onyx). Thanks to all who respond.

Try using a UUID instead of the block device path. You can get the UUID by running blkid, then replace /dev/sdb1 with UUID=&lt;value&gt; in /etc/fstab.
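As a minimal sketch of that edit (the UUID below is made up for illustration; substitute whatever blkid actually prints for your disk):

```shell
# Print the UUID of the filesystem on /dev/sdb1 (needs root):
#   blkid -s UUID -o value /dev/sdb1

# Hypothetical UUID, for illustration only:
uuid="11111111-2222-3333-4444-555555555555"

# The existing fstab line, rewritten to reference the UUID instead of
# the kernel-assigned device name:
line="/dev/sdb1 /disk2 ext4 defaults 1 2"
echo "$line" | sed "s|^/dev/sdb1|UUID=$uuid|"
# -> UUID=11111111-2222-3333-4444-555555555555 /disk2 ext4 defaults 1 2
```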


Hi,
Don’t use device names (like /dev/sdb1) for mounting devices in fstab. The Linux kernel assigns these names in detection order, so they are not guaranteed to be persistent across reboots.

Always mount devices by one of:

  1. the UUID of the filesystem,
  2. a LABEL you assign, or
  3. in the case of LVM, the LVM device name (like /dev/DATA/DISK1).
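As a sketch, here is the same /disk2 entry written each of those three ways (the UUID, label, and LVM names are made-up placeholders, not values from your system):

```
# 1. By filesystem UUID (from 'blkid' or 'lsblk -f'):
UUID=11111111-2222-3333-4444-555555555555  /disk2  ext4  defaults  1 2

# 2. By label (assign one first, e.g. 'e2label /dev/sdb1 disk2'):
LABEL=disk2                                /disk2  ext4  defaults  1 2

# 3. By LVM device name, if the disk were an LVM logical volume:
/dev/DATA/DISK1                            /disk2  ext4  defaults  1 2
```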

The way the system names things, /dev/sdb is one hard drive, and names like /dev/sdb1 are partitions on that drive. If there were two drives, they would be /dev/sda and /dev/sdb, and /dev/sdb1 would be the name of the partition that holds either /boot or /disk2.

You can see all drives/partitions/volumes with:

lsblk

and also their filesystem type, LABEL, and UUID with:

lsblk -f

I’d look up the UUID from that and update /etc/fstab accordingly.


PS. On terminology: there have been many implementations of an “automounter”. The two available in Rocky are systemd’s automount units and the separate autofs service. With both, “automount” means that the “drive” is not mounted by default, i.e. not at boot. Instead, these services watch whether any process tries to access something under the mount point (in your example, in or under /disk2). Only on access is the mount actually performed (if possible), and the filesystem can also be unmounted automatically if nothing accesses it for a while.

It is possible to add the noauto option to an fstab entry to prevent it from being mounted during boot. One could then mount it manually (e.g. mount /disk2) or use an automounter. None of these, however, does anything about the non-predictable name /dev/sdb1.
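For the systemd automount variant, the fstab entry could look roughly like this (a sketch, assuming the UUID fix is applied as well; the UUID is a placeholder):

```
# Mounted on first access instead of at boot; unmounted again after
# 10 minutes of inactivity (see systemd.mount(5) and systemd.automount(5)):
UUID=11111111-2222-3333-4444-555555555555  /disk2  ext4  noauto,x-systemd.automount,x-systemd.idle-timeout=600  0 2
```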

Thank you nazunalika, linuxlover, and jlehtone for your helpful and informative responses. I changed my /etc/fstab file to use the second drive’s UUID, and when I rebooted, my system came up correctly.

This line is really strange, as it looks like your filesystem on /dev/sdb1 has the same UUID as the filesystem on /boot.
Can you check whether both filesystems really do have the same UUID, or whether for some reason the software is reporting the wrong UUID for /dev/sdb1?

Not really. It was already explained that the /dev/sdX names are not assigned deterministically.
Therefore, on some boots /dev/sdb1 refers to the filesystem that holds /boot (and hence both show the same UUID), and on others to the other filesystem.

You are right, though, that the same UUID on multiple filesystems is possible and usually a problem.[1]
As @lhouk already implied, that does not seem to be the issue here.

[1] RAID 1 and multipathing typically have that, but there it is not an issue as it is intentional (and handled by said setups).