The old (hardware) system that I use as a local fileserver stopped booting after I attempted to install support for ntfs drives.
The hardware has a SATA controller with 4 plug-in bays. It has been happily running for months with an xfs filesystem mounted at /mnt/internal_hd0; that physical volume sits in the top bay of the SATA controller.
I know that the volume I want to mount has an ntfs filesystem with about 1 TB of stuff on it. I therefore used dnf to add ntfs support. I then shut down the system, installed the second volume in the second bay, and attempted to start the system.
Here are the specific commands I issued (as root) to add ntfs support:
dnf -y update
dnf -y install ntfs-3g
dnf install ntfsprogs -y
shutdown -r now
All seemed fine.
The system now fails to mount
/mnt/internal_hd0 on startup. On the console, I see a complaint that says:
[ TIME ] Timed out waiting for device dev-disk-by\x2duuid-1f97ecf3\x2d71db\x2d43c4\x2d82c3\x2d9d4750354b4b.device.
It then boots into “emergency mode”.
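If I'm reading that unit name right, systemd escapes '-' as \x2d and '/' as '-' when it turns a device path into a unit name, so the message refers to /dev/disk/by-uuid/1f97ecf3-71db-43c4-82c3-9d4750354b4b. A sketch of the decoding (this is my own reconstruction, not output captured from the failing box):

```shell
# Reproduce systemd's escaping by hand: '-' becomes \x2d, then '/' becomes '-'.
# The result should match the device unit named in the console message.
printf '%s\n' "dev/disk/by-uuid/1f97ecf3-71db-43c4-82c3-9d4750354b4b" \
  | sed -e 's/-/\\x2d/g' -e 's,/,-,g'
# On a systemd box, `systemd-escape -p <path>` performs the same transformation.
```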
Is this a hardware or software issue? I note that /mnt/internal_hd0 is mounted by UUID, and the UUID is 1f97ecf3-71db-43c4-82c3-9d4750354b4b (the device named in the timeout message above).
In “emergency mode”, I’m able to examine /dev/disk/by-uuid. When I do, I see three entries, none of which match the UUID that /mnt/internal_hd0 is supposed to mount by.
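For reference, this is roughly the comparison I'm trying to make (I'm assuming here that the boot-time mount comes from a UUID= entry in /etc/fstab; the question of where it's actually configured is part of what I'm asking):

```shell
# UUIDs of the block devices the kernel currently sees
# (same information as the /dev/disk/by-uuid symlinks):
blkid -s UUID 2>/dev/null || ls -l /dev/disk/by-uuid 2>/dev/null

# UUIDs that boot-time mounts expect, pulled from fstab
# (assumption: the mounts are written as UUID=... entries):
awk '$1 ~ /^UUID=/ {print substr($1, 6), "->", $2}' /etc/fstab 2>/dev/null || true
```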
I thought that the UUID was not supposed to EVER change! What happened?
I’ve now removed the two packages:
dnf remove ntfsprogs
dnf remove ntfs-3g
This had no apparent effect.
Is this a hardware or software failure?
I’d like suggestions about how to:
- Get the system running again (hopefully without trashing the contents of /mnt/internal_hd0).
- Mount an ntfs volume (perhaps on a second mount point) alongside it.
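For the second item, what I imagine ending up with is something like the following in /etc/fstab (both UUIDs and the second mount point are placeholders, not values from my system). My understanding is that the nofail option keeps a missing or unreadable disk from dropping the boot into emergency mode:

```
# existing xfs volume in the top bay
UUID=<uuid-of-xfs-volume>   /mnt/internal_hd0  xfs      defaults         0 0
# ntfs volume in the second bay (mount point is a placeholder)
UUID=<uuid-of-ntfs-volume>  /mnt/internal_hd1  ntfs-3g  defaults,nofail  0 0
```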