My notes from setting up fstrim on the older SSDs that I have on my other computers tell me to use hdparm to check whether TRIM is supported, but this is what I get on this new machine:
I enter the command “hdparm -I /dev/nvme0n1”
The output is “/dev/nvme0n1:”
That’s it. No other wording at all.
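From what I can tell, hdparm speaks ATA, which would explain the bare output on an NVMe device. Something like lsblk can report discard support for any block device instead (a quick sketch, using the same device name):

lsblk --discard /dev/nvme0n1

Non-zero values in the DISC-GRAN and DISC-MAX columns mean the device accepts discard (TRIM) requests.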
So this leads me to a few questions.
First, does this mean that there’s no need to run fstrim on this drive? Or is there some other check I should run to find out whether it’s needed?
If there is a need, then how do I do it? The method I used previously was to change the line “issue_discards=0” to “issue_discards=1” in /etc/lvm/lvm.conf, run “dracut -f” to rebuild the initramfs, and then put a little bash script into a weekly cronjob for root.
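For reference, the old recipe looked roughly like this (the script name and log path here are just illustrative):

# /etc/lvm/lvm.conf: let LVM issue discards when LV space is freed
issue_discards = 1

# rebuild the initramfs so the change is picked up at boot
dracut -f

# /etc/cron.weekly/fstrim.sh (illustrative): weekly trim of the root filesystem
#!/bin/bash
/usr/sbin/fstrim -v / >> /var/log/fstrim.log 2>&1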
However, at least some of my reading tonight suggests that none of this is actually required any more, and that all I need to do on today’s Linux, i.e. el9, is enable the fstrim.timer service and call it a day. fstrim.timer calls fstrim.service, which apparently checks all of the filesystems and trims anything that needs/supports it, all by itself, without my having to edit lvm.conf and use dracut to set it up?
Or have I the wrong idea altogether here?
So how do I check to see if this drive supports fstrim and how do I activate it if it’s required?
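From my reading so far, I think the answer boils down to something like this, though I’d appreciate confirmation:

# is the timer present, and is it running?
systemctl status fstrim.timer

# switch it on and start it immediately
systemctl enable --now fstrim.timer

# optional one-off manual run to see which filesystems actually get trimmed
fstrim --all --verbose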
I see. The drive partitioning part of that installer program tries to be so user friendly that it’s not entirely clear what it’s actually doing (to me, anyway).
Oh well. I’ve got these things set up and they’re working so I’ll call it good enough. If there’s a next time I’ll see if I can do it differently.
Part of your / is on sda. Files on that part are probably technically slower to access than files on the NVMe part, but in practice you might not notice.
The more serious risk is that if either drive breaks, every filesystem that has even part of its space on that drive will break.
XFS does not support shrinking, so the LVs are not trivial to lvresize/pvmove.
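If someone did want the root LV entirely on the NVMe, my understanding is it would go roughly like this (device names and the VG name are hypothetical, and it assumes the NVMe PV has enough free extents):

# move all allocated extents off the sda PV onto the NVMe PV
pvmove /dev/sda2 /dev/nvme0n1p3
# then drop the emptied PV from the volume group
vgreduce rl /dev/sda2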
If /usr or /var is partitioned separately from the rest of the root volume, the boot process becomes much more complex because these directories contain boot-critical components. In some situations, such as when these directories are placed on an iSCSI drive or an FCoE location, the system may either be unable to boot, or it may hang with a Device is busy error when powering off or rebooting.
This limitation only applies to /usr or /var, not to directories under them. For example, a separate partition for /var/www works without issues.
Important
Some security policies require the separation of /usr and /var, even though it makes administration more complex.
I’ve been putting /var on a separate partition for as long as I’ve been using SSDs for boot drives.
My two computers that I just replaced and these new ones all have an SSD boot drive and a spinning-rust drive for data.
My theory is/was that /var gets written to more or less constantly, while almost everything else under / (other than /home) gets written to only rarely.
So I put /var and /home on the rust and everything else on the ssd.
How about /tmp? That gets written to as well (and not only /var/tmp), and it is embarrassing to fill / with temp files. There is a “service” option to put /tmp in RAM, though.
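The “service” in question is systemd’s tmp.mount unit, if I remember right; on recent releases something like this should do it:

# mount a tmpfs on /tmp, now and on every boot
systemctl enable --now tmp.mount
# confirm what is mounted there
findmnt /tmp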
The point of RH’s warning is probably about, for example, /var/run and /var/lock, which were real directories in RHEL 6 (since RHEL 7 they have been symlinks into /run and /run/lock, where /run is a tmpfs).
There may still be services out there that point to the old locations. IIRC, systemd warns about munged’s unit file in el9. Hence, the symlinks had better appear early and persist until near shutdown.
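Easy enough to see the symlinks on an el9 box:

ls -ld /var/run /var/lock
# both should show as symlinks, e.g. /var/run -> ../run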
I doubt all subdirs of /var get frequent writes or need much space, but some definitely do.
/tmp in RAM is the default setting for new installs for the past few releases. I know that’s the default in 8 (and 9) for sure, and I can’t remember if it defaulted that way in 6 and 7 or not.
Anyway, at some point between 5 and 7 the default changed. It was always one of the first things that I “fixed” on a new install, though, ever since that option became available.
I will confirm (for anyone else who was in doubt like me) that fstrim “just works” on el9.
All you have to do is enable the fstrim.timer service; nothing else needs to be configured or changed at all.
That’s all I did and fstrim ran automatically on these computers last night. There are now entries in /var/log/messages stating how much was trimmed on each drive.
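For anyone who wants to double-check theirs, this is roughly what I looked at (the grep assumes you log to /var/log/messages like these boxes do):

# when the timer last fired and when it fires next
systemctl list-timers fstrim.timer
# what fstrim.service reported
journalctl -u fstrim.service
# or dig it out of syslog
grep fstrim /var/log/messages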
Whoopee. It works and no fancy hokey pokey required any more.
It used to be that you had to set issue_discards=1 in /etc/lvm/lvm.conf and rebuild the initramfs to make fstrim work.
At some point between back-when and el9 that apparently changed, since the issue_discards line in lvm.conf is still set to 0 by default, yet fstrim works when you enable fstrim.timer and take no other action.
I have no idea why it’s not enabled by default, since most of today’s computers probably have an SSD of some kind, and Red Hat certainly isn’t being bashful about pushing things like CPU ISA minimums, so support for older hardware apparently isn’t the reason.
/etc/lvm/lvm.conf does have this interesting note in it:
Not all storage will support or benefit from discards, but SSDs and thinly provisioned LUNs generally do. If enabled, discards will only be issued if both the storage and kernel provide support.
So it looks like fstrim will just go ahead and work on whatever needs it and ignore anything that doesn’t.
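You can also confirm the live setting without opening the file; lvmconfig will print it:

# 0 means LVM itself will not issue discards when LV space is freed;
# fstrim works either way
lvmconfig devices/issue_discards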
That being the case, I don’t see what would be lost by enabling fstrim by default.
Nothing really is lost now. Previously it was all done with fstab entries and cronjobs; now there is no need to set entries in fstab or create cronjobs. Enabling fstrim.timer is enough.
I did some googling when this thread started, and there’s a lot of info confirming that edits to fstab aren’t required when fstrim.timer is used. There’s no need to do both, as it gives no extra benefit.
It seems, at least from what I read, that fstrim.timer replaces the fstab/cronjob stuff from years before, and it deals with all filesystem types: ext4, btrfs, xfs, and so on.
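To make the before/after concrete, this is my understanding of the two approaches (the device path in the fstab line is just an example):

# old way: continuous discard via a mount option in /etc/fstab
/dev/mapper/rl-root  /  xfs  defaults,discard  0 0

# new way: periodic trim from the timer, no fstab changes at all
systemctl enable --now fstrim.timer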
Enabling it by default would just be for convenience: one less thing that needs to be changed on a new setup.
mlocate-updatedb.timer used to be disabled by default too, and when I went to enable it in el9 I discovered that I didn’t have to. Obviously these are two different things, but it could be the same convenience principle.