Problem with mount options in Rocky on AWS

We’ve been rolling out Rocky in Amazon, but I’m having trouble re-implementing quotas, which we use on EC2 in lieu of partitions. Anyone familiar with AWS EC2 knows that you can’t really partition the root volume of an instance. I’ve experimented with attaching multiple volumes to simulate disk partitions, but I quickly discovered that this just gave every instance multiple points of failure.

My workaround? XFS project quotas. On CentOS 7 they worked fine, but for some reason I can’t enable uquota and pquota on my downstream AMI built from the Rocky 8 Marketplace image.
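For reference, the setup that worked on CentOS 7 looked roughly like this — the project ID, name, and path below are illustrative, not my real config:

```shell
# fstab entry with user and project quota enabled (the quota options have
# to be present at the initial mount -- XFS won't turn them on later):
#   UUID=...  /  xfs  defaults,uquota,pquota  0 0

# Register a project: directory tree -> project ID -> human-readable name
echo '42:/var/www' >> /etc/projects        # illustrative ID and path
echo 'webroot:42'  >> /etc/projid

# Initialize the project and set a hard block limit on it
xfs_quota -x -c 'project -s webroot' /
xfs_quota -x -c 'limit -p bhard=10g webroot' /

# Verify
xfs_quota -x -c 'report -p' /
```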

Here’s a sample root mountpoint from /etc/fstab:

UUID=4b094269-bea5-4dc2-b327-490d7eacc07f / xfs defaults 0 0

As you can see, nothing crazy. Here’s the problem:

[root@azdlscout01:~]# mount | egrep 'xfs'
/dev/nvme0n1p1 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)

This is a snippet from 'man fstab':

                 use  default  options: rw, suid, dev, exec, auto, nouser,
                 and async.

noquota should not be in 'defaults'. What’s more, if I explicitly spell out the mount options in my fstab, leave out 'noquota', and append uquota,pquota to the list, noquota still shows up.
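For anyone reproducing this, the effective options come straight from /proc/self/mounts; here’s the one-liner I use, shown against a pasted sample line so it runs anywhere (on the real box you’d point awk at /proc/self/mounts instead):

```shell
# Sample /proc/self/mounts line from the affected instance; on a live system:
#   awk '$2 == "/" {print $4}' /proc/self/mounts
line='/dev/nvme0n1p1 / xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0'
printf '%s\n' "$line" | awk '$2 == "/" {print $4}'
# -> rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota
```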

Here’s a test system where I’ve got that set:

[root@azdlmagcom01:~]# egrep xfs /etc/fstab
UUID=4b094269-bea5-4dc2-b327-490d7eacc07f / xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,uquota,pquota 0 0
[root@azdlmagcom01:~]# mount | egrep xfs
/dev/nvme0n1p1 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
[root@azdlmagcom01:~]# blkid
/dev/nvme0n1p1: UUID="4b094269-bea5-4dc2-b327-490d7eacc07f" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="1606c10c-01"

I also include the blkid output to make perfectly clear that we’re talking about the same device. This is after both a reboot and a cold stop/start. Any sage advice as to what’s bollixing my mount?
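One theory I haven’t ruled out: the root filesystem is mounted by the initramfs before /etc/fstab is ever read, so quota flags in fstab may never reach the initial mount, and XFS only honors them at that point. If that’s right, they’d have to go on the kernel command line instead. The next thing I plan to try (untested so far, so treat this as a sketch):

```shell
# Pass the quota flags to the initial root mount via the kernel command line,
# then rebuild the initramfs and reboot:
grubby --update-kernel=ALL --args='rootflags=uquota,pquota'
dracut -f
reboot
```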