Trouble with growfs on XFS in Rocky Linux 9

I’m trying to grow the root partition and filesystem on Rocky Linux 9.

The disk was originally 10G and I have increased it to 30G, so nvme0n1p5 can grow from 8.9G to 28.9G, and /dev/mapper/rocky-root can then be extended from 8.9G to 28.9G as well. So here I go.

# lsblk
NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
nvme0n1                 259:1    0   30G  0 disk 
├─nvme0n1p1             259:2    0   99M  0 part /boot/efi
├─nvme0n1p2             259:3    0 1000M  0 part /boot
├─nvme0n1p3             259:4    0    4M  0 part 
├─nvme0n1p4             259:5    0    1M  0 part 
└─nvme0n1p5             259:6    0  8.9G  0 part 
  └─rocky-root          253:0    0  8.9G  0 lvm  /
# growpart /dev/nvme0n1 5
CHANGED: partition=5 start=2265088 old: size=18704384 end=20969471 new: size=60649439 end=62914526

Now nvme0n1p5 is 28.9G.

# lsblk
nvme0n1                 259:1    0   30G  0 disk 
├─nvme0n1p1             259:2    0   99M  0 part /boot/efi
├─nvme0n1p2             259:3    0 1000M  0 part /boot
├─nvme0n1p3             259:4    0    4M  0 part 
├─nvme0n1p4             259:5    0    1M  0 part 
└─nvme0n1p5             259:6    0 28.9G  0 part 
  └─rocky-root          253:0    0  8.9G  0 lvm  /
# mount | grep root
/dev/mapper/rocky-root on / type xfs

It is XFS, so I use xfs_growfs -d /

# xfs_growfs -d /
meta-data=/dev/mapper/rocky-root isize=512    agcount=4, agsize=584448 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0 nrext64=0
data     =                       bsize=4096   blocks=2337792, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data size unchanged, skipping

Here the data size is unchanged. Why is that?

lsblk shows rocky-root as an LVM volume, so I tried lvextend too.

# lvextend -l 100%FREE /dev/mapper/rocky-root
  Volume group "rocky" not found
  Cannot process volume group rocky

What is wrong here? Please help.

Try:

lvresize -l +100%FREE /dev/mapper/rocky-root
xfs_growfs /dev/mapper/rocky-root

Also check whether you need to run pvresize on the partition so the volume group picks up the extra space. Running vgs should then show the amount of free space (VFree) that can be allocated to logical volumes within the volume group.
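A minimal sketch of that check, assuming the PV is /dev/nvme0n1p5 and the VG is named rocky as the error messages above suggest:

pvresize /dev/nvme0n1p5   # grow the PV to fill the enlarged partition
vgs rocky                 # the VFree column shows space that can be given to LVs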

That didn’t work.

# lvresize -l +100%FREE /dev/mapper/rocky-root
  Volume group "rocky" not found
  Cannot process volume group rocky

Even though lsblk shows it as lvm, the device doesn’t come up in pvdisplay, vgdisplay, or lvdisplay.
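A rough set of checks for a case like this, where a partition carries LVM but the LVM commands refuse to show it (assuming a stock Rocky 9 install):

pvs -a                                    # does /dev/nvme0n1p5 appear as a PV at all?
lvmdevices                                # list the entries in the LVM devices file
grep use_devicesfile /etc/lvm/lvm.conf    # is the devices-file mechanism enabled?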

@iwalker Thanks for your support. I took a closer look at why LVM was not seeing the device, and it turned out I had to add it explicitly:

lvmdevices --adddev /dev/nvme0n1p5

After that, the regular way of resizing the LV worked.
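For anyone hitting the same thing, the complete sequence looks roughly like this (a sketch using the device names from this thread; pvresize may be a no-op if the PV already spans the partition):

lvmdevices --adddev /dev/nvme0n1p5             # register the device in /etc/lvm/devices/system.devices
pvresize /dev/nvme0n1p5                        # grow the PV to the new partition size
lvextend -l +100%FREE /dev/mapper/rocky-root   # give the free extents to the root LV
xfs_growfs /                                   # grow the mounted XFS filesystem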

Seems strange to use the adddev command, especially if LVM was already on this partition. But at least it’s working.

man lvmdevices says:

The LVM devices file lists devices that lvm can use. The default file is /etc/lvm/devices/system.devices, and the lvmdevices(8) command is used to add or remove device entries. If the file does not exist, or if lvm.conf includes use_devicesfile=0, then lvm will not use a devices file.

So --adddev merely updates the “these you may use” list. Yes, if the system has already been using a device, it is odd that it has partially forgotten about it, particularly since we are talking about /.
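To see what actually gets recorded, the devices file is plain text, and lvmdevices with no arguments lists its entries (assuming the default path from the man page quote above):

lvmdevices                                # print the device entries LVM is allowed to use
cat /etc/lvm/devices/system.devices       # the file behind that list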


Logically, the steps should have been:

  1. Change partition’s “last sector”
  2. Update PV to use entire partition (with pvresize)
  3. Update LV to use available extents (with lvextend or lvresize)
  4. Update filesystem to use entire LV

The lvextend, lvresize, and lvreduce commands do have a --resizefs option that will do step 4 for you.
Note, though, that XFS does not support shrinking at all.
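Mapped onto the layout in this thread, those four steps come out to roughly the following (a sketch; growpart is typically packaged as cloud-utils-growpart on EL systems):

growpart /dev/nvme0n1 5                                   # 1. move the partition’s last sector
pvresize /dev/nvme0n1p5                                   # 2. let the PV use the entire partition
lvextend -l +100%FREE --resizefs /dev/mapper/rocky-root   # 3 + 4. extend the LV and grow XFS in one go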

It is strange indeed. This happened on two of the servers in a three-node cluster I’m using. It’s an HA cluster for GFS2 (again on LVM2, with another volume) in AWS. Maybe that has something to do with the issue. LVM was fine on the primary node, though.
