Azure VM image disk size issues

I spun up a VM using the latest rockylinux-x86_64 9-lvm image from the Azure Marketplace, and the OS disk size came up as 8.9 GB. It seems most of the space is sitting elsewhere, even though the Azure VM has only one 128 GB disk. How can I extend the /dev/rocky/root logical volume so the root mountpoint can use this space?

Here’s what I’m looking at:

NAME           MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda              8:0    0  128G  0 disk
├─sda1           8:1    0   99M  0 part /boot/efi
├─sda2           8:2    0 1000M  0 part /boot
├─sda3           8:3    0    4M  0 part
├─sda4           8:4    0    1M  0 part
└─sda5           8:5    0  8.9G  0 part
  └─rocky-root 253:0    0  8.9G  0 lvm  /
sdb              8:16   0   75G  0 disk
└─sdb1           8:17   0   75G  0 part /mnt

Filesystem             Type      Size  Used Avail Use% Mounted on
devtmpfs               devtmpfs  4.0M     0  4.0M   0% /dev
tmpfs                  tmpfs     3.8G   41M  3.8G   2% /dev/shm
tmpfs                  tmpfs     1.6G   17M  1.5G   2% /run
/dev/mapper/rocky-root xfs       8.9G  5.8G  3.1G  66% /
/dev/sda2              xfs       936M  462M  475M  50% /boot
/dev/sda1              vfat       99M  7.1M   92M   8% /boot/efi
/dev/sdb1              ext4       74G   28K   70G   1% /mnt
tmpfs                  tmpfs     769M     0  769M   0% /run/user/1000

  --- Physical volume ---
  PV Name               /dev/sda5
  VG Name               rocky
  PV Size               <8.92 GiB / not usable 0
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              2283
  Free PE               0
  Allocated PE          2283
  PV UUID               VtYqiy-LKfB-HUNG-1JMw-z1J2-erkv-J1DxAs

  --- Volume group ---
  VG Name               rocky
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <8.92 GiB
  PE Size               4.00 MiB
  Total PE              2283
  Alloc PE / Size       2283 / <8.92 GiB
  Free  PE / Size       0 / 0
  VG UUID               aP15yq-8nxi-GVlJ-6dey-pCNW-lc9p-NArYOl

  --- Logical volume ---
  LV Path                /dev/rocky/root
  LV Name                root
  VG Name                rocky
  LV UUID                O25bnO-xTrM-0g4D-oBew-3Ces-360G-LXKJV1
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2023-11-13 15:54:37 +0000
  LV Status              available
  # open                 1
  LV Size                <8.92 GiB
  Current LE             2283
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
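
As a sanity check on the LVM output above, the sizes follow directly from the extent counts it reports:

```shell
# The VG/LV size is just the extent count (2283) times the 4 MiB extent
# size shown by pvdisplay.
echo $(( 2283 * 4 ))                              # total size in MiB: 9132
awk 'BEGIN { printf "%.2f\n", 2283 * 4 / 1024 }'  # in GiB: 8.92; lvdisplay
                                                  # prints this as "<8.92"
```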

Also, when running fdisk -l I get this error (full output included):

GPT PMBR size mismatch (20971519 != 268435455) will be corrected by write.
The backup GPT table is not on the end of the device.
Disk /dev/sda: 128 GiB, 137438953472 bytes, 268435456 sectors
Disk model: Virtual Disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 941C4570-5254-4F0D-AF19-315A371D1B7B

Device       Start      End  Sectors  Size Type
/dev/sda1     2048   204799   202752   99M EFI System
/dev/sda2   204800  2252799  2048000 1000M Linux filesystem
/dev/sda3  2252800  2260991     8192    4M PowerPC PReP boot
/dev/sda4  2260992  2263039     2048    1M BIOS boot
/dev/sda5  2265088 20969471 18704384  8.9G Linux LVM


Disk /dev/sdb: 75 GiB, 80530636800 bytes, 157286400 sectors
Disk model: Virtual Disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xba9c16a6

Device     Boot Start       End   Sectors Size Id Type
/dev/sdb1        2048 157284351 157282304  75G  7 HPFS/NTFS/exFAT


Disk /dev/mapper/rocky-root: 8.92 GiB, 9575596032 bytes, 18702336 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

I’m a little rusty on my Linux skills so I’d appreciate any help I can get.

It seems that the partition table was created for a 10 GB disk. Therefore you need to expand the GPT partition table, the LVM partition, the PV, the LV, and the XFS filesystem on it.
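
The numbers in the GPT PMBR size mismatch warning bear this out: they are sector counts, and converting them to bytes (512-byte sectors, per the fdisk output) gives the two disk sizes involved:

```shell
# fdisk's warning compares the disk size recorded in the partition table
# with the real device size, both given as the last sector number.
echo $(( (20971519 + 1) * 512 / 1024 / 1024 / 1024 ))   # size the GPT was
                                                        # written for: 10 GiB
echo $(( (268435455 + 1) * 512 / 1024 / 1024 / 1024 ))  # actual disk: 128 GiB
```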

Beware that a mistake can make your machine unusable, so be sure to back up the machine first, or work on one that can be destroyed so you can start over!

It should go more or less like this:

  1. Open fdisk /dev/sda and, when prompted with GPT PMBR size mismatch…, hit w to write. This rewrites the protective MBR and moves the backup GPT table to the actual end of the disk.
  2. Quit fdisk and execute partprobe to make the kernel aware of the new partition table without having to reboot.
  3. Go back into fdisk /dev/sda, hit d to delete a partition, and enter 5 to delete the fifth partition (/dev/sda5).
  4. Create a new partition by hitting n and accepting the preset first and last sectors. If fdisk warns that the partition contains an LVM2_member signature, do not remove it. Set the type to Linux LVM, then hit w to write the changes and quit fdisk.
  5. Execute pvresize /dev/sda5 to expand the physical volume.
  6. Now expand your LV with the root filesystem by executing lvextend -L +20G /dev/rocky/root (the LV path, not the partition). Add only the amount of space you really need, because XFS cannot be shrunk; leave the rest unallocated for later use, another partition in the future, etc.
  7. Grow the filesystem by executing xfs_growfs /
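
Condensed, the route above looks roughly like this (a sketch only; fdisk is interactive, so its keystrokes appear as comments, and the device/VG names are the ones from this thread):

```shell
# DESTRUCTIVE if mistyped - back up first. Run as root.
fdisk /dev/sda      # at the "GPT PMBR size mismatch" prompt: w, then quit
partprobe           # re-read the partition table without rebooting
fdisk /dev/sda      # d -> 5        (delete /dev/sda5)
                    # n -> defaults (recreate at the same start, full size;
                    #                keep the existing LVM2 signature!)
                    # t -> Linux LVM, then w
partprobe
pvresize /dev/sda5                 # grow the PV into the new space
lvextend -L +20G /dev/rocky/root   # grow the root LV by what you need
xfs_growfs /                       # grow XFS online on the mounted root
```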

In addition to what @hs303 wrote: you asked about extending the LVM across both disks by using /dev/sdb. Whilst you can do that, I wouldn't recommend it, because if you lose one disk you lose both of them. Better to mount this disk elsewhere and use it for your data only. As for the system disk, /dev/sda, you would usually resize the instance to a larger one, which gives you a larger system disk that you can then resize as @hs303 wrote. I would try his steps first, in case the partitions haven't been expanded - which should normally happen anyway when you create an instance with a larger system disk, since cloud-init takes care of this. At least it always has for me.

This is where I'm a little confused, because in Azure there's only one 128 GB disk attached to the VM, and it's the OS disk. So I'm not sure where the second disk came from.

I'm receiving this error during step 3. Any insight, @hs303?

This disk is currently in use - repartitioning is probably a bad idea.
 It's recommended to umount all file systems, and swapoff all swap
 partitions on this disk.

I tried pushing past it, but when I got to step 5 I couldn't resize the PV because it was in use. Restarting at that point left the machine unbootable, so I had to restore from a backup. So I'm back where I started, but stuck - presumably because that disk/partition is mounted as root (/).

If /dev/sda5 is resizable, you can just do:

growpart /dev/sda 5

Yes, there is a space between /dev/sda and 5; that is how growpart is invoked. You can then use the LVM tools to resize the PV/VG as appropriate, and the space should become available - assuming growpart did its job. Deleting and re-adding partitions with fdisk is an old-school method that's no longer needed when you have growpart.

If growpart isn’t on your install, then do:

dnf install cloud-utils-growpart
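
For reference, here is the whole growpart route in one place (a sketch; device and VG names are the ones from this thread, and note that on most setups a pvresize between growpart and lvextend is needed so the volume group actually sees the new space):

```shell
# Run as root on the affected VM; adjust names to your layout.
dnf install -y cloud-utils-growpart    # only if growpart is missing
growpart /dev/sda 5                    # grow partition 5 to fill the disk
pvresize /dev/sda5                     # let LVM see the enlarged partition
lvextend -l +100%FREE /dev/rocky/root  # or -L +60G to leave headroom
xfs_growfs /                           # grow XFS online on the mounted root
```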

Ayyy, that's what I was missing. I was able to solve the space issue by running:

  1. growpart /dev/sda 5
  2. lvextend -L +60G /dev/rocky/root
  3. xfs_growfs /

Thanks for the help you two! I really appreciate the support and patience.

For posterity's sake, please let me know if I missed anything or should have done anything differently. I think I'm good now, but for future readers I'd like to avoid any misdirection in the steps I took and listed above.


Usually the disk partitions should resize automatically, since the images are prepped with cloud-init installed. Whatever instance size is chosen, the image itself has a default size of about 10 GB, so any additional space should be claimed automatically on first boot. At least it is when images use standard partitions without LVM; whether the addition of LVM complicates this process I've no idea. I make my own images for use with OpenStack and never use LVM in them, but that's my preference. I do know growpart will do what it needs to do to resize the partition.
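
If you want to check whether that automatic resize was supposed to happen on a given image, one way (assuming cloud-init's stock config location; verify the path on your distro) is:

```shell
# Did cloud-init run, and is its partition-growing module enabled?
cloud-init status                      # e.g. "status: done"
grep -n growpart /etc/cloud/cloud.cfg  # module listed => grow on first boot
```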

So no, you've not really missed anything; you just had some manual steps to complete to get that extra space.
