Why is most of my SSD LVM?

I installed Rocky 8.8 on a 1 TB M.2 SSD. I am still relatively new to Linux and was wondering: why is 999 GB of my SSD an LVM2 PV that is not mounted anywhere? How do I access this space?

It may still be hard to tell because it depends on how the installation was done, but the output of lsblk may be illuminating.

Do show the output of lsblk, and also look at lsblk --fs

The default[1] partitioning done by the installer creates three partitions:

  • The EFI System Partition (mounted at /boot/efi)
  • A partition with a filesystem (mounted at /boot)
  • A PV that takes the rest of the space

Within the PV, three LVs are created:

  • For /
  • For /home
  • For swap
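You can see this layout on an installed system with the LVM reporting tools (read-only, but they need root). A minimal sketch, assuming the default volume-group name `rl` that the installer uses on Rocky; yours may differ:

```shell
# List physical volumes, volume groups, and logical volumes (read-only).
sudo pvs    # the PV backing the volume group (e.g. /dev/nvme1n1p3)
sudo vgs    # the volume group ("rl" by default on Rocky)
sudo lvs    # the root, home, and swap LVs inside it
```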

Personally, I never accept the default and use my own custom scheme instead.

[1] See Appendix B. Partitioning reference Red Hat Enterprise Linux 9 | Red Hat Customer Portal

After that if there remains mystery as to why the installer made the decisions it did, you can try reading /var/log/anaconda/storage.log (and the other logs in that directory), but they can be a bit verbose.

I see, so the LVs are accessible through / and /home. I just don’t understand why this is the default installation layout?

lsblk gives

sda           8:0    0   1.8T  0 disk 
├─sda1        8:1    0    16M  0 part 
└─sda2        8:2    0   1.8T  0 part 
nvme1n1     259:0    0 931.5G  0 disk 
├─nvme1n1p1 259:1    0   600M  0 part /boot/efi
├─nvme1n1p2 259:2    0     1G  0 part /boot
└─nvme1n1p3 259:3    0 929.9G  0 part 
  ├─rl-root 253:0    0    70G  0 lvm  /
  ├─rl-swap 253:1    0  31.5G  0 lvm  [SWAP]
  └─rl-home 253:2    0 828.5G  0 lvm  /home
nvme0n1     259:4    0 465.8G  0 disk 
├─nvme0n1p1 259:5    0   100M  0 part 
├─nvme0n1p2 259:6    0    16M  0 part 
├─nvme0n1p3 259:7    0 465.2G  0 part 
└─nvme0n1p4 259:8    0   499M  0 part 

lsblk --fs gives

NAME        FSTYPE  LABEL      UUID                                   MOUNTPOINT
└─sda2      ntfs    Hard Drive 204414204413F76E                       
├─nvme1n1p1 vfat               6176-4B9B                              /boot/efi
├─nvme1n1p2 xfs                56095192-6cf4-4a96-ad2a-48a92ebabc6e   /boot
└─nvme1n1p3 LVM2_member        paNLZn-ctdk-snOr-I4EU-XedL-x3Dc-54CekQ 
  ├─rl-root xfs                892fdcc3-902b-44eb-8185-c04d2b41661d   /
  ├─rl-swap swap               10665f67-3098-4e72-9343-89884671e141   [SWAP]
  └─rl-home xfs                6ca29e76-4d27-498c-a44d-a45f9ff98460   /home
├─nvme0n1p1 vfat               5E53-E1BF                              
├─nvme0n1p3 ntfs    Boot       3C0861AD0861673C                       
└─nvme0n1p4 ntfs               A42EBB692EBB32E2 

Why it’s the default to use LVM, you mean? I can’t say for a fact, but I would assume it’s to give you more flexibility post-installation (e.g. with LVM you could reformat sda there and pvmove your current installation over to it, or reformat nvme0n1 and extend your current filesystems onto it, all without rebooting). Personally I think they’re giving up a significant chunk of the flexibility by fully allocating the volume group, particularly since XFS cannot be shrunk, but this way it still retains some back-pocket advantages over disk partitions without requiring that you immediately learn how to extend LVM LVs and grow filesystems.
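The "extend LVM LVs and grow filesystems" step mentioned above looks roughly like this. This is a hedged sketch: it assumes the default `rl` volume group and assumes the VG has free extents, which the fully-allocated default layout would not leave; the 50 GiB size is illustrative:

```shell
# Check for free space in the volume group first.
sudo vgs
# Give the home LV an extra 50 GiB (fails if the VG has no free extents).
sudo lvextend -L +50G /dev/rl/home
# XFS can only grow, never shrink; grow it online to fill the enlarged LV.
sudo xfs_growfs /home
```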


Thank you this clarifies it for me

There are two views, “technical” and “admin”, that might affect this.

On the “admin” side is the question of why more than one filesystem? (Almost) everything could be in one “volume” with no “artificial fences”. Almost, as the ESP for UEFI has to be separate – legacy mode does not need it. Then again, (older) bootloaders had trouble with big volumes. Even the swap can be in a file, just like Windows does.
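The swap-in-a-file approach mentioned above looks roughly like this. A sketch only: the path /swapfile and the 2 GiB size are arbitrary choices, and this assumes a filesystem like XFS or ext4 where a plain file can back swap:

```shell
# Create a 2 GiB file to hold the swap area.
sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
sudo chmod 600 /swapfile   # swap files must not be world-readable
sudo mkswap /swapfile      # write the swap signature
sudo swapon /swapfile      # enable it for this boot
# Add "/swapfile none swap defaults 0 0" to /etc/fstab to make it permanent.
```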

When everything is in one volume, anything can use the space. This is usually convenient, but if something accidentally uses all the space, then all the processes hit the “no free space” issue. Separate filesystems or quotas help ensure that system’s space is not taken by user data.
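For the quota alternative mentioned above, XFS ships its own quota tooling. A minimal sketch, assuming /home was mounted with the `uquota` option and using a hypothetical user `alice`; the limits are illustrative:

```shell
# Cap user "alice" at 50 GiB on /home, with a 40 GiB soft limit.
sudo xfs_quota -x -c 'limit bsoft=40g bhard=50g alice' /home
# Report current usage and limits in human-readable form.
sudo xfs_quota -x -c 'report -h' /home
```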

Furthermore, system files are “expendable” – they can be reinstalled from packages. User data is “precious”. Separate filesystem for user data (/home) gives more options for backup. Also, if (and more likely when) you want to install a different distro, then it is easy to wipe a separate ‘/’ volume as that does not touch the user data volume. (Note: there can be “user data” in /etc/passwd, /var/www/ and similar locations within the root volume that require backup.)

On the “technical” side, legacy BIOS used the “MBR” (aka “DOS”) partition table format, which allowed at most four partitions (of which at most one could be an “extended” partition holding multiple “logical drives”). LVM evolved to add flexibility: to work around the MBR limits, to span multiple (small) drives, and to help with the “admin” reasons to adjust volumes. UEFI uses the GUID Partition Table (GPT) format, which supports far more partitions but does not have the “live move/resize” flexibility of LVM.

Therefore, the default to LVM continues to be useful. As @quartsize said, “could” is better than “can’t”.

