Expanding storage on an instance from the official Rocky Linux AWS AMI

I’m not sure this is the right place for this, since it’s very AWS-specific.

I attempted to expand the storage of a Rocky Linux EC2 instance I created by adding several new EBS volumes, and I broke the instance in the process. I’m looking for guidance on what I did wrong.

The new instance was created and launched without issue. It comes with a 10G root volume out of the box, which is fine.

I want to expand the storage with new EBS volumes as follows:

  • /var: 150 G
  • /opt: 50 G
  • /home: 50 G
  • /var/lib/neo4j: 100 G

I use Neo4j (the graph DB), and the last mount point is the directory where it installs itself by default.
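
For context, I created and attached the extra volumes before doing any of this; with the AWS CLI that step looks roughly like the following (the availability zone, volume type, IDs, and device name here are placeholders):

aws ec2 create-volume --size 150 --volume-type gp3 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf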

I made xfs filesystems on the four new volumes and all seemed well. I mounted the two volumes that will shadow existing directories on the root volume at temporary mount points (/mnt/new_var and /mnt/new_home), then used rsync to copy the existing contents:

# mkdir /var/lib/neo4j
# rsync -axv /var/* /mnt/new_var/
# rsync -axv /home/* /mnt/new_home/

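For completeness, the filesystem and temporary-mount steps before the rsync were along these lines (the NVMe device names are just examples; they depend on the instance type and attachment order):

# mkfs.xfs /dev/nvme1n1
# mkfs.xfs /dev/nvme2n1
# mkdir -p /mnt/new_var /mnt/new_home
# mount /dev/nvme1n1 /mnt/new_var
# mount /dev/nvme2n1 /mnt/new_home
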
I unmounted those two volumes, then edited /etc/fstab with entries for the four volumes. I got the UUID values from blkid. Here are the lines I added to fstab:

UUID=c6528e1f-bb9a-4afe-a4ba-108dc0503254      /opt              xfs     defaults,nofail        0 0
UUID=af0d2963-ddf3-4993-9c9b-06e4195defce       /var              xfs     defaults,nofail        0 0
UUID=d853afae-2cbe-48a8-99c9-119af50cbfc0       /var/lib/neo4j    xfs     defaults,nofail        0 0
UUID=a6417b6f-dbe8-4c52-9b0e-c563c9f758f0       /home             xfs     defaults,nofail        0 0
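
For reference, blkid with no arguments lists every block device along with its UUID and filesystem type; lsblk -f shows the same information with the mount points alongside:

# blkid
# lsblk -f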

I ran mount -a and it offered no complaints.
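
In hindsight, findmnt --verify would have been a cheap extra check at this point; it parses /etc/fstab and warns about problems such as unknown UUIDs or missing mount points without actually mounting anything:

# findmnt --verify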

My first sign of an issue was that I could no longer connect to the instance over SSH.

I rebooted, and the new instance now fails the second status check, which I took to be the telltale sign of a broken fstab.

What did I miss in this process? Would it be easier to just expand the root volume instead?

I appreciate your attention.

Just off the top of my head: SELinux is probably enabled, and the files probably didn’t keep their SELinux contexts after the rsync, since you didn’t include -X (preserve extended attributes).

Try: rsync -axvX <src> <dst>
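
If the copy has already been done without -X, another option is to mount the new volumes at their final locations and relabel them in place with restorecon; any SELinux denials that still occur should show up in the audit log:

# restorecon -Rv /var /var/lib/neo4j /opt /home
# ausearch -m avc -ts recent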


I appreciate your reply. I repeated the exercise doing one volume at a time, starting with /home.

The second time around, I just turned off SELinux (by editing /etc/selinux/config) and used cp -dRf instead of rsync, for simplicity.
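
For reference, “turning it off” here means setting SELINUX=disabled in /etc/selinux/config, which only takes effect on the next boot; setenforce 0 drops it to permissive immediately, which would have been enough for testing:

# setenforce 0
# grep ^SELINUX= /etc/selinux/config
SELINUX=disabled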

I saw hard SSH authentication failures (on the rocky account) while the new volume was mounted as /home. Those went away when I unmounted it. I stopped the new instance last night, and restarted it today.

Interestingly (to me), today everything worked just fine: same EBS volume, same instance, but this time it worked.

I did make one change before mounting the new volume that should have no effect: I set a password on the rocky account from the root user (# passwd rocky …).

At least for now, I’m going to chalk this up to some sort of AWS EC2 weirdness.

I appreciate the reply. While I marked it as the solution, it looks to me as though the issue may well have resolved itself independently of the changes I made to SELinux.
