I’m not sure this is the right place for this, as it is very AWS-specific.
I attempted to expand the Rocky Linux EC2 instance I created by adding several new volumes, and broke the instance in the process. I’m looking for guidance about what I did wrong.
The new instance was created and launched without trouble. It comes with a 10G root volume out of the box, which is fine.
I want to expand the storage with new EBS volumes as follows:
- /var: 150 G
- /opt: 50 G
- /home: 50 G
- /var/lib/neo4j: 100 G
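I created and attached the four volumes before doing anything on the instance. For reference, the CLI equivalent for one of them is roughly the following; the zone, volume type, IDs, and device name here are placeholders, not the exact values I used:
# aws ec2 create-volume --availability-zone us-east-1a --size 150 --volume-type gp3
# aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf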
I use neo4j (the graph DB), and the last mount point is the directory where it will install itself.
I made filesystems (xfs) on the four new volumes and all seemed well. I mounted the two volumes that will shadow existing directories on the root volume at temporary mount points (/mnt/new_var and /mnt/new_home), as sketched below.
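I no longer have the exact shell history, and the device names below are my best recollection (they may not match what the instance actually assigned), but the commands were along these lines:
# mkfs.xfs /dev/nvme1n1   # new /var volume
# mkfs.xfs /dev/nvme2n1   # new /opt volume
# mkfs.xfs /dev/nvme3n1   # new /home volume
# mkfs.xfs /dev/nvme4n1   # new /var/lib/neo4j volume
# mkdir /mnt/new_var /mnt/new_home
# mount /dev/nvme1n1 /mnt/new_var
# mount /dev/nvme3n1 /mnt/new_home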
I then used rsync to copy the existing contents:
# mkdir /var/lib/neo4j
# rsync -axv /var/* /mnt/new_var/
# rsync -axv /home/* /mnt/new_home/
I unmounted those two volumes, then edited /etc/fstab with entries for the four volumes. I got the UUID values from blkid.
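From memory, the blkid output for the new volumes was along these lines (the device names are my recollection and may be off; the UUIDs are the ones I put in fstab):
# blkid /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
/dev/nvme1n1: UUID="af0d2963-ddf3-4993-9c9b-06e4195defce" TYPE="xfs"
/dev/nvme2n1: UUID="c6528e1f-bb9a-4afe-a4ba-108dc0503254" TYPE="xfs"
/dev/nvme3n1: UUID="a6417b6f-dbe8-4c52-9b0e-c563c9f758f0" TYPE="xfs"
/dev/nvme4n1: UUID="d853afae-2cbe-48a8-99c9-119af50cbfc0" TYPE="xfs"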
Here are the lines I added to fstab:
UUID=c6528e1f-bb9a-4afe-a4ba-108dc0503254 /opt xfs defaults,nofail 0 0
UUID=af0d2963-ddf3-4993-9c9b-06e4195defce /var xfs defaults,nofail 0 0
UUID=d853afae-2cbe-48a8-99c9-119af50cbfc0 /var/lib/neo4j xfs defaults,nofail 0 0
UUID=a6417b6f-dbe8-4c52-9b0e-c563c9f758f0 /home xfs defaults,nofail 0 0
I ran mount -a and it offered no complaints.
The first sign of trouble was that I could no longer connect to the instance over SSH.
After I rebooted, the instance failed the second status check, the telltale sign of a broken fstab.
What did I miss in this process? Would it be easier to simply expand the root volume instead?
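If expanding the root volume is the better route, my (untested) understanding is that it would be something like the following, where the volume ID, new size, and partition name are placeholders for a typical Nitro instance:
# aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 360
# growpart /dev/nvme0n1 1   # extend the root partition into the new space
# xfs_growfs /              # grow the xfs filesystem mounted at /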
I appreciate your attention.