Hi.
I see the 8.7 repo includes a number of new images, including:
Rocky-8-EC2-LVM.latest.x86_64.qcow2
Are there any plans to add an LVM-based AMI to the AWS Marketplace?
Thanks.
Hi there,
Thank you for asking! We do plan to release them, but I want to do some testing first to make sure disk expansion works properly… something I just ran into with Oracle and want to resolve before publishing. If you’d like to help, I can provide some instructions on how to import the image yourself from that .qcow2 file.
Best,
Neil
Hi.
Thanks for the response.
At the moment we create our own LVM AMIs from qcow2 images created using QEMU. I was hoping to avoid this step by using an official AMI.
I am currently in the process of planning our migration to RL9. Until the official AMIs are available, I would be more than happy to test using the Rocky-9-EC2-LVM.latest.x86_64.qcow2 image and report back anything of interest.
Just in case my process is different from yours: I simply convert the image to raw format using “qemu-img convert” and then import it as a snapshot, as per:
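For reference, the commands I use look roughly like this (the bucket name and import-task ID below are placeholders, not real values):

```shell
# Convert the qcow2 image to raw format.
qemu-img convert -f qcow2 -O raw \
    Rocky-9-EC2-LVM.latest.x86_64.qcow2 Rocky-9-EC2-LVM.latest.x86_64.raw

# Upload the raw image to S3 (bucket name is a placeholder).
aws s3 cp Rocky-9-EC2-LVM.latest.x86_64.raw s3://my-import-bucket/

# Import the raw image as an EBS snapshot.
aws ec2 import-snapshot \
    --description "Rocky-9-EC2-LVM.latest.x86_64.raw" \
    --disk-container "Format=RAW,UserBucket={S3Bucket=my-import-bucket,S3Key=Rocky-9-EC2-LVM.latest.x86_64.raw}"

# Poll the import task (ID is a placeholder) until it completes.
aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-xxxxxxxx
```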
If you would like me to test something else, just let me know.
Yep, that is the exact same process.
I appreciate your testing!
Disk images converted successfully:
ls -al Rocky-*
-rw-r--r--. 1 user user 3047555072 Dec 22 12:32 Rocky-8-EC2-LVM.latest.x86_64.qcow2
-rw-r--r--. 1 user user 10737418240 Dec 22 13:14 Rocky-8-EC2-LVM.latest.x86_64.raw
-rw-r--r--. 1 user user 1691025408 Dec 22 12:50 Rocky-9-EC2-LVM.latest.x86_64.qcow2
-rw-r--r--. 1 user user 10737418240 Dec 22 13:15 Rocky-9-EC2-LVM.latest.x86_64.raw
file Rocky-*
Rocky-8-EC2-LVM.latest.x86_64.qcow2: QEMU QCOW Image (v3), 10737418240 bytes
Rocky-8-EC2-LVM.latest.x86_64.raw: DOS/MBR boot sector, extended partition table (last)
Rocky-9-EC2-LVM.latest.x86_64.qcow2: QEMU QCOW Image (v3), 10737418240 bytes
Rocky-9-EC2-LVM.latest.x86_64.raw: DOS/MBR boot sector, extended partition table (last)
Successfully uploaded to S3 and imported into EC2:
aws ec2 describe-snapshots --snapshot-ids snap-047e882c64fc7da22
{
"Snapshots": [
{
"Description": "",
"Encrypted": false,
"OwnerId": "889705491250",
"Progress": "100%",
"SnapshotId": "snap-047e882c64fc7da22",
"StartTime": "2022-12-22T13:45:07.426000+00:00",
"State": "completed",
"VolumeId": "vol-ffffffff",
"VolumeSize": 10,
"Tags": [
{
"Key": "Name",
"Value": "Rocky-8-EC2-LVM.latest.x86_64.raw"
}
],
"StorageTier": "standard"
}
]
}
aws ec2 describe-snapshots --snapshot-ids snap-0f4a4207f093320e6
{
"Snapshots": [
{
"Description": "",
"Encrypted": false,
"OwnerId": "889705491250",
"Progress": "100%",
"SnapshotId": "snap-0f4a4207f093320e6",
"StartTime": "2022-12-22T13:50:28.487000+00:00",
"State": "completed",
"VolumeId": "vol-ffffffff",
"VolumeSize": 10,
"Tags": [
{
"Key": "Name",
"Value": "Rocky-9-EC2-LVM.latest.x86_64.raw"
}
],
"StorageTier": "standard"
}
]
}
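In case anyone wants to follow along, the larger volumes can presumably be created from the imported snapshots along these lines (the size, zone, and tag simply match what I used above):

```shell
# Create a 50 GiB gp3 volume from the imported 10 GiB snapshot.
aws ec2 create-volume \
    --availability-zone eu-west-2c \
    --snapshot-id snap-047e882c64fc7da22 \
    --size 50 \
    --volume-type gp3 \
    --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=Rocky-8-EC2-LVM.latest.x86_64.raw}]'
```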
Volumes (with increased size) created successfully:
aws ec2 describe-volumes --volume-ids vol-0027ec7293d00f710
{
"Volumes": [
{
"Attachments": [],
"AvailabilityZone": "eu-west-2c",
"CreateTime": "2022-12-22T13:53:29.152000+00:00",
"Encrypted": false,
"Size": 50,
"SnapshotId": "snap-047e882c64fc7da22",
"State": "available",
"VolumeId": "vol-0027ec7293d00f710",
"Iops": 3000,
"Tags": [
{
"Key": "Name",
"Value": "Rocky-8-EC2-LVM.latest.x86_64.raw"
}
],
"VolumeType": "gp3",
"MultiAttachEnabled": false,
"Throughput": 125
}
]
}
aws ec2 describe-volumes --volume-ids vol-000d96f29c8550ae3
{
"Volumes": [
{
"Attachments": [],
"AvailabilityZone": "eu-west-2c",
"CreateTime": "2022-12-22T13:54:00.260000+00:00",
"Encrypted": false,
"Size": 50,
"SnapshotId": "snap-0f4a4207f093320e6",
"State": "available",
"VolumeId": "vol-000d96f29c8550ae3",
"Iops": 3000,
"Tags": [
{
"Key": "Name",
"Value": "Rocky-9-EC2-LVM.latest.x86_64.raw"
}
],
"VolumeType": "gp3",
"MultiAttachEnabled": false,
"Throughput": 125
}
]
}
I attached the RL8 volume as the root volume to an instance and it has booted successfully.
The instance accepts ssh connections, but I can’t authenticate.
I had a look in:
but couldn’t see any details as to the credentials I should use.
If you could let me know, I can log in and take a look around.
Instead of using an existing instance, I created new AMIs from the snapshots and then created new instances specifying an ssh key pair.
I am now able to log into both the new RL8 and RL9 instances using the “rocky” user and the ssh key.
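For anyone repeating this, registering an AMI from one of the snapshots with UEFI boot looks roughly like the following (the name, root device name, and volume settings are my assumptions, not official values):

```shell
# Register a UEFI-boot AMI directly from the imported snapshot.
aws ec2 register-image \
    --name "Rocky-9-EC2-LVM-test" \
    --architecture x86_64 \
    --virtualization-type hvm \
    --ena-support \
    --boot-mode uefi \
    --root-device-name /dev/sda1 \
    --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=snap-0f4a4207f093320e6,VolumeSize=50,VolumeType=gp3}"
```

Launching an instance from that AMI with an ssh key pair specified is what let me log in as the “rocky” user.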
LVM on the RL8 instance looks OK:
sudo vgdisplay -v
--- Volume group ---
VG Name rocky
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size <8.90 GiB
PE Size 4.00 MiB
Total PE 2278
Alloc PE / Size 2278 / <8.90 GiB
Free PE / Size 0 / 0
VG UUID 9iK0Hu-xzEt-6Qlu-OZyY-HmfP-lmVQ-tmlUju
--- Logical volume ---
LV Path /dev/rocky/root
LV Name root
VG Name rocky
LV UUID rJXe3p-B7Eu-9WYb-L8ph-iADs-14zW-HWtIDB
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2022-11-29 23:35:02 +0000
LV Status available
# open 1
LV Size <8.90 GiB
Current LE 2278
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
--- Physical volumes ---
PV Name /dev/nvme0n1p3
PV UUID NJH5zg-9A3Z-AuCA-QmNv-EGNw-SNhh-2jHRJQ
PV Status allocatable
Total PE / Free PE 2278 / 0
but not on the RL9 instance:
sudo vgdisplay -v
Devices file PVID apAFgtozy2cM9cxyKZpvhzsy701exw5H last seen on /dev/vda3 not found.
No volume groups found.
The image contains the following in /etc/lvm/devices/system.devices:
# LVM uses devices listed in this file.
# Created by LVM command lvmdevices pid 2305 at Wed Nov 30 00:40:00 2022
VERSION=1.1.2
IDTYPE=devname IDNAME=/dev/vda3 DEVNAME=/dev/vda3 PVID=apAFgtozy2cM9cxyKZpvhzsy701exw5H PART=3
Recursively deleting /etc/lvm/devices/ and rebooting resolves the problem.
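For completeness, here is the workaround I used, along with what I believe is the cleaner alternative of regenerating the devices file in place (vgimportdevices is my assumption for the right command here, so treat that part as unverified):

```shell
# Workaround: remove the stale devices file so LVM falls back to scanning
# all block devices, then reboot.
sudo rm -rf /etc/lvm/devices/
sudo reboot

# Presumed cleaner alternative: rebuild the devices file to match the
# devices as they appear on this instance (nvme0n1p3 rather than vda3).
sudo rm -f /etc/lvm/devices/system.devices
sudo vgimportdevices -a
```

The underlying issue seems to be that the devices file pins the PV to the device name it had at build time (/dev/vda3), which no longer exists on a Nitro instance where the disk shows up as /dev/nvme0n1p3.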
Also, both volumes have the following partition:
/dev/nvme0n1p4 20967424 20969471 2048 1M BIOS boot
even though I specified UEFI when creating the AMIs. I assume I can delete this partition?