But I noticed that the SHA-256 checksum in the file does not match the actual image. The reason appears to be that the image was replaced after May 23/24 for some reason, but the SHA-256 CHECKSUM file is still the one for the older (now missing) image from May 23.
The image that is now published under the May 23 date is actually from June 05 and appears to be identical to the image with the latest tag.
It is probably nothing serious, just a bit annoying. I only noticed it myself because I pin my images in Terraform with their SHA-256 checksums, and Terraform flagged that the size of the image had changed.
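For anyone who wants to reproduce the check outside Terraform, here is a minimal Python sketch of what I am effectively doing. The URLs are placeholders for the dated image and its CHECKSUM file (adjust them to whatever you pin), and the parsing assumes the BSD-style `SHA256 (<file>) = <hex>` lines the published CHECKSUM files use:

```python
import hashlib
import urllib.request

# Placeholder URLs; substitute the dated image you pin and its CHECKSUM file.
IMAGE_URL = ("https://dl.rockylinux.org/pub/rocky/9/images/x86_64/"
             "Rocky-9-GenericCloud-Base-9.4-20240523.0.x86_64.qcow2")
CHECKSUM_URL = IMAGE_URL + ".CHECKSUM"  # assumption: a directory-level
                                        # CHECKSUM file parses the same way

def sha256_of_url(url: str) -> str:
    """Stream the download and hash it without holding the image in memory."""
    digest = hashlib.sha256()
    with urllib.request.urlopen(url) as resp:
        for chunk in iter(lambda: resp.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def expected_sha256(checksum_url: str, filename: str) -> str:
    """Extract the 'SHA256 (<file>) = <hex>' line for the given file."""
    text = urllib.request.urlopen(checksum_url).read().decode()
    for line in text.splitlines():
        if line.startswith("SHA256") and filename in line:
            return line.rsplit("=", 1)[1].strip()
    raise ValueError(f"no SHA256 entry for {filename}")

actual = sha256_of_url(IMAGE_URL)
expected = expected_sha256(CHECKSUM_URL, IMAGE_URL.rsplit("/", 1)[-1])
print("match" if actual == expected else f"MISMATCH: {actual} != {expected}")
```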
Does anyone have any ideas what might have gone wrong when building/uploading the image and calculating the checksum?
I just checked back with the wiki. It says with respect to the dated cloud images:
> The first format will always be a constant. Cloud images will appear in this format in the majority of cases and there may be more than one at any given time. Updates can occur for newer kernels or to address issues in previous versions.
But it clearly didn’t stay constant here. Any ideas what has happened? It looks like it was symlinked to the latest image for some reason.
“Latest” will always be the latest image we built and pushed out. It also has a checksum, which should match the latest dated image there. If you are downloading “latest” and you are not getting the checksum you are expecting, it is likely the CDN has an old cache. It should be noted that latest should be pointing to the June images that were published over the weekend.
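If you want to check whether you are being served from an old CDN cache, inspecting the cache-related response headers can help. A rough sketch, with the caveat that which headers are actually present depends entirely on the CDN in front of the mirror:

```python
import urllib.request

# Placeholder URL; point this at the CHECKSUM (or image) you are fetching.
URL = "https://dl.rockylinux.org/pub/rocky/9/images/x86_64/CHECKSUM"

req = urllib.request.Request(URL, method="HEAD")
with urllib.request.urlopen(req) as resp:
    # Which of these headers exist depends on the CDN; a large Age or an
    # "X-Cache: HIT" usually means you were served a cached copy.
    for name in ("Age", "X-Cache", "ETag", "Last-Modified", "Cache-Control"):
        print(f"{name}: {resp.headers.get(name)}")
```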
As indicated, I am not trying to download latest but a dated image. The link can be found above.
The problem is that the dated image had changed, which is what Terraform informed me about, and it should not have changed. And since it had changed, it no longer matched its CHECKSUM file, because that file had stayed the same.
The weird thing is: it appears to have changed back again. I took another screenshot.
We don’t change or replace already created and published images. We would increment .0 to .1 if we built a new image on the same exact day. When I look at the builds we’ve done, this is what I see:
```
# aws s3 ls s3://resf-empanadas/buildimage-9.4-x86_64/ | grep Generic
PRE Rocky-9-GenericCloud-Base-9.4-20240507.0.x86_64/
PRE Rocky-9-GenericCloud-Base-9.4-20240508.0.x86_64/
PRE Rocky-9-GenericCloud-Base-9.4-20240509.0.x86_64/
PRE Rocky-9-GenericCloud-Base-9.4-20240515.0.x86_64/
PRE Rocky-9-GenericCloud-Base-9.4-20240516.0.x86_64/
PRE Rocky-9-GenericCloud-Base-9.4-20240522.0.x86_64/
PRE Rocky-9-GenericCloud-Base-9.4-20240523.0.x86_64/
PRE Rocky-9-GenericCloud-Base-9.4-20240605.0.x86_64/
PRE Rocky-9-GenericCloud-Base-9.4-20240609.0.x86_64/
PRE Rocky-9-GenericCloud-LVM-9.4-20240508.0.x86_64/
PRE Rocky-9-GenericCloud-LVM-9.4-20240509.0.x86_64/
PRE Rocky-9-GenericCloud-LVM-9.4-20240515.0.x86_64/
PRE Rocky-9-GenericCloud-LVM-9.4-20240516.0.x86_64/
PRE Rocky-9-GenericCloud-LVM-9.4-20240522.0.x86_64/
PRE Rocky-9-GenericCloud-LVM-9.4-20240523.0.x86_64/
PRE Rocky-9-GenericCloud-LVM-9.4-20240605.0.x86_64/
PRE Rocky-9-GenericCloud-LVM-9.4-20240609.0.x86_64/
```
And when I look into Rocky-9-GenericCloud-Base-9.4-20240523.0.x86_64 I see only one build.
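To make the naming convention concrete: the date portion of a published name stays constant, and only the trailing counter would bump (.0 to .1) on a same-day rebuild. A tiny illustrative parse (the regex is mine, not part of our tooling):

```python
import re

# The date in a published name stays fixed; only the trailing counter
# would change if we rebuilt on the same exact day.
name = "Rocky-9-GenericCloud-Base-9.4-20240523.0.x86_64"

m = re.match(r".*-(\d{8})\.(\d+)\.x86_64$", name)
date, build = m.group(1), int(m.group(2))
print(f"built {date}, same-day rebuild counter {build}")  # built 20240523, counter 0
```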
It is quite possible for the CDN cache, the checksum files, and the checksums themselves to be misaligned, though the checksum files themselves being incorrect is rare. With that said, when our automation builds new images, it redoes the checksums on all files it finds. So in the event something “rare” happens, it will be corrected.
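As a rough illustration of that re-checksumming step, the logic amounts to something like the following. This is a simplified sketch of the idea, not the actual automation code:

```python
import hashlib
from pathlib import Path

def rebuild_checksums(directory: Path) -> None:
    """Recompute SHA-256 for every artifact in a build directory and rewrite
    its CHECKSUM file, so any stale entry is corrected on the next run."""
    lines = []
    for path in sorted(directory.iterdir()):
        if not path.is_file() or path.name == "CHECKSUM":
            continue
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        # BSD-style line, matching the format of the published CHECKSUM files
        lines.append(f"SHA256 ({path.name}) = {digest.hexdigest()}")
    (directory / "CHECKSUM").write_text("\n".join(lines) + "\n")

rebuild_checksums(Path("Rocky-9-GenericCloud-Base-9.4-20240523.0.x86_64"))
```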
Yes, that was because the file size of the dated image was suddenly identical to latest. To me it looked as if building the latest image had also replaced the dated image I was using in my infra.
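For reference, the size comparison itself is easy to reproduce with two HEAD requests. A quick sketch; both URLs are assumptions to be adjusted to the dated image you pin and the corresponding "latest" name:

```python
import urllib.request

# Both URLs are placeholders; adjust them to the images you are comparing.
DATED = ("https://dl.rockylinux.org/pub/rocky/9/images/x86_64/"
         "Rocky-9-GenericCloud-Base-9.4-20240523.0.x86_64.qcow2")
LATEST = ("https://dl.rockylinux.org/pub/rocky/9/images/x86_64/"
          "Rocky-9-GenericCloud-Base.latest.x86_64.qcow2")

def content_length(url: str) -> int:
    """HEAD the URL and return the advertised size in bytes."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return int(resp.headers["Content-Length"])

dated, latest = content_length(DATED), content_length(LATEST)
print(f"dated:  {dated} bytes\nlatest: {latest} bytes")
print("identical size" if dated == latest else "sizes differ")
```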
> So in the event something “rare” happens, it will be corrected.
That does indeed seem to be what has happened: the dated image matches its checksum again. Terraform is happy again, and so am I.
Anyway, thanks for getting back to me on this. Should I run into something similar in the future, I’ll let you know.
I can see how this can be confusing. We’ve had cases in the past where users were downloading or interacting with our images while an rsync was occurring in the background.
And good to know you have it working again! Apologies for the trouble.