Podman is dead, man

Hi,

I’ve had Podman running on this RL 8.6 machine with a couple of containers for many months, with zero problems. Today, Podman is completely borked (see the errors below).

I have automatic updates enabled, so it might be due to a recent update, although I’m not 100% certain. What I am certain of is that it was working perfectly yesterday, and I didn’t even SSH into the machine between now and yesterday (and I’m the only user), so automatic updates seem like a likely culprit.

Every single podman command (e.g. podman ps) fails with this same error:

fatal error: index out of range

runtime stack:
runtime.throw({0x555a2a74f363, 0x555a2bd1d5a0})
/usr/lib/golang/src/runtime/panic.go:1198 +0x71 fp=0x7ffc53f89458 sp=0x7ffc53f89428 pc=0x555a2958dad1
runtime.panicCheck1(0x7ffc53f89628, {0x555a2a74f363, 0x12})
/usr/lib/golang/src/runtime/panic.go:36 +0x8b fp=0x7ffc53f89480 sp=0x7ffc53f89458 pc=0x555a2958a94b
runtime.goPanicIndexU(0x80000000c5d90, 0x4da318)
/usr/lib/golang/src/runtime/panic.go:93 +0x34 fp=0x7ffc53f894c0 sp=0x7ffc53f89480 pc=0x555a2958aad4
runtime.findfunc(0x555a2b89a078)
/usr/lib/golang/src/runtime/symtab.go:769 +0x13e fp=0x7ffc53f894e0 sp=0x7ffc53f894c0 pc=0x555a295ab77e
runtime.gentraceback(0x7f7ae0119700, 0x7ffc53f89950, 0x7f7ae0119700, 0x7ffc53f899e0, 0x4, 0x7ffc53f89968, 0x20, 0x0, 0x0, 0x0)
/usr/lib/golang/src/runtime/traceback.go:255 +0x6f9 fp=0x7ffc53f89850 sp=0x7ffc53f894e0 pc=0x555a295b2cf9
runtime.callers.func1()
/usr/lib/golang/src/runtime/traceback.go:891 +0x52 fp=0x7ffc53f898b8 sp=0x7ffc53f89850 pc=0x555a295b54b2
runtime.callers(0x4, {0x7ffc53f89968, 0x555a2d1335f0, 0x555a2a729360})
/usr/lib/golang/src/runtime/traceback.go:890 +0xa7 fp=0x7ffc53f89920 sp=0x7ffc53f898b8 pc=0x555a295b5407
runtime.mProf_Malloc(0xc000000b60, 0x1a0)
/usr/lib/golang/src/runtime/mprof.go:342 +0x6a fp=0x7ffc53f89a90 sp=0x7ffc53f89920 pc=0x555a2958464a
runtime.profilealloc(0x7f7b09b17108, 0x1a0, 0x188)
/usr/lib/golang/src/runtime/malloc.go:1270 +0x85 fp=0x7ffc53f89ac8 sp=0x7ffc53f89a90 pc=0x555a295655c5
runtime.mallocgc(0x188, 0x555a2af8b500, 0x1)
/usr/lib/golang/src/runtime/malloc.go:1149 +0x725 fp=0x7ffc53f89b48 sp=0x7ffc53f89ac8 pc=0x555a29565165
runtime.newobject(0x15066087ea8)
/usr/lib/golang/src/runtime/malloc.go:1234 +0x27 fp=0x7ffc53f89b70 sp=0x7ffc53f89b48 pc=0x555a29565447
runtime.malg(0x800)
/usr/lib/golang/src/runtime/proc.go:4220 +0x28 fp=0x7ffc53f89bb0 sp=0x7ffc53f89b70 pc=0x555a29598788
runtime.newproc1(0x555a2afa0ca8, 0xc0000001a0, 0x6c770, 0x555a295c3333, 0x555a29590365)
/usr/lib/golang/src/runtime/proc.go:4305 +0x94 fp=0x7ffc53f89c00 sp=0x7ffc53f89bb0 pc=0x555a29598a74
fatal error: index out of range
panic during panic

runtime stack:
runtime.throw({0x555a2a74f363, 0x555a2bd1d5a0})
/usr/lib/golang/src/runtime/panic.go:1198 +0x71 fp=0x7ffc53f88d48 sp=0x7ffc53f88d18 pc=0x555a2958dad1
runtime.panicCheck1(0x7ffc53f88f18, {0x555a2a74f363, 0x12})
/usr/lib/golang/src/runtime/panic.go:36 +0x8b fp=0x7ffc53f88d70 sp=0x7ffc53f88d48 pc=0x555a2958a94b
runtime.goPanicIndexU(0x80000000c5d90, 0x4da318)
/usr/lib/golang/src/runtime/panic.go:93 +0x34 fp=0x7ffc53f88db0 sp=0x7ffc53f88d70 pc=0x555a2958aad4
runtime.findfunc(0x555a2b89a078)
/usr/lib/golang/src/runtime/symtab.go:769 +0x13e fp=0x7ffc53f88dd0 sp=0x7ffc53f88db0 pc=0x555a295ab77e
runtime.gentraceback(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x64, 0x0, 0x0, 0x0)
/usr/lib/golang/src/runtime/traceback.go:255 +0x6f9 fp=0x7ffc53f89140 sp=0x7ffc53f88dd0 pc=0x555a295b2cf9
runtime.traceback1(0x555a2a74ac7e, 0x10, 0x7ffc53f89328, 0x555a2bd8e3a0, 0x10)
/usr/lib/golang/src/runtime/traceback.go:823 +0x187 fp=0x7ffc53f89300 sp=0x7ffc53f89140 pc=0x555a295b4d27
runtime.traceback(0x555a2a74ac7e, 0x0, 0x0, 0xf4240)
/usr/lib/golang/src/runtime/traceback.go:777 +0x1b fp=0x7ffc53f89338 sp=0x7ffc53f89300 pc=0x555a295b4abb
runtime.dopanic_m(0x555a2bd8e3a0, 0x1, 0x1)
/usr/lib/golang/src/runtime/panic.go:1394 +0x211 fp=0x7ffc53f893b0 sp=0x7ffc53f89338 pc=0x555a2958e311
runtime.fatalthrow.func1()
/usr/lib/golang/src/runtime/panic.go:1253 +0x48 fp=0x7ffc53f893f0 sp=0x7ffc53f893b0 pc=0x555a2958dd88
runtime.fatalthrow()
/usr/lib/golang/src/runtime/panic.go:1250 +0x50 fp=0x7ffc53f89428 sp=0x7ffc53f893f0 pc=0x555a2958dd10
runtime.throw({0x555a2a74f363, 0x555a2bd1d5a0})
/usr/lib/golang/src/runtime/panic.go:1198 +0x71 fp=0x7ffc53f89458 sp=0x7ffc53f89428 pc=0x555a2958dad1
runtime.panicCheck1(0x7ffc53f89628, {0x555a2a74f363, 0x12})
/usr/lib/golang/src/runtime/panic.go:36 +0x8b fp=0x7ffc53f89480 sp=0x7ffc53f89458 pc=0x555a2958a94b
runtime.goPanicIndexU(0x80000000c5d90, 0x4da318)
/usr/lib/golang/src/runtime/panic.go:93 +0x34 fp=0x7ffc53f894c0 sp=0x7ffc53f89480 pc=0x555a2958aad4
runtime.findfunc(0x555a2b89a078)
/usr/lib/golang/src/runtime/symtab.go:769 +0x13e fp=0x7ffc53f894e0 sp=0x7ffc53f894c0 pc=0x555a295ab77e
runtime.gentraceback(0x7f7ae0119700, 0x7ffc53f89950, 0x7f7ae0119700, 0x7ffc53f899e0, 0x4, 0x7ffc53f89968, 0x20, 0x0, 0x0, 0x0)
/usr/lib/golang/src/runtime/traceback.go:255 +0x6f9 fp=0x7ffc53f89850 sp=0x7ffc53f894e0 pc=0x555a295b2cf9
runtime.callers.func1()
/usr/lib/golang/src/runtime/traceback.go:891 +0x52 fp=0x7ffc53f898b8 sp=0x7ffc53f89850 pc=0x555a295b54b2
runtime.callers(0x4, {0x7ffc53f89968, 0x555a2d1335f0, 0x555a2a729360})
/usr/lib/golang/src/runtime/traceback.go:890 +0xa7 fp=0x7ffc53f89920 sp=0x7ffc53f898b8 pc=0x555a295b5407
runtime.mProf_Malloc(0xc000000b60, 0x1a0)
/usr/lib/golang/src/runtime/mprof.go:342 +0x6a fp=0x7ffc53f89a90 sp=0x7ffc53f89920 pc=0x555a2958464a
runtime.profilealloc(0x7f7b09b17108, 0x1a0, 0x188)
/usr/lib/golang/src/runtime/malloc.go:1270 +0x85 fp=0x7ffc53f89ac8 sp=0x7ffc53f89a90 pc=0x555a295655c5
runtime.mallocgc(0x188, 0x555a2af8b500, 0x1)
/usr/lib/golang/src/runtime/malloc.go:1149 +0x725 fp=0x7ffc53f89b48 sp=0x7ffc53f89ac8 pc=0x555a29565165
runtime.newobject(0x15066087ea8)
/usr/lib/golang/src/runtime/malloc.go:1234 +0x27 fp=0x7ffc53f89b70 sp=0x7ffc53f89b48 pc=0x555a29565447
runtime.malg(0x800)
/usr/lib/golang/src/runtime/proc.go:4220 +0x28 fp=0x7ffc53f89bb0 sp=0x7ffc53f89b70 pc=0x555a29598788
runtime.newproc1(0x555a2afa0ca8, 0xc0000001a0, 0x6c770, 0x555a295c3333, 0x555a29590365)
/usr/lib/golang/src/runtime/proc.go:4305 +0x94 fp=0x7ffc53f89c00 sp=0x7ffc53f89bb0 pc=0x555a29598a74
fatal error: index out of range
stack trace unavailable

But Rocky 8.6 hasn’t existed for “many months”, or has it?

Lol idk, like I said, I have auto updates enabled. What I meant was:

  • This machine has been running and stable for many months
  • This machine is currently on RL 8.6

I don’t closely track which version it’s on, which is why I enabled automatic updates. As long as my containers are working, I don’t need to worry about it :slight_smile:

OK, that makes more sense.
In that case, I wonder if the update from 8.5 to 8.6 is what caused this, although it’s unlikely if you are using only the official Rocky repos.
One thing you could check is the ‘dnf’ logs to see exactly what was updated and when, e.g. related to podman and golang.
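For example, something like this would surface the relevant entries (a sketch; /var/log/dnf.rpm.log is the default log location on Rocky 8, and the sample lines below only approximate the real log format so the pipeline itself is visible):

```shell
# Sketch: filter dnf's rpm log for recently changed packages of interest.
# On the real machine you would run:
#   grep -iE 'podman|golang' /var/log/dnf.rpm.log
# Demonstrated here against inline sample lines approximating the log format:
sample='2022-06-21T06:15:40Z SUBDEBUG Upgraded: gitlab-runner-15.1.0-1.x86_64
2022-06-21T06:16:02Z SUBDEBUG Upgrade: grub2-common-1:2.02-123.el8_6.8.noarch'
printf '%s\n' "$sample" | grep -iE 'podman|golang|gitlab-runner'
```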

I’m running podman on several Rocky 8.6 instances without any issue.

Run dnf history to see if something was upgraded.
Then dnf history info <ID> to see the details of a transaction.

I fixed the problem by simply uninstalling and reinstalling podman. My containers are back up and running, though I’m not 100% sure what might have caused it.
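For reference, the fix amounted to roughly the following (a sketch, printed here as a dry run; a plain `dnf reinstall podman` may also do the job):

```shell
# Sketch of the fix, shown as a dry run (drop the "echo" to actually execute).
# Assumes the stock podman package from the Rocky AppStream repo.
reinstall_podman() {
  echo dnf -y remove podman
  echo dnf -y install podman
}
reinstall_podman
```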

Looking at the history like @olista suggested shows that there was an automatic update installed this morning at 2022-06-21 06:15. The dnf history info for that event is this:

Transaction ID : 38
Begin time : Tue 21 Jun 2022 06:15:36 AM EDT
Begin rpmdb : 807:4ba8c53dc4ae3872e1405cf1bb80a140ccfb1e23
End time : Tue 21 Jun 2022 06:18:00 AM EDT (144 seconds)
End rpmdb : 807:688a101b5d7ffb482cee50a6f3bef35e792b35a7
User : System
Return-Code : Success
Releasever : 8
Command Line :
Comment :
Packages Altered:
Upgrade grub2-common-1:2.02-123.el8_6.8.noarch @baseos
Upgraded grub2-common-1:2.02-123.el8.noarch @@System
Upgrade grub2-pc-1:2.02-123.el8_6.8.x86_64 @baseos
Upgraded grub2-pc-1:2.02-123.el8.x86_64 @@System
Upgrade grub2-pc-modules-1:2.02-123.el8_6.8.noarch @baseos
Upgraded grub2-pc-modules-1:2.02-123.el8.noarch @@System
Upgrade grub2-tools-1:2.02-123.el8_6.8.x86_64 @baseos
Upgraded grub2-tools-1:2.02-123.el8.x86_64 @@System
Upgrade grub2-tools-efi-1:2.02-123.el8_6.8.x86_64 @baseos
Upgraded grub2-tools-efi-1:2.02-123.el8.x86_64 @@System
Upgrade grub2-tools-extra-1:2.02-123.el8_6.8.x86_64 @baseos
Upgraded grub2-tools-extra-1:2.02-123.el8.x86_64 @@System
Upgrade grub2-tools-minimal-1:2.02-123.el8_6.8.x86_64 @baseos
Upgraded grub2-tools-minimal-1:2.02-123.el8.x86_64 @@System
Upgrade gitlab-runner-15.1.0-1.x86_64 @runner_gitlab-runner
Upgraded gitlab-runner-15.0.0-1.x86_64 @@System
Scriptlet output:
1 GitLab Runner: detected user gitlab-runner
2 Runtime platform arch=amd64 os=linux pid=3304364 revision=76984217 version=15.1.0
3 gitlab-runner: Service is running
4 Runtime platform arch=amd64 os=linux pid=3304420 revision=76984217 version=15.1.0
5 gitlab-ci-multi-runner: the service is not installed
6 Runtime platform arch=amd64 os=linux pid=3304455 revision=76984217 version=15.1.0
7 Runtime platform arch=amd64 os=linux pid=3304510 revision=76984217 version=15.1.0
8 INFO: Docker installation not found, skipping clear-docker-cache

It seems like gitlab-runner was updated, which is not an official Rocky package, and it also doesn’t officially support podman (I’m using a custom runner someone made).

…although, since GitLab Runner has nothing to do with podman, I don’t see why its update script would affect it. It is a Go project though, so maybe it messed up some common Go libraries? I guess I’ll find out next time there’s a gitlab-runner update and I hit the same issue.

There doesn’t seem to be a conflict at the moment, though. My containers are running and my CI pipelines are passing, so both Podman and GitLab Runner are working fine.

Great you could fix it. :slight_smile:

The gitlab-runner upgrade would have cleared the cache, if Docker was used:
8 INFO: Docker installation not found, skipping clear-docker-cache

Since podman is used instead, that step was skipped. Maybe it’s related to the issue? It’s difficult to know for sure after the fact.

I’m not a fan of dnf-automatic, even though it is officially supported. Some updates require a reboot, and that isn’t handled by dnf-automatic, so it can leave the system in an unstable state. An issue can then appear weeks later, when it’s nearly impossible to determine which update was the cause.

My stable systems are updated manually or automatically every few months and then rebooted.
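For what it’s worth, dnf-automatic can also be told to only download and notify rather than install, via /etc/dnf/automatic.conf (the option names below are from the stock RHEL/Rocky 8 package; check your installed version’s defaults):

```ini
[commands]
# "security" restricts unattended updates to security errata;
# "default" pulls everything (the riskier behaviour discussed above).
upgrade_type = security
# "no" means download and notify only; the install (and any required
# reboot) then happens in a manual maintenance window.
apply_updates = no
```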
