Docker Installation failed on RHEL 10

Hi everybody, I am trying to install Docker on Rocky Linux 10, but I keep getting the following error. Do you know what might be causing it and how I can resolve it?

For the installation, I followed the Docker tutorial, as below:

sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo
sudo dnf install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl enable --now docker

Is there really a difference between RHEL and CentOS repositories?

$ sudo systemctl status docker
× docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; preset: disabled)
     Active: failed (Result: exit-code) since Tue 2025-12-09 23:06:01 CET; 12min ago
 Invocation: 3c144637e90647ef926cfd24ec32262e
TriggeredBy: × docker.socket
       Docs: https://docs.docker.com
    Process: 63832 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
   Main PID: 63832 (code=exited, status=1/FAILURE)

Dec 09 23:06:01 localhost systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Dec 09 23:06:01 localhost systemd[1]: docker.service: Start request repeated too quickly.
Dec 09 23:06:01 localhost systemd[1]: docker.service: Failed with result 'exit-code'.
Dec 09 23:06:01 localhost systemd[1]: Failed to start docker.service - Docker Application Container Engine.

Here are the logs from journalctl -xeu docker.service:

Dec 09 23:05:59 localhost dockerd[63832]: time="2025-12-09T23:05:59.071611169+01:00" level=info msg="Starting up"
Dec 09 23:05:59 localhost dockerd[63832]: time="2025-12-09T23:05:59.071956321+01:00" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 09 23:05:59 localhost dockerd[63832]: time="2025-12-09T23:05:59.072008009+01:00" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi  
Dec 09 23:05:59 localhost dockerd[63832]: time="2025-12-09T23:05:59.072016471+01:00" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi      
Dec 09 23:05:59 localhost dockerd[63832]: time="2025-12-09T23:05:59.076524885+01:00" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
Dec 09 23:05:59 localhost dockerd[63832]: time="2025-12-09T23:05:59.086044853+01:00" level=info msg="Loading containers: start."
Dec 09 23:05:59 localhost dockerd[63832]: time="2025-12-09T23:05:59.086102700+01:00" level=info msg="Starting daemon with containerd snapshotter integration enabled"
Dec 09 23:05:59 localhost dockerd[63832]: time="2025-12-09T23:05:59.086928057+01:00" level=info msg="Restoring containers: start."
Dec 09 23:05:59 localhost dockerd[63832]: time="2025-12-09T23:05:59.099241071+01:00" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
Dec 09 23:05:59 localhost dockerd[63832]: time="2025-12-09T23:05:59.106238041+01:00" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
Dec 09 23:05:59 localhost dockerd[63832]: time="2025-12-09T23:05:59.189469242+01:00" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Dec 09 23:05:59 localhost dockerd[63832]: failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to register "bridge" driver: failed to add jump rules to ipv4 NAT table: failed to append jump rules to nat-PREROUTING:  (iptables failed: iptables --wait -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER: Warning: Extension addrtype revision 0 not supported, missing kernel module?
Dec 09 23:05:59 localhost dockerd[63832]: iptables v1.8.11 (nf_tables):  RULE_APPEND failed (No such file or directory): rule in chain PREROUTING
Dec 09 23:05:59 localhost dockerd[63832]:  (exit status 4))
Dec 09 23:05:59 localhost systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE

Thank you for your help ! :grin:

Hi picjuju,

I am unable to reproduce your issue on a fresh install of 10.1, using those same instructions:

$ systemctl status  docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; preset: disabled)
Active: active (running) since Tue 2025-12-09 23:11:10 GMT; 8s ago
Invocation: e2e5e00bebe94ef39bf0b267bfdcf48a
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 6249 (dockerd)
Tasks: 9
Memory: 26.5M (peak: 28.4M)
CPU: 362ms
CGroup: /system.slice/docker.service
└─6249 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Dec 09 23:11:08 localhost.localdomain dockerd[6249]: time="2025-12-09T23:11:08.780720855Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
Dec 09 23:11:08 localhost.localdomain dockerd[6249]: time="2025-12-09T23:11:08.856788620Z" level=info msg="Firewalld: created docker-forwarding policy"
Dec 09 23:11:10 localhost.localdomain dockerd[6249]: time="2025-12-09T23:11:10.207660388Z" level=info msg="Loading containers: done."
Dec 09 23:11:10 localhost.localdomain dockerd[6249]: time="2025-12-09T23:11:10.240228313Z" level=info msg="Docker daemon" commit=de45c2a containerd-snapshotter=true storage-driver=overlayfs version=29.1.2
Dec 09 23:11:10 localhost.localdomain dockerd[6249]: time="2025-12-09T23:11:10.240939540Z" level=info msg="Initializing buildkit"
Dec 09 23:11:10 localhost.localdomain dockerd[6249]: time="2025-12-09T23:11:10.271909441Z" level=warning msg="git source cannot be enabled: failed to find git binary: exec: \"git\": executable file not found in $PATH"
Dec 09 23:11:10 localhost.localdomain dockerd[6249]: time="2025-12-09T23:11:10.283646534Z" level=info msg="Completed buildkit initialization"
Dec 09 23:11:10 localhost.localdomain dockerd[6249]: time="2025-12-09T23:11:10.308225344Z" level=info msg="Daemon has completed initialization"
Dec 09 23:11:10 localhost.localdomain dockerd[6249]: time="2025-12-09T23:11:10.308477992Z" level=info msg="API listen on /run/docker.sock"
Dec 09 23:11:10 localhost.localdomain systemd[1]: Started docker.service - Docker Application Container Engine.

I can also run a Docker container with no issues. On your system, though, Docker appears to be trying to use iptables rather than firewalld. Could you please run the following and provide the output:

sudo modprobe ip_tables
sudo systemctl restart docker
systemctl status docker

Regards Tom.

Hi Tom, thank you for your response. However, it seems that I don't have the ip_tables module on RHEL 10.

$ sudo modprobe ip_tables
modprobe: FATAL: Module ip_tables not found in directory /lib/modules/6.12.0-55.14.1.el10_0.x86_64

Hi Tom, I think the problem comes from RHEL 10 itself. I just tried with RHEL 9 (9.7) and it works perfectly. Perhaps we need to wait for a few more fixes before Docker is fully supported on RHEL 10.

RHEL10 does have the module:

root@rhel10:~# modprobe ip_tables
root@rhel10:~# lsmod | grep tables
ip_tables              32768  0
nf_tables             393216  439 nft_ct,nft_reject_inet,nft_fib_ipv6,nft_fib_ipv4,nft_chain_nat,nft_reject,nft_fib,nft_fib_inet
nfnetlink              20480  3 nf_tables

however, I'm using a newer RHEL kernel than you are. Perhaps update your system fully and reboot into the new kernel? The same also works on an updated Rocky 10 install.


Hi iwalker, thank you for your response. When I try to install iptables, the service fails to start.

After bootstrap of RHEL 10:

$ lsmod | grep tables
nf_tables             389120  0
nfnetlink              20480  2 nf_tables

After iptables installation:

$ sudo systemctl status iptables
× iptables.service - IPv4 firewall with iptables
     Loaded: loaded (/usr/lib/systemd/system/iptables.service; enabled; preset: disabled)
     Active: failed (Result: exit-code) since Wed 2025-12-10 14:11:50 CET; 14s ago
 Invocation: 1cd1933c306f4cb8b25d96fa04f061bd
    Process: 66546 ExecStart=/usr/libexec/iptables/iptables.init start (code=exited, status=1/FAILURE)
   Main PID: 66546 (code=exited, status=1/FAILURE)
   Mem peak: 1.8M
        CPU: 6ms

Dec 10 14:11:50 localhost iptables.init[66552]: Warning: Extension REJECT revision 0 not supported, missing kernel module?
Dec 10 14:11:50 localhost iptables.init[66552]: iptables-restore v1.8.11 (nf_tables):
Dec 10 14:11:50 localhost iptables.init[66552]: line 8: RULE_APPEND failed (No such file or directory): rule in chain INPUT
Dec 10 14:11:50 localhost iptables.init[66552]: line 11: RULE_APPEND failed (No such file or directory): rule in chain INPUT
Dec 10 14:11:50 localhost iptables.init[66552]: line 12: RULE_APPEND failed (No such file or directory): rule in chain INPUT
Dec 10 14:11:50 localhost iptables.init[66552]: line 13: RULE_APPEND failed (No such file or directory): rule in chain FORWARD
Dec 10 14:11:50 localhost iptables.init[66546]: [FAILED]
Dec 10 14:11:50 localhost systemd[1]: iptables.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 14:11:50 localhost systemd[1]: iptables.service: Failed with result 'exit-code'.
Dec 10 14:11:50 localhost systemd[1]: Failed to start iptables.service - IPv4 firewall with iptables.
$ sudo journalctl -xeu iptables.service
░░
░░ A start job for unit iptables.service has begun execution.
░░
░░ The job identifier is 89023.
Dec 10 14:04:56 localhost iptables.init[65058]: iptables: Applying firewall rules:
Dec 10 14:04:56 localhost iptables.init[65064]: Warning: Extension state revision 0 not supported, missing kernel module?
Dec 10 14:04:56 localhost iptables.init[65064]: Warning: Extension state is not supported, missing kernel module?
Dec 10 14:04:56 localhost iptables.init[65064]: Warning: Extension tcp revision 0 not supported, missing kernel module?
Dec 10 14:04:56 localhost iptables.init[65064]: Warning: Extension REJECT revision 0 not supported, missing kernel module?
Dec 10 14:04:56 localhost iptables.init[65064]: iptables-restore v1.8.11 (nf_tables):
Dec 10 14:04:56 localhost iptables.init[65064]: line 8: RULE_APPEND failed (No such file or directory): rule in chain INPUT
Dec 10 14:04:56 localhost iptables.init[65064]: line 11: RULE_APPEND failed (No such file or directory): rule in chain INPUT
Dec 10 14:04:56 localhost iptables.init[65064]: line 12: RULE_APPEND failed (No such file or directory): rule in chain INPUT
Dec 10 14:04:56 localhost iptables.init[65064]: line 13: RULE_APPEND failed (No such file or directory): rule in chain FORWARD
Dec 10 14:04:56 localhost iptables.init[65058]: [FAILED]
Dec 10 14:04:56 localhost systemd[1]: iptables.service: Main process exited, code=exited, status=1/FAILURE

Hi @picjuju!

I had the same Docker error on Rocky Linux 10. In my case, it was a fresh install (cloud image). I was following the official Rocky Linux Docker guide (https://docs.rockylinux.org/10/gemstones/containers/docker/), but after installing Docker, I got this error when trying to start it:

failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to register "bridge" driver: failed to add jump rules to ipv4 NAT table: failed to append jump rules to nat-PREROUTING: (iptables failed: iptables --wait -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER: Warning: Extension addrtype revision 0 not supported, missing kernel module?

Here’s what worked for me:

First, update to Rocky Linux 10.1

sudo dnf update -y

Install the missing kernel modules

sudo dnf install -y kernel-modules-extra

Load the required module

sudo modprobe xt_addrtype

Restart Docker

sudo systemctl restart docker

Test

sudo docker run hello-world

The issue was the missing kernel-modules-extra package. Docker needs the xt_addrtype module from it for networking. After these steps, Docker runs perfectly!
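Before restarting Docker, you can check whether the module is actually on disk for your running kernel, without loading it. A small sketch (no root needed; modinfo is part of the kmod package on RHEL-family systems):

```shell
# Check whether the running kernel ships xt_addrtype, without loading it.
# Both branches mention the module name so the output is easy to grep.
if modinfo xt_addrtype >/dev/null 2>&1; then
  echo "xt_addrtype is available for $(uname -r)"
else
  echo "xt_addrtype is missing for $(uname -r): install kernel-modules-extra (and reboot if the kernel was updated)"
fi
```

If this reports the module as missing even after installing kernel-modules-extra, the package was probably installed for a newer kernel than the one currently running.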

In other words, Docker creates iptables/netfilter rules that use the addrtype module, an iptables extension.

RHEL, starting from RHEL 8, has defaulted to nf_tables rulesets, deprecating the use of iptables.
Ideally, everybody (e.g. Docker) would switch to using nf_tables directly.

Thanks for the technical details! You’re right about nf_tables being the default in RHEL/Rocky 8+.

In my case, the issue was simply that kernel-modules-extra wasn’t installed by default, and Docker (even with nf_tables) still needs the xt_addrtype module for its networking rules.

As noted in Docker's nftables documentation ("Docker with nftables" in the Docker Docs), nftables support is still experimental, and Docker currently uses the iptables compatibility layer, which still requires these kernel modules.

The steps I shared worked for getting Docker running on a fresh Rocky Linux 10 install, regardless of whether it’s using iptables or nf_tables backend. Hopefully Docker will fully transition to native nf_tables soon!
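For anyone who wants to try the experimental native backend mentioned above: if I'm reading that Docker Docs page correctly, it is opted into via /etc/docker/daemon.json (the exact key and supported versions should be verified against the docs for your Docker release before relying on this):

```json
{
  "firewall-backend": "nftables"
}
```

Restart the docker service after editing the file. This is experimental, so the iptables-compatibility route above remains the safe default.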


Am I the only one who can't get Docker to work?

I'm trying to provision Rocky Linux 10 via Terraform on my Proxmox machine to deploy a GitLab runner. So I installed the latest version of the image https://dl.rockylinux.org/pub/rocky/10/images/x86_64/Rocky-10-GenericCloud-Base.latest.x86_64.qcow2

Here is my Terraform config. Maybe there's an error in my code that I don't see.

main.tf

# Ensure snippets dir exists and upload cloud-init YAMLs to the Proxmox node
resource "null_resource" "snippets" {
  triggers = {
    name = var.name
  }

  provisioner "remote-exec" {
    inline = [
      "mkdir -p /var/lib/vz/snippets"
    ]
    connection {
      type  = "ssh"
      host  = var.pve_host
      user  = var.pve_user
      agent = true
    }
  }

  provisioner "file" {
    content = templatefile("${path.module}/cloudinit/system-config.yaml.tmpl", {
      hostname     = var.name
      user         = var.ssh_user
      disable_root = var.disable_root
      ssh_pubkeys  = var.ssh_public_keys
      packages     = var.packages
      timezone     = var.timezone
    })
    destination = "/var/lib/vz/snippets/${var.name}-system-config.yml"
    connection {
      type  = "ssh"
      host  = var.pve_host
      user  = var.pve_user
      agent = true
    }
  }

  provisioner "file" {
    content = templatefile("${path.module}/cloudinit/network-config.yaml.tmpl", {
      ip          = var.ipv4_cidr
      gateway     = var.gateway4
      nameservers = var.nameservers
      iface       = "ens18"
    })
    destination = "/var/lib/vz/snippets/${var.name}-network.yml"
    connection {
      type  = "ssh"
      host  = var.pve_host
      user  = var.pve_user
      agent = true
    }
  }
}

# Create VM from template and point to our snippets
resource "proxmox_vm_qemu" "runner" {
  name        = var.name
  target_node = var.target_node
  clone       = var.template_name
  full_clone  = true
  vmid        = var.vm_id
  onboot      = true
  os_type     = "cloud-init"
  scsihw      = "virtio-scsi-pci"
  boot        = "order=scsi0"
  agent       = 1
  qemu_os     = "l26"
  memory      = var.memory_mb
  tags        = "gitlab-runner"

  cpu {
    cores = var.cores
    type  = "host"
  }

  disks {
    scsi {
      scsi0 {
        disk {
          storage = var.disk_storage
          size    = var.disk_size_gb
          format  = "raw"
        }
      }
    }
    ide {
      ide2 {
        cloudinit {
          storage = var.cloudinit_storage
        }
      }
    }
  }

  network {
    model  = "virtio"
    bridge = var.bridge
    tag    = var.vlan_tag
    id     = 0
  }

  cicustom = "user=${var.snippets_storage}:snippets/${var.name}-system-config.yml,network=${var.snippets_storage}:snippets/${var.name}-network.yml"

  depends_on = [null_resource.snippets]

  lifecycle {
    ignore_changes = [network[0].macaddr, disks, smbios, define_connection_info]
  }
}

# Install GitLab Runner inside the new VM and register it
resource "null_resource" "install_runner" {
  depends_on = [proxmox_vm_qemu.runner]

  connection {
    type  = "ssh"
    host  = chomp(split("/", var.ipv4_cidr)[0])
    user  = var.ssh_user
    agent = true
  }

  provisioner "remote-exec" {
    inline = [
      # Wait for cloud-init and systemd to finish and be ready
      "sudo cloud-init status --wait || true",
      "i=0; while [ $i -lt 60 ]; do s=$(systemctl is-system-running 2>/dev/null || true); case \"$s\" in running|degraded) break;; esac; sleep 3; i=$((i+1)); done",

      # Install GitLab Runner (official repo for RPM-based distros)
      "curl -LJO https://s3.dualstack.us-east-1.amazonaws.com/gitlab-runner-downloads/latest/rpm/gitlab-runner-helper-images.rpm",
      "curl -LJO https://s3.dualstack.us-east-1.amazonaws.com/gitlab-runner-downloads/latest/rpm/gitlab-runner_amd64.rpm",
      "sudo dnf install -y gitlab-runner-helper-images.rpm gitlab-runner_amd64.rpm",
      "sudo systemctl enable --now gitlab-runner",
      "sudo gitlab-runner status || true",

      # Register the runner
      local.register_cmd,

      # Install missing kernel modules to start Docker
      "sudo dnf install -y kernel-modules-extra",
      "sudo modprobe xt_addrtype",

      # Install Docker
      "if [ \"${var.install_docker}\" = \"true\" ]; then",
      "  sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo",
      "  sudo dnf -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin",
      "  sudo systemctl enable --now docker",
      "  sudo usermod -aG docker gitlab-runner",
      "  sudo systemctl restart gitlab-runner",
      "  sudo docker info",
      "  sudo gitlab-runner status",      
      "  sudo systemctl daemon-reload",
      "fi",
    ]
  }
}

local.tf

locals {

  register_cmd = trimspace(
    format(
      "sudo gitlab-runner register --non-interactive --url %q --registration-token %q --executor %s --tag-list %q --description %q %s",
      var.gitlab_url,
      var.gitlab_registration_token,
      var.gitlab_executor,
      join(",", var.runner_tags),
      var.runner_description,
      var.gitlab_executor == "docker"
      ? format("--docker-image %s %s", var.docker_default_image, var.docker_privileged ? "--docker-privileged" : "")
      : ""
    )
  )
}

network-config.yaml.tmpl

version: 2
ethernets:
  ${iface}:
    dhcp4: false
    addresses:
      - ${ip}
    gateway4: ${gateway}
    nameservers:
      addresses: [${join(", ", nameservers)}]

system-config.yaml.tmpl

#cloud-config
hostname: ${hostname}
manage_etc_hosts: true

users:
  - name: ${user}
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    lock_passwd: true
    ssh_authorized_keys:
%{ for k in ssh_pubkeys ~}
      - ${k}
%{ endfor ~}

ssh_pwauth: false
disable_root: ${disable_root}

package_update: true
package_upgrade: true
packages:
%{ for p in packages ~}
  - ${p}
%{ endfor ~}

timezone: ${timezone}

# Ensure cloud-init doesn't reset NIC names
write_files:
  - path: /etc/udev/rules.d/80-net-setup-link.rules
    permissions: "0644"
    content: |
      # Keep predictable interface names (ens*), no legacy rename

runcmd:
  - [ sh, -c, "systemctl daemon-reload || true" ]
  - [ sh, -c, "systemctl enable sshd || systemctl enable ssh || true" ]

Thank you in advance if you spot an error in my code :grin:

Have you tried running sudo dnf update -y first, before the kernel-modules-extra install? In my experience, the fresh Rocky 10.1 cloud image might boot with a newer kernel, but the package repositories on the VM aren’t fully synced yet. Running the update first ensures you get the package version that matches your running kernel.

Same error as at the beginning:

$ sudo systemctl status docker.service
× docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; preset: disabled)
     Active: failed (Result: exit-code) since Wed 2025-12-10 16:27:39 CET; 1min 26s ago
 Invocation: 1841664ac2924108b72f7d8296fa6e31
TriggeredBy: × docker.socket
       Docs: https://docs.docker.com
   Main PID: 35472 (code=exited, status=1/FAILURE)

Dec 10 16:27:39 localhost systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Dec 10 16:27:39 localhost systemd[1]: docker.service: Start request repeated too quickly.
Dec 10 16:27:39 localhost systemd[1]: docker.service: Failed with result 'exit-code'.
Dec 10 16:27:39 localhost systemd[1]: Failed to start docker.service - Docker Application Container Engine.
Dec 10 16:27:37 localhost dockerd[35472]: time="2025-12-10T16:27:37.071343389+01:00" level=info msg="Starting up"
Dec 10 16:27:37 localhost dockerd[35472]: time="2025-12-10T16:27:37.071659232+01:00" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 10 16:27:37 localhost dockerd[35472]: time="2025-12-10T16:27:37.071696797+01:00" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
Dec 10 16:27:37 localhost dockerd[35472]: time="2025-12-10T16:27:37.071703549+01:00" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
Dec 10 16:27:37 localhost dockerd[35472]: time="2025-12-10T16:27:37.076234611+01:00" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
Dec 10 16:27:37 localhost dockerd[35472]: time="2025-12-10T16:27:37.088221839+01:00" level=info msg="Loading containers: start."
Dec 10 16:27:37 localhost dockerd[35472]: time="2025-12-10T16:27:37.088236983+01:00" level=info msg="Starting daemon with containerd snapshotter integration enabled"
Dec 10 16:27:37 localhost dockerd[35472]: time="2025-12-10T16:27:37.088987670+01:00" level=info msg="Restoring containers: start."
Dec 10 16:27:37 localhost dockerd[35472]: time="2025-12-10T16:27:37.095501819+01:00" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
Dec 10 16:27:37 localhost dockerd[35472]: time="2025-12-10T16:27:37.107474162+01:00" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
Dec 10 16:27:37 localhost dockerd[35472]: time="2025-12-10T16:27:37.206785351+01:00" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Dec 10 16:27:37 localhost dockerd[35472]: failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to register "bridge" driver: failed to add jump rules to ipv4 NAT table: failed to append jump rules to nat-PREROUTING:  (iptables failed: iptables --wait -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER: Warning: Extension addrtype revision 0 not supported, missing kernel module?
Dec 10 16:27:37 localhost dockerd[35472]: iptables v1.8.11 (nf_tables):  RULE_APPEND failed (No such file or directory): rule in chain PREROUTING
Dec 10 16:27:37 localhost dockerd[35472]:  (exit status 4))
Dec 10 16:27:37 localhost systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE

Can you run sudo modprobe xt_addrtype manually?

Yes, but it returns an error:

$ sudo modprobe xt_addrtype
modprobe: FATAL: Module xt_addrtype not found in directory /lib/modules/6.12.0-124.8.1.el10_1.x86_64

Can you try running:

sudo dnf distro-sync -y

This will align all installed packages (including kernel-modules-extra) with the currently running kernel version. After it finishes, test again:

sudo modprobe xt_addrtype

If the module is still not found, try rebooting the VM and repeat the modprobe check — sometimes cloud-init upgrades the kernel but the system doesn’t reboot, so you end up booting an older kernel while the modules belong to the newer one.

By the way, I noticed your VM is running kernel:

6.12.0-124.8.1.el10_1.x86_64

While on my Rocky Linux 10.1 instance, after a normal update + reboot, the kernel becomes:

6.12.0-124.16.1.el10_1.x86_64

So it looks like your system has mixed package versions (kernel upgraded by cloud-init, but not rebooted).
You can confirm your OS version with:

cat /etc/os-release

Just to ensure the image and the kernel version are fully in sync.

Here are the results of the commands, run just after the VM finished booting.

$ sudo dnf distro-sync -y
Last metadata expiration check: 0:01:35 ago on Wed 10 Dec 2025 04:59:16 PM CET.
Dependencies resolved.
Nothing to do.
Complete!

$ sudo dnf install -y kernel-modules-extra
Last metadata expiration check: 0:01:43 ago on Wed 10 Dec 2025 04:59:16 PM CET.
Dependencies resolved.
==========================================================================================================================================
 Package                                Architecture             Version                                   Repository                Size
==========================================================================================================================================
Installing:
 kernel-modules-extra                   x86_64                   6.12.0-124.16.1.el10_1                    baseos                   2.8 M
Installing dependencies:
 kernel-modules                         x86_64                   6.12.0-124.16.1.el10_1                    baseos                    41 M

Transaction Summary
==========================================================================================================================================
Install  2 Packages

Total download size: 43 M
Installed size: 39 M
Downloading Packages:
(1/2): kernel-modules-extra-6.12.0-124.16.1.el10_1.x86_64.rpm                                              11 MB/s | 2.8 MB     00:00    
(2/2): kernel-modules-6.12.0-124.16.1.el10_1.x86_64.rpm                                                    47 MB/s |  41 MB     00:00    
------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                      43 MB/s |  43 MB     00:01     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                  1/1 
  Installing       : kernel-modules-6.12.0-124.16.1.el10_1.x86_64                                                                     1/2 
  Running scriptlet: kernel-modules-6.12.0-124.16.1.el10_1.x86_64                                                                     1/2 
  Installing       : kernel-modules-extra-6.12.0-124.16.1.el10_1.x86_64                                                               2/2 
  Running scriptlet: kernel-modules-extra-6.12.0-124.16.1.el10_1.x86_64                                                               2/2 
  Running scriptlet: kernel-modules-6.12.0-124.16.1.el10_1.x86_64                                                                     2/2 
Running: dracut -f --kver 6.12.0-124.16.1.el10_1.x86_64

  Running scriptlet: kernel-modules-extra-6.12.0-124.16.1.el10_1.x86_64                                                               2/2 

Installed:
  kernel-modules-6.12.0-124.16.1.el10_1.x86_64                     kernel-modules-extra-6.12.0-124.16.1.el10_1.x86_64

$ sudo modprobe xt_addrtype
modprobe: FATAL: Module xt_addrtype not found in directory /lib/modules/6.12.0-124.8.1.el10_1.x86_64

Here are also the outputs of a few other commands, in case they help:

[rocky@localhost ~]$ lsmod | grep tables
[rocky@localhost ~]$ 

[rocky@localhost ~]$ ls /lib/modules/
6.12.0-124.16.1.el10_1.x86_64  6.12.0-124.8.1.el10_1.x86_64

[rocky@localhost ~]$ sudo dnf list installed | grep kernel
kernel-core.x86_64                       6.12.0-124.8.1.el10_1          @1e4afeabef7c4dfe953e6a91a0578595
kernel-core.x86_64                       6.12.0-124.16.1.el10_1         @baseos
kernel-modules.x86_64                    6.12.0-124.16.1.el10_1         @baseos
kernel-modules-core.x86_64               6.12.0-124.8.1.el10_1          @1e4afeabef7c4dfe953e6a91a0578595
kernel-modules-core.x86_64               6.12.0-124.16.1.el10_1         @baseos
kernel-modules-extra.x86_64              6.12.0-124.16.1.el10_1         @baseos
kernel-tools.x86_64                      6.12.0-124.16.1.el10_1         @baseos
kernel-tools-libs.x86_64                 6.12.0-124.16.1.el10_1         @baseos

You’re currently still running the old kernel:

6.12.0-124.8.1.el10_1.x86_64

but kernel-modules-extra was installed for the new kernel:

6.12.0-124.16.1.el10_1.x86_64

So modprobe will always fail until you reboot into the new kernel.
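This kind of mismatch can be spotted mechanically. A small sketch (newest_vs_running is a hypothetical helper, not a standard tool; it relies on sort -V for version ordering):

```shell
# newest_vs_running RUNNING_KVER INSTALLED_KVER...
# Prints "in-sync" when the highest-versioned installed module tree matches
# the running kernel, otherwise "reboot-needed" with both versions.
newest_vs_running() {
  running="$1"; shift
  # Pick the highest version among the installed module directories.
  newest="$(printf '%s\n' "$@" | sort -V | tail -n 1)"
  if [ "$newest" = "$running" ]; then
    echo "in-sync ($running)"
  else
    echo "reboot-needed (running $running, newest installed $newest)"
  fi
}

# On a live system you would call it like:
#   newest_vs_running "$(uname -r)" $(ls /lib/modules)
newest_vs_running 6.12.0-124.8.1.el10_1.x86_64 \
  6.12.0-124.8.1.el10_1.x86_64 6.12.0-124.16.1.el10_1.x86_64
# prints: reboot-needed (running 6.12.0-124.8.1.el10_1.x86_64, newest installed 6.12.0-124.16.1.el10_1.x86_64)
```

With the versions from your output above, it reports reboot-needed, which matches the diagnosis: reboot into 6.12.0-124.16.1 and modprobe should find the module.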

Please run:

sudo reboot

Then after the VM comes back, check:

uname -r

Make sure it shows:

6.12.0-124.16.1.el10_1.x86_64

After that:

sudo modprobe xt_addrtype

should work normally.
