KVM - libvirt - bridged networking on RL9

Hello friends. I’m trying to configure a bridge network for a virtual machine. My problem is that the virtual machine does not connect to the external network. I can’t PING the router. My topology looks like this:

Router 192.168.3.1/24 > Switch (transparent) > Rocky Linux 9 192.168.3.2/24 (name test)(KVM server) - Virtual machine (name localhost) 192.168.3.45/24

If it makes any difference, Rocky Linux 9 is installed as a virtual machine on ESXi 7 :slight_smile: .

I use this script:

#!/bin/bash
yum -y install bridge-utils
yum -y groupinstall "Virtualization Tools"
export MAIN_CONN=ens192
bash -x <<EOS
systemctl stop libvirtd
nmcli c delete "$MAIN_CONN"
nmcli c delete "Wired connection 1"
nmcli c add type bridge ifname br0 autoconnect yes con-name br0 stp off
nmcli c modify br0 ipv4.addresses 192.168.3.2/24 ipv4.method manual
nmcli c modify br0 ipv4.gateway 192.168.3.1
nmcli c modify br0 ipv4.dns 192.168.3.20
nmcli c add type bridge-slave autoconnect yes con-name "$MAIN_CONN" ifname "$MAIN_CONN" master br0
systemctl restart NetworkManager
systemctl start libvirtd
systemctl enable libvirtd
echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/99-ipforward.conf
sysctl -p /etc/sysctl.d/99-ipforward.conf
EOS
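After the script has run, the bridge state can be sanity-checked with a few standard commands (not part of the original script, just diagnostics):

```shell
# Confirm br0 is active and ens192 is its slave
nmcli -f NAME,TYPE,DEVICE c show --active

# Confirm which interfaces are enslaved to br0 (the guest's vnet*
# interface should appear here once the VM is running)
bridge link show

# Confirm the default route goes to the router via br0
ip route show default
```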

A little more information from the server:

[root@test ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
    link/ether 00:0c:29:48:f5:6d brd ff:ff:ff:ff:ff:ff
    altname enp11s0
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:02:c5:83 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0c:29:48:f5:6d brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.2/24 brd 192.168.3.255 scope global noprefixroute br0
       valid_lft forever preferred_lft forever
    inet6 fe80::f2dc:bece:1a7b:5084/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
7: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:30:87:4d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe30:874d/64 scope link
       valid_lft forever preferred_lft forever

[root@test ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.000c2948f56d       no              ens192
                                                        vnet2
virbr0          8000.52540002c583       yes

[root@test ~]# virsh edit rl9
<interface type='bridge'>
      <mac address='52:54:00:30:87:4d'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>

[root@test ~]# nmcli c show --active
NAME    UUID                                  TYPE      DEVICE
br0     dcc79069-894a-4373-8141-cee0615cb7d3  bridge    br0
lo      84c1dbb7-44ac-4898-86fb-f119637767b4  loopback  lo
virbr0  d2a17fd8-5c73-4539-9118-86debe3abd90  bridge    virbr0
vnet0   66d7d8b0-598f-45f6-bed5-dfada6ecf9e9  tun       vnet0
ens192  5f768977-b376-4fe6-be1a-39935e911fc9  ethernet  ens192

[root@test ~]# ping 192.168.3.45
PING 192.168.3.45 (192.168.3.45) 56(84) bytes of data.
64 bytes from 192.168.3.45: icmp_seq=1 ttl=64 time=1.04 ms
64 bytes from 192.168.3.45: icmp_seq=2 ttl=64 time=0.527 ms

[root@test ~]# ping 192.168.3.1
PING 192.168.3.1 (192.168.3.1) 56(84) bytes of data.
64 bytes from 192.168.3.1: icmp_seq=1 ttl=255 time=0.495 ms
64 bytes from 192.168.3.1: icmp_seq=2 ttl=255 time=0.370 ms

A little more information from the virtual machine:

  • Can the Rocky ping 192.168.3.1?
  • Does ESXi allow traffic from both 00:0c:29:48:f5:6d and fe:54:00:30:87:4d?
  • Is the guest on br0 and not on virbr0? (The latter is the “default” network that libvirt autostarts.)

If you run tcpdump -nn -vv -i br0 on the Rocky host while the guest pings, what do you see?
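libvirt can also report the attachment directly; assuming the domain name is rl9 (as in the virsh edit above):

```shell
# List the guest's interfaces and the bridge each one is plugged into;
# the source column should say br0, not virbr0
virsh domiflist rl9
```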


The ip_forward setting is unnecessary, because a bridge is a switch – the Rocky host does not need to route bridged traffic.

It was necessary to change the network card settings in ESXi.
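For reference, the ESXi side typically needs the port-group security policy relaxed so that frames from the nested guest's MAC address are not dropped. A sketch using esxcli on the ESXi host – the port-group name "VM Network" is a placeholder, substitute your own:

```shell
# On the ESXi host: allow promiscuous mode, MAC address changes and
# forged transmits on the port group that carries the Rocky VM's NIC.
# "VM Network" is an assumed name - use the actual port-group name.
esxcli network vswitch standard portgroup policy security set \
    -p "VM Network" \
    --allow-promiscuous true \
    --allow-mac-change true \
    --allow-forged-transmits true
```

The same three settings can also be changed per port group in the vSphere client under the security policy.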

THX for help :slight_smile: .

I’ve never used ESXi, but “blaming something outside Rocky” felt like a good start.

ESXi is the hypervisor that created the VM (which runs Rocky). It is generally good
practice to limit what a VM can do. Additional machines that pop up out of nowhere
(like the VM inside Rocky) are not on the default "what a VM can do" list. In that
sense libvirtd might be rather "liberal".


That is the "default" network that libvirtd has defined. You can stop and remove it
(or disable autostart) if you don't need it. On start, that network does the following:

  • Enables routing (ip_forward) so that the host will route between 192.168.122.0/24 and the outside (192.168.3.0/24)
  • Adds firewall rules to the host so that the routed traffic is allowed through
  • Adds masquerade/SNAT so that 192.168.122.0/24 is not visible to the outside
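If the default network is not needed, it can be stopped and kept from autostarting with standard virsh commands ("default" is the network name libvirt ships with):

```shell
# Stop the default NAT network and prevent it from starting at boot
virsh net-destroy default
virsh net-autostart default --disable

# Optionally remove its definition entirely:
# virsh net-undefine default
```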

Your script could do with fewer calls to nmcli. Furthermore, if IPv6 is not used, you can disable it entirely (for that connection):

nmcli c add type bridge ifname br0 autoconnect yes con-name br0 stp off ipv4.addresses 192.168.3.2/24 ipv4.method manual ipv4.gateway 192.168.3.1 ipv4.dns 192.168.3.20 ipv6.method disabled
nmcli c add type bridge-slave autoconnect yes con-name "$MAIN_CONN" ifname "$MAIN_CONN" master br0