Since RHEL 9 (and thus Rocky Linux 9) uses NetworkManager rather than network-scripts, I’ve scripted this “migration” in bash using the nmcli connection migrate command. However, this command doesn’t pick up recently created files under /etc/sysconfig/network-scripts/, such as the ifcfg-br0 that my script creates when setting up KVM. If I reboot Rocky Linux 9 and run nmcli connection migrate again, it then sees the ifcfg-br0 and ifcfg-br1 files and migrates them properly. So what command causes that list to be reloaded? What do I need to run so that nmcli connection migrate picks up recently added network-scripts files without rebooting first? Obviously some systemd unit or some script somewhere is providing some sort of cache for nmcli, I think.
If anyone can help, I would greatly appreciate it!
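For anyone landing here, the two commands usually suggested for making NetworkManager re-read its profiles are sketched below (guarded so it is safe to paste on a machine without nmcli; whether they actually help with migrate is exactly what this thread is about):

```shell
# Candidate commands (not confirmed by this thread) for forcing
# NetworkManager to re-read profiles without a reboot.
if command -v nmcli >/dev/null 2>&1; then
    # Re-read all connection profiles from disk, then retry the migration.
    nmcli connection reload
    nmcli connection migrate
else
    echo "nmcli not found; run this on the Rocky Linux host"
fi
```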
I just tried both (I’m setting up a new server as we speak, running my script), and the nmcli connection migrate command still doesn’t do anything with the ifcfg-br0 and ifcfg-br1 files.
Instead it reports that enp6s0f0 and enp6s0f1 (the two physical Ethernet connections) have been migrated, but those were already migrated previously. It’s as if the migrate command uses some kind of cache and doesn’t recheck for new network-scripts files, yet, again, it finds them after a reboot, which is weird.
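For reference, here is a minimal sketch of the kind of ifcfg files such a KVM bridge setup typically writes (contents assumed, not taken from the actual script):

```
# /etc/sysconfig/network-scripts/ifcfg-br0 (assumed example)
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-enp6s0f0 (assumed example)
DEVICE=enp6s0f0
TYPE=Ethernet
BRIDGE=br0
ONBOOT=yes
```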
I do use “a script” called Ansible. Red Hat promotes System Roles, and one of those is ‘network’.
Incidentally, the example on that page is for the network role.
One does need (at least) dnf install ansible-core rhel-system-roles to run such a playbook.
The example below describes two connections (bridge-br0 and enp6s0f0). The latter is for the device that has MAC address 00:11:22:33:44:55. The bridge gets its IPv4 config from DHCP and entirely disables IPv6 for this connection:
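A sketch of such a playbook, using the rhel-system-roles network role (key names follow the role’s network_connections schema; the exact values, and the use of the newer controller/port_type keys rather than the older master/slave_type ones, are assumptions):

```yaml
---
- hosts: all
  roles:
    - rhel-system-roles.network
  vars:
    network_connections:
      # Bridge br0: IPv4 via DHCP, IPv6 fully disabled
      - name: bridge-br0
        type: bridge
        interface_name: br0
        ip:
          dhcp4: true
          auto6: false
          ipv6_disabled: true

      # Physical NIC, matched by MAC address, attached to the bridge
      - name: enp6s0f0
        type: ethernet
        mac: "00:11:22:33:44:55"
        controller: bridge-br0
        port_type: bridge
```

Running the playbook (re)creates the NetworkManager profiles declaratively, which sidesteps the ifcfg migration question entirely.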