Rocky 9 on AWS EC2, secondary network interface doesn't work

Hi,

I am looking to migrate some of our existing workloads to Rocky 9 and have been having a heck of a time getting a second network interface to work.

My POC is very simple: one EC2 instance with two network interfaces, eth0 and eth1. Each interface has one private IP and one public IP attached, and both interfaces have the same security group. Everything works on eth0, but nothing works on eth1 (except pinging localhost). I am honestly not very familiar with NetworkManager and am reading up on the documentation right now, but hopefully someone can point out an obvious mistake before I start considering moving back to network-scripts lol.

This is what some of the ip commands look like:

# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 06:9a:55:83:54:59 brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    altname ens5
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 06:f5:8c:e0:34:d1 brd ff:ff:ff:ff:ff:ff
    altname enp0s6
    altname ens6
# nmcli device status
DEVICE  TYPE      STATE                   CONNECTION  
eth0    ethernet  connected               System eth0 
eth1    ethernet  connected               System eth1 
lo      loopback  connected (externally)  lo          
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 06:9a:55:83:54:59 brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    altname ens5
    inet 10.114.32.46/26 brd 10.114.32.63 scope global dynamic noprefixroute eth0
       valid_lft 2574sec preferred_lft 2574sec
    inet6 fe80::49a:55ff:fe83:5459/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 06:f5:8c:e0:34:d1 brd ff:ff:ff:ff:ff:ff
    altname enp0s6
    altname ens6
    inet 10.114.32.21/26 brd 10.114.32.63 scope global dynamic noprefixroute eth1
       valid_lft 3591sec preferred_lft 3591sec
    inet6 fe80::d175:a811:d29f:203d/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

Hello @tonyswu

Firewall zone, perhaps? The reason I’m guessing this is that the first interface up (probably eth0) will be in the “public” zone and have the default services (at minimum) attached to it.

Assuming firewalld, what does firewall-cmd --list-all show?
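If firewalld is installed and running, the zone each interface landed in can be checked with something like the following (diagnostic commands only, run as root):

```shell
# Show which zones are active and which interfaces each contains
firewall-cmd --get-active-zones

# Show the zone assigned to each interface explicitly
firewall-cmd --get-zone-of-interface=eth0
firewall-cmd --get-zone-of-interface=eth1

# List services, ports, and rules for the public zone
firewall-cmd --zone=public --list-all
```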

Also, what do these two commands show?

ip ro
nmcli

Particularly the first.

More than one interface with an address in the same subnet is usually an issue: with a single routing table, replies to traffic arriving on eth1 can leave via eth0 instead, and reverse-path filtering or the EC2 network may then drop them.

Using the factory AMI, it looks like firewalld isn’t installed by default. Would you like me to install it and find out?

Output of ip ro:

# ip ro
default via 10.114.32.1 dev eth0 proto dhcp src 10.114.32.46 metric 10 
default via 10.114.32.1 dev eth1 proto dhcp src 10.114.32.21 metric 11 
default via 10.114.32.1 dev eth0 proto dhcp src 10.114.32.46 metric 100 
default via 10.114.32.1 dev eth1 proto dhcp src 10.114.32.21 metric 101 
10.114.32.0/26 dev eth0 proto kernel scope link src 10.114.32.46 metric 100 
10.114.32.0/26 dev eth1 proto kernel scope link src 10.114.32.21 metric 101 

The IPs are indeed from the same subnet.
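For anyone hitting this: with both addresses in the same /26 and a single routing table, replies to traffic arriving on eth1 will go out via eth0's default route. A common fix is source-based policy routing. Below is a minimal sketch using NetworkManager, assuming the connection name "System eth1" from the nmcli output above; the table number 101 and the rule priority are arbitrary choices, and the rule pins eth1's current DHCP address, which may change on lease renewal:

```shell
# Put the routes NetworkManager learns for eth1 (including its DHCP
# default route) into a dedicated routing table instead of the main one.
nmcli connection modify "System eth1" ipv4.route-table 101

# Send packets sourced from eth1's address through that table.
nmcli connection modify "System eth1" \
    ipv4.routing-rules "priority 101 from 10.114.32.21 table 101"

# Re-activate the connection so the changes take effect.
nmcli connection up "System eth1"
```

The result can be verified with `ip rule` and `ip route show table 101`: replies from 10.114.32.21 should now leave via eth1's own default route.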

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.