Firewalld/NetworkManager Internet Routing Not Working in Rocky Linux 9.x

Hi,

I’m having trouble in Rocky Linux 9.0 setting up a server that acts as a simple internet router/relay for my home network.

Without internet, my copy/paste is limited to what I can type on my phone, so bear with me.

Symptoms

From any PC on my network, ICMP works beautifully (the server in question correctly relays the packets for a full round trip).

But TCP ports only partially work. The connection is established, but nothing comes back; it just hangs open with no return data received.

Setup

  • Firewalld is set up with an internal and an external zone, each correctly associated with its respective ethernet card. I can see this with --get-active-zones.
  • external has masquerade and forwarding enabled.
  • internal has forwarding enabled.
  • net.ipv4.ip_forward is correctly set to 1.
  • I have created a firewalld policy (with target ACCEPT and priority 100) with ingress set to internal and egress set to external (roughly as sketched below).
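For reference, roughly the commands behind this setup (typed from memory without copy/paste, so treat it as a sketch):

firewall-cmd --permanent --zone=internal --add-interface=eno8403
firewall-cmd --permanent --zone=external --add-interface=eno8303
firewall-cmd --permanent --zone=external --add-masquerade
firewall-cmd --permanent --zone=external --add-forward
firewall-cmd --permanent --zone=internal --add-forward
firewall-cmd --permanent --new-policy int_to_ext
firewall-cmd --permanent --policy int_to_ext --add-ingress-zone=internal
firewall-cmd --permanent --policy int_to_ext --add-egress-zone=external
firewall-cmd --permanent --policy int_to_ext --set-target=ACCEPT
firewall-cmd --permanent --policy int_to_ext --set-priority 100
firewall-cmd --reload
# plus IP forwarding in the kernel:
sysctl -w net.ipv4.ip_forward=1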

Worthy Notes

  • The internet works great on the physical server itself. It’s only the routing through it that fails.
  • The server can easily access/ping PCs on the network it’s part of, as well as internet sites, so the basic route table is working as expected. Port ranges work fine from it too (e.g. testing with a curl command).
  • SELinux is in enforcing mode on the server, but the situation is no different with it set to permissive.
  • I tried disabling the firewalld forwarding on the external zone, without any luck.
  • I also tried adding a second firewalld policy, identical to the one already created but with the internal and external zones reversed (in the ingress/egress assignments), without any luck.

What am I missing? :slightly_frowning_face: I would love any advice!

Chris

Could be you are missing what you want to allow out of the network, for example:

firewall-cmd --policy internal-external --add-service=http --permanent
firewall-cmd --policy internal-external --add-service=https --permanent
firewall-cmd --policy internal-external --add-service=dns --permanent

which, when I did it myself, I believe allows outbound http, https and dns.
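Those are all --permanent changes, so a reload is needed for them to take effect:

firewall-cmd --reload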

[root@rocky-fw ~]# firewall-cmd --info-policy internal-external
internal-external (active)
  priority: -1
  target: CONTINUE
  ingress-zones: internal
  egress-zones: external
  services: dns http https
  ports: 
  protocols: icmp
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

“policy with target accept” should allow all.

You can connect to the server with ssh? Could you add option -D 5000 to the ssh command? With that, ssh acts as a SOCKS5 proxy. Then you have to tell your browser to use SOCKS5 on localhost, port 5000. That way you can use a browser on a machine in the home network, even though the router does not route. Easier than a phone.
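For example (user and host are placeholders):

ssh -D 5000 user@router   # dynamic forwarding: ssh listens as a SOCKS5 proxy on localhost:5000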

Look at the output of sudo nft list ruleset. Is there anything that could explain the blockage? Do those rules look like they should allow outgoing new connections?


Thank you for the SOCKS5 trick! :)

So new outbound connections do work. The problem is that after the connection is established, it basically just times out on my end. For example: tcpdump shows the wget call connect and send the GET /… but that’s where it ends. So port 80 is open outbound; content just isn’t routed back. ICMP, however, works fine.

In your example for the policy you’re using CONTINUE as the target; I’m using ACCEPT as the target. I want all outbound traffic issued from within the local network to go as normal, so I don’t believe the explicit service adds (http, https, etc.) are required. Not to mention the traffic is going out… it’s just hanging. Here is an example:
wget yahoo.ca

--2022-09-18 16:55:33--  http://yahoo.ca/
Resolving yahoo.ca (yahoo.ca)... 74.6.136.150, 98.136.103.23, 212.82.100.150
Connecting to yahoo.ca (yahoo.ca)|74.6.136.150|:80... connected.
HTTP request sent, awaiting response... 
<hung>
# But you can see the connection is established... wireshark shows the GET / going out too
# No response back... :(

Edit:
One difference I see is that you run your policy at priority -1 while I was using 100. I tried moving it to -1 without any luck. I also tried adding http as a service to it, with the same results… ICMP works, the connection is established, but beyond that, that’s the end of it :frowning:
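For the record, the priority change was along these lines:

firewall-cmd --permanent --policy int_to_ext --set-priority -1
firewall-cmd --reload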

Here are some more outputs for you.

firewall-cmd --get-active-zones

docker
  interfaces: docker0
external
  interfaces: eno8303
internal
  interfaces: eno8403

firewall-cmd --zone=external --list-all

external (active)
  target: default
  icmp-block-inversion: no
  interfaces: eno8303
  sources: 
  services: http https ssh
  ports: 
  protocols: 
  forward: no
  masquerade: yes
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

firewall-cmd --zone=internal --list-all

internal (active)
  target: default
  icmp-block-inversion: no
  interfaces: eno8403
  sources: 
  services: cockpit dhcp dhcpv6-client dns http https mdns mysql ntp postgresql samba samba-client ssh
  ports: 
  protocols: 
  forward: yes
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

firewall-cmd --list-all-policies

allow-host-ipv6 (active)
  priority: -15000
  target: CONTINUE
  ingress-zones: ANY
  egress-zones: HOST
  services: 
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
	rule family="ipv6" icmp-type name="neighbour-advertisement" accept
	rule family="ipv6" icmp-type name="neighbour-solicitation" accept
	rule family="ipv6" icmp-type name="router-advertisement" accept
	rule family="ipv6" icmp-type name="redirect" accept

int_to_ext (active)
  priority: 100
  target: ACCEPT
  ingress-zones: internal
  egress-zones: external
  services: 
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

nft list ruleset

table inet firewalld {
	ct helper helper-netbios-ns-udp {
		type "netbios-ns" protocol udp
		l3proto ip
	}

	chain mangle_PREROUTING {
		type filter hook prerouting priority mangle + 10; policy accept;
		jump mangle_PREROUTING_ZONES
	}

	chain mangle_PREROUTING_POLICIES_pre {
		jump mangle_PRE_policy_allow-host-ipv6
	}

	chain mangle_PREROUTING_ZONES {
		iifname "docker0" goto mangle_PRE_docker
		iifname "eno8403" goto mangle_PRE_internal
		iifname "eno8303" goto mangle_PRE_external
		goto mangle_PRE_external
	}

	chain mangle_PREROUTING_POLICIES_post {
	}

	chain nat_PREROUTING {
		type nat hook prerouting priority dstnat + 10; policy accept;
		jump nat_PREROUTING_ZONES
	}

	chain nat_PREROUTING_POLICIES_pre {
		jump nat_PRE_policy_allow-host-ipv6
	}

	chain nat_PREROUTING_ZONES {
		iifname "docker0" goto nat_PRE_docker
		iifname "eno8403" goto nat_PRE_internal
		iifname "eno8303" goto nat_PRE_external
		goto nat_PRE_external
	}

	chain nat_PREROUTING_POLICIES_post {
	}

	chain nat_POSTROUTING {
		type nat hook postrouting priority srcnat + 10; policy accept;
		jump nat_POSTROUTING_ZONES
	}

	chain nat_POSTROUTING_POLICIES_pre {
	}

	chain nat_POSTROUTING_ZONES {
		oifname "docker0" goto nat_POST_docker
		oifname "eno8403" goto nat_POST_internal
		oifname "eno8303" goto nat_POST_external
		goto nat_POST_external
	}

	chain nat_POSTROUTING_POLICIES_post {
	}

	chain filter_PREROUTING {
		type filter hook prerouting priority filter + 10; policy accept;
		icmpv6 type { nd-router-advert, nd-neighbor-solicit } accept
		meta nfproto ipv6 fib saddr . mark . iif oif missing log prefix "rpfilter_DROP: " drop
	}

	chain filter_INPUT {
		type filter hook input priority filter + 10; policy accept;
		ct state { established, related } accept
		ct status dnat accept
		iifname "lo" accept
		jump filter_INPUT_ZONES
		ct state { invalid } log prefix "STATE_INVALID_DROP: "
		ct state { invalid } drop
		log prefix "FINAL_REJECT: "
		reject with icmpx type admin-prohibited
	}

	chain filter_FORWARD {
		type filter hook forward priority filter + 10; policy accept;
		ct state { established, related } accept
		ct status dnat accept
		iifname "lo" accept
		ip6 daddr { ::/96, ::ffff:0.0.0.0/96, 2002::/24, 2002:a00::/24, 2002:7f00::/24, 2002:a9fe::/32, 2002:ac10::/28, 2002:c0a8::/32, 2002:e000::/19 } log prefix "RFC3964_IPv4_REJECT: " reject with icmpv6 type addr-unreachable
		jump filter_FORWARD_ZONES
		ct state { invalid } log prefix "STATE_INVALID_DROP: "
		ct state { invalid } drop
		log prefix "FINAL_REJECT: "
		reject with icmpx type admin-prohibited
	}

	chain filter_OUTPUT {
		type filter hook output priority filter + 10; policy accept;
		ct state { established, related } accept
		oifname "lo" accept
		ip6 daddr { ::/96, ::ffff:0.0.0.0/96, 2002::/24, 2002:a00::/24, 2002:7f00::/24, 2002:a9fe::/32, 2002:ac10::/28, 2002:c0a8::/32, 2002:e000::/19 } log prefix "RFC3964_IPv4_REJECT: " reject with icmpv6 type addr-unreachable
		jump filter_OUTPUT_POLICIES_pre
		jump filter_OUTPUT_POLICIES_post
	}

	chain filter_INPUT_POLICIES_pre {
		jump filter_IN_policy_allow-host-ipv6
	}

	chain filter_INPUT_ZONES {
		iifname "docker0" goto filter_IN_docker
		iifname "eno8403" goto filter_IN_internal
		iifname "eno8303" goto filter_IN_external
		goto filter_IN_external
	}

	chain filter_INPUT_POLICIES_post {
	}

	chain filter_FORWARD_POLICIES_pre {
	}

	chain filter_FORWARD_ZONES {
		iifname "docker0" goto filter_FWD_docker
		iifname "eno8403" goto filter_FWD_internal
		iifname "eno8303" goto filter_FWD_external
		goto filter_FWD_external
	}

	chain filter_FORWARD_POLICIES_post {
		iifname { "eno8403" } oifname { "eno8303" } jump filter_FWD_policy_int_to_ext
	}

	chain filter_OUTPUT_POLICIES_pre {
	}

	chain filter_OUTPUT_POLICIES_post {
	}

	chain filter_IN_external {
		jump filter_INPUT_POLICIES_pre
		jump filter_IN_external_pre
		jump filter_IN_external_log
		jump filter_IN_external_deny
		jump filter_IN_external_allow
		jump filter_IN_external_post
		jump filter_INPUT_POLICIES_post
		meta l4proto { icmp, ipv6-icmp } accept
		log prefix ""filter_IN_external_REJECT: ""
		reject with icmpx type admin-prohibited
	}

	chain filter_IN_external_pre {
	}

	chain filter_IN_external_log {
	}

	chain filter_IN_external_deny {
	}

	chain filter_IN_external_allow {
		tcp dport 22 ct state { new, untracked } accept
		tcp dport 80 ct state { new, untracked } accept
		tcp dport 443 ct state { new, untracked } accept
	}

	chain filter_IN_external_post {
	}

	chain nat_POST_external {
		jump nat_POSTROUTING_POLICIES_pre
		jump nat_POST_external_pre
		jump nat_POST_external_log
		jump nat_POST_external_deny
		jump nat_POST_external_allow
		jump nat_POST_external_post
		jump nat_POSTROUTING_POLICIES_post
	}

	chain nat_POST_external_pre {
	}

	chain nat_POST_external_log {
	}

	chain nat_POST_external_deny {
	}

	chain nat_POST_external_allow {
		meta nfproto ipv4 oifname != "lo" masquerade
	}

	chain nat_POST_external_post {
	}

	chain filter_FWD_external {
		jump filter_FORWARD_POLICIES_pre
		jump filter_FWD_external_pre
		jump filter_FWD_external_log
		jump filter_FWD_external_deny
		jump filter_FWD_external_allow
		jump filter_FWD_external_post
		jump filter_FORWARD_POLICIES_post
		log prefix ""filter_FWD_external_REJECT: ""
		reject with icmpx type admin-prohibited
	}

	chain filter_FWD_external_pre {
	}

	chain filter_FWD_external_log {
	}

	chain filter_FWD_external_deny {
	}

	chain filter_FWD_external_allow {
	}

	chain filter_FWD_external_post {
	}

	chain nat_PRE_external {
		jump nat_PREROUTING_POLICIES_pre
		jump nat_PRE_external_pre
		jump nat_PRE_external_log
		jump nat_PRE_external_deny
		jump nat_PRE_external_allow
		jump nat_PRE_external_post
		jump nat_PREROUTING_POLICIES_post
	}

	chain nat_PRE_external_pre {
	}

	chain nat_PRE_external_log {
	}

	chain nat_PRE_external_deny {
	}

	chain nat_PRE_external_allow {
	}

	chain nat_PRE_external_post {
	}

	chain mangle_PRE_external {
		jump mangle_PREROUTING_POLICIES_pre
		jump mangle_PRE_external_pre
		jump mangle_PRE_external_log
		jump mangle_PRE_external_deny
		jump mangle_PRE_external_allow
		jump mangle_PRE_external_post
		jump mangle_PREROUTING_POLICIES_post
	}

	chain mangle_PRE_external_pre {
	}

	chain mangle_PRE_external_log {
	}

	chain mangle_PRE_external_deny {
	}

	chain mangle_PRE_external_allow {
	}

	chain mangle_PRE_external_post {
	}

	chain filter_IN_policy_allow-host-ipv6 {
		jump filter_IN_policy_allow-host-ipv6_pre
		jump filter_IN_policy_allow-host-ipv6_log
		jump filter_IN_policy_allow-host-ipv6_deny
		jump filter_IN_policy_allow-host-ipv6_allow
		jump filter_IN_policy_allow-host-ipv6_post
	}

	chain filter_IN_policy_allow-host-ipv6_pre {
	}

	chain filter_IN_policy_allow-host-ipv6_log {
	}

	chain filter_IN_policy_allow-host-ipv6_deny {
	}

	chain filter_IN_policy_allow-host-ipv6_allow {
		icmpv6 type nd-neighbor-advert accept
		icmpv6 type nd-neighbor-solicit accept
		icmpv6 type nd-router-advert accept
		icmpv6 type nd-redirect accept
	}

	chain filter_IN_policy_allow-host-ipv6_post {
	}

	chain nat_PRE_policy_allow-host-ipv6 {
		jump nat_PRE_policy_allow-host-ipv6_pre
		jump nat_PRE_policy_allow-host-ipv6_log
		jump nat_PRE_policy_allow-host-ipv6_deny
		jump nat_PRE_policy_allow-host-ipv6_allow
		jump nat_PRE_policy_allow-host-ipv6_post
	}

	chain nat_PRE_policy_allow-host-ipv6_pre {
	}

	chain nat_PRE_policy_allow-host-ipv6_log {
	}

	chain nat_PRE_policy_allow-host-ipv6_deny {
	}

	chain nat_PRE_policy_allow-host-ipv6_allow {
	}

	chain nat_PRE_policy_allow-host-ipv6_post {
	}

	chain mangle_PRE_policy_allow-host-ipv6 {
		jump mangle_PRE_policy_allow-host-ipv6_pre
		jump mangle_PRE_policy_allow-host-ipv6_log
		jump mangle_PRE_policy_allow-host-ipv6_deny
		jump mangle_PRE_policy_allow-host-ipv6_allow
		jump mangle_PRE_policy_allow-host-ipv6_post
	}

	chain mangle_PRE_policy_allow-host-ipv6_pre {
	}

	chain mangle_PRE_policy_allow-host-ipv6_log {
	}

	chain mangle_PRE_policy_allow-host-ipv6_deny {
	}

	chain mangle_PRE_policy_allow-host-ipv6_allow {
	}

	chain mangle_PRE_policy_allow-host-ipv6_post {
	}

	chain filter_IN_internal {
		jump filter_INPUT_POLICIES_pre
		jump filter_IN_internal_pre
		jump filter_IN_internal_log
		jump filter_IN_internal_deny
		jump filter_IN_internal_allow
		jump filter_IN_internal_post
		jump filter_INPUT_POLICIES_post
		meta l4proto { icmp, ipv6-icmp } accept
		log prefix ""filter_IN_internal_REJECT: ""
		reject with icmpx type admin-prohibited
	}

	chain filter_IN_internal_pre {
	}

	chain filter_IN_internal_log {
	}

	chain filter_IN_internal_deny {
	}

	chain filter_IN_internal_allow {
		tcp dport 22 ct state { new, untracked } accept
		ip daddr 224.0.0.251 udp dport 5353 ct state { new, untracked } accept
		ip6 daddr ff02::fb udp dport 5353 ct state { new, untracked } accept
		udp dport 137 ct helper set "helper-netbios-ns-udp"
		udp dport 137 ct state { new, untracked } accept
		udp dport 138 ct state { new, untracked } accept
		ip6 daddr fe80::/64 udp dport 546 ct state { new, untracked } accept
		tcp dport 9090 ct state { new, untracked } accept
		tcp dport 80 ct state { new, untracked } accept
		tcp dport 443 ct state { new, untracked } accept
		udp dport 67 ct state { new, untracked } accept
		udp dport 123 ct state { new, untracked } accept
		tcp dport 53 ct state { new, untracked } accept
		udp dport 53 ct state { new, untracked } accept
		tcp dport 3306 ct state { new, untracked } accept
		tcp dport 5432 ct state { new, untracked } accept
		tcp dport 139 ct state { new, untracked } accept
		tcp dport 445 ct state { new, untracked } accept
	}

	chain filter_IN_internal_post {
	}

	chain nat_POST_internal {
		jump nat_POSTROUTING_POLICIES_pre
		jump nat_POST_internal_pre
		jump nat_POST_internal_log
		jump nat_POST_internal_deny
		jump nat_POST_internal_allow
		jump nat_POST_internal_post
		jump nat_POSTROUTING_POLICIES_post
	}

	chain nat_POST_internal_pre {
	}

	chain nat_POST_internal_log {
	}

	chain nat_POST_internal_deny {
	}

	chain nat_POST_internal_allow {
	}

	chain nat_POST_internal_post {
	}

	chain filter_FWD_internal {
		jump filter_FORWARD_POLICIES_pre
		jump filter_FWD_internal_pre
		jump filter_FWD_internal_log
		jump filter_FWD_internal_deny
		jump filter_FWD_internal_allow
		jump filter_FWD_internal_post
		jump filter_FORWARD_POLICIES_post
		log prefix ""filter_FWD_internal_REJECT: ""
		reject with icmpx type admin-prohibited
	}

	chain filter_FWD_internal_pre {
	}

	chain filter_FWD_internal_log {
	}

	chain filter_FWD_internal_deny {
	}

	chain filter_FWD_internal_allow {
		oifname "eno8403" accept
	}

	chain filter_FWD_internal_post {
	}

	chain nat_PRE_internal {
		jump nat_PREROUTING_POLICIES_pre
		jump nat_PRE_internal_pre
		jump nat_PRE_internal_log
		jump nat_PRE_internal_deny
		jump nat_PRE_internal_allow
		jump nat_PRE_internal_post
		jump nat_PREROUTING_POLICIES_post
	}

	chain nat_PRE_internal_pre {
	}

	chain nat_PRE_internal_log {
	}

	chain nat_PRE_internal_deny {
	}

	chain nat_PRE_internal_allow {
	}

	chain nat_PRE_internal_post {
	}

	chain mangle_PRE_internal {
		jump mangle_PREROUTING_POLICIES_pre
		jump mangle_PRE_internal_pre
		jump mangle_PRE_internal_log
		jump mangle_PRE_internal_deny
		jump mangle_PRE_internal_allow
		jump mangle_PRE_internal_post
		jump mangle_PREROUTING_POLICIES_post
	}

	chain mangle_PRE_internal_pre {
	}

	chain mangle_PRE_internal_log {
	}

	chain mangle_PRE_internal_deny {
	}

	chain mangle_PRE_internal_allow {
	}

	chain mangle_PRE_internal_post {
	}

	chain filter_FWD_policy_int_to_ext {
		jump filter_FWD_policy_int_to_ext_pre
		jump filter_FWD_policy_int_to_ext_log
		jump filter_FWD_policy_int_to_ext_deny
		jump filter_FWD_policy_int_to_ext_allow
		jump filter_FWD_policy_int_to_ext_post
		accept
	}

	chain filter_FWD_policy_int_to_ext_pre {
	}

	chain filter_FWD_policy_int_to_ext_log {
	}

	chain filter_FWD_policy_int_to_ext_deny {
	}

	chain filter_FWD_policy_int_to_ext_allow {
	}

	chain filter_FWD_policy_int_to_ext_post {
	}

	chain filter_IN_docker {
		jump filter_INPUT_POLICIES_pre
		jump filter_IN_docker_pre
		jump filter_IN_docker_log
		jump filter_IN_docker_deny
		jump filter_IN_docker_allow
		jump filter_IN_docker_post
		jump filter_INPUT_POLICIES_post
		accept
	}

	chain filter_IN_docker_pre {
	}

	chain filter_IN_docker_log {
	}

	chain filter_IN_docker_deny {
	}

	chain filter_IN_docker_allow {
	}

	chain filter_IN_docker_post {
	}

	chain nat_POST_docker {
		jump nat_POSTROUTING_POLICIES_pre
		jump nat_POST_docker_pre
		jump nat_POST_docker_log
		jump nat_POST_docker_deny
		jump nat_POST_docker_allow
		jump nat_POST_docker_post
		jump nat_POSTROUTING_POLICIES_post
	}

	chain nat_POST_docker_pre {
	}

	chain nat_POST_docker_log {
	}

	chain nat_POST_docker_deny {
	}

	chain nat_POST_docker_allow {
	}

	chain nat_POST_docker_post {
	}

	chain filter_FWD_docker {
		jump filter_FORWARD_POLICIES_pre
		jump filter_FWD_docker_pre
		jump filter_FWD_docker_log
		jump filter_FWD_docker_deny
		jump filter_FWD_docker_allow
		jump filter_FWD_docker_post
		jump filter_FORWARD_POLICIES_post
		accept
	}

	chain filter_FWD_docker_pre {
	}

	chain filter_FWD_docker_log {
	}

	chain filter_FWD_docker_deny {
	}

	chain filter_FWD_docker_allow {
		oifname "docker0" accept
	}

	chain filter_FWD_docker_post {
	}

	chain nat_PRE_docker {
		jump nat_PREROUTING_POLICIES_pre
		jump nat_PRE_docker_pre
		jump nat_PRE_docker_log
		jump nat_PRE_docker_deny
		jump nat_PRE_docker_allow
		jump nat_PRE_docker_post
		jump nat_PREROUTING_POLICIES_post
	}

	chain nat_PRE_docker_pre {
	}

	chain nat_PRE_docker_log {
	}

	chain nat_PRE_docker_deny {
	}

	chain nat_PRE_docker_allow {
	}

	chain nat_PRE_docker_post {
	}

	chain mangle_PRE_docker {
		jump mangle_PREROUTING_POLICIES_pre
		jump mangle_PRE_docker_pre
		jump mangle_PRE_docker_log
		jump mangle_PRE_docker_deny
		jump mangle_PRE_docker_allow
		jump mangle_PRE_docker_post
		jump mangle_PREROUTING_POLICIES_post
	}

	chain mangle_PRE_docker_pre {
	}

	chain mangle_PRE_docker_log {
	}

	chain mangle_PRE_docker_deny {
	}

	chain mangle_PRE_docker_allow {
	}

	chain mangle_PRE_docker_post {
	}
}
table ip filter {
	chain INPUT {
		type filter hook input priority filter; policy accept;
		meta l4proto tcp tcp dport 22 # match-set f2b-sshd src counter packets 493 bytes 40856 reject
	}

	chain DOCKER {
		iifname != "docker0" oifname "docker0" meta l4proto tcp ip daddr 172.17.0.2 tcp dport 443 counter packets 0 bytes 0 accept
		iifname != "docker0" oifname "docker0" meta l4proto tcp ip daddr 172.17.0.2 tcp dport 22 counter packets 0 bytes 0 accept
	}

	chain DOCKER-ISOLATION-STAGE-1 {
		iifname "docker0" oifname != "docker0" counter packets 0 bytes 0 jump DOCKER-ISOLATION-STAGE-2
		counter packets 379743 bytes 157355335 return
	}

	chain DOCKER-ISOLATION-STAGE-2 {
		oifname "docker0" counter packets 0 bytes 0 drop
		counter packets 0 bytes 0 return
	}

	chain FORWARD {
		type filter hook forward priority filter; policy accept;
		counter packets 379714 bytes 157351467 jump DOCKER-USER
		counter packets 379719 bytes 157352208 jump DOCKER-ISOLATION-STAGE-1
		oifname "docker0" ct state related,established counter packets 0 bytes 0 accept
		oifname "docker0" counter packets 0 bytes 0 jump DOCKER
		iifname "docker0" oifname != "docker0" counter packets 0 bytes 0 accept
		iifname "docker0" oifname "docker0" counter packets 0 bytes 0 accept
	}

	chain DOCKER-USER {
		counter packets 379714 bytes 157351467 return
	}
}
table ip nat {
	chain DOCKER {
		iifname "docker0" counter packets 0 bytes 0 return
		iifname != "docker0" meta l4proto tcp tcp dport 30443 counter packets 0 bytes 0 dnat to 172.17.0.2:443
		iifname != "docker0" meta l4proto tcp tcp dport 30022 counter packets 0 bytes 0 dnat to 172.17.0.2:22
	}

	chain POSTROUTING {
		type nat hook postrouting priority srcnat; policy accept;
		oifname != "docker0" ip saddr 172.17.0.0/16 counter packets 0 bytes 0 masquerade 
		meta l4proto tcp ip saddr 172.17.0.2 ip daddr 172.17.0.2 tcp dport 443 counter packets 0 bytes 0 masquerade 
		meta l4proto tcp ip saddr 172.17.0.2 ip daddr 172.17.0.2 tcp dport 22 counter packets 0 bytes 0 masquerade 
	}

	chain PREROUTING {
		type nat hook prerouting priority dstnat; policy accept;
		fib daddr type local counter packets 239473 bytes 18282843 jump DOCKER
	}

	chain OUTPUT {
		type nat hook output priority -100; policy accept;
		ip daddr != 127.0.0.0/8 fib daddr type local counter packets 358 bytes 21886 jump DOCKER
	}
}

What’s your default zone set to in /etc/firewalld/firewalld.conf? Or you can also check it with:

firewall-cmd --get-default-zone

I did set mine to the external one. I’ve done pretty much the same as you (I guess we found the same article), so I’m not sure if it’s a Rocky 8 vs 9 thing as to why it doesn’t work. But to be honest, I would have expected it to just work.

Looks like it is external based on:

	chain filter_INPUT_ZONES {
		iifname "docker0" goto filter_IN_docker
		iifname "eno8403" goto filter_IN_internal
		iifname "eno8303" goto filter_IN_external
		goto filter_IN_external
	}

Although, we also note that every interface has an explicit zone, so the default is not actually used by anything.

What is more interesting is this:

table inet firewalld {
	chain filter_FORWARD {
		type filter hook forward priority filter + 10; policy accept;
		ct state { established, related } accept
		ct status dnat accept
		iifname "lo" accept
		ip6 daddr { ::/96, ::ffff:0.0.0.0/96, 2002::/24, 2002:a00::/24, 2002:7f00::/24, 2002:a9fe::/32, 2002:ac10::/28, 2002:c0a8::/32, 2002:e000::/19 } log prefix "RFC3964_IPv4_REJECT: " reject with icmpv6 type addr-unreachable
		jump filter_FORWARD_ZONES
		ct state { invalid } log prefix "STATE_INVALID_DROP: "
		ct state { invalid } drop
		log prefix "FINAL_REJECT: "
		reject with icmpx type admin-prohibited
	}
}
table ip filter {
	chain FORWARD {
		type filter hook forward priority filter; policy accept;
		counter packets 379714 bytes 157351467 jump DOCKER-USER
		counter packets 379719 bytes 157352208 jump DOCKER-ISOLATION-STAGE-1
		oifname "docker0" ct state related,established counter packets 0 bytes 0 accept
		oifname "docker0" counter packets 0 bytes 0 jump DOCKER
		iifname "docker0" oifname != "docker0" counter packets 0 bytes 0 accept
		iifname "docker0" oifname "docker0" counter packets 0 bytes 0 accept
	}
}

There are two chains (FORWARD in table ip filter and filter_FORWARD in table inet firewalld) on the forward hook. Nftables allows that. AFAIK, hooks run in ascending priority order, so FORWARD (priority filter, i.e. 0) is processed before filter_FORWARD (filter + 10). Apparently Docker creates its own tables, even though docker-related chains are also in table inet firewalld.

The firewalld+docker generated ruleset is… not easy to read. Nothing stands out as a clear explanation.

Worth noticing are the counter and log actions. Nftables does not collect statistics by default like iptables did, so one has to add counters explicitly. I’ve written rulesets by hand, so I don’t know how/whether firewalld supports them.

A counter does not have to be a separate rule; it can accompany another action:
iifname "lan" tcp dport ssh ct state { new, untracked } counter accept
counts and accepts.

The log rules are not in the default ruleset either; you have added them somehow. Perhaps with --set-log-denied=all?
Where are the logs? journalctl -u firewalld?
Do they explain what is rejected?
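For reference, that setting can be checked and toggled like so (the rule hits typically land in the kernel log):

firewall-cmd --get-log-denied      # current setting: off, all, unicast, broadcast, multicast
firewall-cmd --set-log-denied=all  # log every denied packet
journalctl -k | grep -E 'REJECT|DROP'   # rejected/dropped packets show up in the kernel log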

I would listen (with tcpdump, wireshark, etc.) on the router, on both interfaces; see the sketch after the list below.

  1. Connection attempt from internal machine X to 74.6.136.150:80 will show on eno8403
  2. Packet from address Y of eno8303 should leave to 74.6.136.150:80 via eno8303
  3. Reply from 74.6.136.150:80 to Y should arrive back to eno8303
  4. Reply from 74.6.136.150:80 to X obviously does not exit via eno8403 (but does something else?)
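A minimal sketch of those captures, assuming the destination from the wget test above:

tcpdump -ni eno8403 host 74.6.136.150 and port 80   # internal side: steps 1 and 4
tcpdump -ni eno8303 host 74.6.136.150 and port 80   # external side: steps 2 and 3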

Sure, here it is:

firewall-cmd --get-default-zone
external

I did some more testing and while I consider myself okay with networking, some of what is going wrong here is definitely over my head.

Here is what a call made from a computer within my network (routing through the server in question) looks like. The computer just makes a wget www.google.ca call. I set up tcpdump on the routing server to record its flow, monitoring the external interface:

New users are restricted from posting more than 1 image on a post. I have 2 more screen grabs I can share:

  1. A tcpdump capture set up to monitor the internal network card (vs. the capture on the external above) yields the same results, telling us that the server is in fact routing the content (which is already garbled, I guess, at this point?).
  2. A tcpdump capture set up to monitor the external network card while performing the same wget call from the router (server in question), to show a perfect (expected) transaction. It is all green lights and consists of about 26 packets exchanged, unlike the 7 that occur when a machine on my internal network performs it.

Not sure if the issue is with the kernel’s connection tracking (ip_conntrack)? I checked that the relevant modules are loaded correctly:

lsmod | egrep nf
binfmt_misc            28672  1
nf_conntrack_netlink    57344  0
nft_objref             16384  1
nf_conntrack_netbios_ns    16384  4
nf_conntrack_broadcast    16384  1 nf_conntrack_netbios_ns
nft_counter            16384  19
nft_compat             20480  14
nft_masq               16384  1
nft_fib_inet           16384  1
nft_fib_ipv4           16384  1 nft_fib_inet
nft_fib_ipv6           16384  1 nft_fib_inet
nft_fib                16384  3 nft_fib_ipv6,nft_fib_ipv4,nft_fib_inet
nft_reject_inet        16384  8
nf_reject_ipv4         16384  2 nft_reject_inet,ipt_REJECT
nf_reject_ipv6         20480  1 nft_reject_inet
nft_reject             16384  1 nft_reject_inet
nft_ct                 20480  27
nft_chain_nat          16384  5
nf_nat                 53248  4 xt_nat,nft_masq,nft_chain_nat,xt_MASQUERADE
nf_conntrack          176128  9 xt_conntrack,nf_nat,nft_ct,nf_conntrack_netbios_ns,xt_nat,nf_conntrack_broadcast,nf_conntrack_netlink,nft_masq,xt_MASQUERADE
nf_defrag_ipv6         24576  1 nf_conntrack
nf_defrag_ipv4         16384  1 nf_conntrack
nf_tables             270336  535 nft_ct,nft_compat,nft_reject_inet,nft_fib_ipv6,nft_objref,nft_fib_ipv4,nft_counter,nft_masq,nft_chain_nat,nft_reject,nft_fib,nft_fib_inet
nfnetlink              20480  6 nft_compat,nf_conntrack_netlink,nf_tables,ip_set
libcrc32c              16384  5 nf_conntrack,nf_nat,nf_tables,xfs,raid456
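(Side note: with conntrack-tools installed, the tracked flows themselves can be listed, which would show the state of the hung connection:)

conntrack -L -p tcp | grep 74.6.136.150   # inspect the state of the hung HTTP flow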

The versions involved in all of this are the latest and greatest too:

rpm -qa | egrep '(kernel|firewalld|NetworkManager)' | sort

firewalld-1.0.0-4.el9.noarch
firewalld-filesystem-1.0.0-4.el9.noarch
kernel-5.14.0-70.13.1.el9_0.x86_64
kernel-5.14.0-70.22.1.el9_0.x86_64
kernel-core-5.14.0-70.13.1.el9_0.x86_64
kernel-core-5.14.0-70.22.1.el9_0.x86_64
kernel-modules-5.14.0-70.13.1.el9_0.x86_64
kernel-modules-5.14.0-70.22.1.el9_0.x86_64
kernel-tools-5.14.0-70.22.1.el9_0.x86_64
kernel-tools-libs-5.14.0-70.22.1.el9_0.x86_64
NetworkManager-1.36.0-5.el9_0.x86_64
NetworkManager-libnm-1.36.0-5.el9_0.x86_64
NetworkManager-ppp-1.36.0-5.el9_0.x86_64
NetworkManager-team-1.36.0-5.el9_0.x86_64
NetworkManager-tui-1.36.0-5.el9_0.x86_64

Edit:
Just saw you added to your original post asking for more details. The firewalld logs are just the typical Docker ones. When you google it, it’s just its way of trying to remove chains that don’t exist, and it’s nothing to be worried about.

Sep 18 17:33:02 core.home.arpa systemd[1]: Started firewalld - dynamic firewall daemon.
Sep 18 17:33:11 core.home.arpa firewalld[1045]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t nat -D PREROUTING -m addrtype --dst-type LOCAL -j DOCKER' failed: iptables v1.8.7 (nf_tables): Chain 'DOCKER' does not>
                                                Try `iptables -h' or 'iptables --help' for more information.
Sep 18 17:33:11 core.home.arpa firewalld[1045]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t nat -D OUTPUT -m addrtype --dst-type LOCAL ! --dst 127.0.0.0/8 -j DOCKER' failed: iptables v1.8.7 (nf_tables): Chain '>
                                                Try `iptables -h' or 'iptables --help' for more information.
Sep 18 17:33:11 core.home.arpa firewalld[1045]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t nat -D OUTPUT -m addrtype --dst-type LOCAL -j DOCKER' failed: iptables v1.8.7 (nf_tables): Chain 'DOCKER' does not exi>
                                                Try `iptables -h' or 'iptables --help' for more information.
Sep 18 17:33:11 core.home.arpa firewalld[1045]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t nat -D PREROUTING' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Sep 18 17:33:11 core.home.arpa firewalld[1045]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t nat -D OUTPUT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Sep 18 17:33:11 core.home.arpa firewalld[1045]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t nat -F DOCKER' failed: iptables: No chain/target/match by that name.
Sep 18 17:33:11 core.home.arpa firewalld[1045]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t nat -X DOCKER' failed: iptables: No chain/target/match by that name.
Sep 18 17:33:11 core.home.arpa firewalld[1045]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -F DOCKER' failed: iptables: No chain/target/match by that name.
Sep 18 17:33:11 core.home.arpa firewalld[1045]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -X DOCKER' failed: iptables: No chain/target/match by that name.
Sep 18 17:33:11 core.home.arpa firewalld[1045]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -F DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name.
Sep 18 17:33:11 core.home.arpa firewalld[1045]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -X DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name.
Sep 18 17:33:11 core.home.arpa firewalld[1045]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -F DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.
Sep 18 17:33:11 core.home.arpa firewalld[1045]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -X DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.
Sep 18 17:33:11 core.home.arpa firewalld[1045]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -F DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name.
Sep 18 17:33:11 core.home.arpa firewalld[1045]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -X DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name.
Sep 18 17:33:11 core.home.arpa firewalld[1045]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Sep 18 17:33:11 core.home.arpa firewalld[1045]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Sep 18 18:05:39 core.home.arpa firewalld[1045]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Sep 18 18:05:39 core.home.arpa firewalld[1045]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Sep 18 18:26:26 core.home.arpa firewalld[1045]: WARNING: ZONE_ALREADY_SET: 'eno8403' already bound to 'internal'
Sep 18 18:28:22 core.home.arpa firewalld[1045]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Sep 18 18:28:23 core.home.arpa firewalld[1045]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).

And iptables:

iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere            
DOCKER-USER  all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain DOCKER (1 references)
target     prot opt source               destination         
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:https
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:ssh

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-ISOLATION-STAGE-2 (0 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-USER (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere      

If you want, I can disable Docker for the time being. This routing issue existed before it was installed and persists with it disabled as well. But I can see how it might throw added confusion into all of this.


Just want to thank you again @iwalker and @jlehtone for your help so far here. I hope you and/or someone else might be able to make more sense of the information shared here to pinpoint the culprit.

Thoughts?

The ‘iptables’ command is, already in EL8, a mere wrapper around ‘nft’. It shows only some chains from the ruleset.
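You can see that from the version string (as also visible in your logs above):

iptables -V
# iptables v1.8.7 (nf_tables)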

One thing to try is to start with a “clean slate” (firewalld and docker off, ruleset empty) and load a simpler ruleset.

The content below is adapted from what I have on an AlmaLinux 8 machine (where I run nftables.service rather than firewalld.service). Alas, I don’t have an EL9-based router in use yet.

# cat /etc/sysctl.d/my.conf 
net.ipv4.ip_forward = 1
# tail -1 /etc/sysconfig/nftables.conf 
include "/etc/nftables/my.nft"
# cat /etc/nftables/my.nft
table inet my {
	chain raw_PREROUTING {
		type filter hook prerouting priority raw; policy accept;
		icmpv6 type { nd-router-advert, nd-neighbor-solicit } accept
		meta nfproto ipv6 fib saddr . iif oif missing drop
	}

	chain filter_INPUT {
		type filter hook input priority filter; policy accept;
		ct state established,related accept
		ct status dnat accept
		iifname "lo" accept
		jump filter_INPUT_ZONES
		ct state invalid drop
		reject with icmpx type admin-prohibited
	}

	chain filter_FORWARD {
		type filter hook forward priority filter; policy accept;
		ct state established,related counter accept
		ct status dnat counter accept
		iifname "lo" accept
		ip6 daddr { ::/96, ::ffff:0.0.0.0/96, 2002::/24, 2002:a00::/24, 2002:7f00::/24, 2002:a9fe::/32, 2002:ac10::/28, 2002:c0a8::/32, 2002:e000::/19 } reject with icmpv6 type addr-unreachable

		iifname "eno8403" oifname "eno8303" counter accept
		iifname "eno8303" counter drop
		meta l4proto { icmp, ipv6-icmp } counter accept

		ct state invalid drop
		reject with icmpx type admin-prohibited
	}

	chain filter_OUTPUT {
		type filter hook output priority filter; policy accept;
		oifname "lo" accept
		ip6 daddr { ::/96, ::ffff:0.0.0.0/96, 2002::/24, 2002:a00::/24, 2002:7f00::/24, 2002:a9fe::/32, 2002:ac10::/28, 2002:c0a8::/32, 2002:e000::/19 } reject with icmpv6 type addr-unreachable
	}

	chain filter_INPUT_ZONES {
		iifname "eno8403"        counter goto filter_IN_trust 
		iifname "eno8303"        counter goto filter_IN_block
	}

	chain filter_IN_trust {
		tcp dport ssh          ct state new,untracked counter accept
		meta l4proto { icmp, ipv6-icmp } accept
		counter accept
	}

	chain filter_IN_block {
		reject with icmpx type admin-prohibited
	}
}

table ip my {
	chain nat_PREROUTING {
		type nat hook prerouting priority dstnat; policy accept;
	}

	chain nat_POSTROUTING {
		type nat hook postrouting priority srcnat; policy accept;
		oifname "eno8303" masquerade
	}
}

If routing is successful with the above, then we know that the issue is in the ruleset. If not, then either the above ruleset is bad too, or the issue really is within the kernel.
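To run the test, something along these lines (assuming the file paths above):

systemctl stop firewalld docker   # clean slate: no firewalld, no docker rules
nft flush ruleset                 # empty the current ruleset
nft -f /etc/nftables/my.nft       # load the simpler ruleset
nft list ruleset                  # verify what is active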

Sorry to take so long to get back to you; Monday means back to work, so evenings are all I’ve got now. I really appreciate your patience.

Your rules seem to improve things: my phone can pull up some webpages, while others time out (so this is further along). Sticking with the same wireshark and wget call as above also gets beyond 7 packets… in fact it was the 79th packet (during the SSL exchange) where the connection died. And it does it every time… same TCP retransmission errors.

Edit:
In an attempt to rule out physical hardware, I used a different Ethernet port and set it up to be the new external. Mind you, the connection uses the same driver:

 lshw -class network
...
# eno8303 (the device we've been using for this entire ticket)
description: Ethernet interface
product: NetXtreme BCM5720 Gigabit Ethernet PCIe
vendor: Broadcom Inc. and subsidiaries
logical name: eno8303
version: 00
serial: <not important>
capacity: 1Gbit/s
width: 64 bits
clock: 33MHz
capabilities: bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation

...

# ens1f0 - the device I attempted to use instead just now (same problem);
#          just a different version of the Broadcom
description: Ethernet interface
product: NetXtreme BCM5719 Gigabit Ethernet PCIe
vendor: Broadcom Inc. and subsidiaries
logical name: ens1f0
version: 01
serial: <not important>
size: 1Gbit/s
capacity: 1Gbit/s
width: 64 bits
clock: 33MHz
capabilities: bus_master cap_list rom Ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation

I was reading on the internet that this happens with some Cisco switches that use auto-negotiation. I’m wondering if this is some kind of driver problem between the network card on the server and the cable modem (configured in bridge mode). Or if there is some kind of MTU setting I should be setting to facilitate this scenario?
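For what it’s worth, my understanding is that the path MTU can be probed with ping’s don’t-fragment flag (1472 bytes of payload + 28 bytes of headers = the usual 1500), and the interface MTU lowered to test:

ping -M do -s 1472 8.8.8.8        # succeeds only if the path carries full 1500-byte frames
ip link set dev eno8303 mtu 1400  # temporarily lower the external interface MTU to test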

Or perhaps it’s what you’re suggesting and it’s a kernel issue?

Not sure what the next steps are… Thoughts?

I’m not technically at your experience level, but I do follow the dd-wrt forums since I use their software on my router. One of the things I notice is that though they support certain Broadcom hardware, the majority of users favor Atheros-based hardware. This appears to be because Broadcom is less forthcoming with their specs. Just a thought, not a solution.

One of the things I notice is that though they support certain Broadcom hardware, the majority of users favor Atheros-based hardware. This appears to be because Broadcom is less forthcoming with their specs. Just a thought, not a solution.

I would partially agree and disagree with you.

The disagree part would be: the server I’m using (Dell R350) optionally ships with RHEL 9 (if one chooses it). Also, this server is certified on Red Hat’s website:


Link:

# I have to put the link as a code block since, as a newbie, I can only have
# one hyperlink (already used above) per post, so bear with me here. But
# this is the direct link to the RHEL 9 certified hardware (specifically the R350).
https://catalog.redhat.com/hardware/servers/detail/6207522?platforms=Red%20Hat%20Enterprise%20Linux%209

I only ordered this server because of this, knowing very well I’d be throwing Rocky Linux 9 on it instead :slight_smile: . I’m somewhat under the impression that if it works for RHEL 9, then it should work for Rocky 9 as well. Any bug detected here would (likely) persist in RHEL 9 too? Or is this a very bad assumption? I could be wrong here…

That said, I do agree with you about Broadcom support; they’ve always been somewhat closed about it. But they don’t disappear as time goes on, and every time they come out with new hardware, they also come out with the drivers to support it. So props to that; Nvidia works the same way.

Speaking of support: I really appreciate the level this forum provides. I look forward to any more comments/suggestions anyone has. You’ve all been amazing.


On another note, I played a bit with MTU settings and am able to get some web pages (such as google.com) to show up every time. However, if I search for anything, or even try to access forums.rockylinux.org, it goes through the same duplicate-ACK pattern and just dies. So there is definitely something wrong :frowning: .

Just out of interest, are you willing to check/test it with Rocky 8? If not, I’ll fire up a Rocky 9 install to make a firewall from it and try to replicate your issue. We could at least rule out whether something changed in Rocky 9 that is stopping this from working.

As an aside, I just did it on Rocky 9 and it works:

firewall-cmd --zone=external --add-interface=enp2s0 --permanent
firewall-cmd --zone=internal --add-interface=enp1s0 --permanent
firewall-cmd --set-default-zone=external --permanent
firewall-cmd --reload
firewall-cmd --get-default-zone
firewall-cmd --new-policy internal-external --permanent
firewall-cmd --reload
firewall-cmd --policy internal-external --add-ingress-zone=internal --permanent
firewall-cmd --policy internal-external --add-egress-zone=external --permanent
firewall-cmd --policy internal-external --set-target=ACCEPT --permanent
firewall-cmd --reload
firewall-cmd --info-policy internal-external

enp1s0 is my internal network with just an IP assigned; enp2s0 is the internet connection with the default route set appropriately. I actually edited /etc/firewalld/firewalld.conf and set the default zone there, but I expect the command I show above would do the same thing. I create the policy, add the ingress and egress zones, change the target, and then it worked for me.


Indeed. Bug-for-bug compatible is the goal.

With devices it is useful to run lspci -nn to get the device ID. The ID tells the exact version of the device. (Marketing names are more ambiguous.)

ELRepo builds kernel modules for devices that RH does not support in EL, but they also build some modules that supersede (“fixed versions of”) what is in EL. See ELRepo | DeviceIDs

Have you searched RHEL bugs of this type? In https://bugzilla.redhat.com/

On a Dell R440 with CentOS 7 I’ve seen a Broadcom [14e4:16d8] that was not reliable, so it was replaced with an Intel [8086:1572], although these were 10Gbps cards:

Broadcom Inc. and subsidiaries BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller [14e4:16d8] (rev 01)
Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [8086:1572] (rev 02)

With devices it is useful to run lspci -nn to get the device ID. The ID tells the exact version of the device. (Marketing names are more ambiguous.)

Sounds good! Here it is:
Note: both (eno8303, ‘external’) and (eno8403, ‘internal’) are the same card to a T (minus the serial number, of course :slight_smile: ), so I’ll just paste the one:

lspci -vv
...
# just filtering for the device in question (eno8303 - 'external')
01:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
        Subsystem: Broadcom Inc. and subsidiaries 4-port 1Gb Ethernet Adapter
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Interrupt: pin A routed to IRQ 16
        Region 0: Memory at 40000b0000 (64-bit, prefetchable) [size=64K]
        Region 2: Memory at 40000a0000 (64-bit, prefetchable) [size=64K]
        Region 4: Memory at 4000090000 (64-bit, prefetchable) [size=64K]
        Expansion ROM at 91900000 [disabled] [size=256K]
        Capabilities: [48] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=1 PME-
        Capabilities: [50] Vital Product Data
                Product Name: Broadcom NetXtreme Gigabit Ethernet
                Read-only fields:
                        [PN] Part number: BCM95719
                        [MN] Manufacture ID: 1028
                        [V0] Vendor specific: FFV22.00.6
                        [V1] Vendor specific: DSV1028VPDR.VER1.0
                        [V2] Vendor specific: NPY4
                        [V3] Vendor specific: PMT1
                        [V4] Vendor specific: NMVBroadcom Corp
                        [V5] Vendor specific: DTINIC
                        [V6] Vendor specific: DCM1001008945210100894532010089454301008945
                        [RV] Reserved: checksum good, 213 byte(s) reserved
                End
        Capabilities: [58] MSI: Enable- Count=1/8 Maskable- 64bit+
                Address: 0000000000000000  Data: 0000
        Capabilities: [a0] MSI-X: Enable+ Count=17 Masked-
                Vector table: BAR=4 offset=00000000
                PBA: BAR=4 offset=00001000
        Capabilities: [ac] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 <64us
                        ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75.000W
                DevCtl: CorrErr- NonFatalErr- FatalErr- UnsupReq-
                        RlxdOrd- ExtTag- PhantFunc- AuxPwr+ NoSnoop- FLReset-
                        MaxPayload 256 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr+ TransPend-
                LnkCap: Port #0, Speed 5GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <1us, L1 <2us
                        ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp-
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                        ExtSynch- ClockPM+ AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 5GT/s (ok), Width x4 (ok)
                        TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
                         10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
                         EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
                         FRS- TPHComp- ExtTPHComp-
                         AtomicOpsCap: 32bit- 64bit- 128bitCAS-
                DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis- LTR- OBFF Disabled,
                         AtomicOpsCtl: ReqEn-
                LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete- EqualizationPhase1-
                         EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
                         Retimer- 2Retimers- CrosslinkRes: unsupported
        Capabilities: [100 v1] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
                AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
                        MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
                HeaderLog: 00000000 00000000 00000000 00000000
        Capabilities: [13c v1] Device Serial Number <redacted>
        Capabilities: [150 v1] Power Budgeting <?>
        Capabilities: [160 v1] Virtual Channel
                Caps:   LPEVC=0 RefClk=100ns PATEntryBits=1
                Arb:    Fixed- WRR32- WRR64- WRR128-
                Ctrl:   ArbSelect=Fixed
                Status: InProgress-
                VC0:    Caps:   PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
                        Arb:    Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
                        Ctrl:   Enable+ ID=0 ArbSelect=Fixed TC/VC=ff
                        Status: NegoPending- InProgress-
        Kernel driver in use: tg3
        Kernel modules: tg3

@iwalker I have not tried Rocky Linux 8, no. To be blunt though, I don’t really want to. I’m hoping to set this server up and leave it for the next several years (of course keeping up with patches).

Have you searched RHEL bugs of this type? In https://bugzilla.redhat.com/

I did my best before I came here, but it’s very possible I missed something. I could reach out to Dell and get their opinion. Not sure if they’ll throw back that I’m not using RHEL 9 and thus deny support.

You can filter with the -s option. The -nn and -vv do show different info. In the below outputs I see the ID [14e4:165f] only in the first:

$ lspci -s 04:00.0 -nn
04:00.0 Ethernet controller [0200]: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe [14e4:165f]

$ lspci -s 04:00.0 -vv
04:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe
	Subsystem: Dell Device 001f
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 17
	NUMA node: 0
	Region 0: Memory at 92e30000 (64-bit, prefetchable) [size=64K]
	Region 2: Memory at 92e40000 (64-bit, prefetchable) [size=64K]
	Region 4: Memory at 92e50000 (64-bit, prefetchable) [size=64K]
	Expansion ROM at 90000000 [disabled] [size=256K]
	Capabilities: [48] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=1 PME-
	Capabilities: [50] Vital Product Data
		Product Name: Broadcom NetXtreme Gigabit Ethernet
		Read-only fields:
			[PN] Part number: BCM95720
			[MN] Manufacture ID: 31 30 32 38
			[V0] Vendor specific: FFV21.40.2
			[V1] Vendor specific: DSV1028VPDR.VER1.0
			[V2] Vendor specific: NPY2
			[V3] Vendor specific: PMT1
			[V4] Vendor specific: NMVBroadcom Corp
			[V5] Vendor specific: DTINIC
			[V6] Vendor specific: DCM1001008d452101008d45
			[RV] Reserved: checksum good, 233 byte(s) reserved
		End
	Capabilities: [58] MSI: Enable- Count=1/8 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [a0] MSI-X: Enable+ Count=17 Masked-
		Vector table: BAR=4 offset=00000000
		PBA: BAR=4 offset=00001000
	Capabilities: [ac] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 <64us
			ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 10.000W
		DevCtl:	Report errors: Correctable- Non-Fatal+ Fatal+ Unsupported+
			RlxdOrd- ExtTag- PhantFunc- AuxPwr+ NoSnoop- FLReset-
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr+ TransPend-
		LnkCap:	Port #0, Speed 5GT/s, Width x2, ASPM not supported, Exit Latency L0s <1us, L1 <2us
			ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp-
		LnkCtl:	ASPM Disabled; RCB 64 bytes Disabled- CommClk+
			ExtSynch- ClockPM+ AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR-, OBFF Not Supported
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis-, LTR-, OBFF Disabled
		LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
			 EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
	Capabilities: [100 v1] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt+ RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES+ TLP+ FCP+ CmpltTO+ CmpltAbrt+ UnxCmplt- RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
		CEMsk:	RxErr- BadTLP+ BadDLLP+ Rollover+ Timeout+ NonFatalErr+
		AERCap:	First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
	Capabilities: [13c v1] Device Serial Number 00-00-4c-d9-8f-57-63-04
	Capabilities: [150 v1] Power Budgeting <?>
	Capabilities: [160 v1] Virtual Channel
		Caps:	LPEVC=0 RefClk=100ns PATEntryBits=1
		Arb:	Fixed- WRR32- WRR64- WRR128-
		Ctrl:	ArbSelect=Fixed
		Status:	InProgress-
		VC0:	Caps:	PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
			Arb:	Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
			Ctrl:	Enable+ ID=0 ArbSelect=Fixed TC/VC=ff
			Status:	NegoPending- InProgress-
	Kernel driver in use: tg3
	Kernel modules: tg3

Yeah, I realise that, but for a quick test to see if the problem exists there too, you’d literally only lose an hour or so to check.

I’ll give a quick scenario of something I experienced. I used to use Linux Mint, but for some bizarre reason my wireless network card would never work at 5GHz, always connected on 2.4GHz. If I forced it to 5GHz, there was major packet drop, etc., basically unusable. I figured maybe it was an old driver, so I fired up Fedora, and everything worked, and fast. So I decided that was it and switched the laptop over to Fedora. Started getting it set up, all looking good. I then decided to enable the NVIDIA drivers. Boom, network unstable, no longer working, dropping packets like crazy. My Linux Mint install had been using the NVIDIA drivers; I didn’t think to switch it to the Intel one. Had I done that, sure, I could have saved the reinstall time by not doing it in the first place, but then I’d have never found the problem. I’d have just carried on using 2.4GHz until I decided to reinstall or get a new laptop.

Anyway, I did test it, and it works for me on both Rocky 8 and Rocky 9 in VMs I created, so it could well be hardware related for you, as @jlehtone has mentioned. Rocky 8 does have a different kernel, older than Rocky 9’s. You could enable ELRepo and potentially use a newer mainline kernel; that might be something to try before testing Rocky 8 as a comparison.
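Roughly, pulling in the ELRepo mainline kernel looks like this (check the ELRepo site for the current instructions):

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
dnf install https://www.elrepo.org/elrepo-release-9.el9.elrepo.noarch.rpm
dnf --enablerepo=elrepo-kernel install kernel-ml
# reboot and select the new kernel in the boot menu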

Other than that, sorry, I cannot suggest anything more. Assuming you followed steps similar to what I outlined in my previous post for configuring the firewall, it should work, so I would say software-wise it’s fine. Whether any network drivers/modules are interfering, or are older than, say, a mainline kernel from ELRepo, I have no way to test. It would need to be something that you are willing to try on your server.


I think we’re at a stalemate if there are no other suggestions. I will buy an Intel network card, swap it in for the one shipped from Dell, and see if that makes a difference.

As an update to this, I popped out the Broadcom Ethernet device, and stuck in an Intel card.

Hooked it all back up again and everything worked on the first try… no MTU tweaks, no nothing; super simple, like it’s supposed to be. You called it, @jbkt23 :point_up:

It’s a very unfortunate situation; I’m not sure exactly what qualified the hardware for certification when the first network test I ran on it fails.

Just wanted to give props to all of you guys for your awesome patience and help. Hopefully this thread will prove useful to someone else. Maybe someone over at Red Hat can take a look at this and fix their Broadcom drivers before someone else notices ;).

Kudos again guys!


Unfortunately it is unlikely that RH looks here. You would need to file a bug against the network component (which one, I don’t know), showing the steps to failure and what you did to fix it.

RH Bugzilla

Certifications are a kind of pay-to-play service and don’t certify specific use cases, if you read the fine print.
