When I start firewalld, ssh from another host stops

On Debian 12 I also enabled NAT in 1 minute using firewalld:
firewall-cmd --get-active-zones
public
interfaces: enp2s0 enp3s0

firewall-cmd --permanent --zone=public --add-masquerade
success
firewall-cmd --permanent --query-masquerade
success
firewall-cmd --permanent --zone=public --add-rich-rule='rule family=ipv4 source address=10.44.0.0/16 masquerade'
success

firewall-cmd --reload

firewall-cmd --list-all --zone=public
public (active)
target: default
icmp-block-inversion: no
interfaces: enp2s0 enp3s0
sources:
services: dhcpv6-client
ports: 11954/tcp
protocols:
forward: yes
masquerade: yes
forward-ports:
source-ports:
icmp-blocks:
rich rules:
rule family="ipv4" source address="10.44.0.0/16" masquerade

What are the actual rules in the kernel?
You tell FirewallD to create masquerade rule(s) (masquerade: yes)
AND
add an explicit masquerade rich rule on top of that (rule family="ipv4" source address="10.44.0.0/16" masquerade).
Isn't that redundant?


If the masquerade: yes does create rule(s), then it adds masquerade on both interfaces (enp2s0 and enp3s0).
Why would you want to SNAT in post-routing on enp2s0?

My rules (I have reduced the range of the masquerade):

firewall-cmd --permanent --list-all --zone=public
public (active)
target: default
icmp-block-inversion: no
interfaces: enp2s0
sources: 10.44.1.0/24
services: dhcpv6-client
ports: 11954/tcp
protocols:
forward: yes
masquerade: yes
forward-ports:
port=80:proto=tcp:toport=80:toaddr=10.44.2.58
port=30009:proto=tcp:toport=3389:toaddr=10.44.1.44
source-ports:
icmp-blocks:
rich rules:
rule family="ipv4" source address="10.44.7.77" masquerade
rule family="ipv4" source address="10.44.7.78" masquerade
rule family="ipv4" source address="10.44.1.0/24" masquerade
rule family="ipv4" source address="10.44.2.0/24" masquerade
rule family="ipv4" source address="10.44.3.0/24" masquerade
rule family="ipv4" source address="10.44.4.0/24" masquerade
rule family="ipv4" source address="10.44.5.0/24" masquerade

But despite these rules, NAT for some reason still allows all addresses in the 10.44.0.0/16 range (for example 10.44.250.250), even though I deleted that rule. It probably remains somewhere :laughing:

Yes, you probably did request that rule with:

firewall-cmd --permanent --zone=public --add-masquerade

You should see the actual rules in the kernel (not the FirewallD config) with:

nft list ruleset
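
If the full ruleset is too long to read, you can also list just the chain where the public zone's masquerade rules end up (the chain name below is the one firewalld's nftables backend uses for the public zone):

nft list chain inet firewalld nat_POST_public_allow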

nft list ruleset
table inet firewalld {
chain mangle_PREROUTING {
type filter hook prerouting priority mangle + 10; policy accept;
jump mangle_PREROUTING_ZONES
}

chain mangle_PREROUTING_POLICIES_pre {
	jump mangle_PRE_policy_allow-host-ipv6
}

chain mangle_PREROUTING_ZONES {
	ip saddr 10.44.1.0/24 goto mangle_PRE_public
	iifname "enp3s0" goto mangle_PRE_public
	iifname "enp2s0" goto mangle_PRE_public
	goto mangle_PRE_public
}

chain mangle_PREROUTING_POLICIES_post {
}

chain nat_PREROUTING {
	type nat hook prerouting priority dstnat + 10; policy accept;
	jump nat_PREROUTING_ZONES
}

chain nat_PREROUTING_POLICIES_pre {
	jump nat_PRE_policy_allow-host-ipv6
}

chain nat_PREROUTING_ZONES {
	ip saddr 10.44.1.0/24 goto nat_PRE_public
	iifname "enp3s0" goto nat_PRE_public
	iifname "enp2s0" goto nat_PRE_public
	goto nat_PRE_public
}

chain nat_PREROUTING_POLICIES_post {
}

chain nat_POSTROUTING {
	type nat hook postrouting priority srcnat + 10; policy accept;
	jump nat_POSTROUTING_ZONES
}

chain nat_POSTROUTING_POLICIES_pre {
}

chain nat_POSTROUTING_ZONES {
	ip daddr 10.44.1.0/24 goto nat_POST_public
	oifname "enp3s0" goto nat_POST_public
	oifname "enp2s0" goto nat_POST_public
	goto nat_POST_public
}

chain nat_POSTROUTING_POLICIES_post {
}

chain nat_OUTPUT {
	type nat hook output priority -90; policy accept;
	jump nat_OUTPUT_POLICIES_pre
	jump nat_OUTPUT_POLICIES_post
}

chain nat_OUTPUT_POLICIES_pre {
}

chain nat_OUTPUT_POLICIES_post {
}

chain filter_PREROUTING {
	type filter hook prerouting priority filter + 10; policy accept;
	icmpv6 type { nd-router-advert, nd-neighbor-solicit } accept
	meta nfproto ipv6 fib saddr . mark . iif oif missing drop
}

chain filter_INPUT {
	type filter hook input priority filter + 10; policy accept;
	ct state { established, related } accept
	ct status dnat accept
	iifname "lo" accept
	ct state invalid drop
	jump filter_INPUT_ZONES
	reject with icmpx admin-prohibited
}

chain filter_FORWARD {
	type filter hook forward priority filter + 10; policy accept;
	ct state { established, related } accept
	ct status dnat accept
	iifname "lo" accept
	ct state invalid drop
	ip6 daddr { ::/96, ::ffff:0.0.0.0/96, 2002::/24, 2002:a00::/24, 2002:7f00::/24, 2002:a9fe::/32, 2002:ac10::/28, 2002:c0a8::/32, 2002:e000::/19 } reject with icmpv6 addr-unreachable
	jump filter_FORWARD_ZONES
	reject with icmpx admin-prohibited
}

chain filter_OUTPUT {
	type filter hook output priority filter + 10; policy accept;
	ct state { established, related } accept
	oifname "lo" accept
	ip6 daddr { ::/96, ::ffff:0.0.0.0/96, 2002::/24, 2002:a00::/24, 2002:7f00::/24, 2002:a9fe::/32, 2002:ac10::/28, 2002:c0a8::/32, 2002:e000::/19 } reject with icmpv6 addr-unreachable
	jump filter_OUTPUT_POLICIES_pre
	jump filter_OUTPUT_POLICIES_post
}

chain filter_INPUT_POLICIES_pre {
	jump filter_IN_policy_allow-host-ipv6
}

chain filter_INPUT_ZONES {
	ip saddr 10.44.1.0/24 goto filter_IN_public
	iifname "enp3s0" goto filter_IN_public
	iifname "enp2s0" goto filter_IN_public
	goto filter_IN_public
}

chain filter_INPUT_POLICIES_post {
}

chain filter_FORWARD_POLICIES_pre {
}

chain filter_FORWARD_ZONES {
	ip saddr 10.44.1.0/24 goto filter_FWD_public
	iifname "enp3s0" goto filter_FWD_public
	iifname "enp2s0" goto filter_FWD_public
	goto filter_FWD_public
}

chain filter_FORWARD_POLICIES_post {
}

chain filter_OUTPUT_POLICIES_pre {
}

chain filter_OUTPUT_POLICIES_post {
}

chain filter_IN_public {
	jump filter_INPUT_POLICIES_pre
	jump filter_IN_public_pre
	jump filter_IN_public_log
	jump filter_IN_public_deny
	jump filter_IN_public_allow
	jump filter_IN_public_post
	jump filter_INPUT_POLICIES_post
	meta l4proto { icmp, ipv6-icmp } accept
	reject with icmpx admin-prohibited
}

chain filter_IN_public_pre {
}

chain filter_IN_public_log {
}

chain filter_IN_public_deny {
}

chain filter_IN_public_allow {
	ip6 daddr fe80::/64 udp dport 546 accept
	tcp dport 11954 accept
}

chain filter_IN_public_post {
}

chain nat_POST_public {
	jump nat_POSTROUTING_POLICIES_pre
	jump nat_POST_public_pre
	jump nat_POST_public_log
	jump nat_POST_public_deny
	jump nat_POST_public_allow
	jump nat_POST_public_post
	jump nat_POSTROUTING_POLICIES_post
}

chain nat_POST_public_pre {
}

chain nat_POST_public_log {
}

chain nat_POST_public_deny {
}

chain nat_POST_public_allow {
	meta nfproto ipv4 oifname != "lo" masquerade
	ip saddr 10.44.7.77 oifname != "lo" masquerade
	ip saddr 10.44.7.78 oifname != "lo" masquerade
	ip saddr 10.44.1.0/24 oifname != "lo" masquerade
	ip saddr 10.44.3.0/24 oifname != "lo" masquerade
	ip saddr 10.44.4.0/24 oifname != "lo" masquerade
	ip saddr 10.44.5.0/24 oifname != "lo" masquerade
	ip saddr 10.44.2.0/24 oifname != "lo" masquerade
}

chain nat_POST_public_post {
}

chain filter_FWD_public {
	jump filter_FORWARD_POLICIES_pre
	jump filter_FWD_public_pre
	jump filter_FWD_public_log
	jump filter_FWD_public_deny
	jump filter_FWD_public_allow
	jump filter_FWD_public_post
	jump filter_FORWARD_POLICIES_post
	reject with icmpx admin-prohibited
}

chain filter_FWD_public_pre {
}

chain filter_FWD_public_log {
}

chain filter_FWD_public_deny {
}

chain filter_FWD_public_allow {
	ip daddr 10.44.1.0/24 accept
	oifname "enp2s0" accept
	oifname "enp3s0" accept
}

chain filter_FWD_public_post {
}

chain nat_PRE_public {
	jump nat_PREROUTING_POLICIES_pre
	jump nat_PRE_public_pre
	jump nat_PRE_public_log
	jump nat_PRE_public_deny
	jump nat_PRE_public_allow
	jump nat_PRE_public_post
	jump nat_PREROUTING_POLICIES_post
}

chain nat_PRE_public_pre {
}

chain nat_PRE_public_log {
}

chain nat_PRE_public_deny {
}

chain nat_PRE_public_allow {
	meta nfproto ipv4 tcp dport 80 dnat ip to 10.44.2.58:80
	meta nfproto ipv4 tcp dport 30009 dnat ip to 10.44.1.44:3389
}

chain nat_PRE_public_post {
}

chain mangle_PRE_public {
	jump mangle_PREROUTING_POLICIES_pre
	jump mangle_PRE_public_pre
	jump mangle_PRE_public_log
	jump mangle_PRE_public_deny
	jump mangle_PRE_public_allow
	jump mangle_PRE_public_post
	jump mangle_PREROUTING_POLICIES_post
}

chain mangle_PRE_public_pre {
}

chain mangle_PRE_public_log {
}

chain mangle_PRE_public_deny {
}

chain mangle_PRE_public_allow {
}

chain mangle_PRE_public_post {
}

chain filter_IN_policy_allow-host-ipv6 {
	jump filter_IN_policy_allow-host-ipv6_pre
	jump filter_IN_policy_allow-host-ipv6_log
	jump filter_IN_policy_allow-host-ipv6_deny
	jump filter_IN_policy_allow-host-ipv6_allow
	jump filter_IN_policy_allow-host-ipv6_post
}

chain filter_IN_policy_allow-host-ipv6_pre {
}

chain filter_IN_policy_allow-host-ipv6_log {
}

chain filter_IN_policy_allow-host-ipv6_deny {
}

chain filter_IN_policy_allow-host-ipv6_allow {
	icmpv6 type nd-neighbor-advert accept
	icmpv6 type nd-neighbor-solicit accept
	icmpv6 type nd-router-advert accept
	icmpv6 type nd-redirect accept
}

chain filter_IN_policy_allow-host-ipv6_post {
}

chain nat_PRE_policy_allow-host-ipv6 {
	jump nat_PRE_policy_allow-host-ipv6_pre
	jump nat_PRE_policy_allow-host-ipv6_log
	jump nat_PRE_policy_allow-host-ipv6_deny
	jump nat_PRE_policy_allow-host-ipv6_allow
	jump nat_PRE_policy_allow-host-ipv6_post
}

chain nat_PRE_policy_allow-host-ipv6_pre {
}

chain nat_PRE_policy_allow-host-ipv6_log {
}

chain nat_PRE_policy_allow-host-ipv6_deny {
}

chain nat_PRE_policy_allow-host-ipv6_allow {
}

chain nat_PRE_policy_allow-host-ipv6_post {
}

chain mangle_PRE_policy_allow-host-ipv6 {
	jump mangle_PRE_policy_allow-host-ipv6_pre
	jump mangle_PRE_policy_allow-host-ipv6_log
	jump mangle_PRE_policy_allow-host-ipv6_deny
	jump mangle_PRE_policy_allow-host-ipv6_allow
	jump mangle_PRE_policy_allow-host-ipv6_post
}

chain mangle_PRE_policy_allow-host-ipv6_pre {
}

chain mangle_PRE_policy_allow-host-ipv6_log {
}

chain mangle_PRE_policy_allow-host-ipv6_deny {
}

chain mangle_PRE_policy_allow-host-ipv6_allow {
}

chain mangle_PRE_policy_allow-host-ipv6_post {
}

}

That does already masquerade all IPv4 traffic (that does not go to localhost).

Thanks. How can I remove it? I'm thinking of setting up NAT here and then setting up a-rex on Rocky without NAT.

I would:

firewall-cmd --permanent --zone=public --remove-masquerade

(But I would also use different zones for the outside and inside interfaces.)
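
A rough sketch of that two-zone layout, assuming enp2s0 faces the outside and enp3s0 faces the 10.44.0.0/16 network (swap them if it is the other way around). This only moves the interfaces; the open port and the forward-ports would then also have to be added to the appropriate zone:

firewall-cmd --permanent --zone=external --change-interface=enp2s0
firewall-cmd --permanent --zone=internal --change-interface=enp3s0
firewall-cmd --reload

The built-in external zone already has masquerade enabled, so internal traffic leaving through enp2s0 would be NATed without the catch-all masquerade on public.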

And NM on Rocky is completely broken. It does not see the network cards. When you ping a network card, the request comes from some IP address that does not exist at all, and naturally there is no response. I'm thinking of installing Rocky without a graphical environment and then manually adding XFCE. By the way, NM in GNOME on Debian 12 has the same bug, while NM in MATE works fine.
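
If you want to check what NetworkManager itself sees, without any GUI applet, nmcli works on a plain text console:

nmcli device status
nmcli connection show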

It seems very inconvenient to me that the ISO only comes with two such cumbersome graphical environments as GNOME and KDE (moreover, they are unstable). For example, EndeavourOS lets you choose Xfce, Plasma, GNOME, MATE, Cinnamon, Budgie, LXQt, LXDE, or i3-wm. I use Red Hat clones only because they are the most suitable for NorduGrid computing clusters.

Rocky has what RHEL has, and RHEL has only GNOME. KDE is “third party”, so an ISO with it is not plain Rocky.

On typical computing clusters there is no GUI installed at all.

I put a GUI on the control node because I need to use Ganglia, and it is graphical only. Ganglia is a scalable distributed system for monitoring clusters of parallel and distributed computing and cloud systems with a hierarchical structure. It lets you track statistics and history (processor load, network load) of the computations in real time for each of the monitored nodes.

Installed Rocky without a GUI. Everything is fine, except that after I changed the port in sshd_config to 11954 I can't connect via ssh. But sshd.service is active, enabled and running: "Server listening on 0.0.0.0 port 11954", "Server listening on :: port 11954". SELinux is disabled. And still "no route to host", although ping to the host works.

I stopped firewalld and all is good :smile:

This thread did start with “How to open a port with FirewallD?” Surely you can use that solution?
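
That is, something along these lines, using the port from your sshd_config (with SELinux disabled you don't need to relabel the port; otherwise semanage would also be involved):

firewall-cmd --permanent --zone=public --add-port=11954/tcp
firewall-cmd --reload
firewall-cmd --zone=public --list-ports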


Don't you look at the Ganglia data with a browser? The “control host” requires only an HTTP server for that, not any GUI DE – if you can run the browser on, say, a laptop.

And I need a browser to renew the cluster's and users' certificates on the certification centre site and upload them to the server. It's more convenient for me.

It's more convenient for me to work with iptables, because all the ARC installation instructions are focused on it, for example:

Configure Firewall

Different ARC CE services open a set of ports that should be allowed in the firewall configuration.

To generate iptables configuration based on arc.conf, run:

[root ~]# arcctl deploy iptables-config

Does that script write to a file or to the kernel?
It is possible to translate iptables rules into nf_tables rules.
See Chapter 2. Getting started with nftables Red Hat Enterprise Linux 9 | Red Hat Customer Portal
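
For a one-off translation you can feed individual iptables rules to iptables-translate; the port here is just a placeholder, not one of the actual ARC ports:

iptables-translate -A INPUT -p tcp --dport 2811 -j ACCEPT
nft add rule ip filter INPUT tcp dport 2811 counter accept

The second line is the nftables equivalent the tool prints; such lines can be collected into a file and loaded with nft -f.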


Why do you say that now and not in the OP?

Because I thought that firewalld would be convenient to use, but it turned out to be even less convenient than ipfw on FreeBSD. I remember setting up IPFW about ten years ago. It's true that it has been running on 2 FreeBSD servers continuously for 10 years without failures. It's hard to set up but easy to manage. But in FreeBSD 14 rinetd started to fail, so I wanted to protect myself with a firewall on the Linux server. Those FreeBSD servers are not part of a cluster, though. Now I'm trying to move the cluster from CentOS 6 to Rocky 9. I think the firewall on the cluster can be disabled completely; it only connects the nodes.