How to configure multiple NICs in different subnets

Hi!

I’m new to Rocky Linux and taking my first steps to get it up and running. The plan is to use it as a server running Docker containers for the applications I currently run in different VMs (Proxmox). This means that I do have to use different vlans as I separated my clients into different categories.

To get this I used Mr. Google and followed this little howto:
Configure two network cards in a different subnet on RHEL 6, RHEL 7, CentOS 6 and CentOS 7 | Jensd's I/O buffer

Doing so, I cannot get both cards usable… It’s always just one subnet that works. The other one gets an IP by DHCP but isn’t reachable. It makes no difference whether I use just one virtual network device with two vlans on top, or two virtual network devices, each in its proper vlan.

# cat ./rule-ens18
from 192.168.12.1/32 tab 1 priority 100

# cat route-ens18
192.168.12.0/24 dev ens18 tab 1
default via 192.168.12.1 dev ens18 tab 1

# ip route show tab 1
default via 192.168.12.1 dev ens18 
192.168.12.0/24 dev ens18 scope link

# cat ./rule-ens19
from 10.246.17.1/32 tab 2 priority 200

# cat route-ens19
10.246.17.0/24 dev ens19 tab 2
devault via 10.246.17.1 dev ens19 tab 2

# ip route show tab 2
10.246.17.0/24 dev ens19 scope link

The ip route show tab 2 isn’t showing the ‘default via…’ setting - but I don’t know the cause for this…

Is there someone who knows about a howto I can use with Rocky Linux 8.5 to get this setup configured?

Kind regards,

DerTom

First, EL6 configured the network with initscripts. EL7 and EL8 do it with NetworkManager.service by default, although network.service is still available (but deprecated in EL8). Most guides assume network.service. Not that it matters here.

Second: being a member of multiple subnets is not a problem. What is a problem is if you are not happy with one default route for your server. “There can be only one!” Outbound connections (beyond link-local) will always use the default route.

What you are doing is policy-based routing. That changes which route services use to reply to incoming connections. You have a typo in route-ens19: devault where it should be default. That is why table 2 has no default route.
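
Since EL8 uses NetworkManager, the same policy routing can also be expressed with nmcli. A rough sketch with the addresses from your paste, assuming your connection is actually named ens19 (untested here; the table number and the from-match are examples, adjust them to what you want replies to match):

# nmcli connection modify ens19 ipv4.routes "10.246.17.0/24 table=2, 0.0.0.0/0 10.246.17.1 table=2" ipv4.routing-rules "priority 200 from 10.246.17.0/24 table 2"
# nmcli connection up ens19

Your existing route-ens19/rule-ens19 files do the same job once the typo is fixed.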

Linux is supposedly able to support network namespaces, but I don’t know how to set them up. A process in a separate namespace would use the network config of that namespace.
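
From the ip-netns man page, something along these lines should do it (untested; the namespace name and the host address are placeholders, the gateway is taken from the earlier paste):

# ip netns add streaming                                          # create a separate namespace
# ip link set ens19 netns streaming                               # move the second NIC into it
# ip netns exec streaming ip addr add 10.246.17.50/24 dev ens19   # placeholder host address
# ip netns exec streaming ip link set ens19 up
# ip netns exec streaming ip route add default via 10.246.17.1    # that namespace gets its own default route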

Thank you for your quick reply!

Typo is corrected…

I will start working my way into IP routing and see where this leads me…

My best wishes from Germany

You did not say much about your network setup…

On policy based routing:

From what I understood, you have two untagged networks and are trying to configure two NIC devices (one in each network).

  • How do these two untagged networks relate to each other? (is there routing between them)
  • Where is/are the network gateway/s?
  • Is the Proxmox server another server or is it being replaced? (how is it connected to RL?)
  • Is the RL a VM or hardware?
  • Where are the RL NICs connected? (hardware switch? vlans?)
  • How is the network for the VMs configured? (bridged, routed)
  • How is routing done in/between your networks?
  • How is the networking for the containers configured? (by default it binds to all available interfaces)
  • Where are the containers? On the hypervisor or in a dedicated VM?
  • Why do you think you need policy based routing? It has a very specialized use - there is nothing in your post suggesting that you need it.

(basically a diagram of your entire network is needed)

This means that I do have to use different vlans

Do you mean a real vlan (with vlan id) or maybe just a virtual lan? (routed network)

My preferred way of setting up networking:

  • managed switch (with LAG/LACP)
  • hardware servers are connected to switch with all available NIC devices with LAG/LACP configured
  • on hardware servers (RL) I use openvswitch to configure the LACP part, together with as many virtual NICs/VLANs as I want (vlans are shared with the hardware switch) - see the sketch after this list
  • while NetworkManager is OK, recently I became a fan of systemd-networkd for servers (when the network configuration does not change)
  • hardware servers are usually hypervisors; openvswitch options for VM networking are virtually unlimited (bridging a specific vlan, private vlans across multiple hardware servers, trunking everything, or even putting openvswitch in the VM and bridging it there)
  • for my convenience, VM networking is bridged directly to the vlan on the hardware switch (libvirt bridge to openvswitch bridge vlan over LAG/LACP)
  • for routing between vlans I use dedicated hardware firewall (VM could be used too)
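
A rough sketch of that openvswitch part, assuming two physical NICs eno1/eno2 and vlans 11/12 (all names are examples; the hardware switch side has to be configured for LACP and the same vlans):

# ovs-vsctl add-br br0                                        # main bridge
# ovs-vsctl add-bond br0 bond0 eno1 eno2 lacp=active          # LAG towards the hardware switch
# ovs-vsctl set port bond0 bond_mode=balance-tcp              # balance traffic across the LACP members
# ovs-vsctl add-port br0 vlan11 tag=11 -- set interface vlan11 type=internal   # internal access port in vlan 11
# ovs-vsctl add-port br0 vlan12 tag=12 -- set interface vlan12 type=internal   # internal access port in vlan 12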

If you are using docker - have a look at podman - it is an upgrade.

Hi j6tqgf.s!

Regarding my network:

WAN
<->
FritzBox
<->
opnSense
<->
Unifi 10GB-Switch <=> (2x10GB bond) Proxmox
<->
Unifi 1G Switch
<->
Clients

Proxmox
→ NAS - Ubuntu VM
→ Unifi Network Controller - Ubuntu VM
→ TV-Server (tvheadend) - Ubuntu VM
(=> RockyLinux VM (should replace the VMs NAS, Unifi, TV))
→ Win11 VM

  • In total I use 1 untagged and 4 tagged vlans (private network, VPN, Streaming, Management) at the moment.
  • There are 3 gateways configured with opnSense (WAN/FritzBox, VPN, Streaming).
  • The Rocky Linux VM has two virtio network cards (on the 2x10GB bond, vlan-aware) which use the vlan IDs of the first 2 vlans I would like to use/test (1x VPN, 1x Streaming).
  • Routing and the DHCP service are done by opnSense.

The first thing for me to do was to get the two NICs with the two subnets up and running. I’m thinking about using macvlan with the Docker containers.
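
As an illustration of the macvlan idea, a rough sketch using the streaming subnet from my earlier paste (network name and parent interface are just examples and would have to match the final setup):

# docker network create -d macvlan --subnet=10.246.17.0/24 --gateway=10.246.17.1 -o parent=ens19 streaming-net
# docker run --rm --network streaming-net alpine ip addr      # check that the container gets an address from that subnet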

I just ordered the book RHCSA Red Hat Enterprise Linux 8 (UPDATED) by Asghar Ghori…

Will have a look at podman - using Portainer at the moment.

So, here is what I understood:

used networks: (I need a vlan number to grasp it)
vlan1  - untagged
vlan10 - private
vlan11 - VPN
vlan12 - streaming
vlan13 - management

physical network:
------------   ---------------                -----------
| opnsense | - | unifi1 10GB | - (bond x2) -  | proxmox |
------------   ---------------                -----------
                 |
               --------------
               | unifi2 1GB | - clients
               --------------

virtual network:
-----------               ---------
| proxmox | - (bond x2) - | RL VM |
-----------    vlan11     ---------
               vlan12

I have many questions:

  • please confirm/correct the above
  • How are the vlans connected to opnsense? In particular the VPN and streaming vlans? Why do you need a gateway for the VPN/Streaming vlans? Are there more routers you did not mention? I would assume that all vlans are directly attached to opnsense…
  • Is the bond between proxmox and the RL VM really needed? It is virtual anyway…
  • What is the purpose of the VPN vlan?
  • Why are your servers (VMs) in more than one functional vlan? One functional vlan + management vlan is all you need (plus access through the opnsense firewall).
  • How do you configure networking on proxmox? A dedicated bridge for each vlan, or a bridge for a trunk? Anyway, I assume that all (relevant) vlans are configured (and available) on proxmox. I do not use proxmox, but I think by default they are using kernel bridges… so, nothing special here.
  • I am assuming you are not using NAT or routed networks for the VMs?
  • Where is docker? The convenient way would be to put it inside a VM.

From the RL VM’s point of view there is nothing special here: just assign IP addresses and all should work.
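
For example, with nmcli something like this should be enough (a sketch, assuming DHCP on both interfaces and that only ens18 should provide the default route; connection names are examples):

# nmcli connection add type ethernet con-name ens18 ifname ens18 ipv4.method auto
# nmcli connection add type ethernet con-name ens19 ifname ens19 ipv4.method auto ipv4.never-default yes
# nmcli connection up ens18 ; nmcli connection up ens19

ipv4.never-default keeps the second interface from installing a competing default route - the “there can be only one” point from earlier.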

What exactly is not working?

but isn’t reachable

From where?

  • Network is looking good.

  • All vlans are attached to opnSense. OpnSense is configured as a VPN client - it is needed for work. The streaming vlan is to separate all of the Android boxes and TVs from the private vlan.

  • The 10GB bond is configured as a vlan-aware Linux bridge on Proxmox.

  • At the moment docker is inside the NAS-VM.

The thing that’s not working is that only one configured network device is reachable/answers a ping. It makes no difference which of the following options I configure (a sketch of the tagged-interface setup follows the list):

  1. One untagged virtio Network-Card (vlan 11 - ens18) and configuring ens18.13 (tagged, vlan 13)
  2. Two untagged virtio Network-Cards (vlan 11 - ens18 and vlan13 - ens19)
  3. One untagged virtio Network-Card (no vlan - ens18) and two tagged vlans (ens18.11 and ens18.13).
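
For reference, a tagged sub-interface like ens18.13 can be created with nmcli roughly like this (a sketch only; the connection name is an example and DHCP from opnSense is assumed):

# nmcli connection add type vlan con-name ens18.13 ifname ens18.13 dev ens18 id 13 ipv4.method auto
# nmcli connection up ens18.13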

Both vlans/networks get an IP from the DHCP server (opnSense), but only one answers ping requests from my main machine. Pinging the vlan gateways from within the Rocky Linux terminal, only one gateway can be reached. The DHCP leases only show one lease, not both.

I’d say that one could/should start with the topology of the logical network – the subnets and the gateways between them. How the connections are implemented (separate physical wires, VLANs in a wire, proxmox/podman config, etc.) is a detail that merely has to make the logical topology happen.

Disclaimer: I have no idea what “proxmox” is.

On the not working part:

  • I am assuming that the configuration of the interfaces (proxmox - RL VM) is OK. It should be straightforward and the same as in the other VMs… Otherwise this is the place where it can go wrong.
  • run a test ping between proxmox and the VM (there are already two firewalls involved) (assuming proxmox has an interface in the relevant vlans - otherwise it does not matter)
  • run a test ping from the opnsense box (a third firewall) (pfsense has a ping utility - opnsense should have it too)
  • opnsense firewall - you need to make rules for outgoing, incoming (and possibly forwarded) icmp on all vlan interfaces that need it (you need to define the vlan interfaces first) (on pfsense this is not enabled by default)
  • opnsense firewall - make sure DHCP (in and out) is allowed on the relevant interfaces
  • after pings work from opnsense you need to make sure that routing is enabled on opnsense between the required vlans
  • traceroute may be of help (see the sketch after this list)
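
A sketch of those checks as they could be run on the RL VM, with the addresses from the earlier paste (adjust interface names and addresses; tcpdump/traceroute need to be installed):

# ping -I ens19 10.246.17.1       # ping the second gateway while forcing the second interface
# ip -4 addr show ens19           # confirm the DHCP lease actually landed on the interface
# ip rule show                    # confirm the policy rules are installed
# ip route show table 2           # confirm table 2 now has its default route
# tcpdump -ni ens19 icmp          # watch whether echo requests/replies cross the interface at all
# traceroute 10.246.17.1          # see where the packets stop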

There are still some design choices that are not clear to me…
My thoughts:

  • you do not need a VPN vlan (unless this is a site-to-site VPN…)
  • if the VPN client (for work) is needed only on one system (the main machine?), I’d configure it only on that one system instead of for the entire network…
  • you do not need a VM connected to more than one vlan (management vlan excluded), otherwise you will have routing/firewall/security issues (and you are not really gaining anything)
  • (below proposal) all clients (from the private and streaming vlans) can access a VM in the server1 vlan - you just need to set up firewall rules. It can be done per vlan or per client (IP address). So, I would put all VMs in their own vlan.
  • (below proposal) depending on the protocols you want to use, there may be some complications… With auto-discovered services (mdns, avahi, bonjour…) I am not sure how well it works across networks. (netbios bridging should work, and streaming protocols should work too)

proposed logical vlan setup:
------------   -----------
| opnsense | - | private | -  ( main machine )
------------   -----------
             - | server1 | - all VM
               -----------
             - | server2 | - secure VM
               -----------
             - | streaming | - (android clients)
               -------------
             - | management |
               --------------