Pacemaker VIP assignment

Hi,

I was using the configuration below to create the resources and point the VIP to a healthy node. This configuration works fine on Rocky 8:

pcs resource create cccd_vip ocf:heartbeat:IPaddr2 ip={cccd_vip_ipv4} cidr_netmask={cccd_vip_subnet_ipv4} nic=eth0 meta failure-timeout=15 op monitor interval=5s
pcs resource create cccd_vip_ipv6 ocf:heartbeat:IPaddr2 ip={cccd_vip_ipv6} cidr_netmask={cccd_vip_subnet_ipv6} nic=eth0 meta failure-timeout=15 op monitor interval=5s

pcs resource create app_healthcheck ocf:nokia:rest_healthcheck url=http://localhost:8080/status meta failure-timeout=15 op monitor interval=1s
pcs resource clone app_healthcheck globally-unique=false clone-node-max=1 meta failure-timeout=15 migration-threshold=3

if cccd_vip_ipv4 pcs constraint colocation add cccd_vip with app_healthcheck-clone INFINITY
if cccd_vip_ipv6 pcs constraint colocation add cccd_vip_ipv6 with app_healthcheck-clone INFINITY
if cccd_vip_ipv4 pcs constraint order start app_healthcheck-clone then start cccd_vip kind=Optional
if cccd_vip_ipv6 pcs constraint order start app_healthcheck-clone then start cccd_vip_ipv6 kind=Optional

On Rocky 9, using the same cluster.cfg file, the VIP was never assigned. After changing to kind=After, pcs status looks fine, but the VIP points to an unhealthy node. It is not clear what changes are required in the cluster file so that the VIP always points to a healthy node, regardless of whether the health-check response is an error or there is no response at all.

Thanks & Regards,

It would be easiest if the person who built the cluster on v8 refactored it for v9.
But if you are refactoring it yourself anyway, why not just build the cluster from scratch?
From what you quoted (you say it works on v8):

pcs resource create cccd_vip ocf:heartbeat:IPaddr2 ip={cccd_vip_ipv4} cidr_netmask={cccd_vip_subnet_ipv4} nic=eth0 meta failure-timeout=15 op monitor interval=5s

you can remove:
nic=eth0 meta failure-timeout=15 op monitor interval=5s

The monitor operations and their timings are already defined globally.
But since you don't say how many nodes the cluster has, or whether it runs in symmetric or asymmetric mode, and I don't feel like auditing code written for a specific solution, I'll make a suggestion (assuming a 3-node cluster):
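You can check what is actually defined globally on your cluster before duplicating it per resource (a sketch; the `update` subcommand is available in the pcs versions shipped with Rocky 8/9, and the values shown are examples, not your configuration):

```shell
# Show cluster-wide resource meta defaults (e.g. failure-timeout, migration-threshold)
pcs resource defaults

# Show cluster-wide operation defaults (e.g. monitor interval/timeout)
pcs resource op defaults

# If nothing is set yet, define the values once globally instead of per resource,
# for example (example values only):
# pcs resource defaults update failure-timeout=15
# pcs resource op defaults update timeout=20s
```

With these in place, the per-resource `meta failure-timeout=15 op monitor interval=5s` clauses become redundant.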

pcs resource create cccd_vip ocf:heartbeat:IPaddr2 ip={cccd_vip_ipv4} cidr_netmask=32 clusterip_hash=sourceip clone clone-max=3 notify=true interleave=true ordered=true globally-unique=true

pcs constraint order start {resource_before_cccd_vip_ipv4} then {cccd_vip_ipv4}

Check whether there is really a reason to pin {cccd_vip_ipv4} with an INFINITY score to wherever app_healthcheck-clone is running:
if cccd_vip_ipv4 pcs constraint colocation add cccd_vip with app_healthcheck-clone INFINITY
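With an INFINITY colocation, the VIP should only be allowed on nodes where the health-check clone is running, so if it lands on an unhealthy node it is worth inspecting what the scheduler actually sees (a sketch; resource names are taken from your config):

```shell
# List all constraints currently in the CIB, with their IDs and scores
pcs constraint --full

# Show the allocation scores the scheduler computed for each resource/node
crm_simulate -sL

# Check whether the health-check monitor has actually recorded failures;
# with failure-timeout=15 a failcount may be cleared again after 15 seconds,
# which can let the VIP move back to a still-unhealthy node
pcs resource failcount show app_healthcheck
```

If the failcount is being reset by a short failure-timeout while the node is still unhealthy, that alone can explain the VIP returning to the wrong node.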

Isn't the idea rather to assign cccd_vip as a floating address to whichever node is elected master?

And lastly, check that your firewall (iptables or nftables) actually permits the cluster traffic, and decide which backend you want to standardize on, so that it stays easy to manage alongside Pacemaker.