Rocky 9 - error: can't find md-cluster module

Hello, I’m trying to set up a GFS2 cluster. Since lvmlockd doesn’t support clustered mirroring, I’m planning to build the RAID1 layer with cluster-MD and then run the striped LVs across those mirrors (rough sketch below).
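For context, the layering I’m aiming for is roughly this. The second mirror, the VG/LV names, and the sizes are just placeholders; I haven’t gotten past the first command yet:

$ sudo mdadm --create /dev/md10 --bitmap=clustered --metadata=1.2 --raid-devices=2 --level=mirror /dev/sdb /dev/sdc
$ sudo mdadm --create /dev/md11 --bitmap=clustered --metadata=1.2 --raid-devices=2 --level=mirror /dev/sdd /dev/sde
$ sudo vgcreate --shared vg_shared /dev/md10 /dev/md11          # shared VG managed by lvmlockd
$ sudo lvcreate --stripes 2 --size 100G --name lv_gfs vg_shared # stripe across the two mirrors
$ sudo mkfs.gfs2 -p lock_dlm -t core:gfs0 -j 2 /dev/vg_shared/lv_gfs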

However, this happens:

$ sudo mdadm --create /dev/md10 --bitmap=clustered --metadata=1.2 --raid-devices=2 --level=mirror /dev/sdb /dev/sdc
mdadm: RUN_ARRAY failed: No such file or directory

The log file shows this:

Mar 19 16:08:17 node1 kernel: md/raid1:md10: not clean -- starting background reconstruction
Mar 19 16:08:17 node1 kernel: md/raid1:md10: active with 2 out of 2 mirrors
Mar 19 16:08:17 node1 kernel: can't find md-cluster module or get its reference.
Mar 19 16:08:17 node1 kernel: md10: Could not setup cluster service (-2)
Mar 19 16:08:17 node1 kernel: md10: failed to create bitmap (-2)
Mar 19 16:08:17 node1 kernel: md: md10 stopped.

I’ve worked on this on and off and can’t seem to find anything on how to get or enable the md-cluster module. It also looks like it’s part of the mainline kernel, according to this:
https://www.kernel.org/doc/html/latest/driver-api/md/md-cluster.html
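In case it’s relevant, these are the checks I’ve been running to see whether the stock Rocky 9 kernel even ships the module (assuming the usual /boot/config-* and /lib/modules layout):

$ grep CONFIG_MD_CLUSTER /boot/config-$(uname -r)
$ find /lib/modules/$(uname -r) -name 'md-cluster*'
$ sudo modprobe md-cluster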

Could someone please tell me what I’m doing wrong, or how to fix this? Would the kernel-ml from ELRepo fix this?
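Before swapping kernels I was also going to check whether any available package ships the module at all (the glob is just a guess at the compressed .ko name):

$ sudo dnf provides '*/md-cluster.ko*'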

Cluster state:

$ sudo pcs status --full 
Cluster name: core
Status of pacemakerd: 'Pacemaker is running' (last updated 2023-03-19 16:28:01 -04:00)
Cluster Summary:
  * Stack: corosync
  * Current DC: node1 (1) (version 2.1.4-5.el9_1.2-dc6eb4362e) - partition with quorum
  * Last updated: Sun Mar 19 16:28:01 2023
  * Last change:  Sat Mar 18 22:43:29 2023 by root via cibadmin on node2
  * 2 nodes configured
  * 6 resource instances configured

Node List:
  * Online: [ node1 (1) node2 (2) ]

Full List of Resources:
  * node1_fence	(stonith:fence_ilo3):	 Started node1
  * node2_fence	(stonith:fence_ilo3):	 Started node2
  * Clone Set: locking-clone [locking]:
    * Resource Group: locking:0:
      * dlm	(ocf:pacemaker:controld):	 Started node1
      * lvmlockd	(ocf:heartbeat:lvmlockd):	 Started node1
    * Resource Group: locking:1:
      * dlm	(ocf:pacemaker:controld):	 Started node2
      * lvmlockd	(ocf:heartbeat:lvmlockd):	 Started node2

Migration Summary:

Tickets:

PCSD Status:
  node1: Online
  node2: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled