I’m implementing Rocky IdM into our new Rocky 9 releases. I have a new HPC cluster where I have the IdM client working for login to the main server. But I’m not clear on how to forward that to all the servers in the cluster. I suspect I need to install the ipa client to the cluster image, but how do I make sure ALL the cluster nodes are added in? I don’t want to do one and have all the others locked out.
You can't do this without creating security issues within your cluster/infrastructure (and you shouldn't try). As a best practice, each computer should have its own entry in IPA.
If you want to make sure all nodes are added into IPA, you need to run a join on each individual system (automated, if need be). I've guided clients at my $dayjob through the following approaches, and seen them used in practice:
- Pre-create the hosts in IPA
  - Create each host in IPA ahead of time and set a one-time password
  - That password is then used with the ipa-client-install command (via a bash script or config management) to join the system
  - This assumes the hostnames match exactly
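For the pre-created-host approach, a minimal sketch. The hostnames, domain, realm, and passwords below are placeholders, not values from this thread:

```shell
# On the IdM server: pre-create each compute node with a one-time password.
for n in $(seq -w 1 16); do
    ipa host-add "node${n}.cluster.example.com" --password="OneTimeSecret${n}"
done

# On each node (e.g. in the image's first-boot script): join using that
# one-time password. It is consumed on first use.
ipa-client-install --unattended \
    --domain=cluster.example.com \
    --realm=CLUSTER.EXAMPLE.COM \
    --hostname="$(hostname -f)" \
    --password="OneTimeSecret01"
```

The one-time password avoids baking admin credentials into the image, which is the main appeal of this approach.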
- Use a script to join when the system comes up
  - Create an account in IPA that only performs host joins; it needs the appropriate RBAC permissions set
  - Use that account within the script to perform the joins
  - This is flaky at best and prone to errors
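A sketch of the join-account variant, assuming a hypothetical enrollment user `hostjoiner`. FreeIPA ships an "Enrollment Administrator" role that carries the host-enrollment privilege, which keeps the account limited to joins:

```shell
# On the IdM server: create the join-only account and grant it the
# enrollment role (role name is the FreeIPA default; adjust if customized).
ipa user-add hostjoiner --first=Host --last=Joiner
ipa role-add-member "Enrollment Administrator" --users=hostjoiner

# On each node at first boot: join using that principal. Credentials are
# shown inline for illustration -- in practice fetch them from a secrets
# store rather than baking them into the image.
ipa-client-install --unattended \
    --principal=hostjoiner \
    --password="JoinAccountPassword" \
    --domain=cluster.example.com \
    --realm=CLUSTER.EXAMPLE.COM
```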
- Use config management to join when the systems are up/available
  - Same as above: create an account in IPA with the correct RBAC for joins only
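For config management, the join should be idempotent so repeated runs are harmless. One common guard (an assumption on my part, not from this thread) is to check for `/etc/ipa/default.conf`, which `ipa-client-install` writes on enrollment:

```shell
#!/bin/bash
# Idempotent join wrapper, suitable as a "command" resource in config
# management. Account name, password, and domain are placeholders.
set -euo pipefail

# Skip entirely if this node is already enrolled.
if [ -f /etc/ipa/default.conf ]; then
    echo "Already enrolled in IPA; nothing to do."
    exit 0
fi

ipa-client-install --unattended \
    --principal=hostjoiner \
    --password="JoinAccountPassword" \
    --domain=cluster.example.com
```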
As these are HPC nodes and not directly connected to the main network, I need to pass the authentication through from the login node. I'll probably have to enter the nodes manually via the command line on the IdM server, since that's being rejected by the web tool. The trick is going to be figuring out what I need to open so that accounts can forward through when running jobs.
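For reference, these are the ports FreeIPA clients need to reach on the IdM server. A sketch for whatever firewalld-based gateway sits between the compute nodes and the server (the DNS and NTP ports only matter if the IdM server provides those services):

```shell
# HTTP/HTTPS (80,443), LDAP/LDAPS (389,636), Kerberos (88,464 TCP+UDP),
# DNS (53 TCP+UDP, if IdM serves DNS), NTP (123/udp, if IdM serves time).
firewall-cmd --permanent \
    --add-port={80/tcp,443/tcp,389/tcp,636/tcp,88/tcp,88/udp,464/tcp,464/udp,53/tcp,53/udp,123/udp}
firewall-cmd --reload
```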
What if …
you don’t set up IdM on the compute nodes at all, and simply create/manage accounts (with no password set) via config management, using basic user data that you fetch from IPA? That is what I do on one HPC cluster. Granted, that cluster has a very static set of users, so there is no automated “update users” process.
Users reach the compute nodes from the login node via SLURM (munge auth) or with ssh keys (home directories are on the cluster’s internal NFS), so there is no need for IPA authentication anywhere except the login nodes.
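A rough sketch of that pattern, assuming illustrative usernames and `ipa user-show --raw` output parsing (verify the attribute names against your IPA version before relying on this):

```shell
# On a management host: pull uid/gid for each known user out of IPA and
# create matching local, passwordless accounts on a compute node image.
for u in alice bob; do
    uid=$(ipa user-show "$u" --raw | awk '/uidnumber:/ {print $2}')
    gid=$(ipa user-show "$u" --raw | awk '/gidnumber:/ {print $2}')
    # Ensure a matching group exists, then the account. No password is set;
    # access comes via SLURM/munge or ssh keys on the NFS homes.
    groupadd --gid "$gid" "$u" 2>/dev/null || true
    useradd --uid "$uid" --gid "$gid" --no-create-home "$u" 2>/dev/null || true
done
```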
Not an option for my environment.
Got it working by adding the package on one of the login nodes so it wrote all the details to the image; now it works across the board. I just had to make sure all the various nodes were added into IdM.