Install using kickstart and %post rsync

I am trying to sync a couple of files at install time using a kickstart %post script:

export RSYNC_PASSWORD="password"
/usr/bin/rsync -rltgo rsync://rsync@myrsyncdeamon.local/important-files/ /important-files/

Is there any way to secure the password in the kickstart file, e.g. to have it encrypted?

I don’t know the answer to that question.

I do use a slightly different approach:

  1. Kickstart deploys an SSH public key for the root user
  2. After the install I run an Ansible playbook from my “control host” that completes the configuration. (Ansible gets into the target system via SSH.)
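Sketched out, those two steps could look like this — the target hostname, playbook name, and key body are placeholders for illustration:

```shell
# Step 1 – in the kickstart file (a kickstart directive, not a shell command):
#   sshkey --username=root "ssh-ed25519 AAAA...key... ansible@control-host"

# Step 2 – on the control host, once the install has finished
# (trailing comma makes the hostname an ad-hoc inventory):
ansible-playbook -u root -i newhost.example.com, site.yml
```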

That obviously requires a control host (for example, myrsyncdeamon.local) that can run Ansible. In your case, if that rsync is all you do, you could run rsync on myrsyncdeamon.local to push (rather than pull) the files, provided the target has your key in authorized_keys. The downside is that this has to be done as a separate step, unlike %post.

Can you provide an example of how the kickstart will deploy an SSH public key?
I am sure this could work as well. The less user interference needed for the host controller to access the new install (or for the new install to access the host controller), the better.

I don’t want to babysit VMs while they install.

I do have a host controller that is running as an rsync daemon. I have been looking into Puppet, but it looks to be overkill for what we are trying to accomplish.

I seem to have this at the end of my kickstart file:

sshkey --username=root "ssh-ed25519 jlehtone@moon"

The sshkey directive is described in the kickstart documentation.
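If an installer lacks the sshkey directive, the same result can be achieved by hand in %post. This is only a sketch, with a placeholder key body:

```shell
%post
# Deploy the control host's public key for root (key is a placeholder):
mkdir -p /root/.ssh
chmod 700 /root/.ssh
echo "ssh-ed25519 AAAA...placeholder... user@control-host" >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
# Fix SELinux labels, if applicable:
restorecon -R /root/.ssh 2>/dev/null || true
%end
```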

I got into Ansible because it was used by someone to set up (and maintain) HPC clusters, and it is available in the distro (package ansible-core). IMHO, the main benefit of Ansible and Puppet is not that the initial install is automated, but that one can update/maintain the config of existing systems, and one has a logical copy of each system’s config – a backup that is trivial to “restore”.


Looking at the documentation, this could work, but it works in the opposite direction: it gives the controller access to the host server, whereas currently we give the host access to the controller.

Reason: customers with on-site servers and the impossible task of getting SSH access. It’s easier to get access out of a client’s network than into it.

My biggest issue is security related to the kickstart file: if anyone gets access to the kickstart file, they can get access to the files.
With sshkey, the kickstart file won’t allow access to the controller, which fixes the security issue.

I am also doing all of this to update/maintain the config of our servers. For example: a simple change to httpd.conf can be synced to all the servers. Currently I use cron and a bash script to pull the changes from an rsync daemon server.
Puppet, Ansible and rsyncd, to me, all do exactly the same thing, just in a different way.
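That cron + bash pull setup could look roughly like this — the module name, schedule, and paths are guesses for illustration:

```shell
# /etc/cron.d/config-sync — pull config changes every 15 minutes as root
*/15 * * * * root /usr/local/sbin/pull-config.sh

# /usr/local/sbin/pull-config.sh
#!/bin/sh
/usr/bin/rsync -rlt rsync://rsync@myrsyncdeamon.local/httpd-config/ /etc/httpd/conf/ \
  && systemctl reload httpd
```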

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.