Up until now I’ve been using a simple Bash script to perform post-install configuration on my servers. Here’s what the script for Rocky Linux 8.4 looks like:
Now I wonder whether Ansible would be better suited for the job. I’m not familiar with the technology; from what I understand, it’s fairly complex to grasp, and I wonder if it’s worth the hassle.
What’s your take on this? Keep the bone-headed approach, or invest the time to learn Ansible? (And if so, what documentation would you recommend?)
I only have a small network, so take this with a grain of salt. A large enterprise of thousands of machines may do it differently.
Personally, I use kickstart with a post-install script to do the initial configuration.
A lot of the post-install process is just untarring a tarball of common files (e.g. repo configuration, root user config, daily cron jobs for housekeeping, backups, etc.).
That builds the “base” OS and is the starting point.
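As a sketch, a kickstart `%post` section doing that tarball step might look like this (the URL, tarball name, and log path are placeholders, not my actual setup):

```
%post --log=/root/ks-post.log
# Pull down and unpack the tarball of common files over the fresh install
curl -o /root/common.tar.gz http://repo.example.com/common.tar.gz
tar xzf /root/common.tar.gz -C /
%end
```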
I then use Ansible for other stuff (e.g. I have a playbook that makes the machine into a webserver, another one to make it into a DNS server, a mail server, etc.). I also have other playbooks for managing things like TLS certs. I don’t do this in kickstart because most of the time I’m spinning up machines for test purposes (“let’s build a SyncThing cluster”), which requires a base to experiment from (and from which a new Ansible playbook gets created).
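For illustration, a minimal “make it a webserver” playbook in that spirit could look like the following (group, package, and template names are made up, not my actual playbook):

```yaml
# webserver.yml: turn a base machine into a webserver
- hosts: webservers
  become: true
  tasks:
    - name: Install httpd
      ansible.builtin.dnf:
        name: httpd
        state: present
    - name: Deploy vhost config
      ansible.builtin.template:
        src: vhost.conf.j2
        dest: /etc/httpd/conf.d/vhost.conf
      notify: Restart httpd
  handlers:
    - name: Restart httpd
      ansible.builtin.service:
        name: httpd
        state: restarted
```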
I found Ansible a couple of years ago and now use it heavily. A strong point is that the “desired state” of a machine is stored in the inventory: essentially an easy-to-restore backup of your configuration.
Another point is that config is not set in stone; there are always changes. With Ansible I can both do the initial install and deploy changes. And since the changes go into the play, the next install will get them just like the rest of the machines.
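A sketch of what such an inventory might look like in YAML (hostnames and group names are placeholders): rebuilding or updating a machine then amounts to re-running the playbooks against this file.

```yaml
# inventory.yml: the recorded "desired state" grouping of the machines
all:
  children:
    webservers:
      hosts:
        web1.example.com:
        web2.example.com:
    dnsservers:
      hosts:
        ns1.example.com:
```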
A few years ago when I started looking at config management, I chose SaltStack over Ansible, but I believe config management is where things are going so you will gain benefit by learning and understanding the concepts. Whether it’s Ansible, SaltStack, Puppet, or Chef, they all do the “same thing” and use the same concepts. Once you have one you would be able to transfer to others relatively easily.
I would perform a minimal install on the systems and let Salt install and configure everything else. I once had to rebuild a webserver that was part of a load-balanced cluster. It was so simple: all I had to do was kick it off, and the system set up remote syslog, the firewall, installed packages, applied configurations, SSL, security settings, SELinux policies, etc. Even the things you may have forgotten you did on a system over time, such as a workaround for a specific issue.
Another benefit is config enforcement. The systems can periodically enforce a standard configuration. This can be helpful if you ever have to deal with any sort of auditors and they ask how often you check/ensure… You can answer that you have a system that does it at X interval and corrects anything.
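In Salt this is the scheduled highstate; a common Ansible equivalent is a cron job running ansible-pull. A sketch (the repo URL, playbook name, and interval are placeholders):

```
# /etc/cron.d/ansible-pull: re-apply the site playbook every 30 minutes
*/30 * * * * root ansible-pull -U https://git.example.com/infra.git site.yml >> /var/log/ansible-pull.log 2>&1
```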
You don’t have to throw away your bash script; just tweak it a bit for Ansible. Ansible can run modules written in bash (or any other scripting language) just as easily as Python ones. Custom modules can simply be placed in a directory called ‘library’ under the directory the playbook is in (or a directory pointed to by the ANSIBLE_LIBRARY environment variable).
The only ‘tweaks’ are that results from the script must be a single valid JSON structure rather than the free-form echoes bash scripts normally use, and that the varname: value settings from the playbook are delivered to the script on the target machine as a file (its path is passed as the first argument) in a form a bash script can load with ‘source $1’.
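A minimal sketch of such a bash module, simulating how Ansible hands it its arguments (the module name, the motd_text parameter, and the /tmp paths are all hypothetical, chosen just for illustration):

```shell
# Sketch of an Ansible module written in bash. Ansible writes the
# playbook's key=value arguments to a file and passes its path as $1;
# `source "$1"` turns each key into a shell variable. Output must be
# one valid JSON object, not free-form echoes.
cat > /tmp/motd.sh <<'EOF'
#!/bin/bash
source "$1"                          # load motd_text=... from Ansible
echo "${motd_text}" > /tmp/motd.example
printf '{"changed": true, "msg": "motd set to %s"}\n' "${motd_text}"
EOF

# Simulate what Ansible does: write an args file and call the module.
echo 'motd_text="Authorized use only"' > /tmp/motd.args
bash /tmp/motd.sh /tmp/motd.args
```

In a real run you would drop `motd.sh` into the playbook’s `library/` directory and call it as a task; the JSON on stdout is what Ansible parses for the changed/failed status.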
Google can find you lots of examples; personally, my custom modules are all bash, as I have not got around to learning Python yet.
But if you have spent a lot of time creating a bash script, you can still use it as an ansible module with a few tweaks.