I’m working on upgrading many clients from CentOS 7 to Rocky 8.5. One issue I need to consider is the application of “updates”.
With CentOS 7, the updates were in a separate repository. I could sync that repository down entirely and create additional update repositories containing only the updates for, say, testing and production. This allowed me to have an approval process: I would add the updates to the "testing" repo, and all my test machines would get them. Once everything was fine, I could sync testing -> production, and all the production machines would get the updates. This way, updates aren't installed immediately, only after I've done some testing. The last thing I want to do is install all the updates everywhere, only to find out there is some problem.
With Rocky 8, I guess I could keep symlinked copies of the whole Rocky 8 tree and only move packages in after they are approved, but I'm wondering if there is a better way?
Thanks for any suggestions.
Create your own Rocky repo, disable the default repos that come “built-in”. Point your client machines to your private repo. Populate that with whatever you want to use and call it a day.
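On the client side, that override might look something like the sketch below (the mirror hostname and paths are placeholders; adjust to your own layout, and disable the stock repos with `dnf config-manager --set-disabled baseos appstream` or by setting `enabled=0` in the shipped repo files):

```ini
# /etc/yum.repos.d/internal.repo -- hypothetical internal mirror definition
[internal-baseos]
name=Internal Rocky 8 BaseOS
baseurl=http://repo.example.lan/rocky/8/baseos/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial

[internal-appstream]
name=Internal Rocky 8 AppStream
baseurl=http://repo.example.lan/rocky/8/appstream/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
```

Keeping `gpgcheck=1` with the official Rocky key means clients still verify package signatures even though the packages come from your private mirror.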
I would really consider using
dnf reposync -g --delete -p /path/to/baseos --download-metadata --repoid=baseos
dnf reposync -g --delete -p /path/to/appstream --download-metadata --repoid=appstream
dnf reposync -g --delete -p /path/to/powertools --download-metadata --repoid=powertools
dnf reposync -g --delete -p /path/to/extras --download-metadata --repoid=extras
Have it on a cron if you'd like, and then point your repo files to the right location on your network. That's the simple approach.
Note: Do not run createrepo on these synced repositories. It will destroy dnf groups and other important metadata.
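To run those syncs on a schedule, a cron entry along these lines would do (the schedule and `/srv/repos` paths are just placeholders):

```
# /etc/cron.d/reposync -- example: sync mirrors nightly
30 2 * * * root dnf reposync -g --delete -p /srv/repos/baseos --download-metadata --repoid=baseos
40 2 * * * root dnf reposync -g --delete -p /srv/repos/appstream --download-metadata --repoid=appstream
```

Staggering the start times keeps the jobs from hammering the upstream mirror simultaneously.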
Frank - that’s what I was thinking…just wondered if there was already something to automate the process. I’m really surprised there doesn’t seem to be.
Louis - if I do this then I'm syncing the whole thing. What if there's an update that, for whatever reason, I don't want all the clients to install? I was hoping to sync, delete the updates I don't want all the clients to apply, and then rerun createrepo on each repo dir, but you're saying this will break things. I guess I could instead sync everything but keep an exclude list for special cases: rather than running a plain yum update on the client, I run a yum update that gets passed the exclude list. It's rare to exclude an update, but if an update breaks important functionality I may have to.
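One way to centralize such an exclude list, assuming the clients use dnf, is the `excludepkgs` option in the client's dnf configuration (the package globs below are made-up examples):

```ini
# /etc/dnf/dnf.conf -- excludes pushed to each client, e.g. via config management
[main]
excludepkgs=badpackage* kernel-4.18.0-348*
```

For a one-off run you can instead pass the list on the command line, e.g. `dnf update --exclude='badpackage*'`.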
I don’t see how you would automate it. Your objective is to hand-select everything that goes into that repo. If you want to use the standard repos then it’s automatic but that’s not your objective.
Config management systems could make such a procedure easier. An example Ansible task:
- name: Update all packages, except kernel
The beauty is that I can run this task on all my hosts with a one-liner from the admin machine. That's the power of an Ansible playbook.
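A complete sketch of that task in playbook form, using the `ansible.builtin.dnf` module (the `kernel*` pattern is just the example exclusion; adapt to whatever you need to hold back):

```yaml
# update.yml -- hypothetical playbook: update everything except kernel packages
- hosts: all
  become: true
  tasks:
    - name: Update all packages, except kernel
      ansible.builtin.dnf:
        name: "*"
        state: latest
        exclude: kernel*
```

Run from the admin machine with something like `ansible-playbook -i inventory update.yml`.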
Actually, what you need is repo lifecycle management.
I use Uyuni (the open-source version of SUSE Manager), which is based on Spacewalk but integrated with Salt, to manage the repos of all kinds of distros (not only SUSE and EL, but also deb-based).
It also has CVE auditing, so you can patch systems with only the relevant fixes.
Keep in mind that for just a few systems it will be overkill.
Have you looked into Pulp?