Unexpected results from installation of RH9

I have been wondering how to ask these probably basic questions about a new server installation I did several months ago. It is running properly, but I didn’t get the outcomes I expected, and many people may run into these same problems:

  1. I have 4 disks that I wanted to partition as two separate RAID1 arrays. After a LOT of tries (anaconda badly wanted a 4-disk LVM array) I managed to get something like that, BUT what happened is I have one RAID1 array for every required mount point (/boot, /boot/efi, /home, etc.), where what I expected was two RAID1s with the directories spread across them. I did preformat the disks that way, but anaconda re-partitioned and reformatted them. I suspect that if I get a disk failure I will have to do a fail/remove on every md device on that physical disk before pulling the disk???
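To illustrate what pulling one disk out of several per-directory arrays would involve, here is a hypothetical sketch that generates the mdadm fail/remove commands for every array a given disk (here /dev/sdb) belongs to. A sample /proc/mdstat is inlined so the script is self-contained; on a real server you would read the file itself, and the device names are placeholders:

```shell
# Sample /proc/mdstat contents (on the real server: read /proc/mdstat itself)
mdstat='md0 : active raid1 sda1[0] sdb1[1]
md1 : active raid1 sda2[0] sdb2[1]
md2 : active raid1 sda3[0] sdb3[1]'

# For each md array, find members on the failing disk (sdb) and print
# the mdadm command that would fail and remove that member.
echo "$mdstat" | awk '/^md/ {
    md = $1
    for (i = 5; i <= NF; i++) {      # member devices start at field 5
        split($i, p, "[")            # strip the [n] role suffix
        if (p[1] ~ /^sdb/)
            printf "mdadm /dev/%s --fail /dev/%s --remove /dev/%s\n", md, p[1], p[1]
    }
}'
```

Piping the output to a shell (after reviewing it!) would detach the disk from every array at once, so the per-directory layout is more tedious than dangerous.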

  2. I would like to use automatic updating, BUT I don’t want to have to reboot the server each time as I do with my workstation. I know there is a way to do this, BUT what happens when application software (e.g. PHP) has a deprecated function that gets removed? Do I suddenly get a crash with no obvious cause? How would you debug something like that? Will automated updates carry across major versions - i.e. RL9 to a future RL10 - without problems?

  3. At the moment SELinux is in permissive mode because I have installed a lot of third-party software, including a lot of in-house stuff. How can I find all the dependencies before turning it on? I bought a book about SELinux and now I know less than I did before!

I have a few more, but these are off the top of my head right now. I’m sure there is documentation somewhere, but I’m not a professional sysadmin - I only do it because nobody else knows how and it’s sort of fun. If you think these questions are trivial, please just tell me so and I’ll go away.

In the installer one section is Storage. There you can choose which drives to use and
choose between Automatic and Custom. The former is the default; I always choose
Custom. That gets me to a dialog where I can remove and add filesystems as I please.
It also has a “create automatically” option that creates the default filesystems, but
you can modify the set before choosing “Done”. You even get a summary of what will
be done on install, with the option to reject it and change your choices.

The default does indeed use two “standard” partitions for /boot and /boot/efi,
and one VG holding LVs for /, /home, and swap. In Custom you can
switch each mount point between standard partition, LVM (and “thin LVM”?).
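For the original two-array goal, one way to avoid fighting the GUI is a kickstart file, where the RAID layout is spelled out explicitly. A rough sketch only - the sizes, disk names, and mount points below are placeholders, not a tested layout, and it omits the /boot and EFI partitions a real install needs:

```
# Two RAID1 pairs: sda+sdb for the system, sdc+sdd for data
part raid.01 --size=20480 --ondisk=sda
part raid.02 --size=20480 --ondisk=sdb
part raid.11 --size=1 --grow --ondisk=sdc
part raid.12 --size=1 --grow --ondisk=sdd
raid /     --level=RAID1 --device=md0 --fstype=xfs raid.01 raid.02
raid /home --level=RAID1 --device=md1 --fstype=xfs raid.11 raid.12
```

With a layout like this, the directories live on two arrays rather than one array per mount point.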

I don’t use automatic updating, but there is a service, dnf-automatic, that can run “dnf upgrade”
on a schedule and does not reboot. Some packages, like kernel and glibc, do need a reboot
before the new version is actually in use.
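A minimal setup of that service, for illustration - the package and timer names are as shipped in EL9, but the config values here are my suggestion, not a recommendation:

```
dnf install dnf-automatic
# then in /etc/dnf/automatic.conf:
#   [commands]
#   upgrade_type = security    # or "default" for all updates
#   apply_updates = yes        # install, not just download
systemctl enable --now dnf-automatic-install.timer
```

Pending kernel/glibc updates still sit idle until the next reboot, which you can then schedule at a convenient time.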

A core idea of Enterprise Linux is that features are not removed: you can set up a service and it can run for up to a decade.

RL10 will not be a “new version” of RL9; it will be a distinct distro. Yes, there are distros that allow in-place conversion to the next release, e.g. Fedora N → Fedora N+1, and Ubuntu to Ubuntu, but Enterprise Linux has not been such. Third-party tools for in-place conversion might appear, but Rocky has no support for them. (Cannot predict the future, obviously.)

SELinux does log events in permissive mode that would have been blocked in enforcing mode.
One can see a summary of those events with: audit2why < /var/log/audit/audit.log
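Building on that, a common workflow sketch for finding everything SELinux would block before switching to enforcing - the tools come from the audit and policycoreutils-python-utils packages, and "myapp" is a placeholder module name:

```shell
# Run the third-party and in-house software under permissive mode for a while, then:
ausearch -m avc -ts recent | audit2why          # explain why each denial happened
ausearch -m avc -ts recent | audit2allow -M myapp   # draft a local policy module
semodule -i myapp.pp                            # install the module, then re-test
setenforce 1                                    # finally switch to enforcing
```

Reviewing the generated myapp.te before installing it is worthwhile, since audit2allow will happily permit everything it saw, including things that should be fixed another way (e.g. with a boolean or a file-context label).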

Jlehtone: thanks, that provides enough information that I can probably find what I need. During installation I always select “Custom” for the disks because I always, so far, install with RAID of some sort. I wasn’t aware that different release versions are considered different distros. That is important to know, I think - it seems to imply that when RL9 finally expires I will need new hardware to install on, so the service can keep running non-stop.

Can I assume that something like PHP will be modified by Rocky/Red Hat so deleted functions remain for the life of the distro, then? What I had in mind is something like a deprecated PHP function being removed by an automatic update and the application crashing with no obvious cause.

Thanks for the SELinux info - that was not in the book that I bought and seems crucial!

One more unknown - smartctl doesn’t give the correct interpretation of the data on WD Blue SSDs, and I have done a dnf upgrade to the current release. Is this a bug in smartctl or in its drive database?
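On that last point: smartctl interprets vendor-specific attributes using a bundled drive database, so a model missing from (or wrong in) that database is the usual cause of odd readings. A sketch of how to check and refresh it - update-smart-drivedb ships with smartmontools, /dev/sda is a placeholder, and whether a newer database actually fixes this particular WD Blue entry is not guaranteed:

```shell
smartctl -i /dev/sda | grep -i 'database'   # is the drive in smartctl's drivedb?
update-smart-drivedb                        # fetch the latest drive database
smartctl -A /dev/sda                        # re-read attributes with the new data
```

If the drive is still misinterpreted with a current database, that would be worth reporting to the smartmontools project, since they maintain the per-model entries.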

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.