Rocky Linux 9.4 Memory Issue after update from 9.3

I updated a virtualization host server to RL9.4 yesterday, and 24 hours later top, /proc/meminfo, and System Monitor are all reporting 99% memory utilization. Even after shutting down all the virtualization guests (auto-start turned off) and rebooting, the system instantly reports 99% memory utilization before virtualization starts, again per top, /proc/meminfo, and System Monitor. The system is using Open vSwitch with DPDK enabled, as it was on RL9.3.
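When "used" memory doesn't correspond to any running process, the memory is usually held by the kernel (hugepage reservations are a common culprit). A quick sanity check using standard tools (the exact numbers below are from whatever machine runs it, not from this server):

```shell
# Overall memory picture as the kernel reports it
free -m

# Sum the resident set size of every process; if this total is far
# below the "used" column from free(1), userspace is not the consumer
# and the memory is reserved inside the kernel.
ps -eo rss= | awk '{sum+=$1} END {printf "RSS total: %.0f MiB\n", sum/1024}'
```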

top - 18:51:24 up 49 min, 1 user, load average: 19.62, 19.09, 16.17
Tasks: 1097 total, 5 running, 1092 sleeping, 0 stopped, 0 zombie
%Cpu(s): 6.0 us, 4.4 sy, 0.1 ni, 84.8 id, 4.0 wa, 0.5 hi, 0.2 si, 0.0 st
MiB Mem : 257356.0 total, 665.1 free, 257356.0 used, 471.1 buff/cache
MiB Swap: 524288.0 total, 432395.2 free, 91892.8 used. 709.8 avail Mem

Linux sys7 5.14.0-427.16.1.el9_4.x86_64 #1 SMP PREEMPT_DYNAMIC Wed May 8 17:48:14 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

Linux sys7 5.14.0-362.24.1.el9_3.0.1.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Apr 4 22:31:43 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

Discovered the cause via the fire-drill method. The documentation (Open vSwitch with DPDK — Open vSwitch 3.3.90 documentation) describes setting hugepages either persistently with "echo 'vm.nr_hugepages=2048' > /etc/sysctl.d/hugepages.conf" or at runtime with "sysctl -w vm.nr_hugepages=N # where N = No. of 2M huge pages". If you follow the sysctl.d/hugepages.conf method, or the "grubby --args hugepages=2048 --update-kernel DEFAULT" method, the reservation shows up as 204% of memory; but if you apply a small value at runtime via "sysctl -w vm.nr_hugepages=N" (say 8, 16, or 32), it applies cleanly and doesn't over-allocate system memory.
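For anyone hitting the same symptom, it's worth confirming what hugepage size the kernel is actually using before setting a count: a count like 2048 reserves 4 GiB with the usual 2 MiB default size, but 512 times more if the effective default size is 1 GiB. A quick check, plus the runtime workaround from above (the count 16 is just an example value; needs root):

```shell
# Show the default hugepage size and current reservations.
# HugePages_Total x Hugepagesize is the memory the kernel has set aside.
grep -i huge /proc/meminfo

# Runtime workaround: request a small number of default-size hugepages
# instead of 2048 (run as root; 16 here is only an example).
# sysctl -w vm.nr_hugepages=16
```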

But are you saying the setting for hugepages has changed between Rocky 9.3 and Rocky 9.4?

Not sure; I have posted the same question to the Open vSwitch community as well. I have several other servers running 9.3 with Open vSwitch/DPDK one or two versions back, and while they show high memory usage, they are not crashing like this one was. Now that I have a working workaround, I will attempt to update another server as well and see if the issue follows.