I have built a website on AWS (t3.medium) using the official Rocky 9.7 AMI (+patches). It loads files from an S3 bucket mounted via s3fs and serves them for download by our customers. It ran fine for several years under CentOS 7, but now that I have built a Rocky 9 clone it runs very slowly under load.
The only clue is that I sometimes see this in the httpd error logs:
[Sun Feb 15 02:15:47.301852 2026] [mpm_event:error] [pid 1620622:tid 1620622] AH10159: server is within MinSpareThreads of MaxRequestWorkers, consider raising the MaxRequestWorkers setting
[Sun Feb 15 02:15:48.302829 2026] [mpm_event:error] [pid 1620622:tid 1620622] AH00484: server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting
[Sun Feb 15 02:17:11.386907 2026] [mpm_event:error] [pid 1620622:tid 1620622] AH03490: scoreboard is full, not at MaxRequestWorkers.Increase ServerLimit.
It’s hard to compare CentOS 7 and Rocky 9 directly, because the default Apache MPM changed: it was multi-process (prefork) before, and is now the threaded event MPM. Simply increasing the settings is not always what you want, because it increases resource usage. If you have plenty of spare bandwidth, memory, and CPU, that’s fine, but otherwise it could make things worse.
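If you do decide to raise the limits, the event MPM directives live together and must stay consistent. This is only an illustrative sketch for a small instance, not tuned values; the file path is the usual Rocky 9 layout and the numbers are assumptions you should size against your own RAM and traffic:

```apache
# /etc/httpd/conf.modules.d/00-mpm.conf (typical Rocky 9 location)
# Illustrative sizing only. Note the AH03490 message in the logs:
# MaxRequestWorkers cannot exceed ServerLimit * ThreadsPerChild,
# or the scoreboard fills before MaxRequestWorkers is reached.
<IfModule mpm_event_module>
    ServerLimit              16
    ThreadsPerChild          25
    MaxRequestWorkers       400
    MinSpareThreads          25
    MaxSpareThreads          75
</IfModule>
```

Here 16 × 25 = 400, so MaxRequestWorkers exactly fits the scoreboard; check memory usage after any change, since each worker thread handling slow S3 reads holds resources open.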
Instead, find out what each of the request workers is actually doing: for example, you may be being used in a reflection attack, or hit by aggressive crawlers. I’ve recently seen cases where 99% of the requests in one minute did not come from genuine users (all bots).
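A quick way to see who is generating the load is to count requests per client IP in the access log. The log path below is a stand-in so the example is self-contained; point the `awk` pipeline at your real vhost's CustomLog instead:

```shell
# Hypothetical path; substitute your real access log, e.g.
# /var/log/httpd/access_log on Rocky 9.
LOG=/tmp/sample_access.log

# Tiny sample log (combined format) so the pipeline below is runnable.
cat > "$LOG" <<'EOF'
203.0.113.10 - - [15/Feb/2026:02:15:01 +0000] "GET /file1 HTTP/1.1" 200 512 "-" "Mozilla/5.0"
203.0.113.10 - - [15/Feb/2026:02:15:02 +0000] "GET /file2 HTTP/1.1" 200 512 "-" "Mozilla/5.0"
198.51.100.7 - - [15/Feb/2026:02:15:03 +0000] "GET /file1 HTTP/1.1" 200 512 "-" "Wget/1.21"
EOF

# Top client IPs by request count -- a handful of IPs dominating the
# list usually means crawlers or abuse, not genuine customers.
awk '{print $1}' "$LOG" | sort | uniq -c | sort -rn | head
```

You can also run `apachectl status` (with mod_status enabled) to see what each worker is serving right now, which pairs well with the per-IP counts.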
Check also for any latency between the web server and S3 (and/or the database), as this can cause a “keep-alive” problem where connections stay open for too long and tie up worker threads.
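If slow backends are holding connections open, shortening the keep-alive window frees worker threads sooner. A minimal sketch, assuming the defaults are in play; the timeout value here is illustrative, so measure your S3/database latency before picking a number:

```apache
# Keep persistent connections, but reclaim idle ones quickly so they
# do not sit in the event MPM scoreboard while backends are slow.
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5
```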
Expanding on gerry666uk’s 99% statistic, I suggest you look at the User-Agent strings showing up in your Apache access logs. You might be shocked how many times you see User-Agents like “wget”, or various strings containing “bot”, “crawler”, “spider”, etc.
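In the combined log format the User-Agent is the final quoted field, so splitting on `"` with `awk` pulls it out. Again the log path is a self-contained stand-in; replace it with your real access log:

```shell
# Hypothetical path; substitute your real access log.
LOG=/tmp/sample_access_ua.log

# Sample combined-format entries so the pipeline is runnable as-is.
cat > "$LOG" <<'EOF'
198.51.100.7 - - [15/Feb/2026:02:15:03 +0000] "GET /file1 HTTP/1.1" 200 512 "-" "Wget/1.21"
198.51.100.7 - - [15/Feb/2026:02:15:04 +0000] "GET /file2 HTTP/1.1" 200 512 "-" "Wget/1.21"
203.0.113.10 - - [15/Feb/2026:02:15:05 +0000] "GET /file1 HTTP/1.1" 200 512 "-" "Mozilla/5.0"
EOF

# Count requests per User-Agent: in combined format, splitting on '"'
# makes the User-Agent the sixth field.
awk -F'"' '{print $6}' "$LOG" | sort | uniq -c | sort -rn | head
```

If bots dominate the list, rate-limiting or blocking by User-Agent/IP will do far more good than raising MaxRequestWorkers.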