Slow httpd response on Rocky 9.7

I have built a website on AWS (t3.medium) using the official Rocky 9.7 AMI (plus patches). It loads files from an S3 bucket mounted via s3fs and serves them for download to our customers. It ran fine for several years under CentOS 7, but now that I have built a Rocky 9 clone it runs very slowly under load.

The only clue is that I sometimes see this in the httpd error logs:

[Sun Feb 15 02:15:47.301852 2026] [mpm_event:error] [pid 1620622:tid 1620622] AH10159: server is within MinSpareThreads of MaxRequestWorkers, consider raising the MaxRequestWorkers setting
[Sun Feb 15 02:15:48.302829 2026] [mpm_event:error] [pid 1620622:tid 1620622] AH00484: server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting
[Sun Feb 15 02:17:11.386907 2026] [mpm_event:error] [pid 1620622:tid 1620622] AH03490: scoreboard is full, not at MaxRequestWorkers.Increase ServerLimit.

Those messages suggest you need to tune your Apache config by raising MaxRequestWorkers (and, per the AH03490 message, ServerLimit as well).

Something like this should help:

<IfModule mpm_event_module>
    ServerLimit             16
    StartServers            4
    MinSpareThreads         25
    MaxSpareThreads         75
    ThreadLimit             64
    ThreadsPerChild         25
    MaxRequestWorkers       400
    MaxConnectionsPerChild  3000
</IfModule>

You need to do that for the MPM you are actually using, which the above logs suggest is mpm_event.

The above is only an example; you will need to adjust it to your usage, but it should be a good starting point.
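One sanity check worth doing on those numbers: with the event MPM, the hard ceiling on simultaneous workers is ServerLimit × ThreadsPerChild, so MaxRequestWorkers must fit inside that product (the AH03490 "scoreboard is full" message appears when the scoreboard, sized by ServerLimit, runs out even though MaxRequestWorkers hasn't been hit). A quick check using the example values above:

```shell
# Values from the example config above; the event MPM cannot run more than
# ServerLimit x ThreadsPerChild worker threads, so MaxRequestWorkers must
# not exceed that product.
SERVER_LIMIT=16
THREADS_PER_CHILD=25
MAX_REQUEST_WORKERS=400

echo $(( SERVER_LIMIT * THREADS_PER_CHILD ))   # prints 400, matching MaxRequestWorkers
```

If you raise MaxRequestWorkers later, raise ServerLimit (or ThreadsPerChild) to keep the product at least as large.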


It’s hard to compare CentOS 7 and Rocky 9 directly, as the default MPM changed from process-based to event. Simply increasing the settings is not always what you want, because it increases resource usage. If you have plenty of spare bandwidth, memory, and CPU, that’s fine, but otherwise it could make things worse.

Instead, find out what each of the request workers is actually doing: e.g. your server is being used in a reflection attack, or you are being hit by aggressive crawlers. I’ve recently seen a case where 99% of requests in one minute did not come from genuine users (all bots).

Check also for any latency between the web server and S3 (and/or the database), as this can cause the “keep-alive” problem where connections stay open for too long and tie up workers.
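A rough way to measure that latency is curl's `-w` timing report, which breaks a request into DNS, connect, first-byte, and total time. The sketch below runs against a local `file://` URL so it is self-contained; point the same command at one of your real S3-backed download URLs to see where the time actually goes:

```shell
# Self-contained demo target; replace the file:// URL with one of your
# real download URLs (e.g. an S3-backed path on your site) to measure it.
echo test > /tmp/latency_probe.txt

curl -s -o /dev/null \
    -w 'dns:%{time_namelookup} connect:%{time_connect} ttfb:%{time_starttransfer} total:%{time_total}\n' \
    file:///tmp/latency_probe.txt
```

A large gap between `connect` and `ttfb` (time to first byte) would point at the backend (s3fs/S3) rather than the network.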

Hi,

Did you enable the Apache server-status handler and analyze the usage statistics?
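In case it helps, a minimal mod_status setup might look like the following (the `Require ip` network is an assumption — restrict it to your admin network; on Rocky 9 the file would normally go in `/etc/httpd/conf.d/`, but it is written to `/tmp` here just to show the shape):

```shell
# Hypothetical mod_status snippet; adjust the Require line to your own network
# and place the file under /etc/httpd/conf.d/ before reloading httpd.
cat > /tmp/status.conf <<'EOF'
ExtendedStatus On
<Location "/server-status">
    SetHandler server-status
    Require ip 203.0.113.0/24
</Location>
EOF

cat /tmp/status.conf
```

With `ExtendedStatus On`, `/server-status` shows per-worker state (keep-alive, reading, writing), which tells you what is actually occupying all those request workers.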

Did you look closely at the Browser Developer Tools network tab? Which content is taking the most time?

Have you enabled a VPC endpoint so EC2 reaches the S3 bucket through the AWS internal network?
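For reference, a Gateway VPC endpoint for S3 can be created with the AWS CLI; the sketch below only echoes the command as a dry run, and the region, VPC ID, and route table ID are placeholders you would replace with your own:

```shell
# Dry-run sketch: a Gateway VPC endpoint keeps EC2 <-> S3 traffic on the AWS
# internal network. All IDs below are placeholders, not real resources.
REGION=us-east-1
VPC_ID=vpc-0123456789abcdef0          # placeholder
ROUTE_TABLE_ID=rtb-0123456789abcdef0  # placeholder

echo aws ec2 create-vpc-endpoint \
    --vpc-id "$VPC_ID" \
    --service-name "com.amazonaws.${REGION}.s3" \
    --route-table-ids "$ROUTE_TABLE_ID"
```

Remove the `echo` to actually run it; Gateway endpoints for S3 have no hourly charge, so there is little downside to trying one.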

Expanding on gerry666uk’s 99% statistic, I suggest you look at the “User-Agents” showing up in your Apache logs. You might be shocked how many times you see User-Agents like “wget” or various strings containing “bot” or “crawler” or “spider”, etc.
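A quick way to tally those User-Agents is to split the combined log format on double quotes — the agent string is the sixth field. The sketch below uses made-up log lines so it is self-contained; point the `awk` at your real access_log instead:

```shell
# Made-up sample lines in Apache combined log format (real log is usually
# /var/log/httpd/access_log on Rocky 9).
cat > /tmp/sample_access.log <<'EOF'
1.2.3.4 - - [15/Feb/2026:02:15:47 +0000] "GET /file.zip HTTP/1.1" 200 1024 "-" "Mozilla/5.0"
5.6.7.8 - - [15/Feb/2026:02:15:48 +0000] "GET /file.zip HTTP/1.1" 200 1024 "-" "Wget/1.21"
9.9.9.9 - - [15/Feb/2026:02:15:49 +0000] "GET /robots.txt HTTP/1.1" 200 64 "-" "ExampleBot/2.0"
5.6.7.8 - - [15/Feb/2026:02:15:50 +0000] "GET /file2.zip HTTP/1.1" 200 2048 "-" "Wget/1.21"
EOF

# Splitting on '"' makes the User-Agent the 6th field; count requests per
# agent, busiest first.
awk -F'"' '{print $6}' /tmp/sample_access.log | sort | uniq -c | sort -rn
```

Anything with "bot", "spider", "crawler", "wget", or similar near the top of that list is a good candidate for blocking or rate-limiting before you start scaling up worker counts.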

I have enabled Apache server-status. The S3 buckets are local to the servers, so I am going to investigate using a VPC endpoint.

My server is not showing any bots but I will investigate further.

Is this using LocalStack? There’s a difference between the data being genuinely local and just being mounted so that it looks like a local filesystem.