I just want to share a strange bug I encountered after updating to Rocky 9.1.
In my setup I had a software package distribution server running with nginx 1.14.1 on a Rocky 8.6 server. My newly upgraded Rocky 9.1 workstation was not able to download more than 2048 MB from the server with Ansible's get_url module. The error was along the lines of: “failed to create temporary content file: timed out”.
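For context, a minimal sketch of the kind of task that was failing (the URL, destination, and package name are placeholders, not taken from the thread; get_url's default timeout is 10 seconds, which a stalled large transfer will easily exceed):

```yaml
# Hypothetical Ansible task illustrating the failing step.
- name: Fetch package archive from the in-house repo
  ansible.builtin.get_url:
    url: http://repo.example.com/big-package.tar.gz   # placeholder
    dest: /tmp/big-package.tar.gz                     # placeholder
    timeout: 30   # default is 10s; a stalling >2 GB download times out
```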
After spending a bit of time investigating, it seems even wget and curl were not able to download more than 2048 MB from the web server (to be fair, curl succeeded after a retry).
But downloading more than 2048 MB from another HTTP server was no problem, and my Rocky 8.6 workstation had no problem at all.
On Rocky 9.1 it seems sendfile had issues reading more than 2147479552 bytes from nginx 1.14.1. Please let me know if you can reproduce the issue or find a bug report.
To remediate, I stopped nginx on the server and started “python3 -m http.server”, and now everything works again.
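That cutoff is a telling number: 2147479552 is exactly 0x7ffff000, i.e. 2 GiB minus one 4 KiB page, which is the per-call clamp the Linux kernel applies to sendfile() (MAX_RW_COUNT in fs/read_write.c). So the figure itself points at the sendfile path rather than at nginx's own limits. A quick sanity check:

```shell
# 2147479552 == 0x7ffff000: 2 GiB minus one 4 KiB page,
# the kernel's per-call clamp on sendfile() and friends.
echo $(( 0x7ffff000 ))          # -> 2147479552
echo $(( 2147483648 - 4096 ))   # -> 2147479552
```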
I hope this report helps someone else with the same issue.
Upgraded? From what? If this system was running Rocky 8 before the upgrade, then you should know that upgrades between major versions are not supported. Can you do a clean install of Rocky 9 and try again? If not, please clarify what you mean by upgraded (e.g. if you upgraded 9.0 to 9.1, that would be OK).
… everything works when Computer A and Computer B are both using Rocky 8.6, but trying to use Rocky 9.1 as a client fails?
Yes
Upgraded? From what?
I upgraded the client machine from 9.0 (a clean install) to 9.1 and started to notice the software deployment no longer working (for all files > 2 GB) through Ansible.
I’ve just installed nginx:1.14 on my Rocky 8 install, and from Rocky 9 I will try to download the Rocky 9 DVD ISO from it, since it is greater than 2 GB. Will test and report back.
But on the server, the in-house repo is located on an SMB mount, and downloading from it fails at 2 GB. The nginx config is minimalistic (root is amended):
I think I’ve seen this issue with CIFS before: large file transfers failing. If you search for “cifs large file transfer” you might find more information.
It certainly looks like at least the cause has been found: the CIFS mount is the reason it’s unstable, especially since it started working properly once a standard directory (/var/www) was used.
I did not mention it because downloading from an nginx server backed by a CIFS mount works on Rocky releases prior to 9.1.
I’m still not 100% sure what happened. Updating the client from 9.0 to 9.1 and all of a sudden the CIFS mount on the server is the problem sounds bizarre, imho.
Well, the fact that nginx works when using a normal partition suggests it’s not nginx itself. You could also test mounting an NFS partition with files on it and see whether nginx behaves the same as with CIFS or not. That would also rule out the possibility of nginx not liking network-mounted partitions for some reason.
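Another low-risk experiment, given that sendfile() was implicated earlier in the thread: disable sendfile in nginx and retry the >2 GB download from the CIFS-backed root. A sketch (the root path is a placeholder for the original setup):

```nginx
# Hypothetical test server block -- root path is a placeholder.
server {
    listen 80;
    root /mnt/repo;     # the CIFS mount in the original setup
    sendfile off;       # fall back to read()/write() instead of
                        # the kernel sendfile path suspected above
    autoindex on;
}
```

If the transfer succeeds with sendfile off but fails with it on, that would narrow the problem to the sendfile-over-CIFS combination rather than nginx or the network.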
Is there a particular reason you are mounting CIFS and serving it through nginx? Perhaps do it the other way around: install Samba on the nginx server and configure it to allow access to the directory on the web server where you are storing the files. You can then connect to the web server over Samba and copy the files there. Obviously this can mean your web server requiring additional storage of its own. Alternatively, instead of mounting it over CIFS (if you are storing files on a NAS), use a native Linux mount like NFS and mount that on the nginx server.
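As a sketch of the NFS alternative, the mount could look like this in /etc/fstab (the NAS hostname, export path, and mount point are placeholders):

```
# Hypothetical fstab entry -- hostname and paths are placeholders.
nas.example.com:/export/repo  /mnt/repo  nfs  ro,vers=4.2,_netdev  0 0
```

The `_netdev` option delays the mount until the network is up, and a read-only mount is enough if nginx only ever serves the files.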