Downloading files on Rocky 9.1 from a Rocky 8.6 server with nginx fails after 2048 MB

I just want to share a strange bug I encountered after updating to Rocky 9.1.

In my setup I had a software package distribution running with nginx 1.14.1 on a Rocky 8.6 server. My newly upgraded Rocky 9.1 workstation was not able to download more than 2048 MB from the server with Ansible’s get_url. The error was along the lines of: “failed to create temporary content file: timed out”.

After spending a bit of time investigating, it seems even wget and curl were not able to download more than 2048 MB from the web server (to be fair, curl succeeded after a retry).

But downloading more than 2048 MB from another HTTP server was no problem, and my Rocky 8.6 workstation had no problem at all.

On Rocky 9.1 it seems sendfile has issues reading more than 2147479552 bytes from nginx 1.14.1. Please let me know if you can reproduce the issue or find a bug report.
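
If anyone wants to check whether sendfile is really the moving part here, a quick diagnostic (just a sketch, not a fix I have verified) is to disable sendfile, or cap the per-call chunk size, in the nginx server block and retry the download:

    server {
        # diagnostic only: bypass sendfile() entirely
        sendfile off;

        # ...or keep sendfile but force smaller sendfile() calls
        # sendfile on;
        # sendfile_max_chunk 512k;
    }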

As a workaround, I stopped nginx on the server and started “python3 -m http.server”, and now everything works again.
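
In case it helps anyone, the stand-in server is just the stock Python module run from the repo directory (the path below is a placeholder, and the port is only an example; the module defaults to port 8000):

cd /path/to/repo
python3 -m http.server 8181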

I hope this report helps someone else with the same issue.

You may find this interesting.
https://trac.nginx.org/nginx/ticket/1472

Hi Frank. Thank you for the suggestion. I tried proxy_max_temp_file_size 0; but unfortunately it did not solve the issue.

So I think you’re saying everything works when Computer A and Computer B are both using Rocky 8.6, but trying to use Rocky 9.1 as a client fails?

Upgraded? From what? If it was Rocky 8 on this system before the upgrade, then you should know that upgrades are not supported between major versions. Can you do a clean install of Rocky 9 and try again? Or, if not, please clarify what you mean by upgraded (e.g. if you upgraded 9.0 to 9.1, then that would be OK).

… everything works when Computer A and Computer B are both using Rocky 8.6, but trying to use Rocky 9.1 as a client fails?

Yes

Upgraded? From what?

I upgraded the client machine from 9.0 (a clean install) to 9.1 and started to notice the software deployment through Ansible not working anymore (for all files > 2 GB).

I’ve just installed nginx:1.14 on my Rocky 8 install, and from Rocky 9 will try to download the Rocky 9 DVD ISO from it, since this is greater than 2 GB. Will test and report back.
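
For anyone following along, installing that stream on Rocky 8 goes roughly like this (worth checking which streams your mirrors offer first):

dnf module list nginx
dnf module install nginx:1.14
systemctl enable --now nginx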

Well, I have now tested with nginx 1.14 on Rocky 8.7, with the default nginx config other than setting /var/www as the root directory.

So the image for download:

[root@rocky8 ~]# ls -lha /var/www
total 8.4G
drwxr-xr-x.  2 root root   28 Dec  8 10:06 .
drwxr-xr-x. 21 root root 4.0K Dec  8 09:56 ..
-rw-r--r--.  1 root root 8.4G Dec  8 10:05 rocky9-dvd.iso

From my Rocky 9.1 install, I then download using wget:

[root@rocky9 ~]# wget http://rocky8/rocky9-dvd.iso
--2022-12-08 10:06:46--  http://rocky8/rocky9-dvd.iso
Resolving rocky8 (rocky8)... 10.1.7.4
Connecting to rocky8 (rocky8)|10.1.7.4|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 9008185344 (8.4G) [application/octet-stream]
Saving to: ‘rocky9-dvd.iso’

rocky9-dvd.iso                    100%[============================================================>]   8.39G   118MB/s    in 66s     

2022-12-08 10:07:53 (130 MB/s) - ‘rocky9-dvd.iso’ saved [9008185344/9008185344]

As you can see, it has downloaded the entire file without any problems. So I now decided to try this using Ansible locally on my Rocky 9.1 server:

[root@rocky9 ansible]# ansible-playbook download-rocky-dvd.yml 

PLAY [Download Rocky DVD from Rocky8 Server] ******************************************************************************************

TASK [Download DVD Image] *************************************************************************************************************

changed: [rocky9]

PLAY RECAP ****************************************************************************************************************************
rocky9                     : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

[root@rocky9 ansible]# 
[root@rocky9 ansible]# ls -lha /tmp/
total 8.4G
drwxrwxrwt.  5 root root 4.0K Dec  8 10:14 .
dr-xr-xr-x. 18 root root  251 Aug 24 16:11 ..
-rwxr-xr-x.  1 root root 8.4G Dec  8 10:13 rocky9-dvd.iso

and it’s also downloaded perfectly fine. The contents of my test playbook:

---
- name: Download Rocky DVD from Rocky8 Server
  hosts: rocky9
  gather_facts: false
  tasks:
    - name: Download DVD Image
      get_url:
        url: http://rocky8/rocky9-dvd.iso
        dest: /tmp
        owner: root
        group: root
        mode: '0755'

So there isn’t a problem/bug with Rocky/nginx, but perhaps you have a misconfigured nginx? Especially since you tested under Python and it worked.

I did not expect you to go that far testing it.

On the server I switched to nginx 1.20, but the issue remained.
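
(For anyone reproducing this with the stock AppStream module, switching streams is roughly the usual reset-and-install dance:)

dnf module reset nginx
dnf module install nginx:1.20
systemctl restart nginx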

But the oddity continues:
I ran the same test as you did: the Rocky 9 ISO with root set to /var/www. The download works perfectly.

# wget http://rocky8/Rocky-9.1-x86_64-dvd.iso
--2022-12-08 11:03:55--  http://rocky8/Rocky-9.1-x86_64-dvd.iso
Resolving rocky8 (rocky8)... 10.180.128.5
Connecting to rocky8 (rocky8)|10.180.128.5|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 9008185344 (8.4G) [application/octet-stream]
Saving to: ‘Rocky-9.1-x86_64-dvd.iso’

Rocky-9.1-x86_64-dvd.iso           100%[===============================================================>]   8.39G   357MB/s    in 26s     

2022-12-08 11:04:22 (325 MB/s) - ‘Rocky-9.1-x86_64-dvd.iso’ saved [9008185344/9008185344]

But on the server, the in-house repo is located on an SMB (CIFS) mount, and downloading from it fails at 2 GB. The nginx config is minimalistic (the root path is redacted for this post):

    server {
        listen       8181 default_server;
        listen       [::]:8181 default_server;
        server_name  _;
        root         /path/to/repo/on/a/cifs/share/;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;
        default_type application/octet-stream;

        location / {
            autoindex off;
        }

        error_page 404 /404.html;
            location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
            location = /50x.html {
        }
    }
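
The root above sits on that CIFS mount; purely as an illustration (server name, path and options here are placeholders, not the real setup), such a mount looks something like:

mount -t cifs //fileserver/repo /path/to/repo/on/a/cifs/share -o credentials=/etc/cifs-creds,vers=3.0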

# wget http://rocky8:8181/rocky/Rocky-9.0-x86_64-dvd.iso                        
--2022-12-08 12:22:04--  http://rocky8:8181/rocky/Rocky-9.0-x86_64-dvd.iso
Resolving rocky8 (rocky8)... 10.180.128.5
Connecting to rocky8 (rocky8)|10.180.128.5|:8181... connected.
HTTP request sent, awaiting response... 200 OK
Length: 8459190272 (7.9G) [application/octet-stream]
Saving to: ‘Rocky-9.0-x86_64-dvd.iso’

Rocky-9.0-x86_64-dvd.iso                    25%[=====================>                                                                  ]   2.00G  --.-KB/s    eta 58s    ^C

I won’t test Ansible, as wget already fails. For now I’m running the Python server, and I’ll report back after upgrading the server to 9.1.

Thank you for your help.

I think I’ve seen this issue with CIFS before, with large file transfers failing. If you search “cifs large file transfer” you might hit on more information.

But I don’t remember seeing anything about ‘cifs’ in the original post??

It certainly looks like the cause, at least, has been found: the CIFS mount is the reason it’s unstable, especially since it started to work properly when using a standard directory (/var/www).

I did not mention it because downloading from nginx serving a CIFS mount works fine on Rocky releases prior to 9.1.

I’m still not 100% sure what happened. Updating the client from 9.0 to 9.1 and suddenly the CIFS mount on the server being the problem sounds bizarre, imho.

Well, the fact that nginx works when using a normal partition suggests it’s not nginx. You could also test mounting an NFS partition with files on it and see whether nginx behaves the same as with CIFS or not; that would also rule out the possibility of nginx not liking network-mounted partitions for some reason.
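
A rough way to run that comparison (export path, hostnames and mount point are made up for the example):

# on the file server ("fileserver" is the NAS end)
dnf install nfs-utils
echo '/srv/repo rocky8(ro)' >> /etc/exports
systemctl enable --now nfs-server
exportfs -ra
# (open the nfs service in firewalld if the firewall is active)

# on the nginx server (rocky8)
dnf install nfs-utils
mkdir -p /mnt/repo-nfs
mount -t nfs fileserver:/srv/repo /mnt/repo-nfs
# then point the nginx root (or a test location) at /mnt/repo-nfs and retry the >2GB download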

Is there a particular reason you are mounting CIFS and serving it with nginx? Perhaps do it the other way around: install Samba on the nginx server and configure it to allow access to the directory on the web server where you are storing the files. You can then connect to the web server over Samba and copy the files there. Obviously this can mean the web server requires additional storage of its own. Alternatively, if you are storing the files on a NAS, instead of mounting it over CIFS use a native Linux mount like NFS and mount that on the nginx server.
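
If you go the Samba-on-the-web-server route, it is essentially installing samba (dnf install samba), enabling the smb service, and adding a share along these lines to /etc/samba/smb.conf (share name, path and user are placeholders):

[repo]
    path = /var/www/repo
    read only = no
    valid users = deploy

Plus the usual smbpasswd -a for that user and SELinux labelling of the shared directory, but that is standard Samba setup.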

Just to follow up: I updated the server to the latest 8.7 release and everything works again.
