Connection timeout pulling big containers
Hi @vsoch,
Sorry again for bringing up these nginx errors … but I'm not very experienced with these web services and I think this could be easy for you. 😛
I'm trying to pull a 2.3 GB image and I always get the following error:
```
nginx_1 | 2018/01/18 16:46:59 [error] 6#6: *3 upstream timed out (110: Connection timed out) while reading response header from upstream, client: XXXXXX, server: localhost, request: "GET /containers/8/download/3c548175-fd5c-4033-b667-be836f0c7f4b HTTP/1.1", upstream: "uwsgi://172.17.0.4:3031", host: "XXXXXXX2
```
The connections are always closed after 60 seconds.
This is the current nginx.conf file:
```nginx
server {
    listen *:80;
    server_name localhost;

    client_max_body_size 8000M;
    client_body_buffer_size 2000M;
    client_body_timeout 900;
    send_timeout 900;

    add_header X-Clacks-Overhead "GNU Terry Pratchett";
    add_header Access-Control-Allow-Origin *;
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'Authorization,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';

    location /images {
        alias /var/www/images;
    }

    location / {
        include /etc/nginx/uwsgi_params.par;
        uwsgi_pass uwsgi:3031;
    }

    location /static {
        alias /var/www/static;
    }
}
```
```nginx
server {
    listen 443;
    server_name localhost;
    root html;

    client_max_body_size 8000M;
    client_body_buffer_size 2000M;
    client_body_timeout 900;
    send_timeout 900;

    add_header X-Clacks-Overhead "GNU Terry Pratchett";
    add_header Access-Control-Allow-Origin *;
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'Authorization,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';

    ssl on;
    ssl_certificate XXXXXXXXXXXXX.crt;
    ssl_certificate_key XXXXXXXXXXX.pem;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA;
    ssl_session_cache shared:SSL:50m;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_prefer_server_ciphers on;

    location /images {
        alias /var/www/images;
    }

    location /static {
        alias /var/www/static;
    }

    location / {
        include /etc/nginx/uwsgi_params.par;
        uwsgi_pass uwsgi:3031;
    }
}
```
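(Note: the 60-second cutoff in the log above matches nginx's default `uwsgi_read_timeout` of 60s, which this config does not override. A sketch of the blunt workaround, raising it in the uwsgi location - the 900s value here just mirrors the other timeouts already set:)

```nginx
location / {
    include /etc/nginx/uwsgi_params.par;
    uwsgi_pass uwsgi:3031;
    uwsgi_read_timeout 900s;   # default is 60s; governs "upstream timed out ... reading response header"
    uwsgi_send_timeout 900s;   # default is also 60s; timeout for sending the request to the backend
}
```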
Thanks in advance!
Hi all - I don't think there's anything useful to compare from my logs - it just works, 200 OK codes all round. So I did a bit more digging…
@victorsndvg - Looking back through the above, your logs show that uwsgi generated very little output, in bytes, during the relatively long time the request ran. We could put in massive timeouts to make it work, but that's not optimal, and you are likely to hit problems when the download size approaches or exceeds RAM…
I took a look at the sregistry download code and noted that the container download is sent out via a plain HttpResponse. This means the file is read into RAM in its entirety before anything is sent out - and from your logs it's timing out during this step, before any data has been sent. There are two better ways to handle large files in Django (sketches after the list):
1. Use StreamingHttpResponse with the basehttp FileWrapper - the uwsgi app then never reads the entire file into RAM; it streams it off disk straight to nginx and out down the HTTP connection (first sketch below).
2. Use X-Sendfile (django-sendfile has a good implementation supporting nginx), which offloads serving of the download to nginx - it will stream the file out directly (second sketch below).
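A minimal sketch of the first option - the view name, file path argument, and chunk size are illustrative, not sregistry's actual code:

```python
import os
from wsgiref.util import FileWrapper  # the "basehttp" FileWrapper lives here in modern Django
from django.http import StreamingHttpResponse

def download_container(request, file_path, name):
    # Hypothetical view: stream the image in 8 KB chunks instead of
    # loading all 2.3 GB into RAM as a plain HttpResponse would.
    wrapper = FileWrapper(open(file_path, 'rb'), blksize=8192)
    response = StreamingHttpResponse(wrapper, content_type='application/octet-stream')
    response['Content-Length'] = os.path.getsize(file_path)
    response['Content-Disposition'] = 'attachment; filename="%s"' % name
    return response
```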
I’ve done both of these in systems at work that need to serve files that could be in the 100s of GBs without issue.
@vsoch I can try to do a PR to address this over the weekend.
thanks @dctrud !