We have a full Docker container build with a DigitalOcean load balancer, an Nginx web server, and a reverse proxy to the backend Node container for API calls.
We are making API calls in bulk during a Next.js page build, and the connection times out after 60 seconds.
We've tried all of the usual fixes: updating the nginx.conf settings in the http block, increasing the proxy timeout settings in the conf.d default server blocks, etc.
How do I increase the timeout for API calls to the server? When we lower the values in these locations the settings take effect, but they will not extend beyond 60 seconds.
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    proxy_read_timeout    500;
    proxy_connect_timeout 500;
    proxy_send_timeout    500;
    send_timeout          500;
    keepalive_timeout     165;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        listen [::]:80;
        server_name xyz.com;
        #server_name localhost;

        client_max_body_size 300M;

        #charset koi8-r;
        #access_log /var/log/nginx/host.access.log main;

        gzip on;
        gzip_disable "msie6";
        gzip_vary on;
        gzip_proxied any;
        gzip_comp_level 6;
        gzip_buffers 16 8k;
        gzip_http_version 1.1;
        gzip_min_length 0;
        gzip_types text/plain application/javascript text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype;

        proxy_read_timeout 100;
        proxy_send_timeout 100;
        send_timeout       100;
        keepalive_timeout  100;

        location / {
            # Reverse proxy for the Next.js server
            proxy_pass http://container-static:3000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Port $server_port;
            add_header Cache-Control "public, max-age=3600";
        }

        location /app/ {
            proxy_pass http://container-frontend:3000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            #root /usr/share/nginx/html;
            #index index.html index.htm;
            #try_files $uri /index.html$is_args$args;
        }

        #location @public {
        #    add_header Cache-Control "public, max-age=3600";
        #}

        # Serve any static assets with NGINX
        location /next/static/ {
            alias /usr/share/nginx/nextjs/;
            add_header Cache-Control "public, max-age=3600, immutable";
        }

        # Handle backend requests to the API
        location /api/ {
            proxy_pass http://container-backend:8080;
            # proxy_read_timeout 1000s;
            # proxy_send_timeout 1000s;
            # send_timeout 1800s;
            # keepalive_timeout 165;
            # keepalive_requests 16500;
        }

        error_page 404 /404.html;

        # Redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }

        # Handle the AASA file for universal links on iOS
        location = /.well-known/apple-app-site-association {
            root /usr/share/nginx/html;
            default_type application/json;
        }

        # Handle the asset links file for universal links on Android
        location = /.well-known/assetlinks.json {
            root /usr/share/nginx/html;
            default_type application/json;
        }
    }
}
Hi there,
According to the official documentation, connections through the managed Load Balancer have a keep-alive timeout of 60 seconds:
https://docs.digitalocean.com/products/networking/load-balancers/details/limits/
Can you verify whether your requests complete correctly when you make them against the Nginx service directly, rather than going through the managed Load Balancer?
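For example, you could compare the same slow request made through the load balancer and made directly against the Droplet's public IP. This is only a sketch: `DROPLET_IP` and `/api/slow-endpoint` are placeholders you would replace with your own values.

```shell
# Diagnostic sketch: replace DROPLET_IP and the endpoint path with real
# values. --max-time must exceed 60s so curl itself is not the thing
# cutting the connection off.

# Through the managed load balancer (will be cut at ~60s if the LB's
# keep-alive limit is the cause):
curl -o /dev/null -sS -w "via LB: %{http_code} after %{time_total}s\n" \
  --max-time 300 http://xyz.com/api/slow-endpoint

# Directly against the Droplet running Nginx, bypassing the load balancer:
curl -o /dev/null -sS -w "direct: %{http_code} after %{time_total}s\n" \
  --max-time 300 http://DROPLET_IP/api/slow-endpoint
```

If the direct request completes but the one through the load balancer is dropped at 60 seconds, the limit is on the load balancer side rather than in your Nginx configuration.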
Best,
Bobby