How to configure NGINX with Elasticsearch on Windows


I’m new to NGINX. I’m trying to configure NGINX with Elasticsearch on Windows. Using NGINX, I want to redirect HTTP requests automatically to the secondary node (localhost:9201) if the primary node (localhost:9200) is down. Can anyone please help me? Thanks in advance.



The best way to set things up is to separate your configuration into multiple files instead of combining everything into one single file.

For example, start with the main nginx.conf file (typically /etc/nginx/nginx.conf on Linux).

That file would contain something such as the following (for a more complete configuration):

user                                                    nginx nginx;
worker_processes                                        1;
worker_priority                                         -10;

worker_rlimit_nofile                                    260000;
timer_resolution                                        100ms;

pcre_jit                                                on;

events {
    worker_connections                                  10000;
    accept_mutex                                        off;
    accept_mutex_delay                                  200ms;
    use                                                 epoll;  # Linux-only; omit this line on Windows
}

http {
    map_hash_bucket_size                                128;
    map_hash_max_size                                   4096;
    server_names_hash_bucket_size                       128;
    server_names_hash_max_size                          2048;
    variables_hash_max_size                             2048;

    index                                               index.php index.html index.htm;
    include                                             mime.types;
    default_type                                        application/octet-stream;
    charset                                             utf-8;

    sendfile                                            on;
    sendfile_max_chunk                                  512k;
    tcp_nopush                                          on;
    tcp_nodelay                                         on;
    server_tokens                                       off;
    server_name_in_redirect                             off;

    keepalive_timeout                                   5;
    keepalive_requests                                  500;
    lingering_time                                      20s;
    lingering_timeout                                   5s;
    keepalive_disable                                   msie6;

    gzip                                                on;
    gzip_vary                                           on;
    gzip_disable                                        "MSIE [1-6]\.";
    gzip_static                                         on;
    gzip_min_length                                     1400;
    gzip_buffers                                        32 8k;
    gzip_http_version                                   1.0;
    gzip_comp_level                                     5;
    gzip_proxied                                        any;
    gzip_types                                          text/plain;

    client_body_buffer_size                             256k;
    client_body_in_file_only                            off;
    client_body_timeout                                 10s;
    client_header_buffer_size                           64k;
    client_header_timeout                               5s;
    client_max_body_size                                50m;
    connection_pool_size                                512;
    directio                                            4m;
    ignore_invalid_headers                              on;
    large_client_header_buffers                         8 64k;
    output_buffers                                      8 256k;
    postpone_output                                     1460;
    proxy_temp_path                                     /etc/nginx/cache/proxy;
    request_pool_size                                   32k;
    reset_timedout_connection                           on;
    send_timeout                                        10s;
    types_hash_max_size                                 2048;

    open_file_cache                                     max=50000 inactive=60s;
    open_file_cache_valid                               120s;
    open_file_cache_min_uses                            2;
    open_file_cache_errors                              off;
    open_log_file_cache                                 max=10000 inactive=30s min_uses=2;

    include                                             /etc/nginx/sites-available/*.conf;
}

Now above, you’ll see:

include                                             /etc/nginx/sites-available/*.conf;

That line tells NGINX to include every configuration file in the /etc/nginx/sites-available directory, which is where you’d place your site-specific configuration (your server blocks) – i.e. files that contain:

server {
    ...
}

Now, for a full proxy setup, create a file in /etc/nginx/sites-available/ with a .conf extension (for example, elasticsearch.conf) so that it matches the include directive.


Inside that file, paste in:

upstream backend {
    # Placeholder addresses; replace with your Elasticsearch nodes' IPs/hostnames and ports
    server 203.0.113.10:9200;
    server 203.0.113.11:9201;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_busy_buffers_size 128k;
        proxy_temp_file_write_size 64k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_http_version 1.1;
        proxy_hide_header X-Powered-By;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_no_cache $http_pragma $http_authorization;
        proxy_cache_bypass $http_pragma $http_authorization;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_404;
    }
}

In the upstream block, you need to use IPs or hostnames, as I’ve showcased above. You can’t use localhost, since your servers are not all actually served by the same droplet. You should also give the upstream a name other than localhost, since that’s a reserved name in almost all configurations.
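Since the goal here is automatic failover from a primary node to a secondary one, it can help to make that explicit with upstream server parameters. This is a minimal sketch; the IPs, ports, and the `es_backend` name are placeholders, while `backup`, `max_fails`, and `fail_timeout` are standard NGINX upstream parameters:

```
upstream es_backend {
    # Primary node: marked as failed after 3 errors within a 30s window
    server 203.0.113.10:9200 max_fails=3 fail_timeout=30s;

    # Secondary node: only receives traffic while the primary is unavailable
    server 203.0.113.11:9201 backup;
}
```

With `backup`, the second server is used only when the primary is down; without it, NGINX round-robins requests across both nodes.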

Once you have everything configured with your IPs and ports (simply replace the example IPs in the upstream block with your own), you’d need to restart NGINX.
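For reference, a sketch of the usual commands (assuming nginx is on your PATH; on Windows, run them from the directory containing nginx.exe):

```
# Check the configuration for syntax errors first
nginx -t

# Reload the configuration without dropping connections
nginx -s reload

# On Linux with systemd, alternatively:
# sudo systemctl reload nginx
```

Running `nginx -t` before reloading catches mistakes like unbalanced braces or missing semicolons before they take the server down.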


nginx config file:

events { worker_connections 1024; }

http {
    upstream localhost {
        server localhost:9201;
        server localhost:9202;
    }

    server {
        listen 80;
        server_name localhost:9200;

        location / {
            proxy_pass http://localhost;
        }
    }
}

I have 3 ES nodes. It’s supposed to redirect automatically when the primary node is down. Am I doing anything wrong? Any advice would be helpful. Thanks in advance.


If you want to direct requests based on whether one node is up or down, you’d need to set up a third node and let it function as the primary entry point for requests. In doing so, you’d be setting up a load-balanced solution.

This load balancer node accepts incoming requests and directs them to servers that are listed in the upstream block (as noted in the guide). In the event one of them is down, requests are automatically routed to the next node in the list.
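As a concrete sketch of that layout (using the localhost ports from the configuration quoted above; the `es_cluster` name is a placeholder), the entry point listens on port 80 and the ES nodes sit in the upstream. Note that `server_name` takes a hostname only, not a host:port pair — the `listen` directive controls the port:

```
events { worker_connections 1024; }

http {
    upstream es_cluster {
        server localhost:9200;          # primary
        server localhost:9201 backup;   # secondary, used when the primary is down
    }

    server {
        listen 80;
        server_name localhost;          # no port here

        location / {
            proxy_pass http://es_cluster;
            # Retry the next upstream server on errors and timeouts
            proxy_next_upstream error timeout http_502 http_503 http_504;
        }
    }
}
```

This also avoids naming the upstream localhost, which conflicts with the hostname of the same name when used in proxy_pass.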

DigitalOcean actually has a basic guide on how to set this up, which I’d recommend taking a look at first.