How to Configure Nginx for Optimized Performance on CentOS 7

Posted September 21, 2016 15.5k views
Nginx, CentOS, Apache, Control Panels

Hi All,

I want to ask how to optimize Nginx: how do I calculate from RAM and CPU the appropriate parameter values?

I use CentOS 7 with VestaCP (Apache + Nginx).

And let me ask the meaning of `keepalive_timeout 60 60;`.

Lu Bang Thang

2 answers


I can recommend a few DigitalOcean tutorials that could help you. This is not VestaCP-specific, but it will help you tweak both Apache and Nginx if you want.


How to Optimize Nginx is an older Debian tutorial, but it will still do its job, even on CentOS. Nginx directives are the same for all distros, so this is not a problem.
Besides that, I would recommend some more steps:
Installing gzip - this tutorial for CentOS 7 will help you learn more about it.
In case you are using WordPress, this can help you.
In case of a PHP application, FastCGI caching can help.
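To make the FastCGI caching idea concrete, here is a minimal sketch. The zone name `phpcache`, the cache path, the domain, and the php-fpm address `127.0.0.1:9000` are all assumptions for illustration, not VestaCP defaults:

```nginx
# Sketch only: cache responses from a PHP backend (php-fpm assumed on 127.0.0.1:9000).
fastcgi_cache_path /var/cache/nginx/fcgi levels=1:2 keys_zone=phpcache:10m
                   inactive=60m max_size=256m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    listen 80;
    server_name example.com;   # placeholder

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;   # assumed php-fpm address

        fastcgi_cache       phpcache;
        fastcgi_cache_valid 200 60m;   # cache successful responses for an hour
    }
}
```

After a change like this, `nginx -t` will tell you whether the directives are accepted before you reload.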

For `keepalive_timeout` under nginx - the syntax is `keepalive_timeout timeout [header_timeout];`
It specifies how long a keep-alive client connection will stay open on the server side. The optional `header_timeout` sets that time in the `Keep-Alive` response header.
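So the line from your question can be read like this (a config sketch, values are just the ones you posted):

```nginx
# keepalive_timeout <timeout> [<header_timeout>];
# First value:  how long an idle keep-alive connection stays open on the server.
# Second value: advertised to the client in the response header:
#   Keep-Alive: timeout=60
keepalive_timeout 60 60;
```

In other words: keep idle connections open for 60 seconds, and tell the client that same value in the `Keep-Alive` header.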

Learn more about nginx modules on the official nginx site.


How to optimize Apache
Apache Content Caching

If you need more help don’t hesitate to post here :)

by Jesin A
Here's how to set up FastCGI caching with Nginx on your VPS.
  • Hi xMudrii,

    If I check with the grep command `grep processor /proc/cpuinfo | wc -l`, I get 12, so:
    worker_processes 12;
    worker_connections ?????;

    # Server globals
    user                    nginx;
    worker_processes        auto;
    error_log               /var/log/nginx/error.log crit;
    pid                     /var/run/;
    # Worker config
    events {
            worker_connections  12288;
            multi_accept        on;
            use                 epoll;
    }
    worker_rlimit_nofile 100000;
    http {
        # Main settings
        sendfile                        on;
        tcp_nopush                      on;
        tcp_nodelay                     on;
        client_header_timeout           1m;
        client_body_timeout             1m;
        client_header_buffer_size       2k;
        client_body_buffer_size         256k;
        client_max_body_size            256m;
        large_client_header_buffers     4   8k;
        send_timeout                    30;
        keepalive_timeout               65;
        keepalive_requests              100000;
        reset_timedout_connection       on;
        server_tokens                   off;
        server_name_in_redirect         off;
        server_names_hash_max_size      512;
        server_names_hash_bucket_size   512;
        # Log format
        log_format  main    '$remote_addr - $remote_user [$time_local] $request '
                            '"$status" $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
        log_format  bytes   '$body_bytes_sent';
        #access_log          /var/log/nginx/access.log  main;
        access_log off;
        # Mime settings
        include             /etc/nginx/mime.types;
        default_type        application/octet-stream;
        # Compression
        gzip                on;
        gzip_vary           on;
        gzip_comp_level     9;
        gzip_min_length     10240;
        gzip_buffers        8 64k;
        gzip_types          text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript image/svg+xml application/x-font-ttf font/opentype;
        gzip_proxied expired no-cache no-store private auth;
        gzip_disable "MSIE [1-6]\.";
        # Proxy settings
        proxy_redirect      off;
        proxy_set_header    Host            $host;
        proxy_set_header    X-Real-IP       $remote_addr;
        proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass_header   Set-Cookie;
        proxy_connect_timeout   90;
        proxy_send_timeout  90;
        proxy_read_timeout  90;
        proxy_buffers       32 4k;
        # SSL PCI Compliance
        ssl_session_cache   shared:SSL:10m;
        ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers        "ECDHE-RSA-AE";
        # Error pages
        error_page          403          /error/403.html;
        error_page          404          /error/404.html;
        error_page          502 503 504  /error/50x.html;
        # Cache
        proxy_cache_path /var/cache/nginx levels=2 keys_zone=cache:10m inactive=60m max_size=512m;
        proxy_temp_path  /var/cache/nginx/temp;
        proxy_cache_key "$host$request_uri $cookie_user";
        proxy_ignore_headers Expires Cache-Control;
        proxy_cache_use_stale error timeout invalid_header http_502;
        proxy_cache_valid any 3d;
        map $http_cookie $no_cache {
            default 0;
            ~SESS 1;
            ~wordpress_logged_in 1;
        }

        # Wildcard include
        include             /etc/nginx/conf.d/*.conf;
    }
    Do you see anything wrong?


    • worker_processes should be the number of CPU cores you have. On $5/$10 droplets it's 1, on $20/$40 it's 2…
      This is what you get by:

      • grep processor /proc/cpuinfo | wc -l

      worker_connections defines how many clients can be served simultaneously by nginx.
      To check the core limitation for worker_connections, execute

      • ulimit -n

      The output will be a number.

      That number is what you should set worker_connections to.
      I can't test right now whether these commands work on CentOS, but if there is a problem, we will try to find an alternative.
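Putting the two checks together, here is a small sketch that prints suggested values for this machine (the "max clients" figure is only a rough ceiling, not a tuning target):

```shell
#!/bin/sh
# Sketch: derive worker_processes / worker_connections from this machine.
cores=$(grep -c processor /proc/cpuinfo)   # CPU core count -> worker_processes
conns=$(ulimit -n)                         # open-file limit  -> worker_connections
echo "worker_processes  $cores;"
echo "worker_connections $conns;"
# Rough upper bound on simultaneous clients:
echo "max clients ~= $((cores * conns))"
```

You would then copy the printed values into the main context and the events block of nginx.conf respectively.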

      Edit: When working with nginx, don't forget to check your config; nginx has a very easy option for that. Execute:

      • sudo nginx -t

      This will run a test against all your config files; if there is a problem, it will report it.