Question

HTTP outperformed HTTP/2: why? Please help me figure it out

I’m currently running performance tests on my NGINX web server container and I’m seeing some unexpected results: plain HTTP is outperforming HTTP/2, even though SSL and reuseport are enabled.

Here are the configurations for HTTP:

nginx.conf:


user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    server_tokens off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

website.conf (vHOST):


server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    index index.php index.html index.htm;
    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }
}

Test 1 results for HTTP:


docker run --rm -it ruslanys/wrk -t12 -c400 -d10s http://ip

Running 10s test @ http://ip
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.60ms    0.97ms   27.52ms   75.52%
    Req/Sec    20.40k     2.79k   105.77k    96.34%
  2441871 requests in 10.09s, 61.93GB read
Requests/sec: 241889.68
Transfer/sec:      6.13GB

And here are the configurations for HTTP/2:

nginx.conf:


user www-data;
worker_processes 16;
pid /run/nginx.pid;

thread_pool iopool threads=32 max_queue=65536;
pcre_jit on;

events {
    worker_connections 8000;
    use epoll;
    multi_accept on;
    accept_mutex off;
}

http {
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    gzip                on;
    keepalive_timeout   5;
    types_hash_max_size 2048;
    client_max_body_size 2G;

    http2 on;

    gzip_vary on;
    gzip_disable "MSIE [1-6]\.";
    gzip_proxied any;
    gzip_http_version 1.0;
    gzip_min_length  1000;
    gzip_comp_level  7;
    gzip_buffers  16 8k;
    gzip_types    text/plain text/xml text/css application/x-javascript application/xml application/javascript application/xml+rss text/javascript application/atom+xml application/json image/svg+xml;

    reset_timedout_connection on;
    aio threads;
    server_tokens off;

    # Open File Cache
    open_file_cache          max=4096 inactive=5m;
    open_file_cache_valid    5m;
    open_file_cache_min_uses 2;
    open_file_cache_errors   on;

    server_names_hash_max_size 2048;
    server_names_hash_bucket_size 512;

    real_ip_header CF-Connecting-IP;

    log_format debugging '$remote_addr - $remote_user [$time_local] '
                         '"$request" $status $body_bytes_sent '
                         '"$http_referer" "$http_user_agent" "$http_x_forwarded_for" $request_time';

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    include             vhosts/*.conf;
}

website.conf:


server {
    listen 443 ssl fastopen=256 deferred reuseport so_keepalive=on ipv6only=on;
    server_name _;

    ssl_certificate /etc/nginx/ssl/xxx.crt;
    ssl_certificate_key /etc/nginx/ssl/xxx.key;
    ssl_trusted_certificate /etc/nginx/ssl/xxx.crt;

    index index.php index.htm index.html;
    root /var/www/html;

    # Enable session resumption to improve HTTPS performance
    ssl_session_timeout 1d;
    ssl_session_cache shared:MozSSL:10m;  # about 40000 sessions
    ssl_session_tickets off;

    # Enable OCSP stapling for better privacy and performance
    ssl_stapling on;
    ssl_stapling_verify on;

    # Recommended SSL settings for better security
    ssl_protocols TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
}

Test 2 results for HTTP/2:


docker run --rm -it ruslanys/wrk -t12 -c400 -d10s https://ip

Running 10s test @ https://ip
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.27ms    1.83ms   46.73ms   72.65%
    Req/Sec    10.12k     0.87k    12.28k    76.58%
  1210716 requests in 10.03s, 30.73GB read
Requests/sec: 120722.60
Transfer/sec:      3.06GB

I built the Docker image using the following Dockerfile:


# Base image
FROM debian:bullseye

RUN apt-get update && \
    apt-get install -y \
        build-essential \
        wget \
        libpcre3 \
        libpcre3-dev \
        zlib1g \
        zlib1g-dev \
        libssl-dev

ARG NGINX_VERSION="1.25.1"

ARG CONFIGURE_ARGUMENTS="--prefix=/etc/nginx \
    --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf \
    --error-log-path=/var/log/nginx/error.log \
    --http-log-path=/var/log/nginx/access.log \
    --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock \
    --http-client-body-temp-path=/var/cache/nginx/client_temp \
    --http-proxy-temp-path=/var/cache/nginx/proxy_temp \
    --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \
    --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \
    --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-compat \
    --with-file-aio --with-threads --with-http_dav_module --with-http_flv_module \
    --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module \
    --with-http_random_index_module --with-http_realip_module \
    --with-http_secure_link_module --with-http_slice_module \
    --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module \
    --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream \
    --with-stream_realip_module --with-stream_ssl_module \
    --with-stream_ssl_preread_module"

# Download, compile and install Nginx
RUN wget --inet4-only http://nginx.org/download/nginx-$NGINX_VERSION.tar.gz && \
    tar -xzvf nginx-$NGINX_VERSION.tar.gz && \
    cd nginx-$NGINX_VERSION && \
    ./configure $CONFIGURE_ARGUMENTS && \
    make && make install && \
    cd .. && rm -rf nginx-$NGINX_VERSION nginx-$NGINX_VERSION.tar.gz

RUN mkdir -p /var/cache/nginx/client_temp

# Expose ports
EXPOSE 80 443

# Start Nginx
CMD /usr/sbin/nginx -g "daemon off;"
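I build and run the image with the usual Docker commands, roughly like this (the tag, container name, and published ports below are just placeholders, not my exact invocation):

docker build -t nginx-custom .
docker run -d --name nginx-test -p 80:80 -p 443:443 nginx-custom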

My server environment is Rocky Linux 9.2 with Docker 24.0.2 (build cb74dfc) and Linux kernel 6.4.3-1.el9.elrepo.x86_64. The NGINX version is 1.25.1.

Does anyone have suggestions on why I might be seeing these results?



Answer

alexdo, Site Moderator
August 18, 2023

HTTP/2 isn’t necessarily always faster than HTTP/1.1. A lot of factors, such as TCP tuning, kernel settings, and HTTP/2’s single TCP connection per origin (versus HTTP/1.1’s multiple TCP connections), can influence the results.
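Before digging into config differences, it can also help to confirm which protocol version your benchmarking client actually negotiates on the HTTPS listener. A quick check with curl (assuming curl is available on the host; replace ip with your server address, and -k is only there because the test certificate is self-signed):

# Print the HTTP version curl negotiated with the server ("2" for HTTP/2, "1.1" otherwise)
curl -sk -o /dev/null -w '%{http_version}\n' https://ip/

If a client reports 1.1, its results are measuring HTTP/1.1 over TLS rather than HTTP/2.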

In your case, comparing the HTTP and HTTP/2 configurations, there are a few differences that could explain the gap:

  • worker_processes: you set it to auto for HTTP and 16 for HTTP/2. If 16 is higher than the number of CPU cores, it can hurt performance.
  • SSL settings: TLS adds handshake and encryption overhead, and the chosen cipher suite affects how large that overhead is.
  • Keepalive timeout: higher keepalive times hold connections open longer and can affect results under load.

Try to align your HTTP and HTTP/2 configs as much as possible so you can isolate the variables that actually affect performance.
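As a rough sketch (reusing the directives and paths already in your configs; adjust as needed), one way to do that is a single server block that serves the same content on both ports, so the only variable left is the protocol:

server {
    listen 80 default_server;
    listen 443 ssl default_server;

    # nginx 1.25.1+ directive; HTTP/2 is negotiated via ALPN on the TLS listener,
    # while plain-text clients on port 80 normally stay on HTTP/1.1
    http2 on;

    server_name _;
    root /var/www/html;
    index index.html;

    ssl_certificate     /etc/nginx/ssl/xxx.crt;
    ssl_certificate_key /etc/nginx/ssl/xxx.key;
    ssl_protocols TLSv1.2 TLSv1.3;
}

Running the same wrk command against both http:// and https:// with this single configuration makes it much clearer how much of the gap comes from TLS itself versus from the other differences between your two setups.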

Please remember that configurations may need to be tuned based on your application requirements and the underlying hardware.

Hope that this helps!
