How to increase performance of a PHP app in Kubernetes?

Posted January 13, 2022 104 views
Kubernetes · DigitalOcean Managed Kubernetes · DigitalOcean Droplets

I’ve just started using Kubernetes. I have a Laravel application running on a Basic Droplet with 4 vCPUs/8GB memory. An ApacheBench run gives me ~160 requests/sec. Running the same test against the same Laravel application, but this time on Kubernetes on a 40vCPU/160GB Droplet, gives me only ~40 requests/sec. I’m using one DO Load Balancer. Even if I use only one 4vCPUs/8GB Droplet I get ~35 requests/sec. I’ve checked the logs and the requests are being processed by all pods in use. Is it generally slower to run all requests through the load balancer -> ingress controller -> nginx -> php-fpm?


1 answer

This is a really interesting question, but there are a lot of details to explore to get to the bottom of it.

The first thing I would note is that requests per second might not be the best metric to measure. You should probably measure average response time or, even better, P99 latency.
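To illustrate what P99 means in practice: it is the latency value that 99% of requests come in under, so a handful of slow outliers dominate it even when the average looks fine. A rough sketch, using made-up latency samples (in a real test you could export per-request times from ab with its `-g` output and extract the time column):

```shell
# Made-up latency samples in ms, one per line (these values are illustrative).
printf '%s\n' 12 15 11 13 200 14 12 13 15 900 > latencies.txt

# P99 = the value at the 99th percentile position of the sorted samples.
sort -n latencies.txt | awk '{a[NR]=$1} END {i=int(NR*0.99); if (i<1) i=1; print "p99_ms=" a[i]}'
```

Here the average is ~120 ms, but the P99 is 200 ms, which is much closer to what an unlucky user actually experiences.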

Also, do you know how ApacheBench is running its test? How many parallel requests does it perform? One at a time? Something else?

Also, do you know the baseline processing time of your Laravel app? In other words, ignoring network time, how long does it take for your app to process a request? If your app is super fast, then even the smallest network slowness will significantly change your requests/sec: Compare (100ms network + 1ms processing) to (150ms network + 1ms processing).
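To make that concrete, here is the back-of-the-envelope math for a single serial connection, using the illustrative 100 ms vs 150 ms network figures from above:

```shell
# Requests/sec for one serial connection = 1000 ms / (network + processing).
awk 'BEGIN {
  printf "101 ms total -> %.1f req/s per connection\n", 1000/101
  printf "151 ms total -> %.1f req/s per connection\n", 1000/151
}'
```

So a 50 ms difference in network path alone cuts per-connection throughput by about a third, even though the app itself did nothing differently.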

That said, I think it’s reasonable that you will have some overhead if you use a Kubernetes cluster with a Load Balancer compared to an all-in-one droplet. You basically end up with something like:

  • Load balancer
  • Kubernetes node (nginx ingress)
  • Kubernetes node (Laravel -> nginx + php-fpm)
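One way to see where time goes along that chain is curl's `-w` timing variables, which split a single request into DNS, connect, TLS, and time-to-first-byte phases (`https://your-domain/` below is a placeholder for your real hostname):

```shell
# Break one request into phases; a large gap between tls and ttfb points at
# the backend (ingress/nginx/php-fpm) rather than the network or load balancer.
curl -so /dev/null \
  -w 'dns=%{time_namelookup}s connect=%{time_connect}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
  https://your-domain/
```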
  • Hi nabsul,

    thanks for your detailed answer.

    I ran the ApacheBench test with:

    ab -n 1000 -c 100 https://domain/

    And when accessing the page during the test over a cellular network on my smartphone, the server was extremely slow (~50 sec to load the page), even when I had 4 x 40vCPU/160GB nodes with lots of pods running on them. I also increased the number of load balancer nodes to 4, but that didn’t help either. And after checking the logs after the tests, it seemed all the pods were involved.
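One thing worth checking with `-c 100`: if the php-fpm pool in each pod allows far fewer workers than the number of concurrent requests hitting it, requests queue up and response times explode, which would match the ~50 sec loads. The pool size lives in the fpm pool config; the numbers below are purely illustrative, not recommendations:

```ini
; www.conf — illustrative pool sizing; derive pm.max_children from the
; pod's memory limit divided by per-worker memory, not from these numbers.
pm = dynamic
pm.max_children = 20
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10
```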

    I also did the test with a static file served directly by nginx, and that test was also much faster on the Basic 4vCPUs/8GB Droplet than on the Kubernetes cluster.

    I think I have to check all the involved parts again and try to find the bottleneck. I also think I’ll have to set up more advanced monitoring that covers all of them.

    Btw: the goal is not just to increase the requests/sec, but primarily to keep the server responsive even under high load.

    • 50 seconds for a response is definitely not normal. There must be something odd in your setup.

      Kubernetes will add some overhead (multiple levels of routing, containers vs direct OS installation, etc.), but all of that should never add more than 100ms. I run my personal site on a DO Kubernetes cluster and can easily achieve response times lower than 100ms (from my home on a fiber connection, though).

      • In the meantime I’ve managed to improve the speed. Instead of using TCP load balancing I’ve changed it to HTTPS. Also, I terminate SSL at the ingress controller now.
        When running ApacheBench tests with ~500 concurrent requests, the server still responds within a few hundred milliseconds. The requests/sec for a static HTML page also increased to ~900; for a PHP page it’s ~400 now. For now I’m pretty happy with the results. Thanks!
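For anyone landing here later: the TCP-to-HTTPS switch described above is driven by annotations on the ingress controller's Service, which DigitalOcean's cloud controller manager reads when it configures the load balancer. A rough sketch under those assumptions (the certificate ID is a placeholder, and names like `ingress-nginx-controller` depend on how you installed the ingress):

```yaml
# Sketch: Service annotations that switch a DO load balancer from
# plain TCP to HTTPS with a managed certificate.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "<your-cert-id>"
spec:
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: https
```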