Hi all,

Recently, maybe because of this lockdown thing, one of our clients' ecommerce sites is getting 3x its normal traffic, and we are dealing with so many timeouts that they can't even work properly because they can't check the orders in the admin panel.

We are using WooCommerce, and our stack is EasyEngine 3.7.5 on a $40 droplet (4 cores, 8 GB RAM). We usually deal with 60~120 people shopping at the same time, and it handled that easily with only the FastCGI cache enabled.

But since this recent crisis with people staying at home, we are dealing almost constantly with 240~300+ customers at any given time (I guess we don't get more because we are stretched thin on resources). In htop I see the 4 cores at 100% (with MySQL and PHP trading places as the top process in CPU usage), 122 processes "running", and load averages of 120+. My PHP-FPM max_children is set to 120, and I wonder if raising or lowering that number would make any difference. The problem is that once the 4 cores hit 100%, things start to drag and it spirals downward, since every request gets slowed down and consequently each customer takes longer to shop (pretty much like a traffic jam).
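
In case it helps, here is roughly how I have been trying to reason about max_children sizing; the process name, file path and numbers below are just my guesses for this droplet, so please correct me if they are off. First, measuring what an average PHP worker actually uses:

```bash
# Average resident memory per PHP-FPM worker (the process name may be
# php-fpm7.0, php5-fpm, etc. -- check with `ps aux | grep fpm` first)
ps -o rss= -C php-fpm7.0 | awk '{sum+=$1; n++} END {if (n) printf "%d workers, avg %.0f MB each\n", n, sum/n/1024}'
```

If each worker needs something like 60–80 MB, then 120 of them can in theory want more RAM than the droplet has once MySQL takes its share, and with only 4 cores most of those 120 "running" processes are just fighting over CPU. So a lower cap plus worker recycling is what I was thinking of trying, something like:

```ini
; /etc/php/7.0/fpm/pool.d/www.conf (guessing the path -- adjust for your PHP version)
pm = dynamic
pm.max_children = 60        ; example cap, sized from the measurement above
pm.start_servers = 15
pm.min_spare_servers = 10
pm.max_spare_servers = 20
pm.max_requests = 500       ; recycle workers to keep memory from creeping up
```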

How do you guys deal with this kind of unusual traffic? I enabled the W3 Total Cache object cache and database cache, but they didn't seem to make any difference.
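
One thing I am not sure about is whether the FastCGI cache is even being hit for most of this traffic, since, as far as I understand, logged-in and cart/checkout requests are usually excluded from the page cache. If EasyEngine doesn't already expose a cache status header, is adding something like this to the site's server block the right way to check?

```nginx
# In the vhost, e.g. /etc/nginx/sites-available/example.com (guessing the path);
# $upstream_cache_status reports HIT / MISS / BYPASS / EXPIRED per request.
add_header X-FastCGI-Cache $upstream_cache_status;
```

Then I think `nginx -t && service nginx reload` followed by `curl -I` on a product page would show whether typical shopper requests are HITs or BYPASSes.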

My knowledge is limited to internet tutorials, so I have no clue what else to do. I see that load balancing would probably be something to consider, but, again, I have no clue how to set it up. The store owner is having a hard time even managing the orders.

Thanks in advance.


1 answer

That’s precisely the kind of problem distributed systems solve. I would have a look at Kubernetes and managed databases. You can set Kubernetes up to automatically add nodes (Droplets) and replicas of your app to handle the spikes; look at the Horizontal Pod Autoscaler and the Cluster Autoscaler to achieve that.
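
For example, a Horizontal Pod Autoscaler scaling a hypothetical `shop` Deployment on CPU would look roughly like this (the names, replica counts and threshold are placeholders, not something specific to your setup):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: shop-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: shop          # placeholder Deployment name
  minReplicas: 2
  maxReplicas: 8
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU passes 70%
```

The Cluster Autoscaler (or the node pool autoscaling that managed Kubernetes offerings provide) then adds or removes nodes when pods can't be scheduled on the existing ones.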

  • Thanks, I guess it is time to do some research then.

    On a second matter, if possible: what would happen if I lowered PHP max_children to, let's say, 60 (half the present config)? I am thinking that if I shorten the queue it will lower the delivery time, but as I said I am no expert. Is this thinking correct, or a complete misunderstanding on my part? Again, thanks for replying.

    • I’m not into PHP at all, so I can’t help you there. But at a high level, the idea of a cluster is that you have an entry point, namely a load balancer, that forwards requests to one of the replicas of your app based on load. The cluster can recover from errors by itself: it restarts failed replicas, stops routing requests to them, and adds replicas under higher load, which in turn can mean adding nodes when the currently available ones don’t offer enough resources to schedule all the replicas. You can set minimums and maximums so that you stay within a certain budget, and so on. Once set up correctly, it should be able to run by itself for a very long time.

      Now, the main issue I see is that it requires some basic knowledge and experience to set it up and to deploy the app and the necessary resources. If you are starting from scratch and want a quick solution, I would turn to someone who specializes in this and pay them for the service. Your use case is fairly simple (a monolith and a database), so it should be a rather quick job!
