High Volume Traffic (25,000 simultaneous visitors)

July 1, 2014 5.8k views

Hi, we are expecting a spike in traffic next week after a promo. Does anyone know if upgrading the droplet to something like the highest of the high-volume plans:

20 Core Processor
640GB SSD Disk

would be enough to sustain 25,000 visitors at once, or 150,000 hits in a few minutes?

I know they have recommended a DEDICATED server, which a DO droplet is not. Does anyone have experience with this?

8 Answers

In my opinion that seems a bit excessive, having 20 cores and such. What sort of application are you running?

  • We're running a WordPress site with the shopping cart plug-in Shopp.

    Even if it's excessive to bulk our droplet up that much, I'd rather allot too much processing than have it all go down. @lukeberry99 do you have any experience with this sort of traffic spike?

    Thanks for your response!

Are you sure these are legitimate visitors, not a DDoS?

  • We are preparing for a promo event, and the numbers above are what they told us to prepare for, so yes it would be for legitimate visitors.

The traffic and CPU load should scale roughly linearly with the number of visitors you have, so you should be able to extrapolate from that. Keep in mind that shared resources (e.g. the hard drive) will become the bottleneck at some point; having 20 CPUs plaster the disk with requests won't help.
The biggest spikes I have experienced are about 1k visitors in 10 minutes, and on the smallest droplet I never went above 50% CPU load from those accesses alone, even with the number of database reads they create.
Perhaps you can stress test your server before the event starts? I don't know offhand what tools are available for this, but that would be a way to figure out the best configuration for the server.
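
To make that extrapolation concrete, here is a rough back-of-envelope calculation in Python. It assumes "a few minutes" means about 5, treats visitors and hits as comparable (they aren't exactly), and assumes the load really does scale linearly, which the shared-resource caveat above may break:

```python
# Back-of-envelope extrapolation from the numbers in this thread.
# Assumptions: "a few minutes" ~= 5 minutes, linear scaling, and
# visitors ~ hits -- all rough, so treat the result as an order of magnitude.

observed_visitors = 1_000        # biggest spike mentioned above
observed_window_s = 10 * 60      # over 10 minutes
observed_cpu_fraction = 0.50     # ~50% CPU on the smallest droplet

expected_hits = 150_000          # promo estimate from the question
expected_window_s = 5 * 60       # assumed "a few minutes"

observed_rate = observed_visitors / observed_window_s   # ~1.7 req/s
expected_rate = expected_hits / expected_window_s       # ~500 req/s

scale_factor = expected_rate / observed_rate            # ~300x the observed load
cpu_needed = scale_factor * observed_cpu_fraction       # ~150x the smallest droplet's CPU

print(f"observed ~{observed_rate:.1f} req/s, expected ~{expected_rate:.0f} req/s")
print(f"roughly {scale_factor:.0f}x the observed load, "
      f"or ~{cpu_needed:.0f}x the smallest droplet's CPU if scaling were linear")
```

Even if these numbers are off by a factor of a few, they suggest that keeping most hits away from PHP and the database (caching) will matter far more than adding cores.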

  • @bensnik Testing would be great if I can find a tool to do so; I'll look into it.

    I understand what you're saying about the shared resources. I imagine that at a certain point, even with DO's SSDs, no matter how much memory and how many processors we have, the shared resources will cause the site to fail (which is obviously why the promo company is recommending a dedicated server).

    I'll submit a ticket with DO to see if they have any data on this. Does anyone else have experience with this sort of traffic spike?

    Thanks so much for all your responses, truly helpful!

Your performance depends on your application and how it's configured.

I would recommend setting up your WordPress so it caches static pages for users who aren't logged in. You can use WP Super Cache with Nginx for that. Review this tutorial from Shopp, then integrate their recommended configuration into my WP Super Cache tutorial. Skip the section on Jetpack. It is very important to configure WP Super Cache with the Shopp exceptions, or your store will break.
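
One quick way to double-check the exceptions after setup is a small script that fetches a few pages and looks for WP Super Cache's footer comment. This is only a rough heuristic: it assumes the "Cached page generated by WP-Super-Cache" comment hasn't been disabled in the plugin settings, and the domain and the /shop/, /cart/, /checkout/ paths below are placeholders you would swap for your own:

```python
# Rough check that static pages ARE cached and Shopp pages are NOT.
# Heuristic only: it looks for WP Super Cache's footer comment, which is
# present unless you have disabled it in the plugin settings.
import urllib.request

SITE = "https://example.com"                               # placeholder domain
SHOULD_BE_CACHED = ["/", "/about/"]                        # ordinary pages
SHOULD_NOT_BE_CACHED = ["/shop/", "/cart/", "/checkout/"]  # hypothetical Shopp slugs

def looks_cached(path):
    html = urllib.request.urlopen(SITE + path, timeout=10).read().decode("utf-8", "replace")
    return "WP-Super-Cache" in html   # footer comment left by the cache plugin

for path in SHOULD_BE_CACHED:
    print(path, "looks cached:", looks_cached(path), "(expected True)")
for path in SHOULD_NOT_BE_CACHED:
    print(path, "looks cached:", looks_cached(path), "(expected False)")
```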

For load testing, you may want to try out Apache JMeter. I have also written a tutorial on that, here, but you will probably not be able to generate the kind of load that you mentioned with a single computer. Also, you would have to come up with a more realistic test case (than the one in the tutorial).

Using JMeter, I have tested a 1 CPU/1GB droplet with a static WordPress site. Without WP Super Cache, I could serve 2.5 simulated users/sec before CPU utilization was too high. With WP Super Cache, I could serve over 50 simulated users/sec. Remember, this won't reduce the load from users hitting your Shopp pages (because those are dynamic).

by Mitchell Anicas
In this tutorial, we will go over how to use Apache JMeter to perform basic load and stress testing on your web application environment. We will show you how to use the graphical user interface to build a test plan and run tests against a web server. JMeter is an open source desktop Java application that is designed to load test and measure performance.
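
If you just want a rough smoke test before building a full JMeter plan, a short script like the sketch below can do it. The URL and worker counts are placeholders, it doesn't model realistic user flows, and as noted above a single client machine won't get anywhere near 150,000 hits in a few minutes:

```python
# Very rough load smoke test: N concurrent workers fetching one URL.
# Not a substitute for JMeter -- no ramp-up, no think time, no user journeys.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"   # placeholder: point this at a page on your droplet
WORKERS = 20                   # concurrent threads (placeholder)
REQUESTS = 200                 # total requests to send (placeholder)

def fetch(_):
    start = time.time()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
            return resp.status, time.time() - start
    except Exception:
        return None, time.time() - start

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(fetch, range(REQUESTS)))

ok_times = [t for status, t in results if status == 200]
print(f"{len(ok_times)}/{REQUESTS} requests returned 200")
if ok_times:
    print(f"average response time: {sum(ok_times) / len(ok_times):.3f}s")
```

While it runs, watch CPU and disk on the droplet itself (e.g. with top) rather than trusting the client-side numbers alone.
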
  • Hi Mitchell, this is super helpful! We do have WP Super Cache (and yes it initially brought down the shopping pages because we didn't 'ignore' the shop pages) but I'll take a look at your tutorial and try and run a basic test with JMeter.

    I have nothing insightful to add at this point, but will create a follow up post if I learn anything of value.

    Thanks so much for everyone's responses!

I can recommend offloading the static content to a content delivery network (CDN). This will significantly reduce the outbound traffic on your droplet. I know from my own experience with DO that you can quickly run into issues if traffic spikes become too heavy (since you're sharing resources with others).
Using a CDN also has the advantage that the content is cached around the world, closer to your end users (versus a droplet, which sits in only one location).
For my projects I'm using KeyCDN. They offer a great service; it's the ideal CDN for my droplets.

I guess if you need good latency and throughput, this should be good hosting. Nowadays I use this CDN, which delivers files to my end users very quickly.

This is insane. If you need a machine that big, then there's something seriously wrong with the application's architecture. Vertical scaling (a bigger machine) is limited; horizontal scaling (lots of smaller machines, with a load balancer such as HAProxy delegating the work, which is the industry standard) is much more scalable and cost-effective, since you can add or remove smaller workers from the cluster.

The next bottleneck will likely be the database. For now, that will be the thing you'll need to scale up vertically; scaling a database horizontally requires a lot more thought and expertise. One suggestion would be to use a message bus, like RabbitMQ, which can accept orders but queue them up to be processed later. I'd suggest employing a contract architect for a little while to help you get this right.
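
As a minimal sketch of the message-bus idea (assuming RabbitMQ with the Python pika client; the queue name and order payload are made up), the web tier would only publish the order and return, while a separate worker consumes the queue at whatever pace the backend can handle:

```python
# Publish an order to a durable RabbitMQ queue instead of processing it inline.
# A separate worker process consumes "orders" at its own rate.
import json
import pika  # assumed client library: pip install pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Durable queue so queued orders survive a broker restart.
channel.queue_declare(queue="orders", durable=True)

order = {"order_id": 12345, "sku": "PROMO-ITEM", "qty": 1}  # hypothetical payload
channel.basic_publish(
    exchange="",                 # default exchange routes by queue name
    routing_key="orders",
    body=json.dumps(order),
    properties=pika.BasicProperties(delivery_mode=2),  # mark message persistent
)
connection.close()
```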
