WordPress Scaling Infrastructure

September 24, 2015
Deployment WordPress Server Optimization Nginx Ubuntu

In reference to this article: https://www.digitalocean.com/community/tutorials/automating-the-deployment-of-a-scalable-wordpress-site

Is there a nice "starting" point I can implement that will allow me to adopt this model later? For instance, can I simply start out with a web server and a MariaDB server (2 servers), with storage on the web server? Then, when growth is needed, separate the storage out into the file cluster model and add another web server and a load balancer?

Would this be doable, or would it add a lot of complexity since it requires separating the storage from the web server after the fact?

Also, for SSL, I'm assuming the certificate would need to be loaded on all web servers or would the load balancer be able to handle it? Would I need a wildcard in this scenario?

1 Answer

This would be doable and shouldn't add too much complexity. Whether your web root is on the local disk or on a GlusterFS share should not make any difference to how WordPress itself is configured (aside from possibly changing the web root if your GlusterFS share is not mounted at /var/www/html).
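As a rough sketch of the migration step, assuming a GlusterFS volume named `wp-vol` served from a node called `gluster01` (both hypothetical names), mounting it at the default WordPress web root might look like:

```sh
# Create the mount point and mount the (hypothetical) GlusterFS volume
# at WordPress's default web root
sudo mkdir -p /var/www/html
sudo mount -t glusterfs gluster01:/wp-vol /var/www/html

# Persist the mount across reboots; _netdev waits for the network
echo 'gluster01:/wp-vol /var/www/html glusterfs defaults,_netdev 0 0' | sudo tee -a /etc/fstab
```

You would copy the existing web root onto the volume before switching the mount over, so WordPress itself sees no change.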

For SSL, you would install the certificate on the load balancer, since that is the server end users actually connect to. Terminating SSL there means you don't need to load the certificate on each backend web server, and a standard (non-wildcard) certificate is fine if you're serving a single domain.
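A minimal sketch of SSL termination on an Nginx load balancer, assuming two hypothetical backends (`web01.internal`, `web02.internal`) and placeholder certificate paths:

```nginx
upstream wordpress_backend {
    # Hypothetical backend web servers; traffic to them stays plain HTTP
    server web01.internal:80;
    server web02.internal:80;
}

server {
    listen 443 ssl;
    server_name example.com;

    # The certificate lives only on the load balancer
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_pass http://wordpress_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # Let WordPress know the original request was HTTPS
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

Note that WordPress needs to honor the `X-Forwarded-Proto` header (typically via a small check in wp-config.php) or you can end up with redirect loops when the site URL uses https.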

  • Hey Ryan,

    Thanks for the followup. I have a couple more questions that hopefully you can answer.

I've been doing a lot more research on this model and I have found some concerns relating to users logged in to WordPress in a load-balanced environment. The two concerns are that there is only one DB server and that PHP sessions will be locked to a single host.

Both of these items can be remedied by implementing a Redis server to store the object cache from the DB and the PHP sessions. My question regarding this is: where would I place the Redis server? Somewhere off to the side? I know that in each wp-config file you can tell the web servers to reference the Redis server. So that might be it, actually...

    Also, I'm not sure that I'll ever get this big, but when it comes time to add db servers, would it be better to add servers in a master/slave config or master/master behind another load balancer?

As for Redis, I believe you can simply add replica servers and use Sentinel to handle monitoring and failover. Still researching here.

My ultimate goal is to have dedicated caching and to decouple as much as possible, and that's why I love this model. The load-balanced Nginx (fastcgi_cache) web servers will handle page caching and optimization, and Redis will serve the object cache for database calls and the PHP sessions for logged-in users. Granted, each web server will probably have a different set of cached pages... but I want to think it'll be okay. At least the persistent objects will be cached on Redis and available to all web servers.

One of the biggest issues I'm running into is articles based on vertically scaled servers that have MySQL, Nginx, fastcgi_cache, and Redis all installed on one machine. That's insane, but moreover, it doesn't help me answer my questions and leaves me banging my head against the wall :)
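    For what it's worth, the wp-config side of the shared Redis setup described above can be sketched roughly as follows. This assumes an object-cache plugin such as Redis Object Cache (which reads the `WP_REDIS_HOST`/`WP_REDIS_PORT` constants) and PHP's phpredis session handler; the hostname is a placeholder:

    ```php
    <?php
    // In wp-config.php on EVERY web server, pointing at the shared Redis box.
    // 'redis01.internal' is a placeholder hostname.
    define( 'WP_REDIS_HOST', 'redis01.internal' ); // read by the Redis Object Cache plugin
    define( 'WP_REDIS_PORT', 6379 );

    // Store PHP sessions in Redis instead of on the local disk, so any web
    // server behind the load balancer can serve a logged-in user's requests.
    ini_set( 'session.save_handler', 'redis' );
    ini_set( 'session.save_path', 'tcp://redis01.internal:6379' );
    ```

    Because every web server points at the same Redis instance, session affinity ("sticky sessions") on the load balancer becomes unnecessary.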
