Question

Migrating to a Scalable WordPress Solution

TL;DR: I want to manage and configure multiple LEMP stacks with Terraform and Chef/Ansible to host WordPress sites without losing data when a droplet is destroyed.

Here are the questions that I am looking to have answered:

  • Are Terraform and Chef/Ansible the right direction to move in to help me manage multiple WordPress sites?
  • Is it better to update the NGINX configuration by destroying and rebuilding a droplet with Terraform, or to change the file across all servers with Chef/Ansible?
  • If I use Terraform to configure NGINX, PHP, the WordPress config files, etc., how do I protect the WordPress database and uploads data so that the site stays live without interruption when Terraform destroys a droplet?
  • Is block storage or GlusterFS a better solution for managing the data of hundreds of different WordPress sites?
  • Is Chef or Ansible better for this purpose?

Here is the context for those questions: I design and host WordPress websites in my area. For lack of a deeper understanding of automation, I have been manually deploying the sites on DigitalOcean droplets and configuring each one independently. The only difference between the servers is the WordPress database and uploads data, which I feel justifies the move to automate deployment through Terraform. I guess what I am looking for is a critical ear to interpret my plan and guide me in the right direction.

The plan is to set up a centralized management server which will host Terraform for infrastructure and docker automation, Chef (or Ansible) for configuration management, and an OpenVPN server for channeling secure access to WordPress servers. All servers are built on a LEMP stack and have carefully secured and optimized NGINX configurations.

The goal is to set it up so that when a client requests a website I can easily spin up a new droplet through Terraform, which containerizes the LEMP stack and configures everything (like the NGINX files) with Chef or Ansible. I would also like to be able to update server files in one place and have the change propagate across all servers. This is where I could really use some guidance.
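For the provisioning piece, here is a minimal Terraform sketch of what spinning up a per-client droplet could look like, assuming the DigitalOcean provider; the resource name, image slug, region, and size below are placeholders:

```hcl
# Minimal sketch, assuming the DigitalOcean provider is configured with an
# API token; all names and slugs below are placeholders.
variable "do_token" {}

provider "digitalocean" {
  token = var.do_token
}

# One droplet per client site; in practice this would be parameterized
# per client with variables or count/for_each.
resource "digitalocean_droplet" "client_site" {
  name   = "wp-client-example"
  image  = "ubuntu-18-04-x64"
  region = "nyc3"
  size   = "s-1vcpu-1gb"
}

# The address you would hand to Chef/Ansible (or DNS) after creation.
output "droplet_ip" {
  value = digitalocean_droplet.client_site.ipv4_address
}
```

From there, a Chef or Ansible run against `droplet_ip` would lay down the NGINX and PHP configuration.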

Does it make sense to update configuration files (like NGINX or PHP) with Chef/Ansible, or does it make more sense to rebuild the droplets with Terraform so there is no configuration drift? If I do it with Terraform, the next problem is figuring out a way to keep the WordPress sites and data live while Terraform destroys a droplet. There are some pretty good tutorials on here about setting up GlusterFS or block storage, so I could potentially host all of the WordPress data on separate servers and use Terraform to manage only the processing servers.
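If you go the configuration-management route, updating a file across every server is a one-playbook job. A rough Ansible sketch, assuming an inventory group called `wordpress` and a template file kept in one place (both names are placeholders):

```yaml
# Push one shared NGINX server block to every WordPress host, and reload
# NGINX only on the hosts where the file actually changed.
- hosts: wordpress
  become: true
  tasks:
    - name: Deploy shared NGINX config
      template:
        src: templates/wordpress.conf.j2
        dest: /etc/nginx/sites-available/wordpress.conf
      notify: Reload nginx

  handlers:
    - name: Reload nginx
      service:
        name: nginx
        state: reloaded
```

A single `ansible-playbook -i inventory nginx.yml` run then updates all servers at once; rebuilding with Terraform instead avoids drift entirely, but forces you to solve the data-persistence question first.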

I appreciate the community taking the time to read this and any feedback or guidance would be greatly appreciated.



jarland
DigitalOcean Employee
July 11, 2018
Accepted Answer

(Some mistakes were made in this reply, see comments that follow it)

Hello friend!

That sounds like a lot of fun any way you spin it. I’m going to do my best to answer what I can, and I hope that others feel compelled to weigh in as well. My perspective alone won’t cover all of the advice you’re looking for, but I still have some thoughts I’d like to share.

My position on Terraform, Chef, and Ansible is “maybe” on all points. I think someone who uses them more intensively than I do might have reasons to lean in a more clear direction. I hope someone will see the opening I’m leaving there and run with it.

What I think I have the most insight on is this:

  • Making individual servers irrelevant, and their destruction non-destructive

Here’s what I’m thinking, just as a rough idea:

  • 2x MySQL with master-master replication behind HAProxy
  • 2x LEMP stacks with data in sync via GlusterFS, both behind HAProxy (or our Load Balancer service)
  • Automation which spins up a new server, adds it to the GlusterFS cluster, connects it to the MySQL HAProxy instance, then adds it to the load balancer serving the LEMP stacks (see the sketch after this list)
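To give a feel for that last step, here is a rough sketch of folding a new droplet into an existing replicated GlusterFS volume; the hostnames, volume name, and paths are placeholders, and it assumes a replica-2 volume being grown to replica 3:

```sh
# From an existing cluster member, add the new droplet to the trusted pool.
gluster peer probe lemp-03

# Grow the replica set with a brick on the new node.
gluster volume add-brick wpdata replica 3 lemp-03:/data/glusterfs/wpdata/brick1

# On the new node, mount the volume where NGINX/WordPress expects the files.
mount -t glusterfs lemp-03:/wpdata /var/www
```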

You can use our block storage for the data stores, but a volume can only be mounted to one droplet at a time. You would end up needing a volume for each stack, with something like GlusterFS keeping the volumes in sync across those droplets. Destroying one droplet would inevitably make its attached volume fall out of sync with the rest, requiring a fresh sync with a new droplet. Unless you need the additional storage, using volumes here seems to increase complexity for no particular gain.

Now one might imagine having 2x data stores in the cluster and mounting them over the network to multiple servers at once. In my experience, this is more functional in theory than in practice, and I have never personally found a protocol that lives up to how great it could theoretically be. The most functional implementation I have had on that path has been a davfs Docker volume.

I hope that at least helps you with some considerations on this :)

Kind Regards, Jarland

Have you considered Docker at all? It’s pretty awesome and solves your specific use case nicely. I also host quite a few WordPress sites, plus a number of more complex WordPress installs that use things like decoupled React frontends via the REST API. We also manage hosting for Fortune 500 clients, and having a site go down or losing data is simply not an option; it could cost some of our clients hundreds of thousands, if not millions, of dollars if their site went down for a day or we lost any data. We set up WordPress to operate as closely to a 12-factor application as possible (https://12factor.net).
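As a rough sketch of what 12-factor-style WordPress looks like in practice, all environment-specific configuration comes in through environment variables, so the container itself stays disposable. This uses the official `wordpress` image’s documented variables; the hostnames and credentials are placeholders:

```yaml
version: "3"
services:
  wordpress:
    image: wordpress:fpm
    environment:
      # All stateful, per-environment config lives outside the container.
      WORDPRESS_DB_HOST: db.example.internal:3306
      WORDPRESS_DB_USER: wp_user
      WORDPRESS_DB_PASSWORD: change-me
      WORDPRESS_DB_NAME: client_site
```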

We also use Docker in development, which gives us 100% parity between our development and production environments. When adding things like plugins or themes, we always add them to the filesystem locally and build them into the Docker image rather than uploading via wp-admin. Plugin and theme updates and additions are all handled in our local dev environment and fully tested before being pushed to staging and eventually production. We generally disable plugin/theme installs and updates inside wp-admin as well.
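A minimal Dockerfile sketch of that workflow, assuming the official `wordpress` base image (whose entrypoint copies `/usr/src/wordpress` into the web root on first start); the paths are placeholders:

```dockerfile
FROM wordpress:fpm

# Themes and plugins are version-controlled and baked into the image at
# build time, so nothing is ever installed through wp-admin in production.
COPY wp-content/themes/  /usr/src/wordpress/wp-content/themes/
COPY wp-content/plugins/ /usr/src/wordpress/wp-content/plugins/
```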

Images, videos, etc. are offloaded to DigitalOcean’s new Spaces object storage using the ILAB Media Cloud plugin. Historically we always used Amazon S3 for this, but that can get pretty expensive. Spaces plus a good CDN like imgix (or DO’s upcoming Spaces CDN functionality) is all you need to make sure uploads aren’t lost when a server or container goes down, and you get the added benefit of globally available, super-fast image loading.

We currently use a set of external MySQL servers set up with master/master replication behind a load balancer, just as Jarland describes above. We also have an internal project that uses a MariaDB cluster within the Docker environment, but we aren’t 100% keen on running production databases within Docker just yet.

Rancher has been our Docker orchestration platform of choice up to this point. It’s a pretty incredible system that handles a lot of the tedious DevOps work and is really easy to set up and use. Its catalog lets you add custom application stacks or templates that developers without DevOps knowledge can launch quickly. Rancher also has a full REST API, which lets you automate literally everything.

DigitalOcean’s upcoming Kubernetes release will make container management even easier and should fulfill your requirements out of the box, as it will integrate DO’s object storage (Spaces), load balancers, DNS, etc. for a complete managed experience. You’ll simply launch your apps via Kubernetes YAML files, and they will handle the management of the Kubernetes cluster itself. Much easier than messing with Chef, Ansible, GlusterFS, etc. You also get the added benefit of being able to quickly launch highly available apps with Let’s Encrypt SSL certs, monitoring, and more across $5 DO droplets, and autoscale as traffic and server load increase. You can literally run a three-node cluster for $15 with this method. The ability to automate everything and to manage costs more granularly are the driving factors behind our agency moving to Docker almost exclusively.
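For a sense of what “launching via Kubernetes YAML files” means, here is a rough sketch of a WordPress Deployment; the image tag, secret name, and DB host are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 2              # two pods behind a Service/load balancer
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress   # Apache variant, listens on port 80
          ports:
            - containerPort: 80
          env:
            - name: WORDPRESS_DB_HOST
              value: mysql.default.svc.cluster.local
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-credentials
                  key: password
```

Scaling up is then just a matter of raising `replicas` or attaching a HorizontalPodAutoscaler.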
