This is more of a conceptual question about when it's appropriate to split a database server off to its own droplet.

So say you have a LAMP-type solution running on a 2 vCPU / 4 GB droplet and you think it's time for more capacity. Doubling the size of the droplet is easy, but is that the right way to go? Spinning up a separate database server is another option, with its own advantages and disadvantages.
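
For what it's worth, the application-side change is usually just the database host in the connection settings. Here's a minimal sketch assuming a PHP/PDO app; the private IP, database name, user, and environment variables are made-up placeholders:

```php
<?php
// Hypothetical sketch: connect to a database droplet over the private network
// instead of localhost. Host, database name, user, and env vars are placeholders.
$dbHost = getenv('DB_HOST') ?: '10.132.0.5'; // private-network IP of the DB droplet

$pdo = new PDO(
    "mysql:host={$dbHost};dbname=app;charset=utf8mb4",
    'app_user',
    getenv('DB_PASSWORD') ?: '',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);
```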

Advantages:

  • An isolated database can't be dragged down by overloaded processes on the web/application server.
  • The database is further isolated from a security perspective.
  • Problem diagnosis and performance monitoring are easier because the web and database loads are separated.
  • Maintenance and upgrades are simpler because the web application can be pointed at a new/replacement database server that is already tested and operational.

Disadvantages:

  • Network latency on every request to the database server, even within the same datacenter (see the quick timing sketch after this list).
  • Less overall headroom for a spike in resource needs by either the web application processes or the database.
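
If you want to put a number on the latency point before deciding, a rough sketch like this (placeholder IP and credentials) times a trivial query over the private network:

```php
<?php
// Rough latency check: average the round-trip time of a trivial query against
// the remote database droplet. IP and credentials are placeholders.
$pdo = new PDO('mysql:host=10.132.0.5;dbname=app', 'app_user', getenv('DB_PASSWORD') ?: '');

$iterations = 100;
$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    $pdo->query('SELECT 1')->fetch();
}
printf("average round trip: %.2f ms\n", (microtime(true) - $start) * 1000 / $iterations);
```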

I’m split right down the middle on the advantages vs disadvantages and thought I might find some interesting and informative opinions in the DO community.

3 answers

In addition to the advantages you mentioned, another one I can think of is that with a separate database server it becomes much easier to build other apps (or more instances of the same web app, if it's stateless), API endpoints, internal utilities, etc. that need access to the same data. If the database were living on the same server as the web server, all of that extra traffic would unnecessarily compete with the server's primary job: serving web traffic.

“When do you move your database to a separate server?”

Answer: “You’ll know.”

But seriously, you've outlined the main advantages and disadvantages of rolling it out to its own droplet. The same considerations apply to DO's hosted database solutions as well, including latency and how much control you have over the system itself.

We had a setup with two load balancers in front of 6 gluster-enabled nginx/PHP servers on top of 2 database servers in an active/passive configuration. It handled massive amounts of traffic well, and when there was a problem on one server it didn't compromise the cluster.

I am a huge proponent of deploying servers to meet need. As rsharma said, a separate database gives you the flexibility to add instances and access that data from other sources. While that's still possible with an on-server database, the CPU/resource gains on the web server are worth it in my opinion.

Lastly, the private networking DO offers eases my mind: I don't have to expose the database server to the public, and it makes latency less of a concern. You won't match the response time of having the database on the same server, but over private networking within the same datacenter the latency is minimal.
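
To illustrate the "not exposed to the public" part, one common approach (sketched below with placeholder private IPs; config paths vary by distro and MySQL version) is to bind MySQL to the database droplet's private interface and firewall the port so only the web droplet can reach it:

```sh
# In the MySQL config on the database droplet (path varies, e.g.
# /etc/mysql/mysql.conf.d/mysqld.cnf), listen only on the private IP:
#
#   [mysqld]
#   bind-address = 10.132.0.5    # DB droplet's private-network IP (placeholder)

# Then allow MySQL traffic only from the web droplet's private IP (placeholder):
sudo ufw allow from 10.132.0.4 to any port 3306 proto tcp
sudo ufw deny 3306/tcp
sudo systemctl restart mysql
```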

By the way, I also found this good article that touches on related points: https://alexpareto.com/scalability/systems/2020/02/03/scaling-100k.html

Summary:

  • 1 User: 1 Machine
  • 10 Users: Split Out the Database Layer
  • 100 Users: Split Out the Clients
  • 1,000 Users: Add a Load Balancer
  • 10,000 Users: Add a CDN
  • 100,000 Users: Scale the Data Layer