This is more of a conceptual question about when the appropriate time is to split a database server off to its own droplet.
So say you have a LAMP-type solution running on a 2 vCPU / 4 GB droplet and you think it's time to add some capacity. Doubling the size of the droplet is easy, but is that the right way to go? Spinning up a separate database server is another option, with some advantages and some disadvantages.
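For context, the mechanics of the split itself are straightforward. A minimal sketch of the cutover (the file path, the 10.x private IPs, and the user/database names are placeholders, not anything specific to this setup):

```
# /etc/mysql/mysql.conf.d/mysqld.cnf on the new database droplet:
# listen on the droplet's private-network IP instead of localhost
[mysqld]
bind-address = 10.132.0.5

-- then, in the mysql shell, allow connections from the web
-- droplet's private IP:
CREATE USER 'appuser'@'10.132.0.6' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON appdb.* TO 'appuser'@'10.132.0.6';
```

After that, the web app's DB host changes from `localhost` to the database droplet's private IP, and port 3306 never needs to be open on the public interface at all.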
Advantages:
Disadvantages:
I’m split right down the middle on the advantages vs disadvantages and thought I might find some interesting and informative opinions in the DO community.
By the way, I also found this good article that touches on related points: https://alexpareto.com/scalability/systems/2020/02/03/scaling-100k.html
Answer: “You’ll know.”
But seriously, you outlined all of the advantages and disadvantages of rolling it out to its own droplet. The same considerations apply when using DO's hosted solutions, including latency and the amount of control you have over the system itself.
We had a situation where we were hosting two load balancers in front of six gluster-enabled nginx/PHP servers atop two database servers in an active/passive configuration. It handled massive amounts of traffic well, and when there was a problem on one node it didn't compromise the cluster.
I am a huge proponent of deploying servers to meet demand. As rsharma said, it gives you the flexibility to add instances and to access that data from other sources. While that is still possible with an on-server instance, the CPU/resource gains on the web server are worth it in my opinion.
Lastly, the private networking DO offers eases my mind: I don't have to expose the servers to the public, and it makes latency less of a concern. You won't match the response time of having the database on the same server, but running over private networking within the same datacenter keeps the latency minimal.
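On the latency point, you can measure it for yourself before committing. A rough sketch in plain Python (nothing DO-specific; the host and port are whatever you want to test against, e.g. the private IP of a candidate database droplet on 3306 versus 127.0.0.1):

```python
import socket
import time

def tcp_connect_latency(host, port, attempts=5):
    """Average TCP connect time in seconds to host:port.

    A crude stand-in for the per-query network round trip you'd pay
    with a remote database -- it ignores TLS and MySQL handshakes,
    but it's enough to compare localhost vs. private networking.
    """
    total = 0.0
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            total += time.perf_counter() - start
    return total / attempts
```

Comparing `tcp_connect_latency("127.0.0.1", 3306)` against the same call with the private IP gives you a concrete number for the overhead you'd be accepting, rather than guessing.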
In addition to the advantages you mentioned, another one I can think of is that with a separate database server it becomes much easier to build other apps (or more instances of the same web app, if it's stateless), API endpoints, internal utilities, etc. that need to view the same data. If the database were living on the same server as the web server, all of that other traffic would unnecessarily compete with the server's primary job: serving web traffic.
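A tiny sketch of that pattern, with hypothetical names (`DB_HOST`, `appdb` are just examples): every instance, whether web app, API, or internal tool, reads the same database host from its environment, so pointing a new consumer at the shared database is a configuration change, not a code change.

```python
import os

def db_dsn():
    """Build the app's database DSN from the environment.

    Each instance (web, API, cron job) sets DB_HOST to the database
    droplet's private IP; the fallback keeps local development working.
    """
    host = os.environ.get("DB_HOST", "127.0.0.1")
    return f"mysql://app@{host}:3306/appdb"
```

With the database on its own droplet, spinning up a second stateless web instance or a reporting tool is just the same image with the same `DB_HOST`.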