MySQL replication and management


I would like to have two separate Droplets with the same operating system (CentOS 7) and MySQL, set up with master-master replication. For example, changes on Server_A would be copied to Server_B every day, and if Server_A crashes, Server_B takes over and serves the website. How can I do this, and is there a graphical interface you could recommend for managing the replication (setting schedules, selecting which tables are replicated, changing the IP of the replicated server, etc.)?



I’d recommend taking a look at this guide to get the ball rolling:

There isn’t a GUI for this, so it would all have to be done manually. To manage failover, you’d do it one of two ways: with a load balancer, or in your application code.

NGINX can be used as a load balancer to handle TCP requests on port 3306 using the stream module (a slightly different setup than a standard NGINX configuration). Alternatively, you can implement failover at the application level: the app checks whether server X is down and, if it is, falls back to server Y or Z.
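The application-level approach could be sketched like this. This is a minimal example in Python; the host names, port, and the injectable `probe` function are illustrative assumptions, not something from the original answer:

```python
import socket

def first_reachable(hosts, port=3306, timeout=1.0, probe=None):
    """Return the first host that accepts a TCP connection, or None.

    `probe` can be injected for testing; by default it attempts a real
    TCP connection with a short timeout.
    """
    if probe is None:
        def probe(host):
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False
    for host in hosts:
        if probe(host):
            return host
    return None

# The app would then connect to whichever MySQL server is up first,
# e.g. db_host = first_reachable(["10.0.0.1", "10.0.0.2"])
```

The short timeout matters here: without it, a crashed server can stall every request for the full TCP connect timeout before the fallback is ever tried.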

Since networking is often more reliable than application code for handling the initial request, I’d most likely recommend using NGINX as a load balancer to handle the TCP requests.

Using NGINX and the stream module, you can set up a new block, above http, called stream. That would look something like:

events { }

stream {
    upstream db {
        # Replace with your database Droplet's private IP.
        server 10.0.0.1:3306;
    }

    server {
        listen 3306;
        proxy_pass db;
        proxy_connect_timeout 1s;
    }
}

http {
    # Your existing web server configuration goes here.
}

In the above I’m only using one server in the upstream, though multiple could be added. It works much like HTTP proxying does (and load balancing in general).
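To add the second Droplet as a hot standby, you could list it in the same upstream with the backup parameter, so NGINX only sends traffic to it when the primary is unreachable. The IPs below are placeholders for your Droplets’ private addresses:

```nginx
upstream db {
    server 10.0.0.1:3306 max_fails=2 fail_timeout=5s;  # Server_A (primary)
    server 10.0.0.2:3306 backup;  # Server_B, only used when Server_A is down
}
```

With max_fails and fail_timeout, NGINX marks the primary as unavailable after two failed connection attempts within five seconds, and routes new connections to the backup until the primary recovers.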

NGINX now listens for requests on port 3306 and, when it receives one, proxies it to the upstream db server.

This is just a basic example, but it’s one I actually have in testing right now, and it works very well.