How to do this with MySQL

Posted November 19, 2015 · 4.3k views
MySQL · Load Balancing

I want to know how a MySQL load balancer and multiple servers work. My idea is that when the load balancer receives a request, it checks which MySQL server is least busy and hands the request to it.

I also want to know if data can be stored somewhere other than the server itself. For example: I want to have a load balancer, 10 MySQL servers, and then 100 databases.

I say this because what I have in mind is: a MySQL server is used only to analyse SQL queries, look for data in the database, and return a result.

So I would like to have multiple MySQL servers, but store the data somewhere other than on these servers. Is that possible?


2 answers


Is your example with 10 MySQL servers truly what you’re looking to do, or is it a blown-up example? I ask because how you’d go about this really depends on how you want to start.

For a true load-balanced solution, your web servers, MySQL servers, caching servers, etc. would all be independent (i.e. each its own Droplet). You would then set up HAProxy, Varnish, or another piece of software as the primary access point, which would route each incoming request to the proper Droplet based on how you configure it to respond and on priority.

A very basic example layout would look something like:

– » HAProxy / Varnish

– – » NGINX
– – » NGINX

– – » Caching Server (Memcached, Redis etc)
– – » Caching Server (Memcached, Redis etc)

– – – » MySQL (Master)
– – – » MySQL (Slave)

HAProxy / Varnish would be the primary entry point (and where you point your domain). It would then route requests to the NGINX servers based on priority, status, etc. The NGINX servers would be set up to mirror one another so that, regardless of which server is accessed, your visitors’ requests are valid.
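As a rough sketch of what that entry point looks like, here is a minimal HAProxy configuration fragment for two NGINX backends. The hostnames and IPs are placeholders, not values from this thread:

```
# /etc/haproxy/haproxy.cfg (fragment) -- illustrative only
frontend www
    bind *:80
    default_backend nginx_pool

backend nginx_pool
    balance roundrobin              # or leastconn, etc.
    option httpchk GET /            # periodic health check
    server web1 10.0.0.11:80 check  # 'check' enables health checking
    server web2 10.0.0.12:80 check
```

With `check` enabled, HAProxy stops routing to a backend that fails its health check, which is the failover behavior described above.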

The MySQL servers would be set up in a Master/Slave configuration. The Master would be able to perform both reads and writes (thus handling requests from your script), while the Slave would ideally be set up to allow only reads in the event the Master is down. You can, of course, set it up to allow both, though you’d need to make sure that the Slave syncs back to the Master and remains the active server until that sync is complete; otherwise you’d have requests sent to the Master for data that is only available on the Slave, and those requests would fail.

Anything created on the Master would be synced to the Slave in a basic Master/Slave setup. So if you created 100 databases, then those 100 databases would be synced (data included).
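A hedged sketch of the basic Master/Slave replication wiring described above, using MySQL 5.x-era syntax; the server IDs, host, user, and log coordinates are placeholders you would replace with your own (the log file/position come from running `SHOW MASTER STATUS` on the Master):

```
# my.cnf on the Master (fragment)
[mysqld]
server-id = 1
log_bin   = mysql-bin      # binary log that the Slave replays

# my.cnf on the Slave (fragment)
[mysqld]
server-id = 2
read_only = 1              # the Slave only serves reads

-- then, run once on the Slave:
CHANGE MASTER TO
  MASTER_HOST     = '10.0.0.21',
  MASTER_USER     = 'repl',
  MASTER_PASSWORD = '...',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS  = 0;
START SLAVE;
```

Once replication is running, every database and table created on the Master is replayed on the Slave, which is why all 100 databases would be synced automatically.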

For the caching servers, you would add each server via your script. Using PHP as an example, you’d make sure the Redis PHP extension is enabled, instantiate the Redis class, and add the servers. It’s very similar to how you’d connect to MySQL (with PHP, by using PDO).
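The thread uses PHP’s Redis extension; as a language-neutral illustration of the cache-aside pattern those caching servers enable, here is a small Python sketch in which a plain dict stands in for the Redis client (the key format and `fetch_from_db` callback are made up for illustration):

```python
cache = {}  # stand-in for a Redis/Memcached client


def get_user(user_id, fetch_from_db):
    """Check the cache first; fall back to the database on a miss."""
    key = f"user:{user_id}"
    if key in cache:
        return cache[key]            # cache hit: no DB round trip
    value = fetch_from_db(user_id)   # cache miss: query MySQL
    cache[key] = value               # populate for next time
    return value


calls = []


def fake_db(uid):
    """Pretend database query; records how often it is actually hit."""
    calls.append(uid)
    return {"id": uid, "name": "example"}


get_user(1, fake_db)
get_user(1, fake_db)  # second call is served from the cache
print(len(calls))     # 1 -- the database was only queried once
```

The point of the caching tier is exactly this: repeated reads are absorbed by the cache servers instead of hitting MySQL every time.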

The idea is that everything is seamless and that downtime, should it occur, is not visible to the visitor: eliminate the single point of failure.

That said, the SPOF in this example is the primary server. Until a short while ago, DigitalOcean did not provide a way around this, but they now offer “Floating IPs”, which let you resolve it. You can set the Floating IP to fail over to any Droplet, allowing you to set up multiple primary entry points.

  • The real problem is: I don’t yet understand how MySQL load balancing and MySQL servers work.

    What I really want is this setup:

    something here
    load balancing
    —- apache server 1
    —- apache server 2

    something here
    load balancing

    —- mysql server 1
    —- mysql server 2

    —— database 1 database 2 .......... database 10

    I am not trying to do this work by myself;
    I know it is the work of many people.

    What I am trying to do is just understand how things work.
    To be specific: I have in my mind that a MySQL server is like an Apache server:
    it can receive an SQL query, then go to some folders to look for the data.

    So if this is true, I decided to have many MySQL servers, each in its own Droplet,
    and then to put the data itself on other Droplets.

    And this is my real question:
    can the data be separated from the server that executes the SQL queries?

    • @qw0wwk2

      In the simplest terms (without getting too technical), load balancing works by routing requests through a load balancer. For instance, when yourdomain.ext is accessed, the load balancer accepts the request and determines which server to send it to.

      The load balancer will normally ping the servers at a set interval to ensure that each of them is in fact responding, that way it doesn’t send a request to a server that’s down.

      To better simplify how a load balancer works, this is a basic example of how things go when someone types yourdomain.ext (i.e. your domain name) in the web browser:

      Request » Web Server » Response » Browser

      A request is made, the web server handles it and then responds. Once a response is issued, the result is displayed in the browser.

      With a load balancer:

      Request » Load Balancer » Query Defined Servers » Selected Web Server » Response » Browser

      A request is made, the load balancer accepts it and queries a defined list of servers, starting with the server given the highest priority. If that server responds, the request is pushed through to it. If not, it queries the next server and repeats the process until either a server accepts the request or there are no servers left to query (i.e. actual downtime, because nothing is responding to the load balancer’s query).
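The selection loop described above can be sketched in a few lines of Python. This is only a simulation of the logic: the health check is stubbed out with a dict lookup, whereas a real load balancer probes the servers over the network at an interval:

```python
def pick_server(servers, is_up):
    """Return the first healthy server in priority order, or None.

    servers: list of server names, ordered highest priority first.
    is_up:   callable standing in for the load balancer's health check.
    """
    for server in servers:
        if is_up(server):
            return server
    return None  # nothing responded: actual downtime


# Simulate: web1 is down, so the request falls through to web2.
status = {"web1": False, "web2": True, "web3": True}
print(pick_server(["web1", "web2", "web3"], status.get))  # web2
```

If every entry in `status` were False, `pick_server` would return None, which corresponds to the “nothing left to query” downtime case.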

      That said, Apache/NGINX and MySQL are two entirely different things :-). A web server handles a request and provides a response to it. A database server does not directly handle a request in most cases: a script connects to the database server (and, in turn, a database residing on it) and pulls data from a query. The script then uses the data the query provides to structure its output.

      As an example, PHP would use the PDO driver (a standard feature) to connect to MySQL. A PHP script might query a database full of products, specifically requesting product names and prices with a SELECT statement. MySQL will handle that query and respond if it’s valid. The PHP script will then handle how to format, structure, and display the data received from MySQL in the user’s browser.
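The answer describes this flow with PHP and PDO; as a language-neutral sketch of the same request/response cycle, here is a Python version using the stdlib sqlite3 module as a stand-in database server (the table and column names are invented for illustration):

```python
import sqlite3

# Stand-in database; in the PHP example you would connect with PDO instead.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (name TEXT, price REAL)")
db.executemany("INSERT INTO products VALUES (?, ?)",
               [("widget", 9.99), ("gadget", 24.50)])

# The script sends a SELECT; the database handles the query and returns
# rows; the script then decides how to format them for output.
for name, price in db.execute("SELECT name, price FROM products"):
    print(f"{name}: ${price:.2f}")
```

The division of labor is the point: the database server only answers queries, while the script (behind the web server) turns the rows into something a browser can display.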

      A web server is needed to handle HTTP/HTTPS requests, which are then passed off to your scripts (be they PHP or otherwise). Those scripts (not the web server) connect to the database server and in turn, output the data received.

      In a list-like format, your load balancer would be at the top of the list, the web servers right below the load balancer and the database servers right below the web servers (again, this is a very basic example, far more complex scenarios exist).

      You could introduce a load balancer between the MySQL servers, though you’re looking at a far more complex setup than just load balancing a few web servers.

      Ideally, a Master/Slave setup with replication would be something better to start with (and far easier to configure starting out) than trying to load balance everything :).

      The question I would also pose is: do you really need the hardware you’re proposing?

      A setup consisting of a load balancer, two web servers, two caching servers, and a Master/Slave MySQL configuration can handle quite a bit of traffic out of the gate and even more with proper configuration and optimization.

      Unless you’re serving traffic in the range of 1-5 million+ requests a month (or in a shorter period of time), or the scripts being run are extremely resource intensive, you most likely will not need 10 database servers from the start. A single MySQL server with 2-4GB of RAM could easily handle the creation of 100 databases and serve requests for each in a low-traffic scenario with few hiccups.

      • To be honest with you, when you said “A script connects to the database
        server (and in turn, a database residing on it) and pulls data from a query,”
        that let me know that the data resides on the MySQL server itself.

        I was thinking that the data could be stored on a machine where there is no MySQL server.

        I was thinking that a MySQL server is like an Apache server, and that I could
        have many MySQL servers to handle a lot of SQL queries.

        Here is the thing: I want to build something like a small social network, with up
        to 1 million users connected at the same time.

        I found that Apache can handle up to 10,000 requests per second,
        that I will need a lot of data,
        and that SQL queries will fail when tables become large
        or when a certain number of SQL queries per second is reached.

        So I began to think: what if a MySQL server is like an Apache server?
        What if I could have 10 MySQL servers with a load balancer in front of them,
        and then have, say, 1,000 small databases stored on other machines (not the
        machines where the MySQL servers are installed)?

        I was thinking that I could separate the data from the software (the MySQL
        server) that handles the SQL queries.

        All I needed was to know whether what I had in mind is really the case.

        Now what I am thinking is:

        I can have many MySQL servers, each handling around 100 databases.

        With those MySQL servers, I will see if I can put a load balancer in front of them.

        Then I will see if I can make each of those MySQL servers send requests to
        databases stored on other servers.

        Or I will wait until I have enough budget to hire a real team for this;
        I am just trying to get an idea of how it works.

        It seems that you are a pro. Could you tell me how much it would cost to design
        a solution for a small social network, including the related services (managing
        the setup)? Just an idea, if you do this kind of work. If this is not your line
        of work, then I am very thankful for your kind help, your time, and the
        explanation you gave me. Sorry for my bad English.

        • @qw0wwk2

          I’d be more than happy to discuss something like this with you privately :-). I would rather take such a discussion to e-mail, however, as DigitalOcean’s community isn’t the best place for such ;-).

          You’re more than welcome to get in touch with me at:

          jtittle [@]

          Simply replace the [@] with @ (trying to keep the spam at bay).

          • Thank you very much.
            Please give me time to do my homework properly before talking to you, so you
            can see how rough my ideas are, and be sure that the help you gave me today
            will not be lost. You are my first choice to work with, no matter how good
            other people may be, because you gave me your time and interest.


No problem, shoot me an e-mail anytime :-).

I will say that Apache, in a real-world application, is not going to handle 10,000 requests per second by default, or without major tweaking and tuning. The majority of RPS tests boasting such high numbers are run against simple static files (such as a very basic index.html), not against fully developed applications serving real data.

I could deploy a 2-8GB Droplet right now with a stock Apache configuration and it wouldn’t be able to cope with that much traffic. For one, you’d run into socket/connection issues at the server and OS level if the system is not tuned for such activity. And even with such tuning, Apache isn’t the only factor. Using PHP as an example, with that much traffic, application-level tweaking and tuning gets tossed in alongside Apache, as you’re then fine-tuning (as a basic example) classes, methods, function calls, output, etc.

To serve a real application that relies on Apache/NGINX, PHP, MySQL and effective caching, I would not expect those results without a truly custom setup.

(Side note: situations like this, where 10,000 RPS is a real requirement, are exactly where load balancers come into play and help mitigate the issues noted above :-).)