Question

How to do this with MySQL

I want to know how a MySQL load balancer and multiple servers work together. My idea is that when the load balancer receives a request, it checks which MySQL server is least busy and hands the request to that server.

I also want to know if we can store data somewhere other than the server itself. For example: I want to have a load balancer, 10 MySQL servers, and then 100 databases.

I say this because what I have in mind is: a MySQL server is used only to parse SQL queries, look up data in the database, and return a result.

So I would like to have multiple MySQL servers, but store the data somewhere other than on these servers. Is that possible?


@qw0wwk2

No problem, shoot me an e-mail anytime :-).

I will say that Apache, in a real-world application, is not going to handle 10,000 requests per second by default, or without major tweaking and tuning. The majority of RPS tests boasting such high numbers are run on simple static files (such as a very basic index.html file), not on fully developed applications that are serving real data.

I could deploy a 2-8GB Droplet right now with a stock Apache configuration and it wouldn’t even be able to cope with that much traffic. For one, you’d run into socket/connection issues at the server and OS level if the system is not tuned for such activity. Even with such tuning, Apache isn’t the only factor. Using PHP as an example, with that much traffic, application-level tweaking and tuning gets tossed together with Apache, as you’re then fine-tuning (as a basic example) classes, methods, function calls, output, etc.

To serve a real application that relies on Apache/NGINX, PHP, MySQL and effective caching, I would not expect those results without a truly custom setup.

(sidenote: in situations like this, where 10,000 RPS is a real requirement, load balancers come into play and help mitigate the issues noted above :-)).

@qw0wwk2

Is your example with 10 MySQL servers truly what you’re looking to do, or is it a blown-up example? I ask because how you’d go about this really depends on how you want to start.

For a true load-balanced solution, your Web Servers, MySQL Servers, Caching Servers, etc. would all be independent (i.e. each on their own Droplet). You would then set up HAProxy, Varnish, or another piece of software as the primary access point, which would route each incoming request to the proper Droplet based on how you configure it to respond and on priority.

A very basic example layout would look something like:

– » HAProxy / Varnish

– – » NGINX

– – – » Caching Server (Memcached, Redis, etc.)

– – – – » MySQL (Master)

– – » NGINX

– – – » Caching Server (Memcached, Redis, etc.)

– – – – » MySQL (Slave)

HAProxy / Varnish would be the primary entry point (and where you point your domain to). It would then route requests to each of the NGINX servers based on priority, status, etc. The NGINX servers would be set up to mirror one another so that, regardless of which server is accessed, the requests made by your visitors are valid.
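As a rough sketch, a minimal HAProxy configuration for routing traffic across the two NGINX servers might look like this (the backend names and IPs are placeholders, not anything specific to your setup):

```
frontend www
    bind *:80
    default_backend nginx_pool

backend nginx_pool
    balance roundrobin
    # "check" enables health checks, so a failed server is taken
    # out of rotation and traffic fails over to the other automatically
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
```

With `balance roundrobin`, requests alternate between the servers; HAProxy also supports other algorithms (e.g. `leastconn`) if you'd rather send traffic to the least-busy backend.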

The MySQL servers would be set up in a Master/Slave configuration. The Master would perform both reads and writes (thus handling requests from your script), while the Slave would ideally be set up to only allow reads in the event the Master is down. You can, of course, set it up to allow both, though you’d need to make sure that the Slave syncs back to the Master and remains the active server until that sync is complete; otherwise you’d have requests sent to the Master for data that is only available on the Slave, and those requests would fail.

Anything created on the Master would be synced to the Slave in a basic Master/Slave setup. So if you created 100 databases, then those 100 databases would be synced (data included).
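A minimal sketch of that Master/Slave setup, using MySQL's classic replication statements (hostnames, the replication user, and the binlog coordinates below are placeholders you'd substitute from your own `SHOW MASTER STATUS` output):

```sql
-- In my.cnf on the Master: server-id=1, log_bin=mysql-bin
-- In my.cnf on the Slave:  server-id=2

-- On the Master, create a user the Slave will replicate as:
CREATE USER 'repl'@'%' IDENTIFIED BY 'secret';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On the Slave, point it at the Master and start replicating:
CHANGE MASTER TO
  MASTER_HOST='10.0.0.21',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;

-- Optionally make the Slave reject writes from ordinary users:
SET GLOBAL read_only = ON;
```

From that point on, every database and row created on the Master flows to the Slave through the binary log, which is why all 100 databases would be mirrored.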

For the caching servers, you would add each server via your script. Using PHP as an example, you’d make sure you have the PHP extension enabled, instantiate the Redis class, and add the servers. It's very similar to how you’d connect to MySQL (if with PHP, then by using PDO).
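A short sketch of what that looks like, assuming the phpredis extension and PDO; the hosts, credentials, and cache key are all placeholders:

```php
<?php
// Single cache server: instantiate the Redis class and connect.
$cache = new Redis();
$cache->connect('10.0.0.31', 6379);

// Multiple cache servers: RedisArray shards keys across them.
// $cache = new RedisArray(['10.0.0.31:6379', '10.0.0.32:6379']);

// MySQL via PDO, pointed at the Master (which handles reads and writes):
$db = new PDO('mysql:host=10.0.0.21;dbname=app', 'user', 'pass');

// Typical pattern: check the cache first, fall back to MySQL on a miss.
$value = $cache->get('users:count');
if ($value === false) {
    $stmt  = $db->query('SELECT COUNT(*) FROM users');
    $value = $stmt->fetchColumn();
    $cache->set('users:count', $value, 60); // cache for 60 seconds
}
```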

The idea is that everything would be seamless and that downtime, should it occur, would not be visible to the visitor – the goal is to eliminate the single point of failure.

That said, the SPOF in this example is the primary entry server. Until a short while ago, DigitalOcean did not provide a way around this, but they now offer “Floating IPs”, which allow you to resolve it. You can set the Floating IP to fail over to any Droplet, thus allowing you to set up multiple primary entry points.