@s0litaire
Your mileage may vary with this one, but one method you may be able to use without manually breaking down each table is `split`. This command lets you break your SQL dump into multiple smaller SQL files, which you can then pipe to MySQL.
To test this, I created a dummy single-table database called `posts` containing 2.9GB worth of various data, dumped it to an SQL file called `posts.sql`, and then transferred it over to a 512MB Droplet with MariaDB installed (stock installation, no configuration changes or optimizations).
On the 512MB Droplet, I created `/usr/local/src/db` using:
mkdir -p /usr/local/src/db
I pulled the `posts.sql` dump file over to that directory using `wget`, so the file is:
/usr/local/src/db/posts.sql
I then created a `splits` directory:
mkdir -p /usr/local/src/db/splits
and ran:
split -n 250 posts.sql splits/sql_
That splits the `posts.sql` file into 250 chunks, places them in the `splits` directory, and prefixes each one with `sql_`, so each split looks like `sql_a...`. You could technically split it down even more, but I'm not sure how much it'd help.
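If you'd rather control the chunk size than the chunk count, `split` also accepts `-b`. Here's a quick sketch on a small dummy file (the `/tmp` paths are purely illustrative; substitute your real dump):

```shell
# Split by size (-b) instead of by count (-n).
# A 1 MiB dummy file split into 100 KiB chunks yields
# 10 full chunks plus one 24 KiB remainder = 11 files.
mkdir -p /tmp/dbsize/splits
head -c 1M /dev/urandom > /tmp/dbsize/posts.sql
split -b 100K /tmp/dbsize/posts.sql /tmp/dbsize/splits/sql_
ls /tmp/dbsize/splits | wc -l
```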
You can run `ls /usr/local/src/db/splits` to confirm the count.
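If you want the exact number rather than eyeballing the listing, pipe it through `wc -l`. Here's the whole sequence sketched on a small dummy file (illustrative paths; point it at your real `splits` directory instead):

```shell
# Reproduce the split on a dummy file, then count the chunks;
# with -n 250 the count should come back as exactly 250.
mkdir -p /tmp/dbcount/splits
head -c 1M /dev/urandom > /tmp/dbcount/posts.sql
split -n 250 /tmp/dbcount/posts.sql /tmp/dbcount/splits/sql_
ls /tmp/dbcount/splits | wc -l
```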
Once you have the splits, you can run:
cat /usr/local/src/db/splits/sql* | mysql -u dbuser -p dbname
And that’ll pipe all of the data to the database. It’s not super-fast (at all), but short of manually breaking the database down into chunks of tables, it’s one option that I can confirm worked on a 2.9GB table.
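One thing worth knowing: `split -n` cuts at byte boundaries, so the individual chunks may end mid-statement and aren’t valid SQL on their own; only the concatenated stream is. You can verify the chunks reassemble byte-for-byte before piping them anywhere (sketched on a dummy file; paths are illustrative):

```shell
# Confirm that cat-ing the chunks back together reproduces the
# original file exactly (lexicographic glob order matches split's
# default aa, ab, ... suffix order).
mkdir -p /tmp/dbcheck/splits
head -c 1M /dev/urandom > /tmp/dbcheck/posts.sql
split -n 250 /tmp/dbcheck/posts.sql /tmp/dbcheck/splits/sql_
orig=$(md5sum < /tmp/dbcheck/posts.sql)
joined=$(cat /tmp/dbcheck/splits/sql_* | md5sum)
[ "$orig" = "$joined" ] && echo "chunks reassemble cleanly"
```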
The table was created and filled using:
DROP TABLE IF EXISTS posts;
CREATE TABLE `posts` (
`author_id` int(11) NOT NULL,
`title` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
`description` varchar(500) COLLATE utf8_unicode_ci NOT NULL,
`content` text COLLATE utf8_unicode_ci NOT NULL,
`date` date NOT NULL
)
....
....
....
....
Where `....` is quite a few INSERT statements (to the tune of 2.9GB), roughly 17 million.
That said, it did cut it close. This knocked the free RAM on the Droplet down to about 15MB (with 272MB used as cache) and held a consistent CPU load of about 1.0, maxing it out (the load mainly comes from I/O wait time), but it didn’t crash, fail, or error out.
Again, though, it’s slow. You may be waiting an hour or more using this method, so don’t expect it to be as fast as the normal restoration methods.
If you’re running anything else on the Droplet (Apache/NGINX, PHP-FPM, etc.), I’d stop those processes first and have only MySQL running while using this method.
General note: if you’re not using Ubuntu’s `snapd` service, you can free up a little RAM by disabling it.
systemctl stop snapd
systemctl disable snapd
`snapd` lets you install certain packages provided by Ubuntu, much like `apt`, but for most people it’s neither needed nor a critical service. If you don’t use it, or you’ve never heard of it, you can disable it without any issues.
On my 512MB Droplet it was using about 10MB of RAM, with the ability to use as much as 200MB. Not really something I’d keep running on a memory-constrained system.