Importing and exporting databases is a common task in software development and system administration. You can use data dumps to back up and restore information, duplicate environments, or migrate data to a new server or hosting provider.
In this tutorial, you will work with database dumps in MySQL or MariaDB (the commands are interchangeable). You'll start by exporting a full database to a .sql file using mysqldump, and then learn how to import that file using the mysql client. You'll then explore how to export and import only specific tables, how to migrate user accounts and privileges, and how to handle large SQL dump files efficiently. These skills are essential for routine backups, environment cloning, production migrations, and troubleshooting data issues across different environments.
Key Takeaways:

- mysqldump is the standard tool for logical backups. It exports a database's structure and data into a .sql file that can be transferred and restored easily across systems.
- You can import a dump into a new or existing database using the mysql client. Before importing, ensure the target database exists; use CREATE DATABASE if needed.
- Selective table exports are supported. You can export and import only specific tables with mysqldump by listing the table names after the database name.
- User accounts and privileges must be migrated separately. Since user credentials are stored in the mysql system database, you must export the relevant grant tables or generate GRANT statements manually.
- Compressed exports save space and speed up transfers. You can pipe mysqldump output through gzip and decompress during import to optimize storage and performance.
- Use --single-transaction and --quick for large InnoDB databases. These options create consistent, non-blocking exports and reduce memory usage.
- Large SQL files can be split into chunks for smoother imports. Using the split command, you can break large dump files into manageable pieces to avoid timeouts or memory issues.
- Common issues, like "table already exists" errors, can be avoided with flags like --add-drop-table. Always inspect or modify the dump file to match the destination environment and prevent unintended overwrites.
To import or export a MySQL or MariaDB database, you will need a server with MySQL or MariaDB installed, along with a database user account whose privileges allow the operations described below.
Note: As an alternative to manual installation, you can explore the DigitalOcean Marketplace’s MySQL One-Click Application.
Exporting and importing databases is a routine part of database administration. Whether you’re backing up your data, restoring it to a new environment, or migrating between servers, you’ll often need to create a database dump and then load that dump into a different database instance.
In this section, you'll first export the contents of an existing MySQL or MariaDB database to a .sql file using mysqldump. Then, you'll import that file into a new database using the mysql command-line client. These tools are available by default when MySQL or MariaDB is installed, and they work the same way for both systems.
The mysqldump console utility exports databases to SQL text files. This makes it easier to transfer and move databases. You will need your database's name and credentials for an account whose privileges allow at least full read-only access to the database.
Use mysqldump to export your database:
- mysqldump -u username -p database_name > data-dump.sql
- username is the username you can log in to the database with.
- database_name is the name of the database to export.
- data-dump.sql is the file in the current directory that stores the output.

The command will produce no terminal output, but you can inspect the contents of data-dump.sql to check that it's a legitimate SQL dump file.
Run the following command:
- head -n 5 data-dump.sql
The top of the file should look similar to this, showing a MySQL dump for a database named database_name:

SQL dump fragment
-- MySQL dump 10.13  Distrib 5.7.16, for Linux (x86_64)
--
-- Host: localhost Database: database_name
-- ------------------------------------------------------
-- Server version 5.7.16-0ubuntu0.16.04.1
If any errors occur during the export process, mysqldump will print them to the screen.
To import an existing dump file into MySQL or MariaDB, you will have to create a new database. This database will hold the imported data.
First, log in to MySQL as root or another user with sufficient privileges to create new databases:
- mysql -u root -p
This command will bring you into the MySQL shell prompt. Next, create a new database with the following command. In this example, the new database is called new_database:
- CREATE DATABASE new_database;
You’ll see this output confirming the database creation.
Output
Query OK, 1 row affected (0.00 sec)
Then exit the MySQL shell by pressing CTRL+D. From the normal command line, you can import the dump file with the following command:
- mysql -u username -p new_database < data-dump.sql
- username is the username you can log in to the database with.
- new_database is the name of the freshly created database.
- data-dump.sql is the data dump file to be imported, located in the current directory.

If the command runs successfully, it won't produce any output. If any errors occur during the process, mysql will print them to the terminal instead. To check whether the import was successful, log in to the MySQL shell and inspect the data: select the new database with USE new_database; and then use SHOW TABLES; to view the list of tables and verify the imported data.
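For example, the verification inside the MySQL shell looks like this:

- USE new_database;
- SHOW TABLES;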
In some cases, you may want to export or import only a few specific tables instead of the entire database. This can be useful when working with large databases, debugging, or migrating only a subset of your data.
To export one or more specific tables, pass the table names as arguments to the mysqldump command after the database name:
- mysqldump -u username -p database_name table1 table2 > selected-tables.sql
- username is your database user account.
- database_name is the name of the database containing the tables.
- table1, table2, etc., are the specific tables you want to export.
- selected-tables.sql is the output file containing the exported table data and structure.

For example, to export only the users and orders tables from a database named store, you would run:
- mysqldump -u root -p store users orders > users-orders.sql
This command will generate an SQL dump containing only the specified tables and their data. You can confirm the contents by opening the file in a text editor or inspecting the top few lines:
- head -n 10 users-orders.sql
To import specific tables from a dump file, use the mysql command just as you would for a full database import. However, make sure that the destination database already exists before running the import.
- mysql -u username -p target_database < selected-tables.sql
For instance, to import the users and orders tables from the previous dump into a new database named test_store, use:
- mysql -u root -p test_store < users-orders.sql
This will recreate and populate only the specified tables in the test_store database. If the tables already exist in the destination database, they will be overwritten unless the dump file was generated with options to skip table creation or inserts.
To verify the imported tables, log in to the MySQL shell and check the tables:
- mysql -u root -p
Then, in the MySQL prompt:
- USE test_store;
- SHOW TABLES;
You should see only the imported tables listed.
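Continuing the earlier users and orders example, the output should look similar to:

Output
+----------------------+
| Tables_in_test_store |
+----------------------+
| orders               |
| users                |
+----------------------+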
When migrating a MySQL or MariaDB database, it's not enough to copy just the data. You also need to ensure that the right user accounts and privileges come with it. These credentials and permissions aren't stored within individual databases; they live in a special system database called mysql, which maintains all user access control information.
By default, mysqldump exports only the structure and data of an application database. It doesn't include user accounts, their passwords, or access rules unless you explicitly ask for them.
There are two common methods for migrating user accounts and their privileges. Each approach has its own trade-offs depending on whether you’re optimizing for completeness or portability.
Method 1: Exporting Grant Tables from the mysql Database

This method captures the key internal tables that store user account details and their associated privileges. It's a fast and direct way to replicate user access, especially when migrating between similar server versions.
To export users along with their global, database-level, table-level, column-level, and routine-level privileges, run:
- mysqldump -u root -p mysql user db tables_priv columns_priv procs_priv > users_and_privileges.sql
This command will create a SQL dump file containing:

- user: user accounts and their global privileges
- db: database-level privileges
- tables_priv: table-level privileges
- columns_priv: column-level privileges
- procs_priv: privileges on stored procedures and functions

Important: Do not dump the entire mysql database. It contains internal metadata such as server configuration and plugin data, which may not be compatible with your destination server. Stick to just the grant-related tables.
Once the dump file is available on the destination machine, you can import it directly into the mysql system database:
- mysql -u root -p mysql < users_and_privileges.sql
After the import, manually reload the grant tables so that the server recognizes the changes:
- FLUSH PRIVILEGES;
This command can be run from the MySQL shell. It makes all imported user accounts and their permissions active immediately—no need to restart the database server.
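If you prefer not to open an interactive shell for this single statement, it can also be run directly from the command line with the client's -e flag:

- mysql -u root -p -e "FLUSH PRIVILEGES;"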
Method 2: Generating GRANT Statements

If you're migrating to a different version of MySQL or MariaDB, or if you want more control and transparency over the users you transfer, this method is often preferred. Instead of copying raw system tables, it extracts the actual GRANT statements that define user privileges.
You can generate a SQL script of GRANT statements for all non-system users using the following shell one-liner:
- mysql -B -N -u root -p -e "SELECT CONCAT('SHOW GRANTS FOR ''', user, '''@''', host, ''';') FROM mysql.user WHERE user NOT IN ('mysql.infoschema', 'mysql.session', 'mysql.sys', 'root')" \
- | mysql -B -N -u root -p \
- | sed 's/$/;/' > all_user_grants.sql
Here's what this command does:

- The first mysql invocation selects all non-system users from the mysql.user table and builds a SHOW GRANTS statement for each user.
- The second mysql invocation executes those statements and prints the resulting grants.
- The sed command appends a semicolon to each line to ensure valid SQL syntax.
- The output is saved to all_user_grants.sql.

This file is a clean, readable list of GRANT statements that can be applied on another server.
Open all_user_grants.sql in a text editor and review it. You can remove any lines related to users you don't want to migrate.
To apply the privileges on the new server, run:
- mysql -u root -p < all_user_grants.sql
Note that MySQL 8.0 and later will not create missing accounts via GRANT, so on those servers you should run CREATE USER for each account before applying its grants. Since the GRANT statements themselves assign the privileges, a manual FLUSH PRIVILEGES is not strictly necessary, but running it afterward is good practice:
- FLUSH PRIVILEGES;
This method is generally safer and more portable across different MySQL or MariaDB versions, as it avoids directly importing internal system tables.
Handling Large .sql Files and Slow Imports

When working with large databases, imports can be noticeably slow or resource-intensive. Large .sql files take longer to process, and depending on the server's hardware and configuration, this may result in timeouts, locked tables, or degraded performance.
Here are a few strategies to help you manage and speed up the import process for large database dumps.
You can compress a database dump file using gzip to reduce file size and speed up transfer times. The mysqldump output can be piped directly into gzip, eliminating the need to store an uncompressed version.
- mysqldump -u username -p database_name | gzip > database_name.sql.gz
To import from the compressed file, use gunzip to decompress the data and stream it into the mysql client:
- gunzip < database_name.sql.gz | mysql -u username -p database_name
This method is especially helpful when moving data across networks or dealing with storage-constrained environments.
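If the pv utility is installed on your system, you can also place it at the front of the import pipeline to get a progress bar while a large compressed dump loads:

- pv database_name.sql.gz | gunzip | mysql -u username -p database_name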
During import, MySQL enforces foreign key constraints and performs index updates for each inserted row. You can temporarily disable these checks to improve performance.
At the start of your .sql file, or before running the import, disable the constraints:
SET foreign_key_checks = 0;
SET unique_checks = 0;
SET autocommit = 0;
At the end of the import, re-enable them and commit the changes:
SET foreign_key_checks = 1;
SET unique_checks = 1;
COMMIT;
You can add these lines manually to your dump file or run them interactively before and after the import. This reduces overhead during large insert operations.
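If you'd rather not edit the dump file, one option is to wrap the import in a small shell pipeline; this is a sketch assuming a dump named data-dump.sql:

- { echo "SET foreign_key_checks=0; SET unique_checks=0; SET autocommit=0;"; cat data-dump.sql; echo "SET foreign_key_checks=1; SET unique_checks=1; COMMIT;"; } | mysql -u username -p database_name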
Using the --quick and --single-transaction Flags

When exporting large InnoDB databases, add the --quick and --single-transaction options to your mysqldump command:
- mysqldump -u username -p --single-transaction --quick database_name > database_name.sql
- --single-transaction creates a consistent snapshot of the database without locking tables.
- --quick streams rows directly to the output file without loading them all into memory.

This combination is ideal for large InnoDB tables and ensures minimal impact on a running database during the export process.
For extremely large dump files, you can split the file into smaller, more manageable parts using the split command. This can help avoid hitting memory or timeout limits during the import.
- split -l 5000 large_dump.sql chunk_
This command creates multiple files named chunk_aa, chunk_ab, etc., each containing 5,000 lines. You can then import them sequentially:
for file in chunk_*; do
mysql -u username -p database_name < "$file"
done
This approach allows you to monitor progress and recover more easily if the process is interrupted.
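Note that the loop above will prompt for the password once for each chunk. If it fits your security requirements, a common workaround (an assumption on our part, not something the tutorial requires) is a ~/.my.cnf option file restricted with chmod 600, which the mysql client reads automatically:

[client]
user=username
password=your_password

With that file in place, the loop can call mysql database_name < "$file" without the -u and -p flags.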
Using LOAD DATA INFILE for Bulk Data

If your data is available in plain .csv or .tsv format, you can use the LOAD DATA INFILE command for significantly faster bulk imports. This method bypasses standard SQL parsing and inserts data directly into the table.
- LOAD DATA INFILE '/path/to/file.csv'
- INTO TABLE table_name
- FIELDS TERMINATED BY ','
- LINES TERMINATED BY '\n'
- IGNORE 1 LINES;
This is one of the fastest ways to import large volumes of raw tabular data into MySQL or MariaDB, but it does require your data to be in a structured, delimited format.
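Note that many servers restrict where LOAD DATA INFILE may read from via the secure_file_priv setting. If the server rejects your file path, either move the file into the permitted directory or, provided local_infile is enabled on both client and server, read the file from the client machine instead:

- LOAD DATA LOCAL INFILE '/path/to/file.csv'
- INTO TABLE table_name
- FIELDS TERMINATED BY ','
- LINES TERMINATED BY '\n'
- IGNORE 1 LINES;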
Common Mistakes and How to Avoid Them

Exporting and importing MySQL or MariaDB databases is a critical task, especially in production environments. While the process is generally straightforward, small oversights can lead to data loss, failed imports, or broken applications. This section outlines some of the most common mistakes and how to avoid them effectively.
A common mistake is accidentally importing a .sql file into the wrong database, especially when working across multiple environments. This can overwrite or corrupt existing data without warning. Always double-check the database name before running the import command:
- mysql -u username -p target_database < data-dump.sql
You can also use SHOW TABLES; and SELECT COUNT(*) FROM table_name; in the MySQL shell after the import to confirm the expected data is present.
The mysql import command assumes the destination database already exists. If it doesn't, you'll encounter an error such as Unknown database. Make sure to create the database first:
- CREATE DATABASE new_database;
Alternatively, use the --databases option with mysqldump during export to include the CREATE DATABASE and USE statements in the dump file.
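For example:

- mysqldump -u username -p --databases database_name > data-dump.sql

A dump created this way can be imported without pre-creating the database:

- mysql -u username -p < data-dump.sql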
If the user account used for export or import lacks the required privileges, operations will fail. For example:
- mysqldump may fail without the SELECT, LOCK TABLES, or SHOW VIEW privileges.
- mysql may fail to import data if the user doesn't have INSERT, CREATE, or ALTER permissions.

Use a user account with full privileges, such as root, if you're unsure. Always check for permission errors in the terminal output.
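If you'd rather not use root, a dedicated export account with just the necessary privileges can be created along these lines (backup_user and its password are hypothetical examples, not accounts from this tutorial):

- CREATE USER 'backup_user'@'localhost' IDENTIFIED BY 'a-strong-password';
- GRANT SELECT, LOCK TABLES, SHOW VIEW ON database_name.* TO 'backup_user'@'localhost';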
Not Using --add-drop-table During Export

Without the --add-drop-table flag, importing a dump file into a database that already contains tables with the same names will result in errors like Table already exists. This flag ensures that each table is dropped before being recreated:
- mysqldump -u username -p --add-drop-table database_name > data-dump.sql
This is especially important when re-importing data into an existing development or staging environment.
User accounts, roles, and permissions are not included in standard database dumps. If you forget to export these separately, users on the destination server won’t be able to connect or perform actions. You can:
- Export the relevant grant tables from the mysql database (e.g., user, db, tables_priv)
- Generate GRANT statements using SHOW GRANTS

Always run FLUSH PRIVILEGES; after importing grant data to apply changes.
Different servers may use different default character sets (e.g., latin1 vs. utf8mb4), leading to corrupted or unreadable text after import. To avoid issues:
Check the character set and collation on both source and target servers:
- SHOW CREATE DATABASE database_name;
Use the --default-character-set option with mysqldump and mysql:
- mysqldump -u username -p --default-character-set=utf8mb4 database_name > dump.sql
This ensures consistent encoding of your data across environments.
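The same option works on the import side:

- mysql -u username -p --default-character-set=utf8mb4 database_name < dump.sql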
Very large .sql files can take a long time to import and may hit resource limits. Common symptoms include out-of-memory errors or server timeouts. To reduce import time and load:

- Run SET foreign_key_checks = 0 and SET autocommit = 0 before the import
- Use --single-transaction during export for InnoDB tables
- Compress the dump file with gzip or split it using the split command

Monitoring server performance during import can also help you spot bottlenecks early.
A simple yet common issue is trying to import a .sql file that doesn't exist in the specified path or cannot be read by the current user. Always check file existence and permissions:
- ls -l data-dump.sql
Make sure the file has the correct read permissions (-rw-r--r-- or similar) and that you are in the correct directory when running your import command.
Avoiding these mistakes can save time, prevent data loss, and ensure a smoother workflow when working with MySQL or MariaDB databases. Always test your process in a non-production environment before running critical imports or migrations.
Frequently Asked Questions

What is the difference between mysqldump and a binary backup?

mysqldump creates a logical backup by exporting database contents as SQL statements. It's portable, human-readable, and ideal for migrating data between servers or versions. In contrast, a binary backup copies the actual data files on disk. Binary backups are faster for large datasets and include everything (including non-SQL data like logs), but they are tied to the server's file structure and version compatibility. Logical dumps are safer for cross-version or cross-platform moves, while binary backups are better for full, same-server restores.
Can I import a MySQL dump into MariaDB?

Yes. MySQL and MariaDB are highly compatible, and mysqldump files created from a MySQL database can usually be imported into MariaDB without modification. However, if the dump includes features or syntax introduced in newer MySQL versions (e.g., JSON functions or specific storage engine settings), you may need to review the dump file and adjust incompatible statements before import.
How do I export only the database structure without the data?

To export only the table structure (schema) and skip the data, use the --no-data flag with mysqldump:
- mysqldump -u username -p --no-data database_name > schema_only.sql
This creates a dump file that contains all CREATE TABLE statements but omits INSERT statements. It's useful for duplicating database structures or generating templates for development environments.
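Conversely, if you ever need the data without the schema, mysqldump supports the inverse through the --no-create-info flag:

- mysqldump -u username -p --no-create-info database_name > data_only.sql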
How do I fix a "Table already exists" error during import?

This error occurs when the target database already contains tables with the same names as those in the dump file. To resolve it, you have a few options:
Drop existing tables manually, or include DROP TABLE IF EXISTS statements in the dump by adding --add-drop-table to your export command:
- mysqldump -u username -p --add-drop-table database_name > data-dump.sql
Create a new empty database before the import to avoid conflicts.
Edit the dump file and remove the conflicting table statements if you want to skip them.
Always review the contents of the dump file before importing into a production environment to ensure it doesn't contain destructive operations like DROP TABLE or TRUNCATE.
Can I export multiple databases at once?

Yes. You can use the --databases option with mysqldump followed by a space-separated list of database names:
- mysqldump -u username -p --databases db1 db2 db3 > multi-database-dump.sql
This command will include CREATE DATABASE and USE statements for each database, making it easier to restore them later into the same or another server.
How do I export all databases on a server?

To export every database on your MySQL or MariaDB server, use the --all-databases option:
- mysqldump -u root -p --all-databases > all_databases.sql
This creates a full backup including system databases like mysql, which contains user accounts and privileges. Only use this for full-server migrations or disaster recovery.
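To restore such a dump on the destination server, feed it to the mysql client without naming a database, since the file carries its own CREATE DATABASE and USE statements:

- mysql -u root -p < all_databases.sql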
Is it safe to import a dump into a production database?

Importing into a production database should be done with caution. If the dump includes DROP TABLE or INSERT statements, it can overwrite or duplicate data. For safety, review the dump file for destructive statements such as DROP or TRUNCATE before importing, and test the process in a non-production environment first.
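One simple safety net is to snapshot the target database before importing (the backup filename here is illustrative):

- mysqldump -u root -p target_database > pre-import-backup.sql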
Why is my import running slowly?

Slow imports can be caused by large datasets, enabled foreign key checks, frequent index updates, or insufficient server resources. To speed things up:

- Use the --quick and --single-transaction flags during export.
- Compress the dump file with gzip.
- Use LOAD DATA INFILE for large raw datasets if possible.

Conclusion

In this tutorial, you learned how to export and import MySQL or MariaDB databases using mysqldump. You covered full database transfers, selective table exports, migrating user accounts and privileges, and strategies for handling large dump files efficiently. These techniques are essential for backups, server migrations, environment replication, and recovery tasks. Mastering them will help you manage your databases more reliably across development and production systems.
To learn more about mysqldump, check out the official mysqldump documentation page.
To learn more about MySQL, check out the following tutorials:
The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.
Nice and straightforward tutorial, thanks.
It would be great if you could write on a similar topic at an advanced level, for example, how to automate MySQL backups and upload them to an external destination (Dropbox/Google Drive) via a cron job, and furthermore keep only the last 3-5 dumps and delete older ones. I think it would be a very useful doc for most of us, especially me. I have found many tutorials on the net, but honestly, DO's tutorials are the best for beginners in terms of understanding. I would really appreciate it if you did that.
Thank you for your article! It’s very helpful.
One thing:
The command will produce no visual output, but you can inspect the contents of filename.sql to check if it’s a legitimate SQL dump file by using:
And then:
head -n 5 data-dump.sql
Do you mean “you can inspect the contents of data-dump.sql to check if it’s a legitimate SQL dump file by using…”? Or filename.sql is some other file?
If your password contains special characters, you must wrap it in quotes:
--password='Y0Ürp4$$W0rd??'
Why not keep it simple…? Just create the db and then, within mysql, write:
mysql> source data-dump.sql
Assuming, of course, your dump is in your home directory and you have already selected your new db with USE.
When exporting, an error occurs: mysqldump: Error: 'Access denied; you need (at least one of) the PROCESS privilege(s) for this operation' when trying to dump tablespaces
If you would like to dump all databases in MySQL instead of just one database, you can use the same command, but add the --all-databases flag instead of specifying a particular database.
Instead of:
$ mysqldump -u username -p database_name > data-dump.sql
You could do:
$ mysqldump -u username -p --all-databases > alldatabases.sql
Here we use the --all-databases flag to dump all databases instead of specifying a particular one.
Note: the output will be a single dump file, alldatabases.sql, that includes all the databases you have access to, not a single file per database.
A few command issues we see people getting into:
mysqldump: Error: 'Access denied; you need (at least one of) the PROCESS privilege(s) for this operation' when trying to dump tablespaces
The simple way to solve this is to add the --no-tablespaces flag to your mysqldump command. You could also solve it by updating your user privileges: GRANT PROCESS ON *.* TO user@localhost; (note it has to be granted at the global level)
Simon, Founder and CEO, SnapShooter (DigitalOcean Backups)