How To Import and Export Databases in MySQL or MariaDB

Updated on August 4, 2025

Introduction

Importing and exporting databases is a common task in software development and system administration. You can use data dumps to back up and restore information, duplicate environments, or migrate data to a new server or hosting provider.

In this tutorial, you will work with database dumps in MySQL or MariaDB (the commands are interchangeable). You’ll start by exporting a full database to a .sql file using mysqldump, and then learn how to import that file using the mysql client. You’ll then explore how to export and import only specific tables, how to migrate user accounts and privileges, and how to handle large SQL dump files efficiently. These skills are essential for routine backups, environment cloning, production migrations, or troubleshooting data issues across different environments.


Key Takeaways:

  • mysqldump is the standard tool for logical backups. It exports a database’s structure and data into a .sql file that can be transferred or restored easily across systems.

  • You can import a dump into a new or existing database using the mysql client. Before importing, ensure the target database exists. Use CREATE DATABASE if needed.

  • Selective table exports are supported. You can export and import only specific tables using mysqldump by listing the table names after the database name.

  • User accounts and privileges must be migrated separately. Since user credentials are stored in the mysql system database, you must export relevant grant tables or generate GRANT statements manually.

  • Compressed exports save space and speed up transfers. You can pipe a mysqldump output through gzip and decompress during import to optimize storage and performance.

  • Use --single-transaction and --quick for large InnoDB databases. These options create consistent, non-blocking exports and reduce memory usage.

  • Large SQL files can be split into chunks for smoother imports. Using the split command, you can break large dump files into manageable pieces to avoid timeouts or memory issues.

  • Common issues, like “table already exists” errors, can be avoided with flags like --add-drop-table. Always inspect or modify the dump file to match the destination environment and prevent unintended overwrites.

Prerequisites

To import or export a MySQL or MariaDB database, you will need:

  • A server or local machine with MySQL or MariaDB installed.
  • A database user account with sufficient privileges: at least full read-only access to the database for exports, and privileges to create databases and insert data for imports.
  • An existing database with data to export.

Note: As an alternative to manual installation, you can explore the DigitalOcean Marketplace’s MySQL One-Click Application.

Exporting and Importing a MySQL or MariaDB Database

Exporting and importing databases is a routine part of database administration. Whether you’re backing up your data, restoring it to a new environment, or migrating between servers, you’ll often need to create a database dump and then load that dump into a different database instance.

In this section, you’ll first export the contents of an existing MySQL or MariaDB database to a .sql file using mysqldump. Then, you’ll import that file into a new database using the mysql command-line client. These tools are available by default when MySQL or MariaDB is installed, and work the same way for both systems.

Exporting a MySQL or MariaDB Database

The mysqldump console utility exports databases to SQL text files. This makes it easier to transfer and move databases. You will need your database’s name and credentials for an account whose privileges allow at least full read-only access to the database.

Use mysqldump to export your database:

  1. mysqldump -u username -p database_name > data-dump.sql
  • username is the username you can log in to the database with.
  • database_name is the name of the database to export.
  • data-dump.sql is the file in the current directory that stores the output.

The command will produce no terminal output, but you can inspect the contents of data-dump.sql to check if it’s a legitimate SQL dump file.

Run the following command:

  1. head -n 5 data-dump.sql

The top of the file should look similar to this, showing a MySQL dump for a database named database_name.

SQL dump fragment
-- MySQL dump 10.13  Distrib 5.7.16, for Linux (x86_64)
--
-- Host: localhost    Database: database_name
-- ------------------------------------------------------
-- Server version	5.7.16-0ubuntu0.16.04.1

If any errors occur during the export process, mysqldump will print them to the screen.

Importing a MySQL or MariaDB Database

To import an existing dump file into MySQL or MariaDB, you will have to create a new database. This database will hold the imported data.

First, log in to MySQL as root or another user with sufficient privileges to create new databases:

  1. mysql -u root -p

This command will bring you into the MySQL shell prompt. Next, create a new database with the following command. In this example, the new database is called new_database:

  1. CREATE DATABASE new_database;

You’ll see this output confirming the database creation.

Output
Query OK, 1 row affected (0.00 sec)

Then exit the MySQL shell by pressing CTRL+D. From the normal command line, you can import the dump file with the following command:

  1. mysql -u username -p new_database < data-dump.sql
  • username is the username you can log in to the database with.
  • new_database is the name of the freshly created database.
  • data-dump.sql is the data dump file to be imported, located in the current directory.

If the command runs successfully, it won’t produce any output. If any errors occur during the process, mysql will print them to the terminal instead. To check whether the import was successful, log in to the MySQL shell and inspect the data: select the new database with USE new_database; and then run SHOW TABLES; to view the list of tables and verify the imported data.
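For example, a quick verification session might look like the following. The table name customers is a placeholder here; substitute one of the tables from your own dump:

  1. USE new_database;
  2. SHOW TABLES;
  3. SELECT COUNT(*) FROM customers;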

Exporting and Importing Only Specific Tables

In some cases, you may want to export or import only a few specific tables instead of the entire database. This can be useful when working with large databases, debugging, or migrating only a subset of your data.

Exporting Specific Tables

To export one or more specific tables, pass the table names as arguments to the mysqldump command after the database name:

  1. mysqldump -u username -p database_name table1 table2 > selected-tables.sql
  • username is your database user account.
  • database_name is the name of the database containing the tables.
  • table1, table2, etc., are the specific tables you want to export.
  • selected-tables.sql is the output file containing the exported table data and structure.

For example, to export only the users and orders tables from a database named store, you would run:

  1. mysqldump -u root -p store users orders > users-orders.sql

This command will generate an SQL dump containing only the specified tables and their data. You can confirm the contents by opening the file in a text editor or inspecting the top few lines:

  1. head -n 10 users-orders.sql

Importing Specific Tables

To import specific tables from a dump file, use the mysql command just as you would for a full database import. However, make sure that the destination database already exists before running the import.

  1. mysql -u username -p target_database < selected-tables.sql

For instance, to import the users and orders tables from the previous dump into a new database named test_store, use:

  1. mysql -u root -p test_store < users-orders.sql

This will recreate and populate only the specified tables in the test_store database. If the tables already exist in the destination database, they will be overwritten unless the dump file was generated with options to skip table creation or inserts.
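If you want a dump that contains only row data and no CREATE TABLE statements, so that an import never replaces existing table definitions, you can pass the --no-create-info flag when exporting:

  1. mysqldump -u username -p --no-create-info store users orders > users-orders-data.sql

Conversely, the --no-data flag exports only the table structure, as covered in the FAQs below.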

To verify the imported tables, log in to the MySQL shell and check the tables:

  1. mysql -u root -p

Then, in the MySQL prompt:

  1. USE test_store;
  2. SHOW TABLES;

You should see only the imported tables listed.

Exporting and Importing with User Privileges

When migrating a MySQL or MariaDB database, it’s not enough to copy just the data. You also need to ensure that the right user accounts and privileges come with it. These credentials and permissions aren’t stored within individual databases; they live in a special system database called mysql, which maintains all user access control information.

By default, mysqldump exports only the structure and data of an application database. It doesn’t include user accounts, their passwords, or access rules unless you explicitly ask for them.

There are two common methods for migrating user accounts and their privileges. Each approach has its own trade-offs depending on whether you’re optimizing for completeness or portability.

1. Dumping the Grant Tables from the mysql Database

This method captures the key internal tables that store user account details and their associated privileges. It’s a fast and direct way to replicate user access, especially when migrating between similar server versions.

Step 1: Export User Accounts and Privileges

To export users along with their global, database-level, table-level, column-level, and routine-level privileges, run:

  1. mysqldump -u root -p mysql user db tables_priv columns_priv procs_priv > users_and_privileges.sql

This command will create a SQL dump file containing:

  • Usernames and host bindings
  • Hashed passwords
  • Assigned privileges at various levels

Important: Do not dump the entire mysql database. It contains internal metadata such as server configuration and plugin data, which may not be compatible with your destination server. Stick to just the grant-related tables.

Step 2: Import on the Destination Server

Once the dump file is available on the destination machine, you can import it directly into the mysql system database:

  1. mysql -u root -p mysql < users_and_privileges.sql

Step 3: Reload Privileges

After the import, manually reload the grant tables so that the server recognizes the changes:

  1. FLUSH PRIVILEGES;

This command can be run from the MySQL shell. It makes all imported user accounts and their permissions active immediately—no need to restart the database server.
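You can also run it non-interactively from the command line using the -e option:

  1. mysql -u root -p -e "FLUSH PRIVILEGES;"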

2. Generating GRANT Statements with SHOW GRANTS

If you’re migrating to a different version of MySQL or MariaDB, or if you want more control and transparency over the users you transfer, this method is often preferred. Instead of copying raw system tables, it extracts the actual GRANT statements that define user privileges.

Step 1: Generate GRANT Commands

You can generate a SQL script of GRANT statements for all non-system users using the following shell one-liner:

  1. mysql -B -N -u root -p -e "SELECT CONCAT('SHOW GRANTS FOR ''', user, '''@''', host, ''';') FROM mysql.user WHERE user NOT IN ('mysql.infoschema', 'mysql.session', 'mysql.sys', 'root')" \
  2. | mysql -B -N -u root -p \
  3. | sed 's/$/;/' > all_user_grants.sql

Here’s what this command does:

  • The first part selects all regular user accounts from the mysql.user table.
  • The second part runs SHOW GRANTS for each user.
  • The sed command appends a semicolon to each line to ensure valid SQL syntax.
  • The final output is saved as all_user_grants.sql. (Because mysql is invoked twice with -p, you will be prompted for the password twice.)

This file is a clean, readable list of GRANT statements that can be applied on another server.
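For illustration, the entries in the file typically look like this (the user, host, and privilege lists shown here are placeholders, not output from your server):

Output
GRANT USAGE ON *.* TO 'appuser'@'localhost';
GRANT SELECT, INSERT, UPDATE ON `store`.* TO 'appuser'@'localhost';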

Step 2: Review and Import the GRANT Script

Open all_user_grants.sql in a text editor and review it. You can remove any lines related to users you don’t want to migrate.

To apply the privileges on the new server, run:

  1. mysql -u root -p < all_user_grants.sql

On MariaDB and in older MySQL releases, a GRANT statement can implicitly create a missing user account, so importing this script is enough on its own. MySQL 8.0 and later removed this behavior, so there you must create the accounts (with their passwords) before replaying the grants; see the sketch at the end of this section. Either way, a manual FLUSH PRIVILEGES is not strictly necessary after running GRANT statements, but running it afterward is a good practice:

  1. FLUSH PRIVILEGES;

This method is generally safer and more portable across different MySQL or MariaDB versions, as it avoids directly importing internal system tables.
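If the destination runs MySQL 8.0 or later, one way to carry over the accounts themselves, including their hashed passwords, is to replay the output of SHOW CREATE USER for each account before applying the grants. The following is a sketch that reuses the non-system user filter from above:

  1. mysql -B -N -u root -p -e "SELECT CONCAT('SHOW CREATE USER ''', user, '''@''', host, ''';') FROM mysql.user WHERE user NOT IN ('mysql.infoschema', 'mysql.session', 'mysql.sys', 'root')" \
  2. | mysql -B -N -u root -p \
  3. | sed 's/$/;/' > create_users.sql

Import create_users.sql on the destination server before applying all_user_grants.sql.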

Dealing with Large .sql Files and Slow Imports

When working with large databases, imports can be noticeably slow or resource-intensive. Large .sql files take longer to process, and depending on the server’s hardware and configuration, this may result in timeouts, locked tables, or degraded performance.

Here are a few strategies to help you manage and speed up the import process for large database dumps.

Use Compression to Save Space and Time

You can compress a database dump file using gzip to reduce file size and speed up transfer times. The mysqldump output can be piped directly into gzip, eliminating the need to store an uncompressed version.

  1. mysqldump -u username -p database_name | gzip > database_name.sql.gz

To import from the compressed file, use gunzip to decompress the data and stream it into the mysql client:

  1. gunzip < database_name.sql.gz | mysql -u username -p database_name

This method is especially helpful when moving data across networks or dealing with storage-constrained environments.
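You can also inspect a compressed dump without fully decompressing it, for example with zcat:

  1. zcat database_name.sql.gz | head -n 5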

Temporarily Disable Foreign Key Checks and Indexes

During import, MySQL enforces foreign key constraints and performs index updates for each inserted row. You can temporarily disable these checks to improve performance.

At the start of your .sql file, or before running the import, disable the constraints:

SET foreign_key_checks = 0;
SET unique_checks = 0;
SET autocommit = 0;

At the end of the import, re-enable them and commit the changes:

SET foreign_key_checks = 1;
SET unique_checks = 1;
COMMIT;

You can add these lines manually to your dump file or run them interactively before and after the import. This reduces overhead during large insert operations.
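If you prefer not to edit the dump file, you can wrap the import with these statements from the shell. The following sketch reuses the file and database names from earlier in this tutorial:

  1. { echo "SET foreign_key_checks = 0; SET unique_checks = 0; SET autocommit = 0;";
  2.   cat data-dump.sql;
  3.   echo "SET foreign_key_checks = 1; SET unique_checks = 1; COMMIT;"; } | mysql -u username -p database_name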

Use the --quick and --single-transaction Flags

When exporting large InnoDB databases, add the --quick and --single-transaction options to your mysqldump command:

  1. mysqldump -u username -p --single-transaction --quick database_name > database_name.sql
  • --single-transaction creates a consistent snapshot of the database without locking tables.
  • --quick streams rows directly to the output file without loading them all into memory.

This combination is ideal for large InnoDB tables and ensures minimal impact on a running database during the export process.

Split the SQL File into Smaller Chunks

For extremely large dump files, you can split the file into smaller, more manageable parts using the split command. This can help avoid hitting memory or timeout limits during the import.

  1. split -l 5000 large_dump.sql chunk_

This command creates multiple files named chunk_aa, chunk_ab, and so on, each containing up to 5,000 lines. Be aware that a purely line-based split can cut a multi-line statement (such as a CREATE TABLE block) in two, so inspect the chunk boundaries before importing. You can then import the chunks sequentially:

for file in chunk_*; do
  # Each iteration prompts for the password unless credentials are
  # supplied via ~/.my.cnf or a login path (mysql_config_editor).
  mysql -u username -p database_name < "$file"
done

This approach allows you to monitor progress and recover more easily if the process is interrupted.

Consider Using LOAD DATA INFILE for Bulk Data

If your data is available in plain .csv or .tsv format, you can use the LOAD DATA INFILE command for significantly faster bulk imports. This method bypasses standard SQL parsing and inserts data directly into the table.

  1. LOAD DATA INFILE '/path/to/file.csv'
  2. INTO TABLE table_name
  3. FIELDS TERMINATED BY ','
  4. LINES TERMINATED BY '\n'
  5. IGNORE 1 LINES;

This is one of the fastest ways to import large volumes of raw tabular data into MySQL or MariaDB, but it does require your data to be in a structured, delimited format. Note also that the server’s secure_file_priv setting restricts which directories LOAD DATA INFILE can read from; if the file lives on the client machine, use LOAD DATA LOCAL INFILE instead, which may require enabling the local_infile option on both client and server.

Common Mistakes to Avoid

Exporting and importing MySQL or MariaDB databases is a critical task, especially in production environments. While the process is generally straightforward, small oversights can lead to data loss, failed imports, or broken applications. This section outlines some of the most common mistakes and how to avoid them effectively.

1. Importing Into the Wrong Database

A common mistake is accidentally importing a .sql file into the wrong database, especially when working across multiple environments. This can overwrite or corrupt existing data without warning. Always double-check the database name before running the import command:

  1. mysql -u username -p target_database < data-dump.sql

You can also use SHOW TABLES; and SELECT COUNT(*) FROM table_name; in the MySQL shell after the import to confirm the expected data is present.

2. Forgetting to Create the Target Database

The mysql import command assumes the destination database already exists. If it doesn’t, you’ll encounter an error such as Unknown database. Make sure to create the database first:

  1. CREATE DATABASE new_database;

Alternatively, use the --databases option with mysqldump during export to include the CREATE DATABASE and USE statements in the dump file.
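For example, the following export embeds those statements, so the resulting dump can later be restored with mysql -u username -p < data-dump.sql and no pre-created target database:

  1. mysqldump -u username -p --databases database_name > data-dump.sql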

3. Using Incorrect Credentials or Insufficient Permissions

If the user account used for export or import lacks the required privileges, operations will fail. For example:

  • mysqldump may fail without SELECT, LOCK TABLES, or SHOW VIEW privileges.
  • mysql may fail to import data if the user doesn’t have INSERT, CREATE, or ALTER permissions.

Use a user account with full privileges, such as root, if you’re unsure. Always check for permission errors in the terminal output.

4. Not Using --add-drop-table During Export

Without the --add-drop-table flag, importing a dump file into a database that already contains tables with the same names will result in errors like Table already exists. This flag ensures that each table is dropped before being recreated:

  1. mysqldump -u username -p --add-drop-table database_name > data-dump.sql

This is especially important when re-importing data into an existing development or staging environment.

5. Skipping User Privilege Migration

User accounts, roles, and permissions are not included in standard database dumps. If you forget to export these separately, users on the destination server won’t be able to connect or perform actions. You can:

  • Dump the relevant tables from the mysql database (e.g., user, db, tables_priv)
  • Or generate and apply GRANT statements using SHOW GRANTS

Always run FLUSH PRIVILEGES; after importing grant data to apply changes.

6. Not Verifying Charset and Collation Compatibility

Different servers may use different default character sets (e.g., latin1 vs utf8mb4), leading to corrupted or unreadable text after import. To avoid issues:

  • Check the character set and collation on both source and target servers:

    1. SHOW CREATE DATABASE database_name;
  • Use the --default-character-set option with mysqldump and mysql:

    1. mysqldump -u username -p --default-character-set=utf8mb4 database_name > dump.sql

This ensures consistent encoding of your data across environments.
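The same option applies on the import side, ensuring the client session uses the intended encoding:

  1. mysql -u username -p --default-character-set=utf8mb4 database_name < dump.sql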

7. Importing Large Files Without Optimization

Very large .sql files can take a long time to import and may hit resource limits. Common symptoms include out-of-memory errors or server timeouts. To reduce import time and load:

  • Use SET foreign_key_checks = 0 and SET autocommit = 0 before the import
  • Use --single-transaction during export for InnoDB tables
  • Compress the file with gzip or split it using the split command

Monitoring server performance during import can also help you spot bottlenecks early.

8. Overlooking File Permissions or File Paths

A simple yet common issue is trying to import a .sql file that doesn’t exist in the specified path or cannot be read by the current user. Always check file existence and permissions:

  1. ls -l data-dump.sql

Make sure the file has the correct read permissions (-rw-r--r-- or similar) and that you are in the correct directory when running your import command.

Avoiding these mistakes can save time, prevent data loss, and ensure a smoother workflow when working with MySQL or MariaDB databases. Always test your process in a non-production environment before running critical imports or migrations.

FAQs

1. What is the difference between mysqldump and a binary backup?

mysqldump creates a logical backup by exporting database contents as SQL statements. It’s portable, human-readable, and ideal for migrating data between servers or versions. In contrast, a binary backup copies the actual data files on disk. Binary backups are faster for large datasets and include everything (including non-SQL data like logs), but they are tied to the server’s file structure and version compatibility. Logical dumps are safer for cross-version or cross-platform moves, while binary backups are better for full, same-server restores.

2. Can I import a MySQL dump into MariaDB?

Yes. MySQL and MariaDB are highly compatible, and mysqldump files created from a MySQL database can usually be imported into MariaDB without modification. However, if the dump includes features or syntax introduced in newer MySQL versions (e.g., JSON functions or specific storage engine settings), you may need to review the dump file and adjust incompatible statements before import.

3. How do I export only the schema without data?

To export only the table structure (schema) and skip the data, use the --no-data flag with mysqldump:

  1. mysqldump -u username -p --no-data database_name > schema_only.sql

This creates a dump file that contains all CREATE TABLE statements but omits INSERT statements. It’s useful for duplicating database structures or generating templates for development environments.

4. What if I get a “table already exists” error during import?

This error occurs when the target database already contains tables with the same names as those in the dump file. To resolve it, you have a few options:

  • Drop existing tables manually or include DROP TABLE IF EXISTS statements in the dump by adding --add-drop-table to your export command:

    1. mysqldump -u username -p --add-drop-table database_name > data-dump.sql
  • Create a new empty database before the import to avoid conflicts.

  • Edit the dump file and remove the conflicting table statements if you want to skip them.

Always review the contents of the dump file before importing into a production environment to ensure it doesn’t contain destructive operations like DROP TABLE or TRUNCATE.

5. Can I export multiple databases at once?

Yes. You can use the --databases option with mysqldump followed by a space-separated list of database names:

  1. mysqldump -u username -p --databases db1 db2 db3 > multi-database-dump.sql

This command will include CREATE DATABASE and USE statements for each database, making it easier to restore them later into the same or another server.
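Because the dump file already contains CREATE DATABASE and USE statements, you can restore it without naming a target database:

  1. mysql -u username -p < multi-database-dump.sql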

6. How can I export all databases on the server?

To export every database on your MySQL or MariaDB server, use the --all-databases option:

  1. mysqldump -u root -p --all-databases > all_databases.sql

This creates a full backup including system databases like mysql, which contains user accounts and privileges. Only use this for full-server migrations or disaster recovery.

7. Is it safe to import a dump into a live database?

Importing into a production database should be done with caution. If the dump includes DROP TABLE or INSERT statements, it can overwrite or duplicate data. For safety:

  • Always test imports in a staging environment first.
  • Take a backup of the live database before importing.
  • Review the dump file for destructive commands like DROP or TRUNCATE.

8. Why is my import taking so long?

Slow imports can be caused by large datasets, enabled foreign key checks, frequent index updates, or insufficient server resources. To speed things up:

  • Disable foreign key checks and autocommit.
  • Use the --quick and --single-transaction flags during export.
  • Compress your dump file with gzip.
  • Use LOAD DATA INFILE for large raw datasets if possible.

Conclusion

In this tutorial, you learned how to export and import MySQL or MariaDB databases using mysqldump. You covered full database transfers, selective table exports, migrating user accounts and privileges, and strategies for handling large dump files efficiently. These techniques are essential for backups, server migrations, environment replication, and recovery tasks. Mastering them will help you manage your databases more reliably across development and production systems.

To learn more about mysqldump, check out the official mysqldump documentation page.


The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.


Comments


If your password contains special characters, you must wrap it in quotes:

--password='Y0Ürp4$$W0rd??'

Why not keep it simple? Just create the database, and then, within the MySQL shell, run:

mysql> source data-dump.sql

This assumes, of course, that your dump file is in your home directory and that you have already selected the new database with USE.



A few command issues we see people getting into:

mysqldump: Error: 'Access denied; you need (at least one of) the PROCESS privilege(s) for this operation' when trying to dump tablespaces

The simple way to solve this is to add the --no-tablespaces flag to your mysqldump command. You can also solve it by updating your user's privileges with GRANT PROCESS ON *.* TO 'user'@'localhost'; (note that PROCESS must be granted at the global level).
