When working with MySQL databases, you might encounter compressed SQL dump files (.sql.gz) that need to be imported. Many developers try to import them directly using incorrect syntax like:
mysql -uroot -ppassword mydb > myfile.sql.gz
This won't work because:
- The direction of the arrow is wrong (should be < for input)
- MySQL client can't directly read gzipped files
- No decompression is being performed
To properly import a gzipped SQL file, you need to decompress it first or use a pipe to handle the decompression and import in one step. Here are the most effective methods:

Method 1: Using zcat/gunzip with pipe
The most efficient approach, which also handles large files well, is to stream the decompressed SQL straight into the client:
zcat myfile.sql.gz | mysql -u username -p database_name
Or alternatively:
gunzip < myfile.sql.gz | mysql -u username -p database_name
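If the target database doesn't exist yet, create it before importing (database_name here is a placeholder, as in the examples above):
mysql -u username -p -e "CREATE DATABASE IF NOT EXISTS database_name"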
Method 2: Decompress first, then import
For cases where you want to inspect the SQL first:
gunzip myfile.sql.gz
mysql -u username -p database_name < myfile.sql
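Note that gunzip deletes the .gz file once it has decompressed it. Reasonably recent gzip versions support a -k/--keep flag if you want to retain the compressed original:
gunzip -k myfile.sql.gz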
Method 3: Using pv for progress tracking
For large imports where you want to monitor progress (this requires the pv utility to be installed):
pv myfile.sql.gz | gunzip | mysql -u username -p database_name
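Placed before gunzip, pv reports progress through the compressed file, including a percentage. If you would rather watch decompressed throughput (no percentage, since pv can't know the final size), move it after the decompressor:
zcat myfile.sql.gz | pv | mysql -u username -p database_name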
Using different compression tools
For .bz2 files:
bzcat myfile.sql.bz2 | mysql -u username -p database_name
For .xz files:
xzcat myfile.sql.xz | mysql -u username -p database_name
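If your dumps use zstd instead (this assumes the zstd package is installed), the same pipe pattern applies:
zstdcat myfile.sql.zst | mysql -u username -p database_name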
Setting connection options
zcat large_dump.sql.gz | mysql --host=remote.server.com --port=3307 -u admin -p production_db
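Over a slow network link you can also have the client/server protocol compress traffic with the mysql client's --compress flag (deprecated in newer MySQL releases in favor of --compression-algorithms, so treat this as version-dependent):
zcat large_dump.sql.gz | mysql --compress --host=remote.server.com --port=3307 -u admin -p production_db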
Common issues and solutions:
- Access denied errors: Verify credentials and privileges
- Corrupt gzip files: Test archive integrity with gzip -t myfile.sql.gz
- Out of memory: Use the --quick flag for large imports
- Character encoding issues: Specify --default-character-set=utf8mb4, as shown below
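For example, to force a UTF-8 connection for the whole import:
zcat myfile.sql.gz | mysql --default-character-set=utf8mb4 -u username -p database_name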
For faster imports on large databases, disable autocommit for the session (make sure the dump ends with a COMMIT, otherwise the final transaction rolls back when the connection closes):
zcat big_dump.sql.gz | mysql -u username -p database_name --init-command="SET autocommit=0;"
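A related pattern (a sketch; the session settings shown are commonly used for bulk loads) wraps the dump with its own session setup and an explicit COMMIT:
( echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;"
  zcat big_dump.sql.gz
  echo "COMMIT;" ) | mysql -u username -p database_name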
To make the dump idempotent, so that re-running the import doesn't fail on tables that already exist, you can rewrite CREATE TABLE statements on the fly. Note the sed delimiter is | here, because the replacement text contains slashes:
zcat big_dump.sql.gz | sed 's|^CREATE TABLE|CREATE TABLE /*!32312 IF NOT EXISTS*/|' | mysql -u username -p database_name
Error: "Access denied for user"
Solution: Verify your username and password, and ensure the user has proper privileges.
Error: "No such file or directory"
Solution: Check the file path and your current working directory.
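To see which privileges the current account actually has, a quick diagnostic is:
mysql -u username -p -e "SHOW GRANTS"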
Importing from a remote server
You can import directly from a remote host without downloading the file first:
ssh user@remotehost "cat /path/to/file.sql.gz" | gunzip | mysql -u username -p database_name
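If the MySQL server itself runs on the remote host, it is often simpler to run the whole pipeline there instead of shipping the data back and forth (ssh -t allocates a terminal so the password prompt works):
ssh -t user@remotehost 'zcat /path/to/file.sql.gz | mysql -u username -p database_name'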
For very large databases, consider raising the client's packet and network buffer sizes:
zcat myfile.sql.gz | mysql -u username -p database_name --max_allowed_packet=1G --net_buffer_length=100000
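max_allowed_packet must also be large enough on the server side, or oversized INSERT statements will still be rejected; you can check the server's value with:
mysql -u username -p -e "SHOW VARIABLES LIKE 'max_allowed_packet'"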
Create a simple bash script for repeated imports:
#!/bin/bash
# Import a gzipped SQL dump: ./import.sh dump.sql.gz
set -euo pipefail
DB_USER="username"
DB_PASS="password"   # hardcoding a password is insecure; prefer ~/.my.cnf
DB_NAME="database"
GZ_FILE="$1"
zcat "$GZ_FILE" | mysql -u "$DB_USER" -p"$DB_PASS" "$DB_NAME"