When working with large MySQL databases, one of the most frustrating errors you might encounter during backup operations is:
mysqldump: Error 2020: Got packet bigger than 'max_allowed_packet' bytes
when dumping table TICKET_ATTACHMENT at row: 2286
This error occurs when mysqldump attempts to transfer a data packet that exceeds the maximum size defined by the server's or the client's max_allowed_packet
setting. What makes this particularly frustrating is that you might have already increased the value significantly (say, to 1GB) and still encounter the error, because the limit must be raised on both sides, not just one.
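Before raising limits blindly, it helps to know how big the biggest row actually is, so you can size max_allowed_packet with headroom. A quick check, assuming a hypothetical CONTENT BLOB column in TICKET_ATTACHMENT (adjust the names to your schema):
# Find the largest row in the table that triggered the error
# (table and column names here are assumptions; substitute your own)
mysql --user=your_user --password your_database -e \
"SELECT ID, LENGTH(CONTENT) AS bytes FROM TICKET_ATTACHMENT ORDER BY bytes DESC LIMIT 1;"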
Here's what actually works in production environments:
# The complete working mysqldump command with all necessary parameters
mysqldump \
--max_allowed_packet=1G \
--net_buffer_length=1000000 \
--opt \
--extended-insert \
--single-transaction \
--create-options \
--default-character-set=utf8 \
--user=your_user \
--password=your_password \
--all-databases \
> "/path/to/backup.sql"
Three key parameters need attention:
- --max_allowed_packet=1G: the client-side limit; keep it in sync with your server setting
- --net_buffer_length=1000000: often overlooked, but it caps the size of the multi-row INSERT statements mysqldump writes
- Server-side my.cnf settings (a restart is required; see below):
[mysqld]
max_allowed_packet=1G
net_buffer_length=1000000
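Note that the server-side values in my.cnf only take effect after a restart (a classic pitfall, covered again at the end). On a systemd-based host that is typically:
# Restart the server so the new my.cnf values are loaded
# (the unit may be named mysql, mysqld, or mariadb depending on the distribution)
sudo systemctl restart mysql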
After making changes:
-- Check server values
SHOW VARIABLES LIKE 'max_allowed_packet';
SHOW VARIABLES LIKE 'net_buffer_length';
-- Check the values in effect for your current session
SELECT @@MAX_ALLOWED_PACKET, @@NET_BUFFER_LENGTH;
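If you cannot restart right away, the server limit can also be raised at runtime. This affects only connections opened after the change and is lost on restart (on MySQL 8.0+ you could use SET PERSIST instead):
-- Raise the global limit at runtime; the value is in bytes (1073741824 = 1G)
SET GLOBAL max_allowed_packet = 1073741824;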
For tables with large BLOB data (like TICKET_ATTACHMENT in the error message), consider dumping them separately; --skip-extended-insert writes one row per INSERT statement, so no statement grows beyond a single row:
# Dump the problematic table separately with special options
mysqldump \
--max_allowed_packet=1G \
--skip-extended-insert \
--user=your_user \
--password=your_password \
database_name table_name \
> "/path/to/table_backup.sql"
When mysqldump consistently fails:
- Use mysqlpump (MySQL 5.7+) with parallel processing (see the sketch after this list)
- Consider physical backups with Percona XtraBackup
- For ZRM users, adjust the backup profile to include packet size parameters
- Verify both client and server packet settings match
- Check for multiple my.cnf files that might override settings
- Test with simplified dump commands first
- Monitor memory usage during backup operations
- Consider splitting large tables into separate dump files
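As a sketch of the mysqlpump route from the list above (the parallelism value is an arbitrary starting point; tune it to your hardware):
# Parallel logical dump with mysqlpump (MySQL 5.7+)
mysqlpump \
--max_allowed_packet=1G \
--default-parallelism=4 \
--user=your_user \
--password \
--all-databases \
> "/path/to/backup.sql"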
Digging deeper into the BLOB/TEXT-heavy case: the error fires when a single row, such as a large attachment, exceeds the configured max_allowed_packet
size. The challenge is that this limit applies at multiple levels:
- MySQL server configuration
- MySQL client configuration
- mysqldump-specific settings
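One way to cover the client and mysqldump levels in a single place is a per-user option file; each tool reads its own option group, so a sketch using the standard group names looks like:
# ~/.my.cnf — picked up by the mysql client and by mysqldump respectively
[mysql]
max_allowed_packet=1G

[mysqldump]
max_allowed_packet=1G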
On the command line, the interactive mysql client accepts the same option as mysqldump:
# Client-side execution
mysql -u root -p --max_allowed_packet=1G
For extremely large attachments:
- Use --skip-extended-insert to generate single-row INSERT statements
- Split the dump for a large table into key ranges (pick the boundary after inspecting the id range, as sketched after this list):
mysqldump database_name TICKET_ATTACHMENT --where="id<10000" > part1.sql
mysqldump database_name TICKET_ATTACHMENT --where="id>=10000" > part2.sql
- Consider alternative backup methods like Percona XtraBackup for binary backups
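To choose the --where boundary used above, inspect the table's key range first (this assumes a numeric primary key named id, as in the example):
-- Pick split points from the actual id distribution
SELECT MIN(id), MAX(id), COUNT(*) FROM TICKET_ATTACHMENT;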
Finally, watch for these common pitfalls:
- Forgetting to restart MySQL after changing my.cnf
- Not setting the parameter in all required places (server, client, and mysqldump)
- Using the wrong unit suffix (MySQL accepts K, M, and G; a bare number is bytes), causing size miscalculations (see the check below)
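Because the server always reports the parsed value in bytes, a quick read-back catches unit mistakes immediately (1G should come back as 1073741824):
# Confirm the server parsed the size you intended (output is in bytes)
mysql --user=root --password -e "SHOW VARIABLES LIKE 'max_allowed_packet';"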