When working with MySQL 5.5.9 on Debian 6.0 x86 systems, you might encounter persistent crashes with error messages like:
InnoDB: Unable to lock ./ibdata1, error: 11
InnoDB: Check that you do not already have another mysqld process
InnoDB: using the same InnoDB data or log files.
130210 21:17:53 InnoDB: Unable to open the first data file
InnoDB: Error in opening ./ibdata1
130210 21:17:53 InnoDB: Operating system error number 11 in a file operation.
Operating system error 11 is EAGAIN ("Resource temporarily unavailable"), which for ibdata1 means another process holds a lock on the file. The error typically indicates one of these underlying problems:
- Multiple mysqld processes competing for the same InnoDB files
- File permission issues with ibdata1
- Improper shutdown leaving the database in recovery state
- Possible disk space or inode exhaustion (even with free space reported)
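A quick triage pass covers most of the list above. This is a sketch, not a fixed procedure; the DATADIR path is an assumption, so adjust it to your installation:

```shell
# Decode the OS error number with MySQL's perror utility, if installed
if command -v perror >/dev/null 2>&1; then
    perror 11
fi
# Check free space AND free inodes on the data directory's filesystem:
# "df -h" can look healthy while "df -i" reveals inode exhaustion
DATADIR=${DATADIR:-/usr/local/mysql/data}
df -h "$DATADIR" 2>/dev/null || df -h /
df -i "$DATADIR" 2>/dev/null || df -i /
```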
First, ensure no orphaned MySQL processes are running:
# Check for running processes
ps aux | grep mysql
ps aux | grep mysqld
# Stop any leftover MySQL processes -- kill mysqld_safe first so it
# cannot respawn mysqld, and try a plain TERM before resorting to -9,
# since SIGKILL on mysqld forces InnoDB crash recovery on the next start
killall mysqld_safe mysqld
sleep 5
killall -9 mysqld_safe mysqld 2>/dev/null
# Remove any stale PID files
rm -f /usr/local/mysql/data/website.pid
Check the file system integrity and permissions:
# Verify file permissions
ls -la /usr/local/mysql/data/ibdata1
# Recommended ownership and permissions. Directories need the execute
# bit to be traversable, so do NOT run a blanket recursive chmod 660:
chown -R mysql:mysql /usr/local/mysql/data/
chmod 700 /usr/local/mysql/data/
find /usr/local/mysql/data/ -type f -exec chmod 660 {} +
For persistent issues, try forcing InnoDB recovery:
- Edit your my.cnf file:
[mysqld]
innodb_force_recovery = 1
Start with level 1 and only increment if the server still fails to start, testing after each change. Treat levels 4-6 as a last resort: they can permanently lose or corrupt data. At any level, the goal is to get mysqld up just long enough to take a full dump, not to keep running in production.
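The stepped approach can be sketched as a dry run. Everything here is illustrative: CNF is a throwaway path, and on a real server you would run each level by hand and dump the moment mysqld stays up:

```shell
# Illustrative dry run of raising innodb_force_recovery one level at a
# time. CNF is a scratch file, not your real config.
CNF=${CNF:-/tmp/recovery.cnf}
for level in 1 2 3 4 5 6; do
    printf '[mysqld]\ninnodb_force_recovery = %s\n' "$level" > "$CNF"
    echo "would try: mysqld_safe --defaults-file=$CNF"
    # If mysqld stays up at this level, stop here and immediately dump:
    #   mysqldump --all-databases > /root/all_databases.sql
done
```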
For 1GB RAM systems, consider these my.cnf adjustments. Note that on MySQL 5.5, changing innodb_log_file_size requires a clean shutdown followed by removal of the old ib_logfile* files before restarting:
[mysqld]
innodb_buffer_pool_size = 256M
innodb_log_file_size = 64M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table = 1
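As a sanity check on the 256M figure, the buffer pool on a shared 1GB box is often sized at roughly a quarter of physical RAM, leaving room for the OS, connections, and any web stack. A back-of-the-envelope computation (Linux-specific, reads /proc/meminfo; the 25% ratio is a rule of thumb, not a hard rule):

```shell
# Suggest a buffer pool of ~25% of physical RAM. Dedicated database
# servers commonly go much higher (50-80%); shared boxes should not.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
pool_mb=$(( total_kb / 1024 / 4 ))
echo "suggested innodb_buffer_pool_size = ${pool_mb}M"
```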
To avoid future occurrences:
- Implement proper monitoring for MySQL processes
- Set up automatic crash recovery scripts
- Regularly verify file system health
Create a watchdog script (/usr/local/bin/mysql_watchdog.sh):
#!/bin/bash
if ! pgrep mysqld >/dev/null 2>&1; then
    echo "$(date) - MySQL not running, attempting restart" >> /var/log/mysql_watchdog.log
    /etc/init.d/mysql restart
fi
Add to cron:
* * * * * /usr/local/bin/mysql_watchdog.sh
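Because cron fires this every minute, a slow restart can overlap the next invocation. A hedged variant with an flock(1) guard prevents that; the LOCK and LOG defaults below are placeholders, so point them at /var/run and /var/log in production:

```shell
#!/bin/bash
# Watchdog with a non-blocking lock so overlapping cron runs do not
# pile up. LOCK/LOG default to /tmp here only as placeholder paths.
LOCK=${LOCK:-/tmp/mysql_watchdog.lock}
LOG=${LOG:-/tmp/mysql_watchdog.log}
exec 9>"$LOCK"
if ! flock -n 9; then
    exit 0              # a previous check is still running
fi
if ! pgrep mysqld >/dev/null 2>&1; then
    echo "$(date) - MySQL not running, attempting restart" >> "$LOG"
    /etc/init.d/mysql restart || true
fi
```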
When MySQL crashes with the "Unable to lock ./ibdata1, error: 11" message, it typically indicates one of these scenarios:
1. Zombie MySQL processes holding file locks
2. Corrupted InnoDB system tablespace
3. Improper shutdown causing recovery conflicts
4. Filesystem permission issues
First, ensure no MySQL processes are running:
ps aux | grep mysql
kill -9 [process_ids]
# WARNING: the next two commands destroy the InnoDB system tablespace
# and redo logs. Deleting ibdata1 loses all InnoDB data -- even with
# innodb_file_per_table enabled, since it holds the data dictionary.
# Only proceed if you have a usable backup or full logical dump.
rm -f /usr/local/mysql/data/ibdata1
rm -f /usr/local/mysql/data/ib_logfile*
Then attempt a restart in force-recovery mode. Begin at the lowest level that lets mysqld come up; level 6 is the most aggressive, leaves InnoDB effectively read-only, and should only be reached by stepping up one level at a time:
mysqld_safe --innodb_force_recovery=1 &
Add these to your my.cnf under [mysqld] section:
innodb_file_per_table = 1
innodb_buffer_pool_size = 256M # For 1GB systems
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2 # For better performance
Create a recovery script (e.g., /usr/local/bin/mysql_rescue.sh). Note that this rebuilds the system tablespace from scratch, so it is only safe if you can reload everything from a dump or backup afterwards:
#!/bin/bash
# DESTRUCTIVE: wipes the InnoDB system tablespace and redo logs
service mysql stop
pkill -9 mysqld
rm -f /var/lib/mysql/ibdata1
rm -f /var/lib/mysql/ib_logfile*
# On MySQL 5.5, reinitialize with mysql_install_db
# (mysqld --initialize-insecure only exists in 5.7+)
mysql_install_db --user=mysql --datadir=/var/lib/mysql
chown -R mysql:mysql /var/lib/mysql
# Init scripts do not forward options like --innodb_force_recovery;
# a freshly initialized datadir does not need recovery anyway
service mysql start
# Reload your data from the latest dump, e.g.:
# mysql -u root < /root/all_databases.sql
To identify processes locking InnoDB files:
lsof /var/lib/mysql/ibdata1
fuser -v /var/lib/mysql/ibdata1
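If you want a scriptable yes/no answer rather than eyeballing lsof output, a small guard like this can gate a manual start (the IBDATA path is an assumption; it falls back to "free" if fuser is unavailable or the file does not exist):

```shell
# Report whether anything still has ibdata1 open
IBDATA=${IBDATA:-/var/lib/mysql/ibdata1}
STATUS=free
if [ -e "$IBDATA" ] && command -v fuser >/dev/null 2>&1 \
   && fuser "$IBDATA" >/dev/null 2>&1; then
    STATUS=busy
fi
echo "ibdata1 status: $STATUS"
```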
As last resort, restore from backup using:
mysql -u root -p dbname < backup.sql
# Or restore a physical backup taken with Percona XtraBackup
# (the server must be stopped and the datadir empty first):
xtrabackup --copy-back --target-dir=/path/to/backup