After upgrading Magento from 1.5 to 1.9, we're observing a consistent pattern where adding specific products to cart triggers 502 Bad Gateway errors. The nginx error logs reveal:
recv() failed (104: Connection reset by peer) while reading response header from upstream
upstream: "fastcgi://unix:/var/run/php-fcgi-www-data.sock:"
Each crash also leaves a 350MB core dump file in the public_html directory, and the dmesg output confirms PHP-FPM segmentation faults:
php-fpm[14862]: segfault at 7fff38236ff8 ip 00000000005c02ba sp 00007fff38237000 error 6
php-fpm[15022]: segfault at 7fff38351ff0 ip 00000000005bf6e5 sp 00007fff38351fb0 error 6
The core dump analysis reveals memory corruption patterns. Here's how to properly examine these dumps:
gdb /usr/sbin/php-fpm core.12345
bt full
info registers
x/10i $rip
Key findings from the stack trace show the crashes occur during Magento's product serialization routines, particularly when processing custom options for the problematic products.
The current pool configuration needs tuning to prevent memory exhaustion: with pm.max_children = 50 and a 512M memory_limit per worker, the pool can in the worst case commit roughly 25GB, far more than most servers have (a quick way to measure real per-request usage is sketched after these settings):
[www-data]
pm = dynamic
pm.max_children = 50
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 35
pm.max_requests = 500
php_admin_value[memory_limit] = 512M
php_admin_flag[log_errors] = on
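Before settling on a memory_limit, it helps to know what real requests actually use. A minimal sketch, assuming you can temporarily edit the store's index.php after it requires app/Mage.php (the log file name is arbitrary):
// Temporary addition to index.php, after app/Mage.php is required: record peak
// memory per request so memory_limit and pm.max_children can be sized from real data.
register_shutdown_function(function () {
    $peakMb = memory_get_peak_usage(true) / (1024 * 1024);
    $uri    = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : 'cli';
    Mage::log(sprintf('Peak memory: %.1f MB for %s', $peakMb, $uri), null, 'memory_usage.log');
});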
The default timeout values in nginx may be too aggressive for complex Magento operations:
location ~ \.php$ {
fastcgi_read_timeout 300;
fastcgi_send_timeout 300;
fastcgi_connect_timeout 300;
fastcgi_buffer_size 128k;
fastcgi_buffers 256 16k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
fastcgi_intercept_errors on;
}
To isolate the problematic product characteristics, create a test script:
<?php
// Standalone test script: run from the Magento root (php -f this_script.php)
// so that app/Mage.php resolves.
require_once 'app/Mage.php';
Mage::app();

$productId = 15066; // Problematic product ID
$product = Mage::getModel('catalog/product')->load($productId);

// Serialization test: a segmentation fault is not an exception, so a crash here
// will never reach the catch block; watch the exit status and any new core dump.
try {
    $serialized = serialize($product->getData());
    echo "Serialization successful\n";
} catch (Exception $e) {
    Mage::log("Serialization failed: " . $e->getMessage(), null, 'product_errors.log');
}
Configure Xdebug so the request that triggers the crash can be profiled:
zend_extension=/usr/lib/php5/20121212/xdebug.so
xdebug.profiler_enable_trigger = 1
xdebug.profiler_output_dir = /tmp
xdebug.profiler_output_name = cachegrind.out.%p
xdebug.var_display_max_depth = 10
xdebug.var_display_max_children = 256
xdebug.var_display_max_data = 1024
Trigger profiling by appending XDEBUG_PROFILE=1 to the request parameters when reproducing the issue.
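One caveat: the profiler writes its cachegrind file only when the request terminates normally, so a worker that segfaults may leave no profile at all. An Xdebug function trace is written incrementally and therefore survives the crash; a minimal sketch, assuming Xdebug 2 is loaded and reusing the serialization test script above (the trace file path is arbitrary):
// The trace file grows as execution proceeds, so even if the next call segfaults
// the worker, the last lines of the trace identify the final function calls made.
xdebug_start_trace('/tmp/product_15066_trace');
$serialized = serialize($product->getData());
xdebug_stop_trace();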
For more advanced analysis, consider Blackfire instrumentation (this sketch assumes the blackfire/php-sdk composer package and the Blackfire probe extension are installed):
require_once 'vendor/autoload.php';

$blackfire = new \Blackfire\Client();
// createProbe() starts collecting; endProbe() stops collection and uploads the
// profile, so only the add-to-cart call in between is instrumented.
$probe = $blackfire->createProbe();
$cart->addProduct($productId, $params);
$blackfire->endProbe($probe);
After thorough investigation, we identified the root cause as a corrupted product attribute set in the upgraded Magento installation. The resolution involved:
- Exporting the problematic product data
- Deleting and recreating the product record
- Reimporting with corrected attribute values
- Clearing all Magento caches
The key was comparing the serialized data of working vs. non-working products to identify binary corruption in certain custom option fields.
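As a rough illustration of that comparison (not the exact script we used), the following dumps the raw data arrays of a known-good product and the problematic one to files that can be diffed; the "good" product ID below is a placeholder:
<?php
// Dump product data for a working and a non-working product so the files can be
// compared with diff; serialize() records string lengths, which makes truncated or
// binary-corrupted custom option values stand out.
require_once 'app/Mage.php';
Mage::app();

$ids = array('good' => 15001, 'bad' => 15066); // 'good' ID is a placeholder
foreach ($ids as $label => $id) {
    $product = Mage::getModel('catalog/product')->load($id);
    file_put_contents('/tmp/product_' . $label . '_' . $id . '.ser', serialize($product->getData()));
}
// If serializing the bad product crashes even the CLI, dump its fields one at a
// time instead to narrow down the corrupted attribute.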
To recap the situation from the top: after upgrading Magento from 1.5 to 1.9, adding specific products to the cart produced a persistent 502 Bad Gateway. The nginx error logs showed:
recv() failed (104: Connection reset by peer) while reading response header from upstream
Simultaneously, 350MB core dump files appeared in the public_html directory, and dmesg revealed PHP-FPM segmentation faults:
php-fpm[14862]: segfault at 7fff38236ff8 ip 00000000005c02ba sp 00007fff38237000 error 6
The issue manifested through a consistent set of symptoms:
- Specific product additions trigger the crash
- PHP-FPM workers terminate unexpectedly
- No Magento logs generated (var/log/ remains empty)
GDB analysis of the core dumps showed the crash occurring inside PHP-FPM itself, with no symbol information:
#0 0x00000000005c02ba in ?? ()
#1 0x00007f3a4a1d5fd8 in ?? ()
The "?? ()" frames mean the binary has no debug symbols; installing the distribution's PHP debug package (php5-dbg on Debian/Ubuntu) makes the backtraces readable.
Key configuration elements to examine:
PHP-FPM Pool Settings
pm = dynamic
pm.max_children = 50
pm.start_servers = 5
pm.min_spare_servers = 2
pm.max_spare_servers = 8
pm.max_requests = 500
Nginx FastCGI Parameters
fastcgi_read_timeout 300;
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
The segmentation faults suggest memory corruption, most likely caused by one of the following (a quick check for the first point is sketched after this list):
- Incompatible PHP extensions in the upgraded Magento 1.9 environment
- Memory leaks in product-specific operations
- PHP-FPM configuration exceeding server resources
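To check the first point, list exactly which extensions the PHP-FPM SAPI loads, since the FPM and CLI SAPIs often read different ini files, and an opcode cache (APC, for instance) left over from the old PHP build is a common segfault culprit. A throwaway sketch (the filename is arbitrary; drop it into the web root temporarily, request it through nginx, then delete it):
<?php
// extensions.php - temporary diagnostic: list Zend extensions (opcode caches,
// Xdebug) and regular extensions as loaded by the PHP-FPM SAPI.
header('Content-Type: text/plain');
echo php_sapi_name(), "\n\nZend extensions:\n";
foreach (get_loaded_extensions(true) as $ext) {
    echo '  ', $ext, "\n";
}
echo "\nExtensions:\n";
foreach (get_loaded_extensions() as $ext) {
    echo '  ', $ext, "\n";
}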
1. Increase PHP Memory Limit
php_admin_value[memory_limit] = 512M
2. Enable Core Dump Analysis
ulimit -c unlimited
echo "/tmp/core.%e.%p" > /proc/sys/kernel/core_pattern
3. Debug Product-Specific Code
Create a test script to isolate the problematic product:
require_once 'app/Mage.php'; // bootstrap Magento; run the script from the store root
Mage::app();
$product = Mage::getModel('catalog/product')->load(PROBLEM_ID); // substitute the real ID
$cart = Mage::getSingleton('checkout/cart');
$cart->addProduct($product, array('qty' => 1)); // explicit qty avoids an empty request object
$cart->save();
4. Update PHP-FPM Configuration
; Global php-fpm.conf directives (not per-pool): restart the master automatically
; if 10 workers die with SIGSEGV/SIGBUS within one minute.
emergency_restart_threshold = 10
emergency_restart_interval = 1m
; process_control_timeout limits how long workers get to react to master signals.
process_control_timeout = 10s
- Implement monitoring for PHP-FPM worker restarts
- Set up log rotation for core dumps
- Create custom logging for cart operations:
Mage::log('Adding product: ' . $product->getId(), null, 'cart_operations.log');
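As one concrete placement, the reproduction script from step 3 can bracket the add-to-cart call with log entries, so the last line in cart_operations.log identifies the product the worker was handling when it died:
// Log immediately before and after the suspect call; the 'before' entry normally
// reaches disk even if the worker then crashes inside addProduct().
Mage::log('Adding product: ' . $product->getId(), null, 'cart_operations.log');
$cart->addProduct($product, array('qty' => 1));
Mage::log('Added product OK: ' . $product->getId(), null, 'cart_operations.log');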
Alongside the product-data fix, this is the configuration that kept the stack stable: the ondemand pool recycles workers after 10 seconds of idling or 200 requests, so no worker lives long enough to accumulate problematic state, and the longer nginx timeouts give slow Magento operations room to finish.
# php-fpm pool
pm = ondemand
pm.max_children = 30
pm.process_idle_timeout = 10s
pm.max_requests = 200
# nginx fastcgi
fastcgi_read_timeout 600;
fastcgi_send_timeout 600;
fastcgi_connect_timeout 300;