Handling Non-Zero Exit Codes in PostgreSQL When Executing SQL Script Files via psql -f



When working with PostgreSQL's command-line interface, I noticed an important behavioral difference between these two approaches:


# This returns non-zero on error
psql -c "CREATE TABLE invalid_syntax (id int"

# This returns zero even when errors occur
psql -f script_with_errors.sql

In shell scripting and CI/CD pipelines, proper error handling is crucial. The current behavior means scripts might silently fail when using -f, which can lead to:

  • Undetected database migration failures
  • False positives in deployment scripts
  • Hidden errors in automated testing

Here are three approaches I've successfully used to handle this:

1. Using ON_ERROR_STOP


psql --set ON_ERROR_STOP=1 -f script.sql

With ON_ERROR_STOP set, psql stops at the first failing statement and exits with a non-zero status (3) instead of continuing through the rest of the script.
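psql documents four exit codes: 0 for success, 1 for a fatal error of psql's own, 2 for a connection failure, and 3 for an error in the script when ON_ERROR_STOP is set. A minimal dispatch sketch, with sh -c 'exit 3' standing in for the psql invocation so it runs without a database:

```shell
# Stand-in for: psql -v ON_ERROR_STOP=1 -f script.sql
sh -c 'exit 3'

case $? in
    0) echo "script completed" ;;
    2) echo "could not connect to the server" ;;
    3) echo "a statement in the script failed" ;;
    *) echo "psql reported a fatal error" ;;
esac
```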

2. Combining with -v


psql -v ON_ERROR_STOP=1 -f script.sql

An alternative spelling of the same option: -v, --set, and --variable are synonyms in psql, so this command is identical to the previous one.

3. Full Shell Implementation

For complex scenarios, you might want additional error handling:


if ! psql -v ON_ERROR_STOP=1 -f script.sql; then
    echo "SQL script failed"    # note: $? here is 0, because ! negates the status
    # Additional cleanup or notification logic
    exit 1
fi
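One caveat when reporting the exit code: because ! negates the pipeline status, $? inside an "if ! psql ...; then" branch is 0, not psql's real code. Capture the status into a variable first; here sh -c 'exit 3' stands in for the psql call so the sketch runs anywhere:

```shell
# Stand-in for: psql -v ON_ERROR_STOP=1 -f script.sql
sh -c 'exit 3'
status=$?   # capture immediately, before any other command overwrites it

if [ "$status" -ne 0 ]; then
    echo "SQL script failed with exit code $status"
fi
```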

In a recent migration script, I used this pattern to ensure atomic deployments:


#!/bin/bash

set -e

# Run migrations with proper error handling
if ! psql -v ON_ERROR_STOP=1 -f migrations/001_initial_schema.sql; then
    echo "Migration failed - rolling back"
    psql -c "DROP TABLE IF EXISTS users,accounts CASCADE"
    exit 1
fi
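The pattern above relies on a shell rule worth calling out: set -e does not abort the script for a command tested in an if condition, which is why the failed psql call hands control to the rollback branch instead of killing the script outright. A runnable sketch, with sh -c 'exit 3' standing in for the failing migration:

```shell
set -e

# set -e is suppressed for commands tested by if, so a failing
# "migration" reaches the rollback branch instead of aborting the script
if ! sh -c 'exit 3'; then
    echo "Migration failed - rolling back"
fi
echo "script reached the end"
```

Without the if (or an || fallback), set -e would terminate the script at the failed command before any rollback logic could run.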

Remember these important details:

  • With ON_ERROR_STOP set, the exit code is 3 for an error in the script; 2 indicates a connection failure and 1 a fatal error in psql itself
  • ON_ERROR_STOP works whether the file contains a single transaction or many standalone statements
  • Combine with --single-transaction (-1) so a partially failed migration rolls back as a whole

If you're still seeing issues:

  1. Verify your psql version (9.3+ recommended)
  2. Check for silent error suppression in your SQL scripts
  3. Test with simple scripts first to isolate the issue

When automating PostgreSQL database operations, many developers encounter this frustrating behavior:


# Returns non-zero on error (good)
psql -c "CREATE TABLE invalid_table (1invalid_column int);"

# Returns zero even when SQL statements fail (bad)
psql -f script_with_errors.sql

This makes error handling in shell scripts particularly challenging when executing SQL files.

The psql client treats -c (command) and -f (file) differently by design. The file execution mode is optimized for batch processing where you might want to continue despite some errors.

Solution 1: Use ON_ERROR_STOP

Add this to your SQL file or command:


\set ON_ERROR_STOP on
-- Your SQL statements below
CREATE TABLE invalid_table (1invalid_column int);

Or pass it as a command-line argument:


psql -v ON_ERROR_STOP=1 -f script.sql

Solution 2: Check psql Variables

Capture the ERROR state in a shell script:


output=$(psql -v ON_ERROR_STOP=1 -f script.sql 2>&1)
status=$?
if [[ $status -ne 0 || "$output" == *"ERROR:"* ]]; then
    echo "SQL execution failed"
    exit 1
fi
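The substring check is the belt-and-braces part: without ON_ERROR_STOP, psql prints ERROR: lines yet still exits 0, so the exit status alone can miss failures. The logic can be exercised without a database by letting printf stand in for the psql run:

```shell
# Stand-in for: output=$(psql -f script.sql 2>&1), which exits 0 despite the error
output=$(printf 'CREATE TABLE\nERROR:  syntax error at or near "1invalid_column"\n')
status=$?

if [ "$status" -ne 0 ] || printf '%s\n' "$output" | grep -q '^ERROR:'; then
    echo "SQL execution failed"
fi
```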

Solution 3: Transaction Block

Wrap your SQL in a transaction block so a failure aborts everything up to the COMMIT:


\set ON_ERROR_STOP on
BEGIN;
-- Your statements here
CREATE TABLE valid_table (id serial);
CREATE TABLE invalid_table (1badname int); -- Will fail and abort the transaction
COMMIT;

With ON_ERROR_STOP set, psql stops at the failed statement and exits with status 3, and the open transaction is rolled back when the session ends, so valid_table is never created. Note that \q takes no argument and always exits with status 0, so it cannot be used to signal failure. Passing --single-transaction (or -1) on the command line achieves the same wrapping without editing the file.

For robust production scripts:


#!/bin/bash

PSQL="psql -v ON_ERROR_STOP=1 -q -t -X"

$PSQL -f schema_changes.sql || {
    echo "Schema changes failed"
    exit 1
}

# Verify no errors in system tables
errors=$($PSQL -c "SELECT COUNT(*) FROM pg_stat_activity WHERE state = 'idle in transaction (aborted)'")
if [[ $errors -gt 0 ]]; then
    echo "Found $errors aborted transactions"
    exit 1
fi
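One detail of the "cmd || { ... }" form used above: at the top of the brace group, $? still holds the failed command's real exit code (unlike inside an "if !" branch, where the negation resets it to 0). A runnable sketch, with sh -c 'exit 3' standing in for the $PSQL call:

```shell
# Stand-in for: $PSQL -f schema_changes.sql
run_migration() { sh -c 'exit 3'; }

run_migration || {
    echo "Schema changes failed with exit code $?"   # $? is still 3 here
    # a real script would exit 1 at this point
}
```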

Consider using pg_prove for pgTAP test scripts, or calling libpq's PQexec directly when you need finer control over execution.