How to Suppress Command Output Only on Success in Linux Shell Scripting

When automating tasks via shell scripts, we often need to suppress command output to maintain clean execution logs. The typical approach using > /dev/null 2>&1 completely silences both success and error output, leaving us blind when troubleshooting failures.
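
For reference, the all-or-nothing redirect looks like this (the remote script name is only a placeholder):

# Discards stdout and stderr unconditionally; if this command fails,
# there is no diagnostic trail left to inspect
ssh user@remote "backup_script.sh" > /dev/null 2>&1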

Here's an improved wrapper that captures output but only displays it on failure:

#!/bin/bash

TEMP_LOG=$(mktemp)
trap 'rm -f "$TEMP_LOG"' EXIT

run_silent() {
    # Capture stdout and stderr; &> truncates the log on each run,
    # so a failure shows only this command's output
    if ! "$@" &> "$TEMP_LOG"; then
        echo "Command failed:"
        cat "$TEMP_LOG"
        return 1
    fi
    return 0
}

# Example usage with SSH
run_silent ssh user@remote "critical_command"
run_silent ssh user@remote "another_command"

For production environments, consider these enhancements:

#!/bin/bash

set -o errexit -o nounset -o pipefail

LOG_FILE=$(mktemp)  # mktemp avoids the predictable-name race of /tmp/...$(date +%s)
readonly LOG_FILE
readonly RED='\033[0;31m'
readonly NC='\033[0m' # No Color

cleanup() {
    rm -f "$LOG_FILE"
}

trap cleanup EXIT

quiet_run() {
    local cmd=("$@")

    # Append to a running session log; on failure the full history is shown
    printf "Running: %s\n" "${cmd[*]}" >> "$LOG_FILE"

    # Capture stdout and stderr; stay silent unless the command fails
    if ! "${cmd[@]}" &>> "$LOG_FILE"; then
        >&2 echo -e "${RED}ERROR: Command failed${NC}"
        >&2 echo "Command: ${cmd[*]}"
        >&2 echo "Output:"
        >&2 cat "$LOG_FILE"
        exit 1
    fi
}

# Example with complex command chains
quiet_run ssh -T user@prod-server <<'EOF'
    sudo systemctl restart important-service
    ! grep ERROR /var/log/service.log  # fail if the service log contains ERROR lines
EOF
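
A note on this pattern: the here-document is attached to the quiet_run call, but since the function never redirects stdin, ssh -T inherits it and the remote shell executes those lines.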

What this version gives us:

  • Captures all output (stdout + stderr) in a temporary file
  • Only displays output when command fails
  • Automatically cleans up log files
  • Works seamlessly with SSH commands
  • Provides contextual error information

This technique proves particularly valuable for:

  • CI/CD pipeline scripts
  • Cron job monitoring (see the sketch after this list)
  • Remote server management via SSH
  • Automated deployment scripts
  • Long-running batch processes
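
The cron case is worth a sketch: cron mails whatever a job prints, so a script built on run_silent turns "mail on every run" into "mail only on failure". Everything below (library path, script name, commands) is a placeholder:

#!/bin/bash
# nightly.sh (hypothetical): run from cron; it emits output only on failure,
# so cron's mail-on-output behavior becomes mail-on-failure
source /usr/local/lib/run_silent.sh  # assumes run_silent was factored out here
run_silent pg_dump -f /backup/db.sql mydb
run_silent rsync -a /backup/ backup@mirror:/backup/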

To restate the goal: excessive output from automated scripts and remote commands over SSH is noise, yet silencing everything with standard redirection (> /dev/null 2>&1) leaves us blind when failures occur. What we really need is a solution that:

  • Silences output when commands succeed
  • Dumps accumulated output when commands fail
  • Works seamlessly with existing scripts
  • Provides clear error diagnostics

Here's an improved version of the technique that handles edge cases and provides better integration:

#!/bin/bash

set -eo pipefail

SILENT_LOG=$(mktemp /tmp/silent_log_XXXXXX)
function cleanup {
    [[ -f "$SILENT_LOG" ]] && rm -f "$SILENT_LOG"
}
trap cleanup EXIT

function error_handler {
    local rc=$?  # capture the failed command's exit code before it is overwritten
    echo -e "\n\033[1;31mCommand failed with exit code ${rc}:\033[0m" >&2
    [[ -s "$SILENT_LOG" ]] && cat "$SILENT_LOG" >&2
    exit 1
}

function silent {
    # Reset log for each command
    > "$SILENT_LOG"
    # Execute command with full output capture
    "$@" &>> "$SILENT_LOG" || error_handler
}

This pattern works particularly well for:

Remote SSH Commands

silent ssh user@remote "critical_update_script.sh"

CI/CD Pipelines

silent docker build -t app-image .
silent kubectl apply -f deployment.yaml

Package Management

silent apt-get install -y complex-dependency

Improvements in this version:

  • Uses mktemp for secure temp file creation
  • Implements proper cleanup via trap
  • Includes exit code in error message
  • Resets log between commands
  • Uses set -eo pipefail for better error handling
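
One design note worth making explicit: the "$@" ... || error_handler form matters under set -e. A bare failing command would trigger errexit and the EXIT trap first, deleting the log before anything could be printed; the || hands control to the handler while the captured output still exists.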

For those who prefer not to use temp files:

function silent {
    local output
    # The assignment's exit status is the command's exit status,
    # so the if still detects failure
    if ! output=$("$@" 2>&1); then
        echo -e "\033[1;31mError:\033[0m" >&2
        echo "$output" >&2
        return 1
    fi
}
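
One trade-off with this variant: the command runs inside a command substitution, so its entire output is buffered in memory (awkward for very chatty commands) and shell-state changes such as cd do not survive into the caller. Usage is otherwise unchanged (the commands shown are placeholders):

silent tar -czf /tmp/site.tar.gz /var/www        # placeholder backup job
silent rsync -a /tmp/site.tar.gz user@mirror:/   # prints nothing unless it fails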