How to Configure AWS Credentials for S3 Upload in Jenkins Pipeline: A Step-by-Step Guide
When working with Jenkins pipelines to upload files to Amazon S3, properly configuring AWS credentials is crucial. The s3Upload function requires correct AWS authentication, which can be achieved through several methods.

First, ensure the Pipeline: AWS Steps plugin (which provides the withAWS and s3Upload steps) and the CloudBees AWS Credentials plugin are installed in Jenkins. Then configure your credentials:

steps {
    withAWS(credentials: 'aws-credentials-id') {
        s3Upload(file:'file.txt', bucket:'my-bucket', path:'destination.txt')
    }
}

For simple cases, you can set environment variables directly in your pipeline:

environment {
    AWS_ACCESS_KEY_ID = credentials('aws-access-key')
    AWS_SECRET_ACCESS_KEY = credentials('aws-secret-key')
    AWS_DEFAULT_REGION = 'us-east-1'
}

steps {
    s3Upload(file:'file.txt', bucket:'my-bucket', path:'destination.txt')
}

For those preferring AWS profiles (as in your case), ensure the profile is properly set up on the Jenkins agent:

steps {
    withAWS(profile: 'Test Publisher') {
        s3Upload(file:'ok.txt', bucket:'my-bucket', path:'file.txt')
    }
}

If you're getting "profile file could not be found" errors:

  1. Verify the profile exists in Jenkins credentials
  2. Check the profile name matches exactly (case-sensitive)
  3. Ensure the AWS CLI is configured on the Jenkins server
  4. Confirm the Jenkins user has permissions to read the credentials
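To check points 2–4 from inside Jenkins, a throwaway debug stage can print what the build user actually sees on the agent. This is a sketch: it assumes the AWS CLI is on the agent's PATH, and note that aws configure list-profiles requires CLI v2.

```groovy
// Hypothetical debug stage: prints which user the build runs as and
// which AWS profiles / credential files that user can see.
stage('Debug AWS setup') {
    steps {
        sh 'whoami'
        sh 'aws configure list-profiles || true'   // AWS CLI v2 only
        sh 'ls -la "$HOME/.aws" || true'           // profile files live here
    }
}
```

If the profile you expect is missing from this output, the credentials were configured for a different OS user than the one Jenkins runs as.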

Here's a full working example:

pipeline {
    agent any
    
    environment {
        AWS_REGION = 'us-east-1'
    }
    
    stages {
        stage('Upload to S3') {
            steps {
                withAWS(credentials: 'my-aws-creds') {
                    s3Upload(
                        file: 'build/output.zip',
                        bucket: 'deployment-bucket',
                        path: 'artifacts/latest.zip'
                    )
                }
            }
        }
    }
}

Finally, keep these security practices in mind:

  • Never hardcode credentials in Jenkinsfiles
  • Use short-lived credentials when possible
  • Restrict AWS permissions to only what's needed
  • Rotate credentials regularly

Many developers encounter the "profile file could not be found" error when attempting to use the s3Upload function, even when credentials appear correctly set in Jenkins.

Here are three effective methods to provide AWS credentials to your Jenkins pipeline:

Method 1: Using Jenkins Credentials Binding

withCredentials([[
  $class: 'AmazonWebServicesCredentialsBinding',
  credentialsId: 'AWS_CREDENTIALS_ID',
  accessKeyVariable: 'AWS_ACCESS_KEY_ID',
  secretKeyVariable: 'AWS_SECRET_ACCESS_KEY'
]]) {
  s3Upload(
    file: 'ok.txt',
    bucket: 'my-bucket',
    path: 'file.txt'
  )
}

Method 2: Environment Variable Configuration

In your Jenkinsfile:

environment {
  AWS_ACCESS_KEY_ID = credentials('aws-access-key-id')
  AWS_SECRET_ACCESS_KEY = credentials('aws-secret-access-key')
  AWS_DEFAULT_REGION = 'us-east-1'
}

stages {
  stage('Upload') {
    steps {
      s3Upload(
        file: 'ok.txt',
        bucket: 'my-bucket',
        path: 'file.txt'
      )
    }
  }
}

Method 3: IAM Role Configuration

For more secure setups, use IAM roles instead of access keys:

withAWS(region: 'us-east-1', role: 'arn:aws:iam::123456789012:role/JenkinsUploadRole') {
  s3Upload(
    file: 'ok.txt',
    bucket: 'my-bucket',
    path: 'file.txt'
  )
}
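For the role assumption to succeed, the role's trust policy must allow whatever principal Jenkins authenticates as to call sts:AssumeRole. A minimal sketch, where the account ID and principal ARN are placeholders to replace with your own:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/jenkins" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```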

If you're still encountering the profile not found error:

  • Verify the required plugins are installed (CloudBees AWS Credentials and Pipeline: AWS Steps)
  • Check the credential ID matches exactly what's in Jenkins
  • Ensure the IAM user has proper S3 permissions (s3:PutObject)
  • Confirm the bucket policy allows the operation
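As a concrete reference for the permissions point above, a minimal identity policy granting only the upload permission might look like the following (the bucket name is a placeholder; scope the Resource to your actual bucket):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
```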

For complex scenarios requiring multiple AWS accounts:

withAWS(profile: 'production-profile') {
  s3Upload(
    file: 'production-file.txt',
    bucket: 'prod-bucket',
    path: 'prod/file.txt'
  )
}

withAWS(profile: 'staging-profile') {
  s3Upload(
    file: 'staging-file.txt',
    bucket: 'stage-bucket',
    path: 'stage/file.txt'
  )
}

To wrap up, a few credential-hygiene recommendations:

  • Rotate credentials regularly
  • Implement least-privilege permissions
  • Use temporary credentials when possible
  • Consider using AWS Parameter Store for credential management