When working with Terraform and AWS, one of the most common authentication errors you'll encounter is:
NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
This occurs when Terraform cannot find valid AWS credentials through any of the standard provider chain methods.
The AWS provider for Terraform resolves credentials in the following order:
- Directly specified in provider configuration (access_key/secret_key)
- Environment variables (AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY)
- Shared credentials file (~/.aws/credentials)
- EC2 instance role credentials (if running on EC2)
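The lookup order above can be sketched as a small shell check. This is a simplified sketch: it only covers the static sources, skips the EC2 metadata step (which needs network access), and `first_credential_source` is a hypothetical helper name, not an AWS tool:

```shell
# Report the first static credential source the chain would find.
# Mirrors the order above: explicit env vars, then the shared file.
first_credential_source() {
  if [ -n "$AWS_ACCESS_KEY_ID" ] && [ -n "$AWS_SECRET_ACCESS_KEY" ]; then
    echo "environment variables"
  elif [ -f "${AWS_SHARED_CREDENTIALS_FILE:-$HOME/.aws/credentials}" ]; then
    echo "shared credentials file"
  else
    echo "none (this is when NoCredentialProviders is raised)"
  fi
}
```

If every step comes up empty, the provider has nothing left to try and raises the NoCredentialProviders error.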
Here are the most effective solutions:
# Option 1: Set environment variables (Linux/macOS)
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_KEY"
# Option 2: Configure credentials file (~/.aws/credentials)
[default]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY
# Option 3: Hardcode in Terraform (not recommended for production)
provider "aws" {
  region     = "us-west-2"
  access_key = "YOUR_ACCESS_KEY"
  secret_key = "YOUR_SECRET_KEY"
}
If your Terraform config uses a profile (like in the original example; note that since Terraform 0.12 the "${var.x}" interpolation wrapper is unnecessary, so plain var references are preferred):
provider "aws" {
  region  = var.aws_region
  profile = var.aws_profile # must match a profile section in your credentials file
}
Your credentials file should have matching profile sections:
[dev-profile]
aws_access_key_id = DEV_ACCESS_KEY
aws_secret_access_key = DEV_SECRET_KEY
[prod-profile]
aws_access_key_id = PROD_ACCESS_KEY
aws_secret_access_key = PROD_SECRET_KEY
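To pick a profile at runtime without editing your provider block, you can export AWS_PROFILE; and to confirm a profile section actually exists before running Terraform, a quick sed over the file helps (`list_profiles` is an illustrative helper, not an AWS tool):

```shell
# Print the profile section names defined in a credentials file.
list_profiles() {
  sed -n 's/^\[\(.*\)\]$/\1/p' "$1"
}

# Typical usage: select a profile for a single run.
# AWS_PROFILE=dev-profile terraform plan
```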
To get detailed debugging information, set these environment variables (note: there is no AWS_CREDENTIALS_CHAIN_VERBOSE_ERRORS variable; verbose chain errors are an SDK-internal setting, so Terraform's own debug log is the practical equivalent):
# Make the AWS SDK also read ~/.aws/config (profiles defined there)
export AWS_SDK_LOAD_CONFIG=1
# Turn on Terraform's debug output
export TF_LOG=DEBUG
The debug log shows each credential source the provider tries, so you can see exactly where the chain fails.
For EC2 instances running Terraform, the simplest approach is to assign an IAM role:
resource "aws_iam_role" "terraform_execution" {
  name = "terraform_execution_role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      }
    ]
  })
}
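Note that the role by itself is not attached to anything: for an EC2 instance to pick it up, the role must be wrapped in an instance profile and referenced from the instance. A sketch (the AMI ID and resource names are placeholders):

```hcl
resource "aws_iam_instance_profile" "terraform_execution" {
  name = "terraform_execution_profile"
  role = aws_iam_role.terraform_execution.name
}

resource "aws_instance" "this" {
  ami                  = "ami-00000000" # placeholder AMI
  instance_type        = "t3.micro"
  iam_instance_profile = aws_iam_instance_profile.terraform_execution.name
}
```

With this in place, Terraform running on the instance needs no static credentials at all; the SDK fetches temporary credentials from the instance metadata service.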
If you need cross-account access, you'll need to configure the provider differently:
provider "aws" {
  alias   = "target_account"
  region  = "us-west-2"
  profile = "target_profile"

  assume_role {
    role_arn = "arn:aws:iam::TARGET_ACCOUNT_ID:role/OrganizationAccountAccessRole"
  }
}
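Resources then opt into the aliased provider explicitly, for example (the bucket resource and name are illustrative):

```hcl
resource "aws_s3_bucket" "replica" {
  provider = aws.target_account # created with the cross-account credentials
  bucket   = "my-replica-bucket-example"
}
```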
As noted above, the "NoCredentialProviders: no valid providers in chain" error means Terraform cannot find valid AWS credentials. Because your configuration sets the profile parameter on the AWS provider, Terraform is looking specifically for that profile in your AWS credentials file.
First, check your AWS credentials file's location and contents. On Linux/macOS this is typically ~/.aws/credentials, and on Windows %UserProfile%\.aws\credentials.
# Example credentials file content
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[your_profile_name]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
The error might occur if:
- The profile name in your Terraform configuration doesn't match any profile in your credentials file
- The AWS credentials file is in the wrong location
- Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) are overriding your profile settings
- Your AWS CLI is not properly configured
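For the third cause, it's worth listing any AWS_* variables in your shell, since they silently take precedence over the profile (`aws_env_vars` is just an illustrative helper):

```shell
# Show any AWS_* environment variables currently set.
aws_env_vars() {
  env | grep '^AWS_' || echo "no AWS_* variables set"
}
```

If AWS_ACCESS_KEY_ID shows up here, unset it (unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY) before expecting the profile to be used.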
Here's a more robust way to configure your AWS provider that handles various credential scenarios:
provider "aws" {
  region                  = var.aws_region
  profile                 = var.aws_profile
  shared_credentials_file = "~/.aws/credentials" # explicit path; AWS provider v4+ renames this to shared_credentials_files (a list)
  max_retries             = 5

  # Optional: skip the STS validation call. Useful for plan-only runs or
  # non-standard endpoints, but it can mask real credential problems.
  skip_credentials_validation = true
  skip_metadata_api_check     = true
}
Before troubleshooting Terraform, verify your AWS CLI works with the same profile:
# Test your AWS profile
aws sts get-caller-identity --profile your_profile_name
# If this fails, your Terraform will fail too
Several approaches can resolve this issue:
# Option 1: Set environment variables (overrides profile)
export AWS_ACCESS_KEY_ID="anaccesskey"
export AWS_SECRET_ACCESS_KEY="asecretkey"
export AWS_REGION="us-west-2"
# Option 2: Use shared credentials with explicit path
provider "aws" {
  region                  = "us-west-2"
  shared_credentials_file = "/path/to/credentials"
  profile                 = "customprofile"
}
# Option 3: Direct credential specification (not recommended for committed code)
provider "aws" {
  region     = "us-west-2"
  access_key = "AKIAXXXXXXXXXXXXXXXX"
  secret_key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
If you're using IAM roles (which your code suggests with iam_instance_profile), ensure:
- The role has sufficient permissions
- The role trust relationship is properly configured
- You're not trying to assume a role from invalid credentials
After making changes, run this command to verify Terraform can authenticate:
terraform plan -target=aws_instance.this