Many EKS users face S3 access problems because they misunderstand the IAM permission model for worker nodes. The key misconception is thinking that either the EKS service role or the EC2 instance profile automatically grants permissions to pods. In reality, the recommended approach is IAM Roles for Service Accounts (IRSA), which scopes AWS permissions to individual pods through the Kubernetes service account they run as.
Here's the correct way to enable S3 access:

- Enable the OIDC provider for your cluster:

eksctl utils associate-iam-oidc-provider \
  --cluster YOUR_CLUSTER_NAME \
  --approve

- Create an IAM policy for S3 access (avoid AmazonS3FullAccess; scope it to the buckets you need):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET_NAME",
        "arn:aws:s3:::YOUR_BUCKET_NAME/*"
      ]
    }
  ]
}

Save this as s3-policy.json and register it with IAM:

aws iam create-policy \
  --policy-name YOUR_POLICY_NAME \
  --policy-document file://s3-policy.json

- Create the IAM role, attach the policy, and bind it to a Kubernetes service account:

eksctl create iamserviceaccount \
  --name snakemake-s3-access \
  --namespace default \
  --cluster YOUR_CLUSTER_NAME \
  --attach-policy-arn arn:aws:iam::AWS_ACCOUNT_ID:policy/YOUR_POLICY_NAME \
  --approve \
  --override-existing-serviceaccounts
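Under the hood, the eksctl command above creates (or annotates) a Kubernetes ServiceAccount roughly like the following sketch; the account ID and role name are placeholders (eksctl generates the actual role name):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: snakemake-s3-access
  namespace: default
  annotations:
    # This annotation is what links the service account to the IAM role
    eks.amazonaws.com/role-arn: arn:aws:iam::AWS_ACCOUNT_ID:role/GENERATED_ROLE_NAME
```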
After applying these changes, verify access by running a test pod:
kubectl run s3-test --rm -i --tty --image amazon/aws-cli \
--overrides='{
"spec": {
"serviceAccountName": "snakemake-s3-access"
}
}' \
--command -- aws s3 ls s3://YOUR_BUCKET_NAME
If you still face problems:
- Check that the OIDC provider is properly configured:
aws eks describe-cluster --name YOUR_CLUSTER_NAME --query "cluster.identity.oidc.issuer"
- Verify the trust relationship in your IAM role includes the correct OIDC provider URL
- Ensure your pods specify the correct service account:
serviceAccountName: snakemake-s3-access
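The trust relationship mentioned above should look roughly like this sketch; the region, account ID, and OIDC ID are placeholders taken from your cluster's issuer URL:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::AWS_ACCOUNT_ID:oidc-provider/oidc.eks.REGION.amazonaws.com/id/OIDC_ID"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.REGION.amazonaws.com/id/OIDC_ID:sub": "system:serviceaccount:default:snakemake-s3-access"
        }
      }
    }
  ]
}
```

If the `sub` condition doesn't match the namespace and service account name exactly, pods will fail to assume the role.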
If you must use EC2 instance profiles (not recommended, since every pod on the node inherits the permissions), attach a policy to your worker node IAM role:

aws iam attach-role-policy \
  --role-name YOUR_WORKER_NODE_ROLE_NAME \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

No restart is required; the new permissions take effect as soon as the instance's credentials refresh from the instance metadata service.
When working with Amazon EKS, worker nodes typically require S3 access for various use cases like data processing pipelines (Snakemake in this example). The confusion often arises from misunderstanding where IAM permissions should be attached in the EKS architecture.
The attempts mentioned fail because:
- The EKS service role only authorizes control plane operations; it grants nothing to worker nodes or pods
- The CloudFormation role is only used to provision the infrastructure; its permissions don't propagate to pods
- Your own user account's permissions don't extend to workloads running on the nodes
Here's the correct approach using IRSA:
# First, create an IAM policy for S3 access
cat <<EOF > s3-policy.json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::your-bucket-name",
"arn:aws:s3:::your-bucket-name/*"
]
}
]
}
EOF
# Create the IAM policy
aws iam create-policy --policy-name EksS3AccessPolicy --policy-document file://s3-policy.json
Before using IRSA, you need to configure an OIDC provider:
eksctl utils associate-iam-oidc-provider \
--cluster your-cluster-name \
--region region-code \
--approve
Now create the role and bind it to a Kubernetes service account:
eksctl create iamserviceaccount \
--name snakemake-sa \
--namespace default \
--cluster your-cluster-name \
--attach-policy-arn arn:aws:iam::your-account-id:policy/EksS3AccessPolicy \
--approve \
--override-existing-serviceaccounts
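When a pod runs under this service account, the EKS pod identity webhook injects a projected token and two environment variables, which AWS SDKs and the CLI use to call sts:AssumeRoleWithWebIdentity automatically. Inside such a pod you would see values along these lines (illustrative, not exact):

```
AWS_ROLE_ARN=arn:aws:iam::your-account-id:role/eksctl-generated-role-name
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
```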
To test if your pods now have S3 access:
kubectl run s3-test \
  --image=amazon/aws-cli \
  --restart=Never \
  --rm -i \
  --overrides='{"spec": {"serviceAccountName": "snakemake-sa"}}' \
  --command -- aws s3 ls s3://your-bucket-name

(The older --serviceaccount flag was removed in kubectl v1.24; --overrides works on all recent versions.)
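For real workload pods (rather than a kubectl run one-off), reference the service account in the pod spec. This manifest is a minimal sketch; the pod name and command are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: snakemake-worker
spec:
  serviceAccountName: snakemake-sa   # grants the pod the IRSA role
  restartPolicy: Never
  containers:
    - name: main
      image: amazon/aws-cli          # any image with an AWS SDK or the CLI works
      command: ["aws", "s3", "ls", "s3://your-bucket-name"]
```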
If you prefer giving S3 access to all worker nodes (less secure):
# Find your worker node role
kubectl -n kube-system describe configmap aws-auth | grep rolearn
# Attach a policy directly to the worker node role (a scoped policy is preferable to AmazonS3FullAccess)
aws iam attach-role-policy \
--role-name your-node-role-name \
--policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
Security best practices:
- Prefer IRSA over node-level permissions for finer-grained access control
- Restrict S3 bucket access to only necessary buckets/prefixes
- Use conditional IAM policies when possible
- Rotate credentials regularly
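As an example of a conditional policy, this sketch restricts listing to a single prefix; the bucket and prefix names are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::your-bucket-name",
      "Condition": {
        "StringLike": {
          "s3:prefix": ["snakemake-data/*"]
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::your-bucket-name/snakemake-data/*"
    }
  ]
}
```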