I recently encountered a frustrating scenario where terminated t1.micro and t2.micro instances kept resurrecting themselves. Despite termination protection being disabled, these instances would reappear in my running instances list within minutes. Here's how I solved this AWS zombie apocalypse.
Before diving deep, let's verify some basics:
aws ec2 describe-instances --instance-ids i-1234567890abcdef0 \
--query 'Reservations[].Instances[].StateReason.Code'
This helps identify if AWS is automatically restarting your instance due to underlying issues.
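To make the check above actionable, here is a small sketch (the helper name `explain_state_reason` is my own, and the one-line explanations are my shorthand) that maps a few documented EC2 StateReason codes to a likely cause:

```shell
# Sketch: translate a StateReason code from the describe-instances query
# above into a plain-language hint. The codes are documented EC2
# StateReason codes; the wording of each hint is my own.
explain_state_reason() {
  case "$1" in
    Client.UserInitiatedShutdown)     echo "stopped/terminated via the API or console" ;;
    Client.InstanceInitiatedShutdown) echo "shutdown issued from inside the OS" ;;
    Server.SpotInstanceTermination)   echo "Spot capacity reclaimed by AWS" ;;
    Server.InternalError)             echo "AWS-side failure" ;;
    *)                                echo "unrecognized code: $1" ;;
  esac
}

explain_state_reason Client.UserInitiatedShutdown
# prints: stopped/terminated via the API or console
```

If the code points at an API- or console-initiated action you didn't take, that's a strong hint some automation (ASG, EB, or a third-party tool) is acting on the instance.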
Elastic Beanstalk (EB) is a crucial factor here. EB environments automatically replace terminated instances to maintain the configured capacity. To properly terminate:
# First check your EB environment status
aws elasticbeanstalk describe-environments \
--environment-names YourEnvName
# Then terminate properly through EB
aws elasticbeanstalk terminate-environment \
--environment-name YourEnvName
Many don't realize their instances might be managed by Auto Scaling Groups. Check with:
aws autoscaling describe-auto-scaling-instances \
--instance-ids i-1234567890abcdef0
If your instance is in an ASG, you'll need to either:
- Detach the instance first:
aws autoscaling detach-instances \
--instance-ids i-1234567890abcdef0 \
--auto-scaling-group-name YourASGName \
--should-decrement-desired-capacity
- Or terminate through ASG:
aws autoscaling terminate-instance-in-auto-scaling-group \
--instance-id i-1234567890abcdef0 \
--should-decrement-desired-capacity
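To tie the two options together, here's a sketch (the helper name `pick_terminate_cmd` is my own) that picks the right termination command depending on whether the ASG lookup returned a group name. With `--output text`, the AWS CLI prints `None` when the query result is null, which is why the helper treats `None` as "not in an ASG":

```shell
# Sketch: choose the termination path based on ASG membership.
# In real use, asg_name would come from:
#   aws autoscaling describe-auto-scaling-instances \
#       --instance-ids "$instance_id" \
#       --query 'AutoScalingInstances[0].AutoScalingGroupName' --output text
pick_terminate_cmd() {
  instance_id="$1"
  asg_name="$2"   # empty or "None" when the instance is not in an ASG
  if [ -n "$asg_name" ] && [ "$asg_name" != "None" ]; then
    echo "aws autoscaling terminate-instance-in-auto-scaling-group --instance-id $instance_id --should-decrement-desired-capacity"
  else
    echo "aws ec2 terminate-instances --instance-ids $instance_id"
  fi
}

pick_terminate_cmd i-1234567890abcdef0 None
# prints: aws ec2 terminate-instances --instance-ids i-1234567890abcdef0
```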
EC2 also has an instance-initiated shutdown behavior setting that is easy to confuse with automatic recovery (recovery is driven by CloudWatch alarm actions, not by this attribute):
aws ec2 describe-instance-attribute \
--instance-id i-1234567890abcdef0 \
--attribute instanceInitiatedShutdownBehavior
If this is set to "stop" instead of "terminate", a shutdown issued from inside the OS only stops the instance, so it can later be started again rather than going away for good.
When all else fails, this sequence usually works:
# First deregister from any load balancers
aws elbv2 deregister-targets \
--target-group-arn your-target-group-arn \
--targets Id=i-1234567890abcdef0
# Then terminate (note: terminate-instances has no --force flag;
# --force exists only on stop-instances)
aws ec2 terminate-instances \
--instance-ids i-1234567890abcdef0
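After terminating, it's worth confirming the instance actually stays gone. This sketch (the function name `wait_until_terminated` is my own) takes the check command as a parameter so the loop itself can be dry-run; in real use you'd pass the `describe-instances` state query shown in the comment:

```shell
# Sketch: poll until the instance reports "terminated". In real use, pass:
#   "aws ec2 describe-instances --instance-ids i-1234567890abcdef0 \
#        --query 'Reservations[].Instances[].State.Name' --output text"
wait_until_terminated() {
  check_cmd="$1"
  attempts="${2:-10}"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    state=$(eval "$check_cmd")
    if [ "$state" = "terminated" ]; then
      return 0
    fi
    i=$((i + 1))
    sleep 5
  done
  return 1
}

# Dry run with a stub check command:
wait_until_terminated "echo terminated" && echo "gone for good"
# prints: gone for good
```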
To avoid this issue in the future:
- Always check instance dependencies before termination
- Review EB environment configurations carefully
- Monitor Auto Scaling Group policies
- Consider using AWS Systems Manager for lifecycle management
Recently, while cleaning up my AWS infrastructure, I encountered a bizarre situation where terminated t1.micro and t2.micro instances kept reappearing in my running instances list. Here's what I discovered about this phenomenon and how I permanently solved it.
After investigating several possibilities, I identified these potential causes:
- Auto Scaling Groups (ASG) maintaining desired capacity
- Elastic Beanstalk environment enforcing instance count
- EC2 Auto Recovery configured at the instance level
- Third-party automation tools like Terraform or Ansible
First, I checked these AWS resources:
# Check for Auto Scaling Groups
aws autoscaling describe-auto-scaling-groups \
--query "AutoScalingGroups[?Instances[?InstanceId=='i-1234567890abcdef0']]"
# Verify Elastic Beanstalk environment settings
aws elasticbeanstalk describe-environment-resources \
--environment-name your-env-name
For my EB-configured instance, the solution was to either:
- Terminate the entire EB environment using the console
- Or modify the environment's capacity settings first:
aws elasticbeanstalk update-environment \
--environment-name your-env-name \
--option-settings Namespace=aws:autoscaling:asg,OptionName=MinSize,Value=0 \
Namespace=aws:autoscaling:asg,OptionName=MaxSize,Value=0
For regular EC2 instances that keep restarting:
# Check the instance-initiated shutdown behavior (this attribute is not
# auto-recovery; recovery is driven by CloudWatch alarm actions)
aws ec2 describe-instance-attribute \
--instance-id i-1234567890abcdef0 \
--attribute instanceInitiatedShutdownBehavior
# Set it to terminate so an OS-level shutdown terminates the instance
aws ec2 modify-instance-attribute \
--instance-id i-1234567890abcdef0 \
--instance-initiated-shutdown-behavior terminate
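Auto-recovery itself is triggered by CloudWatch alarms whose action is the EC2 recover automation ARN (`arn:aws:automate:<region>:ec2:recover`). As a sketch of how to spot them, this helper (the name `filter_recover_alarms` is my own) filters `aws cloudwatch describe-alarms --output json` for recover actions; the here-doc below is a hand-written sample, not real AWS output, used only to show the filter at work:

```shell
# Sketch: list alarms whose actions include the EC2 recover automation ARN.
# In real use, feed it: aws cloudwatch describe-alarms --output json
filter_recover_alarms() {
  python3 -c '
import json, sys
doc = json.load(sys.stdin)
for alarm in doc.get("MetricAlarms", []):
    if any("ec2:recover" in action for action in alarm.get("AlarmActions", [])):
        print(alarm["AlarmName"])
'
}

# Demo on a hand-written sample document:
filter_recover_alarms <<'EOF'
{"MetricAlarms": [
  {"AlarmName": "recover-web-1",
   "AlarmActions": ["arn:aws:automate:us-east-1:ec2:recover"]},
  {"AlarmName": "cpu-high",
   "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:ops"]}
]}
EOF
# prints: recover-web-1
```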
To avoid this situation in the future:
- Always check ASG configurations before termination
- Review CloudWatch alarms that might trigger recovery
- Consider using AWS Config rules to monitor instance lifecycle
- Document all automation that might affect instance states