I started an instance based on my AMI (based on Ubuntu 12.04 server) with the following command.
aws ec2 run-instances --image-id MY_AMI_ID --count 1 --instance-type t1.micro
Surprisingly, after I terminated the instance with the following command, it left a volume behind.
aws ec2 terminate-instances --instance-id MY_INSTANCE_ID
I would like to have the volume destroyed automatically, not sure if there is an easy option in the command line to do it.
Did you attach the volume after launching the instance?
Amazon EC2 deletes volumes that were attached at instance launch, provided their DeleteOnTermination flag is set. Volumes attached after the instance is launched are never deleted automatically.
Your AMI probably has its block device mapping set to keep volumes on termination. You can adjust this behavior in your AMI using the "delete-on-termination" option in the AWS Console or the ec2-register command:
http://docs.amazonwebservices.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-RegisterImage.html
I found that http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html has an example:
aws ec2 modify-instance-attribute --instance-id i-63893fed --block-device-mappings "[{\"DeviceName\": \"/dev/sda1\",\"Ebs\":{\"DeleteOnTermination\":true}}]"
That solves my problem: now after an instance is terminated, it will not leave a volume behind.
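Alternatively, you can set the flag at launch time instead of patching the instance afterwards. A minimal sketch, assuming /dev/sda1 is the AMI's root device (check your AMI's block device mapping; MY_AMI_ID is a placeholder as in the question):

```shell
# Build the block-device-mappings JSON once; /dev/sda1 is assumed
# to be the root device of the AMI in question.
bdm='[{"DeviceName":"/dev/sda1","Ebs":{"DeleteOnTermination":true}}]'

# Launch with the flag already set, so no later
# modify-instance-attribute call is needed.
aws ec2 run-instances --image-id MY_AMI_ID --count 1 \
  --instance-type t1.micro --block-device-mappings "$bdm"
```

This avoids the window between launch and the modify-instance-attribute call in which a termination would still leave the volume behind.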
Related
Due to human error, the AMI associated with a running EC2 instance was deleted and cannot be recovered. Is it possible to associate a new AMI with the existing, running instance? Or does this destroy the existing instance, meaning we have to create a new one?
Once an EC2 instance is created it doesn't matter at all if you delete the AMI. The AMI is not "in use" when an EC2 instance is running. The EBS volume(s) that were created when you launched the instance were copied from the AMI, at which point the AMI is no longer involved in the process at all.
You do not need to "add new AMI to existing EC2", which is impossible anyway.
You can create a new AMI from that EC2 instance; make sure you enable the "No reboot" option before creating the AMI, otherwise the server will be rebooted.
You can use the AWS CLI like below:
INSTANCE_ID=`/opt/aws/bin/ec2-metadata -i | /usr/bin/awk '{print $2}'`
/usr/bin/aws ec2 create-image --no-reboot --instance-id $INSTANCE_ID --name "AMINAME" --description "description"
You can also use the AWS console.
Creating an AMI will not destroy any EC2 instance. The AMI is a backup of the instance for disaster recovery: if the instance fails, you can launch a new one from the updated AMI.
You can also use the AWS AMI scheduler -
https://aws.amazon.com/premiumsupport/knowledge-center/ec2-systems-manager-ami-automation/
I opened a free tier instance for some practice.
I tried to terminate it, as I've done many times successfully.
But upon selecting Terminate instance from the dropdown, I got the following error:
Failed to terminate the instance <instance id>
The instance '<instance id>' may not be terminated. Modify its 'disableApiTermination' instance attribute and try again.
Where can I find the disableApiTermination attribute?
According to the documentation
To disable termination protection for a running or stopped instance
Select the instance, and choose Actions, Instance Settings, Change Termination Protection.
Choose Yes, Disable.
Solution: you need to disable API termination protection by changing an instance attribute; I'll demonstrate how to do it with the AWS CLI.
(documentation attached)
instance_id=$(aws ec2 describe-instances \
--filter "Name=tag:Name,Values=instance-name-example" \
--query "Reservations[].Instances[].InstanceId[]" \
--output text)
aws ec2 modify-instance-attribute --instance-id $instance_id --no-disable-api-termination
You can also enable/disable termination protection on an instance using the AWS CLI.
To enable protection:
aws ec2 modify-instance-attribute --instance-id <instance-id> --disable-api-termination
To disable protection:
aws ec2 modify-instance-attribute --instance-id <instance-id> --no-disable-api-termination
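To see where the attribute currently stands before toggling it, describe-instance-attribute works (same <instance-id> placeholder as above):

```shell
# Prints "True" when termination protection is enabled,
# "False" otherwise.
aws ec2 describe-instance-attribute \
  --instance-id <instance-id> \
  --attribute disableApiTermination \
  --query 'DisableApiTermination.Value' \
  --output text
```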
TL;DR: when launching an instance from an AMI created with the CLI command aws ec2 create-image, the previously applied user-data changes are gone, whereas launching from an AMI created in the console has all the user-data modifications.
Scenario:
I want to automate creation of a custom AMI for our use which itself is based on a regularly updated base AMI. Whenever I get a notification I take the new AMI ID and then run a script, which I'll excerpt from.
I spin up an EC2 instance to which I add user-data of some form. Create files, add packages, etc. This step is straight-forward and works.
# base_ami_id is set elsewhere
ec2_id=$(aws ec2 run-instances \
--image-id ${base_ami_id} \
--count 1 \
--instance-type t2.micro \
--key-name ${key_name} \
--security-group-ids ${security_group} \
--subnet-id ${subnet} \
--user-data file://user-data.sh \
--iam-instance-profile Name=${iamprof} \
--output text --query 'Instances[*].InstanceId' \
)
echo "Instance ID is ${ec2_id}"
echo "Waiting for instance ${ec2_id} to run"
aws ec2 wait instance-running --instance-ids ${ec2_id}
NOTE: At this point, I can ssh into the created instance and verify that cloud-init applied all my user-data correctly. All is well.
Taking the returned Instance ID, I create an AMI image, as per
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-ebs.html
echo Create image from instance ${ec2_id}
image_id=$(aws ec2 create-image --name ${image_name} --instance-id ${ec2_id} --output text --query 'ImageId')
echo "image ID is ${image_id}"
echo "Waiting for image ${image_id} to be available"
aws ec2 wait image-available --image-ids ${image_id}
echo "Image ${image_id} (${image_name}) available"
Watching in the console, after a while I see my new AMI.
To test, I launch an instance off the AMI created by this step - and am surprised to find that my modifications are NOT in the instance! It's as if I had launched off the original AMI. Which makes no sense: as described above, the user-data was there when I did a test login. And as seen in the shell excerpt above, I used the returned ${ec2_id} from the aws ec2 run-instances stage as the basis for the AMI creation, and not, inadvertently, some other ID.
Making this even more confusing, I use the console to test, doing a Create Image from exactly that running instance - the one with Instance ID ${ec2_id} as above, which showed that all my user-data was there.
Then I launch an instance off that AMI - and wouldn't you know, it has all my modifications! Everything is there.
I've checked and triple-checked and I just don't see where/what I'm doing wrong! I thought maybe there are some extra command-line options for aws ec2 create-image that the console equivalent uses when making the API call. If there are, I can't see them.
What am I missing?!
It's like the AMI created from the console, off the same instance ID and the one from the CLI are different, but I've compared the ID numbers, they're definitely the same. You would think that using the right instance ID implies that the underlying snapshots and/or volumes would be the same, because --instance-id is the only value I can provide to create-image, right?
EDIT:
Following #Michael-sqlbot advice, I looked into the CloudTrail logs. Sadly that made this even more frustrating.
EDIT of EDIT:
I have removed the CloudTrail logs, as they turned out not to be pertinent to the issue and its solution and would quite possibly only confuse things.
I found the issue and how to fix it, and it may help others running into the same issue:
It turns out that using
aws ec2 wait instance-running
is NOT sufficient to ensure that all user-data is complete and has finished.
You may want to use
aws ec2 wait instance-status-ok
either in addition or instead. Even then you may want to be paranoid and add a simple sleep of several minutes to be certain!
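A sketch of how the waiting step in the script above could be hardened. It assumes the instance has a public IP, is SSH-reachable as user ubuntu with the ${key_name} key, and ships a cloud-init recent enough to provide cloud-init status --wait:

```shell
# Wait for both EC2 status checks to pass, not just "running".
aws ec2 wait instance-status-ok --instance-ids ${ec2_id}

# Look up the public IP of the instance launched above.
instance_ip=$(aws ec2 describe-instances --instance-ids ${ec2_id} \
  --output text --query 'Reservations[0].Instances[0].PublicIpAddress')

# Block until cloud-init reports that user-data has finished running.
ssh -i "${key_name}.pem" ubuntu@${instance_ip} cloud-init status --wait
```

Asking cloud-init directly removes the guesswork of a fixed sleep, since status checks passing still doesn't guarantee a long-running user-data script is done.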
If you want to add a tag to an instance when launching, you have to perform two steps:
Launch an instance (run-instances)
Add a tag to the newly created instance (create-tags)
Is there a way to add a tag (or set a name) when launching an instance using a single CLI command?
This request had been pending for a long time and AWS finally supported this in March 2017.
See: Amazon EC2 and Amazon EBS add support for tagging resources upon creation and additional resource-level permissions
Make sure your AWS CLI version is at least 1.11.106
$ aws --version
aws-cli/1.11.109 Python/2.6.9 Linux/4.1.17-22.30.amzn1.x86_64 botocore/1.5.72
CLI to tag the instance when launching:
The following example applies a tag with a key of webserver and
value of production to the instance.
aws ec2 run-instances --image-id ami-abc12345 --count 1 --instance-type t2.micro \
    --key-name MyKeyPair --subnet-id subnet-6e7f829e \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=webserver,Value=production}]'
CLI to tag the instance and the volume:
The command also applies a tag with a key of cost-center and a value
of cc123 to any EBS volume that's created (in this case, the root
volume).
aws ec2 run-instances --image-id ami-abc12345 --count 1 --instance-type t2.micro \
    --key-name MyKeyPair --subnet-id subnet-6e7f829e \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=webserver,Value=production}]' 'ResourceType=volume,Tags=[{Key=cost-center,Value=cc123}]'
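After launching, you can confirm the tags took effect by filtering on them; this uses the tag key/value from the example above:

```shell
# List instance IDs carrying the tag applied at launch.
aws ec2 describe-instances \
  --filters "Name=tag:webserver,Values=production" \
  --query 'Reservations[].Instances[].InstanceId' \
  --output text
```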
I am using auto-scaling with a desired count of 1 for the master node. If the instance terminates, in order to maintain high availability we need to attach the same EBS volume from the previously terminated instance to the newly created one.
Provided that the CLI is configured on my AMI, I tried each of the following in user data; however, it did not work.
#!/bin/bash
EC2_INSTANCE_ID=$(ec2metadata --instance-id)
aws ec2 attach-volume --volume-id vol-777099d8 --instance-id $EC2_INSTANCE_ID --device /dev/sdk
#!/bin/bash
echo "aws ec2 attach-volume --volume-id vol-777099d8 --instance-id $(ec2metadata --instance-id) --device /dev/sdk" > /tmp/xyz.sh
sudo chmod 755 /tmp/xyz.sh
sudo sh /tmp/xyz.sh 2>>
#!/bin/bash
var='ec2 attach-volume --volume-id vol-777099d8 --instance-id $(ec2metadata --instance-id) --device /dev/sdk'
aws "$var"
aws ec2 attach-volume --volume-id vol-777099d8 --instance-id $(ec2metadata --instance-id) --device /dev/sdk
Appreciate your help!
It probably did not work because an EBS volume can only be attached to a single instance at a time. When a command fails, you should have error messages in response to the CLI commands to help you understand why, so check the instance's log.
I think you should revisit your architecture a bit, because trying to do this raises a red flag for me. First, an HA architecture should not have a single instance running. A good architecture would remain HA as instances are scaled up and down. If you have data that needs to be available to more than one instance, then you should use S3 or EFS to store that data, not an EBS volume.
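That said, if the single-volume design is kept, a likely cause of the failure is that the volume is still attached to (or detaching from) the terminating instance when the replacement boots. A sketch of user data that waits for the volume to become available first - the volume ID and device name are taken from the question, and errors are logged to a file so they can be diagnosed afterwards:

```shell
#!/bin/bash
# The instance ID of the new instance comes from instance metadata.
instance_id=$(ec2metadata --instance-id)

# Wait until the old instance has fully released the volume...
aws ec2 wait volume-available --volume-ids vol-777099d8

# ...then attach it, logging any error for later inspection.
aws ec2 attach-volume --volume-id vol-777099d8 \
  --instance-id "$instance_id" --device /dev/sdk \
  2>>/var/log/attach-volume.log
```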