I have 6 spot instances for a project. I only need them up for 15 minutes, then I need to terminate them so I can spin them up again later.
My idea is as follows: install the AWS CLI in the AMI, then create a cron job inside each instance that terminates the EC2 spot instance.
Currently: aws ec2 terminate-instances
However, that command needs the instance ID, and since these are spot instances I do not know the IDs in advance.
--update--
Per the suggestion below, I tested with stop-instances and got an error:
aws ec2 stop-instances --instance-ids ec2metadata
An error occurred (InvalidInstanceID.Malformed) when calling the StopInstances operation: Invalid id: "ec2metadata" (expecting "i-...")
How can I terminate an instance from within that instance itself?
curl -s http://169.254.169.254/latest/meta-data/instance-id
will get the instance ID of the instance it is running on. So, something like:
aws ec2 stop-instances --instance-ids `curl -s http://169.254.169.254/latest/meta-data/instance-id`
or put it in a shell script and run that as a cron job:
#!/bin/sh
# look up this instance's ID from the instance metadata service, then stop it
instid=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 stop-instances --instance-ids "$instid"
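If the AMI enforces IMDSv2 (token-based access to the metadata service), the same approach still works with two extra curl calls. A minimal sketch, assuming the instance's IAM role is allowed to call ec2:TerminateInstances:
#!/bin/sh
# IMDSv2: request a short-lived token, then use it to read this instance's ID
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
INSTANCE_ID=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 terminate-instances --instance-ids "$INSTANCE_ID"
For the 15-minute schedule, a crontab entry along these lines could run the script once per boot (the script path is hypothetical):
@reboot sleep 900 && /usr/local/bin/terminate-self.sh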
Related
I am trying to write a bash script that will delete my EC2 instances and the auto scaling group that launched them:
EC2s=$(aws ec2 describe-instances --region=eu-west-3 \
--filters "Name=tag:Name,Values=*-my-dev-eu-west-3" \
--query "Reservations[].Instances[].InstanceId" \
--output text)
for id in $EC2s
do
aws ec2 terminate-instances --region=eu-west-3 --instance-ids $id
done
aws autoscaling delete-auto-scaling-group --region eu-west-3 \
--auto-scaling-group-name my-asg-dev-eu-west-3
But it fails with this error:
An error occurred (ResourceInUse) when calling the DeleteAutoScalingGroup operation:
You cannot delete an AutoScalingGroup while there are instances or pending Spot
instance request(s) still in the group.
There is no issue if I use the AWS console to do the same thing. Why does the AWS CLI prevent me from deleting the ASG when I have terminated all the instances?
If you really want to do this with the CLI, you may first want to use the aws autoscaling suspend-processes command to prevent the ASG from launching replacement instances. Then use aws ec2 terminate-instances as you are doing, followed by aws ec2 wait instance-terminated with the instance IDs. Once all that is done, you should be able to use aws autoscaling delete-auto-scaling-group.
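A rough sketch of that sequence, reusing the region, tag filter, and ASG name from the question:
ASG_NAME=my-asg-dev-eu-west-3
REGION=eu-west-3
# keep the ASG from replacing the instances we are about to terminate
aws autoscaling suspend-processes --region "$REGION" --auto-scaling-group-name "$ASG_NAME"
IDS=$(aws ec2 describe-instances --region "$REGION" \
    --filters "Name=tag:Name,Values=*-my-dev-eu-west-3" \
    --query "Reservations[].Instances[].InstanceId" --output text)
aws ec2 terminate-instances --region "$REGION" --instance-ids $IDS
# block until EC2 reports every instance as terminated
aws ec2 wait instance-terminated --region "$REGION" --instance-ids $IDS
aws autoscaling delete-auto-scaling-group --region "$REGION" --auto-scaling-group-name "$ASG_NAME"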
aws ec2 terminate-instances will return before the instances have finished terminating (which could take several minutes).
I highly recommend using something like CloudFormation or Terraform for this sort of thing instead of the AWS CLI tool.
You can force-delete the ASG even while it still has active spot instance requests with the AWS CLI:
aws autoscaling delete-auto-scaling-group --auto-scaling-group-name Your-ASG-Name --force-delete
I'm looking for a way to terminate multiple EC2 instances via the AWS CLI. I can terminate a single EC2 instance by executing the command below.
Syntax:
aws ec2 terminate-instances --instance-ids <instance id> --profile <profile name>
Example:
aws ec2 terminate-instances --instance-ids <i-...> --profile xxx
But I have a big list of instances to terminate, so I'm searching for a way to terminate a batch of EC2 instances by providing a list of instance IDs. I tried passing multiple instance IDs as below, but none of these work.
aws ec2 terminate-instances --instance-ids ("instance-id1", "instance-id2") --profile xxx
aws ec2 terminate-instances --instance-ids ("instance-id1instance-id2") --profile xxx
aws ec2 terminate-instances --instance-ids (instance-id1,instance-id2) --profile xxx
Kindly let me know if there is any possibility to terminate a batch of instances.
I was able to achieve this with the command below, as recommended by luk2302:
aws ec2 terminate-instances --instance-ids instance-id1 instance-id2 --profile xxx
Also, as recommended by Alex Bailey, we can use a shell script (.sh) or batch file (.bat), which makes the job easier.
Instead of running all the instance IDs through at once, I would create a loop in a shell script.
Assuming you have each instance ID on a separate line in a text file, you could do something like:
#!/usr/bin/env bash
# read one instance ID per line from instance_ids.txt and terminate each in turn
while read -r ins_id; do
    aws ec2 terminate-instances --instance-ids "$ins_id" --profile <profile name> || echo "error terminating ${ins_id}"
done < instance_ids.txt
That's not tested, and I'm not great with shell scripting, so if you try it, start with just one or two instances and see what happens.
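If you prefer a single API call instead of one call per instance, the whole file can also be expanded into one --instance-ids argument. A sketch assuming the same instance_ids.txt layout (one ID per line):
# pass every ID from the file to a single terminate-instances call
aws ec2 terminate-instances --profile <profile name> \
    --instance-ids $(tr '\n' ' ' < instance_ids.txt)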
aws ec2 describe-instance-status --instance-id "*****"
This CLI command only works for the region where I run it. If I want to check the state of an instance in another region, the instance ID cannot be found:
Error: An error occurred (InvalidInstanceID.NotFound) when calling the DescribeInstanceStatus operation: The instance ID '***********' does not exist
If you wish to run the query in another region, simply specify the region when running the command.
For example, to run in us-west-2:
aws --region us-west-2 ec2 describe-instances --instance-ids <ids>
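If you do not know which region an instance lives in at all, you can loop over every region and check each one. A rough sketch; the instance ID below is a placeholder:
INSTANCE_ID=i-0123456789abcdef0   # placeholder, replace with your ID
for region in $(aws ec2 describe-regions --query "Regions[].RegionName" --output text)
do
    # --include-all-instances also matches stopped instances
    if aws ec2 describe-instance-status --include-all-instances \
        --region "$region" --instance-ids "$INSTANCE_ID" >/dev/null 2>&1
    then
        echo "Found $INSTANCE_ID in $region"
    fi
done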
I frequently need to start AWS EC2 instances from the command line to work with them over SSH, and would like to write a short script to do this, but I'm stuck at the most basic steps.
For example, I can get started with
aws ec2 start-instances --instance-ids i-84Sd8jdf
and would like to continue by grabbing the IP address assigned to the instance and using it as an environment or script variable to perform subsequent operations such as
ssh ubuntu@<theIP>
or
scp ubuntu@<theIP>:~/source_stuff/* ~/dest_folder/
but I can't figure out how to get the IP address from the start-instances command, or from any of the JSON emitted by other commands.
How do I script starting an EC2 instance, waiting for the IP to be assigned, and capturing the assigned IP address for subsequent use?
An example (based on this excellent answer) in bash where the instance is started, the script sleeps for a specified number of seconds, and the public IP address is saved to a shell variable:
aws ec2 start-instances --instance-ids i-aaaa1111
sleep 10
ec2Address=$(aws ec2 describe-instances --instance-ids i-aaaa1111 --query "Reservations[*].Instances[*].PublicIpAddress" --output=text)
To verify:
echo $ec2Address
11.1.111.111
Amazon offers wait commands so you can wait for the instance to be ready before you perform other tasks on it. Here is an example that might help you:
aws ec2 start-instances --instance-ids $instance_id
aws ec2 wait instance-running --instance-ids $instance_id
aws ec2 describe-instances --instance-ids $instance_id --output text|grep ASSOCIATION |awk '{print $3}'|head -1
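Putting the pieces together, a sketch that starts the instance, waits for it, grabs the public IP with a --query filter, and opens the SSH session (the instance ID is a placeholder, and the ubuntu user comes from the question):
instance_id=i-0123456789abcdef0   # placeholder
aws ec2 start-instances --instance-ids "$instance_id"
# wait until the instance is running
aws ec2 wait instance-running --instance-ids "$instance_id"
ip=$(aws ec2 describe-instances --instance-ids "$instance_id" \
    --query "Reservations[0].Instances[0].PublicIpAddress" --output text)
ssh "ubuntu@${ip}"
Note that sshd may still need a few seconds after the instance reaches the running state; aws ec2 wait instance-status-ok is the slower but safer wait if the SSH connection is refused at first.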
I am working on a shell script which does the following:
creates a snapshot of an EBS volume;
creates an AMI image based on this snapshot.
1) I use the following command to create the snapshot:
SNAPSHOT_ID=$(aws ec2 create-snapshot "${DRYRUN}" --volume-id "${ROOT_VOLUME_ID}" --description "${SNAPSHOT_DESCRIPTION}" --query 'SnapshotId')
2) I use a waiter to wait for the completed state:
aws ec2 wait snapshot-completed --snapshot-ids "${SNAPSHOT_ID}"
When I test it with an 8 GB EBS volume, everything goes well.
When it is 40 GB, I get an exception:
Waiter SnapshotCompleted failed: Max attempts exceeded
Probably the 40 GB volume just takes more time than the 8 GB one, and I need to wait longer.
The AWS docs (http://docs.aws.amazon.com/cli/latest/reference/ec2/wait/snapshot-completed.html) don't list any timeout or attempt-count option.
Maybe some of you have faced the same issue?
So, finally, I solved it in the following way:
Create the snapshot
Use a loop to check the exit status of the command aws ec2 wait snapshot-completed
If the exit status is not 0, print the current state and progress and run the waiter again.
# Create snapshot
SNAPSHOT_DESCRIPTION="Snapshot of Primary frontend instance $(date +%Y-%m-%d)"
SNAPSHOT_ID=$(aws ec2 create-snapshot "${DRYRUN}" --volume-id "${ROOT_VOLUME_ID}" --description "${SNAPSHOT_DESCRIPTION}" --query 'SnapshotId' --output text)

# Re-run the waiter until it reports success, printing state and progress on each pass
exit_status=1
while [ "${exit_status}" != "0" ]
do
    SNAPSHOT_STATE="$(aws ec2 describe-snapshots --filters Name=snapshot-id,Values=${SNAPSHOT_ID} --query 'Snapshots[0].State' --output text)"
    SNAPSHOT_PROGRESS="$(aws ec2 describe-snapshots --filters Name=snapshot-id,Values=${SNAPSHOT_ID} --query 'Snapshots[0].Progress' --output text)"
    echo "### Snapshot id ${SNAPSHOT_ID} creation: state is ${SNAPSHOT_STATE}, ${SNAPSHOT_PROGRESS}..."
    aws ec2 wait snapshot-completed --snapshot-ids "${SNAPSHOT_ID}"
    exit_status="$?"
done
If you have something that can improve it, please share with us.
You should probably use until in bash; it looks a bit cleaner and you don't have to repeat yourself.
echo "waiting for snapshot $snapshot"
until aws ec2 wait snapshot-completed --snapshot-ids $snapshot 2>/dev/null
do
do printf "\rsnapshot progress: %s" $progress;
sleep 10
progress=$(aws ec2 describe-snapshots --snapshot-ids $snapshot --query "Snapshots[*].Progress" --output text)
done
aws ec2 wait snapshot-completed takes a while to time out. This snippet uses aws ec2 describe-snapshots to poll the progress; once it reaches 100%, it calls snapshot-completed.
# create snapshot
SNAPSHOTID=$(aws ec2 create-snapshot --volume-id $VOLUMEID --output text --query "SnapshotId")
echo "Waiting for Snapshot ID: $SNAPSHOTID"
SNAPSHOTPROGRESS=$(aws ec2 describe-snapshots --snapshot-ids $SNAPSHOTID --query "Snapshots[*].Progress" --output text)
while [ "$SNAPSHOTPROGRESS" != "100%" ]
do
    sleep 15
    echo "Snapshot ID: $SNAPSHOTID $SNAPSHOTPROGRESS"
    SNAPSHOTPROGRESS=$(aws ec2 describe-snapshots --snapshot-ids $SNAPSHOTID --query "Snapshots[*].Progress" --output text)
done
aws ec2 wait snapshot-completed --snapshot-ids "$SNAPSHOTID"
This is essentially the same thing as above, but prints out a progress message every 15 seconds. Snapshots that are completed return 100% immediately.
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-retries.html
You can set an environment variable or use the config file to increase the retry limits.
As an environment variable:
AWS_MAX_ATTEMPTS=100
Or in ~/.aws/config:
[default]
retry_mode = standard
max_attempts = 6
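For a one-off run, the variable can be exported just before the waiter is invoked. A minimal sketch; whether this actually raises the waiter's attempt limit depends on your CLI version, so verify it against your setup:
export AWS_RETRY_MODE=standard
export AWS_MAX_ATTEMPTS=100
aws ec2 wait snapshot-completed --snapshot-ids "$SNAPSHOT_ID"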
ISSUE: In CI/CD we had a command to wait for an ECS service to become stable and got this error:
aws ecs wait services-stable \
--cluster MyCluster \
--services MyService
ERROR MSG : Waiter ServicesStable failed: Max attempts exceeded
FIX
To fix this issue we followed these docs:
-> https://docs.aws.amazon.com/AmazonECS/latest/bestpracticesguide/load-balancer-healthcheck.html
aws elbv2 modify-target-group --target-group-arn <arn of target group> --healthy-threshold-count 2 --health-check-interval-seconds 5 --health-check-timeout-seconds 4
-> https://docs.aws.amazon.com/AmazonECS/latest/bestpracticesguide/load-balancer-connection-draining.html
aws elbv2 modify-target-group-attributes --target-group-arn <arn of target group> --attributes Key=deregistration_delay.timeout_seconds,Value=10
This fixed the issue.
In case you have more target groups to edit, just output the target group ARNs to a file and run the command in a loop, as sketched below.
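A rough sketch of that loop, assuming a file named target_groups.txt (a hypothetical name) with one target group ARN per line:
# apply the shorter deregistration delay to every target group listed in the file
while read -r tg_arn
do
    aws elbv2 modify-target-group-attributes \
        --target-group-arn "$tg_arn" \
        --attributes Key=deregistration_delay.timeout_seconds,Value=10
done < target_groups.txt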