AWS - ECS: List clusters and their Amazon EC2 instances

Is there a way to list:
- all your ECS clusters
- the EC2 instance(s) comprising each cluster?
The AWS CLI does not seem to support such an option.
I am trying to create an inventory of these resources and I want the above info recorded (each ECS cluster, plus the number and type of its EC2 instances).

Do you have the latest AWS CLI installed, so that the ecs subcommand is available?
To list the available clusters (returns a list of cluster ARNs):
aws ecs list-clusters
To get the container instances of a cluster (returns a list of container instance ARNs in the cluster):
aws ecs list-container-instances --cluster FOOBAR
Finally, to get the EC2 instance ID(s) of the container instance(s):
aws ecs describe-container-instances --cluster FOOBAR --container-instances FOOBAR_CLUSTER_CONTAINER_INSTANCES_ARNS
The last command describes the given container instance(s); the ec2InstanceId field in its output is the underlying EC2 instance ID.
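Putting the three steps together, here is a minimal sketch of the inventory asked for (cluster, EC2 instance ID, instance type); it assumes the default profile/region and adds an aws ec2 describe-instances call to look up each instance's type:
# Hedged sketch: for each cluster, resolve container instances to EC2 IDs,
# then look up the instance types. Fargate-only/empty clusters are skipped.
for cluster in $(aws ecs list-clusters --query 'clusterArns[]' --output text); do
  instance_arns=$(aws ecs list-container-instances --cluster "$cluster" \
    --query 'containerInstanceArns[]' --output text)
  [ -z "$instance_arns" ] && continue
  ec2_ids=$(aws ecs describe-container-instances --cluster "$cluster" \
    --container-instances $instance_arns \
    --query 'containerInstances[].ec2InstanceId' --output text)
  aws ec2 describe-instances --instance-ids $ec2_ids \
    --query 'Reservations[].Instances[].[InstanceId,InstanceType]' --output text |
    sed "s|^|$cluster  |"
done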

Related

Is there a way to get the private IP for an ECS task using aws cli?

So I want to get the private IP for an ECS task using AWS CLI without having to make more than one request. The only solution I found is:
1 - list tasks:
aws ecs list-tasks --cluster $CLUSTER_NAME
2 - Iterate over the ARNs with a describe request:
aws ecs describe-tasks --cluster $CLUSTER_NAME --task $TASK_ID
Is there a way to compose one single request to do that using $CLUSTER_NAME and the name of the task?
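There is no single API call that maps a task name to its private IP, but the two calls can be composed into one shell command. A hedged sketch, assuming an awsvpc-mode task (e.g. Fargate) and filtering list-tasks by a hypothetical $SERVICE_NAME (use --family instead if you identify tasks by task definition family):
# Hedged sketch: still two API calls under the hood, but a single command.
aws ecs describe-tasks --cluster "$CLUSTER_NAME" \
  --tasks $(aws ecs list-tasks --cluster "$CLUSTER_NAME" --service-name "$SERVICE_NAME" \
            --query 'taskArns[]' --output text) \
  --query "tasks[].attachments[].details[?name=='privateIPv4Address'].value" \
  --output text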

How to determine if Fargate is using Spot Instances

Background: I'm running docker-compose ecs locally and need to ensure I use Spot instances due to my hobbyist budget.
Question: How do I determine and guarantee that instances are running as Fargate Spot instances?
Evidence:
- I have set up the default capacity provider strategy as FARGATE_SPOT
- I have both of the default-created capacity providers, 'FARGATE' and 'FARGATE_SPOT'
(screenshots: capacity providers, default strategy)
You can see the capacity provider for a task in the web console:
To find this page, click on your cluster within ECS, then go to the "Tasks" tab and click on the task ID.
You can also see this through the AWS CLI:
aws ecs describe-tasks --cluster <your cluster name> --tasks <your task id> | grep capacityProviderName
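A grep-free variant of the same check, sketched with a --query filter (FARGATE_SPOT in the output confirms the task is running on Spot capacity):
# Hedged sketch: print only the task's capacity provider, e.g. FARGATE or FARGATE_SPOT.
aws ecs describe-tasks --cluster <your cluster name> --tasks <your task id> \
  --query 'tasks[].capacityProviderName' --output text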

How to delete autoscaling groups with aws cli?

I am trying to write a bash script that will delete my EC2 instances and the auto scaling group that launched them:
EC2s=$(aws ec2 describe-instances --region=eu-west-3 \
--filters "Name=tag:Name,Values=*-my-dev-eu-west-3" \
--query "Reservations[].Instances[].InstanceId" \
--output text)
for id in $EC2s
do
aws ec2 terminate-instances --region=eu-west-3 --instance-ids $id
done
aws autoscaling delete-auto-scaling-group --region eu-west-3 \
--auto-scaling-group-name my-asg-dev-eu-west-3
But it fails with this error:
An error occurred (ResourceInUse) when calling the DeleteAutoScalingGroup operation:
You cannot delete an AutoScalingGroup while there are instances or pending Spot
instance request(s) still in the group.
There is no issue if I use the AWS console to do the same thing. Why does the AWS CLI prevent me from deleting the ASG if I have terminated all the instances?
If you really want to do this with the CLI, first use the aws autoscaling suspend-processes command to prevent the ASG from launching replacement instances. Then terminate the instances with aws ec2 terminate-instances, as you are doing, and wait for them with aws ec2 wait instance-terminated, passing the instance IDs. Once all of that is done, you should be able to use aws autoscaling delete-auto-scaling-group (see the sketch below).
aws ec2 terminate-instances will return before the instances have finished terminating (which could take several minutes).
I highly recommend using something like CloudFormation or Terraform for this sort of thing instead of the AWS CLI tool.
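A minimal sketch of that sequence, reusing the region, tag filter, and ASG name from the question:
# Hedged sketch: stop the ASG from replacing instances, terminate them,
# wait for termination to finish, then delete the group.
aws autoscaling suspend-processes --region eu-west-3 \
  --auto-scaling-group-name my-asg-dev-eu-west-3
EC2s=$(aws ec2 describe-instances --region eu-west-3 \
  --filters "Name=tag:Name,Values=*-my-dev-eu-west-3" \
  --query "Reservations[].Instances[].InstanceId" --output text)
aws ec2 terminate-instances --region eu-west-3 --instance-ids $EC2s
aws ec2 wait instance-terminated --region eu-west-3 --instance-ids $EC2s
aws autoscaling delete-auto-scaling-group --region eu-west-3 \
  --auto-scaling-group-name my-asg-dev-eu-west-3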
Alternatively, you can force delete the ASG, even with active spot instance requests, using the AWS CLI:
aws autoscaling delete-auto-scaling-group --auto-scaling-group-name Your-ASG-Name --force-delete

ecs-cli refers to old cluster after changing default profile; doesn't show EC2 instances

I've been using AWS's ECS CLI to spin clusters of EC2 instances up and down for various tasks. The problem I'm running into is that it seems to be referring to old information that I don't know how to change.
e.g., I just created a cluster, my-second-cluster, successfully and can see it in the AWS console:
$ ecs-cli up --keypair "my-keypair" --capability-iam --size 4 --instance-type t2.micro --port 22 --cluster-config my-second-cluster --ecs-profile a-second-profile
INFO[0001] Using recommended Amazon Linux 2 AMI with ECS Agent 1.45.0 and Docker version 19.03.6-ce
INFO[0001] Created cluster cluster=my-second-cluster region=us-east-1
INFO[0002] Waiting for your cluster resources to be created...
INFO[0002] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
INFO[0063] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
INFO[0124] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
VPC created: vpc-123abc
Security Group created: sg-123abc
Subnet created: subnet-123abc
Subnet created: subnet-123def
Cluster creation succeeded.
...but ecs-cli ps returns an error referring to an old cluster:
$ ecs-cli ps
FATA[0000] Error executing 'ps': Cluster 'my-first-cluster' is not active. Ensure that it exists
Specifying the cluster explicitly (ecs-cli ps --cluster my-second-cluster --region us-east-1) returns nothing, even though I see the 4 EC2 instances when I log into the AWS console.
Supporting details:
Before creating this second cluster, I created a second profile and set it to the default. I also set the new cluster to be the default.
$ ecs-cli configure profile --access-key <MY_ACCESS_KEY> --secret-key <MY_SECRET_KEY> --profile-name a-second-profile
$ ecs-cli configure profile default --profile-name a-second-profile
$ ecs-cli configure --cluster my-second-cluster --region us-east-1
INFO[0000] Saved ECS CLI cluster configuration default.
It's unclear to me where these ECS profile and cluster configs are stored (I'd expect to see them as files in ~/.aws, but no), or how to manipulate them beyond the CLI commands, which don't give great feedback. Any ideas on what I'm missing?
The ECS CLI stores its credentials at ~/.ecs/credentials.
When you create the initial profile, its name is default and it is used by default. When you set a-second-profile as the default, the metadata is updated to use a-second-profile by default, but you still have a profile named default that points to the original credentials.
My guess is that, to see the first cluster, you now need to specify a profile name, since you changed the default. If you didn't give your initial profile a name, it will be default.
ecs-cli ps --ecs-profile default
If you deleted your cluster configuration you may need to add the cluster again and associate to the right profile:
ecs-cli configure --cluster cluster_name --default-launch-type launch_type --region region_name --config-name configuration_name
I hope that makes sense. Hopefully looking at how your commands update ~/.ecs/credentials will be helpful.
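As a quick sanity check, a sketch of how to inspect what the ECS CLI has stored and query the new cluster without relying on any defaults (the ~/.ecs/config location for cluster configurations is my assumption, alongside the ~/.ecs/credentials file mentioned above):
cat ~/.ecs/credentials   # ECS CLI credential profiles
cat ~/.ecs/config        # cluster configurations (assumed: name, region, launch type)
ecs-cli ps --cluster my-second-cluster --region us-east-1 --ecs-profile a-second-profile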
Some resources:
ECS CLI Configurations

Is it possible to create an Auto Scaling Group launch config with the CLI and define the instance tags in one command?

Is it possible to create an Auto Scaling Group launch config with the CLI and define the instance tags in one command?
Maybe I am missing something, but right now it looks like I have to do it in two steps.
i.e.
aws autoscaling create-launch-configuration ...
and then
aws autoscaling create-or-update-tags --tags ...
Since you need to have the ASG launch configuration created first before you can tag it, it is a two-step process, as you mentioned.
https://docs.aws.amazon.com/cli/latest/reference/autoscaling/create-launch-configuration.html
This example creates a launch configuration based on an existing instance. It also specifies launch configuration attributes such as a security group, tenancy, Amazon EBS optimization, and a bootstrapping script:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-tagging.html
aws autoscaling create-launch-configuration --launch-configuration-name my-launch-config --key-name my-key-pair --instance-id i-7e13c876 --security-groups sg-eb2af88e --instance-type m1.small --user-data file://myuserdata.txt --instance-monitoring Enabled=true --no-ebs-optimized --no-associate-public-ip-address --placement-tenancy dedicated --iam-instance-profile my-autoscaling-role
aws autoscaling create-or-update-tags --tags "ResourceId=my-asg,ResourceType=auto-scaling-group,Key=environment,Value=test,PropagateAtLaunch=true"
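For reference, a minimal sketch chaining the two documented steps in one script; the launch configuration name, AMI ID, instance type, and ASG name are placeholders, and PropagateAtLaunch=true is what gets the tag applied to instances the group launches afterwards:
# Hedged sketch: step 1 creates the launch configuration,
# step 2 tags the (already existing) Auto Scaling group that will use it.
aws autoscaling create-launch-configuration \
  --launch-configuration-name my-launch-config \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro
aws autoscaling create-or-update-tags \
  --tags "ResourceId=my-asg,ResourceType=auto-scaling-group,Key=environment,Value=test,PropagateAtLaunch=true"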