I'm trying to associate an Elastic IP address with an Auto Scaling group, so that whenever Auto Scaling launches an instance it automatically associates with an EIP.
For this I'm trying to add a script in the user data.
My intention: we have 2 servers, so they are associated with 2 EIPs. Whenever Auto Scaling triggers, the new instance has to check whether an EIP is free, and if it is free, associate it with itself using the instance ID.
Below is my script, where I'm getting the error.
For EIP_LIST I'm getting this error: Extra characters after interpolation expression; Expected a closing brace to end the interpolation expression, but found extra characters.
INSTANCE_ID=$(ec2-metadata --instance-id | cut -d " " -f 2);
MAXWAIT=10
# Get list of EIPs
EIP_LIST=${"eipalloc-09e7274dd3c641ae6" "eipalloc-05e8e926926f9de55"}
# Iterate over EIP list
for EIP in ${EIP_LIST}; do
echo "Checking if EIP with ALLOC_ID[$EIP] is free...."
ISFREE=$(aws ec2 describe-addresses --allocation-ids $EIP --query Addresses[].InstanceId --output text --region ap-south-1)
STARTWAIT=$(date +%s)
while [ ! -z "$ISFREE" ]; do
if [ "$(($(date +%s) - $STARTWAIT))" -gt $MAXWAIT ]; then
echo "WARNING: We waited 30 seconds, we're forcing it now."
ISFREE=""
else
echo "Waiting for EIP with ALLOC_ID[$EIP] to become free...."
sleep 3
ISFREE=$(aws ec2 describe-addresses --allocation-ids $EIP --query Addresses[].InstanceId --output text --region ap-south-1)
fi
done
echo Running: aws ec2 associate-address --instance-id $INSTANCE_ID --allocation-id $EIP --allow-reassociation --region ap-south-1
aws ec2 associate-address --instance-id $INSTANCE_ID --allocation-id $EIP --allow-reassociation --region ap-south-1
$ is the Terraform (TF) symbol used for interpolation of variables. It clashes with the same symbol used in bash. You have to escape it using $$ if you want to use $ in bash rather than in TF, e.g.
$${EIP_LIST}
will result in ${EIP_LIST} in your script.
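For example, here is a minimal sketch of the same loop with the clash removed (it also replaces the invalid brace assignment on the EIP_LIST line with a plain space-separated string and simply takes the first free EIP; only ${ sequences need the $$ escape, while a bare $VAR or $( ) passes through Terraform untouched):
#!/bin/bash
INSTANCE_ID=$(ec2-metadata --instance-id | cut -d " " -f 2)

# Plain space-separated list; no brace syntax needed
EIP_LIST="eipalloc-09e7274dd3c641ae6 eipalloc-05e8e926926f9de55"

for EIP in $${EIP_LIST}; do
  echo "Checking if EIP with ALLOC_ID[$EIP] is free...."
  ISFREE=$(aws ec2 describe-addresses --allocation-ids "$EIP" \
    --query 'Addresses[].InstanceId' --output text --region ap-south-1)
  if [ -z "$ISFREE" ]; then
    aws ec2 associate-address --instance-id "$INSTANCE_ID" \
      --allocation-id "$EIP" --allow-reassociation --region ap-south-1
    break
  fi
done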
I am trying to create a Fargate cluster with ecs-cli using a load balancer
So far I came up with a script to deploy it without one; my script is:
building image
pushing it to ECR
echo ""
echo "creating task execution role"
aws iam wait role-exists --role-name $task_execution_role 2>/dev/null || \
aws iam --region $REGION create-role --role-name $task_execution_role \
    --assume-role-policy-document file://task-execution-assume-role.json || return 1
echo ""
echo "adding AmazonECSTaskExecutionRole Policy"
aws iam --region $REGION attach-role-policy --role-name $task_execution_role \
--policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy || return 1
echo ""
echo "creating task role"
aws iam wait role-exists --role-name $task_role 2>/dev/null || \
aws iam --region $REGION create-role --role-name $task_role \
--assume-role-policy-document file://task-role.json
echo ""
echo "adding AmazonS3ReadOnlyAccess Policy"
aws iam --region $REGION attach-role-policy --role-name $task_role \
--policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess || return 1
echo ""
echo "configuring cluster"
ecs-cli configure --cluster $CLUSTER --default-launch-type FARGATE --config-name $CLUSTER --region $REGION || return 1
ecs-cli down --force --cluster-config $CLUSTER --ecs-profile $profile_name || return 1
ecs-cli up --force --cluster-config $CLUSTER --ecs-profile $profile_name || return 1
echo ""
echo "adding ingress rules to security groups"
aws ec2 authorize-security-group-ingress --group-id $SGid --protocol tcp \
--port 80 --cidr 0.0.0.0/0 --region $REGION || return
ecs-cli compose --project-name $SERVICE_NAME service up --create-log-groups \
--cluster-config $CLUSTER --ecs-profile $profile_name
ecs-cli compose --project-name $SERVICE_NAME service ps \
--cluster-config $CLUSTER --ecs-profile $profile_name
aws ec2 describe-instances --query 'Reservations[].Instances[].[InstanceId,InstanceType,PublicIpAddress,Tags[?Key==`Name`]| [0].Value]' --output table
This works: the service is up and I can access it from the public IP.
I would now like to add a load balancer so I can expose a DNS name with Route 53.
Following a few other questions' advice (this one in particular), I came up with this:
echo ""
echo "configuring cluster"
ecs-cli compose --project-name $CLUSTER create
ecs-cli configure --cluster $CLUSTER --default-launch-type FARGATE --config-name $CLUSTER --region $REGION
echo ""
echo "creating a new AWS CloudFormation stack called amazon-ecs-cli-setup-"$CLUSTER
ecs-cli up --force --cluster-config $CLUSTER --ecs-profile $profile_name
echo "create elb & add a dns CNAME for the elb dns"
aws elb create-load-balancer --load-balancer-name $SERVICE_NAME --listeners Protocol="TCP,LoadBalancerPort=8080,InstanceProtocol=TCP,InstancePort=80" --subnets $subnet1 $subnet2 --security-groups $SGid --scheme internal
echo "create service with above created task definition & elb"
aws ecs create-service \
--cluster $CLUSTER \
--service-name ecs-simple-service-elb \
--cli-input-json file://ecs-simple-service-elb.json
ecs-cli compose --project-name $SERVICE_NAME service up --create-log-groups \
--cluster-config $CLUSTER --ecs-profile $profile_name
echo ""
echo "here are the containers that are running in the service"
ecs-cli compose --project-name $SERVICE_NAME service ps --cluster-config $CLUSTER --ecs-profile $profile_name
and I get the following error messages:
create elb & add a dns CNAME for the elb dns
An error occurred (InvalidParameterException) when calling the CreateService operation: Unable to assume role and validate the listeners configured on your load balancer. Please verify that the ECS service role being passed has the proper permissions.
INFO[0002] Using ECS task definition TaskDefinition="dashboard:4"
WARN[0003] Failed to create log group dashboard-ecs in us-east-1: The specified log group already exists
INFO[0003] Auto-enabling ECS Managed Tags
ERRO[0003] Error creating service error="InvalidParameterException: subnet cannot be blank." service=dashboard
INFO[0003] Created an ECS service service=dashboard taskDefinition="dashboard:4"
FATA[0003] InvalidParameterException: subnet cannot be blank.
here are the containers that are running in the service
Name State Ports TaskDefinition Health
dashboard/4d0ebb65b20e4010b93cb99fb5b9e21d/web STOPPED ExitCode: 137 80->80/tcp dashboard:4 UNKNOWN
My task execution role and task role have this trust policy (assume role policy document) attached:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
and the JSON I pass to create service is (copied from the documentation):
{
  "serviceName": "dashboard",
  "taskDefinition": "dashboard",
  "loadBalancers": [
    {
      "loadBalancerName": "dashboard",
      "containerName": "dashboard",
      "containerPort": 80
    }
  ],
  "desiredCount": 10,
  "role": "ecsTaskExecutionRole"
}
What permissions am I missing, and what should I change?
IIRC, your ECS service role should have the AmazonEC2ContainerServiceRole policy attached so that it can access your ELB and validate the listeners.
See here - https://aws.amazon.com/premiumsupport/knowledge-center/assume-role-validate-listeners/
and here - https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_managed_policies.html#AmazonEC2ContainerServiceRole
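A rough sketch of that with the CLI (the role name ecsServiceRole is an assumption): create a service role that the ECS service itself can assume (principal ecs.amazonaws.com, not ecs-tasks.amazonaws.com), attach the managed policy, and reference it in the create-service JSON instead of ecsTaskExecutionRole.
# Service role trusted by the ECS service (not by the tasks)
aws iam create-role \
  --role-name ecsServiceRole \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "ecs.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'

# Managed policy that grants the ELB/listener permissions
aws iam attach-role-policy \
  --role-name ecsServiceRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole

# Then, in ecs-simple-service-elb.json:
#   "role": "ecsServiceRole"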
I have been struggling to get basic metrics from the CloudWatch agent. I've been getting this error, and I have no idea what it means, nor can I find resources online that say much about it:
refresh EC2 Instance Tags failed: SharedCredsLoad: failed to get profile, metrics will be dropped until it got fixed
I followed the instructions here and have read through the documentation carefully. Again, the goal is just to read in some basic metrics from my EC2 instance to CloudWatch. Here are the steps I have followed:
Followed instructions here "To create the IAM role necessary to run the CloudWatch agent on EC2 instances" and then assigned it to my instance.
wget https://s3.amazonaws.com/amazoncloudwatch-agent/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
ami id is ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20190628 (ami-0cfee17793b08a293)
Installed the .deb with sudo dpkg --install --skip-same-version ./amazon-cloudwatch-agent.deb
note: --install and --skip-same-version are just the long forms of -i -E used in the docs
generated a config.json with the wizard, located here /opt/aws/amazon-cloudwatch-agent/bin/config.json. I pasted the contents under the error message below.
modified the /opt/aws/amazon-cloudwatch-agent/etc/common-config.toml file to point to the cwagent user's credentials (since the agent is not running as root):
root@ip-172-31-71-5:/opt/aws/amazon-cloudwatch-agent/etc# tail -n 4 common-config.toml
#### BEGIN ANSIBLE MANAGED BLOCK ####
[credentials]
shared_credential_file = "/home/cwagent/.aws/credentials"
#### END ANSIBLE MANAGED BLOCK ####
fetched the config and started the agent with sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s
here's the error I'm seeing in the logs now, and I'm assuming this is why I can't see any metrics:
root@ip-172-31-71-5:/opt/aws/amazon-cloudwatch-agent/logs# tail -n 20 amazon-cloudwatch-agent.log
2019/10/29 22:41:08 Reading json config file path: /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json ...
2019/10/29 22:41:08 Reading json config file path: /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/file_config.json ...
2019/10/29 22:41:08 I! Detected runAsUser: cwagent
2019/10/29 22:41:08 I! Change ownership to cwagent:cwagent
2019/10/29 22:41:08 I! Set HOME: /home/cwagent
2019-10-29T22:41:08Z I! will use file based credentials provider
2019-10-29T22:41:08Z I! cloudwatch: get unique roll up list []
2019-10-29T22:41:08Z I! Starting AmazonCloudWatchAgent (version 1.230621.0)
2019-10-29T22:41:08Z I! Loaded outputs: cloudwatch
2019-10-29T22:41:08Z I! cloudwatch: publish with ForceFlushInterval: 1m0s, Publish Jitter: 37s
2019-10-29T22:41:08Z I! Loaded inputs: disk mem
2019-10-29T22:41:08Z I! Tags enabled: host=ip-172-31-71-5
2019-10-29T22:41:08Z I! Agent Config: Interval:10s, Quiet:false, Hostname:"ip-172-31-71-5", Flush Interval:1s
2019-10-29T22:41:08Z I! will use file based credentials provider
2019-10-29T22:41:08Z E! refresh EC2 Instance Tags failed: SharedCredsLoad: failed to get profile, metrics will be dropped until it got fixed
2019-10-29T22:42:37Z E! refresh EC2 Instance Tags failed: SharedCredsLoad: failed to get profile, metrics will be dropped until it got fixed
2019-10-29T22:43:37Z E! refresh EC2 Instance Tags failed: SharedCredsLoad: failed to get profile, metrics will be dropped until it got fixed
2019-10-29T22:46:37Z E! refresh EC2 Instance Tags failed: SharedCredsLoad: failed to get profile, metrics will be dropped until it got fixed
2019-10-29T22:49:37Z E! refresh EC2 Instance Tags failed: SharedCredsLoad: failed to get profile, metrics will be dropped until it got fixed
2019-10-29T22:52:37Z E! refresh EC2 Instance Tags failed: SharedCredsLoad: failed to get profile, metrics will be dropped until it got fixed
and the config.json I used
root@ip-172-31-71-5:/opt/aws/amazon-cloudwatch-agent/bin# cat config.json
{
  "agent": {
    "metrics_collection_interval": 10,
    "run_as_user": "cwagent"
  },
  "metrics": {
    "namespace": "TestNamespace",
    "append_dimensions": {
      "AutoScalingGroupName": "${aws:AutoScalingGroupName}",
      "ImageId": "${aws:ImageId}",
      "InstanceId": "${aws:InstanceId}",
      "InstanceType": "${aws:InstanceType}"
    },
    "metrics_collected": {
      "disk": {
        "measurement": [
          "used_percent"
        ],
        "metrics_collection_interval": 60,
        "resources": [
          "*"
        ]
      },
      "mem": {
        "measurement": [
          "mem_used_percent"
        ],
        "metrics_collection_interval": 60
      }
    }
  }
}
EDITS
I got it working after I removed the credentials modification
root@ip-172-31-71-5:/opt/aws/amazon-cloudwatch-agent/etc# tail -n 4 common-config.toml
#### BEGIN ANSIBLE MANAGED BLOCK ####
#[credentials]
#shared_credential_file = "/home/cwagent/.aws/credentials"
#### END ANSIBLE MANAGED BLOCK ####
and after I went ahead and copied the config file to the default location it checks (even though the docs say you can pass the file name as I did).
root@ip-172-31-71-5:/opt/aws/amazon-cloudwatch-agent/bin# cp config.json /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
root@ip-172-31-71-5:/opt/aws/amazon-cloudwatch-agent/bin# cd ../etc/
root@ip-172-31-71-5:/opt/aws/amazon-cloudwatch-agent/etc# chown cwagent:cwagent amazon-cloudwatch-agent.json
root@ip-172-31-71-5:/opt/aws/amazon-cloudwatch-agent/etc# ls -l
total 16
drwxr-xr-x 2 cwagent cwagent 4096 Oct 30 22:05 amazon-cloudwatch-agent.d
-rwxr-xr-x 1 cwagent cwagent 611 Oct 30 22:11 amazon-cloudwatch-agent.json
-rw-rw-r-- 1 cwagent cwagent 1144 Oct 30 22:05 amazon-cloudwatch-agent.toml
-rw-r--r-- 1 cwagent cwagent 1073 Oct 30 22:05 common-config.toml
The error appears to be related to accessing tags that are associated with Amazon EC2 instances.
The installation instructions you linked suggest creating an IAM Role with the CloudWatchAgentServerPolicy policy attached. This policy includes permission to describe tags:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "ec2:DescribeVolumes",
        "ec2:DescribeTags",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams",
        "logs:DescribeLogGroups",
        "logs:CreateLogStream",
        "logs:CreateLogGroup"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetParameter"
      ],
      "Resource": "arn:aws:ssm:*:*:parameter/AmazonCloudWatch-*"
    }
  ]
}
It would appear that the CloudWatch Agent on that server is not receiving such permissions, and is therefore unable to list the tags.
Therefore:
Confirm that an IAM Role has been created and that it includes the CloudWatchAgentServerPolicy policy
Confirm that this IAM Role has been assigned to the Amazon EC2 instance that is running the CloudWatch Agent
If it is still failing, check whether there are any credentials stored locally on the instance that the Agent could be using instead of the IAM Role assigned to the instance
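As a quick CLI check of the first two points (the instance ID and role name below are placeholders, not values taken from the question):
# Which instance profile, if any, is attached to the instance?
aws ec2 describe-iam-instance-profile-associations \
  --filters Name=instance-id,Values=i-0123456789abcdef0

# Does the role behind that profile carry CloudWatchAgentServerPolicy?
aws iam list-attached-role-policies --role-name CloudWatchAgentServerRole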
I am using Amazon EC2 Auto Scaling in my environment. Whenever Auto Scaling launches a new instance, I need to change the IP manually in Route 53. I want to automate this process.
I tried using Lifecycle Hooks but didn't see any way to update Route 53 with them.
You can use this User Data script to update the Route 53 record when an instance is launched.
It gathers required information from Instance Metadata.
#!/bin/bash
INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
PRIVATE_IP=$(curl http://169.254.169.254/latest/meta-data/local-ipv4)
DOMAIN_NAME=$(aws route53 get-hosted-zone --id "${HOSTED_ZONE_ID}" --query 'HostedZone.Name' --output text | sed 's/.$//')
hostnamectl set-hostname hostname."${DOMAIN_NAME}"
aws route53 change-resource-record-sets --hosted-zone-id "${HOSTED_ZONE_ID}" --change-batch '{"Changes": [{"Action": "UPSERT","ResourceRecordSet": {"Name": "'"$(hostname)"'","Type": "A","TTL": 60,"ResourceRecords": [{"Value": "'"${PRIVATE_IP}"'"}]}}]}'
You would need to insert a value for the ${HOSTED_ZONE_ID} to identify the record to update.
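If you only have the domain name handy, a small lookup like this (example.com is a placeholder) can resolve the hosted zone ID for you:
# Resolve the hosted zone ID from the domain name
HOSTED_ZONE_ID=$(aws route53 list-hosted-zones-by-name \
  --dns-name example.com \
  --query 'HostedZones[0].Id' \
  --output text | cut -d / -f 3)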
EDIT:
If you have multiple instances provisioned by the ASG, you can develop a script to name each new host, i.e.:
Use the ASG to apply a "role" tag to every instance it provisions, e.g. role=webserver
In the user data script, list all instances with the tag role=webserver
Check the returned instances for a "node" tag
If there are none, this instance is node1 -> hostname webserver-node1.${DOMAIN_NAME}
If there are some, parse the "node" tags to check which instance was replaced by the ASG. E.g. the result can contain node=node1, node=node3, node=node4; node2 is missing, so this instance is node2 -> hostname webserver-node2.${DOMAIN_NAME}
At the end of the user data, add the "node" tag to this instance so the next one is not named like an existing one (a rough sketch follows below).
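A rough sketch of that numbering logic (the tag keys "role" and "node", the value "webserver", and a DOMAIN_NAME variable resolved as in the script above are all assumptions):
#!/bin/bash
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# Node numbers already taken by other webserver instances
# (add --region if no default region is configured)
USED=$(aws ec2 describe-instances \
  --filters "Name=tag:role,Values=webserver" \
  --query 'Reservations[].Instances[].Tags[?Key==`node`].Value' \
  --output text)

# Pick the lowest free slot
N=1
while echo "$USED" | grep -qw "node$N"; do
  N=$((N + 1))
done

hostnamectl set-hostname "webserver-node$N.${DOMAIN_NAME}"

# Record the slot so the next instance does not reuse it
aws ec2 create-tags --resources "$INSTANCE_ID" --tags "Key=node,Value=node$N"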
#!/bin/bash
INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
PRIVATE_IP=$(curl http://169.254.169.254/latest/meta-data/local-ipv4)
DOMAIN_NAME=$(aws route53 get-hosted-zone --id "<Hosted Zone ID >" --query 'HostedZone.Name' --output text | sed 's/.$//')
hostnamectl set-hostname hostname."${DOMAIN_NAME}"
CN=`echo $PRIVATE_IP | cut -d . -f 3`
echo $CN
a=5
if [ "$CN" == "$a" ]
then
aws route53 change-resource-record-sets --hosted-zone-id "<Hosted Zone ID >" --change-batch '{"Changes": [{"Action": "UPSERT","ResourceRecordSet": {"Name": "'"Dns Name"'","Type": "A","TTL": 60,"ResourceRecords": [{"Value": "'"${PRIVATE_IP}"'"}]}}]}'
else
aws route53 change-resource-record-sets --hosted-zone-id "<Hosted Zone ID >" --change-batch '{"Changes": [{"Action": "UPSERT","ResourceRecordSet": {"Name": "'"< Dns Name>"'","Type": "A","TTL": 60,"ResourceRecords": [{"Value": "'"${PRIVATE_IP}"'"}]}}]}'
fi
I switch instances between different regions frequently, and sometimes I forget to turn off a running instance in another region. I couldn't find any way to see all the running instances in the Amazon console.
Is there any way to display all the running instances regardless of region?
Nov 2021 Edit: AWS has recently launched the Amazon EC2 Global View with initial support for Instances, VPCs, Subnets, Security Groups and Volumes.
See the announcement or documentation for more details
A non-obvious GUI option is the Tag Editor in the Resource Groups console. Here you can find all instances across all regions, even if the instances were not tagged.
I don't think you can currently do this in the AWS GUI. But here is a way to list all your instances across all regions with the AWS CLI:
for region in `aws ec2 describe-regions --region us-east-1 --output text | cut -f4`
do
echo -e "\nListing Instances in region:'$region'..."
aws ec2 describe-instances --region $region
done
Taken from here (if you want to see the full discussion).
Also, if you're getting a
You must specify a region. You can also configure your region by running "aws configure"
error, you can set a default region with aws configure set region us-east-1 (thanks @Sabuncu for the comment).
Update
Now (in 2019) the cut command should be applied on the 4th field: cut -f4
In Console
Go to VPC dashboard https://console.aws.amazon.com/vpc/home and click on Running instances -> See all regions.
In CLI
Add this, for example, to your .bashrc, reload it with source ~/.bashrc, and run it.
Note: besides the aws CLI, you need to have jq installed.
function aws.print-all-instances() {
  REGIONS=`aws ec2 describe-regions --region us-east-1 --output text --query Regions[*].[RegionName]`
  for REGION in $REGIONS
  do
    echo -e "\nInstances in '$REGION'..";
    aws ec2 describe-instances --region $REGION | \
      jq '.Reservations[].Instances[] | "EC2: \(.InstanceId): \(.State.Name)"'
  done
}
Example output:
$ aws.print-all-instances
Listing Instances in region: 'eu-north-1'..
"EC2: i-0548d1de00c39f923: terminated"
"EC2: i-0fadd093234a1c21d: running"
Listing Instances in region: 'ap-south-1'..
Listing Instances in region: 'eu-west-3'..
Listing Instances in region: 'eu-west-2'..
Listing Instances in region: 'eu-west-1'..
Listing Instances in region: 'ap-northeast-2'..
Listing Instances in region: 'ap-northeast-1'..
Listing Instances in region: 'sa-east-1'..
Listing Instances in region: 'ca-central-1'..
Listing Instances in region: 'ap-southeast-1'..
Listing Instances in region: 'ap-southeast-2'..
Listing Instances in region: 'eu-central-1'..
Listing Instances in region: 'us-east-1'..
Listing Instances in region: 'us-east-2'..
Listing Instances in region: 'us-west-1'..
Listing Instances in region: 'us-west-2'..
From VPC Dashboard:
First go to VPC Dashboard
Then find Running instances and expand See all regions. Here you can find all the running instances across all regions:
From EC2 Global view:
You can also use AWS EC2 Global View to see the Resource summary and Resource counts per Region.
@imTachu's solution works well. To do this via the AWS console...
AWS console
Services
Networking & Content Delivery
VPC
Look for a block named "Running Instances", this will show you the current region
Click the "See all regions" link underneath
Every time you create a resource, tag it with a name and now you can use Resource Groups to find all types of resources with a name tag across all regions.
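If you prefer the CLI for that approach, a rough per-region sketch with the Resource Groups Tagging API (the API itself is regional, hence the loop) could look like this:
# List the ARNs of every EC2 instance that carries a Name tag, region by region
for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
  echo "Region: $region"
  aws resourcegroupstaggingapi get-resources \
    --region "$region" \
    --resource-type-filters ec2:instance \
    --tag-filters Key=Name \
    --query 'ResourceTagMappingList[].ResourceARN' \
    --output text
done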
After reading through all the solutions and trying a bunch of stuff, the one that worked for me was:
Go to Resource Groups
Tag Editor
Select All Regions
Select EC2 Instance in resource type
Click Search Resources
Based on @imTachu's answer, but less verbose and faster. You need to have jq and the aws CLI installed.
set +m
for region in $(aws ec2 describe-regions --query "Regions[*].[RegionName]" --output text); do
aws ec2 describe-instances --region "$region" | jq ".Reservations[].Instances[] | {type: .InstanceType, state: .State.Name, tags: .Tags, zone: .Placement.AvailabilityZone}" &
done; wait; set -m
The script runs the aws ec2 describe-instances in parallel for each region (now 15!) and extracts only the relevant bits (state, tags, availability zone) from the json output. The set +m is needed so the background processes don't report when starting/ending.
Example output:
{
  "type": "t2.micro",
  "state": "stopped",
  "tags": [
    {
      "Key": "Name",
      "Value": "MyEc2WebServer"
    }
  ],
  "zone": "eu-central-1b"
}
You can run DescribeInstances() across all regions.
Additionally, you can:
Automate it through Lambda and CloudWatch.
Create an API endpoint using Lambda and API Gateway and use it in your code.
A sample in Node.js:
Create an array of regions (endpoints). [You can also use AWS describeRegions().]
var AWS = require('aws-sdk');

var regionNames = ['us-west-1', 'us-west-2', 'us-east-1', 'eu-west-1', 'eu-central-1', 'sa-east-1', 'ap-southeast-1', 'ap-southeast-2', 'ap-northeast-1', 'ap-northeast-2'];

regionNames.forEach(function(region) {
    getInstances(region);
});
Then, in the getInstances function, DescribeInstances() can be called:
function getInstances(region) {
    // Create an EC2 client scoped to the given region
    var EC2 = new AWS.EC2({ region: region });
    EC2.describeInstances({}, function(err, data) {
        if (err) return console.log("Error connecting to AWS, No Such Instance Found!");
        data.Reservations.forEach(function(reservation) {
            //do any operation intended
        });
    });
}
And of course, feel free to use ES6 and above.
I wrote a Lambda function to get all the instances in any state [running, stopped] from all regions; it also gives details about instance type and various other parameters.
The script runs across all AWS regions and calls DescribeInstances() to get the instances.
You just need to create a Lambda function with the Node.js runtime.
You can even create an API out of it and use it as and when required.
Additionally, you can check the official AWS docs for DescribeInstances to explore many more options.
A quick bash one-liner to print all the instance IDs in all regions:
$ aws ec2 describe-regions --query "Regions[].{Name:RegionName}" --output text |xargs -I {} aws ec2 describe-instances --query Reservations[*].Instances[*].[InstanceId] --output text --region {}
# Example output
i-012344b918d75abcd
i-0156780dad25fefgh
i-0490122cfee84ijkl
...
My script is below, based on various tips from this post and elsewhere. The script is easier to follow (for me at least) than the long command lines.
The script assumes credential profile(s) are stored in file ~/.aws/credentials looking something like:
[default]
aws_access_key_id = foobar
aws_secret_access_key = foobar
[work]
aws_access_key_id = foobar
aws_secret_access_key = foobar
Script:
#!/usr/bin/env bash
#------------------------------------#
# Script to display AWS EC2 machines #
#------------------------------------#
# NOTES:
# o Requires 'awscli' tools (for ex. on MacOS: $ brew install awscli)
# o AWS output is tabbed - we convert to spaces via 'column' command
#~~~~~~~~~~~~~~~~~~~~#
# Assemble variables #
#~~~~~~~~~~~~~~~~~~~~#
regions=$(aws ec2 describe-regions --output text | cut -f4 | sort)
query_mach='Reservations[].Instances[]'
query_flds='PrivateIpAddress,InstanceId,InstanceType'
query_tags='Tags[?Key==`Name`].Value[]'
query_full="$query_mach.[$query_flds,$query_tags]"
#~~~~~~~~~~~~~~~~~~~~~~~~#
# Output AWS information #
#~~~~~~~~~~~~~~~~~~~~~~~~#
# Iterate through credentials profiles
for profile in 'default' 'work'; do
# Print profile header
echo -e "\n"
echo -e "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
echo -e "Credentials profile:'$profile'..."
echo -e "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
# Iterate through all regions
for region in $regions; do
# Print region header
echo -e "\n"
echo -e "Region: $region..."
echo -e "--------------------------------------------------------------"
# Output items for the region
aws ec2 describe-instances \
--profile $profile \
--region $region \
--query $query_full \
--output text \
| sed 's/None$/None\n/' \
| sed '$!N;s/\n/ /' \
| column -t -s $'\t'
done
done
AWS has recently launched the Amazon EC2 Global View with initial support for Instances, VPCs, Subnets, Security Groups, and Volumes.
To see all running instances, go to the EC2 or VPC console and click EC2 Global View in the top left corner.
Then click on the Global Search tab, filter by Resource type, and select Instance. Unfortunately, this will show instances in all states:
pending
running
stopping
stopped
shutting-down
terminated
I created an open-source script that helps you to list all AWS instances. https://github.com/Appnroll/aws-ec2-instances
Here is the part of the script that lists the instances for one profile, recording them into a PostgreSQL database and using jq for JSON parsing:
DATABASE="aws_instances"
TABLE_NAME="aws_ec2"
SAVED_FIELDS="state, name, type, instance_id, public_ip, launch_time, region, profile, publicdnsname"
# collects the regions to display them in the end of script
REGIONS_WITH_INSTANCES=""
for region in `aws ec2 describe-regions --output text | cut -f3`
do
# this mapping depends on describe-instances command output
INSTANCE_ATTRIBUTES="{
state: .State.Name,
name: .KeyName, type: .InstanceType,
instance_id: .InstanceId,
public_ip: .NetworkInterfaces[0].Association.PublicIp,
launch_time: .LaunchTime,
\"region\": \"$region\",
\"profile\": \"$AWS_PROFILE\",
publicdnsname: .PublicDnsName
}"
echo -e "\nListing AWS EC2 Instances in region:'$region'..."
JSON=".Reservations[] | ( .Instances[] | $INSTANCE_ATTRIBUTES)"
INSTANCE_JSON=$(aws ec2 describe-instances --region $region)
if echo $INSTANCE_JSON | jq empty; then
# "Parsed JSON successfully and got something other than false/null"
OUT="$(echo $INSTANCE_JSON | jq $JSON)"
# check if empty
if [[ ! -z "$OUT" ]]; then
for row in $(echo "${OUT}" | jq -c "." ); do
psql -c "INSERT INTO $TABLE_NAME($SAVED_FIELDS) SELECT $SAVED_FIELDS from json_populate_record(NULL::$TABLE_NAME, '${row}') ON CONFLICT (instance_id)
DO UPDATE
SET state = EXCLUDED.state,
name = EXCLUDED.name,
type = EXCLUDED.type,
launch_time = EXCLUDED.launch_time,
public_ip = EXCLUDED.public_ip,
profile = EXCLUDED.profile,
region = EXCLUDED.region,
publicdnsname = EXCLUDED.publicdnsname
" -d $DATABASE
done
REGIONS_WITH_INSTANCES+="\n$region"
else
echo "No instances"
fi
else
echo "Failed to parse JSON, or got false/null"
fi
done
To run jobs in parallel and use multiple profiles, use this script.
#!/bin/bash
for i in profile1 profile2
do
OWNER_ID=`aws iam get-user --profile $i --output text | awk -F ':' '{print $5}'`
tput setaf 2;echo "Profile : $i";tput sgr0
tput setaf 2;echo "OwnerID : $OWNER_ID";tput sgr0
for region in `aws --profile $i ec2 describe-regions --output text | cut -f4`
do
tput setaf 1;echo "Listing Instances in region $region";tput sgr0
aws ec2 describe-instances --query 'Reservations[*].Instances[*].[Tags[?Key==`Name`].Value , InstanceId]' --profile $i --region $region --output text
done &
done
wait
Not sure how long this option's been here, but you can see a global view of everything by searching for EC2 Global View
https://console.aws.amazon.com/ec2globalview/home#
Using bash-my-aws:
region-each instances
Based on @hansaplast's code I created a Windows-friendly version that supports multiple profiles as an argument. Just save the file as a cmd or bat file. You also need to have the jq command available.
@echo off
setlocal enableDelayedExpansion
set PROFILE=%1
IF "%1"=="" (SET PROFILE=default)
echo checking instances in all regions for %PROFILE% account
FOR /F "tokens=* USEBACKQ" %%F IN (`aws ec2 describe-regions --query Regions[*].[RegionName] --output text --profile %PROFILE%`) DO (
echo === region: %%F
aws ec2 describe-instances --region %%F --profile %PROFILE%| jq ".Reservations[].Instances[] | {type: .InstanceType, state: .State.Name, tags: .Tags, zone: .Placement.AvailabilityZone}"
)
You may use a CLI tool designed for enumerating cloud resources (cross-region and cross-account scans) - https://github.com/scopely-devops/skew
After a short configuration you may use the following code to list all instances in all US AWS regions (assuming 123456789012 is your AWS account number).
from skew import scan
arn = scan('arn:aws:ec2:us-*:123456789012:instance/*')
for resource in arn:
    print(resource.data)
A good tool to CRUD AWS resources. It can find [EC2|RDS|IAM..] resources in all regions and perform operations (stop|run|terminate) on the filtered results.
python3 awsconsole.py ec2 all // return list of all instances
python3 awsconsole.py ec2 all -r eu-west-1
python3 awsconsole.py ec2 find -i i-0552e09b7a54fa2cf --[terminate|start|stop]