To assume an AWS role in the CLI, I run the following command:
aws sts assume-role --role-arn arn:aws:iam::123456789123:role/myAwesomeRole --role-session-name test --region eu-central-1
This gives me output that follows this schema:
{
"Credentials": {
"AccessKeyId": "someAccessKeyId",
"SecretAccessKey": "someSecretAccessKey",
"SessionToken": "someSessionToken",
"Expiration": "2020-08-04T06:52:13+00:00"
},
"AssumedRoleUser": {
"AssumedRoleId": "idOfTheAssummedRole",
"Arn": "theARNOfTheRoleIWantToAssume"
}
}
Then I manually copy and paste the values of AccessKeyId, SecretAccessKey and SessionToken into a series of exports like this:
export AWS_ACCESS_KEY_ID="someAccessKeyId"
export AWS_SECRET_ACCESS_KEY="someSecretAccessKey"
export AWS_SESSION_TOKEN="someSessionToken"
To finally assume the role.
How can I do this in one go, without manually copying and pasting the output of the aws sts ... command into the exports?
No jq, no eval, no multiple exports - using the printf built-in (i.e. no credential leakage through /proc) and command substitution:
export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s" \
$(aws sts assume-role \
--role-arn arn:aws:iam::123456789012:role/MyAssumedRole \
--role-session-name MySessionName \
--query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" \
--output text))
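To sanity-check that the exported credentials took effect, an optional follow-up is to print the caller identity:
aws sts get-caller-identity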
Finally, a colleague shared with me this awesome snippet that gets the work done in one go:
eval $(aws sts assume-role --role-arn arn:aws:iam::123456789123:role/myAwesomeRole --role-session-name test | jq -r '.Credentials | "export AWS_ACCESS_KEY_ID=\(.AccessKeyId)\nexport AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey)\nexport AWS_SESSION_TOKEN=\(.SessionToken)\n"')
Apart from the AWS CLI, it only requires jq, which is available on most Linux desktops.
You can store an IAM Role as a profile in the AWS CLI and it will automatically assume the role for you.
Here is an example from Using an IAM role in the AWS CLI - AWS Command Line Interface:
[profile marketingadmin]
role_arn = arn:aws:iam::123456789012:role/marketingadminrole
source_profile = user1
This is saying:
If a user specifies --profile marketingadmin
Then use the credentials of profile user1
To call AssumeRole on the specified role
This means you can simply call a command like this and it will assume the role and use the returned credentials automatically:
aws s3 ls --profile marketingadmin
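If you prefer not to repeat --profile on every command, you can also set the profile for the whole shell session via the AWS_PROFILE environment variable:
export AWS_PROFILE=marketingadmin
aws s3 ls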
Arcones's answer is good but here's a way that doesn't require jq:
eval $(aws sts assume-role \
--role-arn arn:aws:iam::012345678901:role/TrustedThirdParty \
--role-session-name=test \
--query 'join(``, [`export `, `AWS_ACCESS_KEY_ID=`,
Credentials.AccessKeyId, ` ; export `, `AWS_SECRET_ACCESS_KEY=`,
Credentials.SecretAccessKey, `; export `, `AWS_SESSION_TOKEN=`,
Credentials.SessionToken])' \
--output text)
I had the same problem and I managed it using one of the runtimes the CLI gave me.
Once I obtained the credentials, I used this approach, even if it's not particularly elegant (I used the PHP runtime, but you could use whatever is available in your CLI):
- export AWS_ACCESS_KEY_ID=`php -r 'echo json_decode(file_get_contents("credentials.json"))->Credentials->AccessKeyId;'`
- export AWS_SECRET_ACCESS_KEY=`php -r 'echo json_decode(file_get_contents("credentials.json"))->Credentials->SecretAccessKey;'`
- export AWS_SESSION_TOKEN=`php -r 'echo json_decode(file_get_contents("credentials.json"))->Credentials->SessionToken;'`
where credentials.json is the output of the assumed role:
aws sts assume-role --role-arn "arn-of-the-role" --role-session-name "arbitrary-session-name" > credentials.json
Obviously this is just one approach, particularly helpful if you are automating the process. It worked for me, but I don't know if it's the best way; it's certainly not the most direct.
You can use the AWS CLI config with an external credential source, following this guide: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sourcing-external.html.
Create a shell script, for example assume-role.sh:
#!/bin/sh
aws sts --profile $2 assume-role --role-arn arn:aws:iam::123456789012:role/$1 \
--role-session-name test \
--query "Credentials" \
| jq --arg version 1 '. + {Version: $version|tonumber}'
In ~/.aws/config, configure the profiles to use the shell script:
[profile desktop]
region=ap-southeast-1
output=json
[profile external-test]
credential_process = "/path/assume-role.sh" test desktop
[profile external-test2]
credential_process = "/path/assume-role.sh" test2 external-test
In case anyone wants to write the credentials to the AWS credentials file:
#!/bin/bash
# Replace the variables with your own values
ROLE_ARN=<role_arn>
PROFILE=<profile_name>
REGION=<region>
# Assume the role
TEMP_CREDS=$(aws sts assume-role --role-arn "$ROLE_ARN" --role-session-name "temp-session" --output json)
# Extract the necessary information from the response
ACCESS_KEY=$(echo $TEMP_CREDS | jq -r .Credentials.AccessKeyId)
SECRET_KEY=$(echo $TEMP_CREDS | jq -r .Credentials.SecretAccessKey)
SESSION_TOKEN=$(echo $TEMP_CREDS | jq -r .Credentials.SessionToken)
# Put the information into the AWS CLI credentials file
aws configure set aws_access_key_id "$ACCESS_KEY" --profile "$PROFILE"
aws configure set aws_secret_access_key "$SECRET_KEY" --profile "$PROFILE"
aws configure set aws_session_token "$SESSION_TOKEN" --profile "$PROFILE"
aws configure set region "$REGION" --profile "$PROFILE"
# Verify the changes have been made
aws configure list --profile "$PROFILE"
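Once the script has run, the stored temporary credentials can be used like any other profile (a sketch reusing the same <profile_name> placeholder):
aws sts get-caller-identity --profile <profile_name>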
Based on Nev Stokes's answer, if you want to add the credentials to a file:
Using printf:
printf "
[ASSUME-ROLE]
aws_access_key_id = %s
aws_secret_access_key = %s
aws_session_token = %s
x_security_token_expires = %s" \
$(aws sts assume-role --role-arn "arn:aws:iam::<acct#>:role/<role-name>" \
--role-session-name <session-name> \
--query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken,Expiration]" \
--output text) >> ~/.aws/credentials
If you prefer awk:
aws sts assume-role \
--role-arn "arn:aws:iam::<acct#>:role/<role-name>" \
--role-session-name <session-name> \
--query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken,Expiration]" \
--output text | awk '
BEGIN {print "[ROLE-NAME]"}
{ print "aws_access_key_id = " $1 }
{ print "aws_secret_access_key = " $2 }
{ print "aws_session_token = " $3 }
{ print "x_security_token_expires = " $4}' >> ~/.aws/credentials
To update existing credentials in the ~/.aws/credentials file, run the sed command below before running one of the above commands.
sed -i -e '/ROLE-NAME/,+4d' ~/.aws/credentials
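Putting the cleanup and the append together, a small refresh helper could look like this (a sketch based on the printf variant above, with the same <acct#>, <role-name> and <session-name> placeholders and the section name kept consistent as ASSUME-ROLE):
refresh_assume_role_creds() {
  # remove any previous [ASSUME-ROLE] section (header plus four key lines)
  sed -i -e '/ASSUME-ROLE/,+4d' ~/.aws/credentials
  printf "
[ASSUME-ROLE]
aws_access_key_id = %s
aws_secret_access_key = %s
aws_session_token = %s
x_security_token_expires = %s" \
  $(aws sts assume-role --role-arn "arn:aws:iam::<acct#>:role/<role-name>" \
      --role-session-name <session-name> \
      --query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken,Expiration]" \
      --output text) >> ~/.aws/credentials
}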
I am trying to get details from the AWS SSM Parameter Store. The value stored in the parameter looks like this:
CompanyName\credits
Here is the SSM command executed through the AWS CLI, and its output:
aws ssm get-parameters --names "/Data/Details"
Output:
{
"Parameters": [
{
"Name": "/Data/Details",
"Type": "String",
"Value": "CompanyName\\Credits",
"Version": 1,
"LastModifiedDate": "2019-08-13T18:16:40.836000+00:00",
"ARN": "arn:aws:ssm:us-west-1:8484848448444:parameter/Data/Details"
}
],
"InvalidParameters": []
}
In the above output, I am trying to print only Credits from "Value": "CompanyName\\Credits", so I added more filters to my command as follows:
aws ssm get-parameters --names "/Data/Details" | grep -i "Value" | sed -e 's/[",]//g' | awk -F'\\' '{print $2}'
The above command gives no output. But when I try to print the first field, I can see the output Value: CompanyName using the following command:
aws ssm get-parameters --names "/Data/Details" | grep -i "Value" | sed -e 's/[",]//g' | awk -F'\\' '{print $1}'
Since this is a Linux machine, the double backslash in Value: CompanyName\\Credits is there to escape the '\' character of the actual value CompanyName\Credits. Can someone let me know how I can modify the command to print only the value Credits in my output?
I think this should work:
param_info=$(aws ssm get-parameters \
--names "/Data/Details" \
--query 'Parameters[0].Value' \
--output text)
echo "${param_info}" | awk -F'\\' '{print $2}'
# or without awk
echo "${param_info#*\\}" # for Credits
echo "${param_info%\\*}" # for CompanyName
This uses the --query and --output parameters to control the output of the AWS CLI, plus Bash parameter expansion.
I have multiple AWS accounts and I don't remember in which account this EC2 instance was created. Is there an efficient way to figure this out quickly?
Note: I need to know the account DNS name or alias name (not the account number).
If you have access to the instance, you could use the instance metadata API:
[ec2-user ~]$ curl http://169.254.169.254/latest/dynamic/instance-identity/document
It returns JSON with an accountId field.
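A small follow-up sketch, assuming jq is available on the instance and that the credentials you use are allowed to call iam:ListAccountAliases, to go from the identity document to the account alias asked about:
curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | jq -r .accountId
aws iam list-account-aliases --query 'AccountAliases[0]' --output text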
If you have AWS CLI profiles configured for all accounts, you can get the account ID, ARN and user ID.
The script does the following.
Get the list of AWS configuration profiles
Loop over all profiles
Get a list of all EC2 public IP addresses
Print account info if the IP matches, and exit
RUN
./script.sh 52.x.x.x
script.sh
#!/bin/bash
INSTANCE_IP="${1}"
if [ -z "${INSTANCE_IP}" ]; then
echo "pls provide instance IP"
echo "./scipt.sh 54.x.x.x"
exit 1
fi
PROFILE_LIST=$(grep -o "\\[[^]]*]" < ~/.aws/credentials | tr -d "[]")
for PROFILE in $PROFILE_LIST; do
ALL_IPS=$(aws ec2 describe-instances --profile "${PROFILE}" --query "Reservations[].Instances[][PublicIpAddress]" --output text | tr '\r\n' ' ')
echo "looking against profile ${PROFILE}"
for IP in $ALL_IPS; do
if [ "${INSTANCE_IP}" == "${IP}" ]; then
echo "Instance IP matched in below account"
aws sts get-caller-identity --profile "${PROFILE}"
exit 0
fi
done
done
echo "seems like instance not belong to these profile"
echo "${PROFILE_LIST}"
exit 1
Loop over accounts
Loop over regions
Also be aware of Lightsail!
I came up with the following and it helped me. I didn't exclude the regions that don't have Lightsail:
for region in `aws ec2 describe-regions --output text --query 'Regions[*].[RegionName]' --region eu-west-1` ; do \
echo $region; \
aws ec2 describe-network-interfaces --output text --filters Name=addresses.private-ip-address,Values="IPv4 address" --region $region ; \
aws lightsail get-instances --output text --query 'instances[*].[name,publicIpAddress]' --region $region; \
done
I have been asked to list all users' access in my company's AWS account. Is there any way to list the resources and the respective permissions a user has? It is difficult to get these details by looking at both IAM policies and resource-based policies, and even harder if the user has cross-account access.
There is no single command that lists all permissions. If you are open to using a tool, you can try one for quickly evaluating IAM permissions in AWS.
You can also try the script below; since listing permissions with a single command is not possible, it combines several commands.
#!/bin/bash
username=ahsan
echo "**********************"
echo "user info"
aws iam get-user --user-name $username
echo "***********************"
echo ""
# if [ $1=="test" ]; then
# all_users=$(aws iam list-users --output text | cut -f 6)
# echo "users in account are $all_users"
# fi
echo "get user groups"
echo "***********************************************"
Groups=$(aws iam list-groups-for-user --user-name $username --output text | awk '{print $5}')
echo "user $username belong to $Groups"
echo "***********************************************"
echo "listing policies in group"
for Group in $Groups
do
echo ""
echo "***********************************************"
echo "list attached policies with group $Group"
aws iam list-attached-group-policies --group-name $Group --output table
echo "***********************************************"
echo ""
done
echo "list attached policies"
aws iam list-attached-user-policies --user-name $username --output table
echo "-------- Inline Policies --------"
for Group in $Groups
do
aws iam list-group-policies --group-name $Group --output table
done
aws iam list-user-policies --user-name $username --output table
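The script above lists policy names; if you also need the inline policy documents themselves, a possible extension (a sketch reusing the same $username variable) is:
for policy in $(aws iam list-user-policies --user-name $username --query 'PolicyNames[]' --output text); do
  aws iam get-user-policy --user-name $username --policy-name $policy
done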
Run the one-liner Bash script below to list all users with their policies, groups and attached policies.
aws iam list-users |grep -i username > list_users ; cat list_users |awk '{print $NF}' |tr '\"' ' ' |tr '\,' ' '|while read user; do echo "\n\n--------------Getting information for user $user-----------\n\n" ; aws iam list-user-policies --user-name $user --output yaml; aws iam list-groups-for-user --user-name $user --output yaml;aws iam list-attached-user-policies --user-name $user --output yaml ;done ;echo;echo
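For readability, the same idea can be written as a short loop using --query instead of grep and awk (a sketch; adjust the output format to taste):
for user in $(aws iam list-users --query 'Users[].UserName' --output text); do
  echo "-------- Getting information for user $user --------"
  aws iam list-user-policies --user-name "$user" --output yaml
  aws iam list-groups-for-user --user-name "$user" --output yaml
  aws iam list-attached-user-policies --user-name "$user" --output yaml
done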
I switch between regions frequently and sometimes I forget to turn off a running instance in another region. I couldn't find any way to see all of my running instances in the Amazon console.
Is there any way to display all the running instances regardless of region?
Nov 2021 Edit: AWS has recently launched the Amazon EC2 Global View with initial support for Instances, VPCs, Subnets, Security Groups and Volumes.
See the announcement or documentation for more details
A non-obvious GUI option is the Tag Editor in the Resource Groups console. Here you can find all instances across all regions, even if the instances were not tagged.
I don't think you can currently do this in the AWS GUI. But here is a way to list all your instances across all regions with the AWS CLI:
for region in `aws ec2 describe-regions --region us-east-1 --output text | cut -f4`
do
echo -e "\nListing Instances in region:'$region'..."
aws ec2 describe-instances --region $region
done
Taken from here (if you want to see the full discussion).
Also, if you're getting a
You must specify a region. You can also configure your region by running "aws configure"
error, you can set one with aws configure set region us-east-1 (thanks @Sabuncu for the comment).
Update
Now (in 2019) the cut command should be applied on the 4th field: cut -f4
In Console
Go to VPC dashboard https://console.aws.amazon.com/vpc/home and click on Running instances -> See all regions.
In CLI
Add this, for example, to your .bashrc. Reload it with source ~/.bashrc, then run it.
Note: besides the AWS CLI, you need to have jq installed.
function aws.print-all-instances() {
REGIONS=`aws ec2 describe-regions --region us-east-1 --output text --query Regions[*].[RegionName]`
for REGION in $REGIONS
do
echo -e "\nInstances in '$REGION'..";
aws ec2 describe-instances --region $REGION | \
jq '.Reservations[].Instances[] | "EC2: \(.InstanceId): \(.State.Name)"'
done
}
Example output:
$ aws.print-all-instances
Listing Instances in region: 'eu-north-1'..
"EC2: i-0548d1de00c39f923: terminated"
"EC2: i-0fadd093234a1c21d: running"
Listing Instances in region: 'ap-south-1'..
Listing Instances in region: 'eu-west-3'..
Listing Instances in region: 'eu-west-2'..
Listing Instances in region: 'eu-west-1'..
Listing Instances in region: 'ap-northeast-2'..
Listing Instances in region: 'ap-northeast-1'..
Listing Instances in region: 'sa-east-1'..
Listing Instances in region: 'ca-central-1'..
Listing Instances in region: 'ap-southeast-1'..
Listing Instances in region: 'ap-southeast-2'..
Listing Instances in region: 'eu-central-1'..
Listing Instances in region: 'us-east-1'..
Listing Instances in region: 'us-east-2'..
Listing Instances in region: 'us-west-1'..
Listing Instances in region: 'us-west-2'..
From VPC Dashboard:
First go to VPC Dashboard
Then find Running instances and expand See all regions. Here you can find all the running instances across all regions:
From EC2 Global View:
You can also use AWS EC2 Global View to see the resource summary and resource counts per region.
@imTachu's solution works well. To do this via the AWS console...
AWS console
Services
Networking & Content Delivery
VPC
Look for a block named "Running Instances", this will show you the current region
Click the "See all regions" link underneath
Every time you create a resource, tag it with a name; you can then use Resource Groups to find all types of resources with a Name tag across all regions.
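For example, to tag an existing instance from the CLI so it shows up in Tag Editor / Resource Groups (a sketch; the instance ID and tag value are placeholders):
aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=Name,Value=my-web-server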
After reading through all the solutions and trying a bunch of stuff, the one that worked for me was:
Go to Resource Group
Tag Editor
Select All Regions
Select EC2 Instance in resource type
Click Search Resources
Based on @imTachu's answer but less verbose, plus faster. You need to have jq and the AWS CLI installed.
set +m
for region in $(aws ec2 describe-regions --query "Regions[*].[RegionName]" --output text); do
aws ec2 describe-instances --region "$region" | jq ".Reservations[].Instances[] | {type: .InstanceType, state: .State.Name, tags: .Tags, zone: .Placement.AvailabilityZone}" &
done; wait; set -m
The script runs aws ec2 describe-instances in parallel for each region (15 at the time of writing) and extracts only the relevant bits (type, state, tags, availability zone) from the JSON output. The set +m is needed so the background processes don't report when starting/ending.
Example output:
{
"type": "t2.micro",
"state": "stopped",
"tags": [
{
"Key": "Name",
"Value": "MyEc2WebServer"
}
],
"zone": "eu-central-1b"
}
You can run DescribeInstances() across all regions.
Additionally, you can:
Automate it through Lambda and CloudWatch.
Create an API endpoint using Lambda and API Gateway and use it in your code.
A sample in Node.js:
Create an array of regions (endpoints). [You can also use the describeRegions() API.]
var regionNames = ['us-west-1', 'us-west-2', 'us-east-1', 'eu-west-1', 'eu-central-1', 'sa-east-1', 'ap-southeast-1', 'ap-southeast-2', 'ap-northeast-1', 'ap-northeast-2'];
regionNames.forEach(function(region) {
getInstances(region);
});
Then, in the getInstances function, describeInstances() can be called.
// Assumes the AWS SDK for JavaScript (v2) has been loaded: var AWS = require('aws-sdk');
function getInstances(region) {
    var EC2 = new AWS.EC2({ region: region });
    var params = {}; // add Filters here if needed
    EC2.describeInstances(params, function(err, data) {
        if (err) return console.log("Error connecting to AWS, No Such Instance Found!");
        data.Reservations.forEach(function(reservation) {
            // do any operation intended
        });
    });
}
And of course, feel free to use ES6 and above.
I wrote a Lambda function to get all the instances in any state [running, stopped] and from any region; it will also give details about the instance type and various other parameters.
The script runs across all AWS regions and calls DescribeInstances() to get the instances.
You just need to create a Lambda function with the Node.js runtime.
You can even create an API out of it and use it as and when required.
Additionally, you can see the official AWS docs for DescribeInstances to explore many more options.
A quick Bash one-liner to print all the instance IDs in all regions:
$ aws ec2 describe-regions --query "Regions[].{Name:RegionName}" --output text |xargs -I {} aws ec2 describe-instances --query Reservations[*].Instances[*].[InstanceId] --output text --region {}
# Example output
i-012344b918d75abcd
i-0156780dad25fefgh
i-0490122cfee84ijkl
...
My script is below, based on various tips from this post and elsewhere. The script is easier to follow (for me at least) than the long command lines.
The script assumes credential profile(s) are stored in file ~/.aws/credentials looking something like:
[default]
aws_access_key_id = foobar
aws_secret_access_key = foobar
[work]
aws_access_key_id = foobar
aws_secret_access_key = foobar
Script:
#!/usr/bin/env bash
#------------------------------------#
# Script to display AWS EC2 machines #
#------------------------------------#
# NOTES:
# o Requires 'awscli' tools (for ex. on MacOS: $ brew install awscli)
# o AWS output is tabbed - we convert to spaces via 'column' command
#~~~~~~~~~~~~~~~~~~~~#
# Assemble variables #
#~~~~~~~~~~~~~~~~~~~~#
regions=$(aws ec2 describe-regions --output text | cut -f4 | sort)
query_mach='Reservations[].Instances[]'
query_flds='PrivateIpAddress,InstanceId,InstanceType'
query_tags='Tags[?Key==`Name`].Value[]'
query_full="$query_mach.[$query_flds,$query_tags]"
#~~~~~~~~~~~~~~~~~~~~~~~~#
# Output AWS information #
#~~~~~~~~~~~~~~~~~~~~~~~~#
# Iterate through credentials profiles
for profile in 'default' 'work'; do
# Print profile header
echo -e "\n"
echo -e "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
echo -e "Credentials profile:'$profile'..."
echo -e "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
# Iterate through all regions
for region in $regions; do
# Print region header
echo -e "\n"
echo -e "Region: $region..."
echo -e "--------------------------------------------------------------"
# Output items for the region
aws ec2 describe-instances \
--profile $profile \
--region $region \
--query $query_full \
--output text \
| sed 's/None$/None\n/' \
| sed '$!N;s/\n/ /' \
| column -t -s $'\t'
done
done
AWS has recently launched the Amazon EC2 Global View with initial support for Instances, VPCs, Subnets, Security Groups, and Volumes.
To see all running instances, go to the EC2 or VPC console and click EC2 Global View in the top left corner.
Then click on the Global Search tab, filter by Resource type, and select Instance. Unfortunately, this will show instances in all states (a CLI filter for running instances only is sketched after this list):
pending
running
stopping
stopped
shutting-down
terminated
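If you only want running instances from the CLI, the equivalent filter looks like this (a sketch; replace <region> or loop over regions as in the other answers):
aws ec2 describe-instances --filters Name=instance-state-name,Values=running --query 'Reservations[].Instances[].InstanceId' --output text --region <region>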
I created an open-source script that helps you to list all AWS instances. https://github.com/Appnroll/aws-ec2-instances
This is the part of the script that lists the instances for one profile, recording them into a PostgreSQL database and using jq for JSON parsing:
DATABASE="aws_instances"
TABLE_NAME="aws_ec2"
SAVED_FIELDS="state, name, type, instance_id, public_ip, launch_time, region, profile, publicdnsname"
# collects the regions to display them in the end of script
REGIONS_WITH_INSTANCES=""
for region in `aws ec2 describe-regions --output text | cut -f3`
do
# this mapping depends on the describe-instances command output
INSTANCE_ATTRIBUTES="{
state: .State.Name,
name: .KeyName, type: .InstanceType,
instance_id: .InstanceId,
public_ip: .NetworkInterfaces[0].Association.PublicIp,
launch_time: .LaunchTime,
\"region\": \"$region\",
\"profile\": \"$AWS_PROFILE\",
publicdnsname: .PublicDnsName
}"
echo -e "\nListing AWS EC2 Instances in region:'$region'..."
JSON=".Reservations[] | ( .Instances[] | $INSTANCE_ATTRIBUTES)"
INSTANCE_JSON=$(aws ec2 describe-instances --region $region)
if echo $INSTANCE_JSON | jq empty; then
# "Parsed JSON successfully and got something other than false/null"
OUT="$(echo $INSTANCE_JSON | jq $JSON)"
# check if empty
if [[ ! -z "$OUT" ]]; then
for row in $(echo "${OUT}" | jq -c "." ); do
psql -c "INSERT INTO $TABLE_NAME($SAVED_FIELDS) SELECT $SAVED_FIELDS from json_populate_record(NULL::$TABLE_NAME, '${row}') ON CONFLICT (instance_id)
DO UPDATE
SET state = EXCLUDED.state,
name = EXCLUDED.name,
type = EXCLUDED.type,
launch_time = EXCLUDED.launch_time,
public_ip = EXCLUDED.public_ip,
profile = EXCLUDED.profile,
region = EXCLUDED.region,
publicdnsname = EXCLUDED.publicdnsname
" -d $DATABASE
done
REGIONS_WITH_INSTANCES+="\n$region"
else
echo "No instances"
fi
else
echo "Failed to parse JSON, or got false/null"
fi
done
To run the jobs in parallel and use multiple profiles, use this script.
#!/bin/bash
for i in profile1 profile2
do
OWNER_ID=`aws iam get-user --profile $i --output text | awk -F ':' '{print $5}'`
tput setaf 2;echo "Profile : $i";tput sgr0
tput setaf 2;echo "OwnerID : $OWNER_ID";tput sgr0
for region in `aws --profile $i ec2 describe-regions --output text | cut -f4`
do
tput setaf 1;echo "Listing Instances in region $region";tput sgr0
aws ec2 describe-instances --query 'Reservations[*].Instances[*].[Tags[?Key==`Name`].Value , InstanceId]' --profile $i --region $region --output text
done &
done
wait
Not sure how long this option's been here, but you can see a global view of everything by searching for EC2 Global View
https://console.aws.amazon.com/ec2globalview/home#
Using bash-my-aws:
region-each instances
Based on @hansaplast's code, I created a Windows-friendly version that takes the profile as an argument. Just save the file as a .cmd or .bat file. You also need to have the jq command available.
@echo off
setlocal enableDelayedExpansion
set PROFILE=%1
IF "%1"=="" (SET PROFILE=default)
echo checking instances in all regions for %PROFILE% account
FOR /F "tokens=* USEBACKQ" %%F IN (`aws ec2 describe-regions --query Regions[*].[RegionName] --output text --profile %PROFILE%`) DO (
echo === region: %%F
aws ec2 describe-instances --region %%F --profile %PROFILE%| jq ".Reservations[].Instances[] | {type: .InstanceType, state: .State.Name, tags: .Tags, zone: .Placement.AvailabilityZone}"
)
You may use a CLI tool designed for enumerating cloud resources (cross-region and cross-account scans): https://github.com/scopely-devops/skew
After a short configuration, you may use the following code to list all instances in all US AWS regions (assuming 123456789012 is your AWS account number).
from skew import scan
arn = scan('arn:aws:ec2:us-*:123456789012:instance/*')
for resource in arn:
print(resource.data)
A good tool to CRUD AWS resources. It finds [EC2|RDS|IAM..] in all regions and can run operations (stop|run|terminate) on filtered results.
python3 awsconsole.py ec2 all   # returns a list of all instances
python3 awsconsole.py ec2 all -r eu-west-1
python3 awsconsole.py ec2 find -i i-0552e09b7a54fa2cf --[terminate|start|stop]