I'm currently using the following CLI command to get the instance PublicIPAddress and LaunchTime for a given instance Name tag, 'myInstanceName':
aws ec2 describe-instances --filters 'Name=tag:Name,Values=myInstanceName' \
--region us-east-1 \
--query 'Reservations[*].Instances[*].{PublicIpAddress: PublicIpAddress, LaunchTime: LaunchTime}'
This results in the following:
[
    {
        "LaunchTime": "2019-01-25T11:49:06.000Z",
        "PublicIpAddress": "11.111.111.11"
    }
]
This is fine, but if there are two instances with the same name I will get two results in my result JSON. I need to find a way to get the most recent instance for a given name.
Solution Update
This question is quite specific to EC2 instances. The issue can be resolved using two different methods, answered below:
Parsing Result with jq
Using JMESPath
Please see this related question for more general sorting by date with JMESPath, and for further reading.
Here's a method for finding the latest-launched instance, and displaying data about it:
aws ec2 describe-instances --query 'sort_by(Reservations[].Instances[], &LaunchTime)[-1].[InstanceId,PublicIpAddress,LaunchTime]'
sort_by(Reservations[].Instances[], &LaunchTime)[-1] sorts all instances by LaunchTime and returns the last one, i.e. the most recently launched. The listed fields are then retrieved from that instance.
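For the original question's use case (the most recent instance with a given Name tag), the same sort can be combined with the tag filter. A sketch, assuming the myInstanceName tag value from the question:
aws ec2 describe-instances \
--filters 'Name=tag:Name,Values=myInstanceName' \
--query 'sort_by(Reservations[].Instances[], &LaunchTime)[-1].{PublicIpAddress: PublicIpAddress, LaunchTime: LaunchTime}'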
To understand this type of fun, see:
JMESPath Tutorial — JMESPath
JMESPath Examples — JMESPath
Try using the jq utility. It's a command-line JSON parser. If you're not familiar with it then I'd recommend the jq playground for experimentation.
First, flatten the awscli results as follows:
aws ec2 describe-instances \
--query 'Reservations[].Instances[].{ip: PublicIpAddress, tm: LaunchTime}' \
--filters 'Name=tag:Name,Values=myInstanceName'
Note that I've aliased LaunchTime to tm for brevity. That will result in (unsorted) output like this:
[
    {
        "ip": "54.4.5.6",
        "tm": "2019-01-04T19:54:11.000Z"
    },
    {
        "ip": "52.1.2.3",
        "tm": "2019-03-04T20:04:00.000Z"
    }
]
Next, pipe this result into jq and sort by descending tm (the alias for LaunchTime), as follows:
jq 'sort_by(.tm) | reverse'
That will result in output like this:
[
    {
        "ip": "52.1.2.3",
        "tm": "2019-03-04T20:04:00.000Z"
    },
    {
        "ip": "54.4.5.6",
        "tm": "2019-01-04T19:54:11.000Z"
    }
]
Finally, use jq to filter out all but the first result, as follows:
jq 'sort_by(.tm) | reverse | .[0]'
This will yield one result, the most recently launched instance:
{
    "ip": "52.1.2.3",
    "tm": "2019-03-04T20:04:00.000Z"
}
Putting it all together, the final command is:
aws ec2 describe-instances \
--query 'Reservations[].Instances[].{ip: PublicIpAddress, tm: LaunchTime}' \
--filters 'Name=tag:Name,Values=myInstanceName' | \
jq 'sort_by(.tm) | reverse | .[0]'
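If you only need the IP address itself (to feed into ssh, for example), jq's -r flag prints the raw string without the surrounding quotes. A small variation on the command above, as a sketch:
aws ec2 describe-instances \
--query 'Reservations[].Instances[].{ip: PublicIpAddress, tm: LaunchTime}' \
--filters 'Name=tag:Name,Values=myInstanceName' | \
jq -r 'sort_by(.tm) | reverse | .[0].ip'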
Related
I'm new to the AWS CLI and I am trying to build a CSV server inventory of my project's AWS RDS instances that includes their tags.
I have done so successfully with EC2 instances using this:
aws ec2 describe-instances \
--query 'Reservations[*].Instances[*].[PrivateIpAddress, InstanceType, [Tags[?Key==`Name`].Value][0][0], [Tags[?Key==`ENV`].Value][0][0]]' \
--output text | sed -E 's/\s+/,/g' >> ec2list.csv
The above command gives me a CSV with the IP address, instance type, and the values of the listed tags.
However, I am currently trying to do so unsuccessfully on RDS instances with this:
aws rds describe-db-instances \
--query 'DBInstances[*].[DBInstanceIdentifier, DBInstanceArn, [Tags[?Key==`Component`].Value][0][0], [Tags[?Key==`Engine`].Value][0][0]]' \
--output text | sed -E 's/\s+/,/g' >> rdslist.csv
The RDS command only returns the instance ARN and identifier, and the tag values show up as None even though they definitely do have values.
What modifications need to be made to my RDS query to show the tag values, or is this even possible? Thanks
You will probably need one more call: ListTagsForResource (https://docs.aws.amazon.com/AmazonRDS/latest/APIReference//API_ListTagsForResource.html).
You can wrap the two commands in a shell script like the example below.
#!/bin/bash
ARNS=$(aws rds describe-db-instances --query "DBInstances[].DBInstanceArn" --output text)
for line in $ARNS; do
    TAGS=$(aws rds list-tags-for-resource --resource-name "$line" --query "TagList[]")
    echo $line $TAGS
done
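If you need specific tag values for the CSV (the Component and Engine keys from the question are assumed here), a rough sketch extending that loop could query each key separately:
#!/bin/bash
ARNS=$(aws rds describe-db-instances --query "DBInstances[].DBInstanceArn" --output text)
for line in $ARNS; do
    # pick out a single tag value per key; None is printed when the tag is missing
    component=$(aws rds list-tags-for-resource --resource-name "$line" --query "TagList[?Key=='Component'].Value | [0]" --output text)
    engine=$(aws rds list-tags-for-resource --resource-name "$line" --query "TagList[?Key=='Engine'].Value | [0]" --output text)
    echo "$line,$component,$engine" >> rdslist.csv
done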
Realized that tags can be displayed in my original query. RDS does not use Tags like EC2 instances do, but TagList. E.g.:
aws rds describe-db-instances \
--query 'DBInstances[*].[DBInstanceIdentifier, DBInstanceArn, [TagList[?Key==`Component`].Value][0][0], [TagList[?Key==`Engine`].Value][0][0]]' \
--output text | sed -E 's/\s+/,/g' >> rdslist.csv
I am getting an extra None in aws-cli (version 1.11.160) with the --query parameter and --output text when fetching the first element of the query output.
See the examples below.
$ aws kms list-aliases --query "Aliases[?contains(AliasName,'alias/foo')].TargetKeyId|[0]" --output text
a3a1f9d8-a4de-4d0e-803e-137d633df24a
None
$ aws kms list-aliases --query "Aliases[?contains(AliasName,'alias/foo-bar')].TargetKeyId|[0]" --output text
None
None
As far as I know this was working until yesterday, but from today onwards this extra None appears and is killing our Ansible tasks.
Anyone experienced anything similar?
Thanks
I started having this issue in the past few days too. In my case I was querying exports from a cfn stack.
My solution was (since I'll only ever get one result from the query) to change | [0].Value to .Value, which works with --output text.
Some examples:
$ aws cloudformation list-exports --query 'Exports[?Name==`kms-key-arn`] | []'
[
    {
        "ExportingStackId": "arn:aws:cloudformation:ap-southeast-2:111122223333:stack/stack-name/83ea7f30-ba0b-11e8-8b7d-50fae957fc4a",
        "Name": "kms-key-arn",
        "Value": "arn:aws:kms:ap-southeast-2:111122223333:key/a13a4bad-672e-45a3-99c2-c646a9470ffa"
    }
]
$ aws cloudformation list-exports --query 'Exports[?Name==`kms-key-arn`] | [].Value'
[
    "arn:aws:kms:ap-southeast-2:111122223333:key/a13a4bad-672e-45a3-99c2-c646a9470ffa"
]
$ aws cloudformation list-exports --query 'Exports[?Name==`kms-key-arn`] | [].Value' --output text
arn:aws:kms:ap-southeast-2:111122223333:key/a13a4bad-672e-45a3-99c2-c646a9470ffa
$ aws cloudformation list-exports --query 'Exports[?Name==`kms-key-arn`] | [0].Value' --output text
arn:aws:kms:ap-southeast-2:111122223333:key/a13a4bad-672e-45a3-99c2-c646a9470ffa
None
I'm no closer to finding out why it's happening, but it disproves @LHWizard's theory, or at least indicates there are conditions where that explanation isn't sufficient.
The best explanation is that not every match for your query statement has a TargetKeyId. On my account, there are several Aliases that only have AliasArn and AliasName key/value pairs. The None comes from a null value for TargetKeyId, in other words.
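If that's the cause, one way to sidestep the stray None (a sketch, not a general fix for the CLI behaviour) is to drop entries without a TargetKeyId before taking the first element:
aws kms list-aliases --query "Aliases[?contains(AliasName,'alias/foo') && TargetKeyId].TargetKeyId | [0]" --output text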
I came across the same issue when listing step functions. I consider it to be a bug. I don't like solutions that ignore the first or last element, expecting it will always be None at that position - at some stage the issue will get fixed and your workaround has introduced a nasty bug.
So, in my case, I did this as a safe workaround (adapt to your needs):
#!/usr/bin/env bash
arn="<step function arn goes here>"
arns=()
for arn in $(aws stepfunctions list-executions --state-machine-arn "$arn" --max-items 50 --query 'executions[].executionArn' --output text); do
    [[ $arn == 'None' ]] || arns+=("$arn")
done
# process execution arns
for arn in "${arns[@]}"; do
    echo "$arn" # or whatever
done
Supposing you need only the first value:
Replace --output text with --output json and you can parse the result with jq.
Therefore, you'll have something like this:
aws kms list-aliases --query "Aliases[?contains(AliasName,'alias/foo')].TargetKeyId|[0]" --output json | jq -r '.'
P.S. the -r option with jq removes the quotes around the response.
I am trying to use an AWS CLI command to filter volumes based on volume type and status that have neither a Name tag nor an additional tag, something like below:
aws ec2 describe-volumes --filters Name=volume-type,Values=gp2 Name=status,Values="available" --query 'Volumes[?!not_null(Tags[?Key == `Name`].Value,Tags[?Key == `Alias`].Value)]'
The above CLI command works, but the not_null part is not applied to both tags. It's only filtering on volumes which don't have the Name tag, but it is still listing all the volumes which have the Alias tag.
I would like both of them (tagged as Name and Alias) NOT to come up, basically.
Well, based on this link, which only filters on one tag:
aws ec2 describe-volumes --filters Name=volume-type,Values=gp2 Name=status,Values="available" --query 'Volumes[?!not_null(Tags[?Key == `Name`])]'
EDIT: I'm trying to do something similar for describe-snapshots with StartTime:
aws ec2 describe-snapshots --owner-ids "***********" --query 'Snapshots[?!not_null(Tags[?Key == `Name`]) && !not_null(Tags[?Key == `Alias`]) && ?StartTime>=`2017-09-15`]'
I'm getting an error... Is it possible to provide a date range above?
You can use a JMESPath and expression (&&), writing something like this:
aws ec2 describe-volumes \
--filters Name=volume-type,Values=gp2 Name=status,Values="available" \
--query 'Volumes[?!not_null(Tags[?Key == `Name`]) && !not_null(Tags[?Key == `Alias`])]'
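Regarding the edit: the extra ? in front of StartTime is what breaks the query; inside a filter only the leading ? is needed, and further conditions are joined with &&. A sketch of the snapshot variant with a date range (the 2017-09-30 upper bound is an arbitrary example), assuming string comparisons on StartTime behave as in the other answers here:
aws ec2 describe-snapshots --owner-ids "***********" \
--query 'Snapshots[?!not_null(Tags[?Key == `Name`]) && !not_null(Tags[?Key == `Alias`]) && StartTime>=`2017-09-15` && StartTime<=`2017-09-30`]'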
Will the below command work or not to delete AWS EC2 snapshots older than a month?
aws describe-snapshots | grep -v (date +%Y-%m-) | grep snap- | awk '{print $2}' | xargs -n 1 -t aws delete-snapshot
Your command won't work mostly because of a typo: aws describe-snapshots should be aws ec2 describe-snapshots.
Anyway, you can do this without any other tools than aws:
snapshots_to_delete=$(aws ec2 describe-snapshots --owner-ids xxxxxxxxxxxx --query 'Snapshots[?StartTime<=`2017-02-15`].SnapshotId' --output text)
echo "List of snapshots to delete: $snapshots_to_delete"
# actual deletion
for snap in $snapshots_to_delete; do
    aws ec2 delete-snapshot --snapshot-id $snap
done
Make sure you always know what you are deleting, by echoing $snap, for example.
Also, adding --dry-run to aws ec2 delete-snapshot can show you that there are no errors in the request.
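For example, a dry-run pass over the same list would look like this; each call then reports whether the request would have succeeded instead of actually deleting anything:
for snap in $snapshots_to_delete; do
    aws ec2 delete-snapshot --snapshot-id "$snap" --dry-run
done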
Edit:
There are two things to pay attention to in the first command:
--owner-ids - your account's unique ID. It can easily be found manually in the top right corner of the AWS Console: Support -> Support Center -> Account Number xxxxxxxxxxxx
--query - a JMESPath query which selects only snapshots created on or before the specified date (e.g. 2017-02-15): Snapshots[?StartTime<=`2017-02-15`].SnapshotId
+1 to @roman-zhuzha for getting me close. I did have trouble when $snapshots_to_delete didn't parse into a long string of snapshot IDs separated by whitespace.
This script, below, does parse them into a long string of snapshot IDs separated by whitespace on my Ubuntu (trusty) 14.04 in bash with awscli 1.16:
#!/usr/bin/env bash
dry_run=1
echo_progress=1
d=$(date +'%Y-%m-%d' -d '1 month ago')
if [ $echo_progress -eq 1 ]
then
    echo "Date of snapshots to delete (if older than): $d"
fi
snapshots_to_delete=$(aws ec2 describe-snapshots \
    --owner-ids xxxxxxxxxxxxx \
    --output text \
    --query "Snapshots[?StartTime<'$d'].SnapshotId" \
)
if [ $echo_progress -eq 1 ]
then
    echo "List of snapshots to delete: $snapshots_to_delete"
fi
for oldsnap in $snapshots_to_delete; do
    # some $oldsnaps will be in use, so you can't delete them
    # for "snap-a1234xyz" currently in use by "ami-zyx4321ab"
    # (and others it can't delete) add conditionals like this
    if [ "$oldsnap" = "snap-a1234xyz" ] ||
       [ "$oldsnap" = "snap-c1234abc" ]
    then
        if [ $echo_progress -eq 1 ]
        then
            echo "skipping $oldsnap known to be in use by an ami"
        fi
        continue
    fi
    if [ $echo_progress -eq 1 ]
    then
        echo "deleting $oldsnap"
    fi
    if [ $dry_run -eq 1 ]
    then
        # dryrun will not actually delete the snapshots
        aws ec2 delete-snapshot --snapshot-id $oldsnap --dry-run
    else
        aws ec2 delete-snapshot --snapshot-id $oldsnap
    fi
done
Switch these variables as necessary:
dry_run=1 # set this to 0 to actually delete
echo_progress=1 # set this to 0 to not echo stmnts
Change the date -d string to a human readable version of the number of days, months, or years back you want to delete "older than":
d=$(date +'%Y-%m-%d' -d '15 days ago') # half a month
Find your account id and update these XXXX's to that number:
--owner-ids xxxxxxxxxxxxx \
If running this in a cron, you only want to see errors and warnings. A frequent warning will be that there are snapshots in use. The two example snapshot id's (snap-a1234xyz, snap-c1234abc) are ignored since they would otherwise print something like:
An error occurred (InvalidSnapshot.InUse) when calling the DeleteSnapshot operation: The snapshot snap-a1234xyz is currently in use by ami-zyx4321ab
See the comments near the "snap-a1234xyz" example snapshot id for how to handle this output.
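If you want to find out which AMI is holding a snapshot in use (so you can add it to the skip list above), the block-device-mapping.snapshot-id filter on describe-images can help; a sketch using the example snapshot ID from above:
aws ec2 describe-images --owners self \
--filters Name=block-device-mapping.snapshot-id,Values=snap-a1234xyz \
--query 'Images[].ImageId' --output text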
And don't forget to check on the handy examples and references in the 1.16 aws cli describe-snapshots manual.
You can use 'self' in '--owner-ids' and delete the snapshots created before a specific date (e.g. 2018-01-01) with this one-liner command:
for i in $(aws ec2 describe-snapshots --owner-ids self --query 'Snapshots[?StartTime<=`2018-01-01`].SnapshotId' --output text); do echo Deleting $i; aws ec2 delete-snapshot --snapshot-id $i; sleep 1; done;
The date condition must be within parentheses ():
aws ec2 describe-snapshots \
--owner-ids 012345678910 \
--query "Snapshots[?(StartTime<='2020-03-31')].[SnapshotId]"
I'm trying to remove all my AWS EC2 snapshots except the last 6 with this script:
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
# Backup script
Volume="{VOL-DATA}"
Owner="{OWNER}"
Description="{DESCRIPTION}"
Local_numbackups=6
Local_region="us-west-1"
# Remove old snapshots associated to a description, keep the last $Local_numbackups
aws ec2 describe-snapshots --filters Name=description,Values=$Description | grep "SnapshotId" | head -n -$Local_numbackups | awk '{print $2}' | sed -e 's/,//g' | xargs -n 1 -t aws ec2 delete-snapshot --snapshot-id
However it doesn't work. It deletes snapshots, but not the oldest ones. Why?
You're trying to do something too complex to be handled (gracefully) in one line, so we'll need to break it down a bit. First, let's get the snapshots sorted by age, oldest to newest:
aws ec2 describe-snapshots --filters Name=description,Values=$Description --query 'Snapshots[*].[StartTime,SnapshotId]' --output text | sort -n
Then we can drop the StartTime field to get the snapshot ID alone:
aws ec2 describe-snapshots --filters Name=description,Values=$Description --query 'Snapshots[*].[StartTime,SnapshotId]' --output text | sort -n | sed -e 's/^.*\t//'
head (or tail) isn't really suitable for discarding the fixed number of snapshots we want to keep. We need to filter those out another way. So, putting it all together:
# Get array of snapshot IDs sorted by age (oldest to newest)
snapshots=($(aws ec2 describe-snapshots --filters Name=description,Values=$Description --query 'Snapshots[*].[StartTime,SnapshotId]' --output text | sort -n | sed -e 's/^.*\t//'))
# Get number of snapshots
count=${#snapshots[@]}
if [ "$count" -lt "$Local_numbackups" ]; then
    echo "We already have less than $Local_numbackups snapshots"
    exit 0
else
    # Drop the last (newest) $Local_numbackups IDs from the array
    snapshots=(${snapshots[@]:0:$((count - Local_numbackups))})
    # Loop through the remaining snapshots and delete
    for snapshot in ${snapshots[@]}; do
        aws ec2 delete-snapshot --snapshot-id $snapshot
    done
fi
(While it's obviously possible to do this in bash with the AWS CLI, it's complex enough that I'd personally rather use a more robust language and the AWS SDK.)
Here is a sample.
days2keep="30"
region="us-west-2"
name="jdoe"
# date: the -v option is for OSX
cutoffdate=`date -j -v-${days2keep}d '+%Y-%m-%d'`
echo "Finding list of snapshots before $cutoffdate "
oldsnapids=$(aws ec2 describe-snapshots --region $region --filters Name=tag:Name,Values=$name --query Snapshots[?StartTime\<=\`$cutoffdate\`].SnapshotId --output text)
for snapid in $oldsnapids
do
echo Deleting snapshot $snapid
aws ec2 delete-snapshot --snapshot-id $snapid --region $region
done
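Note that date -j -v-${days2keep}d is BSD/OSX syntax. On Linux with GNU date, a rough equivalent (an assumption, not part of the original sample) would be:
cutoffdate=$(date -d "${days2keep} days ago" '+%Y-%m-%d')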
We can delete all old snapshots using the steps below:
First, list all the old snapshot IDs and put them in one file, e.g. /opt/snapshot.txt.
Then use the "aws configure" command to set up command-line access to the AWS account; at this point we need to provide credentials:
Such as:
AWS Access Key ID [None]: XXXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: XXXXXXXXXXXXXXXXXXXXX
Default region name [None]: XXXXXXXXXXXXXXXX
After that we can use the shell script below; we need to give it the file with the snapshot IDs.
Code:
#!/bin/bash
list=$(cat /opt/snapshot.txt)
for i in $list
do
    aws ec2 delete-snapshot --snapshot-id $i
    if [ $? -eq 0 ]; then
        echo Going Good
    else
        echo FAIL
    fi
done
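To build /opt/snapshot.txt in the first place, a minimal sketch (assuming a 2018-01-01 cutoff and snapshots owned by your own account, adjust as needed) could be:
aws ec2 describe-snapshots --owner-ids self \
--query 'Snapshots[?StartTime<=`2018-01-01`].SnapshotId' \
--output text | tr '\t' '\n' > /opt/snapshot.txt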
Thanks