AWS CLI expression: Bad jmespath expression: Unknown token

I run the query below and it works.
aws ec2 describe-security-groups \
--filters Name=ip-permission.from-port,Values=21 Name=ip-permission.to-port,Values=21 \
--query 'SecurityGroups[].[Tags[?Key==`Owner`] | [0].Value, GroupId]' \
--output text
But when I try to get the security groups that are open to all traffic, along with the value of the Owner tag, I run this and get a jmespath error.
aws ec2 describe-security-groups --filters Name=ip-permission.protocol,Values=-1 --query SecurityGroups[?IpPermissions[?IpProtocol == '-1' && contains(IpRanges[].CidrIp,'0.0.0.0/0')]].[Tags[?Key==`Owner`] | [0].Value, GroupId]' --output=text
Bad value for --query SecurityGroups[?IpPermissions[?IpProtocol == -1 && contains(IpRanges[].CidrIp,0.0.0.0/0)]].[Tags[?Key==Owner] | [0].Value, GroupId]: Bad jmespath expression: Unknown token /:""

I had to wrap the characters that threw the error in quotes and successfully retrieved output afterwards:
aws rds describe-db-instances \
--query "*[].[dbidentifier,'dbidentifier.cx32323sss6ib.eu-central-1.rds.amazonaws.com','5432',admin]"

Personally I prefer Steampipe, a CLI that can query AWS resources using SQL. It can be more verbose than JMESPath, but it is much easier to read and more flexible for querying.
Here is a query along the lines of your first one, using the aws_vpc_security_group_rule table (with port 22 as the example):
select
  sg.tags ->> 'Owner' as owner,
  sg.group_id
from
  aws_vpc_security_group as sg
  join aws_vpc_security_group_rule as rule on sg.group_id = rule.group_id
where
  rule.type = 'ingress'
  and from_port = 22
  and to_port = 22;
And here is a query to find the groups that are open to the world on all protocols:
select
  sg.tags ->> 'Owner',
  sg.group_id
from
  aws_vpc_security_group as sg
  join aws_vpc_security_group_rule as rule on sg.group_id = rule.group_id
where
  rule.type = 'ingress'
  and rule.ip_protocol = '-1'
  and rule.cidr_ip = '0.0.0.0/0';
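If you want to run these from a script rather than the interactive console, the steampipe CLI can execute a query directly and emit CSV; a sketch, assuming the AWS plugin is installed and configured:
steampipe query --output csv "
  select sg.tags ->> 'Owner' as owner, sg.group_id
  from aws_vpc_security_group as sg
  join aws_vpc_security_group_rule as rule on sg.group_id = rule.group_id
  where rule.type = 'ingress' and rule.ip_protocol = '-1' and rule.cidr_ip = '0.0.0.0/0';
"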

Related

How to list all AWS RDS instances and their tags in CSV

I'm new to the AWS CLI and I am trying to build a CSV server inventory of my project's AWS RDS instances that includes their tags.
I have done so successfully with EC2 instances using this:
aws ec2 describe-instances \
--query 'Reservations[*].Instances[*].[PrivateIpAddress, InstanceType, [Tags[?Key==`Name`].Value] [0][0], [Tags[?Key==`ENV`].Value] [0][0] ]' \
--output text | sed -E 's/\s+/,/g' >> ec2list.csv
The above command gives me a CSV with the Ip address, instance type, as well as the values of the listed tags.
However, I am currently trying to do so unsuccessfully on RDS instances with this:
aws rds describe-db-instances \
--query 'DBInstances[*].[DBInstanceIdentifier, DBInstanceArn, [Tags[?Key==`Component`].Value] [0][0], [Tags[?Key==`Engine`].Value] [0][0] ]' \
--output text | sed -E 's/\s+/,/g' >> rdslist.csv
The RDS command only returns the instance ARN and identifier, but the tag values show up as None even though they definitely have values.
What modifications need to be made to my RDS query to show the tag values? Is this even possible? Thanks
You will probably need one more command, list-tags-for-resource: https://docs.aws.amazon.com/AmazonRDS/latest/APIReference//API_ListTagsForResource.html
You can wrap the two commands in a shell script like the example below.
#!/bin/bash
ARNS=$(aws rds describe-db-instances --query "DBInstances[].DBInstanceArn" --output text)
for line in $ARNS; do
TAGS=$(aws rds list-tags-for-resource --resource-name "$line" --query "TagList[]")
echo $line $TAGS
done
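If you also want specific tag values as CSV columns rather than the raw TagList JSON, each key can be pulled out of list-tags-for-resource with its own --query. A sketch reusing the loop above, with the Component and Engine keys from the question:
#!/bin/bash
for arn in $(aws rds describe-db-instances --query "DBInstances[].DBInstanceArn" --output text); do
  # [0] picks the first matching tag value; a missing tag prints as "None"
  component=$(aws rds list-tags-for-resource --resource-name "$arn" --query "TagList[?Key=='Component'].Value | [0]" --output text)
  engine=$(aws rds list-tags-for-resource --resource-name "$arn" --query "TagList[?Key=='Engine'].Value | [0]" --output text)
  echo "$arn,$component,$engine"
done >> rdslist.csv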
I realized that tags can be displayed by my original query. RDS does not use Tags like EC2 instances do, but TagList. E.g.,
aws rds describe-db-instances \
--query 'DBInstances[*].[DBInstanceIdentifier, DBInstanceArn, [TagList[?Key==`Component`].Value] [0][0], [TagList[?Key==`Engine`].Value] [0][0] ]' \
--output text | sed -E 's/\s+/,/g' >> rdslist.csv

`aws secretsmanager list-secrets` command to return secrets and filter them by tag

How do I call the aws secretsmanager list-secrets command and filter secrets by their tags? I don't see examples of this here: https://docs.aws.amazon.com/cli/latest/reference/secretsmanager/list-secrets.html
Also, Amazon's docs seem to be wrong: that page says --max-items but it should really be --max-results. There is also no mention on that page of how to filter.
[Original: December 2019]
You can use jq, for example:
aws secretsmanager list-secrets \
| jq '.SecretList[] | select((.Tags[]|select(.Key=="Name")|.Value) | test("^Production$|^Staging$"))'
You can also use the awscli's in-built query option, for example:
aws secretsmanager list-secrets \
--query "SecretList[?Tags[?Key=='Name' && Value=='Production']]"
You can use boolean tests with the awscli's in-built query option, for example:
aws secretsmanager list-secrets \
--query "SecretList[?Tags[?Key=='Name' && (Value=='Production' || Value='Staging')]]"
Here's an outline of a solution using Python and boto3:
from functools import partial

import boto3


def filter_tags(key, values, secret):
    for tag in secret['Tags']:
        if tag['Key'] == key and tag['Value'] in values:
            return True
    return False


sm = boto3.client('secretsmanager')
paginator = sm.get_paginator('list_secrets')
secrets_list_iterator = paginator.paginate()

filter_production = partial(filter_tags, 'Name', ['Production', 'Staging'])

for secrets in secrets_list_iterator:
    for s in filter(filter_production, secrets['SecretList']):
        print(s['Name'], s['Tags'])
[Updated: January 2021]
The aws secretsmanager list-secrets command now supports filtering via the --filters option. However, I recommend that you do NOT use it unless you understand how it actually works (see below) and you would benefit from its particular implementation.
Here's an example of how to use it to filter on secrets with a name that begins with Production:
aws secretsmanager list-secrets \
--filters Key=name,Values=Production
Note that you cannot do an exact match with the --filters option, just a 'begins with' match, so be cautious when using it. If you have secrets named Production and Production-Old, both will be returned. That may not be what you want, so in that case use the original client-side queries described above.
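If you do need an exact name match, one option is to keep the server-side prefix filter and narrow the result client-side with --query; a sketch:
aws secretsmanager list-secrets \
--filters Key=name,Values=Production \
--query "SecretList[?Name=='Production']"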
Here's an example of how to use it to filter on secrets with a name that begins with Production or Staging:
aws secretsmanager list-secrets \
--filters Key=name,Values=Production,Staging
Here's an example of how to use it to filter on secrets with a tag key that begins with stage or a tag value that begins with dev:
aws secretsmanager list-secrets \
--filters Key=tag-key,Values=stage Key=tag-value,Values=dev
Note: the --filters option implements logical OR, not logical AND.
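So if you need AND semantics, for example a specific tag key with a specific value, one approach is to filter server-side on the key and then apply the exact match client-side; a sketch reusing the stage/dev names from above:
aws secretsmanager list-secrets \
--filters Key=tag-key,Values=stage \
--query "SecretList[?Tags[?Key=='stage' && Value=='dev']]"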
Here's a boto3 example, filtering on tag keys that begin with Name or tag values that begin with Production or Staging:
import boto3
sm = boto3.client('secretsmanager')
res = sm.list_secrets(Filters=[
    {'Key': 'tag-key', 'Values': ['Name']},
    {'Key': 'tag-value', 'Values': ['Production', 'Staging']},
])

for secret in res['SecretList']:
    print(secret['Name'], secret['Tags'])

Query AWS CLI to populate Jenkins "Active Choices Reactive Parameter" (Linux)

I have a Jenkins 2.0 job where I require the user to select the list of servers to execute the job against via a Jenkins "Active Choices Reactive Parameter". These servers which the job will execute against are AWS EC2 instances. Instead of hard-coding the available servers in the "Active Choices Reactive Parameter", I'd like to query the AWS CLI to get a list of servers.
A few notes:
I've assigned the Jenkins 2.0 EC2 an IAM role which has sufficient privileges to query AWS via the CLI.
The AWS CLI is installed on the Jenkins EC2.
The "Active Choices Reactive Parameter" will return a list of checkboxes if I hardcode values in a Groovy script, such as:
return ["10.1.1.1", "10.1.1.2", 10.1.1.3"]
I know my awk commands can be improved (I'm not yet sure how), but my primary goal is to get the list of servers dynamically loaded into Jenkins.
I can run the following command directly on the EC2 instance which is hosting Jenkins:
aws ec2 describe-instances --region us-east-2 \
--filters "Name=tag:Env,Values=qa" \
--query "Reservations[*].Instances[*].PrivateIpAddress" \
| grep -o '\"[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\"' \
| awk {'printf $0 ", "'} \
| awk {'print "[" $0'} \
| awk {'gsub(/^[ \t]+|[ \t]+$/, ""); print'} \
| awk {'print substr ($0, 1, length($0)-1)'} \
| awk {'print $0 "]"'}
This will return the following, which is in the format expected by the "Active Choices Reactive Parameter":
["10.1.1.1", "10.1.1.2", 10.1.1.3"]
So, in the "Script" textarea of the "Active Choices Reactive Parameter", I have the following script. The problem is that my server list is never populated. I've tried numerous variations of this script without luck. Can someone please tell me where I've went wrong and what I can do to correct this script so that my list of server IP addresses is dynamically loaded into a Jenkins job?
def standardOut = new StringBuffer(), standardErr = new StringBuffer()
def command = $/
aws ec2 describe-instances --region us-east-2 --filters "Name=tag:Env,Values=qaint" --query "Reservations[*].Instances[*].PrivateIpAddress" |
grep -o '\"[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\"' |
awk {'printf $0 ", "'} |
awk {'print "[" $0'} |
awk {'gsub(/^[ \t]+|[ \t]+$/, ""); print'} |
awk {'print substr ($0, 1, length($0)-1)'} |
awk {'print $0 "]"'}
/$
def proc = command.execute()
proc.consumeProcessOutput(standardOut, standardErr)
proc.waitForOrKill(1000)
return standardOut
I tried to execute your script and standardErr had some errors. It looks like Groovy didn't like the double quotes in the AWS CLI command. Here is a cleaner way to do it without using awk:
def command = 'aws ec2 describe-instances \
--filters Name=tag:Name,Values=Test \
--query Reservations[*].Instances[*].PrivateIpAddress \
--output text'
def proc = command.execute()
proc.waitFor()
def output = proc.in.text
def exitcode = proc.exitValue()
def error = proc.err.text
if (error) {
    println "Std Err: ${error}"
    println "Process exit code: ${exitcode}"
    return exitcode
}
//println output.split()
return output.split()
This script works with Jenkins Active Choices Parameter, and returns the list of IP addresses:
def aws_cmd = 'aws ec2 describe-instances \
--filters Name=instance-state-name,Values=running \
Name=tag:env,Values=dev \
--query Reservations[].Instances[].PrivateIpAddress[] \
--region us-east-2 \
--output text'
def aws_cmd_output = aws_cmd.execute()
// probably required if execution takes long
//aws_cmd_output.waitFor()
def ip_list = aws_cmd_output.text.tokenize()
return ip_list

awscli query not working with no Tags and no Name - multi tags

I am trying to use an aws cli command to filter volumes based on volume type and status that have neither a Name tag nor an additional Alias tag, something like below:
aws ec2 describe-volumes --filters Name=volume-type,Values=gp2 Name=status,Values="available" --query 'Volumes[?!not_null(Tags[?Key == `Name`].Value,Tags[?Key == `Alias`].Value)]'
The above CLI command works, but the not_null part is not being applied to both tags. It only filters out volumes which don't have the "Name" tag, but it still lists all the volumes which have the "Alias" tag.
Basically, I would like volumes with either of the tags (Name or Alias) NOT to come up.
Well, this is based on this link, which only filters on one tag:
aws ec2 describe-volumes --filters Name=volume-type,Values=gp2 Name=status,Values="available" --query 'Volumes[?!not_null(Tags[?Key == `Name`]'
EDIT: I am trying to do something similar with describe-snapshots, adding a StartTime condition:
aws ec2 describe-snapshots --owner-ids "***********" --query 'Snapshots[?!not_null(Tags[?Key == `Name`]) && !not_null(Tags[?Key == `Alias`]) && ?StartTime>=`2017-09-15`]'
I am getting an error... Is it possible to provide a date range above?
You can use a JMESPath && (and) expression, writing something like this:
aws ec2 describe-volumes \
--filters Name=volume-type,Values=gp2 Name=status,Values="available" \
--query 'Volumes[?!not_null(Tags[?Key == `Name`]) && !not_null(Tags[?Key == `Alias`])]'
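For the snapshot query in the edit, the stray ? in front of StartTime is what breaks the expression. Drop it and the date checks can live inside the same filter; ISO-8601 timestamps sort lexically, so at least with the AWS CLI they can be compared as strings. A sketch that keeps the masked owner id and uses an arbitrary end date as an example:
aws ec2 describe-snapshots --owner-ids "***********" \
--query 'Snapshots[?!not_null(Tags[?Key == `Name`]) && !not_null(Tags[?Key == `Alias`]) && StartTime >= `"2017-09-15"` && StartTime <= `"2017-09-30"`]'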

Getting a list of instances in an EC2 auto scale group?

Is there a utility or script available to retrieve a list of all instances from an AWS EC2 auto scaling group?
I need a dynamically generated list of production instances to hook into our deploy process. Is there an existing tool or is this something I am going to have to script?
Here is a bash command that will give you the list of IP addresses of your instances in an AutoScaling group.
for ID in $(aws autoscaling describe-auto-scaling-instances --region us-east-1 --query AutoScalingInstances[].InstanceId --output text);
do
aws ec2 describe-instances --instance-ids $ID --region us-east-1 --query Reservations[].Instances[].PublicIpAddress --output text
done
(you might want to adjust the region and to filter per AutoScaling group if you have several of them)
From a higher-level point of view, I would question the need to connect to individual instances in an AutoScaling group. The dynamic nature of AutoScaling should encourage you to fully automate your deployment and admin processes. To quote an AWS customer: "If you need to ssh to your instance, change your deployment process"
--Seb
The describe-auto-scaling-groups command from the AWS Command Line Interface looks like what you're looking for.
Edit: Once you have the instance IDs, you can use the describe-instances command to fetch additional details, including the public DNS names and IP addresses.
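A sketch of that two-step approach, with a placeholder group name and command substitution feeding the ids straight into describe-instances:
aws ec2 describe-instances \
--instance-ids $(aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names YOUR_ASG \
--query 'AutoScalingGroups[].Instances[].InstanceId' --output text) \
--query 'Reservations[].Instances[].[PublicDnsName,PublicIpAddress]' --output text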
You can use the describe-auto-scaling-instances cli command, and query for your autoscale group name.
Example:
aws autoscaling describe-auto-scaling-instances --region us-east-1 \
--query 'AutoScalingInstances[?AutoScalingGroupName==`YOUR_ASG`]' --output text
Hope that helps
You can also use the command below to fetch private IP addresses without any jq/awk/sed/cut:
$ aws autoscaling describe-auto-scaling-instances --region us-east-1 --output text \
--query "AutoScalingInstances[?AutoScalingGroupName=='ASG-GROUP-NAME'].InstanceId" \
| xargs -n1 aws ec2 describe-instances --region us-east-1 \
--query "Reservations[].Instances[].PrivateIpAddress" --output text --instance-ids
courtesy this
I actually ended up writing a script in Python because I feel more comfortable in Python than Bash:
#!/usr/bin/env python
"""
ec2-autoscale-instance.py

Read Autoscale DNS from AWS

Sample config file,
{
    "access_key": "key",
    "secret_key": "key",
    "group_name": "groupName"
}
"""
from __future__ import print_function

import argparse
import boto.ec2.autoscale

try:
    import simplejson as json
except ImportError:
    import json

CONFIG_ACCESS_KEY = 'access_key'
CONFIG_SECRET_KEY = 'secret_key'
CONFIG_GROUP_NAME = 'group_name'


def main():
    arg_parser = argparse.ArgumentParser(
        description='Read Autoscale DNS names from AWS')
    arg_parser.add_argument('-c', dest='config_file',
                            help='JSON configuration file containing ' +
                                 'access_key, secret_key, and group_name')
    args = arg_parser.parse_args()

    config = json.loads(open(args.config_file).read())
    access_key = config[CONFIG_ACCESS_KEY]
    secret_key = config[CONFIG_SECRET_KEY]
    group_name = config[CONFIG_GROUP_NAME]

    ec2_conn = boto.connect_ec2(access_key, secret_key)
    as_conn = boto.connect_autoscale(access_key, secret_key)
    try:
        group = as_conn.get_all_groups([group_name])[0]
        instances_ids = [i.instance_id for i in group.instances]
        reservations = ec2_conn.get_all_reservations(instances_ids)
        instances = [i for r in reservations for i in r.instances]
        dns_names = [i.public_dns_name for i in instances]
        print('\n'.join(dns_names))
    finally:
        ec2_conn.close()
        as_conn.close()


if __name__ == '__main__':
    main()
Gist
The answer at https://stackoverflow.com/a/12592543/20774 was helpful in developing this script.
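For completeness, invoking it with a config file like the one in the docstring looks like this (the file name is just an example):
python ec2-autoscale-instance.py -c config.json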
Use the snippet below to pick out ASGs with specific tags and list their instances' details.
#!/usr/bin/python
import boto3

ec2 = boto3.resource('ec2', region_name='us-west-2')

def get_instances():
    client = boto3.client('autoscaling', region_name='us-west-2')
    paginator = client.get_paginator('describe_auto_scaling_groups')
    groups = paginator.paginate(PaginationConfig={'PageSize': 100})
    #print groups
    filtered_asgs = groups.search('AutoScalingGroups[] | [?contains(Tags[?Key==`{}`].Value, `{}`)]'.format('Application', 'CCP'))
    for asg in filtered_asgs:
        print asg['AutoScalingGroupName']
        # look up the group's instances by id rather than an empty filter
        instance_ids = [i['InstanceId'] for i in asg['Instances']]
        running_instances = ec2.instances.filter(InstanceIds=instance_ids)
        for instance in running_instances:
            print(instance.private_ip_address)

if __name__ == '__main__':
    get_instances()
For Ruby, using the aws-sdk gem v2:
First create the ec2 object like this:
ec2 = Aws::EC2::Resource.new(region: 'region',
credentials: Aws::Credentials.new('IAM_KEY', 'IAM_SECRET')
)
instances = []
ec2.instances.each do |i|
  p "instance id---", i.id
  instances << i.id
end
This will fetch all instance ids in a particular region, and you can add more filters such as ip_address.