I can get the details with
$ aws lambda get-function --function-name random_number
{
"Configuration": {
"FunctionName": "random_number",
"FunctionArn": "arn:aws:lambda:us-east-2:193693970645:function:random_number",
"Runtime": "ruby2.5",
"Role": "arn:aws:iam::193693970645:role/service-role/random_number-role-8cy8a1a7",
...
But how can I get just a couple of fields, like the function name?
I tried:
$ aws lambda get-function --function-name random_number --query "Configuration[*].[FunctionName]"
but I get null
Your overall approach is correct, you just need to adjust the query:
$ aws lambda get-function --function-name random_number \
--query "Configuration.FunctionName" --output text
I also added a parameter to convert the result to text, which makes processing a bit easier.
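For context on why the original attempt returned null: Configuration is a JSON object, not a list, so the [*] list projection has nothing to iterate over, while plain dotted access works. A quick offline sketch of the same lookup using python3 on a fabricated, truncated response (no AWS call involved):

```shell
# Fabricated response shaped like (part of) the `aws lambda get-function` output.
response='{"Configuration": {"FunctionName": "random_number", "Runtime": "ruby2.5"}}'
# "Configuration.FunctionName" is plain key access, which is why it works:
echo "$response" | python3 -c 'import json,sys; print(json.load(sys.stdin)["Configuration"]["FunctionName"])'
# → random_number
```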
Here is a simple awk script (standard GNU awk on Linux) that does the trick: extract the value of quoted field #3, but only for lines matching /FunctionName/.
awk 'BEGIN {FPAT="\"[^\"]+"}/FunctionName/{print substr($3,2)}'
Piped with your initial command:
$ aws lambda get-function --function-name random_number | awk 'BEGIN {FPAT="\"[^\"]+"}/FunctionName/{print substr($3,2)}'
One way to achieve that is by using jq. Since jq operates on JSON, the command's output must be in JSON format.
From the docs:
jq is like sed for JSON data - you can use it to slice and filter and
map and transform structured data with the same ease that sed, awk,
grep and friends let you play with text.
Usage example:
aws lambda get-function --function-name test --output json | jq -r '.Configuration.FunctionName'
Use get-function-configuration as in the following:
aws lambda get-function-configuration --function-name MyFunction --query "[FunctionName]"
I want to return only the current AWS username using the AWS CLI. I'm on Windows 11. I think there's a way to do it using a regex but I can't figure out how. I think I need to use a pipe along with a regex, but there are no related examples on the JMESPath website. I want something like "only return the text after 'user/'".
Here's what I have so far:
aws sts get-caller-identity --output text --query 'Arn'
which returns "arn:aws:iam::999999009999:user/joe.smith".
I just want to return "joe.smith".
JMESPath does not support splitting a string or matching a substring within a string, so you'll have to resort to a native command (PowerShell here) like:
> (aws sts get-caller-identity --output text --query 'Arn').Split("/")[-1]
or use something like jq :
$ aws sts get-caller-identity --output json | jq '.Arn | split("/")[-1]' -r
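If you want to stay in plain bash instead, parameter expansion can do the same split without any external tool; a sketch using the ARN from the question as a literal:

```shell
arn="arn:aws:iam::999999009999:user/joe.smith"
# ${arn##*/} strips everything up to and including the last "/".
echo "${arn##*/}"   # → joe.smith
```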
The question is: how can one easily fetch sensitive information from AWS Secrets Manager within Bash scripts? Getting the response from the aws cli command is quite straightforward:
json_value=$(aws secretsmanager get-secret-value --secret-id "$1")
The problem is, the response is returned in JSON format, and deserializing and parsing all the parameters takes some effort. Is there an easy way to do it?
If you have stored the secrets as simple strings, you can retrieve them using
aws secretsmanager get-secret-value --secret-id "$SECRET_ID" --query "SecretString" --output text
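If the SecretString itself holds a JSON object, --query cannot reach inside it (it is just a string as far as JMESPath is concerned), so a second parsing step is needed. A sketch with python3 on a fabricated payload; the key name `password` is only an example:

```shell
# Fabricated response shaped like `aws secretsmanager get-secret-value` output,
# where SecretString contains an embedded JSON document.
response='{"SecretString": "{\"username\": \"admin\", \"password\": \"s3cret\"}"}'
# Two parses: one for the response, one for the embedded SecretString.
echo "$response" | python3 -c 'import json,sys; print(json.loads(json.load(sys.stdin)["SecretString"])["password"])'
# → s3cret
```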
I know this is a Q&A, but I just wanted to share a very handy Bash function that gets all the information in a convenient way (Python is required on the instance).
# Usage Ex. exportSecrets <Secrets-Name> <Key-Name-1> <Key-Name-2>...
exportSecrets() {
local json_value;
json_value=$(aws secretsmanager get-secret-value --secret-id "$1")
echo "------->"
printf "Secrets RESULT. Json: \n%s\n" "$json_value"
shift; local json_keys="$*"
fetchJson() {
python - "$json_value" "$json_keys" <<EOF
import json, sys
secrets = json.loads(json.loads(
sys.argv[1])['SecretString']
)
ans = []
for k in sys.argv[2].split(' '):
ans.append(secrets[k])
print(' '.join(ans))
EOF
}
SECRETS=$(fetchJson)
echo "------->"
printf "Resolved Secrets: \n%s\n" "$SECRETS"
}
Now, with the above, you can simply call the function with parameters and get back the SECRETS variable holding the responses as a space-separated list for further use.
exportSecrets "YOUR-KEY-STORAGE" "KEY-NAME-1" "KEY-NAME-2"
key1=$(echo "$SECRETS" | cut -d' ' -f1)
echo "$key1"
key2=$(echo "$SECRETS" | cut -d' ' -f2)
echo "$key2"
I'm running this shell command using groovy (which worked in bash):
aws --profile profileName --region us-east-1 dynamodb update-item --table-name tableName --key '{"group_name": {"S": "group_1"}}' --attribute-updates '{"attr1": {"Value": {"S": "STOP"},"Action": "PUT"}}'
This updates the value of an item to STOP in DynamoDB. In my groovy script, I'm running this command like so:
String command = "aws --profile profileName --region us-east-1 dynamodb update-item --table-name tableName --key '{\"group_name\": {\"S\": \"group_1\"}}' --attribute-updates '{\"attr1\": {\"Value\": {\"S\": \"STOP\"},\"Action\": \"PUT\"}}'"
println(command.execute().text)
When I run this with groovy afile.groovy, nothing is printed out and when I check the table in DynamoDB, it's not updated to STOP. There is something wrong with the way I'm escaping the quotes but I'm not sure what. Would appreciate any insights.
Sidenote: When I do a simple aws command like aws s3 ls it works and prints out the results so it's something with this particular command that is throwing it off.
You don't quote for Groovy (and the underlying exec) -- quoting is something you do for a shell. The execute() on a String does not work like a shell: the underlying code just splits at whitespace, so any quotes are passed down as part of the argument.
Use ["aws", "--profile", profile, ..., "--key", '{"group_name": ...', ...].execute() and ignore any quoting.
And instead of banging strings together to generate JSON, use groovy.json.JsonOutput.toJson([group_name: [S: "group_1"]])
I'm currently using the following CLI command to get the instance PublicIPAddress and LaunchTime for a given instance Name tag, 'myInstanceName':
aws ec2 describe-instances --filters 'Name=tag:Name,Values=myInstanceName' \
--region us-east-1 \
--query 'Reservations[*].Instances[*].{PublicIpAddress: PublicIpAddress, LaunchTime: LaunchTime}'
This results in the following:
[
{
"LaunchTime": "2019-01-25T11:49:06.000Z",
"PublicIpAddress": "11.111.111.11"
}
]
This is fine, but if there are two instances with the same name I will get two results in my result JSON. I need to find a way to get the most recent instance for a given name.
Solution Update
This question is quite specific to EC2 instances. The issue can be resolved using two different methods, answered below:
Parsing Result with jq
Using JMESPath
Please see this related question for more general sorting by date with JMESPath, and for further reading.
Here's a method for finding the latest-launched instance, and displaying data about it:
aws ec2 describe-instances --query 'sort_by(Reservations[].Instances[], &LaunchTime)[-1].[InstanceId,PublicIpAddress,LaunchTime]'
sort_by(Reservations[].Instances[], &LaunchTime)[-1] sorts all instances by LaunchTime and selects the last one, i.e. the most recently launched. (Note that [:-1] would instead return everything except the last element.) The listed fields are then retrieved from that instance.
To understand this type of fun, see:
JMESPath Tutorial — JMESPath
JMESPath Examples — JMESPath
Try using the jq utility. It's a command-line JSON parser. If you're not familiar with it then I'd recommend the jq playground for experimentation.
First flatten the awscli results, as follows:
aws ec2 describe-instances \
--query 'Reservations[].Instances[].{ip: PublicIpAddress, tm: LaunchTime}' \
--filters 'Name=tag:Name,Values=myInstanceName'
Note that I've aliased LaunchTime to tm for brevity. That will result in (unsorted) output like this:
[
{
"ip": "54.4.5.6",
"tm": "2019-01-04T19:54:11.000Z"
},
{
"ip": "52.1.2.3",
"tm": "2019-03-04T20:04:00.000Z"
}
]
Next, pipe this result into jq and sort by descending tm (the alias for LaunchTime), as follows:
jq 'sort_by(.tm) | reverse'
That will result in output like this:
[
{
"ip": "52.1.2.3",
"tm": "2019-03-04T20:04:00.000Z"
},
{
"ip": "54.4.5.6",
"tm": "2019-01-04T19:54:11.000Z"
}
]
Finally, use jq to filter out all but the first result, as follows:
jq 'sort_by(.tm) | reverse | .[0]'
This will yield one result, the most recently launched instance:
{
"ip": "52.1.2.3",
"tm": "2019-03-04T20:04:00.000Z"
}
Putting it all together, the final command is:
aws ec2 describe-instances \
--query 'Reservations[].Instances[].{ip: PublicIpAddress, tm: LaunchTime}' \
--filters 'Name=tag:Name,Values=myInstanceName' | \
jq 'sort_by(.tm) | reverse | .[0]'
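As an aside, since ISO-8601 timestamps sort correctly as plain strings, the same newest-first selection can be done on --output text with sort(1) alone; a sketch on fabricated "LaunchTime PublicIpAddress" lines:

```shell
# Fabricated lines, as `--output text` might emit them (timestamp first).
printf '%s\n' \
  '2019-01-04T19:54:11.000Z 54.4.5.6' \
  '2019-03-04T20:04:00.000Z 52.1.2.3' \
  | sort -r | head -n 1
# → 2019-03-04T20:04:00.000Z 52.1.2.3
```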
I am getting an extra None from aws-cli (version 1.11.160) with the --query parameter and --output text when fetching the first element of the query output.
See the examples below.
$ aws kms list-aliases --query "Aliases[?contains(AliasName,'alias/foo')].TargetKeyId|[0]" --output text
a3a1f9d8-a4de-4d0e-803e-137d633df24a
None
$ aws kms list-aliases --query "Aliases[?contains(AliasName,'alias/foo-bar')].TargetKeyId|[0]" --output text
None
None
As far as I know this was working until yesterday, but from today onwards this extra None comes in and is killing our Ansible tasks.
Anyone experienced anything similar?
Thanks
I started having this issue in the past few days too. In my case I was querying exports from a cfn stack.
My solution was (since I'll only ever get one result from the query) to change | [0].Value to .Value, which works with --output text.
Some examples:
$ aws cloudformation list-exports --query 'Exports[?Name==`kms-key-arn`] | []'
[
{
"ExportingStackId": "arn:aws:cloudformation:ap-southeast-2:111122223333:stack/stack-name/83ea7f30-ba0b-11e8-8b7d-50fae957fc4a",
"Name": "kms-key-arn",
"Value": "arn:aws:kms:ap-southeast-2:111122223333:key/a13a4bad-672e-45a3-99c2-c646a9470ffa"
}
]
$ aws cloudformation list-exports --query 'Exports[?Name==`kms-key-arn`] | [].Value'
[
"arn:aws:kms:ap-southeast-2:111122223333:key/a13a4bad-672e-45a3-99c2-c646a9470ffa"
]
$ aws cloudformation list-exports --query 'Exports[?Name==`kms-key-arn`] | [].Value' --output text
arn:aws:kms:ap-southeast-2:111122223333:key/a13a4bad-672e-45a3-99c2-c646a9470ffa
aws cloudformation list-exports --query 'Exports[?Name==`kms-key-arn`] | [0].Value' --output text
arn:aws:kms:ap-southeast-2:111122223333:key/a13a4bad-672e-45a3-99c2-c646a9470ffa
None
I'm no closer to finding out why it's happening, but it disproves @LHWizard's theory, or at least indicates there are conditions where that explanation isn't sufficient.
The best explanation is that not every match for your query statement has a TargetKeyId. On my account, there are several Aliases that only have AliasArn and AliasName key/value pairs. The None comes from a null value for TargetKeyId, in other words.
I came across the same issue when listing step functions. I consider it to be a bug. I don't like solutions that ignore the first or last element, expecting it will always be None at that position - at some stage the issue will get fixed and your workaround has introduced a nasty bug.
So, in my case, I did this as a safe workaround (adapt to your needs):
#!/usr/bin/env bash
arn="<step function arn goes here>"
arns=()
for arn in $(aws stepfunctions list-executions --state-machine-arn "$arn" --max-items 50 --query 'executions[].executionArn' --output text); do
[[ $arn == 'None' ]] || arns+=("$arn")
done
# process execution arns
for arn in "${arns[@]}"; do
echo "$arn" # or whatever
done
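The filtering loop can be exercised offline with fabricated values to confirm that None entries are dropped (the ARN strings here are made up):

```shell
arns=()
# Simulated `--output text` tokens, including a spurious None.
for arn in 'arn:aws:states:us-east-1:111122223333:execution:sm:a' None 'arn:aws:states:us-east-1:111122223333:execution:sm:b'; do
  [[ $arn == 'None' ]] || arns+=("$arn")
done
echo "${#arns[@]}"   # → 2
```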
Supposing you need only the first value:
Replace --output text with --output json, and you can then parse the result with jq.
Therefore, you'll have something like
aws kms list-aliases --query "Aliases[?contains(AliasName,'alias/foo')].TargetKeyId|[0]" --output json | jq -r '.'
P.S. The -r option of jq removes the quotes around the response.