aws elbv2 describe-target-group-attributes \
--target-group-arn arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-targets/73e2d6bc24d8a067
returns:
{
"Attributes": [
{
"Value": "false",
"Key": "stickiness.enabled"
},
{
"Value": "300",
"Key": "deregistration_delay.timeout_seconds"
},
{
"Value": "lb_cookie",
"Key": "stickiness.type"
},
{
"Value": "86400",
"Key": "stickiness.lb_cookie.duration_seconds"
},
{
"Value": "0",
"Key": "slow_start.duration_seconds"
}
]
}
I would like to fetch deregistration_delay.timeout_seconds from the output.
I tried the following, which works in this case because deregistration_delay.timeout_seconds appears in the second position:
aws elbv2 describe-target-group-attributes \
--target-group-arn arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-targets/73e2d6bc24d8a067 \
| jq -r '.Attributes[1].Value'
But for some target groups, deregistration_delay.timeout_seconds appears at a different index.
How can I use jq to fetch deregistration_delay.timeout_seconds regardless of its position?
You can actually use JMESPath in the AWS CLI without needing jq:
aws elbv2 describe-target-group-attributes \
--target-group-arn arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-targets/73e2d6bc24d8a067 \
--query "Attributes[?Key=='deregistration_delay.timeout_seconds']|[0].Value" \
--output text
JMESPath was created by James Saryerwinnie, one of the authors of the AWS CLI. The JMESPath tutorial is well worth reading.
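Since the question asks specifically about jq: the same key-based lookup works with select, so the attribute's position in the array no longer matters. A minimal sketch against a trimmed copy of the sample output above; in practice you would pipe the aws elbv2 command into jq instead of reading a file.

```shell
# Trimmed copy of the describe-target-group-attributes output from above.
cat <<'EOF' > /tmp/tg-attributes.json
{
  "Attributes": [
    {"Value": "false", "Key": "stickiness.enabled"},
    {"Value": "300", "Key": "deregistration_delay.timeout_seconds"},
    {"Value": "lb_cookie", "Key": "stickiness.type"}
  ]
}
EOF
# select() matches on the Key field, so the attribute's position is irrelevant.
# Prints: 300
jq -r '.Attributes[] | select(.Key == "deregistration_delay.timeout_seconds") | .Value' /tmp/tg-attributes.json
```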
I am trying to write a query in the AWS CLI which will provide me with the ElastiCache snapshot names older than a specific creation date.
I tried a JMESPath query like:
aws elasticache describe-snapshots \
--region ap-southeast-1 \
--snapshot-source "manual" \
--query 'Snapshots[*].NodeSnapshots[?SnapshotCreateTime >`2022-10-01`] | [?not_null(node)]'
But this is giving me an empty result.
Here is a snippet of the aws elasticache describe-snapshots output:
{
"Snapshots": [{
"SnapshotName": "snapshot-name",
"ReplicationGroupId": "rep-id",
"ReplicationGroupDescription": "redis cluster",
"CacheClusterId": null,
"SnapshotStatus": "available",
"SnapshotSource": "automated",
"CacheNodeType": "cache.r6g.large",
"Engine": "redis",
"EngineVersion": "6.0.5",
"NumCacheNodes": null,
"PreferredAvailabilityZone": null,
"CacheClusterCreateTime": null,
"PreferredMaintenanceWindow": "sun:20:00-sun:20:00",
"TopicArn": null,
"Port": "6379",
"CacheParameterGroupName": "default.redis6.x.cluster.on",
"CacheSubnetGroupName": "redis-group",
"VpcId": "vpc-01bcajghfghj",
"AutoMinorVersionUpgrade": "true",
"SnapshotRetentionLimit": "18",
"SnapshotWindow": "20:00-21:00",
"NumNodeGroups": "1",
"AutomaticFailover": "enabled",
"NodeSnapshots": [{
"CacheClusterId": "redis-cluster-01",
"NodeGroupId": "001",
"CacheNodeId": "001",
"NodeGroupConfiguration": null,
"CacheSize": "20 GB",
"CacheNodeCreateTime": "1632909889675",
"SnapshotCreateTime": "1667246439000"
}],
"KmsKeyId": "kms-id.."
}]
}
If we take as an example the JSON given in the documentation:
{
"Snapshots": [
{
"SnapshotName": "automatic.my-cluster2-002-2019-12-05-06-38",
"NodeSnapshots": [
{
"CacheNodeId": "0001",
"SnapshotCreateTime": "2019-12-05T06:38:23Z"
}
]
},
{
"SnapshotName": "myreplica-backup",
"NodeSnapshots": [
{
"CacheNodeId": "0001",
"SnapshotCreateTime": "2019-11-26T00:25:01Z"
}
]
},
{
"SnapshotName": "my-cluster",
"NodeSnapshots": [
{
"CacheNodeId": "0001",
"SnapshotCreateTime": "2019-11-26T03:08:33Z"
}
]
}
]
}
Then you can see that you need to filter on SnapshotCreateTime, which is nested under the NodeSnapshots array.
So what you need here is a double filter:
One to filter by date:
[?SnapshotCreateTime > `2022-10-01`]
Then, one to exclude all snapshots whose NodeSnapshots array has been emptied by the previous filter:
[?NodeSnapshots[?SnapshotCreateTime > `2022-10-01`]]
And so, if you only care about the name of the snapshot, you can use the query:
Snapshots[?NodeSnapshots[?SnapshotCreateTime > `2022-10-01`]].SnapshotName
So, your command ends up being:
aws elasticache describe-snapshots \
--region ap-southeast-1 \
--snapshot-source "manual" \
--query 'Snapshots[?
NodeSnapshots[?SnapshotCreateTime > `2022-10-01`]
].SnapshotName'
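As a side note, the same shape — keep a snapshot only if at least one of its node snapshots passes the date filter, then project the names — can be reproduced locally with jq against the documentation sample, if you want to check the logic without calling AWS:

```shell
# The documentation sample, trimmed to two snapshots.
cat <<'EOF' > /tmp/snapshots.json
{
  "Snapshots": [
    {"SnapshotName": "automatic.my-cluster2-002-2019-12-05-06-38",
     "NodeSnapshots": [{"CacheNodeId": "0001", "SnapshotCreateTime": "2019-12-05T06:38:23Z"}]},
    {"SnapshotName": "myreplica-backup",
     "NodeSnapshots": [{"CacheNodeId": "0001", "SnapshotCreateTime": "2019-11-26T00:25:01Z"}]}
  ]
}
EOF
# Keep a snapshot only if at least one node snapshot is newer than the
# cutoff, then print its name -- the same shape as the JMESPath double filter.
# Prints: automatic.my-cluster2-002-2019-12-05-06-38
jq -r '.Snapshots[]
       | select(any(.NodeSnapshots[]; .SnapshotCreateTime > "2019-12-01"))
       | .SnapshotName' /tmp/snapshots.json
```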
Now, given the output you are showing in your question, your issue also comes from the fact that SnapshotCreateTime is an epoch timestamp in milliseconds, so you have to convert 2022-10-01 into that format.
If you are on Linux, you can do this within your command, with date:
aws elasticache describe-snapshots \
--region ap-southeast-1 \
--snapshot-source "manual" \
--query "Snapshots[?
NodeSnapshots[?
SnapshotCreateTime > \`$(date --date='2022-10-01' +'%s')000\`
]
].SnapshotName"
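You can sanity-check that conversion locally before wiring it into the query. This sketch assumes GNU date (per the Linux note above); -u is added here so the result is pinned to UTC and reproducible, whereas the command above uses the local timezone:

```shell
# Convert the calendar date to epoch milliseconds; -u pins the conversion
# to UTC so the result does not depend on the local timezone.
cutoff_ms="$(date -u --date='2022-10-01' +%s)000"
echo "$cutoff_ms"   # prints 1664582400000

# A SnapshotCreateTime like the one in the question can now be compared
# numerically against the cutoff.
snapshot_time=1667246439000
if [ "$snapshot_time" -gt "$cutoff_ms" ]; then
  echo "snapshot is newer than the cutoff"
fi
```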
By using the command below I am able to get the details of my Auto Scaling group:
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name my-ASG --region=eu-west-1
But I need the value of one particular tag only, instead of the whole output, in a Windows terminal. Can someone help me?
You can get all the tags for your ASG like the following.
aws autoscaling describe-tags --filters Name=auto-scaling-group,Values=my-asg-autoscaling --query 'Tags[].{Key: Key, Value: Value}'
Here is the result (dummy values):
[
{
"Key": "Env",
"Value": "qa"
},
{
"Key": "Function",
"Value": "as"
},
{
"Key": "Name",
"Value": "my-asg-autoscaling"
},
{
"Key": "Project",
"Value": "test"
},
{
"Key": "VPC",
"Value": "nonprod"
},
{
"Key": "monitored",
"Value": "non-prod"
}
]
If instead you want to get a particular tag, e.g. Function in the example, you can get it with the following query:
aws autoscaling describe-tags --filters Name=auto-scaling-group,Values=my-asg-autoscaling --query 'Tags[?Key==`Function`].Value[]'
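If you already have the tag JSON on hand, a jq alternative is to fold the Key/Value pairs into a single object and index it. A sketch against a trimmed copy of the dummy values above:

```shell
# Trimmed copy of the dummy tag list from the answer above.
cat <<'EOF' > /tmp/asg-tags.json
[
  {"Key": "Env", "Value": "qa"},
  {"Key": "Function", "Value": "as"},
  {"Key": "Name", "Value": "my-asg-autoscaling"}
]
EOF
# Fold the Key/Value pairs into one object, then index the tag you want.
jq -r 'map({(.Key): .Value}) | add | .Function' /tmp/asg-tags.json   # prints: as
```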
I have a script that fetches the list of instances having tag x with value abc. The count of EC2 instances returned is in the hundreds, and for each instance I need to fetch two tag values. Not all instances will have both tags; it could be one, both, or none. For now I am issuing two calls to get the value of each tag (this is in a bash shell):
market=`aws ec2 describe-tags --filters "Name=resource-id,Values=$id" "Name=key,Values=market" --query Tags[].Value --region $aws_region --output text`
service=`aws ec2 describe-tags --filters "Name=resource-id,Values=$id" "Name=key,Values=service" --query Tags[].Value --region $aws_region --output text`
Is there any way to fetch the values of both tags in a single call?
I have 4 instances like this:
i-020f43a6253e1dd25 tags:market=1
i-0a5c4b42fe3e75c15 tags:service=1
i-027ca3de0fe11f1d3 tags:market=4,service=4
i-0e77b17601f9b2fd2 tags:none
Server-side filtering using --filters returns 4 matching records:
% aws ec2 describe-tags --filters "Name=key,Values=market,service"
{
"Tags": [
{
"Key": "market",
"ResourceId": "i-020f43a6253e1dd25",
"ResourceType": "instance",
"Value": "1"
},
{
"Key": "market",
"ResourceId": "i-027ca3de0fe11f1d3",
"ResourceType": "instance",
"Value": "4"
},
{
"Key": "service",
"ResourceId": "i-027ca3de0fe11f1d3",
"ResourceType": "instance",
"Value": "4"
},
{
"Key": "service",
"ResourceId": "i-0a5c4b42fe3e75c15",
"ResourceType": "instance",
"Value": "1"
}
]
}
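A single describe-tags call filtered on both keys, as shown above, returns everything needed; the per-instance split can then happen locally. The jq grouping below is a sketch against the sample records, not a command from the thread:

```shell
# The 4 matching records returned by the single describe-tags call above.
cat <<'EOF' > /tmp/tags.json
{
  "Tags": [
    {"Key": "market", "ResourceId": "i-020f43a6253e1dd25", "ResourceType": "instance", "Value": "1"},
    {"Key": "market", "ResourceId": "i-027ca3de0fe11f1d3", "ResourceType": "instance", "Value": "4"},
    {"Key": "service", "ResourceId": "i-027ca3de0fe11f1d3", "ResourceType": "instance", "Value": "4"},
    {"Key": "service", "ResourceId": "i-0a5c4b42fe3e75c15", "ResourceType": "instance", "Value": "1"}
  ]
}
EOF
# Group the tags per instance and print both values, with "-" for a
# missing tag, so one API call covers every instance at once.
jq -r '.Tags
       | group_by(.ResourceId)[]
       | "\(.[0].ResourceId) market=\(map(select(.Key=="market"))[0].Value // "-") service=\(map(select(.Key=="service"))[0].Value // "-")"' /tmp/tags.json
```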
I'm trying to parse JSON output from the AWS CLI. What I'm looking for are security group names with specific tags below them. The two commands that work are
$ aws ec2 describe-security-groups | jq -r '.SecurityGroups[].GroupName'
default
mysqlsg
apachesg
default
Then I run
$ aws ec2 describe-security-groups | jq -r '.SecurityGroups[].Tags[]|select(.Key == "Service")'
{
"Key": "Service",
"Value": "default"
}
{
"Key": "Service",
"Value": "MySQL"
}
{
"Key": "Service",
"Value": "Apache"
}
{
"Key": "Service",
"Value": "default"
}
I'd like each group to have the Service tag below it, so I tried this, but it didn't work:
$ aws ec2 describe-security-groups | jq -r '.SecurityGroups[].GroupName,.SecurityGroups[].Tags[]|select(.Key == "Service")'
jq: error (at <stdin>:225): Cannot index string with string "Key"
You can do this with AWS CLI query parameters; try the below and it should work.
aws ec2 describe-security-groups --query 'SecurityGroups[].{Tags:Tags[?Key==`Name`].Value|[0],GroupName:GroupName}'
Output:
{
"Tags": "demo",
"GroupName": "demo"
}
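If you do want to stay with jq, the error in the question comes from streaming all group names first and all tags afterwards, so the two .SecurityGroups[] iterations are never paired up. Iterating the groups once and emitting each name together with its own tag fixes that. A sketch against a trimmed sample:

```shell
# Trimmed describe-security-groups output with one Service tag per group.
cat <<'EOF' > /tmp/sgs.json
{
  "SecurityGroups": [
    {"GroupName": "mysqlsg", "Tags": [{"Key": "Service", "Value": "MySQL"}]},
    {"GroupName": "apachesg", "Tags": [{"Key": "Service", "Value": "Apache"}]}
  ]
}
EOF
# Iterate the groups one at a time, so each name is emitted right before
# its own Service tag value instead of the two streams being concatenated.
# Prints: mysqlsg, MySQL, apachesg, Apache (one per line)
jq -r '.SecurityGroups[]
       | .GroupName,
         (.Tags[] | select(.Key == "Service") | .Value)' /tmp/sgs.json
```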
I'm curious whether we can create a trigger on an AWS S3 bucket programmatically.
Given are an S3 bucket and an AWS Lambda function.
The AWS Lambda function was created via the CLI and can be updated or recreated at any time with CLI-based commands.
aws lambda create-function \
--region us-east-1 \
--function-name encodeVideo \
--zip-file fileb:///tmp/encode_video.zip \
--role $LAMBDA_ROLE_ARN \
--handler encode_video.handler \
--runtime nodejs6.10 \
--timeout 10 \
--memory-size 1024
aws lambda add-permission \
--function-name encodeVideo \
--region us-east-1 \
--statement-id some-unique-id \
--action "lambda:InvokeFunction" \
--principal s3.amazonaws.com \
--source-arn arn:aws:s3:::**** \
--source-account ***********
Now I want to configure the S3 bucket so that it invokes the Lambda function automatically on every new object that is created.
For now I did this in the AWS Console in the web browser, as one can see in the screenshot. But I want to be able to set up the whole scenario with CLI commands. How can I do this?
I've figured out that it needs something like:
aws s3api put-bucket-notification-configuration --region us-east-1 \
--bucket **** \
--notification-configuration file://encodeVideoConfiguration.json
But I couldn't figure out what the content of encodeVideoConfiguration.json should be.
The document structure of --notification-configuration is described in detail in the AWS CLI docs for the same call:
{
"TopicConfigurations": [
{
"Id": "string",
"TopicArn": "string",
"Events": ["s3:ReducedRedundancyLostObject"|"s3:ObjectCreated:*"|"s3:ObjectCreated:Put"|"s3:ObjectCreated:Post"|"s3:ObjectCreated:Copy"|"s3:ObjectCreated:CompleteMultipartUpload"|"s3:ObjectRemoved:*"|"s3:ObjectRemoved:Delete"|"s3:ObjectRemoved:DeleteMarkerCreated", ...],
"Filter": {
"Key": {
"FilterRules": [
{
"Name": "prefix"|"suffix",
"Value": "string"
}
...
]
}
}
}
...
],
"QueueConfigurations": [
{
"Id": "string",
"QueueArn": "string",
"Events": ["s3:ReducedRedundancyLostObject"|"s3:ObjectCreated:*"|"s3:ObjectCreated:Put"|"s3:ObjectCreated:Post"|"s3:ObjectCreated:Copy"|"s3:ObjectCreated:CompleteMultipartUpload"|"s3:ObjectRemoved:*"|"s3:ObjectRemoved:Delete"|"s3:ObjectRemoved:DeleteMarkerCreated", ...],
"Filter": {
"Key": {
"FilterRules": [
{
"Name": "prefix"|"suffix",
"Value": "string"
}
...
]
}
}
}
...
],
"LambdaFunctionConfigurations": [
{
"Id": "string",
"LambdaFunctionArn": "string",
"Events": ["s3:ReducedRedundancyLostObject"|"s3:ObjectCreated:*"|"s3:ObjectCreated:Put"|"s3:ObjectCreated:Post"|"s3:ObjectCreated:Copy"|"s3:ObjectCreated:CompleteMultipartUpload"|"s3:ObjectRemoved:*"|"s3:ObjectRemoved:Delete"|"s3:ObjectRemoved:DeleteMarkerCreated", ...],
"Filter": {
"Key": {
"FilterRules": [
{
"Name": "prefix"|"suffix",
"Value": "string"
}
...
]
}
}
}
...
]
}
For your case, you'd just provide the LambdaFunctionConfigurations field of the JSON structure.
This is the JSON configuration you want to create.
{
"LambdaFunctionConfigurations": [
{
"Id": "s3eventtriggerslambda",
"LambdaFunctionArn": "theactualarn",
"Events": ["s3:ObjectCreated:*"],
"Filter": {
"Key": {
"FilterRules": [
{
"Name": "suffix",
"Value": "thesuffix"
},
{
"Name": "prefix",
"Value": "theprefix"
}
]
}
}
}
]
}
Copy the above JSON to a file named "s3triggerlambdaconfig.json".
Then, from the AWS CLI:
aws s3api put-bucket-notification-configuration \
--bucket bucketname \
--notification-configuration file://s3triggerlambdaconfig.json
An example Lambda ARN looks like this: arn:aws:lambda:us-east-1:550060223145:function:lambda-function-test
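A minimal notification document also works without the Filter block. The snippet below writes such a document and sanity-checks that it parses before handing it to put-bucket-notification-configuration; the ARN and account id here are placeholders, not values from the thread:

```shell
# Minimal notification document: Filter is optional, Events is enough.
# The ARN and account id are placeholders, not values from the thread.
cat <<'EOF' > /tmp/s3triggerlambdaconfig.json
{
  "LambdaFunctionConfigurations": [
    {
      "Id": "s3eventtriggerslambda",
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:encodeVideo",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
EOF
# Sanity-check the document parses before handing it to the CLI.
jq -r '.LambdaFunctionConfigurations[0].Events[0]' /tmp/s3triggerlambdaconfig.json   # prints: s3:ObjectCreated:*
```

The file can then be passed as --notification-configuration file:///tmp/s3triggerlambdaconfig.json, as in the command above.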
Were you ever able to get this to work? I am looking for something very similar and so far have not been able to get it to work.
I want to trigger a Lambda function on S3 object add/delete, and I want to do it from the CLI with the source bucket passed as an argument.