Deleting a CloudWatch log group via the CLI - amazon-web-services

I need to delete a CloudWatch log group via the AWS CLI and am running this script:
aws logs describe-log-groups --region eu-west-2 | \
jq -r .logGroups[].logGroupName | \
xargs -L 1 -I {} aws logs delete-log-group --log-group-name {}
The first bit works fine, returning the log group names. Unfortunately, I can't seem to pipe it to xargs and it returns the following error:
An error occurred (ResourceNotFoundException) when calling the DeleteLogGroup operation: The specified log group does not exist.
I would appreciate any pointers.

As an alternative to using the AWS CLI, here's a Python script to delete log groups:
import boto3

logs_client = boto3.client('logs')
# describe_log_groups is paginated; using a paginator covers accounts
# with more log groups than fit in a single response
paginator = logs_client.get_paginator('describe_log_groups')
for page in paginator.paginate():
    for log_group in page['logGroups']:
        logs_client.delete_log_group(logGroupName=log_group['logGroupName'])

Related

How can I find out the remaining duration in my current STS role session?

I am using the AWS CLI to communicate with my AWS account. I start by assuming a role in my AWS account with a command like this:
aws sts assume-role \
--role-arn arn:aws:iam::000011112222:role/MyRole \
--role-session-name my_session_name \
--serial-number arn:aws:iam::333344445555:mfa/me \
--token-code [redacted] \
--duration-seconds 21600
At a later point I would like to use the AWS CLI to query AWS to understand how much time I have left in my role before the role expires.
Does an AWS CLI command exist for this purpose?
I've not found a good solution for this, but here's a hacky one that worked for me:
Looking under my .aws/sso/cache/ folder I found a number of json files.
Those JSON files represent recent sessions; each contains a property expiresAt which gives the expiry date of the related session.
On a Windows device you can run the PowerShell below to get a quick peek at what's in all these JSON files:
gci "$env:userprofile\.aws\sso\cache" -filter '*.json' | %{get-content $_.FullName -raw | convertfrom-json | fl}
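The same idea can be sketched cross-platform in Python. This is only a sketch of the hack above, not an official API: the `~/.aws/sso/cache` path and the `expiresAt` property come from the answer, and the timestamp format is assumed to be ISO-8601 with a trailing `Z`.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def seconds_remaining(expires_at):
    """Seconds until an ISO-8601 'expiresAt' timestamp such as
    '2021-02-25T18:30:00Z'. Negative means the session has expired."""
    expiry = datetime.fromisoformat(expires_at.replace("Z", "+00:00"))
    return (expiry - datetime.now(timezone.utc)).total_seconds()

def cached_sessions(cache_dir="~/.aws/sso/cache"):
    """Yield (filename, expiresAt) for every cache file that records an expiry."""
    for path in Path(cache_dir).expanduser().glob("*.json"):
        data = json.loads(path.read_text())
        if "expiresAt" in data:
            yield path.name, data["expiresAt"]
```

Iterating `cached_sessions()` and printing `seconds_remaining(expires_at)` for each entry gives roughly the same peek as the PowerShell one-liner.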

Adding SQS redrive policy using AWS CLI command

I am trying to set a redrive policy for SQS using the AWS CLI command below, but I am seeing an error related to the redrive JSON. Can you please let me know how I can fix this?
redrive_policy="{\"RedrivePolicy\":{\"deadLetterTargetArn\":\"$dlq_arn\",\"maxReceiveCount\":\"15\"}}"
AWS CLI COMMAND
aws sqs set-queue-attributes --queue-url https://queue.amazonaws.com/12345678/test-queue --attributes $redrive_policy --region=us-east-1
Error Message
Parameter validation failed: Invalid type for parameter
Attributes.RedrivePolicy, value: OrderedDict([(u'deadLetterTargetArn',
u'arn:aws:sqs:us-east-1:12345678:dlq'), (u'maxReceiveCount', u'15')]),
type: , valid types:
Have you tried just creating the JSON in a separate file and passing it as an argument to your AWS CLI command? I find it's difficult to get all of the escaping correct when passing the JSON as a parameter. So you'd basically do it as the example shows in the AWS documentation:
https://docs.aws.amazon.com/cli/latest/reference/sqs/set-queue-attributes.html#examples
So first you'd create a new file called "set-queue-attributes.json" like so:
{
"DelaySeconds": "10",
"MaximumMessageSize": "131072",
"MessageRetentionPeriod": "259200",
"ReceiveMessageWaitTimeSeconds": "20",
"RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:80398EXAMPLE:MyDeadLetterQueue\",\"maxReceiveCount\":\"1000\"}",
"VisibilityTimeout": "60"
}
Then run the command like this:
aws sqs set-queue-attributes --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyNewQueue --attributes file://set-queue-attributes.json --region=us-east-1
If you want to do it in a single command, you can use this example:
aws sqs set-queue-attributes \
--queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyNewQueue \
--attributes '{
"RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:80398EXAMPLE:MyDeadLetterQueue\",\"maxReceiveCount\":\"1000\"}",
"MessageRetentionPeriod": "259200",
"VisibilityTimeout": "90"
}'
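If the escaping is the sticking point, a JSON library can do the inner stringification for you. A minimal Python sketch (the ARN and attribute values are placeholders) that produces text suitable for pasting after --attributes:

```python
import json

# Placeholder ARN; substitute your own dead-letter queue ARN.
dlq_arn = "arn:aws:sqs:us-east-1:80398EXAMPLE:MyDeadLetterQueue"

# RedrivePolicy must be a *string* containing JSON, so the inner policy is
# serialized once with json.dumps before being embedded in the attributes map.
redrive_policy = json.dumps({"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "1000"})
attributes = {"RedrivePolicy": redrive_policy, "VisibilityTimeout": "60"}

# Serializing the whole map once more yields the command-line text.
print(json.dumps(attributes))
```

This double-encoding (a JSON string inside a JSON document) is exactly what the hand-escaped examples above are producing manually.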
Three methods to achieve this:
Note: The solutions also work on any other AWS CLI commands that require a stringified JSON
1. Using the Command-line JSON processor jq (Recommended)
This method is recommended for several reasons: I've found jq a handy tool when working with the AWS CLI, as the need to stringify JSON comes up quite frequently.
Install on Ubuntu: sudo apt install jq
Basic options:
jq -R: reads raw input, turning each line of piped-in JSON text into a stringified JSON string
jq -c: outputs compact JSON, eliminating spacing and newline characters
The benefit is that you can write the JSON as JSON and pipe the result into jq -R.
Method 1:
aws sqs set-queue-attributes \
--queue-url "https://sqs.ap-south-1.amazonaws.com/IAMEXAMPLE12345678/ExampleQueue" \
--attributes RedrivePolicy=$(echo '{"maxReceiveCount":500,"deadLetterTargetArn":"arn:aws:sqs:ap-south-1:IAMEXAMPLE12345678:ExampleDeadLetterQueue"}' | jq -R)
OR if you have a sqs-redrive-policy.json file:
Method 2:
In sqs-redrive-policy.json,
{
"maxReceiveCount": 500,
"deadLetterTargetArn": "arn:aws:sqs:ap-south-1:IAMEXAMPLE12345678:ExampleDeadLetterQueue"
}
Run in Command Line:
aws sqs set-queue-attributes \
--queue-url "https://sqs.ap-south-1.amazonaws.com/IAMEXAMPLE12345678/ExampleQueue" \
--attributes RedrivePolicy=$(cat ~/path/to/file/sqs-redrive-policy.json | jq -c | jq -R)
The second benefit is that you can modify only the RedrivePolicy attribute in isolation, without having to touch any of the other attributes.
A common confusion is the name set-queue-attributes (it would be better named put-queue-attributes): it doesn't overwrite all attributes, only the ones mentioned in the command. So if you already set a Policy attribute earlier during create-queue, this will not overwrite the Policy to null. In other words, it is safe to use.
2. Using a stringified JSON
This is a pain to be honest, and I avoid this.
aws sqs set-queue-attributes \
--queue-url "https://sqs.us-east-1.amazonaws.com/IAMEXAMPLE12345678/ExampleQueue" \
--attributes '{
"RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:ap-south-1:IAMEXAMPLE12345678:ExampleDeadLetterQueue\",\"maxReceiveCount\":\"500\"}"
}'
3. Using a file path URL to an attributes.json file (NOT sqs-redrive-policy.json)
This is my last preference, for two reasons:
It means setting all the attributes specified in the attributes.json file again in a single go.
It doesn't escape the pain of writing stringified JSON as text.
In attributes.json,
{
"RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:ap-south-1:IAMEXAMPLE12345678:ExampleDeadLetterQueue\", \"maxReceiveCount\":\"5\"}"
}
Run in command line:
aws sqs set-queue-attributes \
--queue-url "https://sqs.ap-south-1.amazonaws.com/IAMEXAMPLE12345678/ExampleQueue" \
--attributes file:///home/yourusername/path/to/file/attributes.json

How can I access AWS Lambda logs using the AWS CLI?

I was trying to work with AWS Lambda using the AWS CLI on an Ubuntu EC2 instance, and I do not have access to the AWS console. Note that I am not using Serverless or Zappa; I directly zip my main.py file along with the dependency files as mentioned here.
I run the function like this
aws lambda invoke --function-name python-test --invocation-type RequestResponse outfile.txt
The errors given in the outfile are very vague and don't help in debugging; rather, they confuse me more. Using the admin's system I am able to recognize the errors when I run a test in the console, but how can I check those logs using the AWS CLI?
So I tried running aws cloudwatch list-metrics > cloudwatch_logs.log
and searching for the function name 'python-test' in the cloudwatch_logs.log file. I am able to find Namespace, MetricName, and Dimensions for this function, but how do you access the logs?
Any help, with links to similar examples, is greatly appreciated!
First, get the log group name:
aws logs describe-log-groups --query logGroups[*].logGroupName
[
"/aws/lambda/MyFunction"
]
Then, list the log streams for that log group:
aws logs describe-log-streams --log-group-name '/aws/lambda/MyFunction' --query logStreams[*].logStreamName
[
"2018/02/07/[$LATEST]140c61ffd59442b7b8405dc91d708fdc"
]
Finally, get the log events for that stream:
aws logs get-log-events --log-group-name '/aws/lambda/MyFunction' --log-stream-name '2018/02/07/[$LATEST]140c61ffd59442b7b8405dc91d708fdc'
{
"nextForwardToken": "f/33851760153448034063427449515194237355552440866456338433",
"events": [
{
"ingestionTime": 1517965421523,
"timestamp": 1517965421526,
"message": "START RequestId: bca9c478-0ba2-11e8-81db-4bccfc644168 Version: $LATEST\n"
},
{
"ingestionTime": 1517965424581,
"timestamp": 1517965424567,
"message": "END RequestId: bca9c478-0ba2-11e8-81db-4bccfc644168\n"
},
{
"ingestionTime": 1517965424581,
"timestamp": 1517965424567,
"message": "REPORT RequestId: bca9c478-0ba2-11e8-81db-4bccfc644168\tDuration: 3055.39 ms\tBilled Duration: 3100 ms \tMemory Size: 128 MB\tMax Memory Used: 35 MB\t\n"
}
],
"nextBackwardToken": "b/33851760085631457914695824538087252860391482425578356736"
}
jq flavor:
List the AWS Lambda log group names (if the list is too long you might want to filter it with grep):
aws logs describe-log-groups | jq -r ".logGroups[].logGroupName"
Then read message property from latest stream with:
LOG_GROUP_NAME="/aws/lambda/awesomeFunction"
LOG_STREAM_NAME=$(aws logs describe-log-streams --log-group-name "${LOG_GROUP_NAME}" | jq -r '.logStreams | sort_by(.creationTime) | .[-1].logStreamName')
aws logs get-log-events --log-group-name "${LOG_GROUP_NAME}" --log-stream-name "${LOG_STREAM_NAME}" | jq -r '.events[] | select(has("message")) | .message'
You might want to put this in a logs.sh file.
If you want more or other streams, you might want to tweak the sort_by(.creationTime) | .[-1] part.
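The sort_by step in the jq pipeline above can be mirrored in plain Python; here's a sketch of picking the newest stream out of a describe-log-streams response (the creationTime and logStreamName field names are as returned by CloudWatch Logs):

```python
def latest_stream_name(log_streams):
    """Given the 'logStreams' list from describe-log-streams, return the
    name of the most recently created stream, or None if the list is empty."""
    if not log_streams:
        return None
    newest = max(log_streams, key=lambda s: s.get("creationTime", 0))
    return newest["logStreamName"]
```

Feeding the parsed JSON of `aws logs describe-log-streams` into this function gives the same stream name the jq expression selects.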
Using the AWS CLI can be a bit irritating because the stream name changes as you modify your function. I've found that using awslogs (https://github.com/jorgebastida/awslogs) is a nicer workflow.
List the groups:
awslogs groups
Filter results.
awslogs groups|grep myfunction
Then get the logs from the group.
awslogs get /aws/lambda/ShortenStack-mutationShortenLambdaBC1758AD-6KW0KAD3TYVE
It defaults to the last 5 minutes, but you can add the -s parameter to choose a time range, e.g. -s 10m for the last 10 minutes.
The output is colourised if you're at the terminal, or plain if you're piping it through other commands, e.g. grep to find something.

Cloudwatch logs on terminal

I am using AWS Lambda for my application. For logs, I currently have to use the console UI, which I really don't like. Is there a way I could connect to CloudWatch Logs locally and see the logs with a tail command, or access the CloudWatch server to see the logs? Basically, I want to see the logs in my terminal. If there is any way to do that, please let me know.
Thanks for your help.
You can use the AWS CLI to fetch your logs. See: get-log-events
AWS doesn't provide built-in functionality to tail a log, but there are a few third-party tools you can use. I have used jorgebastida/awslogs, which was sufficient for my needs.
Update 02/25/2021: Thanks to #adavea, I just checked and found AWS has added a new feature to tail the CW logs.
aws logs tail
--follow (boolean) Whether to continuously poll for new logs.
There are some command line tools like cwtail and awslogs that do a -f follow tail.
Your other option is a free tool I created called SenseLogs that does a live tail in your browser. It is 100% browser based. See https://github.com/sensedeep/senselogs/blob/master/README.md for details.
On my Linux/macOS/Cygwin console, this will print the latest log stream for a Lambda function.
Substitute $1 with your function name:
aws logs get-log-events --log-group-name "/aws/lambda/$1" --log-stream-name "$(aws logs describe-log-streams --log-group-name "/aws/lambda/$1" --max-items 1 --descending --order-by LastEventTime --query 'logStreams[0].logStreamName' --output text)"
Note that you will need to install awscli (https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html)
I wrapped the above in a shell function:
function getcw() {
  aws logs get-log-events --log-group-name "/aws/lambda/$1" --log-stream-name \
    "$(aws logs describe-log-streams --log-group-name "/aws/lambda/$1" --max-items 1 --descending --order-by LastEventTime --query 'logStreams[0].logStreamName' --output text)"
}
and can view the latest log for my logs in chai-lambda-trigger using the command
$ getcw chai-lambda-trigger
If you wanted just the tail of the output, you could do
$ getcw chai-lambda-trigger | tail

How do you delete an AWS ECS Task Definition?

Once you've created a task definition in Amazon's EC2 Container Service, how do you delete or remove it?
It's a known issue. Once you de-register a Task Definition it goes into INACTIVE state and clutters up the ECS Console.
If you want to vote for it to be fixed, there is an issue on Github. Simply give it a thumbs up, and it will raise the priority of the request.
I've recently found this gist (thanks a lot to the creator for sharing!), which will deregister all task definitions in a specific region; maybe you can adapt it to skip the ones you want to keep: https://gist.github.com/jen20/e1c25426cc0a4a9b53cbb3560a3f02d1
You need to have jq to run it:
brew install jq
I "hard-coded" my region, for me it's eu-central-1, so be sure to adapt it for your use-case:
#!/usr/bin/env bash

get_task_definition_arns() {
    aws ecs list-task-definitions --region eu-central-1 \
        | jq -M -r '.taskDefinitionArns | .[]'
}

delete_task_definition() {
    local arn=$1

    aws ecs deregister-task-definition \
        --region eu-central-1 \
        --task-definition "${arn}" > /dev/null
}

for arn in $(get_task_definition_arns)
do
    echo "Deregistering ${arn}..."
    delete_task_definition "${arn}"
done
Then when I run it, it starts removing them:
Deregistering arn:aws:ecs:REGION:YOUR_ACCOUNT_ID:task-definition/NAME:REVISION...
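If you want to keep some task definitions rather than deregister everything, the skip-list filtering the gist hints at can be sketched in Python before feeding ARNs to deregister-task-definition. The family names here are hypothetical, and the parsing assumes the standard ARN shape arn:aws:ecs:region:account:task-definition/family:revision:

```python
def arns_to_deregister(arns, keep_families):
    """Return the task-definition ARNs whose family is NOT in keep_families.
    ARN format assumed: arn:aws:ecs:region:account:task-definition/family:revision"""
    kept = []
    for arn in arns:
        # 'family:revision' is everything after the last '/', family before the last ':'
        family = arn.rsplit("/", 1)[-1].rsplit(":", 1)[0]
        if family not in keep_families:
            kept.append(arn)
    return kept
```

The resulting list can then be piped through xargs or looped over exactly as in the bash script above.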
One-liner approach inspired by Anna A's reply:
aws ecs list-task-definitions --region eu-central-1 \
| jq -M -r '.taskDefinitionArns | .[]' \
| xargs -I {} aws ecs deregister-task-definition \
--region eu-central-1 \
--task-definition {} \
| jq -r '.taskDefinition.taskDefinitionArn'
There is no option to delete a task definition in the AWS console.
But you can deregister (delete) a task definition by running the following command for each revision that you have:
aws ecs deregister-task-definition --task-definition task_definition_name:revision_no
I created the following gist to safely review, filter, and deregister AWS task definitions and revisions in bulk (max 100 at a time) using the JS CLI:
https://gist.github.com/shivam-nagar/aa79b02b74f616f8714d51e419bd10de
You can use this to deregister all revisions of a task definition, which will result in the task definition itself being marked as inactive.
Now it's supported.
I just went into Task Definitions, clicked Actions, then Deregister, and it was removed from the UI.