CloudWatch logs on terminal - amazon-web-services

I am using AWS Lambda for my application. For logs, I have to use the console UI, which I really don't like. Is there a way that I could connect to CloudWatch Logs locally and then watch the logs with a tail-like command? Or could I access the CloudWatch server to see the logs? Basically, I want to see the logs in my terminal. If there is any way to do that, please let me know.
Thanks for your help.

You can use the AWS CLI to get your logs in near real time. See: get-log-events
AWS doesn't provide built-in functionality to tail the logs, but there are a few third-party tools you can use. I have used jorgebastida/awslogs, which was sufficient for my needs.
Update 02/25/2021: Thanks to @adavea, I just checked and found AWS has added a new feature to tail CloudWatch logs:
aws logs tail
--follow (boolean) Whether to continuously poll for new logs.
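For example, assuming a Lambda function whose log group is /aws/lambda/my-function (the name here is illustrative), a live tail with AWS CLI v2 looks like:

```shell
# Follow new log events as they arrive (Ctrl-C to stop);
# --since limits the initial backlog to the last 10 minutes
aws logs tail /aws/lambda/my-function --follow --since 10m
```

This requires valid AWS credentials and AWS CLI v2; the v1 CLI does not have the tail subcommand.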

There are some command line tools like cwtail and awslogs that do a -f follow tail.
Your other option is a free tool I created called SenseLogs that does a live tail in your browser. It is 100% browser based. See https://github.com/sensedeep/senselogs/blob/master/README.md for details.

On my Linux/macOS/Cygwin console, this will give you the events from the latest log stream.
Substitute $1 with your group name:
aws logs get-log-events --log-group-name "/aws/lambda/$1" --log-stream-name "$(aws logs describe-log-streams --log-group-name "/aws/lambda/$1" --max-items 1 --descending --order-by LastEventTime --query 'logStreams[0].logStreamName' --output text)"
Note that you will need to install the AWS CLI (https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html). Using --query avoids fragile grep/sed parsing of the JSON output.
I wrapped the above in a sh function:
function getcw() {
  aws logs get-log-events --log-group-name "/aws/lambda/$1" --log-stream-name "$(aws logs describe-log-streams --log-group-name "/aws/lambda/$1" --max-items 1 --descending --order-by LastEventTime --query 'logStreams[0].logStreamName' --output text)"
}
and can view the latest log for my function chai-lambda-trigger using the command
$ getcw chai-lambda-trigger
If you wanted just the tail of the output, you could do
$ getcw chai-lambda-trigger | tail

Related

Deleting a CloudWatch log group via the CLI

I need to delete CloudWatch log groups via the AWS CLI and am running this script:
aws logs describe-log-groups --region eu-west-2 | \
jq -r .logGroups[].logGroupName | \
xargs -L 1 -I {} aws logs delete-log-group --log-group-name {}
The first bit works fine, returning the log group names. Unfortunately, I can't seem to pipe it to xargs, and it returns the following error:
An error occurred (ResourceNotFoundException) when calling the DeleteLogGroup operation: The specified log group does not exist.
I would be appreciative of any pointers.
As an alternative to using the AWS CLI, here's a Python script to delete log groups:
import boto3

logs_client = boto3.client('logs')

# describe_log_groups is paginated; use a paginator so log groups
# beyond the first page of results get deleted too
paginator = logs_client.get_paginator('describe_log_groups')
for page in paginator.paginate():
    for log_group in page['logGroups']:
        logs_client.delete_log_group(logGroupName=log_group['logGroupName'])

How to stop a log stream that has been started in CloudWatch by the CloudWatch agent

How do I stop a log stream that has been started on EC2 by the CloudWatch agent? How do I delete it, or permanently stop sending logs to that log group?
You can delete the log group: select the log group -> Actions -> Delete.
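The CLI equivalent, assuming a group named /my/log-group (substitute your own group name):

```shell
# Permanently deletes the log group and all of its log streams and events
aws logs delete-log-group --log-group-name /my/log-group
```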
But why are you not stopping the agent itself?
At a command prompt, type the following command:
sudo service awslogs stop
If you are running Amazon Linux 2, type the following command:
sudo service awslogsd stop
See: Stop the CloudWatch Logs agent
Or, if you want to delete all log groups:
aws logs describe-log-groups --query 'logGroups[*].logGroupName' --output text | tr '\t' '\n' | while read -r x; do aws logs delete-log-group --log-group-name "$x"; done
This clears all log groups.
sudo service awslogs stop
This will stop all logs from being sent.
If you want to stop sending a particular log group while continuing to send others, there is no per-group stop command for the agent. Instead, remove that log group's section from the agent configuration file (/etc/awslogs/awslogs.conf on the classic agent) and restart the agent.

AWS Lambda Logs using AWSCLI: How can I access Lambda logs using AWSCLI?

I was trying to work with AWS Lambda using the AWS CLI on an Ubuntu EC2 instance, and I do not have access to the AWS console. Note that I am not using Serverless or Zappa; I directly zip my main.py file along with the dependency files as mentioned here.
I run the function like this
aws lambda invoke --function-name python-test --invocation-type RequestResponse outfile.txt
The errors given in the outfile are very vague and don't help in debugging; rather, they confuse me more. Using the admin's system, I am able to recognize the errors when I run a test in the console, but how can I check those logs using the AWS CLI?
So I tried running aws cloudwatch list-metrics > cloudwatch_logs.log
and, searching for the function name 'python-test' in the cloudwatch_logs.log file, I am able to find the Namespace, MetricName, and Dimensions for this function. But how do you access the logs?
Any help, with links to similar examples, is greatly appreciated!
First, get the log group name:
aws logs describe-log-groups --query logGroups[*].logGroupName
[
"/aws/lambda/MyFunction"
]
Then, list the log streams for that log group:
aws logs describe-log-streams --log-group-name '/aws/lambda/MyFunction' --query logStreams[*].logStreamName
[
"2018/02/07/[$LATEST]140c61ffd59442b7b8405dc91d708fdc"
]
Finally, get the log events for that stream:
aws logs get-log-events --log-group-name '/aws/lambda/MyFunction' --log-stream-name '2018/02/07/[$LATEST]140c61ffd59442b7b8405dc91d708fdc'
{
"nextForwardToken": "f/33851760153448034063427449515194237355552440866456338433",
"events": [
{
"ingestionTime": 1517965421523,
"timestamp": 1517965421526,
"message": "START RequestId: bca9c478-0ba2-11e8-81db-4bccfc644168 Version: $LATEST\n"
},
{
"ingestionTime": 1517965424581,
"timestamp": 1517965424567,
"message": "END RequestId: bca9c478-0ba2-11e8-81db-4bccfc644168\n"
},
{
"ingestionTime": 1517965424581,
"timestamp": 1517965424567,
"message": "REPORT RequestId: bca9c478-0ba2-11e8-81db-4bccfc644168\tDuration: 3055.39 ms\tBilled Duration: 3100 ms \tMemory Size: 128 MB\tMax Memory Used: 35 MB\t\n"
}
],
"nextBackwardToken": "b/33851760085631457914695824538087252860391482425578356736"
}
jq flavor:
List AWS lambda group name (if list is too big you might want to filter it with grep):
aws logs describe-log-groups | jq -r ".logGroups[].logGroupName"
Then read message property from latest stream with:
LOG_GROUP_NAME="/aws/lambda/awesomeFunction"
LOG_STREAM_NAME=$(aws logs describe-log-streams --log-group-name "${LOG_GROUP_NAME}" | jq -r '.logStreams | sort_by(.creationTime) | .[-1].logStreamName')
aws logs get-log-events --log-group-name "${LOG_GROUP_NAME}" --log-stream-name "${LOG_STREAM_NAME}" | jq -r '.events[] | select(has("message")) | .message'
You might want to put this in a logs.sh file.
If you want an earlier or a different stream, tweak the .[-1] index (the filter sorts streams oldest to newest, so .[-1] is the latest).
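To see what the stream-selection filter does, you can run it on a canned payload shaped like describe-log-streams output (the stream names and timestamps below are made up):

```shell
payload='{"logStreams":[
  {"logStreamName":"2021/01/01/[$LATEST]aaa","creationTime":1609459200000},
  {"logStreamName":"2021/01/02/[$LATEST]bbb","creationTime":1609545600000}]}'
# sort_by(.creationTime) orders oldest-to-newest, so .[-1] is the latest stream
echo "$payload" | jq -r '.logStreams | sort_by(.creationTime) | .[-1].logStreamName'
# prints: 2021/01/02/[$LATEST]bbb
```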
Using the AWS CLI can be a bit irritating because the stream name changes as you modify your function. I've found that using awslogs (https://github.com/jorgebastida/awslogs) is a nicer workflow.
List the groups:
awslogs groups
Filter results.
awslogs groups|grep myfunction
Then get the logs from the group.
awslogs get /aws/lambda/ShortenStack-mutationShortenLambdaBC1758AD-6KW0KAD3TYVE
It defaults to the last 5 minutes, but you can add the -s parameter to choose a time range, e.g. -s 10m for the last 10 minutes.
The output is colourised if you're at the terminal, or plain if you're piping it through other commands, e.g. grep to find something.

AWS EC2 user data: docker system prune before starting an ECS task

I have followed the below code from AWS to start an ECS task when the EC2 instance launches. This works great.
However, my containers only run for a few minutes (max ten); once finished, the EC2 instance is shut down using a CloudWatch rule.
The problem I am finding is that because the instances shut down straight after the task is finished, the automatic clean-up of the Docker containers doesn't happen, resulting in the EC2 instance filling up and causing other tasks to fail. I have tried lowering the time between clean-ups, but it can still be a bit flaky.
My next idea was to add docker system prune -a -f to the user data of the EC2 instance, but it doesn't seem to get run. I think it's because I am putting it in the wrong part of the user data; I have searched through the docs for this but can't find anything to help.
Question: where can I put the docker prune command in the user data to ensure that the prune command is run at each launch?
--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
# Specify the cluster that the container instance should register into
cluster=your_cluster_name
# Write the cluster configuration variable to the ecs.config file
# (add any other configuration variables here also)
echo ECS_CLUSTER=$cluster >> /etc/ecs/ecs.config
# Install the AWS CLI and the jq JSON parser
yum install -y aws-cli jq
--==BOUNDARY==
Content-Type: text/upstart-job; charset="us-ascii"
#upstart-job
description "Amazon EC2 Container Service (start task on instance boot)"
author "Amazon Web Services"
start on started ecs
script
exec 2>>/var/log/ecs/ecs-start-task.log
set -x
until curl -s http://localhost:51678/v1/metadata
do
sleep 1
done
# Grab the container instance ARN and AWS region from instance metadata
instance_arn=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F/ '{print $NF}' )
cluster=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .Cluster' | awk -F/ '{print $NF}' )
region=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F: '{print $4}')
# Specify the task definition to run at launch
task_definition=my_task_def
# Run the AWS CLI start-task command to start your task on this container instance
aws ecs start-task --cluster $cluster --task-definition $task_definition --container-instances $instance_arn --started-by $instance_arn --region $region
end script
--==BOUNDARY==--
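One placement that should work (a sketch, not verified against your exact template; adapt the boundary string to your own) is an additional text/x-shellscript part before the closing boundary, since cloud-init runs each shell-script part at first boot, which is every launch when a fresh instance is created each run:

```
--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
# Remove stopped containers, unused images, networks and build cache
# so the disk is clean before ECS places the next task
docker system prune -a -f
--==BOUNDARY==--
```

Keep the closing --==BOUNDARY==-- as the very last line of the whole user data, and note that if the Docker daemon has not started yet when this part runs, you may need to start or wait for the docker service first.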
I hadn't considered terminating and then creating a new instance.
I currently use CloudFormation to create the EC2 instance.
What's the best workflow for terminating an EC2 instance after the task definition has completed, then creating a new one on a schedule and registering it to the ECS cluster?
A CloudWatch scheduled rule to start a Lambda that creates the EC2 instance and then registers it to the cluster?

How do you delete an AWS ECS Task Definition?

Once you've created a task definition in Amazon's EC2 Container Service, how do you delete or remove it?
It's a known issue. Once you de-register a Task Definition it goes into INACTIVE state and clutters up the ECS Console.
If you want to vote for it to be fixed, there is an issue on Github. Simply give it a thumbs up, and it will raise the priority of the request.
I've recently found this gist (thanks a lot to the creator for sharing!), which will deregister all task definitions for your specific region; maybe you can adapt it to skip the ones you want to keep: https://gist.github.com/jen20/e1c25426cc0a4a9b53cbb3560a3f02d1
You need to have jq to run it:
brew install jq
I "hard-coded" my region, for me it's eu-central-1, so be sure to adapt it for your use-case:
#!/usr/bin/env bash
get_task_definition_arns() {
aws ecs list-task-definitions --region eu-central-1 \
| jq -M -r '.taskDefinitionArns | .[]'
}
delete_task_definition() {
local arn=$1
aws ecs deregister-task-definition \
--region eu-central-1 \
--task-definition "${arn}" > /dev/null
}
for arn in $(get_task_definition_arns)
do
echo "Deregistering ${arn}..."
delete_task_definition "${arn}"
done
Then when I run it, it starts removing them:
Deregistering arn:aws:ecs:REGION:YOUR_ACCOUNT_ID:task-definition/NAME:REVISION...
One-line approach inspired by Anna A's reply:
aws ecs list-task-definitions --region eu-central-1 \
| jq -M -r '.taskDefinitionArns | .[]' \
| xargs -I {} aws ecs deregister-task-definition \
--region eu-central-1 \
--task-definition {} \
| jq -r '.taskDefinition.taskDefinitionArn'
There is no option to delete a task definition in the AWS console.
But you can deregister a task definition by executing the following command for each revision that you have:
aws ecs deregister-task-definition --task-definition task_definition_name:revision_no
I created the following gist to safely review, filter, and deregister AWS task definitions and revisions in bulk (max 100 at a time) using a JS CLI:
https://gist.github.com/shivam-nagar/aa79b02b74f616f8714d51e419bd10de
You can use this to deregister all revisions for a task definition. This will result in the task definition itself being marked as inactive.
Now it's supported.
I just went into Task Definitions, clicked Actions, then Deregister, and it was removed from the UI.
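For what it's worth, newer AWS CLI versions also expose delete-task-definitions (note the plural), which permanently deletes INACTIVE revisions rather than just hiding them. A hedged sketch, substituting your own family and revision for my-task:1:

```shell
# Deregister first (moves the revision to INACTIVE), then delete it for good
aws ecs deregister-task-definition --task-definition my-task:1
aws ecs delete-task-definitions --task-definitions my-task:1
```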