I would like to ask about Lambda and CloudWatch Logs. I see that once you create a Lambda, the log group behind it is always named like "/aws/lambda/<function name>".
What I would like is to create my own CloudWatch log group, "/myLogs", and configure the Lambda to log into that group. From the documentation I read, it looks like this was not possible in the past. Is there any update on that? Is it possible now, and how?
Thanks.
By default, the log group gets created with the name /aws/lambda/<function name>.
I haven't explored this through the console, but if you are using the CDK, you can use the LogRetention construct from the aws_lambda module.
https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_lambda/LogRetention.html
This will create a log group with the name you provide (though it will still follow the /aws/lambda/... convention).
You can set the retention days as well and manage the logs.
Give it a try.
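If you end up managing retention outside the CDK instead, the same fix is a one-call sketch with boto3 (the function name my-function is a placeholder; the AWS call needs credentials, so it is kept separate from the pure helper):

```python
def conventional_log_group(function_name: str) -> str:
    """Log group name Lambda creates by default for a function."""
    return f"/aws/lambda/{function_name}"

def set_retention(function_name: str, days: int = 14) -> None:
    """Set retention on the function's default log group.

    `days` must be one of the values CloudWatch accepts (1, 3, 5, 7, 14, ...).
    """
    import boto3  # deferred so the pure helper above works without boto3
    boto3.client("logs").put_retention_policy(
        logGroupName=conventional_log_group(function_name),
        retentionInDays=days,
    )

# Usage (requires AWS credentials):
# set_retention("my-function", days=14)
```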
I don't have much code to show.
I want to process all CloudWatch logs generated in the last day using a Lambda function.
I want to execute the Lambda at 6 AM to extract some information from the CloudWatch logs generated on the previous day and put it in a table.
Instead of your 6 AM idea, you could use a CloudWatch Logs subscription filter to trigger a Lambda function that processes and stores the log entries, as described in the step-by-step walkthrough Example 2: Subscription filters with AWS Lambda.
Or, even easier and without duplicating the data into a database: Analyzing log data with CloudWatch Logs Insights.
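If you go the subscription-filter route, note that CloudWatch Logs delivers the events to Lambda gzip-compressed and base64-encoded under awslogs.data. A minimal handler sketch (standard library only) that unpacks them:

```python
import base64
import gzip
import json

def lambda_handler(event, context):
    """Decode a CloudWatch Logs subscription event and return the messages."""
    payload = base64.b64decode(event["awslogs"]["data"])
    data = json.loads(gzip.decompress(payload))
    messages = [e["message"] for e in data["logEvents"]]
    # Replace this with your real processing (parse, aggregate, store, ...).
    print(f"{data['logGroup']}: {len(messages)} events")
    return messages
```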
I assume that you have a lot of Lambda functions and that each has its own CloudWatch log group.
You can try a CloudWatch log group subscription filter; with this feature you can stream your logs into any supported destination, such as Lambda.
Before that, you should prepare a Lambda function that puts your extracted data into a DynamoDB table.
References:
https://docs.aws.amazon.com/lambda/latest/dg/with-ddb-example.html
https://www.geeksforgeeks.org/aws-dynamodb-insert-data-using-aws-lambda/
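As a sketch of that pipeline (the table name ExtractedLogs and its key schema are assumptions, not from the answer), the handler below maps each decoded log event to a DynamoDB item, with the pure mapping function split out from the boto3 call:

```python
import base64
import gzip
import json

def to_item(log_group: str, log_event: dict) -> dict:
    """Map one CloudWatch log event to a DynamoDB item (key schema assumed)."""
    return {
        "log_group": log_group,             # assumed partition key
        "event_id": log_event["id"],        # assumed sort key
        "timestamp_ms": log_event["timestamp"],
        "message": log_event["message"],
    }

def lambda_handler(event, context):
    """Triggered by a subscription filter; writes each log event to DynamoDB."""
    data = json.loads(gzip.decompress(base64.b64decode(event["awslogs"]["data"])))
    import boto3  # deferred so to_item stays usable without boto3
    table = boto3.resource("dynamodb").Table("ExtractedLogs")  # hypothetical table
    for log_event in data["logEvents"]:
        table.put_item(Item=to_item(data["logGroup"], log_event))
```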
I have followed Datadog's documentation (here) for manually configuring an AWS account with Datadog. The setup includes a Lambda function provided by Datadog (here) that is triggered by a CloudWatch log group and pushes the logs to Datadog.
The problem is that when logs are pushed, Datadog's log forwarder Lambda converts the function name, tags, and the rest to lowercase. When I use '!Ref' while building a query for a Datadog monitor in CloudFormation, the query contains the autogenerated name of the Lambda, which is a mixture of lower- and uppercase letters. The query then does not work, because Datadog lowercases the function name while pushing the logs.
Can someone suggest a workaround here?
You could use a CloudFormation macro to transform strings in your template, in this case using a Lower operation to make your !Refs lowercase. Keep in mind that macros themselves have to be defined and deployed as Lambda functions.
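A minimal sketch of such a macro's Lambda handler (the macro name "Lower" and its InputString parameter are illustrative; you would register the function with a CloudFormation Macro resource and invoke it via Fn::Transform):

```python
def handler(event, context):
    """Snippet-level CloudFormation macro: returns the input string lowercased.

    Template usage (macro name is hypothetical):
      Fn::Transform:
        Name: Lower
        Parameters:
          InputString: !Ref MyLambdaFunction
    """
    input_string = event.get("params", {}).get("InputString", "")
    return {
        "requestId": event["requestId"],   # must be echoed back to CloudFormation
        "status": "success",
        "fragment": input_string.lower(),  # replaces the Fn::Transform node
    }
```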
If given the correct permissions, Lambda functions automatically create CloudWatch log groups to hold their log output. The same goes for Lambda@Edge functions, with the addition that the log groups are created in each region in which the function has run, and the log group name includes the region. The problem is that the retention time is set to "never expire", and there is no way to change that other than waiting for the log group to be created and then changing the retention setting after the fact.
To address this, I would like to create those log groups preemptively in Terraform. The problem is that the region would need to be set in the provider meta argument or passed in the providers argument to a module. I had originally thought that I could get the set of all AWS regions using the aws_regions data source and then dynamically create a provider for each region. However, there is currently no way to dynamically generate providers (see https://github.com/hashicorp/terraform/issues/24476).
Has anyone solved this or a similar problem in some other way? Yes, I could create a script using the AWS CLI to do this, but I'd really like to keep everything in Terraform. Using Terragrunt is also an option, but I wanted to see if there were any solutions using pure Terraform before I go that route.
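For comparison, the non-Terraform fallback is small. A boto3 sketch (function name my-edge-fn and the 30-day retention are placeholders), relying on the convention that Lambda@Edge names its regional log groups /aws/lambda/<home region>.<function name>:

```python
def edge_log_group(function_name: str, home_region: str = "us-east-1") -> str:
    """Log group name Lambda@Edge uses in each region it runs in."""
    return f"/aws/lambda/{home_region}.{function_name}"

def create_groups_everywhere(function_name: str, retention_days: int = 30) -> None:
    """Pre-create the edge log group in every enabled region and set retention."""
    import boto3  # deferred; needs AWS credentials when actually run
    ec2 = boto3.client("ec2", region_name="us-east-1")
    regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
    name = edge_log_group(function_name)
    for region in regions:
        logs = boto3.client("logs", region_name=region)
        try:
            logs.create_log_group(logGroupName=name)
        except logs.exceptions.ResourceAlreadyExistsException:
            pass  # already pre-created, or the function has already run here
        logs.put_retention_policy(logGroupName=name, retentionInDays=retention_days)

# Usage (requires credentials): create_groups_everywhere("my-edge-fn")
```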
I have a logger Lambda function that listens on a specific log group and processes specific log details.
I would like to attach the newly created log groups of other Lambdas to that specific log group so it will process them as well. Since these other Lambdas are created automatically, I need to do this automatically. Can I do it? How?
So there's no way to directly replicate the logs stored in a CloudWatch log group to another log group. You could approximate this by creating a subscription filter with a Lambda function that pushes the logs from each log group into the common one, but this would increase your CloudWatch costs.
What I would suggest is either of the following:
Create a subscription filter for each of the log groups used by your Lambda functions to the common Lambda function, so that it is triggered when logs are pushed to any of the log groups. This can be set up after each function is created. Note that you would have to update the function policy of the common Lambda to allow it to be invoked by each log group (or just use a wildcard).
Push all the logs for all of the functions to a single log group. This would take the least effort, but you would have to figure out how to effectively separate the logs per function (if that is required for your use case).
I accidentally deleted a lambda log group in CloudWatch.
Now my lambda fails and I do not see the log group reappear in CloudWatch.
Is it supposed to be recreated automatically? How can I fix the situation?
I tried recreating the log group manually, but it didn't receive any logs.
Try to remove and redeploy the lambda.
Also, make sure it has permissions to write to CloudWatch.
If the role configured in the lambda function has permissions to write to CloudWatch logs, then the lambda function will recreate the log groups upon execution. It may take up to a minute after the function has been invoked.
To resolve this issue, modify the role that is configured in the lambda function to include the "AWSLambdaBasicExecutionRole" Policy. This is an AWS Managed policy that includes everything you need to write to CloudWatch Logs.
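For reference, AWSLambdaBasicExecutionRole grants essentially the following, so an equivalent inline policy on the execution role also works:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```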
See this article and video walkthrough:
https://geektopia.tech/post.php?blogpost=Write_To_CloudWatch_Logs_From_Lambda