I have a Logger Lambda function that listens on a specific LogGroup and processes specific log details.
I would like to attach the newly created LogGroups of other Lambdas to that Logger function, so it will process them as well - but since these other Lambdas are created automatically, I need to do this automatically. Can I do it? How?
There's no way to directly replicate the logs stored in one CloudWatch log group into another log group. The closest you could get is creating a subscription filter with a Lambda function that pushes the logs from each log group into the common one, but this would increase your CloudWatch costs.
What I would suggest is either of the following:
Create a subscription filter for each of the log groups used by your Lambda functions, targeting the common Lambda function, so that it is triggered whenever logs are pushed to any of those log groups. This can be set up right after each function is created - see the sketch after this list. Note that you would have to update the function policy of the common Lambda to allow it to be invoked from each log group (or just set up a wildcard).
Push all the logs for all of the functions to a single log group. This would take the least effort, but you would have to figure out how to effectively separate the logs per function (if that is required for your use case).
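To automate the first option, one possible approach (a sketch, not part of the original answer) is an EventBridge/CloudWatch Events rule on CreateLogGroup API calls (via CloudTrail) that triggers a small Lambda which adds the subscription filter. The environment variable COMMON_LAMBDA_ARN and the filter name below are assumptions:

    import os
    import boto3

    logs = boto3.client("logs")

    # ARN of the common Logger Lambda; COMMON_LAMBDA_ARN is a hypothetical env var.
    DESTINATION_ARN = os.environ["COMMON_LAMBDA_ARN"]

    def handler(event, context):
        # Triggered by a CloudTrail-based rule matching CreateLogGroup calls.
        log_group = event["detail"]["requestParameters"]["logGroupName"]
        if not log_group.startswith("/aws/lambda/"):
            return  # only wire up Lambda log groups
        logs.put_subscription_filter(
            logGroupName=log_group,
            filterName="forward-to-logger",  # arbitrary name
            filterPattern="",                # empty pattern forwards all events
            destinationArn=DESTINATION_ARN,
        )

As noted above, the common Lambda's resource policy still has to allow invocation from the new log groups (or use a wildcard once).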
I have followed Datadog's documentation (here) for manually configuring an AWS account with Datadog. The setup includes a Lambda function provided by Datadog (here) which is triggered by a CloudWatch log group; the Lambda pushes logs to Datadog.
The problem is that when logs are pushed, Datadog's log forwarder Lambda converts the function name, tags, and the rest of the metadata to lowercase. When I use '!Ref' while creating a query for a Datadog monitor with CloudFormation, the query contains the autogenerated name of the Lambda, which is a mix of lowercase and uppercase letters. The query therefore does not work, because Datadog changes the name of the function while pushing logs.
Can someone suggest a workaround here?
You could use a CloudFormation macro to transform strings in your template, in this case using a Lower operation to make your !Refs lowercase. Keep in mind that you need to define and deploy macros as Lambdas.
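A minimal sketch of what such a macro's Lambda handler could look like, loosely modeled on AWS's sample string macro; the parameter name InputString and the overall wiring are assumptions, not code provided by AWS or Datadog:

    # Hypothetical CloudFormation macro handler that lowercases a string
    # passed in via Fn::Transform parameters (e.g. InputString: !Ref MyLambda).
    def handler(event, context):
        params = event.get("params", {})
        input_string = params.get("InputString", "")  # assumed parameter name
        return {
            "requestId": event["requestId"],
            "status": "success",
            # The fragment replaces the Fn::Transform call site in the template.
            "fragment": input_string.lower(),
        }

You would then build the monitor query in the template with Fn::Transform, passing the !Ref value as the input string, so the deployed query matches Datadog's lowercased names.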
Can we create a single AWS Lambda function to trigger alarms for all unhealthy targets in all target groups in an AWS account?
This link https://aws.amazon.com/blogs/networking-and-content-delivery/identifying-unhealthy-targets-of-elastic-load-balancer/ explains how to create a single Lambda function to monitor and trigger an alarm for a single target group. I need to monitor and trigger alarms for multiple target groups using one Lambda function, and I also need the same Lambda function to trigger SNS to send the email. Can we achieve this?
Judging from the blog post's contents, it is achievable using the proposed solution as a starting point, although you would need to change a few things.
You will need to associate all of the alarms with the same SNS topic. Depending on the type of alarm you trigger, you will have different data available in the incoming SNS message. To me, the most logical approach would be to create an UnHealthyHostCount alarm on the target groups themselves.
The Lambda function code suggests that it was written with only one target group in mind for the "AWS/ApplicationELB" and "AWS/NetworkELB" alarms.
Remove this block:
else:
    tg_arn = os.environ['TARGETGROUP_ARN'].strip()
    tg_type = (os.environ['TARGETGROUP_TYPE'].strip()).lower()
Extract the target group ARN (tg_arn) from the TargetGroup alarm dimension in the incoming SNS message instead - see the sketch after these steps.
The rest should be more or less the same
Exact steps depend on your particular setup and goals, so treat this as a rough blueprint.
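A rough sketch of that extraction, assuming the function is subscribed to the alarms' SNS topic; the dimension names follow the standard CloudWatch alarm payload, but verify them against your own messages:

    import json

    def handler(event, context):
        # The SNS message body carries the CloudWatch alarm payload as a JSON string.
        alarm = json.loads(event["Records"][0]["Sns"]["Message"])
        dimensions = alarm["Trigger"]["Dimensions"]
        # The TargetGroup dimension holds the target group's ARN suffix,
        # e.g. "targetgroup/my-tg/0123456789abcdef"; LoadBalancer is similar.
        tg_arn = next(d["value"] for d in dimensions if d["name"] == "TargetGroup")
        lb_arn = next(d["value"] for d in dimensions if d["name"] == "LoadBalancer")
        # ... continue with the blog post's logic (describe target health, notify SNS).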
My AWS Lambda function is getting invoked from multiple places, like API Gateway, SNS, and CloudWatch Events. Is there any way to figure out who invoked the Lambda function? My Lambda function's logic depends on the invoker.
Another way to achieve this is to have three different Lambda functions, but I don't want to go that way if I can find the invoker information within a single Lambda function.
I would look at the event object, as each of the three services sends an event with a different structure.
For example, for CloudWatch Events I would check whether there is a source field in the event. For SNS I would check for Records, and for API Gateway for httpMethod.
But you can check for any other attribute that is unique to a given service. If you are not sure, just print example events from your function for the three services to the logs and check which attribute would be the most suitable to look for.
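A minimal sketch of that check, assuming the default event shapes of the three services; the field names below match the usual payloads, but verify them against your own logged events:

    def handler(event, context):
        # Distinguish the invoker by the shape of the incoming event.
        if "httpMethod" in event:
            invoker = "api-gateway"            # REST API proxy integration
        elif event.get("Records") and event["Records"][0].get("EventSource") == "aws:sns":
            invoker = "sns"
        elif "source" in event:                # e.g. "aws.events" for scheduled rules
            invoker = "cloudwatch-event"
        else:
            invoker = "unknown"
        print(f"Invoked by: {invoker}")
        # ... branch the function's logic on `invoker` here.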
I would like to ask about Lambda and CloudWatch Logs. I see that once you create a Lambda, the log group behind it is always named like "/aws/lambda/<function name>".
What I would like to do is create my own CloudWatch log group "/myLogs" and then configure the Lambda to log into that log group. From the documentation I have read, it looks like this was not possible in the past. Is there any update on that? Is it possible now, and how?
Thanks.
By default, the log group gets created with the name /aws/lambda/<function name>.
I haven't explored this through the console, but if you are using the CDK, you can use the LogRetention construct from the aws_lambda module.
https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_lambda/LogRetention.html
This will create a log group with the name you provide (but it will still have to follow the /aws/lambda/... convention).
You can set the retention days as well and manage the logs that way.
It is worth trying.
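A short CDK (v1, Python) sketch of what that could look like; the function definition, asset path, and retention period are just placeholders:

    from aws_cdk import core
    from aws_cdk import aws_lambda as _lambda
    from aws_cdk import aws_logs as logs

    class MyStack(core.Stack):
        def __init__(self, scope, construct_id, **kwargs):
            super().__init__(scope, construct_id, **kwargs)

            fn = _lambda.Function(
                self, "MyFunction",
                runtime=_lambda.Runtime.PYTHON_3_8,
                handler="index.handler",
                code=_lambda.Code.from_asset("lambda"),  # hypothetical asset path
            )

            # Creates/manages the function's log group; the name still has to
            # follow the /aws/lambda/<function name> convention.
            _lambda.LogRetention(
                self, "MyFunctionLogRetention",
                log_group_name="/aws/lambda/" + fn.function_name,
                retention=logs.RetentionDays.ONE_WEEK,
            )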
I have an AWS Lambda function that is invoked when an instance gets terminated, and the message is stored in Amazon CloudWatch Logs.
I want to extract and filter these log messages to get a particular ID. How can I extract the logs and filter them using Python?
The easiest method might be to create a rule in Amazon CloudWatch Events that triggers an AWS Lambda function. The event passed to the function automatically contains information about the instance that was terminated. You can write the Lambda function in Python.
This way, your function is automatically triggered whenever an instance is terminated, rather than having to look through logs.
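A minimal sketch of such a handler, assuming the rule matches "EC2 Instance State-change Notification" events; the field names follow that event's documented shape:

    def handler(event, context):
        # Event delivered by a CloudWatch Events rule for EC2 state changes.
        detail = event.get("detail", {})
        instance_id = detail.get("instance-id")
        state = detail.get("state")
        if state == "terminated":
            # The instance ID is available directly in the event; no log parsing needed.
            print(f"Instance {instance_id} was terminated")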