I accidentally deleted a lambda log group in CloudWatch.
Now my lambda fails and I do not see the log group reappear in CloudWatch.
Is it supposed to be recreated automatically? How can I fix the situation?
I tried recreating the log group manually but it didn't receive any logs.
Try removing and redeploying the Lambda.
Also, make sure it has permissions to write to CloudWatch Logs.
If the role configured in the lambda function has permissions to write to CloudWatch logs, then the lambda function will recreate the log groups upon execution. It may take up to a minute after the function has been invoked.
To resolve this issue, modify the role that is configured in the lambda function to include the "AWSLambdaBasicExecutionRole" Policy. This is an AWS Managed policy that includes everything you need to write to CloudWatch Logs.
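For reference, the AWSLambdaBasicExecutionRole managed policy grants roughly the following (paraphrased; check the IAM console for the authoritative version). Note that logs:CreateLogGroup is the permission that lets the function recreate the deleted log group:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```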
See this article and video walk through!
https://geektopia.tech/post.php?blogpost=Write_To_CloudWatch_Logs_From_Lambda
Thinking that I wanted to clear out old logs, I made the mistake of deleting my Lambda's "Log Stream" on CloudWatch.
The result, as I should have expected if I was awake, is that now CloudWatch isn't getting the Lambda's console logs at all. Oops.
The log group still exists.
I can see how to create a new log stream.
What I haven't been able to find on the web is clear instructions to get the existing Lambda to output to this new stream... i.e., to repair what I did.
Can someone provide instructions or a pointer to them, please? I'm sure I'm not the only one who's made this mistake, so I think it's an answer worth having on tap.
UPDATE: Decided to try recovering by creating an entirely new Lambda, running the same code and configured the same way, expecting that it would Just Work; my understanding was that a new Lambda binds to a CloudWatch group automagically.
Then I ran my test, clicked the twist-arrow to see the end of the output, and hit "Click here to view the corresponding CloudWatch log group.". It opened CloudWatch looking at the expected log group name -- with a big red warning that this group did not exist. Clicking "(Logs)" at the top of the test output gave the same behavior.
I tried creating the group manually, but now I'm back where I was -- lambda runs, I get local log output, but the logs are not reaching CloudWatch.
So it looks like there's something deeper wrong. CloudWatch is still getting logs from the critical lambda (the one driving my newly-released Alexa skill), and the less-critical one (scheduled update for the skill's database) is running OK so I don't absolutely need its logs right now -- but I need to figure this out so I can read them if that background task ever breaks.
Since this is now looking like real Unexpected Behavior rather than user error, I'll take it to the AWS forums and post here if they come up with an answer. On that system, the question is now at https://repost.aws/questions/QUDzF2c_m0TPCwl3Ufa527Wg/lambda-logging-to-cloud-watch-seems-to-be-broken
Programmer's mantra: "If it was easy, they wouldn't need us..."
After a Lambda function is executed, you can go to the Monitoring tab and click View logs in CloudWatch -- it will take you to the location where the logs should be present.
If you know that the function has executed but no logs are appearing, then confirm that the AWSLambdaBasicExecutionRole managed policy is attached to the IAM role used by the Lambda function. This grants the function permission to write to CloudWatch Logs.
See: AWS Lambda execution role - AWS Lambda
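If the policy is missing, one way to attach it from the CLI -- the role name here is a placeholder, substitute your function's actual execution role:

```shell
aws iam attach-role-policy \
  --role-name my-function-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
```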
I would like to ask about Lambda and CloudWatch Logs. I see that once you create a Lambda function, the log group behind it is always named like "/aws/lambda/".
What I would like to do is create my own CloudWatch log group "/myLogs" and then set the Lambda to log into this group. From the documentation I was reading, it looks like this was not possible in the past. Is there any update on that? Is it possible now, and how?
Thanks.
By default, the log group gets created with the name /aws/lambda/<function-name>.
I haven't explored this through the console, but if you are using the CDK, you can use the LogRetention construct of the aws_lambda module.
https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_lambda/LogRetention.html
This will create a log group with the name you provide (but it will still follow the /aws/lambda/... convention).
You can set the retention days as well and manage the logs.
You can try it once.
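To illustrate the convention mentioned above, the default log group name is derived purely from the function name; a trivial sketch:

```python
def default_log_group(function_name: str) -> str:
    """Default CloudWatch log group name that Lambda creates for a function."""
    return f"/aws/lambda/{function_name}"

print(default_log_group("myFunction"))  # /aws/lambda/myFunction
```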
Trying to put CloudWatch Logs into Kinesis Firehose.
I followed this:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#FirehoseExample
Got this error:
An error occurred (InvalidParameterException) when calling the PutSubscriptionFilter operation: Could not deliver test message to specified Firehose stream. Check if the given Firehose stream is in ACTIVE state.
aws logs put-subscription-filter --log-group-name "xxxx" --filter-name "xxx" --filter-pattern "{$.httpMethod = GET}" --destination-arn "arn:aws:firehose:us-east-1:12345567:deliverystream/xxxxx" --role-arn "arn:aws:iam::12344566:role/xxxxx"
You need to update the trust policy of your IAM role so that it gives permissions to the logs.amazonaws.com service principal to assume it, otherwise CloudWatch Logs won't be able to assume your role to publish events to your Kinesis stream. (Obviously you also need to double-check the permissions on your role to make sure it has permissions to read from your Log Group and write to your Kinesis Stream.)
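Concretely, the role's trust policy needs a statement along these lines (AWS also recommends scoping it down with a Condition on the source log group ARN, omitted here for brevity):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "logs.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```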
It would be nice if they added this to the error message to help point people in the right direction...
The most likely cause of this error is a permissions issue, i.e. something wrong in the definition of the IAM role you passed to --role-arn. You may want to double-check that the role and its permissions were set up properly as described in the doc.
I was getting a similar error when subscribing to a CloudWatch log group and publishing to a Kinesis stream. The CDK was not defining the dependency needed for the SubscriptionFilter to be created after the Policy that allows the filtered events to be published to Kinesis. This is reported in this CDK GitHub issue:
https://github.com/aws/aws-cdk/issues/21827
I ended up using the workaround implemented by github user AlexStasko: https://github.com/AlexStasko/aws-subscription-filter-issue/blob/main/lib/app-stack.ts
If your Firehose stream is in ACTIVE status and you can send records to it, then the only remaining issue is the policy.
I got a similar issue when following the tutorial. The confusing part is the Kinesis section versus the Firehose section; they are easy to mix up. Recheck your ~/PermissionsForCWL.json, in particular this part:
....
"Action": ["firehose:*"],  <- easy to confuse with "kinesis:*", as I did
"Resource": ["arn:aws:firehose:region:123456789012:*"]
....
When I did the tutorial you mentioned, the CLI was defaulting to a different region, so I had to pass --region with my region. It wasn't until I redid all the steps in the correct region that it worked.
For me, I think this issue was occurring due to the time it takes for the IAM data plane to settle after new roles are created, especially in regions geographically far from us-east-1.
I have a custom Lambda CloudFormation resource that auto-subscribes all existing and future log groups to a Firehose via a subscription filter. The IAM role gets deployed for CloudWatch Logs, then very quickly the Lambda function tries to subscribe the log groups, and on occasion this error would happen.
I added a time.sleep(30) to my code (this code only runs once at stack creation, so it's not going to hurt anything to wait 30 seconds).
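A slightly more robust alternative to a fixed sleep is to retry the call until IAM has propagated. A minimal, generic sketch; the exception type to retry on is up to you (for boto3 it would be the client's InvalidParameterException):

```python
import time

def call_with_retry(fn, attempts=5, delay=2.0, retriable=(RuntimeError,)):
    """Call fn(), retrying on the given exception types with a fixed delay."""
    for attempt in range(attempts):
        try:
            return fn()
        except retriable:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(delay)
```

You would wrap the put_subscription_filter call in a lambda or function and pass it to call_with_retry.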
I am not able to edit nor delete the CloudWatch Events trigger in AWS Lambda.
I used the below command but it didn't work.
aws events delete-rule --name "startEC2"
Could anyone please help me out? Thanks in advance.
Do check the IAM role that you used in the Lambda function. To remove the CloudWatch Events trigger, you have to delete the 'cloudwatch event' statement from the IAM policy attached to the role for this function.
The only thing you can do from the Lambda console is assign or remove a CloudWatch Events trigger for a particular function. If you want to delete the CloudWatch rule itself, why not go to the CloudWatch console?
Also, if you want to delete via CLI, make sure you got proper permissions to do so.
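One more thing worth checking (an assumption about why the delete fails): delete-rule refuses to delete a rule that still has targets attached, so remove the targets first. The target ID "1" below is a placeholder; list-targets-by-rule shows the real IDs:

```shell
aws events list-targets-by-rule --rule "startEC2"
aws events remove-targets --rule "startEC2" --ids "1"
aws events delete-rule --name "startEC2"
```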
That command doesn't work in the following cases:
You aren't connected to the internet.
You don't have the AWS CLI installed on your machine.
You haven't properly configured your AWS credentials/profile.
You don't have enough permissions to make the DeleteRule API call.
But above all, it's unlikely you'd get no output at all in any of those cases.
Please clarify your question.
I'd like to run some code using Lambda on the event that I create a new EC2 instance. Looking at the blueprint config-rule-change-triggered, I have the ability to run code on various configuration changes, but not when an instance is created. Is there a way to do what I want? Or have I misunderstood the use case of Lambda?
We had similar requirements a couple of days back (users were supposed to get emails whenever a new instance was launched).
1) Go to CloudWatch, then select Rules.
2) Select the service name (it's EC2 in your case), then select "EC2 instance state-change notification".
3) Then select "pending" in the "Specific state" dropdown.
4) Click the Add target option and select your Lambda function.
That's it. Whenever a new instance gets launched, CloudWatch will trigger your Lambda function.
Hope it helps !!
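The rule built by the steps above corresponds to roughly this event pattern (this is what the console generates for you):

```json
{
  "source": ["aws.ec2"],
  "detail-type": ["EC2 Instance State-change Notification"],
  "detail": {
    "state": ["pending"]
  }
}
```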
You could do this by inserting code into your EC2 instance launch userdata and having that code explicitly invoke a Lambda function, but that's not the best way to do it.
A better way is to use a combination of CloudTrail and Lambda. If you enable CloudTrail logging (every account should have this enabled, all the time, in all regions), then CloudTrail will log all of the API calls made in your account to S3. You then connect this to Lambda by configuring S3 to publish events to Lambda. Your Lambda function receives an S3 event, retrieves the API logs, finds RunInstances API calls, and then does whatever work you need as a consequence of the new instance being launched.
I don't see a notification trigger for instance startup; however, what you can do is write a startup script and pass it in via userdata. That script would need to download and install the AWS CLI and then publish a message to a pre-configured SNS topic. The script would authenticate to SNS, and any other AWS services it needs, via your IAM role, so you would need to give that role permission to do whatever you want the script to do. This can be done in the IAM console.
That topic would then have your Lambda function subscribed to it, which would execute. Similar to the below article (though the author is doing something similar for shutdown, not startup).
http://rogueleaderr.com/post/48795010760/how-to-notifyemail-yourself-when-an-ec2-instance
If you are putting the EC2 instances into an Auto Scaling group, I believe there is a trigger that gets fired when the group launches a new instance, so you could take advantage of that.
I hope that helps.
If you have CloudTrail enabled, then you can have an S3 PutObject event on the trail bucket trigger a Lambda function. The Lambda function parses the object passed to it, and if it finds a RunInstances event, runs your code.
I do the exact same thing to notify certain users when a new instance is launched. With Lambda/Python, it is ~20 lines of code.
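The parsing step can be sketched like this, assuming the CloudTrail log object has already been fetched from S3 and gunzipped into a dict (field names follow the CloudTrail record format; the sample record is trimmed down for illustration):

```python
def find_run_instances(cloudtrail_log):
    """Return the RunInstances records from a decoded CloudTrail log object."""
    return [record for record in cloudtrail_log.get("Records", [])
            if record.get("eventName") == "RunInstances"]

# Trimmed-down sample; real records carry many more fields.
sample = {"Records": [
    {"eventName": "RunInstances", "awsRegion": "us-east-1"},
    {"eventName": "TerminateInstances", "awsRegion": "us-east-1"},
]}
launches = find_run_instances(sample)
print(len(launches))  # 1
```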