Recreating/reattaching AWS Lambda console logging to CloudWatch

Thinking that I wanted to clear out old logs, I made the mistake of deleting my Lambda's "Log Stream" on CloudWatch.
The result, as I should have expected had I been fully awake, is that CloudWatch is no longer getting the Lambda's console logs at all. Oops.
The log group still exists.
I can see how to create a new log stream.
What I haven't been able to find on the web is clear instructions for getting the existing Lambda to output to this new stream -- i.e., for repairing what I did.
Can someone provide instructions or a pointer to them, please? I'm sure I'm not the only one who's made this mistake, so I think it's an answer worth having on tap.
UPDATE: I decided to try recovering by creating an entirely new Lambda, running the same code with the same configuration, expecting that it would Just Work; my understanding was that a new Lambda binds to a CloudWatch log group automagically.
Then I ran my test, clicked the twist-arrow to see the end of the output, and hit "Click here to view the corresponding CloudWatch log group." It opened CloudWatch at the expected log group name -- with a big red warning that the group did not exist. Clicking "(Logs)" at the top of the test output gave the same behavior.
I tried creating the group manually, but now I'm back where I was -- the Lambda runs and I get local log output, but the logs are not reaching CloudWatch.
So it looks like there's something deeper wrong. CloudWatch is still getting logs from the critical lambda (the one driving my newly-released Alexa skill), and the less-critical one (scheduled update for the skill's database) is running OK, so I don't absolutely need its logs right now -- but I need to figure this out so I can read them if that background task ever breaks.
Since this is now looking like real Unexpected Behavior rather than user error, I'll take it to the AWS forums and post here if they come up with an answer. On that system, the question is now at https://repost.aws/questions/QUDzF2c_m0TPCwl3Ufa527Wg/lambda-logging-to-cloud-watch-seems-to-be-broken
Programmer's mantra: "If it was easy, they wouldn't need us..."

After a Lambda function is executed, you can go to the Monitoring tab and click View logs in CloudWatch -- it will take you to the location where the logs should be present.
If you know that the function has executed but no logs are appearing, confirm that the AWSLambdaBasicExecutionRole managed policy is attached to the IAM role used by the Lambda function. This grants the function permission to write to CloudWatch Logs.
See: AWS Lambda execution role - AWS Lambda
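As a minimal sketch of that fix, assuming Python/boto3 and a hypothetical function name, you could look up the function's execution role and attach the managed policy to it:

```python
import boto3

FUNCTION_NAME = "my-function"  # hypothetical; substitute your function's name
POLICY_ARN = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"

lam = boto3.client("lambda")
iam = boto3.client("iam")

# Find the IAM role the function executes as.
role_arn = lam.get_function_configuration(FunctionName=FUNCTION_NAME)["Role"]
role_name = role_arn.split("/")[-1]

# Attach the managed policy that allows writing to CloudWatch Logs.
# (Attaching an already-attached managed policy succeeds, so this is safe to re-run.)
iam.attach_role_policy(RoleName=role_name, PolicyArn=POLICY_ARN)
print(f"Attached {POLICY_ARN} to role {role_name}")
```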

Related

How to get logs from a lambda trigger in AWS Lambda?

Some time ago I developed a Lambda function that has an S3 bucket as its trigger. The event type is s3:ObjectCreated:* and the suffix is .csv. Everything was working fine.
For the past few days, when I upload a .csv the trigger no longer fires. Nothing was changed, which is why I want to see the logs for the Lambda trigger.
Is this possible?
I need to see why the Lambda trigger is not working; I don't know whether there are any logs or error messages from Lambda triggers.
I have read a lot of documentation and can't find what I need. CloudWatch only shows logs for Lambda invocations, and obviously if a Lambda never starts, nothing is logged, so I can't see why the trigger didn't fire.
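One diagnostic sketch, assuming Python/boto3 and a hypothetical bucket name: the Lambda logs nothing if it is never invoked, but the trigger configuration itself can be inspected, so confirm the bucket's notification configuration still points at the function with the expected event and filter.

```python
import boto3

BUCKET = "my-bucket"  # hypothetical; substitute your bucket's name

s3 = boto3.client("s3")
config = s3.get_bucket_notification_configuration(Bucket=BUCKET)

# If the trigger is intact, this lists a LambdaFunctionConfigurations entry
# with your function's ARN, the s3:ObjectCreated:* event, and the .csv suffix filter.
for entry in config.get("LambdaFunctionConfigurations", []):
    print(entry["LambdaFunctionArn"], entry["Events"], entry.get("Filter"))
```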

AWS cloudwatch: logs are getting created in different log streams for the single API hit

We are making use of AWS Lambda and have configured CloudWatch for logging. A cron job runs every 5 minutes and triggers the Lambda function. The logs generated for each hit end up in different log streams (screenshot of the log streams omitted).
So, say there is an API hit at 11:45; to check the logs I have to go through the log streams with last event times 2022-05-05 11:43:10 (UTC+05:30), 2022-05-05 11:43:00 (UTC+05:30), 2022-05-05 11:38:11 (UTC+05:30), and 2022-05-05 11:38:02 (UTC+05:30), and so on. For a single hit, the logs are spread across different log streams: some in the first stream, some in the second, a few in the third. Previously, all the logs for a single hit were written to one log stream. Is there anything that can be done to avoid this? It makes debugging time-consuming.
This is how Lambda works: each Lambda execution environment gets its own log stream. If you need to look at logs across log streams, then the best "built-in" solution is CloudWatch Logs Insights, which works at the log-group level.
Update: this document describes the Lambda execution environment, and the conditions that cause creation/destruction of an environment.
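A minimal sketch of querying across all of a group's streams with Logs Insights, assuming Python/boto3 and a hypothetical log group name:

```python
import time
import boto3

LOG_GROUP = "/aws/lambda/my-function"  # hypothetical log group name

logs = boto3.client("logs")

# Query the whole log group, regardless of which stream each event landed in.
query_id = logs.start_query(
    logGroupName=LOG_GROUP,
    startTime=int(time.time()) - 3600,  # last hour
    endTime=int(time.time()),
    queryString="fields @timestamp, @logStream, @message | sort @timestamp asc | limit 100",
)["queryId"]

# Poll until the query finishes, then print the merged results.
while True:
    result = logs.get_query_results(queryId=query_id)
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in result.get("results", []):
    print({f["field"]: f["value"] for f in row})
```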

Tracking AWS Lambda functions to detect for human intervention and compromise

If I have an application that runs solely as Lambda functions within AWS, is there a way I can set up the logging to tell me how my Lambda was executed? For example, I only want the application to be able to execute the Lambda based on triggers, but I want to be able to detect if someone logged in and executed one by hand, or even worse, if someone external was able to remotely execute a Lambda.
I understand that I can lock these things down, and they are, and there are guardrails to help prevent external access; but on top of this, I still want to be able to detect and verify that only the application is executing the Lambda. Ideally, there would be something I can trace in the logging that shows me an execution IP I can verify comes from the Lambda service, or a log that states how the Lambda was executed; then I could trace that back to an executing application or service.
You can use CloudTrail to retrospectively evaluate how your Lambda was invoked. If you go with this option, you will need to enable CloudTrail data events for Lambda, as these are disabled by default.
CloudTrail will push the logs into S3, where you can retrospectively parse and evaluate whether this happened.
You can also add restrictions on invoking the Lambda via its function policy. By using conditions, you can tighten exactly which resources can invoke it.
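A minimal sketch of that second suggestion, assuming Python/boto3 and hypothetical function, bucket, and account values; it adds a function-policy statement granting invoke rights to S3 only for events from one bucket in one account:

```python
import boto3

lam = boto3.client("lambda")

# Grant s3.amazonaws.com permission to invoke the function, but only for
# events from this bucket in this account. Function (resource) policies are
# additive, so keep the statement list tight to limit who can invoke.
lam.add_permission(
    FunctionName="my-function",          # hypothetical
    StatementId="AllowInvokeFromMyBucketOnly",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws:s3:::my-bucket",  # hypothetical
    SourceAccount="123456789012",        # hypothetical account ID
)
```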

AWS Lambda log group not recreated after deletion

I accidentally deleted a lambda log group in CloudWatch.
Now my lambda fails and I do not see the log group reappear in CloudWatch.
Is it supposed to be recreated automatically? How can I fix the situation?
I tried recreating the log group manually, but it didn't receive any logs.
Try removing and redeploying the Lambda.
Also, make sure it has permissions to write to CloudWatch.
If the role configured in the Lambda function has permissions to write to CloudWatch Logs, the function will recreate the log group upon execution. It may take up to a minute after the function has been invoked for the group to appear.
To resolve this issue, modify the role configured in the Lambda function to include the "AWSLambdaBasicExecutionRole" policy. This is an AWS managed policy that includes everything you need to write to CloudWatch Logs.
See this article and video walkthrough:
https://geektopia.tech/post.php?blogpost=Write_To_CloudWatch_Logs_From_Lambda
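As a quick check of the above, here is a sketch assuming Python/boto3 and a hypothetical function name that lists the policies attached to the function's execution role:

```python
import boto3

FUNCTION_NAME = "my-function"  # hypothetical

lam = boto3.client("lambda")
iam = boto3.client("iam")

role_arn = lam.get_function_configuration(FunctionName=FUNCTION_NAME)["Role"]
role_name = role_arn.split("/")[-1]

# List the managed policies on the execution role; AWSLambdaBasicExecutionRole
# (or an equivalent policy granting logs:CreateLogGroup, logs:CreateLogStream,
# and logs:PutLogEvents) should appear here.
attached = iam.list_attached_role_policies(RoleName=role_name)
for policy in attached["AttachedPolicies"]:
    print(policy["PolicyName"], policy["PolicyArn"])
```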

How can I get more specific CloudWatch alerts when an AWS Lambda function fails?

I have a variety of functions, all in Node.js, in AWS Lambda. They're triggered by certain events like S3 triggers, API Gateway methods, or sometimes just called manually. I create them by pasting code in the console or uploading a zip file I've built locally.
On rare occasion, a function will fail. To detect failures, I've set up an account-wide CloudWatch alarm on Lambda errors (screenshot omitted).
This works, to an extent: when a function anywhere in my account fails, I get an email. The problem is that the email just states the alarm was tripped. It doesn't say which Lambda function actually failed, so I have to dig through Lambda to find the function that caused the alarm.
I've considered the following:
Setting up a CloudWatch alarm per function. This is the most obvious solution, but it is also the most tedious and highest-maintenance.
Building a CI/CD pipeline for my Lambda functions instead of entering the code or uploading zips in the console. I could then add a step that sets up a CloudWatch alarm for the function automatically. This is better than the first option, but it is also a lot of infrastructure to set up for a potentially simple problem.
Using another Lambda function to custom handle the alert. The problem is, best I can tell, the SNS message that CloudWatch publishes doesn't contain any more data than the email; it just says in essence "your alarm named X tripped" but not why.
Any ideas on how to achieve this?
We handle it internally. When there is a problem, the Lambda attempts to handle it and sends an alert. The CloudWatch metric is only for truly unhandled exceptions. Remember that Lambda automatically retries asynchronous invocations when a function errors, which can be undesirable in certain situations, so it may be preferable to handle exceptions inside the Lambda function itself.
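For the per-function alarms the asker considered, here is a minimal sketch, assuming Python/boto3 and a hypothetical SNS topic ARN. It creates one Errors alarm per function, so the alarm name (and the resulting email) identifies the failing function:

```python
import boto3

SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:lambda-alerts"  # hypothetical

lam = boto3.client("lambda")
cw = boto3.client("cloudwatch")

# One alarm per function: the alarm name carries the function name,
# unlike a single account-wide alarm on the aggregate Errors metric.
for page in lam.get_paginator("list_functions").paginate():
    for fn in page["Functions"]:
        name = fn["FunctionName"]
        cw.put_metric_alarm(
            AlarmName=f"lambda-errors-{name}",
            Namespace="AWS/Lambda",
            MetricName="Errors",
            Dimensions=[{"Name": "FunctionName", "Value": name}],
            Statistic="Sum",
            Period=300,
            EvaluationPeriods=1,
            Threshold=0,
            ComparisonOperator="GreaterThanThreshold",
            TreatMissingData="notBreaching",  # no invocations is not a failure
            AlarmActions=[SNS_TOPIC_ARN],
        )
```

Re-running the script is harmless: put_metric_alarm overwrites an existing alarm of the same name, so it doubles as a way to keep alarms in sync as functions are added.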