How to get logs from a lambda trigger in AWS Lambda? - amazon-web-services

Some time ago I developed a Lambda function that has an S3 bucket as a trigger. The event type is s3:ObjectCreated:* and the suffix filter is .csv. Everything was working fine.
For the last few days, when I upload a CSV the trigger no longer fires. Nothing was changed, which is why I want to see the logs for the Lambda trigger.
Is that possible?
I need to see why the Lambda trigger is not working; I don't know if there are any logs or error messages for Lambda triggers.
I have read a lot of documentation and can't find anything like what I need. CloudWatch only shows logs for Lambda executions, and obviously if a Lambda never starts then nothing is logged, so I can't see why the trigger didn't fire.
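One thing worth ruling out before hunting for trigger logs: S3 suffix filters are case-sensitive, and a mismatch fails silently, producing no log anywhere. A minimal local sketch of that matching rule (the keys are illustrative):

```python
# S3 event-notification suffix filters are case-sensitive, so uploading
# "data.CSV" will NOT fire a trigger whose suffix filter is ".csv".
# This helper mimics that matching behaviour locally.

def suffix_matches(key: str, suffix: str) -> bool:
    """Replicate S3's case-sensitive suffix filter check."""
    return key.endswith(suffix)

print(suffix_matches("reports/data.csv", ".csv"))  # True: trigger fires
print(suffix_matches("reports/data.CSV", ".csv"))  # False: silently skipped
```

If the key does match, the next things to check are whether the notification configuration is still attached to the bucket (e.g. boto3's `get_bucket_notification_configuration`) and CloudTrail data events for the upload, since a delivery that never reaches Lambda leaves no Lambda log.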

Related

Recreating/reattaching AWS Lambda console logging to CloudWatch

Thinking that I wanted to clear out old logs, I made the mistake of deleting my Lambda's "Log Stream" on CloudWatch.
The result, as I should have expected if I was awake, is that now CloudWatch isn't getting the Lambda's console logs at all. Oops.
The log group still exists.
I can see how to create a new log stream.
What I haven't been able to find on the web is clear instructions for getting the existing Lambda to output to this new stream, i.e., to repair what I did.
Can someone provide instructions or a pointer to them, please? I'm sure I'm not the only one who's made this mistake, so I think it's an answer worth having on tap.
UPDATE: I decided to try recovering by creating an entirely new Lambda, running the same code and configured the same way, expecting that it would Just Work; my understanding was that a new Lambda binds to a CloudWatch log group automagically.
Then I ran my test, clicked the twist-arrow to see the end of the output, and hit "Click here to view the corresponding CloudWatch log group." It opened CloudWatch looking at the expected log group name, with a big red warning that this group did not exist. Clicking "(Logs)" at the top of the test output gave the same behavior.
I tried creating the group manually, but now I'm back where I was: the Lambda runs and I get local log output, but the logs are not reaching CloudWatch.
So it looks like something deeper is wrong. CloudWatch is still getting logs from the critical Lambda (the one driving my newly released Alexa skill), and the less critical one (a scheduled update for the skill's database) is running OK, so I don't absolutely need its logs right now, but I need to figure this out so I can read them if that background task ever breaks.
Since this is now looking like real Unexpected Behavior rather than user error, I'll take it to the AWS forums and post here if they come up with an answer. On that system, the question is now at https://repost.aws/questions/QUDzF2c_m0TPCwl3Ufa527Wg/lambda-logging-to-cloud-watch-seems-to-be-broken
Programmer's mantra: "If it was easy, they wouldn't need us..."
After a Lambda function is executed, you can go to the Monitoring tab and click View logs in CloudWatch -- it will take you to the location where the logs should be present.
If you know that the function has executed but no logs are appearing, then confirm that your Lambda function has the AWSLambdaBasicExecutionRole assigned to the IAM Role being used by the Lambda function. This grants permission for the Lambda function to write to CloudWatch Logs.
See: AWS Lambda execution role - AWS Lambda
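For reference, the permissions that the AWSLambdaBasicExecutionRole managed policy grants boil down to this policy document (as of the time of writing):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```

If the role is missing any of these three actions, the function runs but its output never reaches CloudWatch Logs, which matches the symptom described above.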

How to split AWS CloudWatch Log streams?

There is an AWS CloudWatch Logs log group containing several streams. As far as I understand it, each stream is a log coming from a separate server or container.
CloudWatch Log streams
I send the whole log group to Kinesis Firehose to deliver the logs to an S3 bucket. But inside Kinesis Firehose, all the logs are merged into one. How can I get these logs into S3 so that each stream has its own directory?
I found a solution:
1) I modified every log record in Kinesis Firehose using a Lambda transformation function, appending an identifier to the end of each log line. It then looks like this:
Modified logs
2) I created a Lambda function with a trigger that fires every time logs are written to the S3 bucket. In this function, I distribute the logs to the folders I need based on the identifier I added earlier. I won't include the code of this Lambda function; I've described the general approach, and I think those who need it can figure it out.
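The distribution step in 2) can be sketched like this; the delimiter and folder layout are illustrative, not the author's actual code:

```python
# Given log lines that end with the stream identifier appended by the
# Firehose transformation (here delimited with "|"), group each line
# under a per-stream S3 "directory" prefix.

def partition_logs(lines):
    """Map each 'message|stream-id' line to its target S3 key prefix."""
    by_stream = {}
    for line in lines:
        message, _, stream_id = line.rpartition("|")
        by_stream.setdefault(f"logs/{stream_id}/", []).append(message)
    return by_stream

batch = [
    "GET /index 200|web-1",
    "GET /login 500|web-2",
    "POST /api 201|web-1",
]
print(partition_logs(batch))  # two prefixes: logs/web-1/ and logs/web-2/
```

The real function would then write each group back to S3 under its prefix (e.g. with boto3's `put_object`).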

save cloudwatch logs generated by glue crawler in another cloudwatch log group

Is there a way of saving the logs generated by a crawler in a specific, newly created CloudWatch log group?
I want to use the crawl-finished log entry as a trigger for a Lambda function.
Many thanks in advance!
You can use the AWS CloudWatch Logs API to upload logs. Use CreateLogGroup and CreateLogStream to create your log stream, and then use PutLogEvents to upload your log.
There are other options that might be more suitable for triggering a Lambda function though, depending on your exact use case, such as uploading the collected log to S3 and having the upload trigger the function, or even starting the Lambda function directly.
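A minimal sketch of that API sequence, assuming boto3 and an illustrative log group name; note that PutLogEvents expects millisecond epoch timestamps in chronological order:

```python
import time

# Build a PutLogEvents payload locally. Each event needs a timestamp in
# milliseconds since the epoch, and the events must be sorted by time.

def build_log_events(messages, now_ms=None):
    """Return a chronologically ordered logEvents list for PutLogEvents."""
    now_ms = now_ms if now_ms is not None else int(time.time() * 1000)
    return [{"timestamp": now_ms + i, "message": m} for i, m in enumerate(messages)]

events = build_log_events(["crawler started", "crawler finished"])

# The actual upload (requires AWS credentials, so shown commented out):
# import boto3
# logs = boto3.client("logs")
# logs.create_log_group(logGroupName="/custom/crawler")   # raises if it already exists
# logs.create_log_stream(logGroupName="/custom/crawler", logStreamName="run-1")
# logs.put_log_events(logGroupName="/custom/crawler",
#                     logStreamName="run-1", logEvents=events)
```

The log group and stream names here are placeholders; pick whatever naming fits your crawler runs.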

Invoke AWS Lambda function that is Written in Django App

I have written some cronjobs in my django app and I want to schedule these jobs using AWS Lambda service. Can someone please recommend a good approach to get this done?
I will answer this based on the question's topic rather than the body, since I am not sure what the OP means with "I want to schedule these jobs using AWS Lambda".
If all you want is to trigger your Lambda function on a cron schedule, you can use CloudWatch Events to achieve this. You can specify regular cron expressions or some built-in expressions that AWS makes available; for example, rate(1 minute) will run your function every minute. You can see how to trigger a Lambda function via CloudWatch Events in the docs. See cron/rate for all the available options.
CloudWatch Events is only one of many options for triggering your Lambda function. Your function can react to a whole range of AWS events, including S3, SQS, SNS, API Gateway, etc. You can see the full list of events here. Just pick one that fits your needs and you are good to go.
EDIT AFTER OP'S UPDATE:
Yes, what you're looking for is CloudWatch Events. Once you have the Lambda that polls your database in place, you can create a rule in CloudWatch Events and have your Lambda be triggered by it. Please see the following images for guidance.
Go to CloudWatch, click on Events and choose Schedule as the Event Source
(make sure to setup your own Cron expression or select the pre-defined rate values)
On the right-hand side, choose your Lambda function accordingly.
Click on "Configure Details" when you are done, give it a name, leave the "Enabled" box checked and finally click on Create.
Go back to your Lambda function and you should see it's now triggered by CloudWatch Events (column on the left-hand side)
Your Lambda is now configured properly and will execute once a day.
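The same setup can be scripted rather than clicked through in the console; here is a sketch using boto3, where the rule name, function name, and ARN are all illustrative:

```python
# Build a cron() schedule expression for a once-a-day rule, then (with
# credentials) wire it to a Lambda via CloudWatch Events / EventBridge.

def daily_schedule_rule(hour_utc: int) -> str:
    """Schedule expression for once a day at hour_utc (UTC)."""
    return f"cron(0 {hour_utc} * * ? *)"

print(daily_schedule_rule(12))  # cron(0 12 * * ? *)

# import boto3
# events = boto3.client("events")
# events.put_rule(Name="daily-db-poll", ScheduleExpression=daily_schedule_rule(12))
# boto3.client("lambda").add_permission(
#     FunctionName="my-poller", StatementId="allow-events",
#     Action="lambda:InvokeFunction", Principal="events.amazonaws.com")
# events.put_targets(Rule="daily-db-poll",
#                    Targets=[{"Id": "1", "Arn": "arn-of-my-poller"}])
```

The add_permission call is the scripted equivalent of the trigger the console creates for you automatically.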

How can I get more specific CloudWatch alerts when an AWS Lambda function fails?

I have a variety of functions, all in Node.js, in AWS Lambda. They're triggered by certain events like S3 triggers, API Gateway methods, or sometimes just called manually. I create them by pasting code in the console or uploading a zip file I've built locally.
On rare occasion, a function will fail. To detect failures, I've set up a CloudWatch alarm that looks like this:
This works, to an extent: when a function anywhere in my account fails, I get an email. The problem is that the email just states that the alarm was tripped. It doesn't state which Lambda function actually failed, so I have to dig through Lambda to find which function caused the alarm.
I've considered the following:
Setting up a CloudWatch alarm per function. This is the most obvious solution but is also the most tedious and highest maintenance.
Building a CI/CD pipeline for my Lambda functions instead of entering the code or uploading zips in the console. I can then add a step that sets up a CloudWatch alert for the function automatically. This is better than the first option but also is a lot of infrastructure to set up for potentially a simple problem.
Using another Lambda function to custom handle the alert. The problem is, best I can tell, the SNS message that CloudWatch publishes doesn't contain any more data than the email; it just says in essence "your alarm named X tripped" but not why.
Any ideas on how to achieve this?
We handle it internally. When there is a problem, the Lambda attempts to handle it and sends an alert itself; the CloudWatch metric is only for truly unhandled exceptions. Remember that Lambda automatically retries when a function errors, which can be undesirable in certain situations, so it may be preferable to handle exceptions inside the Lambda function.
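If you do go the per-function-alarm route (option 1 in the question), the SNS message CloudWatch publishes does identify the function through the alarm's Trigger dimensions; an account-wide alarm carries no such dimension, which is why the OP's emails are anonymous. A sketch of extracting it, with a sample payload trimmed to just the fields this code reads:

```python
import json

# Parse a CloudWatch alarm notification (the SNS "Message" field) and
# pull out the FunctionName dimension, if the alarm has one.

def failing_function(sns_message: str):
    """Return the FunctionName dimension from an alarm message, or None."""
    alarm = json.loads(sns_message)
    for dim in alarm.get("Trigger", {}).get("Dimensions", []):
        if dim.get("name") == "FunctionName":
            return dim.get("value")
    return None  # account-wide alarm: nothing per-function to report

sample = json.dumps({
    "AlarmName": "lambda-errors",
    "NewStateValue": "ALARM",
    "Trigger": {
        "MetricName": "Errors",
        "Namespace": "AWS/Lambda",
        "Dimensions": [{"name": "FunctionName", "value": "my-csv-import"}],
    },
})
print(failing_function(sample))  # my-csv-import
```

A small subscriber Lambda running this parse can then send a far more specific alert than the stock alarm email, without a full CI/CD pipeline.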