I'm having trouble triggering my AWS Lambda function.
The function works perfectly well when I click Test, but I've created a new scheduled rule that triggers the Lambda function every minute. It works once, and then never again. I've also tried using a cron expression, with the same results.
The logs should contain the output of a print statement, but instead they read:
02:07:40
START RequestId: |numbers| Version: 8
02:07:40
END RequestId: |numbers|
I've clicked Enable on 'CloudWatch Events will add necessary permissions for target(s) so they can be invoked when this rule is triggered.', so I suspect that my permissions aren't an issue.
As a side note, I've done everything on the console and am not really sure how to properly use the CLI. Any help would be wonderful. Thank you.
The best way is to start simple, then build up to the ultimate goal.
Start by creating an AWS Lambda function that simply prints something to the log file. Here is an example in Python:
def lambda_handler(event, context):
    print('Within function')
Then, ensure that the function has been assigned an IAM Role with the AWSLambdaBasicExecutionRole policy, or another policy that grants access to CloudWatch Logs:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
Then, configure CloudWatch Events to trigger the function once per minute and check the log files in Amazon CloudWatch Logs to confirm that the function is executing.
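If you'd rather script this than use the console, a minimal boto3 sketch of the same setup could look like the following (the rule and function names are placeholders; the add_permission call grants the same invoke permission that the console's "add necessary permissions" checkbox adds):

import boto3

events = boto3.client('events')
lam = boto3.client('lambda')

RULE_NAME = 'every-minute'      # hypothetical rule name
FUNCTION_NAME = 'my-function'   # hypothetical function name
function_arn = lam.get_function_configuration(FunctionName=FUNCTION_NAME)['FunctionArn']

# Create (or update) a rule that fires once per minute
rule_arn = events.put_rule(Name=RULE_NAME, ScheduleExpression='rate(1 minute)')['RuleArn']

# Allow CloudWatch Events to invoke the function
lam.add_permission(
    FunctionName=FUNCTION_NAME,
    StatementId='allow-cloudwatch-events',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule_arn,
)

# Point the rule at the function
events.put_targets(Rule=RULE_NAME, Targets=[{'Id': '1', 'Arn': function_arn}])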
This will hopefully work correctly. It's then just a matter of comparing the configurations to find out why the existing function is not successfully running each minute. You can also look at the Monitoring tab to see whether any executions produced errors.
OK, here's where I went wrong:
According to this answer: https://forums.aws.amazon.com/thread.jspa?threadID=264583 AWS runs the code outside the handler only once, when the execution environment is first created; only the handler itself runs on every invocation. I needed to move all of my code into the handler to fix this.
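To illustrate what that means in code, here is a minimal sketch (not the original poster's function): module-level statements run once per execution environment, while the handler body runs on every scheduled invocation, so per-invocation work and the print statements you expect to see every minute belong inside the handler:

import time

# Module-level code: runs once, when the execution environment is
# initialized, not on every scheduled invocation
START = time.time()
print('Cold start')

def lambda_handler(event, context):
    # Handler code: runs on every invocation
    print('Invoked; container age in seconds:', time.time() - START)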
Related
I have an AWS Lambda whose job is to consume logs from an external source and write these logs to a custom CloudWatch log group. Note that this Lambda already writes logs to its own log group; that's not my question. What I want is for it to write the externally derived logs to another CloudWatch log group.
Following the AWS documentation, and using CloudFormation, I created an event bus and a rule that targets CloudWatch:
redacted
I have omitted most of the CloudFormation template for clarity, just leaving in the parts that seem relevant.
What I am finding is that the Lambda receives the logs (via Kinesis), processes them and sends them to the event bus in the code snippet below:
redacted
The last line above indicates that the event is sent to the event bus:
redacted
However, the event bus, having (I believe) received the event, does not send it on to CloudWatch, even if I manually create the log group: ${AWS::StackName}-form-log-batch-function (I have kept the stack reference as a parameter to preserve anonymity).
I have checked the CloudFormation creation and all resources are present (confirmed by the Lambda not experiencing any exceptions when it tries to send the event).
Anyone understand what I am missing here?
You can't write to CloudWatch Logs (CWL) using your WebLogsEventBusLoggingRole role. As AWS docs explain, you have to use CWL resource-based permissions:
When CloudWatch Logs is the target of a rule, EventBridge creates log streams, and CloudWatch Logs stores the text from the triggering events as log entries. To allow EventBridge to create the log stream and log the events, CloudWatch Logs must include a resource-based policy that enables EventBridge to write to CloudWatch Logs.
Sadly, you can't set up such permissions from vanilla CloudFormation (CFN). This is not supported:
AWS::Events::Rule targeting CloudWatch Logs
To do it from CFN, you have to create a custom resource in the form of a Lambda function. The function would set the CWL permissions using the AWS SDK.
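For illustration, a minimal sketch of such a custom-resource handler, assuming the log group ARN is passed in as a resource property (the policy name is arbitrary; cfnresponse is the helper module CloudFormation provides to inline-code Lambdas):

import json
import boto3
import cfnresponse

logs = boto3.client('logs')

def handler(event, context):
    try:
        if event['RequestType'] in ('Create', 'Update'):
            log_group_arn = event['ResourceProperties']['LogGroupArn']
            policy = {
                'Version': '2012-10-17',
                'Statement': [{
                    'Effect': 'Allow',
                    'Principal': {'Service': ['events.amazonaws.com', 'delivery.logs.amazonaws.com']},
                    'Action': ['logs:CreateLogStream', 'logs:PutLogEvents'],
                    'Resource': log_group_arn,
                }],
            }
            # The resource-based permission EventBridge needs on CloudWatch Logs
            logs.put_resource_policy(
                policyName='EventBridgeToCWLogs',
                policyDocument=json.dumps(policy),
            )
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})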
I hope this is helpful for others still looking for answers.
Problem #1: I want CloudFormation to work
You can use the CloudFormation template from https://serverlessland.com/patterns/eventbridge-cloudwatch or Terraform.
Problem #2: Why EventBridge can't write to CloudWatch Logs in general
Just like what was said above, for EventBridge to write to CloudWatch there needs to be a resource policy (a policy set on the destination, in this case CloudWatch Logs). However, please note:
If you create a CloudWatch Logs target in the console, a resource policy will be auto-generated for you, but the auto-generated one has a twist:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TrustEventsToStoreLogEvent",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "events.amazonaws.com",
          "delivery.logs.amazonaws.com"
        ]
      },
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:us-east-1:777777777:log-group:/*:*"
    }
  ]
}
You will notice that the resource takes the form /*:*, which means the log group name has to start with / if you are going to use the auto-generated policy.
So if your log group is not in a format like /event/myloggroup/, the policy will not help you.
So for example:

Target Log Group Name | ARN | Does it work?
event-bridge-rule2 | arn:aws:logs:us-east-1:281458815962:log-group:event-bridge-rule2:* | No: the ARN is missing the leading /
/aws/events/helpme | arn:aws:logs:us-east-1:281458815962:log-group:/aws/events/helpme:* | Works like a charm
My advice is to put in a policy that makes sense to you rather than relying on the automatic one.
Just create a log group with the name /aws/events/<yourgroupname> and it will work fine; also set the action to logs:*.
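For example, with boto3 (the group name is a placeholder; any name starting with / matches the auto-generated /*:* resource pattern):

import boto3

# Create a log group whose name satisfies the auto-generated policy
boto3.client('logs').create_log_group(logGroupName='/aws/events/myloggroup')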
I have viewer-request and origin-response Lambda functions deployed to a CloudFront distribution, which are firing, but not logging to CloudWatch. I have spent a considerable amount of time researching this topic, and have run through all advice from other posts including:
Checking all regions for logs, as I know that the CloudWatch logs will be created in the region in which the Lambda@Edge function runs. No logs in any of them.
I have checked that the AWSServiceRoleForCloudFrontLogger role exists.
Interestingly, when I purposely code an error into one of the Lambda functions, I do get logs created within a group named /aws/cloudfront/LambdaEdge/<cloudfront distribution id> containing error logs; however, there is no output from the console.log statements there.
For the life of me I can't work out how to enable logging of ALL requests, both successes and failures, to CloudWatch, including my console.log() debug statements.
The AWSServiceRoleForCloudFrontLogger contains a single policy AWSCloudFrontLogger:
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "arn:aws:logs:*:*:/aws/cloudfront/*"
}
]
}
EDIT:
Below is the policy suggested by AWS support. I can confirm this worked and resolved the issue.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:*:*:*"
      ]
    }
  ]
}
The issue most probably is that the Lambda function does not have permission to write logs to CloudWatch.
Can you double-check the Lambda function's execution role permissions?
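If you want to check from a script rather than the console, a small boto3 sketch along these lines lists the policies attached to the function's execution role (the function name is a placeholder):

import boto3

lam = boto3.client('lambda')
iam = boto3.client('iam')

# Resolve the execution role from the function configuration
role_arn = lam.get_function_configuration(FunctionName='my-function')['Role']
role_name = role_arn.split('/')[-1]

for policy in iam.list_attached_role_policies(RoleName=role_name)['AttachedPolicies']:
    print(policy['PolicyName'])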
Related link: Can't get AWS Lambda function to log (text output) to CloudWatch
Explanation
So there are two kinds of logs here, and hence you have to provide CloudWatch permissions in two different places.
Logs that you put in your Lambda function (using console.log): since these logs are published to CloudWatch by the function itself, the function's execution role needs permission to write to CloudWatch Logs. This is true irrespective of who triggers the Lambda function.
Now comes Lambda@Edge: sometimes you might end up modifying the request/response in a way that is not valid as far as CloudFront is concerned. In these scenarios only CloudFront knows that you messed up (your Lambda function doesn't), and it publishes this knowledge in the form of logs to CloudWatch. Since this is a different entity, it needs its own permissions to push logs to CloudWatch (which you had provided via AWSServiceRoleForCloudFrontLogger).
As one of the steps for the previous problem I've faced, I need to see the logs for a Lambda@Edge function, but I cannot find them anywhere.
According to the documentation on Lambda@Edge:
When you review CloudWatch log files or metrics when you're troubleshooting errors, be aware that they are displayed or stored in the Region closest to the location where the function executed. So, if you have a website or web application with users in the United Kingdom, and you have a Lambda function associated with your distribution, for example, you must change the Region to view the CloudWatch metrics or log files for the London AWS Region.
The Lambda function I'm trying to find the logs for is located in us-east-1 (mandated by CloudFront, since it is used as a distribution's event handler), while I'm in Canada, so I assume the closest region would be ca-central-1. But since I'm not developing in ca-central-1, I don't have any log groups in that region. In any case, I don't see the logs for my Lambda@Edge function. For the sake of completeness, I checked all the regions and couldn't find any trace of logs for the Lambda function. To be clear, I'm looking for a log group with the Lambda function's name.
I'm positive that there should be logs, since I have console.log() in my code, and I can also download the requested content (the Lambda function is in charge of selecting the S3 bucket holding the contents), which means the Lambda function was successfully executed. If it wasn't, I would not have been able to get the S3 content.
Where can I find the logs for my Lambda#Edge function?
For anyone else who might be facing the same issue, use the script mentioned in the same documentation page to find your log groups:
FUNCTION_NAME=function_name_without_qualifiers
for region in $(aws --output text ec2 describe-regions | cut -f 4)
do
    for loggroup in $(aws --output text logs describe-log-groups --log-group-name "/aws/lambda/us-east-1.$FUNCTION_NAME" --region $region --query 'logGroups[].logGroupName')
    do
        echo $region $loggroup
    done
done
Create a file, paste the above script into it, replace function_name_without_qualifiers with your function's name, make it executable, and run it. It will find the regions and log groups for your Lambda@Edge function. The lesson learned here is that the log group is not named like ordinary log groups. Instead it follows this structure:
/aws/lambda/${region}.${function_name}
where ${region} is the region the function was created in (us-east-1 for Lambda@Edge), not the region where it executed.
It seems that the log group naming format has also changed. When I tried the script, it returned nothing. But with "/aws/lambda/$FUNCTION_NAME" instead of "/aws/lambda/us-east-1.$FUNCTION_NAME", the script returns the list of groups with the following structure:
${region} /aws/lambda/${function_name}
Last but not least, it would be good to check the Lambda function's role permissions.
In my case that was the problem, because by default it allowed writing logs to only one region (us-east-1).
Here is how my policy looks now:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:CreateLogGroup",
      "Resource": "arn:aws:logs:*:{account-id}:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:*:{account-id}:log-group:/aws/lambda/{function-name}:*"
      ]
    }
  ]
}
{account-id} is your AWS account ID.
I'm currently working on a Lambda@Edge function.
I cannot find any logs in CloudWatch or any other debugging options.
When running the lambda using the "Test" button, the logs are written to CloudWatch.
When the lambda function is triggered by a CloudFront event the logs are not written.
I'm 100% positive that the event trigger works, as I can see its result.
Any idea how to proceed?
Thanks ahead,
Yossi
1) Ensure you have given the Lambda function permission to send logs to CloudWatch. Below is the AWSLambdaBasicExecutionRole policy, which you need to attach to the execution role you are using for your Lambda function:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
2) Lambda creates CloudWatch Logs log streams in the CloudWatch Logs regions closest to the locations where the function is executed. The log group in each region is named /aws/lambda/us-east-1.function-name, where function-name is the name that you gave to the function when you created it. So ensure you are checking the CloudWatch logs in the correct REGION.
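To illustrate, a rough boto3 sketch that scans every region for the region-prefixed log group described above (the function name is a placeholder):

import boto3

FUNCTION_NAME = 'my-edge-function'  # hypothetical name

ec2 = boto3.client('ec2', region_name='us-east-1')
regions = [r['RegionName'] for r in ec2.describe_regions()['Regions']]

for region in regions:
    logs = boto3.client('logs', region_name=region)
    # Lambda@Edge log groups carry the function's home region as a prefix
    groups = logs.describe_log_groups(
        logGroupNamePrefix='/aws/lambda/us-east-1.' + FUNCTION_NAME
    )['logGroups']
    for group in groups:
        print(region, group['logGroupName'])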
In case anyone finds it useful: the fact that AWS prefixes your function name (which breaks the built-in "CloudWatch at a glance" dashboard), and that Lambda@Edge runs across multiple regions, inspired me to create this CloudWatch dashboard template that gives you similar standard monitoring for all regions in one dashboard.
I am trying to use AWS Glue to run an ETL job that fetches data from Redshift to S3.
When I run a crawler it successfully connects to Redshift and fetches schema information. Relevant logs are created under a log group aws-glue/crawlers.
When I run the ETL job, it is supposed to create log streams under the log groups aws-glue/jobs/output and aws-glue/jobs/error, but it fails to create them, and eventually the job fails too.
(I am using the AWS managed AWSGlueServiceRole policy for the Glue service.)
Since it does not produce any logs, it is difficult to identify the reason for the ETL job failure. I would appreciate it if you could help me resolve this issue.
Most of the time this has to do with your AWS service not having the correct permissions (yes, even for just writing logs!).
Adding something like this to the Glue role might do the trick:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:::*"
    }
  ]
}
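If you prefer to script the change, something along these lines attaches the statement above to the Glue role as an inline policy via boto3 (the role and policy names are placeholders):

import json
import boto3

iam = boto3.client('iam')

policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Action': ['logs:CreateLogGroup', 'logs:CreateLogStream', 'logs:PutLogEvents'],
        'Resource': 'arn:aws:logs:*:*:*',
    }],
}

# Attach the log-writing permissions as an inline policy on the Glue role
iam.put_role_policy(
    RoleName='MyGlueServiceRole',     # hypothetical role name
    PolicyName='GlueCloudWatchLogs',  # hypothetical policy name
    PolicyDocument=json.dumps(policy),
)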
I would make sure that your endpoint and VPC are set up correctly via these instructions:
http://docs.aws.amazon.com/glue/latest/dg/setup-vpc-for-glue-access.html
I had my inbound rules set up correctly but did not set up the outbound rules, which I think was the issue.