I've been trying in vain to see logs for my Lambda function. No matter what I try, I see the error described below.
To be clear, the lambda function runs properly. I just can't see the logs at all.
I've recreated the function multiple times to make sure it wasn't me accidentally mucking with a setting that disabled logging.
My steps:
From the AWS Lambda function page, create a new function. I'm using nodejs 8.10, but it seems to fail even if I use a 6.x version.
Upload a zip file with my function (including the node_modules directory, package.json and package-lock.json as well) to S3 into testbucket with the filename thumbnails.zip.
Use this command to publish my lambda function from S3: aws lambda update-function-code --function-name transcode-v2 --s3-bucket mytestbucket --s3-key thumbnails.zip.
I can test my function with sample data and the test button.
I can also invoke it from the CLI and it seems to "work" (in that it runs).
I always see this message when I go to CloudWatch Logs: There was an error loading Log Streams. Please try again by refreshing this page. I've tried recreating the function twice, and this does not fix it.
Anyone know what is wrong here? The function seems to work under test (meaning I see logs inside the test logging dialog) and when I invoke it from the command line. But nothing ever gets into the CloudWatch Logs page except for that error.
I can see that invocations are being triggered from AWS.
When an AWS Lambda function is created, you must provide an IAM role that will be used by the function. The permissions associated with the role grant access to the AWS services and resources the function requires.
There is a default AWSLambdaBasicExecutionRole policy that provides:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "*"
}
]
}
These permissions allow the Lambda function to write log information to Amazon CloudWatch Logs.
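As a quick local sanity check (this is just a sketch of how wildcard action matching in such a policy behaves, not an AWS API), you can test whether a policy document's Allow statements cover the log actions:

```javascript
// Local sketch: does any Allow statement in a policy document cover the
// given action? Supports trailing-* wildcards such as "logs:*" or "*".
function policyAllows(policy, action) {
  return policy.Statement.some((stmt) => {
    if (stmt.Effect !== 'Allow') return false;
    const actions = [].concat(stmt.Action); // Action may be string or array
    return actions.some(
      (a) => a === action || (a.endsWith('*') && action.startsWith(a.slice(0, -1)))
    );
  });
}

// The AWSLambdaBasicExecutionRole document from above:
const basicExecution = {
  Version: '2012-10-17',
  Statement: [{
    Effect: 'Allow',
    Action: ['logs:CreateLogGroup', 'logs:CreateLogStream', 'logs:PutLogEvents'],
    Resource: '*',
  }],
};

console.log(policyAllows(basicExecution, 'logs:PutLogEvents')); // true
console.log(policyAllows(basicExecution, 's3:GetObject'));      // false
```

Note this sketch ignores Resource and Deny evaluation; for a full answer the IAM Policy Simulator is the authoritative tool.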
There are other pre-built policies too, such as AWSLambdaExecute:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:*"
],
"Resource": "arn:aws:logs:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::*"
}
]
}
So, either use one of these pre-provided policies, or add similar permissions to the role that your Lambda function is using.
Related
I have a Lambda in AWS, and in the console under Monitoring it shows a warning.
When I click the edit button, it says: The required permissions were not found. The Lambda console will attempt to add them to the execution role.
but my lambda already has this policy in its role:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"xray:PutTraceSegments",
"xray:PutTelemetryRecords",
"xray:GetSamplingRules",
"xray:GetSamplingTargets",
"xray:GetSamplingStatisticSummaries"
],
"Resource": "*",
"Effect": "Allow"
}
]
}
I wonder what else it needs in order to enable active tracing?
$ aws lambda update-function-configuration --function-name my-function \
--tracing-config Mode=Active
I enabled tracing using the above command, along with the permissions mentioned below, and verified the configuration:
$ aws lambda get-function --function-name mylambda | jq .Configuration.TracingConfig
{
"Mode": "Active"
}
And I was able to see the corresponding traces. The strange thing is that the UI kept complaining the whole time, for the latest version as well, saying The required permissions were not found. The Lambda console will attempt to add them to the execution role. even though the permissions were in place. So I am guessing it might be a bug in the UI, unless someone can add more information about it.
Also, as I keep hitting Save, it keeps adding a new policy to the Lambda execution role while still showing the same warning message.
Using AWS Lambda with AWS X-Ray
Only these permissions are needed, as described in the doc.
Lambda needs the following permissions to send trace data to X-Ray. Add them to your function's execution role.
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"xray:PutTraceSegments",
"xray:PutTelemetryRecords"
],
"Resource": "*",
"Effect": "Allow"
}
]
}
This is helpful when we have multiple versions deployed (dev/test/prod, etc.) and want to test a specific version.
Tracing mode is part of the version-specific configuration that is locked when you publish a version of your function. You can't change the tracing mode on a published version.
I have a lambda function that is set up as a trigger on an S3 bucket and it gets called correctly but the lambda function fails when calling S3.getObject.
Do I need to separately set permissions for the lambda function in order to allow it to call getObject on the bucket that triggered the event?
UPDATE:
There seems to be a bug with AWS Amplify that causes the S3Trigger bucket permissions to be replaced by any API permissions you add. Both create a policy using the same name, and it seems whichever gets created last replaces the previous one.
I worked around this by renaming the S3 trigger policy.
Yes, you need to grant the Lambda function's execution role permission to access your Amazon S3 bucket.
You can use a policy similar to this:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ExampleStmt",
"Action": [
"s3:GetObject"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::AWSDOC-EXAMPLE-BUCKET/*"
]
}
]
}
See https://aws.amazon.com/premiumsupport/knowledge-center/lambda-execution-role-s3-bucket/
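With s3:GetObject on the execution role, the handler logic for reading the triggering object reduces to a sketch like the one below. The S3 client is passed in as a parameter so the logic can be exercised without AWS; in a real function you would use `new AWS.S3()` from the aws-sdk, and the event shape follows the standard S3 notification format.

```javascript
// Sketch: read the object that fired the S3 trigger. Assumes the
// execution role allows s3:GetObject on the bucket's objects.
async function getTriggeringObject(event, s3) {
  const record = event.Records[0].s3;
  return s3.getObject({
    Bucket: record.bucket.name,
    // S3 event keys are URL-encoded, with spaces encoded as '+'.
    Key: decodeURIComponent(record.object.key.replace(/\+/g, ' ')),
  }).promise();
}
```

The URL-decoding step matters in practice: a key with spaces or special characters arrives encoded in the event, and calling getObject with the raw value produces a misleading "access denied"/NoSuchKey failure even when permissions are correct.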
I have viewer-request and origin-response Lambda functions deployed to a CloudFront distribution, which are firing, but not logging to CloudWatch. I have spent a considerable amount of time researching this topic, and have run through all advice from other posts including:
Checking all regions for logs, as I know that the CloudWatch logs will be created in the region in which the Lambda@Edge function runs. No logs in any of them.
I have checked that the AWSServiceRoleForCloudFrontLogger role exists.
Interestingly, when I purposefully code an error into one of the Lambda functions, I do get logs created within a group named /aws/cloudfront/LambdaEdge/<cloudfront distribution id> containing error logs, but there is no output from the console.log statements there.
For the life of me, I can't work out how to enable logging of ALL requests, both successes and failures, to CloudWatch, including my console.log() debug statements.
The AWSServiceRoleForCloudFrontLogger contains a single policy AWSCloudFrontLogger:
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "arn:aws:logs:*:*:/aws/cloudfront/*"
}
]
}
EDIT:
Below is the policy suggested by AWS support. I can confirm this worked and resolved the issue.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:*:*:*"
]
}
]
}
The issue most probably is that the Lambda function does not have permission to write its logs to CloudWatch.
Can you double check the Lambda function execution role permissions?
Related Link : Can't get AWS Lambda function to log (text output) to CloudWatch
Explanation
There are two kinds of logs here, so you have to grant CloudWatch permissions in two different places.
First, the logs you write in your Lambda function (using console.log): since these are published by the function itself to CloudWatch, the function's execution role needs permission to write to CloudWatch Logs. This is true irrespective of who triggers the Lambda function.
Second, Lambda@Edge: sometimes you might modify the request/response in a way that is not valid per CloudFront. In these scenarios only CloudFront knows that you messed up (your Lambda function doesn't), and it publishes this knowledge in the form of logs to CloudWatch. Since this is a different entity, it needs its own permission to push logs to CloudWatch (which you had provided via AWSServiceRoleForCloudFrontLogger).
I tried to upload an image using aws-sdk and multer-s3.
In my local environment the upload succeeds, but in the production environment (AWS Lambda) it fails with a 403 Forbidden error.
My AWS credential key and secret key are the same as in my local environment, and I have verified the key in the production environment.
I can't see any difference between the two environments. What am I missing?
I have even tried setting the AWS keys in my router code like below, but it also failed:
AWS.config.accessKeyId = 'blabla';
AWS.config.secretAccessKey = 'blalbla';
AWS.config.region = 'ap-northeast-2';
And here is my policy:
{
"Id": "Policy1536755128154",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1536755126539",
"Action": [
"s3:DeleteObject",
"s3:GetObject",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::elebooks-image/*",
"Principal": "*"
}
]
}
Update the S3 policy attached to your IAM user to something like the below:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:ListAllMyBuckets"
],
"Resource": "arn:aws:s3:::*"
},
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::YOUR-BUCKET",
"arn:aws:s3:::YOUR-BUCKET/*"
]
}
]
}
This is working on my server.
I haven't worked with AWS Lambda but I am familiar with S3. When you're using the AWS SDK in your local environment, you're probably using the root user with default full access, so it will just work.
With Lambda however, according to the following extract from the documentation, you need to make sure that the IAM role you specified when you created the Lambda function has the appropriate permissions to do an s3:putObject to that bucket.
Permissions for your Lambda function – Regardless of what invokes a Lambda function, AWS Lambda executes the function by assuming the IAM role (execution role) that you specify at the time you create the Lambda function. Using the permissions policy associated with this role, you grant your Lambda function the permissions that it needs. For example, if your Lambda function needs to read an object, you grant permissions for the relevant Amazon S3 actions in the permissions policy. For more information, see Manage Permissions: Using an IAM Role (Execution Role).
See Writing IAM policies: How to grant access to an S3 bucket
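To illustrate the role-based path (rather than hardcoding access keys in code): with s3:PutObject granted to the execution role, a direct upload reduces to the sketch below. The client is injected as a parameter so the snippet runs without AWS; in the real function it would be `new AWS.S3()` from the aws-sdk, picking up the role's credentials automatically.

```javascript
// Sketch: upload a buffer to S3 under an execution role that allows
// s3:PutObject on the bucket's objects (no explicit credentials needed --
// Lambda assumes the execution role for you).
async function uploadImage(s3, bucket, key, body) {
  await s3.putObject({
    Bucket: bucket,
    Key: key,
    Body: body,
    ContentType: 'image/jpeg',
  }).promise();
  return `s3://${bucket}/${key}`;
}
```

This is also why setting AWS.config.accessKeyId in the router, as tried above, is a dead end inside Lambda: the execution role's policy is what decides access.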
I have several S3 buckets belonging to different clients. I am using the AWS SDK for PHP in my application to upload photos to an S3 bucket. I am using the AWS SDK for Laravel 4, to be exact, but I don't think the issue is with this specific implementation.
The problem is that unless I give the AWS user my server is using full S3 access, it will not upload photos to the bucket; it just says Access Denied! I first tried giving full access only to the bucket in question, then I realized I should add the ability to list all buckets, because that is probably what the SDK does to confirm the credentials, but still no luck.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:ListAllMyBuckets"
],
"Resource": "arn:aws:s3:::*"
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::clientbucket"
]
}
]
}
It is a big security concern for me that this application has access to all S3 buckets to work.
Jeremy is right, it's permissions-related and not specific to the SDK, so far as I can see here. You should certainly be able to scope your IAM policy down to just what you need here -- we limit access to buckets by varying degrees often, and it's just an issue of getting the policy right.
You may want to try using the AWS Policy Simulator from within your account. (That link will take you to an overview; the simulator itself is here.) The policy generator is also often helpful.
As for the specific policy above, I think you can drop the second statement, and the last one (the one scoped to your specific bucket) may benefit from some wildcard actions, since an overly narrow action list may be what's causing the issue:
"Action": [
"s3:Delete*",
"s3:Get*",
"s3:List*",
"s3:Put*"
]
That basically gives super powers to this account, but only for the one bucket.
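One more thing worth checking, since it is a common cause of Access Denied here: object-level actions (s3:PutObject, s3:GetObject, s3:DeleteObject) match object ARNs, not the bucket ARN, so the bucket-scoped statement should list both the bucket and bucket/* resources. A sketch of the merged statement:

```json
{
  "Effect": "Allow",
  "Action": [
    "s3:Delete*",
    "s3:Get*",
    "s3:List*",
    "s3:Put*"
  ],
  "Resource": [
    "arn:aws:s3:::clientbucket",
    "arn:aws:s3:::clientbucket/*"
  ]
}
```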
I would also recommend creating an IAM server role if you're using a dedicated instance for this application/client. That will make things even easier in the future.