I have a Lambda function that is set up as a trigger on an S3 bucket. It gets invoked correctly, but the function fails when calling S3.getObject.
Do I need to separately set permissions for the lambda function in order to allow it to call getObject on the bucket that triggered the event?
UPDATE:
There seems to be a bug in AWS Amplify that causes the S3Trigger bucket permissions to be replaced by any API permissions you add. Both create a policy with the same name, and whichever is created last ends up replacing the previous one.
I worked around this by renaming the S3 trigger policy.
Yes, you need to provide a Lambda execution role with access to your Amazon S3 bucket.
You can use a policy similar to this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ExampleStmt",
            "Action": [
                "s3:GetObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::AWSDOC-EXAMPLE-BUCKET/*"
            ]
        }
    ]
}
See https://aws.amazon.com/premiumsupport/knowledge-center/lambda-execution-role-s3-bucket/
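For reference, here is a minimal sketch of a Node.js handler that reads the object which fired the trigger, assuming the execution role carries the s3:GetObject policy above (the bucket and key come from the event itself, so nothing here is specific to your setup):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
    const record = event.Records[0];
    const bucket = record.s3.bucket.name;
    // Object keys arrive URL-encoded in S3 event notifications; decode before use.
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    const object = await s3.getObject({ Bucket: bucket, Key: key }).promise();
    console.log(`Read ${object.ContentLength} bytes from ${bucket}/${key}`);
    return object.ContentLength;
};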
From the AWS tutorial page on configuring a lifecycle hook:
Before you create a Lambda function, you must first create an execution role and a permissions policy to allow Lambda to complete lifecycle hooks.
What is risky or special about completing lifecycle hooks that makes an execution role and permissions necessary?
I can't see what is qualitatively different from anything else we configure in EC2. Everything is risky, yet elsewhere we don't need to set up roles and permissions.
"completing lifecycle hooks" is actually an API call that your lambda should execute against your ASG:
complete_lifecycle_action - Completes the lifecycle action for the specified token or instance with the specified result.
So your lambda execution role must have permissions to perform such an action. For example:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "autoscaling:CompleteLifecycleAction",
"Resource": "*"
}
]
}
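As a hedged sketch, a Lambda handler receiving the lifecycle-hook notification would complete the action roughly like this (the event fields below match what EventBridge delivers for lifecycle actions; adjust if your hook notifies through SNS instead):

const AWS = require('aws-sdk');
const autoscaling = new AWS.AutoScaling();

exports.handler = async (event) => {
    // EventBridge delivers the hook details under event.detail.
    const detail = event.detail || event;
    // ... perform your custom launch/terminate logic here ...
    await autoscaling.completeLifecycleAction({
        AutoScalingGroupName: detail.AutoScalingGroupName,
        LifecycleHookName: detail.LifecycleHookName,
        LifecycleActionToken: detail.LifecycleActionToken,
        LifecycleActionResult: 'CONTINUE' // or 'ABANDON' on failure
    }).promise();
};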
I am using the JavaScript SDK in a Lambda function to copy a file from a source account to the current account, where my Lambda lives. I assume a role for cross-account access to the source account's S3 bucket before calling the copyObject API, but I'm getting Access Denied! Here is my cross-account role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::sourceBucket/*"
]
}
]
}
and here are my Lambda permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::destinationbucket/*",
"Effect": "Allow"
},
{
"Action": [
"sts:*"
],
"Resource": "*",
"Effect": "Allow"
}
]
}
I think that when I assume the cross-account role I give up the Lambda's own permissions, and then I cannot copy the file to the destination. Any help is much appreciated.
You appear to have:
A source bucket (Bucket-A) in Account-A
A destination bucket (Bucket-B) in Account-B
An AWS Lambda function in Account-B
An IAM Role (Role-A) in Account-A that the Lambda function can assume
Your requirement is to have the Lambda function copy objects from Bucket-A to Bucket-B.
When using the CopyObject command, the credentials must have:
Read permissions on Bucket-A
Write permissions on Bucket-B
However, while Role-A does have read permissions on Bucket-A, it does not have permission to write to Bucket-B.
Therefore, you have two choices:
Option 1: Add a Bucket Policy to Bucket-B that grants write permissions to Role-A, or
Option 2: Instead of using Role-A, the administrator of Bucket-A in Account-A can grant read permissions for Bucket-A to the IAM Role being used by the Lambda function by creating a Bucket Policy on Bucket-A. That is, the Lambda function does not assume Role-A; it just uses its own role to read directly from Bucket-A.
Option 2 is better because it involves fewer moving parts: there is no need to assume a role. I suggest you try this method before using the AssumeRole method.
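For Option 2, the administrator of Account-A would put a bucket policy along these lines on Bucket-A (the account ID and role name are placeholders for the Lambda function's real execution role):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::ACCOUNT-B-ID:role/lambda-execution-role"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::sourceBucket/*"
        }
    ]
}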
If you do wish to continue using Role-A, then please note that the CopyObject() call will need to set the ACL to bucket-owner-full-control. If this is not done, Account-B will not have permission to access or delete the copied objects. (If you use the second method, the objects are copied using Account-B credentials, so this is not required.)
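Putting that together for the Role-A path, the copy would look roughly like this (a sketch only: the role ARN is a placeholder, the bucket names are the ones from your question, and it assumes Bucket-B's bucket policy already grants Role-A s3:PutObject):

const AWS = require('aws-sdk');

exports.handler = async () => {
    // Swap the Lambda's own credentials for temporary Role-A credentials.
    const sts = new AWS.STS();
    const assumed = await sts.assumeRole({
        RoleArn: 'arn:aws:iam::ACCOUNT-A-ID:role/Role-A',
        RoleSessionName: 'cross-account-copy'
    }).promise();

    const s3 = new AWS.S3({
        accessKeyId: assumed.Credentials.AccessKeyId,
        secretAccessKey: assumed.Credentials.SecretAccessKey,
        sessionToken: assumed.Credentials.SessionToken
    });

    await s3.copyObject({
        CopySource: '/sourceBucket/myfile.txt',
        Bucket: 'destinationbucket',
        Key: 'myfile.txt',
        ACL: 'bucket-owner-full-control' // so Account-B owns the copied object
    }).promise();
};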
Bottom line: for your described scenario involving Role-A, add a Bucket Policy to Bucket-B that grants write permissions to Role-A.
I tried to upload an image using aws-sdk and multer-s3.
In my local environment the upload succeeded, but in the production environment (AWS Lambda) it fails with a 403 Forbidden error.
My AWS access key and secret key are the same as in the local environment, and I verified the key in the production environment successfully.
I can't see any difference between the two environments. What am I missing?
I have even tried setting the AWS keys in my router code as below, but it also failed.
AWS.config.accessKeyId = 'blabla';
AWS.config.secretAccessKey = 'blalbla';
AWS.config.region = 'ap-northeast-2';
and here is my policy:
{
"Id": "Policy1536755128154",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1536755126539",
"Action": [
"s3:DeleteObject",
"s3:GetObject",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::elebooks-image/*",
"Principal": "*"
}
]
}
Update the S3 bucket policy attached to your IAM user to match the policy below:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:ListAllMyBuckets"
],
"Resource": "arn:aws:s3:::*"
},
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::YOUR-BUCKET",
"arn:aws:s3:::YOUR-BUCKET/*"
]
}
]
}
It's working on my server.
I haven't worked with AWS Lambda but I am familiar with S3. When you're using the AWS SDK in your local environment, you're probably using the root user with default full access, so it will just work.
With Lambda however, according to the following extract from the documentation, you need to make sure that the IAM role you specified when you created the Lambda function has the appropriate permissions to perform s3:PutObject on that bucket.
Permissions for your Lambda function – Regardless of what invokes a Lambda function, AWS Lambda executes the function by assuming the IAM role (execution role) that you specify at the time you create the Lambda function. Using the permissions policy associated with this role, you grant your Lambda function the permissions that it needs. For example, if your Lambda function needs to read an object, you grant permissions for the relevant Amazon S3 actions in the permissions policy. For more information, see Manage Permissions: Using an IAM Role (Execution Role).
See Writing IAM policies: How to grant access to an S3 bucket
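As a sanity check, a multer-s3 setup that relies purely on the execution role (no hardcoded keys) would look something like this sketch; the bucket and region are taken from your question, and the key naming is just an example:

const AWS = require('aws-sdk');
const multer = require('multer');
const multerS3 = require('multer-s3');

// No explicit credentials: on Lambda, the SDK automatically picks up the
// execution role's temporary credentials, so the role's policy is what matters.
const s3 = new AWS.S3({ region: 'ap-northeast-2' });

const upload = multer({
    storage: multerS3({
        s3: s3,
        bucket: 'elebooks-image',
        key: (req, file, cb) => cb(null, Date.now() + '-' + file.originalname)
    })
});

// Then use upload.single('image') as middleware in your router.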
I've been trying in vain to see logs for my Lambda function. No matter what I do, I see the error quoted below.
To be clear, the lambda function runs properly. I just can't see the logs at all.
I've recreated the function multiple times to make sure it wasn't me accidentally mucking with a setting that disabled logging.
My steps:
From the AWS Lambda function page, create a new function. I'm using nodejs 8.10, but it seems to fail even if I use a 6.x version.
Upload a zip file with my function (including the node_modules directory, package.json and package-lock.json as well) to S3 into testbucket with the filename thumbnails.zip.
Use this command to publish my lambda function from S3: aws lambda update-function-code --function-name transcode-v2 --s3-bucket mytestbucket --s3-key thumbnails.zip.
I can test my function with sample data and the test button.
I can also invoke it from the CLI and it seems to "work" (in that it runs)
I always see this message when I go to CloudWatch Logs: There was an error loading Log Streams. Please try again by refreshing this page. I've tried recreating the function twice and this does not fix it.
Anyone know what is wrong here? The function seems to work under test (meaning I see logs inside the test logging dialog) and when I invoke it from the command line. But nothing ever appears on the CloudWatch Logs page except that error.
I can see that invocations are being triggered from AWS.
When an AWS Lambda function is created, you must provide an IAM Role that will be used by the function. The permissions associated with the role grant access to the AWS services and resources that the Lambda function requires.
There is a default AWSLambdaBasicExecutionRole managed policy that provides:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "*"
}
]
}
These permissions allow the Lambda function to write log information to Amazon CloudWatch Logs.
There are other managed policies available too, such as AWSLambdaExecute:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:*"
],
"Resource": "arn:aws:logs:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::*"
}
]
}
So, either use one of these AWS-managed policies, or add similar permissions to the role that your Lambda function is using.
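For example, to attach the AWS-managed basic execution policy to your function's role from the CLI (the role name is a placeholder):

aws iam attach-role-policy \
    --role-name my-lambda-role \
    --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole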
I want to connect my AWS S3 bucket to my AWS Lambda function. I created my S3 bucket and named it xyz. While creating an event source on my AWS Lambda function, I get the following error:
There was an error creating the event source mapping: Your bucket must be in the same region as the function.
While going through this link, I found out that I needed to set up an event notification on the S3 bucket for the AWS Lambda function. But I am unable to set up the event notification, as the Events tab of the S3 bucket's properties shows no settings for an AWS Lambda function.
My Policy document for the IAM role I created for Lambda is as follows
{
"Version": "VersionNumber",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "arn:aws:logs:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::xyz/*"
]
}
]
}
Can somebody let me know why I am unable to create an event for AWS Lambda for an operation on an S3 bucket?
Thanks to John's comment, I was able to resolve this issue.
This problem occurs when, as the error message clearly states, the Lambda function and the S3 bucket reside in different regions.
To create the Lambda function in the same region as the S3 bucket, you need to know the bucket's region.
To view the region of an Amazon S3 bucket, click on the bucket in the management console, then go to the Properties tab; the region will be displayed.
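Alternatively, you can look it up from the CLI; this returns the bucket's LocationConstraint (an empty or null value means us-east-1):

aws s3api get-bucket-location --bucket xyz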
Now that you know your target region, you can switch to it in the AWS console by selecting it from the region dropdown in the top-right corner, just before the Support menu.
Once you change your region to that of the S3 bucket, creating a new Lambda function there will solve the issue.