I want to connect my AWS S3 bucket with my AWS Lambda function. I created my S3 bucket and named it xyz. While creating an event source on my AWS Lambda function, it shows the following error:
There was an error creating the event source mapping: Your bucket must be in the same region as the function.
While going through this link, I found out that I needed to set up an event notification on the S3 bucket for the AWS Lambda function. But I am unable to set up the event notification because the Events tab of the S3 bucket's properties does not show a setting for an AWS Lambda function.
My policy document for the IAM role I created for Lambda is as follows:
{
  "Version": "VersionNumber",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::xyz/*"
      ]
    }
  ]
}
Can somebody let me know why I am unable to create an event for AWS Lambda for an operation on the S3 bucket?
Thanks to John's comment, I was able to resolve this issue.
This problem occurs when (as the error message clearly states) the Lambda function and the S3 bucket reside in different regions.
To create the Lambda function in the same region as the S3 bucket, you first need to know the bucket's region.
To view the region of an Amazon S3 bucket, click the bucket in the management console, then go to the Properties tab; the region will be displayed there.
Now that you know your target region, you can switch to it in the AWS console by selecting it from the region dropdown in the top-right corner, just before the Support menu.
Once your region matches that of the S3 bucket, creating a new Lambda function there will solve the issue.
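If you prefer to check the bucket's region programmatically rather than in the console, here is a minimal Node.js sketch using the AWS SDK's getBucketLocation call (the bucket name xyz is taken from the question):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// An empty LocationConstraint means the bucket lives in us-east-1
s3.getBucketLocation({ Bucket: 'xyz' }, (err, data) => {
  if (err) console.log(err);
  else console.log(data.LocationConstraint || 'us-east-1');
});

Create the Lambda function in whatever region this prints.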
I was able to restrict access to private content in my bucket using CloudFront, but now I'm unable to read from the bucket with Elemental MediaConvert. Is there any way to allow only the MediaConvert service and restrict everything else?
Here is my bucket policy:
{
  "Version": "2008-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3U7X28UWXXXXX"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::myawsbucket5696/*"
    }
  ]
}
Any help is appreciated. Thank you.
The 3403 error is 'HTTP Access Forbidden'. MediaConvert cannot read that file. Is it perhaps owned by a user other than the bucket owner? The role within your account which MediaConvert assumes when running jobs on your behalf will be subject to whatever access restrictions exist on objects within your source S3 bucket.
You can test and debug this file access outside of MediaConvert by assuming the designated role in your AWS console and then using the CloudShell prompt. Use the s3api command to attempt to get metadata about the object in question; this should succeed if your role has permission to touch the object. For example: aws s3api head-object --bucket mynewbucket --key myfile.mov
FYI you can see all MediaConvert error codes at https://docs.aws.amazon.com/mediaconvert/latest/ug/mediaconvert_error_codes.html
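If you'd rather script the same check, here is a minimal Node.js sketch that assumes the MediaConvert job role and then calls headObject on the source file. The role ARN, account ID, and object key below are placeholders, not values from your account:

const AWS = require('aws-sdk');

// Assume the role that MediaConvert uses for jobs (placeholder ARN),
// then try to read the object's metadata with those credentials.
const sts = new AWS.STS();
sts.assumeRole({
  RoleArn: 'arn:aws:iam::111122223333:role/MediaConvertJobRole', // placeholder
  RoleSessionName: 'mediaconvert-access-check'
}, (err, data) => {
  if (err) return console.log(err);
  const s3 = new AWS.S3({
    accessKeyId: data.Credentials.AccessKeyId,
    secretAccessKey: data.Credentials.SecretAccessKey,
    sessionToken: data.Credentials.SessionToken
  });
  // Succeeds only if the assumed role can reach the object, same as head-object in the CLI
  s3.headObject({ Bucket: 'myawsbucket5696', Key: 'myfile.mov' }, (err2, meta) => {
    if (err2) console.log(err2);
    else console.log(meta);
  });
});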
I have an Ubuntu EC2 instance with the CloudWatch agent running. The agent is able to push the logs to CloudWatch as expected, but I am unable to export the logs to S3.
The instance role has AmazonSSMManagedInstanceCore and CloudWatchAgentServerPolicy attached, as described in the documentation.
At this point, I am not sure what policy needs to be assigned.
I also added log policy to write to S3 bucket.
All this is being done in terraform.
Can someone help me solve this pls?
Thanks.
You can add an inline policy to your instance role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<your-bucket-name>/*"
    }
  ]
}
Depending on the bucket setup, other permissions may be required, e.g. for KMS encryption.
UPDATE
If you want to automatically export your logs from CloudWatch Logs to S3, you have to set up a subscription filter with Amazon Kinesis Data Firehose. This is fully independent of your instance role and the instance itself.
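As a rough sketch of that subscription filter step, assuming a Firehose delivery stream that writes to your bucket and a role that CloudWatch Logs can assume already exist (the log group name and ARNs below are placeholders):

const AWS = require('aws-sdk');
const logs = new AWS.CloudWatchLogs();

// Stream everything from the agent's log group into an existing Firehose
// delivery stream that delivers to S3.
logs.putSubscriptionFilter({
  logGroupName: '/my/agent/log-group',  // placeholder
  filterName: 'export-to-s3',
  filterPattern: '',                    // empty pattern forwards all log events
  destinationArn: 'arn:aws:firehose:us-east-1:111122223333:deliverystream/logs-to-s3', // placeholder
  roleArn: 'arn:aws:iam::111122223333:role/CWLtoFirehoseRole' // role CloudWatch Logs assumes to write to Firehose
}, (err) => {
  if (err) console.log(err);
  else console.log('Subscription filter created');
});

The same resources can of course be declared in Terraform instead; the SDK call just shows which pieces have to exist.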
I'm getting this error message when trying to see the log file in AWS CloudWatch for my AWS Lambda function.
An error occurred while describing log streams.
The specified log group does not exist.
Log group does not exist
The specific log group: /aws/lambda/xxxxx does not exist in this account or region.
By the way, I'm using the Singapore region.
Make sure that your Lambda function's execution role has sufficient permissions to write logs to CloudWatch, and that the log group resource in the IAM policy includes your function's name.
In the IAM console, review and edit the IAM policy for the execution role to make sure that:
The write actions logs:CreateLogGroup, logs:CreateLogStream, and logs:PutLogEvents are allowed. These permissions should be attached to the Lambda function's execution role.
Note: If you don't need custom permissions for your function, you can attach the managed policy AWSLambdaBasicExecutionRole, which allows Lambda to write logs to CloudWatch.
The AWS Region specified in the Amazon Resource Name (ARN) is the same as your Lambda function's Region.
The log-group resource includes your Lambda function name. For example, if your function is named myLambdaFunction, the log group is /aws/lambda/myLambdaFunction.
Here is an example of the permissions in JSON format:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:CreateLogGroup",
      "Resource": "arn:aws:logs:region:accountId:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:region:accountId:log-group:/aws/lambda/functionName:*"
      ]
    }
  ]
}
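To double-check from code whether the log group actually exists in the Singapore region (ap-southeast-1), here is a small sketch using describeLogGroups; the function name follows the myLambdaFunction example above:

const AWS = require('aws-sdk');
// Singapore region, as mentioned in the question
const logs = new AWS.CloudWatchLogs({ region: 'ap-southeast-1' });

// Lists log groups whose names start with the Lambda function's log group prefix
logs.describeLogGroups({ logGroupNamePrefix: '/aws/lambda/myLambdaFunction' }, (err, data) => {
  if (err) console.log(err);
  else console.log(data.logGroups.map(g => g.logGroupName));
});

If this prints an empty list, the log group has not been created yet, which usually means the function has never successfully written logs with this role in this region.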
I have a Lambda function that is set up as a trigger on an S3 bucket. It gets called correctly, but the Lambda function fails when calling S3.getObject.
Do I need to separately set permissions for the lambda function in order to allow it to call getObject on the bucket that triggered the event?
UPDATE:
There seems to be a bug with AWS Amplify that means the S3Trigger bucket permissions get replaced by any API permissions you add. They both create a policy using the same name and it seems whichever gets created last ends up replacing the previous one.
I worked around this by renaming the S3 trigger policy.
Yes, you need to provide a Lambda execution role with access to your Amazon S3 bucket.
You can use a policy similar to this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ExampleStmt",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::AWSDOC-EXAMPLE-BUCKET/*"
      ]
    }
  ]
}
See https://aws.amazon.com/premiumsupport/knowledge-center/lambda-execution-role-s3-bucket/
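For reference, here is a minimal Node.js handler sketch that reads the object which triggered the event; the bucket and key come from the S3 event record, and error handling is kept to a bare minimum:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Triggered by the S3 event; reads the object that caused the trigger.
exports.handler = async (event) => {
  const record = event.Records[0];
  const bucket = record.s3.bucket.name;
  // Object keys in S3 events are URL-encoded, with '+' used for spaces
  const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

  // Fails with Access Denied unless the execution role allows s3:GetObject on this bucket
  const data = await s3.getObject({ Bucket: bucket, Key: key }).promise();
  console.log(`Read ${data.ContentLength} bytes from ${bucket}/${key}`);
  return data.ContentType;
};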
There is a limit of 100 buckets per AWS account. My application is creating buckets when certain conditions are met. Is there a mechanism to monitor the number of buckets created in my account? I would like to alarm/get notified before I reach the 100 bucket limit.
Edit: The plan is to create a prefix per customer and grant access to the prefix using a resource policy. The customers would upload objects only to the prefix they have access to. We would update the resource policy every time we create a new prefix; a sample policy is shown below. Once we hit the size limit on the bucket's resource policy, we would then need to create a new bucket.
"Statement": [
{
"Sid": "AllowGetObject",
"Effect": "Allow",
"Principal": {
"AWS":"123456789012"
},
"Action": "s3:PutObject",
"Resource": [
"arn:aws:s3:::TestBucketName/123456789012/*",
"arn:aws:s3:::TestBucketName/123456789012"
]
}
]
Unfortunately, for S3 there is no AWS-backed solution that performs this kind of monitoring out of the box.
To do this you would need to create your own solution. The below is one suggestion for covering this problem (a sketch of the first step follows):
Use a Lambda function to call the list-buckets function, counting the total number of buckets in your account. Push the value to CloudWatch as a custom metric.
Create a CloudWatch alarm for this metric based on a specific threshold.
Create a Lambda function and use the list-service-quotas function to get your service quotas for S3 buckets. Use this to update the alarm thresholds.
Set both of these Lambda functions on a scheduled CloudWatch event.
For other service quotas, you might be able to take advantage of the Trusted Advisor API if you are on a Business or Enterprise support plan; however, this only covers specific quotas for specific services.
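Here is a minimal Node.js sketch of the first step, counting the buckets and publishing the total as a custom metric (the namespace Custom/S3 and metric name BucketCount are arbitrary placeholders):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const cloudwatch = new AWS.CloudWatch();

// Count the buckets in the account and publish the total as a custom CloudWatch metric
s3.listBuckets({}, (err, data) => {
  if (err) return console.log(err);
  cloudwatch.putMetricData({
    Namespace: 'Custom/S3',  // placeholder namespace
    MetricData: [{ MetricName: 'BucketCount', Value: data.Buckets.length, Unit: 'Count' }]
  }, (err2) => {
    if (err2) console.log(err2);
    else console.log(`Published bucket count: ${data.Buckets.length}`);
  });
});

A CloudWatch alarm on Custom/S3 BucketCount can then notify you before you approach the quota.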
If your application is running on Node.js, you can get the number of buckets using the following code:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// List all buckets in the account and log the total count
s3.listBuckets({}, (err, data) => {
  if (err) console.log(err);
  else console.log(data.Buckets.length);
});
It appears that:
You are providing customers with credentials associated with an IAM User (not a good practice because generally IAM User credentials are for your internal staff, not external entities)
You want to allow customers to upload data to Amazon S3
I would recommend:
Use one Amazon S3 bucket
Allow customers to access their own folder (Prefix) within the bucket
This can be done by creating a bucket policy that uses IAM Policy Variables, which can automatically insert the username into the policy. This allows one policy to apply differently for every user.
Here is an example from IAM policy elements: Variables and tags - AWS Identity and Access Management:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::mybucket"],
      "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}}
    },
    {
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::mybucket/${aws:username}/*"]
    }
  ]
}
This way, users can access their own folder, but cannot access other users' folders.
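As a quick illustration of what that looks like from a customer's side, here is a sketch using the example bucket mybucket and a hypothetical IAM user named alice:

const AWS = require('aws-sdk');
// Uses the credentials of the IAM user 'alice' (hypothetical customer user)
const s3 = new AWS.S3();

// Allowed: the key is under this user's own prefix
s3.putObject({ Bucket: 'mybucket', Key: 'alice/report.csv', Body: 'example content' }, (err) => {
  if (err) console.log(err); // a key under another user's prefix would fail with Access Denied
  else console.log('Uploaded to alice/report.csv');
});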