What is the correct public access config for an S3 bucket accessed from Lambda with HTTP proxy integration?

I have an S3 bucket and I want to upload a file to it from a Lambda function that is invoked through an API call, but I don't know the correct setting for the bucket's Block Public Access configuration. Currently none of the four options are checked and it works, but I'm not sure about it from a security standpoint.
What is the correct configuration of the bucket policy, IAM role, and Block Public Access settings for uploading a file to S3 via an HTTP call?
Thanks!

The appropriate configuration is:
Lambda function IAM role: allow s3:PutObject on arn:aws:s3:::mybucket/myprefix/*
Lambda function: use an AWS SDK to call PutObject on S3 (see the sketch below)
The S3 bucket policy and S3 block public access settings are largely orthogonal to the Lambda requirement here. In the typical case, you should use standard best practices: do not have an S3 bucket policy at all (unless you specifically need it) and enable Block Public Access at the account level (unless you specifically need buckets to be public).
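For illustration, here is a minimal sketch of such a Lambda handler in Python with boto3, assuming a hypothetical bucket mybucket, prefix myprefix, and an API Gateway proxy integration:

```python
import json
import boto3

s3 = boto3.client("s3")

# Hypothetical names for illustration
BUCKET = "mybucket"
PREFIX = "myprefix"

def handler(event, context):
    # With proxy integration, the HTTP request body arrives in event["body"]
    body = event.get("body") or ""
    # In practice, derive the key from the request; fixed here for brevity
    key = f"{PREFIX}/upload.txt"
    s3.put_object(Bucket=BUCKET, Key=key, Body=body.encode("utf-8"))
    return {"statusCode": 200, "body": json.dumps({"key": key})}
```

The execution role only needs s3:PutObject on that prefix; no bucket policy and no public access are involved.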

Related

Getting access denied from S3 when calling cross-account

I have an AWS data account where I have an S3 bucket. I have several AWS worker accounts where I have AWS Lambda functions. I want the Lambda functions to push objects into the S3 bucket in the data account.
In the data account, I have configured a role R1 that has S3 Full Access, with a trust policy that establishes the worker accounts as trusted entities and gives those accounts sts:AssumeRole access. I have also configured a bucket policy that gives R1 access to the S3 bucket.
In the worker accounts, I have configured a role R2 for the Lambda function. R2 has a policy attached that allows it to call sts:AssumeRole on R1. When I try to PutObject from the Lambda function, I get 403 Access Denied.
I have no idea where in this chain things are breaking; the error is completely nondescript, and all the documentation I find only covers how to do this through the console, whereas I am using CloudFormation. I'm not sure how to even begin debugging this because I don't know an easy way to emulate a role and see what it's doing. Any suggestions?
An alternative approach would be:
Add a bucket policy to the S3 bucket in the Data Account. In the policy, grant PutObject permissions for the IAM Role used by each of the worker AWS Lambda functions.
Grant permission in the IAM Role used by each of the worker Lambda functions to PutObject into the central bucket
That is, both accounts are permitting the access.
Then, the Lambda function can write directly to the bucket, without needing to assume an IAM Role.
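As a sketch, the bucket policy in the Data Account could look like the following (the account ID, role name, and bucket name are hypothetical), applied here with boto3:

```python
import json
import boto3

s3 = boto3.client("s3")

# Hypothetical identifiers for illustration
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:role/worker-lambda-role"},
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::data-account-bucket/*",
    }],
}

s3.put_bucket_policy(Bucket="data-account-bucket", Policy=json.dumps(policy))
```

Each worker Lambda's own role must also allow s3:PutObject on arn:aws:s3:::data-account-bucket/*, since both accounts have to permit the access.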

Can't view S3 bucket image object from EC2-hosted website

I created an IAM role that gives full access to the S3 bucket and attached it to the EC2 instance. However, when I try to view the image from the EC2-hosted website, I keep getting a 403 Forbidden response.
The IAM role attached to the instance has a policy that allows GetObject, but the error still persists.
Any advice on how to solve this? Thank you for reading.
The URL you are using to access the object does not appear to include any security information (bucket.s3.amazonaws.com/cat1.jpg). Thus, it is simply an 'anonymous' request to S3, and since the object is private, S3 will deny the request.
The mere fact that the request is being sent from an Amazon EC2 instance that has been assigned an IAM Role is not sufficient to obtain access to the object via an anonymous URL.
To allow a browser to access a private Amazon S3 object, your application should generate an Amazon S3 pre-signed URL. This is a time-limited URL that includes security information identifying you as the requester and a signature that permits access to the private object.
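As a sketch with boto3 (the bucket and object names echo the question; the expiry is arbitrary):

```python
import boto3

s3 = boto3.client("s3")

# Generate a time-limited URL for a private object
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "bucket", "Key": "cat1.jpg"},
    ExpiresIn=3600,  # valid for one hour
)
print(url)  # hand this URL to the browser
```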
Alternatively, code running on the instance can use an AWS SDK to make an API call to S3 to access the object (e.g. GetObject). This will succeed because the AWS SDK will use the credentials provided by the IAM Role.
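A minimal sketch of that alternative, assuming the same bucket and key; on EC2 the SDK picks up the role credentials automatically:

```python
import boto3

s3 = boto3.client("s3")  # credentials come from the instance's IAM Role

obj = s3.get_object(Bucket="bucket", Key="cat1.jpg")
image_bytes = obj["Body"].read()  # serve these bytes from the web app
```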

Allow CloudFront to access S3 origin while also having S3 bucket Block all public access?

I am trying to setup the S3 buckets I want my CloudFront distribution to access.
From my client I use the AWS mobile SDK to upload to S3. Clients consume files from S3 through CloudFront, and things worked until I made the following change: when I created the distribution, I had CloudFront update the bucket policy to include the OAI in the principal.
So I thought I could then run GET calls through CloudFront, because CloudFront has the OAI set up and the S3 bucket policy reflects that.
However, I keep getting Access Denied.
What else do I need to do to lock down the bucket so that only CloudFront can read from it, while my client app can still upload files using the SDK configured with the pool ID I have set up for it? Unless I leave "Block all public access" unchecked, I get Access Denied via CloudFront.
Unfortunately, the documentation states the following:
Amazon S3 Block Public Access must be disabled on the bucket.
This is because S3 will ignore the bucket policy when the "Block public and cross-account access to buckets and objects through any public bucket or access point policies" setting is enabled.
Unless your bucket policy also allows anonymous GetObject, your objects will not be public by default.
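For reference, a sketch of the kind of bucket policy CloudFront generates for an OAI (the OAI ID and bucket name are hypothetical); it grants GetObject only to the OAI principal:

```python
import json

# Hypothetical OAI ID and bucket name for illustration
oai_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-origin-bucket/*",
    }],
}

print(json.dumps(oai_policy, indent=2))
```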

Copy files from an S3 bucket in one AWS account to another AWS account

There is an S3 bucket owned by a different AWS account that contains a list of files. I need to copy those files to my own S3 bucket. I would like to do two things to accomplish this:
Add an S3 bucket event in the other account that triggers a Lambda to copy the files into my AWS account.
My Lambda should be given permission (possibly through an assumed role) to copy the files.
What are the steps that I must perform in order to achieve 1 and 2?
The base requirement of copying files is straightforward:
Create an event on the source S3 bucket that triggers a Lambda function
The Lambda function copies the object to the other bucket
The complicating factor is the need for cross-account copying.
Two scenarios are possible:
Option 1 ("Pull"): Bucket in Account-A triggers Lambda in Account-B. This can be done with Resource-Based Policies for AWS Lambda (Lambda Function Policies) - AWS Lambda. You'll need to configure the trigger via the command-line, not the management console. Then, a Bucket policy on the bucket in Account-A needs to allow GetObject access by the IAM Role used by the Lambda function in Account-B.
Option 2 ("Push"): Bucket in Account-A triggers Lambda in Account-A (same account). The Bucket policy on the bucket in Account-B needs to allow PutObject access by the IAM Role used by the Lambda function in Account-A. Make sure it saves the object with an ACL of bucket-owner-full-control so that Account-B 'owns' the copied object.
If possible, I would recommend the Push option because everything is in one account (aside from the Bucket Policy).
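A minimal sketch of the Push-option Lambda in Python with boto3 (the destination bucket name is hypothetical):

```python
import urllib.parse
import boto3

s3 = boto3.client("s3")

DEST_BUCKET = "account-b-destination-bucket"  # hypothetical

def handler(event, context):
    # Triggered by the S3 event on the source bucket in Account-A
    for record in event["Records"]:
        src_bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        s3.copy_object(
            CopySource={"Bucket": src_bucket, "Key": key},
            Bucket=DEST_BUCKET,
            Key=key,
            ACL="bucket-owner-full-control",  # so Account-B owns the copy
        )
```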
There is an easier way of doing it without Lambda. AWS allows you to set up replication on an S3 bucket (including cross-region and cross-account replication). Once replication is set up, all new objects are copied to the destination bucket. For existing objects, use the AWS CLI to copy each object onto itself in the same bucket so that it gets replicated to the target bucket. Once all existing objects are copied, you can turn off replication if you don't wish future objects to be replicated. Here AWS does the heavy lifting for you :) https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html
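If you prefer an SDK over the CLI for the copy-in-place step, here is a sketch with boto3 (the bucket name is hypothetical; S3 requires that something change when copying an object onto itself, hence the REPLACE directive):

```python
import boto3

s3 = boto3.client("s3")

bucket = "replicated-bucket"  # hypothetical

# Re-copy each existing object onto itself so the new write gets replicated
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        s3.copy_object(
            Bucket=bucket,
            Key=obj["Key"],
            CopySource={"Bucket": bucket, "Key": obj["Key"]},
            MetadataDirective="REPLACE",  # in-place copies must change something
        )
```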
There are a few ways to achieve this.
You could use an SNS notification and cross-account IAM to trigger the Lambda. Read this: cross-account-s3-data-copy-using-lambda-function explains pretty well what you are trying to achieve.
Another approach is to deploy the Lambda and all the required resources in the account that holds the files. You would need to create an S3 notification that triggers a Lambda which copies the files to your account, or a CloudWatch schedule (a bit like a cron job) that triggers the Lambda.
In this case the Lambda and the trigger would have to exist in the account that holds the files.
In both scenarios, the minimal IAM permissions the Lambda needs are the ability to read from and write to the S3 buckets, and to use STS to assume the role. You also need CloudWatch Logs permissions so the Lambda can write its logs.
The rest of the required IAM permissions will depend on the approach you take.
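As a sketch, a minimal role policy along those lines (all names, ARNs, and account IDs are hypothetical), applied with boto3:

```python
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::source-bucket/*"},
        {"Effect": "Allow", "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::destination-bucket/*"},
        {"Effect": "Allow", "Action": "sts:AssumeRole",
         "Resource": "arn:aws:iam::111111111111:role/cross-account-role"},
        {"Effect": "Allow",
         "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
         "Resource": "*"},
    ],
}

iam.put_role_policy(
    RoleName="copy-lambda-role",  # hypothetical
    PolicyName="copy-lambda-permissions",
    PolicyDocument=json.dumps(policy),
)
```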

Cannot set S3 as API Gateway AWS service

I'm trying to set up an Amazon API Gateway proxy connected to an S3 bucket, to proxy each file/object from the bucket to the API Gateway endpoint. (I need this because some files need to be served through other HTTP verbs, and S3 does not allow the POST method.)
The thing is that I cannot select 'S3' as the AWS service.
Can someone provide me some guidance?
To allow the API to invoke the required Amazon S3 actions, you must have the appropriate IAM policies attached to an IAM role. The next section describes how to verify and, if necessary, create the required IAM role and policies.
For your API to view or list Amazon S3 buckets and objects, you can use the IAM-provided AmazonS3ReadOnlyAccess policy in the IAM role.
Please read the documentation here for the full setup.
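As a sketch of that setup with boto3 (the role name is hypothetical; the managed policy ARN is the one named above):

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy letting API Gateway assume the role
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "apigateway.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="apigw-s3-read-role",  # hypothetical
    AssumeRolePolicyDocument=json.dumps(trust),
)

iam.attach_role_policy(
    RoleName="apigw-s3-read-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```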
It should be under the name Simple Storage Service (S3).