I created an IAM role that gives full access to the S3 bucket and attached it to the EC2 instance. However, I am unable to view the image from the EC2-hosted website; I keep getting a 403 Forbidden error.
Below is the IAM role and the policy attached:
The policy shows that GetObject is allowed:
But the error still persists:
Any advice on how to solve this? Thank you for reading.
The URL you are using to access the object does not appear to include any security information (bucket.s3.amazonaws.com/cat1.jpg). Thus, it is simply an 'anonymous' request to S3, and since the object is private, S3 will deny the request.
The mere fact that the request is being sent from an Amazon EC2 instance that has been assigned an IAM Role is not sufficient to obtain access to the object via an anonymous URL.
To allow a browser to access a private Amazon S3 object, your application should generate an Amazon S3 pre-signed URL. This is a time-limited URL that includes security information identifying you as the requester, plus a signature that permits access to the private object.
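For example, here is a minimal boto3 sketch (the bucket and key names are taken from the URL above purely as placeholders) that the application on the instance could run, relying on the IAM Role credentials:

import boto3

s3 = boto3.client("s3")

# The instance's IAM Role credentials are picked up automatically and are
# embedded in the signature of the generated URL.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "bucket", "Key": "cat1.jpg"},
    ExpiresIn=3600,  # the URL is valid for one hour
)
print(url)  # use this URL in the <img src="..."> tag served by the website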
Alternatively, code running on the instance can use an AWS SDK to make an API call to S3 to access the object (eg GetObject()). This will succeed because the AWS SDK will use the credentials provided by the IAM Role.
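For example (again a rough boto3 sketch with placeholder bucket and key names), the application could fetch the object server-side and serve the bytes itself:

import boto3

s3 = boto3.client("s3")
response = s3.get_object(Bucket="bucket", Key="cat1.jpg")
image_bytes = response["Body"].read()  # return these bytes from your web application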
I have installed nginx with the kaltura-nginx-vod module on EC2. I would like to set up private, read-only access to my S3 bucket over HTTP. Example of the desired nginx configuration:
vod_upstream_location /s3;
location /s3/ {
    internal;
    proxy_pass http://my-s3-bucket.s3-eu-east-1.amazonaws.com/;
}
I tried creating an S3 Access Point for my bucket. In its settings I restricted access to my VPC, but cURL from the EC2 instance returned 403 when I tried to fetch an object via its HTTP URL.
I also created an IAM role with read-only S3 access and assigned it to my EC2 instance, but the result was the same: 403.
How can I set up private HTTP access from EC2 to the S3 bucket over the Amazon private network in the same region?
This does not work because you can't access objects by their URL unless they are public. Since you've assigned an IAM role to the EC2 instance, you have to make a signed HTTP request to the object's URL using the instance role credentials.
So you either have to construct a valid signature yourself, or simply use an AWS SDK, such as boto3 for Python, to do this for you. Looking at the kaltura-nginx-vod description, it does not appear to make signed requests to S3 on your behalf.
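As an illustration, here is a rough Python sketch of what a signed request looks like when made with the instance role credentials via botocore (the bucket endpoint, region, and object key are placeholders based on your configuration, and the requests package is assumed to be available):

import boto3
import requests
from botocore.auth import S3SigV4Auth
from botocore.awsrequest import AWSRequest

# boto3 resolves the EC2 instance role credentials automatically
credentials = boto3.Session().get_credentials().get_frozen_credentials()

url = "https://my-s3-bucket.s3-eu-east-1.amazonaws.com/some-video.mp4"
request = AWSRequest(method="GET", url=url)

# The S3-specific SigV4 signer adds the Authorization and related x-amz-* headers
S3SigV4Auth(credentials, "s3", "eu-east-1").add_auth(request)

response = requests.get(url, headers=dict(request.headers))
print(response.status_code)

Something like this is what the proxy layer would have to do for every upstream request, which is why a plain proxy_pass to the bucket URL returns 403.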
I am trying to set up the S3 buckets I want my CloudFront distribution to access.
From my client I use AWS mobile SDK to upload to S3. When clients consume files from S3 I hit CloudFront and things worked until I made this change:
When I created the distribution, I had CloudFront update the bucket policy to have the OAI included in the principal:
So then I thought I could run GET calls through CloudFront, because CloudFront has the OAI set up and the S3 bucket policy reflects that.
However, I keep getting Access denied:
What else do I need to do to lock down the bucket so that only CloudFront can read from it, while still allowing my client app to upload files to it using the SDK configured with the poolId I have set up for it? Unless I leave "Block all public access" unchecked, I get Access Denied via CloudFront.
Unfortunately, according to the documentation, the following is stated:
Amazon S3 Block Public Access must be disabled on the bucket.
This is because S3 will ignore the bucket policy due to the "Block public and cross-account access to buckets and objects through any public bucket or access point policies" setting.
Unless your bucket policy also allows anonymous GetObject, your objects will not be public by default.
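For reference, a bucket policy scoped only to the OAI looks something like this (a sketch applied with boto3; the bucket name and OAI ID are placeholders), and on its own it does not make the objects public:

import json
import boto3

bucket = "my-media-bucket"  # placeholder
oai_id = "E2EXAMPLEOAI12"   # placeholder CloudFront origin access identity ID

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIRead",
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity " + oai_id
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::" + bucket + "/*"
    }]
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))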
We are able to put objects into our S3 Bucket.
But now we have a requirement to put these objects directly into an S3 bucket that belongs to a different account and a different region.
Here we have a few questions:
Is this possible?
If possible what changes we need to do for this?
They have provided us Access Key, Secret Key, Region, and Bucket details.
Any comments and suggestions will be appreciated.
IAM credentials are associated with a single AWS Account.
When you launch your own Amazon EC2 instance with an assigned IAM Role, it will receive access credentials that are associated with your account.
To write to another account's Amazon S3 bucket, you have two options:
Option 1: Your credentials + Bucket Policy
The owner of the destination Amazon S3 bucket can add a Bucket Policy on the bucket that permits access by your IAM Role. This way, you can just use the normal credentials available on the EC2 instance.
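For illustration, the statement the bucket owner would add looks roughly like this (a sketch written as a Python dict; the account ID, role name, and bucket name are placeholders):

# Added to the destination bucket's policy by its owner,
# e.g. via the S3 console or put_bucket_policy.
cross_account_put_statement = {
    "Sid": "AllowPutFromYourEc2Role",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/your-ec2-role"},
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::their-destination-bucket/*"
}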
Option 2: Their credentials
It appears that you have been given access credentials for their account. You can use these credentials to access their Amazon S3 bucket.
As detailed on Working with AWS Credentials - AWS SDK for Java, you can provide these credentials in several ways. However, if you are using BOTH the credentials provided by the IAM Role AND the credentials that have been given to you, it can be difficult to 'switch between' them. (I'm not sure if there is a way to tell the Credentials Provider to switch between a profile stored in the ~/.aws/credentials file and those provided via instance metadata.)
Thus, the easiest way is to specify the Access Key and Secret Key when creating the S3 client:
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
BasicAWSCredentials awsCreds = new BasicAWSCredentials("access_key_id", "secret_access_key");
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
        .build();
It is generally not a good idea to put credentials in your code. You should load them from a configuration file.
Yes, it's possible. You need to allow cross-account S3 put operations in the bucket's policy.
Here is a blog post by AWS. It should help you set up the cross-account put action.
I am using a script to create an S3 bucket and upload CloudFormation templates to that bucket with the same user credentials.
But when trying to access the templates in that bucket using CloudFormation, I am getting Access Denied. I even tried adding a bucket policy giving explicit access to that user, but I get the same "Access Denied" error. Please suggest if I am missing anything. Thank you.
As you are trying to access the S3 bucket using CloudFormation, you need to ensure that CloudFormation has the proper permissions to access S3, because this time you are not calling the S3 API yourself; CloudFormation is.
If you are using AWS CloudFormation via the management console, then CloudFormation will use your own credentials to retrieve the template from Amazon S3.
Therefore, the user that is using CloudFormation will require access to the object in Amazon S3.
If you believe that this has been correctly configured, please edit your Question to provide more details (eg the permissions granted to the user who is using CloudFormation).
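A quick way to verify this is to confirm, using the same credentials as the user running CloudFormation, that you can read the template object yourself (a boto3 sketch; the bucket and key are placeholders):

import boto3

# Assumes your local credentials/profile belong to the same user that runs CloudFormation
s3 = boto3.client("s3")
s3.head_object(Bucket="my-template-bucket", Key="templates/stack.yaml")
# A 403 here means CloudFormation will hit the same Access Denied when fetching the template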
I have this snippet to upload a file to S3
import boto3

s3 = boto3.resource('s3')
s3.Object('bucketname', timestamped_filename).put(Body=open(FILE_SAVE_PATH, 'rb'))
My bucket has delete/upload permissions for everyone, so it does work on my Windows machine.
However, when I try to run the same code on my Mac, it throws
botocore.exceptions.NoCredentialsError: Unable to locate credentials
Is this behavior normal?
And what kind of credentials I can possibly provide if I'm accessing a public bucket?
Thank you.
When making an API call to AWS, valid credentials must be provided. These credentials are associated with an IAM User and grant access to AWS services.
When making API calls (or using the AWS Command-Line Interface (CLI)) from an Amazon EC2 instance, these credentials can be granted to the EC2 instance by assigning an IAM Role to the instance at launch time.
When making calls from a non-EC2 computer, credentials must be provided via a configuration file or environment variables.
It appears that your Windows machine is either an EC2 instance with a role, or it has a local configuration file with valid credentials; and it appears that your Mac has neither of these.
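For example, on the Mac you could create a named profile with aws configure --profile myprofile and point boto3 at it, or export the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. A minimal sketch using a profile (the profile, bucket, and file names are placeholders):

import boto3

# Uses the [myprofile] section of ~/.aws/credentials
session = boto3.Session(profile_name="myprofile")
s3 = session.resource("s3")
with open("local_file.jpg", "rb") as f:
    s3.Object("bucketname", "timestamped_filename.jpg").put(Body=f)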
See: boto3 Credentials documentation