Having trouble granting public read permissions in S3 bucket - amazon-web-services

I'm trying to understand the specific permissions I need to set on my Amazon S3 bucket. I've looked for this information already, but have only seen 1 or 2 examples of the new ACL/Policies that Amazon has implemented.
My use case: I'm using S3 to store images for my website (hosted elsewhere). I would like to upload images on S3 and be able to access them through their link on my own site.
I've used https://awspolicygen.s3.amazonaws.com/policygen.html to generate a GetObject policy:
{
"Id": "Policyxxxxxxxxxxxxxxxxx",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmtxxxxxxxxxxxxxxx",
"Action": [
"s3:GetObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::xxxxxx-xxxxx-xxxxxxx/*",
"Principal": "*"
}
]
}
These are my current Block public access settings:
Block all public access: Off
Block public access to buckets and objects granted through new access control lists (ACLs): On
Block public access to buckets and objects granted through any access control lists (ACLs): On
Block public access to buckets and objects granted through new public bucket policies: Off
Block public and cross-account access to buckets and objects through any public bucket policies: Off
In Access Control List, I have not added any permissions.
In Bucket Policy, I placed the policy I generated.
In CORS configuration, I specified localhost and my domain name as allowed origins and GET's as allowed methods.
Is this correct for my usage? It currently works, but I'm not 100% sure I've gotten the permissions right. All I need is public access to my photos (so my website can grab them) and to deny anything else (besides me logging in and uploading more photos).
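For reference, the CORS configuration described above might look something like this (a sketch only; the localhost port and domain are placeholders for your own values):

```json
[
  {
    "AllowedOrigins": ["http://localhost:3000", "https://www.example.com"],
    "AllowedMethods": ["GET"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }
]
```

Note that CORS only controls which origins browsers will let read the responses; it is the bucket policy above that actually grants public read.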

Related

Make one S3 bucket public

Currently I have 5 S3 buckets in my account, and all of them have Block all public access -> ON; the same setting is also ON under Block public access settings for this account.
Now I want to create a new bucket that should be public, and I don't want to change any of my existing buckets. So for the newly created bucket I have set Block all public access = OFF. But when I try to save the policy below, I get an Access Denied error, so I guess I have to turn off Block public access settings for this account.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Action": "s3:GetObject",
"Effect": "Allow",
"Resource": "arn:aws:s3:::MyNewImageBucketS3/*",
"Principal": "*"
}
]
}
I want to know: if I turn off the account-level setting, will it affect my existing buckets?
As a second option I could configure CloudFront and serve the files publicly, but I'd first like to understand the public access change at the account level.
Block all public access = OFF: when you do this from the bucket's own settings, it applies to that specific bucket only. However, the account-level setting is evaluated on top of the bucket-level one, and the most restrictive of the two wins; that is why you got the Access Denied error, and why you do need to turn the account-level setting OFF before this bucket can be made public.
Turning the account-level setting off will not expose your existing buckets: each of them still has its own bucket-level Block all public access ON, so they stay blocked. You only lose the extra account-wide safety net.
If you want specific objects to be publicly accessible, a bucket policy like the one you shared will do it; once public access is unblocked on the bucket, the policy can allow specific objects and deny the remaining ones.
In short, changing the setting at the bucket level affects only that specific bucket and the objects within it.
For more guidelines please check the AWS doc below:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-policy-alternatives-guidelines.html
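For reference, the per-bucket setting can also be managed from the CLI with `aws s3api put-public-access-block`; the configuration is a simple JSON object of four flags (a sketch — if you only grant access via a bucket policy, you can leave the two ACL-related flags on, mirroring the settings in the first question):

```json
{
  "BlockPublicAcls": true,
  "IgnorePublicAcls": true,
  "BlockPublicPolicy": false,
  "RestrictPublicBuckets": false
}
```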

How to make S3 objects readable only from certain IP addresses?

I am trying to setup Cloudflare to cache images from S3. I want to be as restrictive (least permissive) as possible in doing this. I assume I need to accept requests from Cloudflare to read my S3 images. I want all other requests to be rejected.
I followed this guide: https://support.cloudflare.com/hc/en-us/articles/360037983412-Configuring-an-Amazon-Web-Services-static-site-to-use-Cloudflare
I did not enable static website hosting on my bucket, because it's not necessary for my case.
In my bucket permissions I turned off "Block all public access" and temporarily turned off "Block public access to buckets and objects granted through new public bucket or access point policies". I needed to do this in order to add a bucket policy.
From the link above, I then added a bucket policy that looks something like this:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::www.example.com/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": [
<CLOUDFLARE_IP_0>,
<CLOUDFLARE_IP_1>,
<CLOUDFLARE_IP_2>,
...
]
}
}
}
]
}
At this point, a message appeared in the AWS console stating:
"This bucket has public access
You have provided public access to this bucket. We highly recommend that you never grant any kind of public access to your S3 bucket."
I then turned back on "Block public access to buckets and objects granted through new public bucket or access point policies" and turned off "Block public and cross-account access to buckets and objects through any public bucket or access point policies".
At this point, the S3 image request behavior seems to be working as intended, but I am not confident that I set everything up to be minimally permissive, especially given the warning message in the AWS console.
Given my description, did I properly set things up in this bucket to accept read requests only from Cloudflare and deny all other requests? I want to make sure that requests from any origin other than Cloudflare will be denied.
Sounds good! If it works from Cloudflare, but not from somewhere else, then it meets your requirements.
Those Block Public Access warnings are intentionally scary to make people think twice before opening their buckets to the world.
Your policy is nicely limited to only GetObject and only to a limited range of IP addresses.
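If you want extra assurance, one optional pattern (a sketch, not something the guide requires) is to add an explicit Deny statement for requests coming from any other address. Since an explicit deny always overrides an allow, this guarantees the restriction holds even if another statement accidentally grants access later. The IP placeholders are the same ones used in the question:

```json
{
  "Sid": "DenyNonCloudflare",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::www.example.com/*",
  "Condition": {
    "NotIpAddress": {
      "aws:SourceIp": [
        "<CLOUDFLARE_IP_0>",
        "<CLOUDFLARE_IP_1>"
      ]
    }
  }
}
```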

Should I make my S3 bucket public for static site hosting?

I have an s3 bucket that is used to host a static site that is accessed through cloudfront.
I wish to use the s3 <RoutingRules> to redirect any 404 to the root of the request hostname. To do this I need to set the cloudfront origin to use the s3 "website endpoint".
However, it appears that to allow Cloudfront to access the s3 bucket via the "website endpoint" and not the "s3 REST API endpoint", I need to explicitly make the bucket public, namely, with a policy rule like:
{
"Sid": "AllowPublicGetObject",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::dev.ts3.online-test/*"
},
{
"Sid": "AllowPublicListBucket",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::dev.ts3.online-test"
}
That's all well and good. It works. However AWS gives me a nice big shiny warning saying:
This bucket has public access. You have provided public access to this bucket. We highly recommend that you never grant any kind of public access to your S3 bucket.
So I have two questions I suppose:
Surely this warning should be caveated, and is just laziness on AWS' part? Everything in the bucket is static files that can be freely requested. There is no protected or secret content in the bucket. I don't see why giving public read is a security hole at all...
For peace of mind, is there any way to specify a principalId in the bucket policy that will only grant this to cloudfront? (I know if I use the REST endpoint I can set it to the OAI, but I can't use the rest endpoint)
The first thing, about the warning:
The list buckets view shows whether your bucket is publicly accessible. Amazon S3 labels the permissions for a bucket as follows:
Public –
Everyone has access to one or more of the following: List objects, Write objects, Read and write permissions.
Objects can be public –
The bucket is not public, but anyone with the appropriate permissions can grant public access to objects.
Buckets and objects not public –
The bucket and objects do not have any public access.
Only authorized users of this account –
Access is isolated to IAM users and roles in this account and AWS service principals because there is a policy that grants public access.
So the warning is due to the first case. The policy AWS recommends for an S3 static website is below.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::example-bucket/*"
]
}
]
}
Add a bucket policy to the website bucket that grants everyone access
to the objects in the bucket. When you configure a bucket as a
website, you must make the objects that you want to serve publicly
readable. To do so, you write a bucket policy that grants everyone
s3:GetObject permission. The following example bucket policy grants
everyone access to the objects in the example-bucket bucket.
BTW, public access should be GET only, nothing else. It's totally fine to allow GET requests for your static website on S3.
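On the second question (limiting the grant to CloudFront): the S3 website endpoint only serves anonymous requests, so there is no CloudFront principal or OAI to match. A commonly used workaround, sketched below with a made-up secret value, is to configure a custom `Referer` header on the CloudFront origin and require it in the bucket policy. Be aware that AWS will still flag the bucket as public, since a header can be spoofed, but it does keep casual direct access to the website endpoint out:

```json
{
  "Sid": "AllowCloudFrontViaSecretReferer",
  "Effect": "Allow",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::dev.ts3.online-test/*",
  "Condition": {
    "StringEquals": {
      "aws:Referer": "my-shared-secret"
    }
  }
}
```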

Only allow EC2 instance to access static website on S3

I have a static website hosted on S3, I have set all files to be public.
Also, I have an EC2 instance with nginx that acts as a reverse proxy and can access the static website, so S3 plays the role of the origin.
What I would like to do now is set all files on S3 to be private, so that the website can only be accessed by traffic coming from the nginx (EC2).
So far I have tried the following. I have created and attached a new policy role to the EC2 instance with
Policies Granting Permission: AmazonS3ReadOnlyAccess
And have rebooted the EC2 instance.
I then created a policy in my S3 bucket console > Permissions > Bucket Policy
{
"Version": "xxxxx",
"Id": "xxxxxxx",
"Statement": [
{
"Sid": "xxxxxxx",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::XXXXXXXXXX:role/MyROLE"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::XXX-bucket/*"
}
]
}
As principal I have set the ARN I got when I created the role for the EC2 instance.
"Principal": {
"AWS": "arn:aws:iam::XXXXXXXXXX:role/MyROLE"
},
However, this does not work, any help is appreciated.
If the Amazon EC2 instance with nginx is merely making generic web requests to Amazon S3, then the question becomes how to identify requests coming from nginx as 'permitted', while rejecting all other requests.
One method is to use a VPC Endpoint for S3, which allows direct communication from a VPC to Amazon S3 (rather than going out an Internet Gateway).
A bucket policy can then restrict access to the bucket such that it can only be accessed via that endpoint.
Here is a bucket policy from Example Bucket Policies for VPC Endpoints for Amazon S3:
The following is an example of an S3 bucket policy that allows access to a specific bucket, examplebucket, only from the VPC endpoint with the ID vpce-1a2b3c4d. The policy uses the aws:sourceVpce condition key to restrict access to the specified VPC endpoint.
{
"Version": "2012-10-17",
"Id": "Policy",
"Statement": [
{
"Sid": "Access-to-specific-VPCE-only",
"Action": "s3:*",
"Effect": "Allow",
"Resource": ["arn:aws:s3:::examplebucket",
"arn:aws:s3:::examplebucket/*"],
"Condition": {
"StringEquals": {
"aws:sourceVpce": "vpce-1a2b3c4d"
}
},
"Principal": "*"
}
]
}
So, the complete design would be:
Object ACL: Private only (remove any current public permissions)
Bucket Policy: As above
IAM Role: Not needed
Route Table configured for VPC Endpoint
Permissions in Amazon S3 can be granted in several ways:
Directly on an object (known as an Access Control List or ACL)
Via a Bucket Policy (which applies to the whole bucket, or a directory)
To an IAM User/Group/Role
If any of the above grant access, then the object can be accessed publicly.
Your scenario requires the following configuration:
The ACL on each object should not permit public access
There should be no Bucket Policy
You should assign permissions in the Policy attached to the IAM Role
Whenever you have permissions relating to a User/Group/Role, it is better to assign the permission in IAM rather than on the Bucket. Use Bucket Policies for general access to all users.
The policy on the Role would be:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowBucketAccess",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::my-bucket/*"
]
}
]
}
This policy is directly applied to the IAM Role, so there is no need for a principal field.
Please note that this policy only allows GetObject -- it does not permit listing of buckets, uploading objects, etc.
You also mention that "I have set all files to be public". If you did this by making each individual object publicly readable, then anyone will still be able to access the objects. There are two ways to prevent this -- either remove the permissions from each object, or create a Bucket Policy with a Deny statement that stops access, but still permits the Role to get access.
That's starting to get a bit tricky and hard to maintain, so I'd recommend removing the permissions from each object. This can be done via the management console by editing the permissions on each object, or by using the AWS Command-Line Interface (CLI) with a command like:
aws s3 cp s3://my-bucket s3://my-bucket --recursive --acl private
This copies the files in-place but changes the access settings.
(I'm not 100% sure whether to use --acl private or --acl bucket-owner-full-control, so play around a bit.)
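For completeness, the "tricky" deny-based alternative mentioned above could look roughly like this (a sketch using the question's placeholder names; `aws:PrincipalArn` resolves to the role's ARN for sessions assumed from it, and the condition also denies anonymous requests, for which the key is absent):

```json
{
  "Sid": "DenyAllExceptRole",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::XXX-bucket/*",
  "Condition": {
    "StringNotEquals": {
      "aws:PrincipalArn": "arn:aws:iam::XXXXXXXXXX:role/MyROLE"
    }
  }
}
```

As the answer says, resetting the object ACLs is simpler to maintain than carrying a deny statement like this around.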

Amazon S3 access control-Who can upload files?

I have a static website created with Amazon S3. The only permissions I have set are through the bucket policy provided in Amazon's tutorial:
{
"Version":"2012-10-17",
"Statement": [{
"Sid": "Allow Public Access to All Objects",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::example.com/*"
}
]
}
Clearly, this policy enables the public to view any file stored on my bucket, which I want. My question is, is this policy alone enough to prevent other people from uploading files and/or hijacking my website? I wish for the public to be able to access any file on the bucket, but I want to be the only one with list, upload, and delete permissions. Is this the current behavior of my bucket, given that my bucket policy only addresses view permissions?
Have a look at this: http://docs.aws.amazon.com/IAM/latest/UserGuide/AccessPolicyLanguage_EvaluationLogic.html#policy-eval-basics
From that document:
When a request is made, the AWS service decides whether a given
request should be allowed or denied. The evaluation logic follows
these rules:
By default, all requests are denied. (In general, requests made using
the account credentials for resources in the account are always
allowed.)
An explicit allow overrides this default.
An explicit deny overrides any allows.
So as long as you don't explicitly allow other access you should be fine. I have a static site hosted on S3 and I have the same access policy.
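The evaluation logic quoted above is easy to sketch in code. Here is a minimal Python model (my own illustration, not an AWS library) of how the effects of matching statements combine:

```python
def evaluate(decisions):
    """Combine the effects of matching policy statements per the quoted
    rules: default deny, an explicit Allow overrides the default, and an
    explicit Deny overrides any Allow."""
    if "Deny" in decisions:
        return "Deny"    # an explicit deny overrides any allows
    if "Allow" in decisions:
        return "Allow"   # an explicit allow overrides the default
    return "Deny"        # by default, all requests are denied

# An anonymous GET matching only the public-read statement is allowed:
print(evaluate(["Allow"]))           # Allow
# An upload matching no statement at all is implicitly denied:
print(evaluate([]))                  # Deny
# A deny wins even when an allow also matches:
print(evaluate(["Allow", "Deny"]))   # Deny
```

This is why the bucket above is safe: a PutObject request from a stranger matches no Allow statement, so it falls through to the default deny.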