How to make S3 objects readable only from certain IP addresses? - amazon-web-services

I am trying to set up Cloudflare to cache images from S3. I want to be as restrictive (least permissive) as possible in doing this. I assume I need to accept requests from Cloudflare to read my S3 images. I want all other requests to be rejected.
I followed this guide: https://support.cloudflare.com/hc/en-us/articles/360037983412-Configuring-an-Amazon-Web-Services-static-site-to-use-Cloudflare
I did not enable static website hosting on my bucket, because it's not necessary for my case.
In my bucket permissions I turned off "Block all public access" and temporarily turned off "Block public access to buckets and objects granted through new public bucket or access point policies". I needed to do this in order to add a bucket policy.
From the link above, I then added a bucket policy that looks something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.example.com/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            <CLOUDFLARE_IP_0>,
            <CLOUDFLARE_IP_1>,
            <CLOUDFLARE_IP_2>,
            ...
          ]
        }
      }
    }
  ]
}
At this point, a message appeared in the AWS console stating:
"This bucket has public access
You have provided public access to this bucket. We highly recommend that you never grant any kind of public access to your S3 bucket."
I then turned back on "Block public access to buckets and objects granted through new public bucket or access point policies" and turned off "Block public and cross-account access to buckets and objects through any public bucket or access point policies".
At this point, the S3 image request behavior seems to be working as intended, but I am not confident that I set everything up to be minimally permissive, especially given the warning message in the AWS console.
Given my description, did I properly set things up in this bucket to accept read requests only from Cloudflare and deny all other requests? I want to make sure that requests from any origin other than Cloudflare will be denied.

Sounds good! If it works from Cloudflare but not from anywhere else, then it meets your requirements.
Those Block Public Access warnings are intentionally scary to make people think twice before opening their buckets to the world.
Your policy is nicely limited to only GetObject and only to a limited range of IP addresses.
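If you want to double-check the behaviour rather than rely on the console warnings, a quick script along these lines can confirm it (a sketch; the hostnames and object key are placeholders for your own):

import requests

# Placeholder URLs: the direct S3 endpoint for the bucket, and the same object
# through the Cloudflare-proxied hostname.
DIRECT_URL = "https://www.example.com.s3.amazonaws.com/images/test.jpg"
PROXIED_URL = "https://www.example.com/images/test.jpg"

direct = requests.get(DIRECT_URL, timeout=10)
proxied = requests.get(PROXIED_URL, timeout=10)

# Expected: 403 (AccessDenied) for the direct request, since this machine's IP
# is not in the Cloudflare ranges listed in the bucket policy; 200 via Cloudflare.
print("direct :", direct.status_code)
print("proxied:", proxied.status_code)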

Related

Make one S3 bucket public

Currently I have 5 S3 buckets in my account, all of them with Block all public access -> ON, and the same setting is also ON at the account level (Block Public Access settings for this account -> ON).
Now I want to create a new bucket that should be public, and I don't want to change any of my existing buckets. So for the newly created bucket I have set Block all public access = OFF. But when I try to save the policy below, it gives an Access Denied error, so I guess I have to turn off the Block Public Access settings for this account.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Action": "s3:GetObject",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::MyNewImageBucketS3/*",
      "Principal": "*"
    }
  ]
}
I want to know: if I turn off the account-level setting, will it affect my existing buckets?
As a second option I could configure CloudFront and serve the files publicly, but I want to understand the public access change at the account level first.
Block all public access = OFF is a per-bucket setting as long as you change it from the bucket's own settings, so you can turn it off for that specific bucket and you are good to go.
If you want specific objects to be publicly accessible, this can be achieved via a bucket policy similar to the one you shared; to make it work, turn on public access for that bucket and then apply a policy that allows the specific objects and denies the rest.
Changing the setting at the bucket level affects only that specific bucket and the objects within it. Since your existing buckets keep their own bucket-level Block all public access turned on, turning off the account-level setting by itself will not make them public; S3 applies the most restrictive combination of the account-level and bucket-level settings.
For more guidelines, please check the AWS doc below:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-policy-alternatives-guidelines.html
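As a sketch of the bucket-level route with boto3 (assuming the account-level setting permits public bucket policies; the bucket name is the placeholder from the question):

import json
import boto3

s3 = boto3.client("s3")
bucket = "MyNewImageBucketS3"

# Turn Block Public Access off for this bucket only; ACL-based public access
# stays blocked, but a public bucket policy is allowed and takes effect.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)

# The same public-read policy from the question.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))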

AWS Lambda to revert back S3 "block all public access" if someone changed it to allow public access

I'm trying to use AWS Lambda to check periodically (maybe with a cron job) whether S3 Block Public Access is turned on. If it ever finds that Block Public Access is turned off (i.e., the bucket is public), the Lambda needs to revert the setting back to "block public access". Not sure where to begin with this... please advise.
S3 Block Public Access provides controls across an entire AWS account or at the individual S3 bucket level. You could set it at the account level and ensure that all users have policies that deny permission on the s3:PutAccountPublicAccessBlock action, as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "denyaccountbpa",
      "Effect": "Deny",
      "Action": "s3:PutAccountPublicAccessBlock",
      "Resource": "*"
    }
  ]
}
Also, be aware of the no-cost Trusted Advisor check that flags S3 buckets with open access permissions.
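As a starting point for the Lambda itself, a minimal handler sketch (Python/boto3) could simply re-apply the account-level settings whenever it runs, e.g. from an EventBridge scheduled rule; the execution role would need the s3:PutAccountPublicAccessBlock permission:

import boto3

s3control = boto3.client("s3control")
sts = boto3.client("sts")

def lambda_handler(event, context):
    # Look up the account ID at runtime (you could also pass it in as an
    # environment variable).
    account_id = sts.get_caller_identity()["Account"]

    # Unconditionally re-apply the account-level Block Public Access settings.
    s3control.put_public_access_block(
        AccountId=account_id,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    return {"status": "account-level Block Public Access re-applied"}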

Having trouble granting public read permissions in S3 bucket

I'm trying to understand the specific permissions I need to set on my Amazon S3 bucket. I've looked for this information already, but have only seen 1 or 2 examples of the new ACL/Policies that Amazon has implemented.
My use case: I'm using S3 to store images for my website (hosted elsewhere). I would like to upload images on S3 and be able to access them through their link on my own site.
I've used https://awspolicygen.s3.amazonaws.com/policygen.html to generate a GetObject policy:
{
  "Id": "Policyxxxxxxxxxxxxxxxxx",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmtxxxxxxxxxxxxxxx",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::xxxxxx-xxxxx-xxxxxxx/*",
      "Principal": "*"
    }
  ]
}
These are my current Block public access settings:
Block all public access: Off
Block public access to buckets and objects granted through new access control lists (ACLs): On
Block public access to buckets and objects granted through any access control lists (ACLs): On
Block public access to buckets and objects granted through new public bucket policies: Off
Block public and cross-account access to buckets and objects through any public bucket policies: Off
In Access Control List, I have not added any permissions.
In Bucket Policy, I placed the policy I generated.
In the CORS configuration, I specified localhost and my domain name as allowed origins and GET as the allowed method.
Is this correct for my usage? It currently works, but I'm not 100% sure I've gotten the permissions right. All I need is public access to my photos (so my website can grab them) and to deny anything else (besides me logging in and uploading more photos).
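For reference, the CORS configuration described above could be applied with boto3 roughly like this (a sketch; the bucket name is the placeholder from the policy, and the localhost port and domain are assumptions):

import boto3

s3 = boto3.client("s3")
s3.put_bucket_cors(
    Bucket="xxxxxx-xxxxx-xxxxxxx",
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["http://localhost:3000", "https://www.example.com"],
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }],
    },
)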

How to revoke public permissions from an Amazon S3 Bucket

I created an Amazon S3 bucket to store only images from my website. I have more than 1 million images, all with public read access. Every time I log in, Amazon gives me this warning:
"This bucket has public access
You have provided public access to this bucket. We highly recommend that you never grant any kind of public access to your S3 bucket. "
I'm using the following Bucket Policy to only allow images to be shown just in my site:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests originated from www.example.com and example.com.br",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket.com/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.example.com/*",
            "http://www.example.com.br/*",
            "https://www.example.com/*",
            "https://www.example.com.br/*"
          ]
        }
      }
    }
  ]
}
How can I revoke the public access to the bucket and to my files and grant it only to my sites?
Thank you!
It's a scary warning meant to prevent people from leaking data unintentionally. There have been lots of cases in the news lately about companies accidentally setting permissions to allow public reads.
In your case you really do want these to be publicly readable so you can just ignore the warning. Your security policy looks fine and still matches the documentation for public hosting.
You could theoretically put these images behind another server that streams them to the user if you really don't want someone to be able to download them directly. That's not really any more secure though.
If you do not want to have these publicly available at all just delete this policy from your bucket. In that case your website will not be able to serve the images.
Your policy looks good. You are providing a higher level of security than plain public access by checking the Referer header and by not allowing the listing of objects.
Using S3 to provide common files such as CSS, JS, and images is just so easy. However, with all of the accidental security problems, I usually recommend one of these approaches:
Turn on static website hosting for the bucket. This makes it very clear to future admins that the bucket is intended for public files, and I do not see the big warning messages for these buckets. Website hosting also lets you redirect requests if you need to.
Better, turn off all public access and use CloudFront with an Origin Access Identity enabled. You receive all the benefits of CloudFront, tighter security, etc.
Using an Origin Access Identity to Restrict Access to Your Amazon S3 Content
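With that approach, the bucket policy only needs to grant read access to the Origin Access Identity. A boto3 sketch, where the OAI ID and bucket name are placeholders:

import json
import boto3

s3 = boto3.client("s3")
bucket = "examplebucket.com"
oai_id = "E2EXAMPLE"  # placeholder CloudFront Origin Access Identity ID

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIRead",
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

Block Public Access can stay fully on here, because a policy that grants access to a specific OAI principal is not considered public.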

Bucket policy that respects pre-signed URLs OR IP Address deny?

I would like to be able to restrict access to files in an S3 bucket in multiple ways, because the stored files are accessed in different manners and, with TBs of files, we don't want to duplicate the bucket.
One access method is tokenized CDN delivery, which uses the S3 bucket as the source. So that the files can be pulled, I've set the object permissions to allow download for everybody; using a bucket policy, I restrict which IP addresses can get the files in the bucket, limiting them to the CDN IP block so anyone outside those addresses can't grab a file.
The other access method is direct downloads through our store system, which generates time-expiring S3 pre-signed URLs.
Since the CDN pull effectively needs the files to be publicly readable, is there a way to:
Check first for a valid pre-signed URL and serve the file if the request is valid
If not valid, fall back to the IP address restriction to prevent further access?
I've got an IP-restriction bucket policy working, but it stomps out the pre-signed access... removing the bucket policy fixes the pre-signed access, but then the files are public.
Objects in Amazon S3 are private by default. Access can then be granted via any of these methods:
Per-object ACLs (mostly for granting public access)
Bucket Policy with rules to define what API calls are permitted in which circumstances (eg only from a given IP address range)
IAM Policy -- similar to Bucket Policy, but can be applied to specific Users or Groups
A Pre-signed URL that grants time-limited access to an object
When attempting to access content in Amazon S3, access is granted as long as any of the above methods permits it. One method cannot revoke access granted by another unless it uses an explicit Deny -- for example, if access is granted via a pre-signed URL, a Bucket Policy that simply does not Allow the request will not cause that access to be denied.
Therefore, the system automatically does what you wish... If the pre-signed URL is valid, then access is granted. If the IP address comes from the desired range, then access is granted. It should work correctly.
It is very strange that you say the IP restriction "stomps out the pre-signed access" -- that should not be possible unless the policy contains an explicit Deny.
Issue solved -- here's what I ended up with. I realized I was using a "Deny" for the IP address section (I saw that code posted somewhere, and it worked on its own), which overrides any Allows, so I needed to flip that.
I also made sure I didn't have any anonymous permissions on objects in the buckets as well.
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId2",
  "Statement": [
    {
      "Sid": "Allow our access key",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789:user/myuser"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mybucket/*"
    },
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "192.168.0.1/27",
            "186.168.0.1/32",
            "185.168.0.1/26"
          ]
        }
      }
    }
  ]
}
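For completeness, the pre-signed URLs mentioned above are generated per object and expire on their own; a boto3 sketch with placeholder bucket and key names:

import boto3

s3 = boto3.client("s3")

# Time-limited URL for a single object; anyone holding the URL can download
# the object until it expires.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "mybucket", "Key": "downloads/file.zip"},
    ExpiresIn=3600,  # one hour
)
print(url)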