I would like to restrict access to files in an S3 bucket in multiple ways, because the files can be accessed in different manners. We have TBs of files, so we don't want to duplicate the bucket.
One access method is tokenized CDN delivery, which uses the S3 bucket as its origin. So that the CDN can pull the files, I've set the object permissions to allow download by everybody, and I use a bucket policy to restrict which IP addresses can get the files in the bucket. I've restricted them to the CDN's IP block, so anyone outside those addresses can't grab the files.
The other access method is direct download through our store system, which generates time-expiring S3 pre-signed URLs.
Since the CDN pull effectively needs the files to be publicly readable, is there a way to:
Check first for a valid pre-signed URL and serve the file if the request is valid
If not valid, fall back to the IP address restriction to prevent further access?
I've got an IP restriction bucket policy working, but it stomps out the pre-signed access... removing the bucket policy fixes the pre-signed access, but then the files are public.
Objects in Amazon S3 are private by default. Access can then be granted via any of these methods:
Per-object ACLs (mostly for granting public access)
Bucket Policy with rules to define what API calls are permitted in which circumstances (eg only from a given IP address range)
IAM Policy -- similar to Bucket Policy, but can be applied to specific Users or Groups
A Pre-signed URL that grants time-limited access to an object
When attempting to access content in Amazon S3, access is granted as long as any of the above permits it. One method cannot revoke access granted by another -- for example, if access is granted via a pre-signed URL, an Allow-based Bucket Policy cannot cause that access to be denied. The one exception is an explicit Deny, which overrides every Allow.
Therefore, the system should automatically do what you wish: if the pre-signed URL is valid, access is granted; if the IP address comes from the desired range, access is granted.
It is very strange that you say the IP restriction "stomps out the pre-signed access" -- as long as the policy only uses Allow statements, that should not be possible.
Issue solved -- here's what I ended up with. I realized I was using a "Deny" for the IP address section (I saw that code posted somewhere, and it worked on its own), which overrides any Allows, so I needed to flip it to an Allow.
I also made sure I didn't have any anonymous permissions on objects in the bucket.
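For anyone checking the same thing, here is a minimal boto3 sketch for resetting an object's ACL (the bucket and key names are placeholders):

import boto3

s3 = boto3.client("s3")

# Remove any anonymous ("everyone") grant by resetting the object ACL
# so that only the bucket owner has access.
s3.put_object_acl(Bucket="mybucket", Key="path/to/file", ACL="private")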
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId2",
  "Statement": [
    {
      "Sid": "Allow our access key",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789:user/myuser"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mybucket/*"
    },
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "192.168.0.1/27",
            "186.168.0.1/32",
            "185.168.0.1/26"
          ]
        }
      }
    }
  ]
}
What S3 bucket policy permission do I need to provide an IAM user access to the object URL -- basically the HTTPS URL for an object that I have uploaded to the S3 bucket?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBucket",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::bucket"
    },
    {
      "Sid": "GetObject",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::bucket/*"
    }
  ]
}
With the above policy I can download the object to my local machine, but I can't access it via the object URL (the HTTPS link). Only if I make the S3 bucket fully public can I access the object URL over HTTPS.
I don't want to provide full public access, so how can I provide this access with a bucket policy?
You can get an HTTPS URL by generating S3 pre-signed URLs for the objects. These allow temporary access using the generated URLs.
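For example, a minimal sketch using boto3 (the bucket and key names here are placeholders):

import boto3

s3 = boto3.client("s3")

# Generate an HTTPS URL that grants temporary access to a private object.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "bucket", "Key": "path/to/object.png"},
    ExpiresIn=3600,  # the URL stops working after one hour
)
print(url)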
Other than that, a common choice is to share your S3 objects with the outside world without making your bucket public by using CloudFront, as explained in:
Amazon S3 + Amazon CloudFront: A Match Made in the Cloud
Objects in Amazon S3 are private by default. They are not accessible via an anonymous URL.
If you want a specific IAM User to be able to access the bucket, then you can add permissions to the IAM User themselves. Then, when accessing the bucket, they will need to identify themselves and prove their identity. This is best done by making API calls to Amazon S3, which include authentication.
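For example, a sketch of an authenticated download with boto3, which signs the request using the IAM user's credentials (the bucket and key names are placeholders):

import boto3

# boto3 reads the IAM user's credentials from the environment or
# ~/.aws/credentials and signs the API call automatically.
s3 = boto3.client("s3")
s3.download_file("bucket", "path/to/object.png", "object.png")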
If you must access the private object via a URL, then you can create an Amazon S3 pre-signed URL, which is a time-limited URL that provides temporary access to a private object. The signature proves that the URL was generated by someone with access to the object, so S3 will serve the content. A pre-signed URL can be generated with a couple of lines of code.
I use S3 to store static files for my website. Since my website is password-protected, I would like to limit access to the static files on S3.
I set the access permissions as shown below.
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests originating from www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "https://mywebsite.com/*",
            "http://127.0.0.1:8000/*"
          ]
        }
      }
    }
  ]
}
And then, I tried to access the image directly by entering its URL. I got the result shown in the attachment.
My question:
Do you think it is safe to expose RequestID and HostID from a security perspective?
[Attached image: the XML response returned by S3, showing the Request ID and Host ID]
The Request ID and Host ID are identifiers within Amazon S3 that can be used for debugging and support purposes. There is no harm in S3 exposing that information, and you cannot prevent that information from appearing.
Also, please note that using aws:Referer is a rather insecure method of protecting your content, since it can be easily spoofed (faked) when making a request to S3.
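To illustrate, any HTTP client can simply claim to be your website (a sketch; the URLs are placeholders):

import requests

# The Referer header is supplied by the client, so it can be set to
# whatever value the bucket policy expects.
response = requests.get(
    "https://mybucket.s3.amazonaws.com/some-image.png",
    headers={"Referer": "https://mywebsite.com/some-page"},
)
print(response.status_code)  # 200 -- the policy cannot tell the difference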
If you wish to protect valuable/confidential information in Amazon S3, then you should:
Keep all content in S3 as private (no bucket policy)
Users authenticate to your back-end app
When a user wants to access some private content from S3, your back-end app checks that they are entitled to access the content. If so, the back-end generates an Amazon S3 pre-signed URL, which is a time-limited URL that provides temporary access to a private object.
This can be provided as a direct link, or included in an HTML page (eg <img src="...">)
When S3 receives the pre-signed URL, it verifies the signature and checks the expiry time. If they are valid, it then returns the private object from the S3 bucket.
This way, you can use S3 to serve static content, but your application has full control over who is permitted to access the content. Unlike the referer, it cannot be faked, since each request is signed using your Secret Key.
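A rough sketch of that flow with boto3 (user_can_access() is a hypothetical entitlement check your app would provide; generate_presigned_url is the real boto3 call):

import boto3

s3 = boto3.client("s3")

def image_url_for(user, key):
    # Hypothetical entitlement check implemented by your back-end app.
    if not user_can_access(user, key):
        raise PermissionError("not entitled to this object")
    # Time-limited URL for the private object (five minutes here).
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "mybucket", "Key": key},
        ExpiresIn=300,
    )

# The result can then be embedded in an HTML page, eg:
# <img src="...presigned URL...">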
I am trying to setup Cloudflare to cache images from S3. I want to be as restrictive (least permissive) as possible in doing this. I assume I need to accept requests from Cloudflare to read my S3 images. I want all other requests to be rejected.
I followed this guide: https://support.cloudflare.com/hc/en-us/articles/360037983412-Configuring-an-Amazon-Web-Services-static-site-to-use-Cloudflare
I did not enable static website hosting on my bucket, because it's not necessary for my case.
In my bucket permissions I turned off "Block all public access" and temporarily turned off "Block public access to buckets and objects granted through new public bucket or access point policies". I needed to do this in order to add a bucket policy.
From the link above, I then added a bucket policy that looks something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.example.com/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            <CLOUDFLARE_IP_0>,
            <CLOUDFLARE_IP_1>,
            <CLOUDFLARE_IP_2>,
            ...
          ]
        }
      }
    }
  ]
}
At this point, a message appeared in the AWS console stating:
"This bucket has public access
You have provided public access to this bucket. We highly recommend that you never grant any kind of public access to your S3 bucket."
I then turned back on "Block public access to buckets and objects granted through new public bucket or access point policies" and turned off "Block public and cross-account access to buckets and objects through any public bucket or access point policies".
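For reference, a sketch of those final toggles via boto3 (the two ACL-related flags are my assumption, since only the policy-related settings are described above):

import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="www.example.com",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,         # assumed re-enabled (not stated above)
        "IgnorePublicAcls": True,        # assumed re-enabled (not stated above)
        "BlockPublicPolicy": True,       # blocks *new* public policies
        "RestrictPublicBuckets": False,  # the existing public policy stays effective
    },
)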
At this point, the S3 image request behavior seems to be working as intended, but I am not confident that I set everything up to be minimally permissive, especially given the warning message in the AWS console.
Given my description, did I properly set things up in this bucket to accept read requests only from Cloudflare and deny all other requests? I want to make sure that requests from any origin other than Cloudflare will be denied.
Sounds good! If it works from Cloudflare, but not from somewhere else, then it meets your requirements.
Those Block Public Access warnings are intentionally scary to make people think twice before opening their buckets to the world.
Your policy is nicely limited to only GetObject and only to a limited range of IP addresses.
I'm having a really hard time setting up my bucket policy; it looks like it only applies to some objects in my bucket.
What I want is pretty simple: I store video files in the bucket, and I want them to be downloadable exclusively from my website.
My approach is to block everything by default, and then add allow rules:
Give full rights to the root account and the alice user.
Give public access to files in my bucket from only specific referers (my websites).
Note:
I manually made all the objects 'public' and my settings for Block Public Access are all set to Off.
Can anyone see any obvious errors in my bucket policy?
I don't understand why my policy seems to only work for some files.
Thank you so much
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::MY_BUCKET/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "https://mywebsite1.com/*",
            "https://mywebsite2.com/*"
          ]
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::MY_BUCKET/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "https://mywebsite1.com/*",
            "https://mywebsite2.com/*"
          ]
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::426873019732:root",
          "arn:aws:iam::426873019732:user/alice"
        ]
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::MY_BUCKET",
        "arn:aws:s3:::MY_BUCKET/*"
      ]
    }
  ]
}
Controlling access via aws:Referer is not secure. It can be overcome quite easily. A simple web search will provide many tools that can accomplish this.
The more secure method would be:
Keep all objects in your Amazon S3 bucket private (do not "Make Public")
Do not use a Bucket Policy
Users should authenticate to your application
When a user wishes to access one of the videos, or when your application creates an HTML page that refers/embeds a video, the application should determine whether the user is entitled to access the object.
If the user is entitled to access the object, the application creates an Amazon S3 pre-signed URL, which provides time-limited access to a private object.
When the user's browser requests to retrieve the object via the pre-signed URL, Amazon S3 will verify the contents of the URL. If the URL is valid and the time limit has not expired, Amazon S3 will return the object (eg the video). If the time has expired, the contents will not be provided.
The pre-signed URL can be created in a couple of lines of code and does not require an API call back to Amazon S3.
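A sketch of that (MY_BUCKET and the key are placeholders); the signing happens locally, so nothing is sent to AWS until the user actually fetches the URL:

import boto3

s3 = boto3.client("s3")

# Computed locally from your credentials -- no network call to S3.
video_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "MY_BUCKET", "Key": "videos/intro.mp4"},
    ExpiresIn=3600,
)
# Embed video_url in your page, eg <video src="...">.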
The benefit of using pre-signed URLs is that your application determines who is entitled to view objects. For example, a user could choose to share a video with another user. Your application would permit the other user to view this shared video. It would not require any changes to IAM or bucket policies.
See: Amazon S3 pre-signed URLs
Also, if you wish to grant access to an Amazon S3 bucket to specific IAM Users (that is, users within your organization, rather than application users), it is better to grant access on the IAM User rather than via an Amazon S3 bucket policy. If there are many users, you can create an IAM Group that contains multiple IAM Users, and then put the policy on the IAM Group. Bucket Policies should generally be used for granting access to "everyone" rather than to specific IAM Users.
In general, it is advisable to avoid using Deny policies since they can be difficult to write correctly and might inadvertently deny access to your Admin staff. It is better to limit what is being Allowed, rather than having to combine Allow and Deny.
I have a static website created with Amazon S3. The only permissions I have set are through the bucket policy provided in Amazon's tutorial:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Allow Public Access to All Objects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example.com/*"
    }
  ]
}
Clearly, this policy enables the public to view any file stored on my bucket, which I want. My question is, is this policy alone enough to prevent other people from uploading files and/or hijacking my website? I wish for the public to be able to access any file on the bucket, but I want to be the only one with list, upload, and delete permissions. Is this the current behavior of my bucket, given that my bucket policy only addresses view permissions?
Have a look at this: http://docs.aws.amazon.com/IAM/latest/UserGuide/AccessPolicyLanguage_EvaluationLogic.html#policy-eval-basics
From that document:
When a request is made, the AWS service decides whether a given request should be allowed or denied. The evaluation logic follows these rules:
By default, all requests are denied. (In general, requests made using the account credentials for resources in the account are always allowed.)
An explicit allow overrides this default.
An explicit deny overrides any allows.
So as long as you don't explicitly allow other access, you should be fine. I have a static site hosted on S3 and I have the same access policy.
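One way to sanity-check this is to attempt an anonymous upload, which should be rejected (a sketch; the bucket name comes from the policy above):

import boto3
from botocore import UNSIGNED
from botocore.config import Config
from botocore.exceptions import ClientError

# An unsigned (anonymous) client, ie any member of the public.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
try:
    s3.put_object(Bucket="example.com", Key="test.txt", Body=b"should fail")
except ClientError as e:
    print(e.response["Error"]["Code"])  # expect "AccessDenied"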