S3 security: restricting access to S3 objects

I use S3 to store static files for my website. Since my website requires a login, I would like to limit access to the static files on S3.
I successfully set the access policy as shown below.
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests originating from www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "https://mywebsite.com/*",
            "http://127.0.0.1:8000/*"
          ]
        }
      }
    }
  ]
}
And then, I tried to access the image directly by entering its URL. I got the result shown in the attached screenshot.
My question:
Do you think it is safe to expose RequestID and HostID from a security perspective?
[Attached image: the S3 XML error response, which includes a RequestId and a HostId]

The Request ID and Host ID are identifiers within Amazon S3 that can be used for debugging and support purposes. There is no harm in S3 exposing that information, and you cannot prevent that information from appearing.
Also, please note that using aws:referer is a rather insecure method of protecting your content, since it can be easily spoofed (faked) when making a request to S3.
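For illustration, here is a minimal sketch (not part of the original answer) showing how any HTTP client can present whatever Referer it likes; the bucket and object names are placeholders:
import requests

# Any client can set the Referer header to an arbitrary value, so a
# referer-based bucket policy offers no real protection.
response = requests.get(
    "https://mybucket.s3.amazonaws.com/private/photo.jpg",  # hypothetical object URL
    headers={"Referer": "https://mywebsite.com/somepage"},   # spoofed referer
)
print(response.status_code)  # likely 200 if the referer condition is the only protection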
If you wish to protect valuable/confidential information in Amazon S3, then you should:
Keep all content in S3 as private (no bucket policy)
Users authenticate to your back-end app
When a user wants to access some private content from S3, your back-end app checks that they are entitled to access the content. If so, the back-end generates an Amazon S3 pre-signed URL, which is a time-limited URL that provides temporary access to a private object.
This can be provided as a direct link, or included in an HTML page (eg <img src="...">)
When S3 receives the pre-signed URL, it verifies the signature and checks the expiry time. If they are valid, it then returns the private object from the S3 bucket.
This way, you can use S3 to serve static content, but your application has full control over who is permitted to access the content. It cannot be faked like the referer, since each request is signed using your secret access key.
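As a rough illustration, a back end can generate such a pre-signed URL with a few lines of Python using boto3; the bucket name, key and expiry below are placeholder values:
import boto3

s3 = boto3.client("s3")

# Returns a time-limited URL granting GET access to a private object.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "mybucket", "Key": "static/photo.jpg"},  # hypothetical names
    ExpiresIn=300,  # the URL expires after 5 minutes
)
# The URL can then be returned to the client or embedded in an <img src="..."> tag.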

Related

s3 bucket policy to access object url

What S3 bucket policy permission do I need so that an IAM user can access the object URL, which is basically an HTTPS URL for an object that I have uploaded to the S3 bucket?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBucket",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::bucket"
    },
    {
      "Sid": "GetObject",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::bucket/*"
    }
  ]
}
With the above policy I can download the object to my local machine, but I can't access it via the object URL (the HTTPS link). Only if I make the S3 bucket fully public can I access the object URL over HTTPS.
I don't want to provide full public access, so how can I grant access to the object URL with a bucket policy?
You can get an HTTPS URL by generating S3 pre-signed URLs for the objects. This allows temporary access using the generated URLs.
Other than that, a common choice is to share your S3 objects with the outside world without making your bucket public by using CloudFront, as explained in:
Amazon S3 + Amazon CloudFront: A Match Made in the Cloud
Objects in Amazon S3 are private by default. They are not accessible via an anonymous URL.
If you want a specific IAM User to be able to access the bucket, then you can add permissions to the IAM User themselves. Then, when accessing the bucket, they will need to identify themselves and prove their identity. This is best done by making API calls to Amazon S3, which include authentication.
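For example, a minimal sketch of such an authenticated call with Python and boto3 (assuming the IAM user's credentials are configured locally; the bucket and key names are placeholders):
import boto3

# boto3 picks up the IAM user's credentials from the environment or the
# shared credentials file, so the request is authenticated; no public
# access or bucket policy is required.
s3 = boto3.client("s3")
s3.download_file("bucket", "path/to/object.txt", "object.txt")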
If you must access the private object via a URL, then you can create an Amazon S3 pre-signed URL, which is a time-limited URL that provides temporary access to a private object. This proves your ownership and will let S3 serve the content to you. A pre-signed URL can be generated with a couple of lines of code.

Restrict read-write access from my S3 bucket

I am hosting a website where users can write and read files, which are stored in another S3 bucket. However, I want to restrict access to these files to my website only.
For example, loading a picture.
If the request comes from my website (example.com), I want the read (or write, if I upload a picture) request to be allowed by the S3 storage bucket.
If the request comes from a user who enters the Object URL directly in their browser, I want the storage bucket to block it.
Right now, despite everything I have tried, people can access resources via the Object URL.
Here is my Bucket Policy:
{
  "Version": "2012-10-17",
  "Id": "Id",
  "Statement": [
    {
      "Sid": "Sid",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectAcl"
      ],
      "Resource": "arn:aws:s3:::storage-bucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": "http://example.com/*"
        }
      }
    }
  ]
}
Additional information:
All my "Block public access" settings are unchecked, as shown in my screenshot. (I think the problem comes from here: when I check the two boxes about ACLs, my main problem is fixed, but I then get a 403 Forbidden error when uploading files to the bucket, which is another problem.)
My ACL is shown in my screenshot as well.
My website is statically hosted on another S3 bucket.
If you need more information or details, ask me.
Thank you in advance for your answers. (This was written by a French speaker; sorry for any mistakes.)
"aws:Referer": "http://example.com/*
The referer is an http header passed by the browser and any client could just freely set the value. It provides no real security
However, I want to restrict the access of these files only to my website
The standard way to restrict access to S3 resources for a website is to use pre-signed URLs. Basically, your website back end creates an S3 URL for downloading or uploading an S3 object and hands that URL only to an authenticated/allowed client. The resource bucket can then block all public access. Allowing uploads without authentication is usually a very bad idea.
Yes, in this case your website is not static anymore, and you need some back-end logic to do this.
If your website clients are authenticated, you can use AWS API Gateway and Lambda to create this pre-signed URL for the clients.
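A minimal sketch of such a Lambda handler in Python (boto3 is available in the Lambda runtime; the bucket name, key and expiry here are made-up placeholders):
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "storage-bucket"  # hypothetical resource bucket

def lambda_handler(event, context):
    # In a real setup, verify the caller's identity first (e.g. via an
    # API Gateway authorizer) and derive the object key from the request.
    key = "uploads/example.jpg"
    url = s3.generate_presigned_url(
        ClientMethod="put_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=300,  # the upload URL expires after 5 minutes
    )
    return {"statusCode": 200, "body": json.dumps({"uploadUrl": url})}
The client then performs an HTTP PUT of the file body directly to the returned URL.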

How to revoke public permissions from an Amazon S3 bucket

I created an Amazon S3 bucket to store only images from my website. I have more than 1 million images, all with public read access. Every time I log in, Amazon gives me this warning:
"This bucket has public access
You have provided public access to this bucket. We highly recommend that you never grant any kind of public access to your S3 bucket. "
I'm using the following bucket policy to allow images to be shown only on my sites:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests originated from www.example.com and example.com.br",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket.com/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.example.com/*",
            "http://www.example.com.br/*",
            "https://www.example.com/*",
            "https://www.example.com.br/*"
          ]
        }
      }
    }
  ]
}
How can I revoke the public access to the bucket and to my files and grant it only to my sites?
Thank you!
It's a scary warning meant to prevent people from leaking data unintentionally. There have been lots of cases in the news lately about companies accidentally setting permissions to allow public reads.
In your case you really do want these to be publicly readable so you can just ignore the warning. Your security policy looks fine and still matches the documentation for public hosting.
You could theoretically put these images behind another server that streams them to the user if you really don't want someone to be able to download them directly. That's not really any more secure though.
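For what it's worth, a minimal sketch of such a proxying server (using Flask and boto3 purely as an illustration; the bucket name and route are made up) could look like this:
import boto3
from flask import Flask, Response, abort

app = Flask(__name__)
s3 = boto3.client("s3")
BUCKET = "examplebucket.com"  # hypothetical image bucket

@app.route("/images/<path:key>")
def serve_image(key):
    # Insert your own authentication/authorization check here.
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=key)
    except s3.exceptions.NoSuchKey:
        abort(404)
    return Response(obj["Body"].read(),
                    content_type=obj.get("ContentType", "application/octet-stream"))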
If you do not want to have these publicly available at all just delete this policy from your bucket. In that case your website will not be able to serve the images.
Your policy looks good. You are providing a higher level of security than plain public access, by using the referer header and by not allowing the listing of objects.
Using S3 to serve common files such as CSS, JS and images is just so easy. However, with all of the accidental security problems, I usually recommend one of these approaches:
Turn on static web site hosting for the bucket. This makes it very clear to future admins that this bucket is intended for public files. Also I do not see big warning messages for these buckets. Enable redirect requests.
Better, turn off all public access and use CloudFront. Enable Origin Access Identity. You receive all the benefits of CloudFront, tighter security, etc.
Using an Origin Access Identity to Restrict Access to Your Amazon S3 Content

Bucket policy that respects pre-signed URLs OR IP Address deny?

I would like to be able to restrict access to files in an S3 bucket in multiple ways, because the stored files are accessed in different manners. We do this because we have TBs of files and don't want to duplicate the bucket.
One access method is tokenized CDN delivery, which uses the S3 bucket as its origin. So that the files can be pulled, I've set the permissions on the files to allow everybody to download them. Using a bucket policy, I can restrict which IP addresses can get the files in the bucket, so I've restricted them to the CDN's IP block, and anyone outside those IP addresses can't grab the files.
The other access method is direct download using our store system, which generates time-expiring S3 pre-signed URLs.
Since the CDN pull effectively needs the files to be publicly readable, is there a way to:
Check first for a valid pre-signed URL and serve the file if the request is valid
If not valid, fall back to the IP address restriction to prevent further access?
I've got an IP-restriction bucket policy working, but it stomps out the pre-signed access... Removing the bucket policy fixes the pre-signed access, but then the files are public.
Objects in Amazon S3 are private by default. Access then can be granted via any of these methods:
Per-object ACLs (mostly for granting public access)
Bucket Policy with rules to define what API calls are permitted in which circumstances (eg only from a given IP address range)
IAM Policy -- similar to Bucket Policy, but can be applied to specific Users or Groups
A Pre-signed URL that grants time-limited access to an object
When attempting to access content in Amazon S3, access is granted as long as any of the above methods permits it and no explicit Deny applies. An Allow granted via one method (for example, a pre-signed URL) is not cancelled out simply because another method does not also grant access.
Therefore, the system should automatically do what you wish: if the pre-signed URL is valid, access is granted; if the IP address comes from the desired range, access is granted.
It is very strange that you say the IP restriction "stomps out the pre-signed access" -- that should only happen if the bucket policy contains an explicit Deny, since an explicit Deny overrides every Allow.
Issue solved -- here's what I ended up with. I realized I was using a "Deny" for the IP address section (I saw that code posted somewhere, and it worked on its own), and an explicit Deny overrides any Allows, so I needed to flip it to an Allow.
I also made sure I didn't have any anonymous permissions on objects in the bucket.
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId2",
  "Statement": [
    {
      "Sid": "Allow our access key",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789:user/myuser"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mybucket/*"
    },
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "192.168.0.1/27",
            "186.168.0.1/32",
            "185.168.0.1/26"
          ]
        }
      }
    }
  ]
}

Amazon S3 Bucket policy, allow only one domain to access files

I have an S3 bucket with a file in it. I only want a certain domain to be able to access the file. I have tried a few policies on the bucket, but none of them are working; this one is from the AWS documentation.
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests originated from www.example.com and example.com",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket-name/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.phpfiddle.org/*",
            "http://phpfiddle.org/*"
          ]
        }
      }
    }
  ]
}
To test the file, I have hosted the code below on phpfiddle.org. But I am not able to access the file, neither directly from the browser nor via the phpfiddle code.
<?php
// Attempt to fetch the object directly over its public HTTPS URL.
$myfile = file_get_contents("https://s3-us-west-2.amazonaws.com/my-bucket-name/some-file.txt");
echo $myfile;
?>
Here are the permissions for the file; the bucket itself also has the same permissions, plus the above policy.
This is just an example link, not an actually working link.
The "Restricting Access to a Specific HTTP Referrer" bucket policy only allows your file to be accessed from a page on your domain (i.e., when the HTTP referrer is your domain).
Suppose you have a website with domain name (www.example.com or example.com) with links to photos and videos stored in your S3 bucket, examplebucket.
You can't access your file directly from your browser (by typing the file URL into the browser). You need to reference it via a link/image/video tag on a page in your domain.
If you want to use file_get_contents to fetch the file from S3, you need to create a new policy that allows your server's IP address (see example). Change the IP address to your server's IP.
Another solution is to use the AWS SDK for PHP to download the file locally. You can also generate a pre-signed URL to allow your customer to download from S3 directly, for a limited time only.