How to set up an Amazon S3 bucket policy for an IP address

I am using an S3-compatible storage service (DigitalOcean Spaces) to host images for my web application.
To prevent hotlinking and minimize direct downloads, I applied this policy:
{
  "Id": "ip referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests originating from my server.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-name/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "server-ip-address"
        }
      }
    }
  ]
}
The trick seemed to work and I am now unable to access the files directly; however, neither can my web application. Have I done something wrong?
Is there a way to debug the referrer or something?

Content is private by default. Your policy is not granting any access via an Allow statement, so the content is not accessible. The Deny can be used to remove permissions granted by an Allow, but does not itself grant access.
You could change it to an Allow statement and change NotIpAddress to IpAddress. This would grant your server access to download content. However, it would be better to use an S3-style API call to download content from your own bucket rather than an anonymous HTTP request.
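A minimal sketch of that SDK-style call using boto3 (the endpoint, credentials, bucket name, and object key below are placeholders, not values from the question):

import boto3

# DigitalOcean Spaces is S3-compatible, so boto3 can talk to its regional endpoint.
# All names and credentials here are placeholders for illustration.
s3 = boto3.client(
    "s3",
    region_name="nyc3",
    endpoint_url="https://nyc3.digitaloceanspaces.com",
    aws_access_key_id="SPACES_KEY",
    aws_secret_access_key="SPACES_SECRET",
)

# Authenticated GetObject call from the server, instead of an anonymous HTTP request.
obj = s3.get_object(Bucket="bucket-name", Key="images/example.jpg")
data = obj["Body"].read()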
Also note that if you are putting a link to the object in an HTML page, it is the user's browser, not your server, that requests the object, so the request will be denied because it does not originate from your server's IP address. That is the hotlinking protection you asked for, but it also means images linked from your own pages will not load for visitors.

Related

The website hosted on EC2 is not able to access the S3 image link

I have assigned a role of Fulls3Access to the EC2 instance. The website on EC2 is able to upload and delete S3 objects, but access to the S3 asset URLs is denied (which means I can't read the images). I have enabled the block public access settings. I want to make some of the folders confidential so that only the website can access them. I have tried to set conditions such as sourceIp and referer URL on the public-read statements in the bucket policy, but the policy below doesn't work; the images on the website still don't display. Does anyone have ideas on how to enable read access to the S3 bucket while restricting it to the website only?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname/assets/*"
      ]
    },
    {
      "Sid": "AllowIP",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname/private/*"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "ip1/32",
            "ip2/32"
          ]
        }
      }
    }
  ]
}
If you're trying to serve these assets in the user's browser via an application on the EC2 host, then the source would not be the EC2 server; it would be the user's browser.
If you want to restrict assets while still allowing the user to see them in the browser, there are a few options.
The first option would be to generate a pre-signed URL using the AWS SDK. This creates an ephemeral link that expires after a certain length of time. It has to be generated each time the asset is required, which works well for sensitive information that is not accessed frequently.
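A rough sketch of this first option with boto3 (the bucket name, key, and expiry are placeholders):

import boto3

s3 = boto3.client("s3")

# Ephemeral link that expires after 15 minutes; generate it each time the asset is needed.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "bucketname", "Key": "private/report.pdf"},
    ExpiresIn=900,
)
# Embed this URL in the page returned to the user's browser.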
The second option would be to add a CloudFront distribution in front of the S3 bucket and use a signed cookie. This requires your code to generate a cookie, which is then included in all requests to the CloudFront distribution. It allows the same behaviour as a signed URL but only needs to be generated once for a user to access all content.
If all assets should only be accessed from your website but are not considered sensitive, you could also look at adding a WAF to a CloudFront distribution in front of your S3 bucket. This would be configured with a rule that only allows requests where the "Referer" header matches your domain. This can still be bypassed by someone setting that header in the request, but it will lead to fewer crawlers hitting your assets.
More information is available in the How to Prevent Hotlinking by Using AWS WAF, Amazon CloudFront, and Referer Checking documentation.

Restrict access to an S3 static website behind a CloudFront distribution

I want to temporarily restrict users from being able to access my static website hosted in s3 which sits behind a cloudfront distribution.
Is this possible, and if so, what methods could I use to implement this?
I've been able to restrict specific access to my S3 bucket by using a condition in the bucket policy which looks something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "12.34.56.73/32"
        }
      }
    }
  ]
}
This works and restricts my S3 bucket to my IP; however, it means that the CloudFront URL gets a 403 Forbidden: Access Denied.
When reading the AWS docs, it suggests that to restrict specific access to S3 resources I should use an Origin Access Identity. However, they specify the following:
If you don't see the Restrict Bucket Access option, your Amazon S3 origin might be configured as a website endpoint. In that configuration, S3 buckets must be set up with CloudFront as custom origins and you can't use an origin access identity with them.
which suggests to me that I can't use it in this instance. Ideally I'd like to force my distribution or bucket policy to use a specific security group and control it that way so I can easily add/remove approved IPs.
You can allow the CloudFront IP addresses in your bucket policy, because the static website endpoint doesn't support Origin Access Identity.
Here is the list of CloudFront IP addresses:
http://d7uri8nf7uskq.cloudfront.net/tools/list-cloudfront-ips
This article also explains how you can limit access via referer headers:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/
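If you would rather pull the ranges straight from AWS, the published ip-ranges.json file can be filtered for the CLOUDFRONT service; a minimal sketch:

import json
import urllib.request

# AWS publishes all of its IP ranges; keep only the CloudFront prefixes.
with urllib.request.urlopen("https://ip-ranges.amazonaws.com/ip-ranges.json") as resp:
    ranges = json.load(resp)

cloudfront_cidrs = sorted(
    p["ip_prefix"] for p in ranges["prefixes"] if p["service"] == "CLOUDFRONT"
)
print(cloudfront_cidrs)  # paste these into the aws:SourceIp condition of the bucket policy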
You can tell CloudFront to add a header to every request and then modify your S3 bucket policy to require that header.
E.g.
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests originating from www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringLike": {"aws:Referer": "mysecretvalue"}
      }
    }
  ]
}
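On the CloudFront side, the secret value is attached as an origin custom header. One way to do that programmatically is sketched below with boto3; the distribution ID is a placeholder, and the header name/value simply mirror the policy above:

import boto3

cf = boto3.client("cloudfront")
dist_id = "EDFDVBD6EXAMPLE"  # placeholder distribution ID

# Fetch the current config, add a custom header to the S3 origin, and push it back.
cfg = cf.get_distribution_config(Id=dist_id)
origin = cfg["DistributionConfig"]["Origins"]["Items"][0]
origin["CustomHeaders"] = {
    "Quantity": 1,
    "Items": [{"HeaderName": "Referer", "HeaderValue": "mysecretvalue"}],
}
cf.update_distribution(
    Id=dist_id,
    IfMatch=cfg["ETag"],
    DistributionConfig=cfg["DistributionConfig"],
)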

Amazon S3 Permissions by Client code

I've been converting an existing application to an EC2, S3 and RDS model within AWS. So far it's going well, but I've got a problem I can't seem to find any info on.
My web application accesses the S3 bucket for images and documents, which are stored by client code:
Data/ClientCode1/Images
Data/ClientCode2/Images
Data/ClientABC/Images -- etc
The EC2 instance hosting the web application works within a similar structure, for example www.programname.com/ClientCode1/Index.aspx; this has working security to prevent cross-client access.
Now when www.programname.com/ClientCode1/Index.aspx goes to S3 for images, I need to make sure it can only access the ClientCode1 folder. The goal is to prevent client A from seeing the images/documents of client B if a technically inclined user were to try.
Is there perhaps a way to get the page referrer, or is there a better approach to this issue?
There is no way to use the URL or referrer to control access to Amazon S3, because that information is presented to your application (not S3).
If all your users are accessing the data in Amazon S3 via the same application, it will be the job of your application to enforce any desired security. This is because the application will be using a single set of credentials to access AWS services, so those credentials will need access to all data that the application might request.
To clarify: Amazon S3 has no idea which page a user is viewing. Only your application knows this. Therefore, your application will need to enforce the security.
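For illustration only, the kind of application-level check this answer describes might look like the following (the bucket name, prefix layout, and function are invented for the example):

import boto3

s3 = boto3.client("s3")  # one set of credentials with access to every client folder

def fetch_client_image(logged_in_client_code: str, requested_key: str) -> bytes:
    # The application, not S3, decides whether this user may see this object.
    allowed_prefix = f"Data/{logged_in_client_code}/"
    if not requested_key.startswith(allowed_prefix):
        raise PermissionError("Access to another client's folder is not allowed")
    obj = s3.get_object(Bucket="clientdata", Key=requested_key)
    return obj["Body"].read()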
I found the solution; it seems to work well:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::clientdata/Clients/Client1/*",
      "Condition": {
        "StringLike": {"aws:Referer": ["http://www.example.com/Client1/*", "http://example.com/Client1/*"]}
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::clientdata/Clients/Client1/*",
      "Condition": {
        "StringNotLike": {"aws:Referer": ["http://www.example.com/Client1/*", "http://example.com/Client1/*"]}
      }
    }
  ]
}
This lets you check the referer to see whether the request comes from a given path. In my case each client sits under its own path, and the bucket follows the same structure. In the above example only a user coming from Client1 can access the bucket data for Client1; if I log in to Client2 and try to force an image from the Client1 folder, I get access denied.

Bucket policy that respects pre-signed URLs OR IP Address deny?

I would like to be able to restrict access to files in a S3 bucket in multiple ways. This is due to the fact that the files stored can be accessed in different manners. We do this because we have TBs of files, so we don't want to duplicate the bucket.
One access method is through tokenized CDN delivery which uses the S3 bucket as a source. So that the files may be pulled, I've set the permissions for the files to allow download for everybody. Using a bucket policy, I can restrict IP addresses which can get the files in the bucket. So I've restricted them to the CDN IP block and anyone outside those IP addresses can't grab the file.
The other access method is direct downloads using our store system, which generates time-expiring S3 pre-signed URLs.
Since the CDN pull effectively needs the files to be publicly readable, is there a way to:
Check first for a valid pre-signed URL and serve the file if the request is valid
If not valid, fall back to the IP address restriction to prevent further access?
I've got a working IP-restriction bucket policy, but it stomps out the pre-signed access. Removing the bucket policy fixes the pre-signed access, but then the files are public.
Objects in Amazon S3 are private by default. Access can then be granted via any of these methods:
Per-object ACLs (mostly for granting public access)
Bucket Policy with rules to define what API calls are permitted in which circumstances (eg only from a given IP address range)
IAM Policy -- similar to Bucket Policy, but can be applied to specific Users or Groups
A Pre-signed URL that grants time-limited access to an object
When attempting to access content in Amazon S3, access is granted as long as any of the above methods permits it. The one exception is an explicit Deny: a Deny statement in a Bucket Policy overrides access granted by any other method, including a pre-signed URL.
Therefore, provided the IP restriction is written as an Allow, the system automatically does what you wish: if the pre-signed URL is valid, then access is granted; if the IP address comes from the desired range, then access is granted.
If the IP restriction "stomps out the pre-signed access", the likely cause is that it is written as a Deny, which blocks every request from other IP addresses regardless of how they are signed.
Issue solved -- here's what I ended up with. I realized I was using a Deny for the IP address section (I saw that code posted somewhere, and it worked on its own), which overrides any Allows, so I needed to flip that.
I also made sure I didn't have any anonymous permissions on objects in the buckets.
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId2",
  "Statement": [
    {
      "Sid": "Allow our access key",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789:user/myuser"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mybucket/*"
    },
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "192.168.0.1/27",
            "186.168.0.1/32",
            "185.168.0.1/26"
          ]
        }
      }
    }
  ]
}

Amazon S3 Bucket policy, allow only one domain to access files

I have an S3 bucket with a file in it. I only want a certain domain to be able to access the file. I have tried a few policies on the bucket, but none of them work; this one is from the AWS documentation.
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests originated from www.example.com and example.com",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket-name/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.phpfiddle.org/*",
            "http://phpfiddle.org/*"
          ]
        }
      }
    }
  ]
}
To test the file, I have hosted code on phpfiddle.org with the snippet below, but I am not able to access the file either directly from the browser or via the phpfiddle code.
<?php
$myfile = file_get_contents("https://s3-us-west-2.amazonaws.com/my-bucket-name/some-file.txt");
echo $myfile;
?>
Here are the permissions for the file; the bucket itself also has the same permissions, plus the above policy.
This is just an example link and not an actually working link.
The Restricting Access to a Specific HTTP Referrer bucket policy only allows your file to be accessed from a page on your domain (the HTTP referrer is your domain).
Suppose you have a website with domain name (www.example.com or example.com) with links to photos and videos stored in your S3 bucket, examplebucket.
You can't access your file directly from the browser (by typing the file URL into the address bar). You need to reference it via a link/image/video tag on a page in your domain.
If you want to use file_get_contents against S3, you need to create a new policy that allows your server IP (see example). Change the IP address to your server's IP.
Another solution is to use the AWS SDK for PHP to download the file to your server. You can also generate a pre-signed URL to allow your customer to download from S3 directly for a limited time only.