I'm trying to allow a specific domain to access my Amazon S3 bucket, but after applying the policy, access was dropped for every domain. The URL I wish to access is https://s3.ap-southeast-2.amazonaws.com/{bucketUri}/1.jpg, but it returns 403 Forbidden even from my allowed referer. I can see that the Referer header on the page is correct.
Here's my settings:
Block Public Access: all settings off.
Bucket policy:
{
"Version": "2008-10-17",
"Id": "Restrict based on HTTP referrer policy",
"Statement": [
{
"Sid": "1",
"Effect": "Deny",
"Principal": {
"AWS": "*"
},
"Action": "*",
"Resource": "arn:aws:s3:::{bucketUri}",
"Condition": {
"StringNotLike": {
"aws:Referer": "http://reference.domain.put.here"
}
}
}
]
}
You should use an Allow policy as shown here, not a Deny. Your Resource is also incorrect: it must end with /* to match the objects inside the bucket, not just the bucket itself:
{
"Version":"2012-10-17",
"Id":"http referer policy example",
"Statement":[
{
"Sid":"Allow get requests originating from www.example.com and example.com.",
"Effect":"Allow",
"Principal":"*",
"Action":["s3:GetObject","s3:GetObjectVersion"],
"Resource":"arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
"Condition":{
"StringLike":{"aws:Referer":["http://reference.domain.put.here"]}
}
}
]
}
And please note that aws:Referer does not really protect you. It is merely an inconvenience to an attacker, since the Referer header is easily spoofed.
In short: use an explicit Allow, not a Deny, and match the referer with StringLike instead of StringNotLike.
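To see why StringLike behaves this way: IAM's string condition operators support the * and ? wildcards. Here is a rough sketch of the matching, approximated with Python's fnmatch (which additionally supports [] character classes that IAM does not, so treat this as an illustration only, not the exact IAM algorithm):

```python
from fnmatch import fnmatchcase

# Rough approximation of IAM StringLike: IAM supports the * and ?
# wildcards, which fnmatch also understands.
def referer_allowed(referer: str, patterns: list[str]) -> bool:
    """Return True if the Referer header matches any allowed pattern."""
    return any(fnmatchcase(referer, p) for p in patterns)

patterns = ["http://www.example.com/*", "http://example.com/*"]
print(referer_allowed("http://www.example.com/page.html", patterns))  # True
print(referer_allowed("http://evil.example.net/page.html", patterns))  # False
```

With Allow + StringLike, a request is granted only when its Referer matches a listed pattern, which is the behavior the question is after.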
Related
I uploaded a static html site to s3 following this guideline: https://support.cloudflare.com/hc/en-us/articles/360037983412-Configuring-an-Amazon-Web-Services-static-site-to-use-Cloudflare
On s3 I created 2 bucket:
Root domain bucket: test1014.xyz (just a redirect to subdomain)
Subdomain bucket: www.test1014.xyz (contains the html file)
For the subdomain bucket, I blocked all public access and added a policy granting permission to Cloudflare:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::www.test1014.xyz/*",
"Condition": {
"NotIpAddress": {
"aws:SourceIp": [
"173.245.48.0/20",
"103.21.244.0/22",
"103.22.200.0/22",
"103.31.4.0/22",
"141.101.64.0/18",
"108.162.192.0/18",
"190.93.240.0/20",
"188.114.96.0/20",
"197.234.240.0/22",
"198.41.128.0/17",
"162.158.0.0/15",
"104.16.0.0/13",
"104.24.0.0/14",
"172.64.0.0/13",
"131.0.72.0/22",
"2400:cb00::/32",
"2606:4700::/32",
"2803:f800::/32",
"2405:b500::/32",
"2405:8100::/32",
"2a06:98c0::/29",
"2c0f:f248::/32"
]
}
}
}
]
}
On cloudflare I added 2 domains:
CNAME | test1014.xyz | test1014.xyz.s3-website-ap-southeast-1.amazonaws.com
CNAME | www | www.test1014.xyz.s3-website-ap-southeast-1.amazonaws.com
Basically I just followed the guideline and still keep getting "This site can’t be reached ".
I already updated my domain nameserver to cloudflare.
Amazon S3 content is private by default.
The policy you show is a Deny policy. A Deny is normally used in addition to an Allow policy, to override permissions the Allow would otherwise grant.
Therefore, you should either add an "Allow All" policy as well, or modify the Deny into an Allow policy that grants access only when the request comes from those IP addresses.
By the way, Deny policies are very difficult to get right. For example, this one would also mean that YOU cannot access an object in S3 (e.g. to download a file) unless you do it via Cloudflare. They are best avoided if possible.
Update: Here's a bucket policy that only permits access (theoretically) to CloudFlare. It avoids using a Deny policy.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::www.test1014.xyz/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"173.245.48.0/20",
"103.21.244.0/22",
"103.22.200.0/22",
"103.31.4.0/22",
"141.101.64.0/18",
"108.162.192.0/18",
"190.93.240.0/20",
"188.114.96.0/20",
"197.234.240.0/22",
"198.41.128.0/17",
"162.158.0.0/15",
"104.16.0.0/13",
"104.24.0.0/14",
"172.64.0.0/13",
"131.0.72.0/22",
"2400:cb00::/32",
"2606:4700::/32",
"2803:f800::/32",
"2405:b500::/32",
"2405:8100::/32",
"2a06:98c0::/29",
"2c0f:f248::/32"
]
}
}
}
]
}
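If you want to sanity-check whether a given source IP would satisfy the IpAddress condition above before deploying the policy, the same CIDR test can be reproduced locally with Python's ipaddress module (the range list is truncated here for brevity):

```python
import ipaddress

# A subset of the Cloudflare ranges from the bucket policy above.
CLOUDFLARE_RANGES = [
    "173.245.48.0/20",
    "103.21.244.0/22",
    "198.41.128.0/17",
    "2400:cb00::/32",
]

def is_cloudflare(ip: str) -> bool:
    """True if the source IP falls inside any whitelisted CIDR block."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in CLOUDFLARE_RANGES)

print(is_cloudflare("173.245.48.10"))  # True (inside 173.245.48.0/20)
print(is_cloudflare("8.8.8.8"))        # False (not a Cloudflare range)
```

Note that ipaddress membership tests simply return False on an IPv4/IPv6 version mismatch, so mixed lists like the one above are safe.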
I will be using Cloudflare as a proxy for my S3 website bucket to make sure users can't directly access the website with the bucket URL.
I have an S3 bucket set up for static website hosting with my custom domain: www.mydomain.com and have uploaded my index.html file.
I have a CNAME record with www.mydomain.com -> www.mydomain.com.s3-website-us-west-1.amazonaws.com and Cloudflare Proxy enabled.
Issue: I am trying to apply a bucket policy to Deny access to my website bucket unless the request originates from a range of Cloudflare IP addresses. I am following the official AWS docs to do this, but every time I try to access my website, I get a Forbidden 403 AccessDenied error.
This is my bucket policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "CloudflareGetObject",
"Effect": "Deny",
"NotPrincipal": {
"AWS": [
"arn:aws:iam::ACCOUNT_ID:user/Administrator",
"arn:aws:iam::ACCOUNT_ID:root"
]
},
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::www.mydomain.com/*",
"arn:aws:s3:::www.mydomain.com"
],
"Condition": {
"NotIpAddress": {
"aws:SourceIp": [
"2c0f:f248::/32",
"2a06:98c0::/29",
"2803:f800::/32",
"2606:4700::/32",
"2405:b500::/32",
"2405:8100::/32",
"2400:cb00::/32",
"198.41.128.0/17",
"197.234.240.0/22",
"190.93.240.0/20",
"188.114.96.0/20",
"173.245.48.0/20",
"172.64.0.0/13",
"162.158.0.0/15",
"141.101.64.0/18",
"131.0.72.0/22",
"108.162.192.0/18",
"104.16.0.0/12",
"103.31.4.0/22",
"103.22.200.0/22",
"103.21.244.0/22"
]
}
}
}
]
}
By default, AWS denies all requests. Source
Your policy itself does not grant access to the Administrator [or any other user], it only omits him from the list of principals that are explicitly denied. To allow him access to the resource, another policy statement must explicitly allow access using "Effect": "Allow". Source
We could create two policy statements, one with Allow and one with Deny, but it is simpler to have a single policy that allows access only from specific IPs.
It is better not to complicate simple things by combining Deny with NotPrincipal and NotIpAddress. Even AWS says:
Very few scenarios require the use of NotPrincipal, and we recommend that you explore other authorization options before you decide to use NotPrincipal. Source
So the question becomes: how do we whitelist the Cloudflare IPs?
Let's go with a simple approach. Below is the policy; replace the bucket name and the Cloudflare IPs with your own. I have tested it and it works.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowCloudFlareIP",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:getObject",
"Resource": [
"arn:aws:s3:::my-poc-bucket",
"arn:aws:s3:::my-poc-bucket/*"
],
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"IP1/32",
"IP2/32"
]
}
}
}
]
}
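Since Cloudflare's IP list changes over time, it can help to generate this policy rather than hand-edit it. Below is a minimal sketch that builds the same allow-only policy as a JSON string; the bucket name and CIDRs are placeholders, and the resulting string could then be applied with, for example, boto3's put_bucket_policy:

```python
import json

def make_allow_policy(bucket: str, cidrs: list[str]) -> str:
    """Build an allow-only bucket policy, as shown above, as a JSON string."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowCloudFlareIP",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"IpAddress": {"aws:SourceIp": cidrs}},
        }],
    }
    return json.dumps(policy, indent=2)

# Placeholder bucket name and range:
print(make_allow_policy("my-poc-bucket", ["173.245.48.0/20"]))
```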
I have spent days researching how CORS works on AWS S3, but I can't get it configured at all.
I need my files NOT to be publicly accessible, but they should still be embeddable in my domains. Currently I am unable to embed my images in my domains; access to them is completely blocked, as if the CORS configuration did not exist.
AWS Block Public Access
CORS settings
[
{
"AllowedHeaders": ["*"],
"AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
"AllowedOrigins": [
"https://www.dev.seedlix.com.br/",
"https://dev.seedlix.com.br/",
"http://www.dev.seedlix.com.br/",
"http://dev.seedlix.com.br/",
"https://www.seedlix.com.br/",
"https://seedlix.com.br/",
"http://www.seedlix.com.br/",
"http://seedlix.com.br/",
"http://localhost:3000/",
"52.95.163.31:443"
],
"ExposeHeaders": []
}
]
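One thing worth noting about the AllowedOrigins above: browsers send the Origin header without a trailing slash or path, and S3 compares it against each AllowedOrigins entry (with * as the only wildcard), so entries like https://seedlix.com.br/ will never match. A small sketch of that comparison, approximating the wildcard handling with fnmatch:

```python
from fnmatch import fnmatchcase

def origin_matches(origin: str, allowed: list[str]) -> bool:
    """Compare an Origin header against AllowedOrigins entries.
    S3 supports a * wildcard in entries; fnmatch approximates that here."""
    return any(fnmatchcase(origin, a) for a in allowed)

# Browsers send Origin without a trailing slash, so the slash-suffixed
# entry never matches:
print(origin_matches("https://seedlix.com.br", ["https://seedlix.com.br/"]))  # False
print(origin_matches("https://seedlix.com.br", ["https://seedlix.com.br"]))   # True
```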
After many hours of research, I finally managed to do what I wanted. First of all, I left the bucket fully accessible, and then created a policy that blocks access for EVERYONE except requests originating from my domains.
Bucket Public Access
Bucket Policy
{
"Version": "2012-10-17",
"Id": "http referer policy example",
"Statement": [
{
"Sid": "Allow get requests originating custom domains.",
"Effect": "Deny",
"Principal": "*",
"Action": ["s3:GetObject", "s3:GetObjectVersion"],
"Resource": "arn:aws:s3:::BUCKET_NAME_HERE/*",
"Condition": {
"StringNotLike": {
"aws:Referer": [
"https://domain-a.com.br/*",
"https://domain-b.com/*",
]
}
}
}
]
}
Bucket Cors Policy
I have set up the following information:
Created an AWS S3 bucket and Uploaded some images into the particular folder
Created an AWS CloudFront web distribution:
Origin Domain Name: Selected S3 bucket from the list
Restrict Bucket Access: Yes
Origin Access Identity: Selected existed Identity
Grant Read Permissions on Bucket: Yes, Update Bucket Policy
But when I access the file, I get an AccessDenied error.
I have got the signed URL from the above process like
image.png?policy=xxxxx#signature=xxx#Key-Pair-Id=XXXXXXX
but I couldn't access the URL
Sample JSON for cloud front policy
{
"Statement": [{
"Resource": "XXXXXXXXXX.cloudfront.net/standard/f7cecd92-5314-4263-9147-7cca3041e69d.png",
"Condition": {
"DateLessThan": {
"AWS:EpochTime": 1555021200
},
"IpAddress": {
"AWS:SourceIp": "0.0.0.0/0"
},
"DateGreaterThan": {
"AWS:EpochTime": 1554848400
}
}
}]
}
Added CloudFront bucket policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXXXX"
},
"Action": [
"s3:PutObject",
"s3:GetObject"
],
"Resource": "arn:aws:s3:::bucket_name/*"
},
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXXXX"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::bucket_name"
}
]
}
It looks like the AccessDenied error you're seeing has nothing to do with the steps you mentioned. The Origin Access Identity exists to let CloudFront access S3 as a special user using SigV4; with the steps above, you'll see an Allow statement added to the bucket policy.
If it were an error from S3, you'd see two request IDs (HostId and RequestId) along with the Access Denied message.
image.png?policy=xxxxx#signature=xxx#Key-Pair-Id=XXXXXXX
If you're seeing Access Denied, the error is with the CloudFront signed URL (restricted viewer access).
To see what's wrong with the generated CloudFront signed URL, try to base64-decode the Policy value and check that the Resource URL, expiry times, etc. are correct.
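For reference, CloudFront makes the Policy parameter URL-safe by substituting - for +, _ for =, and ~ for /; reversing those substitutions before base64-decoding recovers the JSON. A minimal sketch (the sample policy below is made up purely for the round-trip demo):

```python
import base64
import json

def decode_cf_policy(policy_param: str) -> dict:
    """Reverse CloudFront's URL-safe substitutions (-, _, ~ back to
    +, =, /) and base64-decode the Policy query-string parameter."""
    b64 = policy_param.replace("-", "+").replace("_", "=").replace("~", "/")
    return json.loads(base64.b64decode(b64))

# Round-trip demo with a made-up policy, encoded the way CloudFront does:
sample = {"Statement": [{"Resource": "https://example.cloudfront.net/1.png",
                         "Condition": {"DateLessThan": {"AWS:EpochTime": 1555021200}}}]}
encoded = (base64.b64encode(json.dumps(sample).encode()).decode()
           .replace("+", "-").replace("=", "_").replace("/", "~"))
print(decode_cf_policy(encoded) == sample)  # True
```

Decoding the real Policy value this way lets you inspect the Resource and the DateLessThan/DateGreaterThan epochs to confirm the URL has not expired or been signed for the wrong path.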
Our website is currently being scraped by bots that access content on S3. I'm trying to set up a bucket policy so that an S3 URL cannot be accessed by any referrer except our domain. The problem is that it doesn't seem to detect that our domain is the referrer. Here is the bucket policy we have set up:
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "AllowPublicRead",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::example/*",
"Condition": {
"StringNotLike": {
"aws:Referer": [
"https://www.example.com/*",
"http://www.example.com/*",
"https://*.example.com/*",
"https://*.example.com/*"
]
}
}
}
]
}
Does anyone know why it wouldn't recognize www.example.com/blah/blah.html etc. as the referrer?
Is there a way to see what AWS is registering as the referrer when I access a URL via our app? That would help with troubleshooting.
From Bucket Policy Examples - Restricting Access to a Specific HTTP Referrer:
{
"Version":"2012-10-17",
"Id":"http referer policy example",
"Statement":[
{
"Sid":"Allow get requests originating from www.example.com and example.com.",
"Effect":"Allow",
"Principal":"*",
"Action":"s3:GetObject",
"Resource":"arn:aws:s3:::examplebucket/*",
"Condition":{
"StringLike":{"aws:Referer":["http://www.example.com/*","http://example.com/*"]}
}
}
]
}
This policy allows access if the Referer string is like one of those shown in the list.
Your policy uses StringNotLike, so it only permits access when the Referer is not one that you listed, which is the opposite of what you want.
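The difference can be illustrated with a quick sketch, approximating IAM's * wildcard with Python's fnmatch (an illustration only, not IAM's exact matcher):

```python
from fnmatch import fnmatchcase

patterns = ["https://www.example.com/*", "http://www.example.com/*"]

def string_like(value: str, pats: list[str]) -> bool:
    """Approximate IAM StringLike: True if value matches any pattern."""
    return any(fnmatchcase(value, p) for p in pats)

referer = "https://www.example.com/blah/blah.html"

# Allow + StringLike: a matching Referer satisfies the condition -> allowed.
print(string_like(referer, patterns))      # True

# Allow + StringNotLike: the condition is the negation, so a matching
# Referer makes the condition false and the Allow never applies -> 403.
print(not string_like(referer, patterns))  # False
```

So with your original StringNotLike policy, requests from www.example.com were exactly the ones that failed the condition and received 403s.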