S3 Bucket Policy for Limiting Access to Cloudflare IP Addresses - amazon-web-services

I will be using Cloudflare as a proxy for my S3 website bucket to make sure users can't directly access the website with the bucket URL.
I have an S3 bucket set up for static website hosting with my custom domain: www.mydomain.com and have uploaded my index.html file.
I have a CNAME record with www.mydomain.com -> www.mydomain.com.s3-website-us-west-1.amazonaws.com and Cloudflare Proxy enabled.
Issue: I am trying to apply a bucket policy that denies access to my website bucket unless the request originates from a range of Cloudflare IP addresses. I am following the official AWS docs to do this, but every time I try to access my website, I get a 403 Forbidden (AccessDenied) error.
This is my bucket policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "CloudflareGetObject",
"Effect": "Deny",
"NotPrincipal": {
"AWS": [
"arn:aws:iam::ACCOUNT_ID:user/Administrator",
"arn:aws:iam::ACCOUNT_ID:root"
]
},
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::www.mydomain.com/*",
"arn:aws:s3:::www.mydomain.com"
],
"Condition": {
"NotIpAddress": {
"aws:SourceIp": [
"2c0f:f248::/32",
"2a06:98c0::/29",
"2803:f800::/32",
"2606:4700::/32",
"2405:b500::/32",
"2405:8100::/32",
"2400:cb00::/32",
"198.41.128.0/17",
"197.234.240.0/22",
"190.93.240.0/20",
"188.114.96.0/20",
"173.245.48.0/20",
"172.64.0.0/13",
"162.158.0.0/15",
"141.101.64.0/18",
"131.0.72.0/22",
"108.162.192.0/18",
"104.16.0.0/12",
"103.31.4.0/22",
"103.22.200.0/22",
"103.21.244.0/22"
]
}
}
}
]
}

By default, AWS denies all requests. Source
Your policy itself does not grant access to the Administrator [or any other user], it only omits him from the list of principals that are explicitly denied. To allow him access to the resource, another policy statement must explicitly allow access using "Effect": "Allow". Source
That means we would have to create two policy statements: one with Allow and one with Deny. In practice, it is better to have a single policy with an Allow restricted to the specific IPs.
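For comparison, here is a rough sketch of what that two-statement (Allow plus Deny) version could look like for the bucket in the question. Only two of the Cloudflare ranges listed above are repeated here for brevity; the full list would go in the same place:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.mydomain.com/*"
    },
    {
      "Sid": "DenyNonCloudflare",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.mydomain.com/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "173.245.48.0/20",
            "2400:cb00::/32"
          ]
        }
      }
    }
  ]
}
The single-statement Allow shown further below achieves the same effect and is harder to get wrong.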
It is better not to complicate simple things by using Deny together with NotPrincipal and NotIpAddress. Even AWS says:
Very few scenarios require the use of NotPrincipal, and we recommend that you explore other authorization options before you decide to use NotPrincipal. Source
Now, the question is how to whitelist the Cloudflare IPs.
Let's go with a simple approach. Below is the policy; replace the bucket name and the Cloudflare IPs (Cloudflare publishes its current ranges at https://www.cloudflare.com/ips-v4 and https://www.cloudflare.com/ips-v6). I have tested it and it works.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowCloudFlareIP",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:getObject",
"Resource": [
"arn:aws:s3:::my-poc-bucket",
"arn:aws:s3:::my-poc-bucket/*"
],
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"IP1/32",
"IP2/32"
]
}
}
}
]
}

Related

This site can’t be reached for aws s3 and cloudflare

I uploaded a static html site to s3 following this guideline: https://support.cloudflare.com/hc/en-us/articles/360037983412-Configuring-an-Amazon-Web-Services-static-site-to-use-Cloudflare
On S3 I created 2 buckets:
Root domain bucket: test1014.xyz (just a redirect to subdomain)
Subdomain bucket: www.test1014.xyz (contains the html file)
For the subdomain bucket, I blocked all public access and added a permission for cloudflare:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::www.test1014.xyz/*",
"Condition": {
"NotIpAddress": {
"aws:SourceIp": [
"173.245.48.0/20",
"103.21.244.0/22",
"103.22.200.0/22",
"103.31.4.0/22",
"141.101.64.0/18",
"108.162.192.0/18",
"190.93.240.0/20",
"188.114.96.0/20",
"197.234.240.0/22",
"198.41.128.0/17",
"162.158.0.0/15",
"104.16.0.0/13",
"104.24.0.0/14",
"172.64.0.0/13",
"131.0.72.0/22",
"2400:cb00::/32",
"2606:4700::/32",
"2803:f800::/32",
"2405:b500::/32",
"2405:8100::/32",
"2a06:98c0::/29",
"2c0f:f248::/32"
]
}
}
}
]
}
On cloudflare I added 2 domains:
CNAME | test1014.xyz | test1014.xyz.s3-website-ap-southeast-1.amazonaws.com
CNAME | www | www.test1014.xyz.s3-website-ap-southeast-1.amazonaws.com
Basically I just followed the guideline and still keep getting "This site can’t be reached".
I already updated my domain nameservers to Cloudflare.
Amazon S3 content is private by default.
The policy you show is a Deny policy. It is normally used in addition to an Allow policy to override the settings.
Therefore, you should either add an "Allow All" policy as well, or modify the Deny policy to be an Allow policy that grants access only if the request comes from those IP addresses.
By the way, Deny policies are very difficult to get right. For example, it would also mean that YOU cannot access an object in S3 (eg to download a file) unless you do it via CloudFlare. They are best avoided if possible.
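If you do keep a Deny statement, one way to reduce the risk of locking yourself out is to add a second condition that exempts requests signed by your own AWS account. A rough sketch of such a statement (111122223333 is a placeholder account ID, and only one Cloudflare range is shown for brevity):
{
  "Sid": "DenyNonCloudflareExceptOwnAccount",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::www.test1014.xyz/*",
  "Condition": {
    "NotIpAddress": {
      "aws:SourceIp": ["173.245.48.0/20"]
    },
    "StringNotEquals": {
      "aws:PrincipalAccount": "111122223333"
    }
  }
}
Both conditions must match for the Deny to apply, so requests made with credentials from that account bypass it, while anonymous requests from non-Cloudflare IPs are still denied.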
Update: Here's a bucket policy that only permits access (theoretically) to CloudFlare. It avoids using a Deny policy.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::www.test1014.xyz/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"173.245.48.0/20",
"103.21.244.0/22",
"103.22.200.0/22",
"103.31.4.0/22",
"141.101.64.0/18",
"108.162.192.0/18",
"190.93.240.0/20",
"188.114.96.0/20",
"197.234.240.0/22",
"198.41.128.0/17",
"162.158.0.0/15",
"104.16.0.0/13",
"104.24.0.0/14",
"172.64.0.0/13",
"131.0.72.0/22",
"2400:cb00::/32",
"2606:4700::/32",
"2803:f800::/32",
"2405:b500::/32",
"2405:8100::/32",
"2a06:98c0::/29",
"2c0f:f248::/32"
]
}
}
}
]
}

AWS API Gateway: User: anonymous is not authorized to perform: execute-api:Invoke on resource: arn:aws:execute-api:

I have created the API Gateway with Terraform and I am then attaching APIs to it using the Serverless Framework.
I have created a resource policy based on this AWS tutorial (https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-resource-policy-access/) because I want to be able to use custom API Gateway domains, but I do not want my APIs accessible by anyone over the internet unless their IP address is in my whitelist.
Here is my rendered policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "allow",
"Effect": "Allow",
"Principal": "*",
"Resource": "arn:aws:execute-api:eu-west-1:*:/*/*/*"
},
{
"Sid": "ipwhitelist",
"Effect": "Deny",
"Principal": "*",
"Action": "execute-api:Invoke",
"Resource": "arn:aws:execute-api:eu-west-1:*:/*/*/*",
"Condition": {
"NotIpAddress": {
"aws:SourceIp": [
<<excluded>>
]
}
}
}
]
}
I have redeployed my API and now I am blocked regardless of whether my IP address is in the allowed list, even though according to the tutorial this should work.
I have also tested the policy by removing the entire Deny section so that it only allows all traffic, and my calls are still blocked. When I delete the policy and redeploy my serverless project, it works again. With that said, is there a reason why the Allow policy would still block all IP addresses?
I am looking for ideas of where to look to find out why the white list is not working.
The answer is that I was missing a permission in my Allow statement. The explicit Allow is required to permit anything that is not caught by the Deny statement, but mine was missing any actions. I had to ensure the following was present in the Terraform that generated the Allow part of the policy:
actions = ["execute-api:Invoke"]
This is then translated into the following in the actual IAM policy:
"Action": "execute-api:Invoke"

Making S3 static web hosting content private

I want to be able to publish test reports to S3 and have it accessible to the URL sent at the end of the Drone build.
Is it possible to have the S3 static site not viewable by the general public, so it is only accessible by people who can already access resources in the VPC using a VPN?
I read that the content must have public read access, so I am checking whether that is avoidable.
Yes:
Set up the static website as normal,
Add a VPC endpoint for S3,
Use a bucket policy to deny all but traffic from your VPC (a sketch is shown after the link below).
Here is a good article describing it in more detail: https://blog.monsterxx03.com/2017/08/19/build-private-staticwebsite-on-s3/
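For step 3, a rough sketch of the Deny statement you could add on top of the usual public-read website policy (the bucket name and VPC endpoint ID are placeholders):
{
  "Sid": "DenyRequestsNotFromMyVpcEndpoint",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::www.example.com/*",
  "Condition": {
    "StringNotEquals": {
      "aws:sourceVpce": "vpce-1a2b3c4d"
    }
  }
}
aws:sourceVpce matches the VPC endpoint ID, so any request that does not arrive through that endpoint (including your own console access from outside the VPC) is refused.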
The other option is to write an S3 bucket policy like below, where x.x.x.x/x is the CIDR of the VPC:
{
"Id": "Policy1564215115240",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1564215036691",
"Action": "s3:*",
"Effect": "Deny",
"Resource": "arn:aws:s3:::<s3 bucket name>",
"Condition": {
"NotIpAddress": {
"aws:SourceIp": "x.x.x.x/x"
}
},
"Principal": "*"
}
]
}

AWS Bucket policy for a reverse proxy

I am trying to use an s3 bucket as a simple web host, but want to put it behind a reverse proxy capable of layering some required security controls.
I have IP addresses associated with the reverse proxy that I would like to restrict the S3 web access to. When I apply the IP-based restriction in the bucket policy, though, it seems to make administrative interaction with the bucket from within the account extremely difficult or blocks it entirely.
I would like to avoid disrupting access from within the account via the console/IAM users/federated roles, but enable HTTP access to the S3 site for just the IPs associated with the reverse proxy.
The AWS documentation on what is required to enable web access shows that I need this policy statement, so I have included it to start with.
{
"Id": "S3_Policy",
"Statement": [
{
"Sid": "AllowWebAccess",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::mybucket/*"
]
}
]
}
Then I want to restrict the web traffic to a particular set of IPs so I have added this statement.
{
"Id": "S3_Policy",
"Statement": [
{
"Sid": "AllowWebAccess",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::mybucket/*"
]
},
{
"Sid": "DenyNonProxyWebAccess",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::mybucket/*",
"arn:aws:s3:::mybucket"
],
"Condition": {
"NotIpAddress": {
"aws:SourceIp": [
"99.99.99.1/32",
"99.99.99.2/32",
"99.99.99.3/32",
"99.99.99.4/32",
"99.99.99.5/32"
]
}
}
}
]
}
This deny policy has the unintended consequence of blocking my ability to access it from inside my account with IAM users or assumed federated roles, so I have added an explicit allow for those resources. I would like to just place a blanket allow for "the account" if possible. That leaves me with this policy, and it just doesn't seem to work how I would like it to. I can't seem to either manage it as my users or access the web content from the proxy.
{
"Id": "S3_Policy",
"Statement": [
{
"Sid": "AllowWebAccess",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::mybucket/*"
]
},
{
"Sid": "DenyNonProxyWebAccess",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::mybucket/*",
"arn:aws:s3:::mybucket"
],
"Condition": {
"NotIpAddress": {
"aws:SourceIp": [
"99.99.99.1/32",
"99.99.99.2/32",
"99.99.99.3/32",
"99.99.99.4/32",
"99.99.99.5/32"
]
}
}
},
{
"Sid": "AllowAccountUsersAccess",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:IAM::999999999999:user/user#place",
"arn:aws:IAM::999999999999:user/user2#place",
"999999999999"
]
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::mybucket/*",
"arn:aws:s3:::my bucket"
]
}
]
}
Is there a way to have an S3 bucket be a static web host restricted to only select IP ranges for web access, without disrupting the ability to manage the bucket itself from the account?
There are multiple ways that access can be granted to resources in an Amazon S3 bucket:
IAM permissions
Bucket Policy
Pre-signed URL
If an access request satisfies any of the above, it will be granted access (although an explicit Deny might override it).
IAM Permissions are used to assign permissions to a User or Group. For example, if you want to have access to the bucket, you can create a policy and assign it to yourself as an IAM user. If you wish all of your administrators to access the bucket, then put them in an IAM Group and assign the policy to the group. All access made this way needs to be done with AWS credentials (no anonymous access).
A Bucket Policy is typically used to grant anonymous access (no credentials required), but can include restrictions such as IP address ranges, SSL-only, and time-of-day. This is the way you would grant access to your reverse proxy, since it is not sending credentials as part of its requests.
A Pre-signed URL can be generated by applications to grant temporary access to a specific object. The URL includes a calculated signature that authenticates the access. This is typically used when generating links on HTML pages (eg to link to private images).
Your situation
So, firstly, you should grant access to yourself and your administrators, using a policy similar to:
{
"Id": "S3_Policy",
"Statement": [
{
"Sid": "AllowWebAccess",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::mybucket/*"
]
}
]
}
Note that there is no Principal because it applies to whatever users/groups have been assigned this policy.
Next, you wish to grant access to your reverse proxies. This can be done via a Bucket Policy:
{
"Id": "S3_Policy",
"Statement": [
{
"Sid": "DenyNonProxyWebAccess",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::mybucket/*",
"arn:aws:s3:::mybucket"
],
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"99.99.99.1/32",
"99.99.99.2/32",
"99.99.99.3/32",
"99.99.99.4/32",
"99.99.99.5/32"
]
}
}
}
]
}
This policy is permitting (Allow) access to the specified bucket, but only if the request is coming from one of the stated IP addresses.
Granting access via Allow is always preferable to denying access because Deny always overrides Allow. Therefore, use Deny sparingly since once something is denied, it cannot then be Allowed (eg if a Deny blocks administrative access, you cannot then Allow the access). Deny is mostly used where you definitely want to block something (eg a known bad actor).
VPC Endpoint
A final option worth considering is use of a VPC Endpoint for S3. This allows direct communication between a VPC and S3, without having to go via an Internet Gateway. This is excellent for situations where resources in a Private Subnet wish to communicate with S3 without using a NAT Gateway.
Additional policies can be added to a VPC Endpoint to define which resources can access the VPC Endpoint (eg your range of Reverse Proxies). Bucket Policies can specifically refer to VPC Endpoints, permitting requests from that access method. For example, you could configure a bucket policy that permits access only from a specific VPC -- this is useful for separating Dev/Test/Prod access to buckets.
However, it probably isn't suitable for your given use-case because it would force all S3 traffic to go via the VPC Endpoint, even outside of your reverse proxies. This might not be desired behavior for your architecture.
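For completeness, a restrictive policy attached to the VPC endpoint itself might look something like this sketch, which only lets reads of the website bucket pass through the endpoint (mybucket is the bucket name used elsewhere in this question; this is an illustration, not a tested policy):
{
  "Statement": [
    {
      "Sid": "OnlyAllowWebsiteReadsThroughEndpoint",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}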
Bottom line: IAM policies grant access to users. Bucket Policies grant anonymous access.
You certainly do not "need" the first policy you have listed, and in fact you should rarely ever use that policy, because it grants anonymous read access to every object in the bucket.

Restricting S3 bucket access to a VPC

I am trying to apply the following policy in order to restrict my_bucket's access to a particular VPC.
When I try to apply this as a bucket policy, I get the error "Policy has an invalid condition key - ec2:Vpc".
How do I correct this?
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Principal": {
"AWS": "*"
},
"Action": "*",
"Resource": "arn:aws:s3:::my_bucket/*",
"Condition":{
"StringNotEquals": {
"ec2:Vpc": "arn:aws:ec2:region:account:vpc/vpc-ccccccc"
}
}
}
]
}
I just got this to work. I had to do two things. 1) Create the bucket policy on the S3 bucket, 2) create a "VPC Endpoint"
My S3 bucket policy looks like this (of course put in your bucket name and VPC identifier):
{
"Version": "2012-10-17",
"Id": "Policy1234567890123",
"Statement": [
{
"Sid": "Stmt1234567890123",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my_bucket/*",
"Condition": {
"StringEquals": {
"aws:sourceVpc": "vpc-12345678"
}
}
}
]
}
The S3 bucket also has some permissions outside the bucket policy to allow access from the AWS Console. Doing the above alone did not give access. To get access, I also had to go to AWS Console -> VPC -> Endpoints, and then create an endpoint. I attached the newly created endpoint to the only route table the account has at the moment (which has all subnets attached to it) and I used the default policy of
{
"Statement": [
{
"Action": "*",
"Effect": "Allow",
"Resource": "*",
"Principal": "*"
}
]
}
Once I created the endpoint, I was able to read from the S3 bucket from any EC2 instance in my VPC simply using wget with the right URL. I am still able to access the bucket from the AWS Console. But if I try to access the URL from outside the VPC, I get 403 forbidden. Thus, access to the S3 bucket is restricted to a single VPC, just like what you are looking for.
This is apparently a new feature. See this AWS blog entry for more information.
Two things that bit me and which might be helpful to add to Eddie's nice answer are:
First, you won't be able to view your bucket (or even modify its policy once you set the policy above) in the S3 AWS console unless you also give your AWS users permissions to manipulate the bucket. To do that, find your AWS account number (displayed in upper-right here), and add this statement to the bucket policy statements list:
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::YOUR_AWS_ACCOUNT_NUMBER:root"
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::my_bucket",
"arn:aws:s3:::my_bucket/*"
]
},
Second, if you have more than one VPC, say vpc-XXXXXX and vpc-YYYYYY, to give access to, the statement in Eddie's answer needs to be tweaked to something like the following (note the "Allow", the "StringEquals", and the list of sourceVpc values):
...
"Effect": "Allow",
...
"Condition": {
"StringEquals": {
"aws:sourceVpc": [
"vpc-XXXXXXXX",
"vpc-YYYYYYYY"
]
}
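Spelled out in full, that tweaked statement might look like the following sketch (the VPC IDs are placeholders):
{
  "Sid": "AccessFromEitherVpc",
  "Effect": "Allow",
  "Principal": "*",
  "Action": "s3:*",
  "Resource": "arn:aws:s3:::my_bucket/*",
  "Condition": {
    "StringEquals": {
      "aws:sourceVpc": [
        "vpc-XXXXXXXX",
        "vpc-YYYYYYYY"
      ]
    }
  }
}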
No, you can't do that.
Here's another person asking the same: https://forums.aws.amazon.com/thread.jspa?threadID=102387
Some have gotten overly creative with the problem trying to solve it with networking: https://pete.wtf/2012/05/01/how-to-setup-aws-s3-access-from-specific-ips/
I prefer a more simple route, S3 allows you to sign urls to solve this very problem, but inside of your VPC you may wish to not have to think about signing - or you just couldn't sign, for example you might be using wget, etc. So I wrote this little micro-service for that very reason: https://github.com/rmmeans/S3-Private-Downloader
Hope that helps!
UPDATED:
AWS now has a feature for VPC endpoints: https://aws.amazon.com/blogs/aws/new-vpc-endpoint-for-amazon-s3/, you should use that and not what I previously suggested.