AWS Bucket policy for a reverse proxy - amazon-web-services

I am trying to use an s3 bucket as a simple web host, but want to put it behind a reverse proxy capable of layering some required security controls.
I have IP addresses associated with the reverse proxy that I would like to restrict the S3 web access to. When I apply the IP-based restriction in the bucket policy, though, it seems to make administrative interaction in the account extremely difficult or blocks it entirely.
I would like to not disrupt access from within the account via console/IAM user/federated role, but enable HTTP access to the S3 site for just the IPs associated with the reverse proxy.
The AWS documentation on what is required to enable web access shows that I need this policy statement, so I have included it to start with.
{
"Id": "S3_Policy",
"Statement": [
{
"Sid": "AllowWebAccess",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::mybucket/*"
]
}
]
}
Then I want to restrict the web traffic to a particular set of IPs so I have added this statement.
{
"Id": "S3_Policy",
"Statement": [
{
"Sid": "AllowWebAccess",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::mybucket/*"
]
},
{
"Sid": "DenyNonProxyWebAccess",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::mybucket/*",
"arn:aws:s3:::mybucket"
],
"Condition": {
"NotIpAddress": {
"aws:SourceIp": [
"99.99.99.1/32",
"99.99.99.2/32",
"99.99.99.3/32",
"99.99.99.4/32",
"99.99.99.5/32"
]
}
}
}
]
}
This deny policy has the unintended consequence of blocking my ability to access the bucket from inside my account with IAM users or assumed federated roles, so I have added an explicit allow for those principals. I would like to just place a blanket allow for "the account" if possible. That leaves me with this policy, and it just doesn't work how I would like it to: I can't manage the bucket as my users, and I can't access the web content from the proxy.
{
"Id": "S3_Policy",
"Statement": [
{
"Sid": "AllowWebAccess",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::mybucket/*"
]
},
{
"Sid": "DenyNonProxyWebAccess",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::mybucket/*",
"arn:aws:s3:::mybucket"
],
"Condition": {
"NotIpAddress": {
"aws:SourceIp": [
"99.99.99.1/32",
"99.99.99.2/32",
"99.99.99.3/32",
"99.99.99.4/32",
"99.99.99.5/32"
]
}
}
},
{
"Sid": "AllowAccountUsersAccess",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:IAM::999999999999:user/user#place",
"arn:aws:IAM::999999999999:user/user2#place",
"999999999999"
]
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::mybucket/*",
"arn:aws:s3:::my bucket"
]
}
]
}
Is there a way to have an S3 bucket be a static web host restricted to only select IP ranges for web access, without disrupting the ability to manage the bucket itself from the account?

There are multiple ways that access can be granted to resources in an Amazon S3 bucket:
IAM permissions
Bucket Policy
Pre-signed URL
If an access request matches any of the above, access will be granted (although an explicit Deny can override it).
IAM Permissions are used to assign permissions to a User or Group. For example, if you want to have access to the bucket, you can create a policy and assign it to your IAM user. If you wish all of your administrators to access the bucket, then put them in an IAM Group and assign the policy to the group. All access made this way needs to be done with AWS credentials (no anonymous access).
A Bucket Policy is typically used to grant anonymous access (no credentials required), but can include restrictions such as IP address ranges, SSL-only, and time-of-day. This is the way you would grant access to your reverse proxy, since it is not sending credentials as part of its requests.
A Pre-signed URL can be generated by applications to grant temporary access to a specific object. The URL includes a calculated signature that authenticates the access. This is typically used when generating links on HTML pages (eg to link to private images).
Your situation
So, firstly, you should grant access to yourself and your administrators, using a policy similar to:
{
"Id": "S3_Policy",
"Statement": [
{
"Sid": "AllowWebAccess",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::mybucket/*"
]
}
]
}
Note that there is no Principal because it applies to whatever users/groups have been assigned this policy.
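If you manage these permissions programmatically, a rough boto3 sketch of attaching such a policy as an inline group policy might look like the following. The group and policy names are placeholders, and administrators would typically need more actions than s3:GetObject for full management.
import json
import boto3

iam = boto3.client("iam")

admin_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowWebAccess",
            "Effect": "Allow",
            # Administrators would typically need more than s3:GetObject here.
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::mybucket/*"],
        }
    ],
}

# "S3Admins" is a hypothetical IAM group containing your administrators.
iam.put_group_policy(
    GroupName="S3Admins",
    PolicyName="mybucket-access",
    PolicyDocument=json.dumps(admin_policy),
)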
Next, you wish to grant access to your reverse proxies. This can be done via a Bucket Policy:
{
"Id": "S3_Policy",
"Statement": [
{
"Sid": "DenyNonProxyWebAccess",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::mybucket/*",
"arn:aws:s3:::mybucket"
],
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"99.99.99.1/32",
"99.99.99.2/32",
"99.99.99.3/32",
"99.99.99.4/32",
"99.99.99.5/32"
]
}
}
}
]
}
This policy is permitting (Allow) access to the specified bucket, but only if the request is coming from one of the stated IP addresses.
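To apply that bucket policy outside the console, a minimal boto3 sketch could look like this. The bucket name and IPs are the placeholders from above, and I have narrowed the action to s3:GetObject, which is all the proxy needs for serving web content.
import json
import boto3

s3 = boto3.client("s3")

proxy_policy = {
    "Id": "S3_Policy",
    "Statement": [
        {
            "Sid": "AllowProxyWebAccess",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::mybucket/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": ["99.99.99.1/32", "99.99.99.2/32"]
                }
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="mybucket", Policy=json.dumps(proxy_policy))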
Granting access via Allow is generally preferable to denying access, because Deny always overrides Allow. Use Deny sparingly, since once something is denied it cannot then be allowed (eg if a Deny blocks administrative access, you cannot then Allow that access). Deny is mostly used where you definitely want to block something (eg a known bad actor).
VPC Endpoint
A final option worth considering is use of a VPC Endpoint for S3. This allows direct communication between a VPC and S3, without having to go via an Internet Gateway. This is excellent for situations where resources in a Private Subnet wish to communicate with S3 without using a NAT Gateway.
Additional policies can be added to a VPC Endpoint to define which resources can access the VPC Endpoint (eg your range of Reverse Proxies). Bucket Policies can specifically refer to VPC Endpoints, permitting requests from that access method. For example, you could configure a bucket policy that permits access only from a specific VPC -- this is useful for separating Dev/Test/Prod access to buckets.
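As a sketch, such a bucket policy statement might replace the IP-address condition with a VPC Endpoint ID; the vpce- value below is a placeholder.
# Hypothetical statement: allow reads only for requests that arrive
# through a specific S3 VPC Endpoint rather than from a list of IPs.
vpce_statement = {
    "Sid": "AllowAccessViaVpcEndpoint",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::mybucket/*",
    "Condition": {"StringEquals": {"aws:sourceVpce": "vpce-1a2b3c4d"}},
}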
However, it probably isn't suitable for your given use-case because it would force all S3 traffic to go via the VPC Endpoint, even outside of your reverse proxies. This might not be desired behavior for your architecture.
Bottom line: IAM policies grant access to users. Bucket Policies grant anonymous access.
You certainly do not "need" the first policy you have listed, and in fact you should rarely use that policy on its own, because it grants anonymous read access to every object in the bucket.

Related

Disallow a user to list directory contents of S3 bucket

I am attempting to allow multiple users access to a single S3 bucket. However they should only have access to a particular directory in that bucket.
Imagine the following
my-bucket
- client-1
- important-doc.txt
- client-2
- somefile.jpg
- my-own-file.js
With that in mind (allowing say, client-1 access to only that directory) I have the following policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::my-bucket"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": "arn:aws:s3:::my-bucket/client-1/*"
}
]
}
This works as you would expect: client-1 can connect to the bucket, go to their particular directory and download. However, it appears they have the ability to list the directory of the entire bucket, I assume due to the s3:ListBucket permission being allowed. But if I restrict that to only the folder, my Transmit app notifies me that permission is denied.
Can anyone advise me how to correctly write this permission?
The first choice is how to track and authenticate the users.
Option 1: IAM Users
Normally, IAM User credentials are given to employees and applications. Policies can be applied directly against IAM Users to grant them access to resources.
In the case of granting IAM Users access to specific folders within an Amazon S3 bucket, the easiest method would be to put these users into an IAM Group and then author a policy that uses IAM Policy Variables that can automatically insert the name of the IAM User into the policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": ["s3:ListBucket"],
"Effect": "Allow",
"Resource": ["arn:aws:s3:::mybucket"],
"Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}}
},
{
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": ["arn:aws:s3:::mybucket/${aws:username}/*"]
}
]
}
There is a limit of 5000 IAM Users in an AWS Account.
Option 2: Temporary credentials
Rather than giving IAM User 'permanent' credentials, the AWS Security Token Service (STS) can issue temporary credentials with an assigned permission policy.
The flow would typically be:
Users authenticate themselves to your app (eg using your own database of users, or federated access from Active Directory, or even an OpenID service)
Your back-end app then generates temporary credentials with the appropriate permissions (such as the policy you have shown in #jellcsc's answer)
The app provides these credentials to the users (or their app)
The users use these credentials to access the permitted AWS services
The credentials expire after a period of time and the users must reconnect to your app to obtain a new set of temporary credentials.
This is more secure because the app is responsible for ensuring authentication and granting permissions. There is less risk of accidentally granting permissions to a set of users.
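As a rough sketch of the "generate temporary credentials" step, assuming the back-end runs as an IAM user that is allowed to call STS (the scoped policy and names below are illustrative only):
import json
import boto3

sts = boto3.client("sts")

# Scope the temporary credentials down to a single client's prefix.
scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::my-bucket/client-1/*",
        }
    ],
}

creds = sts.get_federation_token(
    Name="client-1",                 # must be 32 characters or fewer
    Policy=json.dumps(scoped_policy),
    DurationSeconds=3600,            # one hour
)["Credentials"]

# Hand AccessKeyId, SecretAccessKey and SessionToken back to the client app.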
Option 3: Pre-signed URLs
When a web application wishes to allow access to private objects in Amazon S3, it can generate an Amazon S3 pre-signed URL, which is a time-limited URL that provides temporary access to a private object. It works like this:
Users authenticate to the web app
When the back-end is rendering an HTML page and wants to include a reference to a private object (eg <img src='...'>), it generates a pre-signed URL that grants temporary access to a private object
When the user's browser sends the URL to S3, the signature is verified and the expiry time is checked. If it is valid, then S3 returns the object.
This is common in applications like photo-sharing systems where users might want to share photos with other users, so that the security is more complex than simply looking at the directory where the image is stored.
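A minimal boto3 sketch of generating such a URL (the bucket and key names are just examples):
import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "my-bucket", "Key": "client-1/important-doc.txt"},
    ExpiresIn=900,  # the link stops working after 15 minutes
)

# Embed the URL in the rendered page, e.g. <img src="..."> or a download link.
print(url)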
Bottom line
If you are using IAM Users, then use Option 1 and take advantage of IAM Policy Variables to write one policy that will grant appropriate access to each user. However, consider carefully whether giving IAM User access to external people is acceptable within your security posture.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::my-bucket",
"Condition": {
"StringEquals": {
"s3:prefix": [
"client-1"
],
"s3:delimiter": [
"/"
]
}
}
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::my-bucket",
"Condition": {
"StringLike": {
"s3:prefix": [
"client-1/*"
]
}
}
},
{
"Sid": "VisualEditor2",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": "arn:aws:s3:::my-bucket/client-1/*"
}
]
}
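For what it's worth, a client restricted by the policy above would list and fetch objects roughly like this (a sketch; the denied call is shown for contrast):
import boto3

s3 = boto3.client("s3")

# Allowed: listing is scoped to the client-1 prefix (VisualEditor0/VisualEditor1).
resp = s3.list_objects_v2(Bucket="my-bucket", Prefix="client-1/", Delimiter="/")
for obj in resp.get("Contents", []):
    print(obj["Key"])

# Allowed: objects under client-1/ can be downloaded (VisualEditor2).
s3.download_file("my-bucket", "client-1/important-doc.txt", "/tmp/important-doc.txt")

# Denied: listing the whole bucket with no Prefix raises an AccessDenied error.
# s3.list_objects_v2(Bucket="my-bucket")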

S3 Bucket Policy for Limiting Access to Cloudflare IP Addresses

I will be using Cloudflare as a proxy for my S3 website bucket to make sure users can't directly access the website with the bucket URL.
I have an S3 bucket set up for static website hosting with my custom domain: www.mydomain.com and have uploaded my index.html file.
I have a CNAME record with www.mydomain.com -> www.mydomain.com.s3-website-us-west-1.amazonaws.com and Cloudflare Proxy enabled.
Issue: I am trying to apply a bucket policy to Deny access to my website bucket unless the request originates from a range of Cloudflare IP addresses. I am following the official AWS docs to do this, but every time I try to access my website, I get a Forbidden 403 AccessDenied error.
This is my bucket policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "CloudflareGetObject",
"Effect": "Deny",
"NotPrincipal": {
"AWS": [
"arn:aws:iam::ACCOUNT_ID:user/Administrator",
"arn:aws:iam::ACCOUNT_ID:root"
]
},
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::www.mydomain.com/*",
"arn:aws:s3:::www.mydomain.com"
],
"Condition": {
"NotIpAddress": {
"aws:SourceIp": [
"2c0f:f248::/32",
"2a06:98c0::/29",
"2803:f800::/32",
"2606:4700::/32",
"2405:b500::/32",
"2405:8100::/32",
"2400:cb00::/32",
"198.41.128.0/17",
"197.234.240.0/22",
"190.93.240.0/20",
"188.114.96.0/20",
"173.245.48.0/20",
"172.64.0.0/13",
"162.158.0.0/15",
"141.101.64.0/18",
"131.0.72.0/22",
"108.162.192.0/18",
"104.16.0.0/12",
"103.31.4.0/22",
"103.22.200.0/22",
"103.21.244.0/22"
]
}
}
}
]
}
By default, AWS denies all requests. (Source)
Your policy itself does not grant access to the Administrator [or any other user], it only omits him from the list of principals that are explicitly denied. To allow him access to the resource, another policy statement must explicitly allow access using "Effect": "Allow". Source
With the NotPrincipal approach you would have to create two policy statements: one with Allow and one with Deny. It is simpler to have a single policy that allows access only from specific IPs.
It is better not to complicate things by combining Deny with NotPrincipal and NotIpAddress. Even AWS says:
Very few scenarios require the use of NotPrincipal, and we recommend that you explore other authorization options before you decide to use NotPrincipal. Source
Now, the question becomes: how do you whitelist the Cloudflare IPs?
Let's go with a simple approach. Below is the policy; replace the bucket name and the Cloudflare IPs with your own. I have tested it and it works.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowCloudFlareIP",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:getObject",
"Resource": [
"arn:aws:s3:::my-poc-bucket",
"arn:aws:s3:::my-poc-bucket/*"
],
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"IP1/32",
"IP2/32"
]
}
}
}
]
}
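If you prefer not to hand-maintain the IP list, here is a hedged sketch that pulls Cloudflare's published ranges and applies the same policy. The /ips-v4 and /ips-v6 URLs are Cloudflare's documented plain-text lists; verify them before relying on this.
import json
import urllib.request

import boto3

def fetch(url):
    # Cloudflare publishes its ranges as plain text, one CIDR per line.
    return urllib.request.urlopen(url).read().decode().split()

cidrs = fetch("https://www.cloudflare.com/ips-v4") + fetch("https://www.cloudflare.com/ips-v6")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFlareIP",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-poc-bucket/*",
            "Condition": {"IpAddress": {"aws:SourceIp": cidrs}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket="my-poc-bucket", Policy=json.dumps(policy))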

I'm trying to create service control policy to block users from creating S3 buckets with public WRITE access, but it's not working

I'm trying to create service control policy to block users from creating S3 buckets with public access, but it's not working.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Statement1",
"Effect": "Deny",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl",
"s3:PutBucketAcl",
"s3:PutBucketPolicy"
],
"Resource": [
"*"
],
"Condition": {
"ForAnyValue:StringLikeIfExists": {
"s3:x-amz-acl": [
"public-read-write"
]
}
}
},
{
"Sid": "Statement2",
"Effect": "Deny",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl",
"s3:PutBucketAcl",
"s3:PutBucketPolicy"
],
"Resource": [
"*"
],
"Condition": {
"ForAnyValue:StringLikeIfExists": {
"s3:x-amz-grant-write": [
"http://acs.amazonaws.com/groups/global/AllUsers"
]
}
}
},
{
"Sid": "Statement3",
"Effect": "Deny",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl",
"s3:PutBucketAcl",
"s3:PutBucketPolicy"
],
"Resource": [
"*"
],
"Condition": {
"ForAnyValue:StringLikeIfExists": {
"s3:x-amz-grant-write-acp": [
"http://acs.amazonaws.com/groups/global/AllUsers"
]
}
}
}
]
}
I have logged in with the root account. This policy is attached to the account, but I am still able to create S3 buckets with public access.
When I log into a member account, I am still able to add a public write ACL.
This is how I am adding the public access under permission access control policy:
That "Public Access" section you show in the screenshot does not permit users to read the content of an Amazon S3 bucket. The options allow listing of the bucket and writing objects, but not reading objects.
If you wish to block the ability to change the setting shown in your screenshot, then Deny use of s3:PutBucketAcl. Your policy is doing this, but with the additional condition that the ACL is being set to public-read-write, which is not one of the settings for the Bucket ACL. Therefore, you should Deny that action without a condition.
However the true way to protect an Amazon S3 bucket from public access is to use Block Public Access. This overrides any other policies that are granting access, such as Bucket Policies or ACLs at the object-level. If Block Public Access is activated, then the bucket will stay private.
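For reference, Block Public Access can be switched on per bucket with a single API call; a minimal boto3 sketch (the bucket name is a placeholder):
import boto3

s3 = boto3.client("s3")

# Enable all four Block Public Access settings for the bucket.
s3.put_public_access_block(
    Bucket="my-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)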
From Service Control Policies - Prevent Users from Modifying S3 Block Public Access Settings, you can create a Service Control Policy (SCP) to stop an account from changing this setting:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:PutBucketPublicAccessBlock"
],
"Resource": "*",
"Effect": "Deny"
}
]
}
According to Using Amazon S3 block public access - Amazon Simple Storage Service:
The DELETE PublicAccessBlock operations require the same permissions as the PUT operations. There are no separate permissions for the DELETE operations.
Therefore, the above policy would also prevent removal of the Block Public Access settings.
If you prevent the account from changing Block Public Access, then you do not actually need the policy you have shown in your question. This is because Block Public Access is overriding such settings, so users cannot make buckets or objects public.
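If you want to script the SCP itself, here is a rough sketch using the Organizations API, run from the management account; the policy name and target OU ID are placeholders.
import json
import boto3

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["s3:PutBucketPublicAccessBlock"],
            "Resource": "*",
        }
    ],
}

resp = org.create_policy(
    Name="DenyS3BlockPublicAccessChanges",
    Description="Prevent member accounts from altering S3 Block Public Access",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach to an OU or account; "ou-xxxx-xxxxxxxx" is a placeholder target ID.
org.attach_policy(
    PolicyId=resp["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",
)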

How to access AWS resources without using IAM Role or IAM User Access

Let's assume we have an EC2 instance running two applications. Only one application should be able to access the S3 bucket, and the other application shouldn't be able to access it.
1) I don't want to use an IAM user access key ID and secret access key for this, because they are difficult to manage and that approach is not recommended. (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html)
2) But I can't use an IAM role, because it is associated with the EC2 instance and would allow access to every application inside that instance.
You can apply a bucket policy that restricts access based on an HTTP request header. It allows the s3:GetObject permission with a condition, using the aws:Referer key, that the GET request must originate from specific webpages. The following policy specifies the StringLike condition with the aws:Referer condition key.
{
"Version": "2012-10-17",
"Id": "http referer policy example",
"Statement": [
{
"Sid": "Allow get requests referred by www.example.com and example.com.",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::examplebucket/*",
"Condition": {
"StringLike": {"aws:Referer": ["http://www.example.com/*","http://example.com/*"]}
}
},
{
"Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::examplebucket/*",
"Condition": {
"StringNotLike": {"aws:Referer": ["http://www.example.com/*","http://example.com/*"]}
}
}
]
}
AWS Reference Link
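To illustrate how the two applications end up with different access under this policy, here is a hedged sketch; the bucket URL and key are example values matching the policy, and the requests are anonymous, so only the Referer header differs.
import requests

# Anonymous GET against the bucket's REST endpoint (example bucket/key).
url = "https://examplebucket.s3.amazonaws.com/data/report.json"

# Application 1 sends the permitted Referer header and is allowed.
allowed = requests.get(url, headers={"Referer": "http://www.example.com/app1"})
print(allowed.status_code)   # expected 200 when the Referer matches

# Application 2 sends no Referer header and is caught by the explicit Deny.
blocked = requests.get(url)
print(blocked.status_code)   # expected 403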

Restricting S3 bucket access to a VPC

I am trying to apply the following policy in order to restrict my_bucket's access to a particular VPC.
When I try to apply this as a bucket policy, I get the error "Policy has an invalid condition key - ec2:Vpc".
How do I correct this?
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Principal": {
"AWS": "*"
},
"Action": "*",
"Resource": "arn:aws:s3:::my_bucket/*",
"Condition":{
"StringNotEquals": {
"ec2:Vpc": "arn:aws:ec2:region:account:vpc/vpc-ccccccc"
}
}
}
]
}
I just got this to work. I had to do two things. 1) Create the bucket policy on the S3 bucket, 2) create a "VPC Endpoint"
My S3 bucket policy looks like this (of course put in your bucket name and VPC identifier):
{
"Version": "2012-10-17",
"Id": "Policy1234567890123",
"Statement": [
{
"Sid": "Stmt1234567890123",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my_bucket/*",
"Condition": {
"StringEquals": {
"aws:sourceVpc": "vpc-12345678"
}
}
}
]
}
The S3 bucket also has some permissions outside the bucket policy to allow access from the AWS Console. Doing the above did not give access. To get access, I also had to go to AWS Console -> VPC -> Endpoints, and then create an endpoint. I attached the newly created endpoint to the only route table the account has at the moment (which has all subnets attached to it) and I used the default policy of
{
"Statement": [
{
"Action": "*",
"Effect": "Allow",
"Resource": "*",
"Principal": "*"
}
]
}
Once I created the endpoint, I was able to read from the S3 bucket from any EC2 instance in my VPC simply using wget with the right URL. I am still able to access the bucket from the AWS Console. But if I try to access the URL from outside the VPC, I get 403 forbidden. Thus, access to the S3 bucket is restricted to a single VPC, just like what you are looking for.
This is apparently a new feature. See this AWS blog entry for more information.
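For completeness, creating such a Gateway endpoint can also be scripted; a minimal boto3 sketch (the region, VPC ID and route table ID are placeholders):
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoints for S3 are associated with one or more route tables.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-12345678",
    ServiceName="com.amazonaws.us-west-1.s3",
    RouteTableIds=["rtb-12345678"],
)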
Two things that bit me and which might be helpful to add to Eddie's nice answer are:
First, you won't be able to view your bucket (or even modify its policy once you set the policy above) in the S3 AWS console unless you also give your AWS users permissions to manipulate the bucket. To do that, find your AWS account number (displayed in upper-right here), and add this statement to the bucket policy statements list:
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::YOUR_AWS_ACCOUNT_NUMBER:root"
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::my_bucket",
"arn:aws:s3:::my_bucket/*"
]
},
Second, if you have more than one VPC, say vpc-XXXXXX and vpc-YYYYYY, to give access to, the statement in Eddie's answer needs to be tweaked to something like the following (note the "Allow", the "StringEquals", and the list of sourceVpc values):
...
"Effect": "Allow",
...
"Condition": {
"StringEquals": {
"aws:sourceVpc": [
"vpc-XXXXXXXX",
"vpc-YYYYYYYY"
]
}
No, you can't do that.
Here's another person asking the same: https://forums.aws.amazon.com/thread.jspa?threadID=102387
Some have gotten overly creative with the problem trying to solve it with networking: https://pete.wtf/2012/05/01/how-to-setup-aws-s3-access-from-specific-ips/
I prefer a more simple route, S3 allows you to sign urls to solve this very problem, but inside of your VPC you may wish to not have to think about signing - or you just couldn't sign, for example you might be using wget, etc. So I wrote this little micro-service for that very reason: https://github.com/rmmeans/S3-Private-Downloader
Hope that helps!
UPDATED:
AWS now has a feature for VPC endpoints: https://aws.amazon.com/blogs/aws/new-vpc-endpoint-for-amazon-s3/, you should use that and not what I previously suggested.