I have a static website hosted on S3, and I have set all files to be public.
Also, I have an EC2 instance with nginx that acts as a reverse proxy and can access the static website, so S3 plays the role of the origin.
What I would like to do now is set all files on S3 to be private, so that the website can only be accessed by traffic coming from the nginx (EC2).
So far I have tried the following. I created a new IAM role and attached it to the EC2 instance, with
Policies Granting Permission: AmazonS3ReadOnlyAccess
And have rebooted the EC2 instance.
I then created a policy in my S3 bucket under console > Permissions > Bucket Policy:
{
    "Version": "xxxxx",
    "Id": "xxxxxxx",
    "Statement": [
        {
            "Sid": "xxxxxxx",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::XXXXXXXXXX:role/MyROLE"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::XXX-bucket/*"
        }
    ]
}
As principal I have set the ARN I got when I created the role for the EC2 instance.
"Principal": {
"AWS": "arn:aws:iam::XXXXXXXXXX:role/MyROLE"
},
However, this does not work; any help is appreciated.
If the Amazon EC2 instance with nginx is merely making generic web requests to Amazon S3, then the question becomes how to identify requests coming from nginx as 'permitted', while rejecting all other requests.
One method is to use a VPC Endpoint for S3, which allows direct communication from a VPC to Amazon S3 (rather than going out an Internet Gateway).
A bucket policy can then restrict access to the bucket such that it can only be accessed via that endpoint.
Here is a bucket policy from Example Bucket Policies for VPC Endpoints for Amazon S3:
The following is an example of an S3 bucket policy that allows access to a specific bucket, examplebucket, only from the VPC endpoint with the ID vpce-1a2b3c4d. The policy uses the aws:sourceVpce condition key to restrict access to the specified VPC endpoint.
{
    "Version": "2012-10-17",
    "Id": "Policy",
    "Statement": [
        {
            "Sid": "Access-to-specific-VPCE-only",
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::examplebucket",
                "arn:aws:s3:::examplebucket/*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:sourceVpce": "vpce-1a2b3c4d"
                }
            },
            "Principal": "*"
        }
    ]
}
So, the complete design would be:
Object ACL: Private only (remove any current public permissions)
Bucket Policy: As above
IAM Role: Not needed
Route Table configured for VPC Endpoint
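To set up the VPC Endpoint piece of the design, a gateway endpoint for S3 can be created and associated with the subnet's route table roughly like this (the region, VPC ID and route table ID are placeholders to substitute):
aws ec2 create-vpc-endpoint --vpc-id vpc-xxxxxxxx --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-xxxxxxxx
The route table association is what sends traffic from the nginx instance to S3 through the endpoint, so that the aws:sourceVpce condition in the bucket policy matches.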
Permissions in Amazon S3 can be granted in several ways:
Directly on an object (known as an Access Control List or ACL)
Via a Bucket Policy (which applies to the whole bucket, or a directory)
To an IAM User/Group/Role
If any of the above grants access, then the object can be accessed (unless an explicit Deny blocks it).
Your scenario requires the following configuration:
The ACL on each object should not permit public access
There should be no Bucket Policy
You should assign permissions in the Policy attached to the IAM Role
Whenever you have permissions relating to a User/Group/Role, it is better to assign the permission in IAM rather than on the Bucket. Use Bucket Policies for general access to all users.
The policy on the Role would be:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowBucketAccess",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket/*"
            ]
        }
    ]
}
This policy is directly applied to the IAM Role, so there is no need for a principal field.
Please note that this policy only allows GetObject -- it does not permit listing of buckets, uploading objects, etc.
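If you prefer the CLI, attaching this as an inline policy to the role would look roughly like this (the policy name and file name are placeholders; the role name is the one from the question):
aws iam put-role-policy --role-name MyROLE --policy-name AllowBucketAccess --policy-document file://bucket-read-policy.json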
You also mention that "I have set all files to be public". If you did this by making each individual object publicly readable, then anyone will still be able to access the objects. There are two ways to prevent this -- either remove the permissions from each object, or create a Bucket Policy with a Deny statement that stops access, but still permits the Role to get access.
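For reference, a Deny-style bucket policy along those lines might look roughly like the following (a sketch only, assuming the aws:PrincipalArn global condition key matches the role ARN for the instance's role session; the ARNs are the placeholders from the question):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllButRole",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket/*",
            "Condition": {
                "StringNotEquals": {
                    "aws:PrincipalArn": "arn:aws:iam::XXXXXXXXXX:role/MyROLE"
                }
            }
        }
    ]
}
Be aware that a blanket Deny like this also blocks your own IAM users (and the console) from reading the objects unless their ARNs are added to the condition.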
That's starting to get a bit tricky and hard to maintain, so I'd recommend removing the permissions from each object. This can be done via the management console by editing the permissions on each object, or by using the AWS Command-Line Interface (CLI) with a command like:
aws s3 cp s3://my-bucket s3://my-bucket --recursive --acl private
This copies the files in-place but changes the access settings.
(I'm not 100% sure whether to use --acl private or --acl bucket-owner-full-control, so play around a bit.)
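Either way, you can check what ACL an object ended up with via something like (bucket and key are placeholders):
aws s3api get-object-acl --bucket my-bucket --key index.html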
Related
I have a few buckets in S3 where I want to limit access.
In the process of implementing this I am now confused and appreciate your help in making me understand this.
This is my scenario --
Created a VPC, a public subnet, and an EC2 instance.
Created a bucket using an admin user -- aws1234-la
Created a bucket policy and attached it to the bucket, saying allow access only if the request comes from my VPC:
"Statement": [
{
"Sid": "Access-to-specific-VPC-only",
"Principal": "*",
"Action": "s3:*",
"Effect": "Deny",
"Resource": ["arn:aws:s3:::aws1234-la",
"arn:aws:s3:::aws1234-la/*"],
"Condition": {
"StringNotEquals": {
"aws:sourceVpc": "vpc-111bbb22"
}
}
} ] }
Next, from the CLI, I ran aws s3 ls
It displays the buckets.
Where am I making a mistake?
Ideally, step 4 should return an error since I am not going through my VPC?
Any help will be highly appreciated.
Thanks
From Specifying Conditions in a Policy - Amazon Simple Storage Service:
The new condition keys aws:sourceVpce and aws:sourceVpc are used in bucket policies for VPC endpoints.
Therefore, you need to be accessing the S3 bucket via a VPC Endpoint to be able to restrict access to a VPC. This is because, without the VPC Endpoint, the request received by Amazon S3 simply appears to be coming "from the Internet", so S3 is not able to identify the source VPC. In contrast, a request coming via a VPC Endpoint includes an identifier of the source VPC.
Making it work
Assumption: You already have an IAM Policy on your user(s) that allow access to the bucket. You are wanting to know how to further restrict the bucket so that it is only accessible from a specific VPC. If this is not the case, then you should be using an Allow policy to grant access to the bucket, since access is denied by default.
To reproduce your situation, I did the following:
Created a new VPC with a public subnet
Added a VPC Endpoint to the VPC
Launched an Amazon EC2 instance in the public subnet, assigning an IAM Role that already has permission to access all of my Amazon S3 buckets
Created an Amazon S3 bucket (my-vpc-only-bucket)
Added a Bucket Policy to the bucket (from Example Bucket Policies for VPC Endpoints for Amazon S3):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Access-to-specific-VPC-only",
            "Principal": "*",
            "Action": "s3:*",
            "Effect": "Deny",
            "Resource": [
                "arn:aws:s3:::my-vpc-only-bucket",
                "arn:aws:s3:::my-vpc-only-bucket/*"
            ],
            "Condition": {
                "StringNotEquals": {
                    "aws:sourceVpc": "vpc-111bbb22"
                }
            }
        }
    ]
}
Please note that this policy assumes that the user(s) already have access to the bucket via an IAM Policy that grants Allow access. This policy is adding a Deny that will override the access they already have to the bucket.
Logged in to the Amazon EC2 instance in the new VPC and then:
Ran aws s3 ls s3://my-vpc-only-bucket
It worked!
From my own computer on the Internet:
Ran aws s3 ls s3://my-vpc-only-bucket
Received an AccessDenied error (which is what we want!)
By the way, the Deny policy will also prohibit your use of the Amazon S3 management console to manage the bucket because requests are not coming from the VPC. This is a side-effect of using Deny and s3:* on the bucket. You can always remove the bucket policy by using your root credentials (login via email address), then go to the Bucket Policy in the S3 console and click Delete. (You'll see some errors on the screen getting to the Bucket Policy, but it will work.)
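(The CLI equivalent, assuming your credentials are allowed through, would be roughly: aws s3api delete-bucket-policy --bucket my-vpc-only-bucket)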
Alternate method via Allow
If, on the other hand, the user(s) do not already have access to all Amazon S3 buckets, then by default they will not have access to the new bucket. Thus, you will need to grant Allow access to the bucket, but only from the VPC via the VPC Endpoint.
Setup is the same as above, but the Bucket Policy would be:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Access-to-specific-VPC-only",
            "Principal": "*",
            "Action": "s3:*",
            "Effect": "Allow",             <-- This changed
            "Resource": [
                "arn:aws:s3:::my-vpc-only-bucket",
                "arn:aws:s3:::my-vpc-only-bucket/*"
            ],
            "Condition": {
                "StringEquals": {          <--- This changed
                    "aws:sourceVpc": "vpc-111bbb22"
                }
            }
        }
    ]
}
I then tested it with an IAM Role assigned to the EC2 instance that does not have any permissions to access Amazon S3
Ran aws s3 ls s3://my-vpc-only-bucket
It worked!
Ran from my own computer, using an IAM User that does not have any permissions to access Amazon S3
Received an AccessDenied error (which is what we want!)
Bottom line: You need to add a VPC Endpoint to the VPC.
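You can check whether the VPC already has an S3 endpoint with something like (using the VPC ID from your policy):
aws ec2 describe-vpc-endpoints --filters Name=vpc-id,Values=vpc-111bbb22
and create one, associated with the relevant route table, if it is missing.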
Since you specified the resource at the bucket level, the policy denies all operations on the bucket and the objects inside it. However, listing your buckets (aws s3 ls with no bucket name) acts on the resource arn:aws:s3:::*, which is not covered by the policy, so the bucket name will still be displayed even when you are not inside the VPC.
AFAIK, there is no way to hide just that one bucket from the bucket listing without blocking the listing of all buckets.
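To make the distinction concrete (using the bucket name from the question): from outside the VPC, aws s3 ls still succeeds and shows the bucket name, because listing buckets is the s3:ListAllMyBuckets action against arn:aws:s3:::*, whereas aws s3 ls s3://aws1234-la fails with AccessDenied, because listing objects is s3:ListBucket against the bucket's own ARN, which the policy denies.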
I have an Amazon AWS account and I'm using Amazon S3.
I'd like to give access to specific people to an Amazon S3 bucket.
Here's what I'd like to do :
Amazon AWS: Access limited to my account
Amazon S3: Access limited to my account
Bucket "website-photos": Access authorized to 3 people that will be able to read and write in the bucket through AWS management console.
Files in the bucket "website-photos": Public can read them.
How can I set up this config?
Just create an IAM policy and attach it to the users you want to give access:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListObjectsInBucket",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::bucket-name"]
        },
        {
            "Sid": "AllObjectActions",
            "Effect": "Allow",
            "Action": "s3:*Object",
            "Resource": ["arn:aws:s3:::bucket-name/*"]
        }
    ]
}
See: Amazon S3: Allows Read and Write Access to Objects in an S3 Bucket - AWS Identity and Access Management
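To attach it from the CLI as an inline policy on each of the three users, something like this should work (the user name, policy name and file name are placeholders):
aws iam put-user-policy --user-name user1 --policy-name website-photos-access --policy-document file://website-photos-policy.json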
The general approach is:
If you want something to be "public" (accessible by anyone), then use a Bucket Policy (see the example below)
If you want to only assign permissions to a specific IAM User, then attach a policy to the IAM User
If you want to only assign permissions to a group of IAM Users, then create an IAM Group, attach a policy and assign the group to the desired IAM Users
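For the "public" case in the first bullet, a standard public-read bucket policy would be a sketch like this (using the bucket name from the question):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::website-photos/*"
        }
    ]
}
This covers the "public can read the files" requirement, while the IAM policy above covers the three people who need read/write access.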
I want to proxy an Amazon S3 bucket through our reverse proxy (Nginx).
For higher security, I want to forbid read access to the bucket for anything except the HTTPS host on which I run the proxy.
Is there a way to configure Amazon S3 for this task?
Please provide the configuration.
I considered adding a password to the S3 bucket name, but that is not a solution, because we also need signed uploads to the bucket, so the bucket name will be publicly available.
If your reverse proxy has a Public IP address, then you would add this policy to the Amazon S3 bucket:
{
    "Version": "2012-10-17",
    "Id": "S3PolicyId1",
    "Statement": [
        {
            "Sid": "IPAllow",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::examplebucket/*",
            "Condition": {
                "IpAddress": {"aws:SourceIp": "54.240.143.22/32"}
            }
        }
    ]
}
This grants permissions to GetObject if the request is coming from the specific IP address. Amazon S3 is private by default, so this is granting access only in that particular situation. You will also want to grant access to IAM Users/Groups (via IAM, not a Bucket Policy) so that bucket content can be updated.
See: Bucket Policy Examples - Amazon Simple Storage Service
I have an S3 bucket with confidential data.
I added a bucket policy to allow only a limited set of roles within the account. This stops other users from accessing the S3 bucket from the console.
One of the allowed roles, say "foo-role" is created for EC2 instances to read the S3 bucket.
Now, even the users who are denied by the bucket policy can create a VM, assign "foo-role" to that VM, ssh into the VM, and look at the bucket content.
Is there a way that I can prevent other users from assigning "foo-role" to their EC2 instances?
Add this policy to your IAM users. This policy will prevent a user from associating or replacing a role on an EC2 instance (and from passing any role at all).
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "ec2:AssociateIamInstanceProfile",
                "ec2:ReplaceIamInstanceProfileAssociation",
                "iam:PassRole"
            ],
            "Resource": "*"
        }
    ]
}
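If your users still need to pass other roles (for example, to launch instances with their own roles), a narrower variant is to deny iam:PassRole only for the sensitive role (a sketch; the account ID is a placeholder):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::XXXXXXXXXX:role/foo-role"
        }
    ]
}
Without permission to pass foo-role, a user cannot launch an instance with it or associate it with an existing instance.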
Scenario: I have an EC2 instance and a S3 bucket under the same account, and my web app on that EC2 wants access to resources in that bucket.
Following the official docs, I created an IAM role with s3access and assigned it to the EC2 instance. To my understanding, my web app should now be able to access the bucket. However, after some trials, it seems I have to add an AllowPublicRead bucket policy like this:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::mybucket/*"
        }
    ]
}
Otherwise I get access forbidden.
But why should I use this AllowPublicRead bucket policy, since I already granted the s3access IAM role to the EC2 instance?
The IAM role's s3:GetObject permission only allows access to objects for requests made from your EC2 instance (i.e. server-side code using the role's credentials). What you want is to access these objects from your web app, which means from the user's browser: the images/objects are fetched and rendered by the browser, not by the instance. If it is a public-facing application and the browser loads the S3 URLs directly, then you need the AllowPublicRead permission as well.