S3 is not detecting the bucket policies

I have created an Amazon S3 bucket and I want to provide access to all my corporate users, meaning anybody on our corporate network can access the S3 bucket and download objects.
I have written a Bucket Policy for IP restriction:
{
  "Version": "2012-10-17",
  "Id": "S3PolicyIPRestrict",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "10.0.8.10"
          ]
        }
      }
    }
  ]
}
Only when I add an ACL permission granting everyone "read" am I able to access the object. But the bucket policy is not being applied: anybody can access the objects from outside the network as well. What am I doing wrong?

Your bucket policy is granting permission for a source IP address of 10.0.8.10. This is a private IP address.
However, Amazon S3 is a public service sitting on the Internet. When you access Amazon S3, requests travel across the Internet (or at least to the AWS edge of the Internet), so your request will appear to come from a public IP address. Therefore, the policy is not permitting access, since your address does not match the one in the policy.
You should change the policy to use the public IP address that your corporate traffic uses when it accesses the Internet. You can discover this address via http://checkip.amazonaws.com.
Also, if you are granting access via a Bucket Policy, you should not also grant access via ACLs.
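For example, if your corporate traffic egressed to the Internet from 203.0.113.25 (a placeholder; substitute the address reported by checkip), the condition would become:
"Condition": {
  "IpAddress": {
    "aws:SourceIp": "203.0.113.25/32"
  }
}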

Related

Can I make a public S3 Bucket accessible to whitelisted IPs or DNS names?

We are facing an issue with S3 private files taking excessive time to load on a Drupal website.
Can we make an S3 bucket public but restrict the public files to whitelisted IP addresses or DNS names?
A link to the documentation would be a great help.
You can restrict an Amazon S3 bucket so that it is accessible only to a range of IP addresses. This can be done by adding a Bucket Policy.
See: Bucket policy examples - Amazon Simple Storage Service
For example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "54.240.143.0/24"
        }
      }
    }
  ]
}
This permits access to an object (GetObject), but only if the request comes from the CIDR range given in aws:SourceIp. This can also be a list of CIDR ranges.
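For example (the second range, 203.0.113.0/24, is a placeholder for illustration):
"aws:SourceIp": [
  "54.240.143.0/24",
  "203.0.113.0/24"
]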
It is not possible to "restrict by DNS names", but a bucket can be restricted by the Referer header (the domain of the website the user was viewing when they clicked a link in their browser). However, this is not a secure method and can easily be faked.
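For completeness, a sketch of the Referer-based variant uses the aws:Referer condition key (the domain and bucket name are placeholders, and remember that this header can be spoofed):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowFromReferer",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": "https://www.example.com/*"
        }
      }
    }
  ]
}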

Cross-account S3 static website access over VPN only

I'm trying to allow access to an S3 bucket static website over VPN from the network AWS account; the bucket is in the prod account.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": "account-prod",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket/*",
      "Condition": {
        "StringEquals": {
          "aws:SourceVpc": "vpc-1"
        }
      }
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": "account-network",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket/*",
      "Condition": {
        "StringEquals": {
          "aws:SourceVpc": "vpc-2"
        }
      }
    }
  ]
}
I used a VPC endpoint interface in the account where the VPN is set up. I tried using the SourceVpc and SourceVpce condition keys, but neither worked.
I'm using a Transit Gateway and AWS Client VPN, and I have allowed the S3 endpoint IPs on the VPN endpoint, plus security groups and authorization rules. (The TGW is used with the S3 prefix list, and there is a route entry for the S3 prefix list via the TGW.)
The bucket uses object owner + private ACL + bucket policy, and I tried adding a grantee with the canonical account ID.
Any ideas what I am doing wrong here?
This currently works in the prod account because we have another VPN solution running there; we are trying to migrate everything to the network account and move to AWS Client VPN.
Any ideas what I am doing wrong here?
Yes. S3 static websites can only be accessed over the Internet. You cannot access them using private IP addresses from a VPC or VPN. If you use a VPN, you have to set up a proxy that accesses the website over the Internet and then passes the content back to your host.
Make sure that your VPC subnet route table has a route to the S3 endpoint, and that the policy on the endpoint grants access.
https://tomgregory.com/when-to-use-an-aws-s3-vpc-endpoint/
Next, set up your bucket policy to grant access from the source of your VPC Endpoint, not the VPC itself (note the vpce in the policy document; a sketch follows the link below).
https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies-vpc-endpoint.html
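A minimal sketch of such a policy (vpce-1a2b3c4d is a placeholder endpoint ID, and the bucket name matches the question):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPCE-only",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket/*",
      "Condition": {
        "StringEquals": {
          "aws:sourceVpce": "vpce-1a2b3c4d"
        }
      }
    }
  ]
}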

Restricting Access to S3 buckets from a VPC only

I have a few buckets in S3 where I want to limit access.
In the process of implementing this I have become confused, and I would appreciate your help in understanding it.
This is my scenario --
Created a VPC, a public subnet, and an EC2 instance.
Created a bucket (aws1234-la) using an admin user.
Created a bucket policy and attached it to the bucket, allowing access only if the request comes from my VPC.
"Statement": [
{
"Sid": "Access-to-specific-VPC-only",
"Principal": "*",
"Action": "s3:*",
"Effect": "Deny",
"Resource": ["arn:aws:s3:::aws1234-la",
"arn:aws:s3:::aws1234-la/*"],
"Condition": {
"StringNotEquals": {
"aws:sourceVpc": "vpc-111bbb22"
}
}
} ] }
Next, from the CLI: aws s3 ls
It displays the buckets.
Where am I making a mistake?
Ideally, step 4 should return an error, as I am not going through my VPC?
Any help will be highly appreciated.
Thanks
From Specifying Conditions in a Policy - Amazon Simple Storage Service:
The new condition keys aws:sourceVpce and aws:sourceVpc are used in bucket policies for VPC endpoints.
Therefore, you need to access the S3 bucket via a VPC Endpoint to be able to restrict access to a VPC. Without the VPC Endpoint, the request received by Amazon S3 simply appears to come "from the Internet", so S3 cannot identify the source VPC. In contrast, a request coming via a VPC Endpoint includes an identifier of the source VPC.
Making it work
Assumption: You already have an IAM Policy on your user(s) that allows access to the bucket, and you want to know how to further restrict the bucket so that it is only accessible from a specific VPC. If this is not the case, then you should use an Allow policy to grant access to the bucket, since access is denied by default.
To reproduce your situation, I did the following:
Created a new VPC with a public subnet
Added a VPC Endpoint to the VPC
Launched an Amazon EC2 instance in the public subnet, assigning an IAM Role that already has permission to access all of my Amazon S3 buckets
Created an Amazon S3 bucket (my-vpc-only-bucket)
Added a Bucket Policy to the bucket (from Example Bucket Policies for VPC Endpoints for Amazon S3):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPC-only",
      "Principal": "*",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::my-vpc-only-bucket",
        "arn:aws:s3:::my-vpc-only-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpc": "vpc-111bbb22"
        }
      }
    }
  ]
}
Please note that this policy assumes that the user(s) already have access to the bucket via an IAM Policy that grants Allow access. This policy is adding a Deny that will override the access they already have to the bucket.
Logged in to the Amazon EC2 instance in the new VPC and then:
Run aws s3 ls s3://my-vpc-only-bucket
It worked!
From my own computer on the Internet:
Run aws s3 ls s3://my-vpc-only-bucket
Received an AccessDenied error (which is what we want!)
By the way, the Deny policy will also prohibit you from using the Amazon S3 management console to manage the bucket, because console requests do not come from the VPC. This is a side-effect of using Deny with s3:* on the bucket. You can always remove the bucket policy by using your root credentials (login via email address), then going to the Bucket Policy in the S3 console and clicking Delete. (You'll see some errors on the screen while getting to the Bucket Policy, but it will work.)
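The same cleanup can also be scripted with the AWS CLI (a sketch; it must be run with credentials, and from a location, that the Deny statement does not block, such as the EC2 instance inside the VPC):
# Remove the bucket policy that contains the Deny statement
aws s3api delete-bucket-policy --bucket my-vpc-only-bucket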
Alternate method via Allow
If, on the other hand, the user(s) do not already have access to all Amazon S3 buckets, then by default they will not have access to the new bucket. Thus, you will need to grant Allow access to the bucket, but only from the VPC via the VPC Endpoint.
Setup is the same as above, but the Bucket Policy would be:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPC-only",
      "Principal": "*",
      "Action": "s3:*",
      "Effect": "Allow",          <-- This changed
      "Resource": [
        "arn:aws:s3:::my-vpc-only-bucket",
        "arn:aws:s3:::my-vpc-only-bucket/*"
      ],
      "Condition": {
        "StringEquals": {         <-- This changed
          "aws:sourceVpc": "vpc-111bbb22"
        }
      }
    }
  ]
}
I then tested it with an IAM Role assigned to the EC2 instance that does not have any permissions to access Amazon S3:
Ran aws s3 ls s3://my-vpc-only-bucket
It worked!
From my own computer, using an IAM User that does not have any permissions to access Amazon S3:
Received an AccessDenied error (which is what we want!)
Bottom line: You need to add a VPC Endpoint to the VPC.
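For example, a Gateway endpoint for S3 can be created with the AWS CLI (a sketch; the region, VPC ID, and route table ID are placeholders to adapt):
# Create a Gateway VPC Endpoint for S3 and attach it to the subnet's route table
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-111bbb22 \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0123456789abcdef0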
Since you specified the resource at the bucket level, it will deny all operations inside the bucket. However, listing your buckets acts on the resource arn:aws:s3:::*, which is not denied, so the bucket will still be displayed even if you are not inside the VPC.
AFAIK, there is no way to hide just this one bucket from the listing without blocking the listing of all buckets.

Restrict Amazon S3 access to single HTTPS host

I want to proxy an Amazon S3 bucket through our reverse proxy (Nginx).
For higher security, I want to forbid read access to the bucket for anything except the HTTPS host on which I run the proxy.
Is there a way to configure Amazon S3 for this task?
Please provide the configuration.
I considered adding a password to the S3 bucket name, but it is not a solution because we also need signed uploads to the bucket, so the bucket name will be publicly available.
If your reverse proxy has a Public IP address, then you would add this policy to the Amazon S3 bucket:
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "54.240.143.22/32"
        }
      }
    }
  ]
}
This grants permission to GetObject, but only if the request comes from the specific IP address. Amazon S3 is private by default, so this grants access only in that particular situation. You will also want to grant access to IAM Users/Groups (via IAM, not a Bucket Policy) so that the bucket content can be updated.
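For those updaters, a sketch of an IAM policy that could be attached to the Users/Groups (examplebucket matches the policy above; the Sid and action list are illustrative):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowContentUpdates",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::examplebucket",
        "arn:aws:s3:::examplebucket/*"
      ]
    }
  ]
}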
See: Bucket Policy Examples - Amazon Simple Storage Service

Only allow EC2 instance to access static website on S3

I have a static website hosted on S3, I have set all files to be public.
Also, I have an EC2 instance with nginx that acts as a reverse proxy and can access the static website, so S3 plays the role of the origin.
What I would like to do now is set all files on S3 to be private, so that the website can only be accessed by traffic coming from nginx (EC2).
So far I have tried the following. I created a new IAM role and attached it to the EC2 instance with
Policies Granting Permission: AmazonS3ReadOnlyAccess
and rebooted the EC2 instance.
I then created a policy in my S3 bucket console > Permissions > Bucket Policy:
{
  "Version": "xxxxx",
  "Id": "xxxxxxx",
  "Statement": [
    {
      "Sid": "xxxxxxx",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::XXXXXXXXXX:role/MyROLE"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::XXX-bucket/*"
    }
  ]
}
As the Principal, I set the ARN I got when I created the role for the EC2 instance:
"Principal": {
  "AWS": "arn:aws:iam::XXXXXXXXXX:role/MyROLE"
},
However, this does not work; any help is appreciated.
If the Amazon EC2 instance with nginx is merely making generic web requests to Amazon S3, then the question becomes how to identify requests coming from nginx as 'permitted', while rejecting all other requests.
One method is to use a VPC Endpoint for S3, which allows direct communication from a VPC to Amazon S3 (rather than going out an Internet Gateway).
A bucket policy can then restrict access to the bucket such that it can only be accessed via that endpoint.
Here is a bucket policy from Example Bucket Policies for VPC Endpoints for Amazon S3:
The following is an example of an S3 bucket policy that allows access to a specific bucket, examplebucket, only from the VPC endpoint with the ID vpce-1a2b3c4d. The policy uses the aws:sourceVpce condition key to restrict access to the specified VPC endpoint.
{
  "Version": "2012-10-17",
  "Id": "Policy",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPCE-only",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::examplebucket",
        "arn:aws:s3:::examplebucket/*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:sourceVpce": "vpce-1a2b3c4d"
        }
      },
      "Principal": "*"
    }
  ]
}
So, the complete design would be:
Object ACL: Private only (remove any current public permissions)
Bucket Policy: As above
IAM Role: Not needed
Route Table configured for VPC Endpoint
Permissions in Amazon S3 can be granted in several ways:
Directly on an object (known as an Access Control List or ACL)
Via a Bucket Policy (which applies to the whole bucket, or a directory)
To an IAM User/Group/Role
If any of the above grants access, then the object can be accessed (the permissions are cumulative).
Your scenario requires the following configuration:
The ACL on each object should not permit public access
There should be no Bucket Policy
You should assign permissions in the Policy attached to the IAM Role
Whenever you have permissions relating to a User/Group/Role, it is better to assign the permission in IAM rather than on the Bucket. Use Bucket Policies for general access to all users.
The policy on the Role would be:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBucketAccess",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
This policy is applied directly to the IAM Role, so there is no need for a Principal element.
Please note that this policy only allows GetObject -- it does not permit listing of buckets, uploading objects, etc.
You also mention that "I have set all files to be public". If you did this by making each individual object publicly readable, then anyone will still be able to access the objects. There are two ways to prevent this: either remove the permissions from each object, or create a Bucket Policy with a Deny statement that blocks access but still permits the Role to have access (a sketch of one such policy follows).
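One possible shape for that Deny statement uses the aws:PrincipalArn condition key, denying GetObject to every principal other than the Role (a sketch built on the question's placeholder ARNs; treat it as untested):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllExceptRole",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::XXX-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "aws:PrincipalArn": "arn:aws:iam::XXXXXXXXXX:role/MyROLE"
        }
      }
    }
  ]
}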
That's starting to get a bit tricky and hard to maintain, so I'd recommend removing the permissions from each object. This can be done via the management console by editing the permissions on each object, or by using the AWS Command-Line Interface (CLI) with a command like:
aws s3 cp s3://my-bucket s3://my-bucket --recursive --acl private
This copies the files in-place but changes the access settings.
(I'm not 100% sure whether to use --acl private or --acl bucket-owner-full-control, so play around a bit.)