Hardening an S3 bucket

We store the RPMs required for our deployment in an S3 bucket, and we host a yum repository on that bucket to make updating RPMs easier.
Currently, the bucket is publicly accessible over the S3 endpoint (s3.amazonaws.com) and open to the world, because we currently can't pull down yum packages from a private S3 repository.
We need to harden the security of the repo bucket to enable authentication-based access to S3 over the s3.amazonaws.com endpoint. Any suggestions? Thanks!
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Allow Access From QA, dev",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": [
          "arn:aws:iam::XXXXXXX:root"
        ]
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::test-repo",
        "arn:aws:s3:::test-repo/*"
      ],
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "X.X.X.X/32"
          ]
        },
        "StringNotEquals": {
          "aws:sourceVpce": "vpce-xxxxxxx"
        }
      }
    }
  ]
}

Rather than trying to add an authentication layer, take a look at adding a VPC endpoint to the VPC(s) that need access to your S3 bucket.
Once you have this in place (and added to the route tables), you can update the bucket policy for your S3 bucket with a condition that denies all traffic not coming from the source VPC endpoint (aws:sourceVpce).
The advantage to this approach is that you will not need to make any changes to the servers themselves.
More documentation available here.
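The deny-unless-VPC-endpoint policy described above can also be generated programmatically rather than hand-edited. A minimal sketch in Python, assuming a hypothetical bucket name and endpoint ID (substitute your own):

```python
import json

def vpce_only_policy(bucket: str, vpce_id: str) -> str:
    """Build a bucket policy that denies all S3 access unless the
    request arrives through the given VPC endpoint (aws:sourceVpce)."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyAllExceptVPCEndpoint",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
                "Condition": {
                    "StringNotEquals": {"aws:sourceVpce": vpce_id}
                },
            }
        ],
    }
    return json.dumps(policy, indent=2)

# Placeholder bucket and endpoint ID for illustration only.
print(vpce_only_policy("test-repo", "vpce-0123456789abcdef0"))
```

The output can be passed to `aws s3api put-bucket-policy --policy file://...` once you have filled in your real endpoint ID.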

Related

S3 Bucket Policy for Limiting Access to Cloudflare IP Addresses

I will be using Cloudflare as a proxy for my S3 website bucket to make sure users can't directly access the website with the bucket URL.
I have an S3 bucket set up for static website hosting with my custom domain: www.mydomain.com and have uploaded my index.html file.
I have a CNAME record with www.mydomain.com -> www.mydomain.com.s3-website-us-west-1.amazonaws.com and Cloudflare Proxy enabled.
Issue: I am trying to apply a bucket policy that denies access to my website bucket unless the request originates from a range of Cloudflare IP addresses. I am following the official AWS docs to do this, but every time I try to access my website, I get a 403 Forbidden (AccessDenied) error.
This is my bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CloudflareGetObject",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": [
          "arn:aws:iam::ACCOUNT_ID:user/Administrator",
          "arn:aws:iam::ACCOUNT_ID:root"
        ]
      },
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::www.mydomain.com/*",
        "arn:aws:s3:::www.mydomain.com"
      ],
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "2c0f:f248::/32",
            "2a06:98c0::/29",
            "2803:f800::/32",
            "2606:4700::/32",
            "2405:b500::/32",
            "2405:8100::/32",
            "2400:cb00::/32",
            "198.41.128.0/17",
            "197.234.240.0/22",
            "190.93.240.0/20",
            "188.114.96.0/20",
            "173.245.48.0/20",
            "172.64.0.0/13",
            "162.158.0.0/15",
            "141.101.64.0/18",
            "131.0.72.0/22",
            "108.162.192.0/18",
            "104.16.0.0/12",
            "103.31.4.0/22",
            "103.22.200.0/22",
            "103.21.244.0/22"
          ]
        }
      }
    }
  ]
}
By default, AWS denies all requests. Source
Your policy itself does not grant access to the Administrator [or any other user], it only omits him from the list of principals that are explicitly denied. To allow him access to the resource, another policy statement must explicitly allow access using "Effect": "Allow". Source
That would mean creating two policy statements: a first with Allow and a second with Deny. It is better to have a single policy that allows access only from the specific IPs.
It is better not to complicate simple things by using Deny with NotPrincipal and NotIpAddress. Even AWS says:
Very few scenarios require the use of NotPrincipal, and we recommend that you explore other authorization options before you decide to use NotPrincipal. Source
Now the question is how to whitelist the Cloudflare IPs.
Let's go with a simple approach. Below is the policy; replace the bucket name and the Cloudflare IPs with your own. I have tested it and it works.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFlareIP",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::my-poc-bucket",
        "arn:aws:s3:::my-poc-bucket/*"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "IP1/32",
            "IP2/32"
          ]
        }
      }
    }
  ]
}
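Cloudflare's published ranges change from time to time, so before pasting a list like the one above into a policy it is worth checking that every entry parses as a valid IPv4 or IPv6 CIDR. A small stdlib sketch (the sample list here is illustrative, not Cloudflare's current ranges):

```python
import ipaddress

def validate_cidrs(cidrs):
    """Partition a list of CIDR strings into (valid, invalid),
    accepting both IPv4 and IPv6 networks."""
    valid, invalid = [], []
    for cidr in cidrs:
        try:
            # strict=True rejects entries with host bits set,
            # e.g. "1.2.3.4/24", which AWS would also reject.
            ipaddress.ip_network(cidr, strict=True)
            valid.append(cidr)
        except ValueError:
            invalid.append(cidr)
    return valid, invalid

sample = ["198.41.128.0/17", "2400:cb00::/32", "not-a-cidr", "1.2.3.4/33"]
print(validate_cidrs(sample))
```

Running the check before `put-bucket-policy` catches typos that would otherwise surface as an opaque MalformedPolicy error.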

Unable to copy files to and from s3 from vpc server: "Unable to locate credentials"

I'd like to copy files to/from an S3 bucket from servers in my VPC without having to add credentials to each server.
I followed the instructions at https://aws.amazon.com/premiumsupport/knowledge-center/s3-private-connection-no-authentication/ to set up a policy for a newly created VPC endpoint. As far as I can tell, I did everything correctly and added a good bucket policy to my bucket. I double-checked the routing table settings. All appears good.
But perhaps I don't understand what this is supposed to do. When I type in:
aws s3 cp s3://My_Bucket_Name/some.pdf .
I just get:
fatal error: Unable to locate credentials
from my server in the vpc.
Here is the anonymized bucket policy I have:
{
  "Version": "2012-10-17",
  "Id": "Policy232323232323",
  "Statement": [
    {
      "Sid": "Stmt1607462615603",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::1234567789:root"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::MyBucket/*",
      "Condition": {
        "StringEquals": {
          "aws:SourceVpce": "vpce-123456c789"
        }
      }
    }
  ]
}
Try appending: --no-sign-request
From Command line options - AWS Command Line Interface:
A Boolean switch that disables signing the HTTP requests to the AWS service endpoint. This prevents credentials from being loaded.
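One thing worth noting: --no-sign-request makes the CLI send the request anonymously, i.e., with no Authorization header at all, so it only succeeds when the bucket policy grants access to Principal "*" (for example, conditioned on aws:SourceVpce). A policy that names only the account root, like the one in the question, will still deny anonymous requests. A small stdlib illustration of what "unsigned" means (the URL is a placeholder; the request object is built but never sent):

```python
from urllib.request import Request

# An unsigned S3 request is just a plain HTTP request carrying no
# Authorization header, so nothing identifies a caller to AWS.
req = Request("https://my-bucket.s3.amazonaws.com/some.pdf")
print(req.has_header("Authorization"))  # prints False
```

A signed request, by contrast, carries an `Authorization: AWS4-HMAC-SHA256 ...` header computed from the caller's credentials, which is exactly what the CLI could not find here.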

S3 bucket policy is not allowing Athena to perform query execution

I am performing Amazon Athena queries on an S3 bucket; let's call it athena-bucket. Today I got a requirement to restrict this bucket to VPC endpoints, so I have tried this S3 bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VPCe and SourceIP",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": [
          "arn:aws:iam::**********:user/user_admin",
          "arn:aws:iam::**********:root"
        ]
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::athena-bucket",
        "arn:aws:s3:::athena-bucket/abc/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpce": [
            "vpce-XXXXxxxxe",
            "vpce-xxxxxxxxxx",
            "vpce-XXXXXXXXXXXXXX"
          ]
        },
        "NotIpAddress": {
          "aws:SourceIp": [
            "publicip/32",
            "publicip2/32"
          ]
        }
      }
    }
  ]
}
Please note that Athena has full permission to access the above bucket. I want to use the S3 bucket policy to restrict access from only certain IP addresses and VPC Endpoint.
However, I am getting an access denied error, although the request is routed through the VPC endpoints mentioned in the policy.
Amazon Athena is an Internet-based service. It accesses Amazon S3 directly and does not connect via an Amazon VPC.
If you restrict the bucket to only be accessible via a VPC Endpoint, Amazon Athena will not be able to access it.
There is actually a solution that gets you what you are asking for. The following policy condition allows actions from all of your VPC endpoints and from Athena:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VPCe and SourceIP",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": [
          "arn:aws:iam::**********:user/user_admin",
          "arn:aws:iam::**********:root"
        ]
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::athena-bucket",
        "arn:aws:s3:::athena-bucket/abc/*"
      ],
      "Condition": {
        "ForAllValues:StringNotEquals": {
          "aws:sourceVpce": [
            "vpce-XXXXxxxxe",
            "vpce-xxxxxxxxxx",
            "vpce-XXXXXXXXXXXXXX"
          ],
          "aws:CalledVia": [ "athena.amazonaws.com" ]
        }
      }
    }
  ]
}
The "ForAllValues" portion of the condition is what turns this AND condition into an OR.
Not sure how your IP restrictions would play with this, since you cannot tell which IPs Athena would be coming from.
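If you maintain several buckets or endpoint lists, the deny statement above can be generated rather than hand-edited. A sketch with placeholder account and endpoint IDs (the function name and parameters are ours, not an AWS API):

```python
import json

def athena_vpce_deny_statement(bucket, prefix, vpce_ids, exempt_principals):
    """Build the Deny statement shown above: exempt the listed
    principals, and use ForAllValues:StringNotEquals so requests
    arriving through the given VPC endpoints, or made on the caller's
    behalf by Athena (aws:CalledVia), are not denied."""
    return {
        "Sid": "VPCe and SourceIP",
        "Effect": "Deny",
        "NotPrincipal": {"AWS": exempt_principals},
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{bucket}",
            f"arn:aws:s3:::{bucket}/{prefix}/*",
        ],
        "Condition": {
            "ForAllValues:StringNotEquals": {
                "aws:sourceVpce": vpce_ids,
                "aws:CalledVia": ["athena.amazonaws.com"],
            }
        },
    }

stmt = athena_vpce_deny_statement(
    "athena-bucket", "abc",
    ["vpce-0aaa", "vpce-0bbb"],                 # placeholder endpoint IDs
    ["arn:aws:iam::111122223333:root"],         # placeholder account
)
print(json.dumps({"Version": "2012-10-17", "Statement": [stmt]}, indent=2))
```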

Making S3 static web hosting content private

I want to be able to publish test reports to S3 and have it accessible to the URL sent at the end of the Drone build.
Is it possible to have the S3 static site not viewable by anyone, so it's only accessible to people who can already reach resources in the VPC using a VPN?
I read that the content must have public read access, so I'm checking whether that is avoidable.
Yes:
Set up the static website as normal,
Add a VPC endpoint for S3,
Use a bucket policy to deny all but traffic from your VPC.
Here is a good article describing it in more detail: https://blog.monsterxx03.com/2017/08/19/build-private-staticwebsite-on-s3/
The other option is to write an S3 bucket policy like below, where x.x.x.x/x is the CIDR of the VPC:
{
  "Id": "Policy1564215115240",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1564215036691",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::<s3 bucket name>",
        "arn:aws:s3:::<s3 bucket name>/*"
      ],
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "x.x.x.x/x"
        }
      },
      "Principal": "*"
    }
  ]
}

Set ACL to private on all the files in a folder

For testing purposes I have uploaded a number of files to a folder in the S3 bucket with the "any aws user" ACL. Now I want to change the ACL to "private" for all the files in that folder. I found that this can be done easily with a third-party tool called s3cmd, but I have no permission to use third-party tools. Is there any way to do it from the AWS console (other than iterating over each file programmatically and setting the ACL)? I am using the PHP APIs. Or is there any way to set the ACL recursively through the AWS CLI?
You can refer to the AWS CLI documentation for put-bucket-acl, adding --acl private.
Alternatively, you can enforce private access with a bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PrivateAclPolicy",
      "Effect": "Deny",
      "Principal": { "AWS": "*" },
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::bucket_name/foldername/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-acl": [
            "private"
          ]
        }
      }
    }
  ]
}
Replace bucket_name and foldername with the name of your bucket and folder.
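As a sanity check on what this Deny does, here is a toy model of the StringNotEquals condition on s3:x-amz-acl (real policy evaluation involves far more than this one statement). One gotcha worth knowing: with negated operators such as StringNotEquals, a request in which the condition key is absent also matches, so an upload that sends no ACL header at all would be denied by this statement too:

```python
def denied_by_acl_policy(request_acl):
    """Mimic the Deny statement's StringNotEquals condition on
    s3:x-amz-acl. request_acl is the canned ACL the upload sets,
    or None if the request sends no x-amz-acl header."""
    allowed_values = {"private"}
    if request_acl is None:
        # Key absent from the request context: a negated operator
        # like StringNotEquals matches, so the Deny applies.
        return True
    return request_acl not in allowed_values

print(denied_by_acl_policy("public-read"))  # True: denied
print(denied_by_acl_policy("private"))      # False: not denied by this statement
print(denied_by_acl_policy(None))           # True: denied, no ACL header sent
```

If you want uploads without an explicit ACL header to go through (they default to private anyway), the policy needs an additional Null condition on s3:x-amz-acl to exempt that case.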
Thanks