Limit public access of bucket to VPC endpoint - amazon-web-services

So I want my S3 bucket to be publicly accessible, but only if the request is sent through the VPC endpoint. I allowed public access at both the bucket level and the account level, and also added the following statements to my bucket policy:
{
    "Effect": "Allow",
    "Principal": "*",
    "Action": [
        "s3:GetObject",
        "s3:ListBucket"
    ],
    "Resource": [
        "${media_storage_bucket_arn}",
        "${media_storage_bucket_arn}/*"
    ]
},
{
    "Sid": "Access-to-specific-VPCE-only",
    "Effect": "Deny",
    "Principal": {
        "AWS": "arn:aws:iam::${current_account}:root"
    },
    "Action": [
        "s3:GetObject",
        "s3:ListBucket"
    ],
    "Resource": [
        "${media_storage_bucket_arn}",
        "${media_storage_bucket_arn}/*"
    ],
    "Condition": {
        "StringEquals": {
            "aws:sourceVpce": "${vpc_endpoint}"
        }
    }
}
I have an EC2 server in a private subnet that needs to read images from the S3 bucket using a curl command and the object URL. So far, the easiest way to accomplish this has been to lift all the public access blocks, but that compromises the safety of the files in the bucket, which is why I added the VPC endpoint statement to restrict access when the request is not sent through the endpoint. This works, except that I can still read any object in the bucket through its URL even when the request is not sent through the VPC endpoint. I'm sure there has to be an easier/better approach.
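For reference, the pattern AWS documents for locking a bucket to a specific VPC endpoint is usually a Deny that matches everything except the endpoint, i.e. StringNotEquals on aws:sourceVpce rather than StringEquals; with StringEquals, the Deny only applies to traffic that already comes from the endpoint. A minimal sketch, reusing the same placeholders as the policy above:
{
    "Sid": "Deny-unless-from-specific-VPCE",
    "Effect": "Deny",
    "Principal": "*",
    "Action": [
        "s3:GetObject",
        "s3:ListBucket"
    ],
    "Resource": [
        "${media_storage_bucket_arn}",
        "${media_storage_bucket_arn}/*"
    ],
    "Condition": {
        "StringNotEquals": {
            "aws:sourceVpce": "${vpc_endpoint}"
        }
    }
}
Paired with the Allow statement above, this denies reads from anywhere outside the endpoint while still letting the EC2 instance in the private subnet fetch objects through the gateway endpoint.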

Related

Access denied error when sending curl to S3 bucket [duplicate]

I am trying to send a curl request to an S3 bucket from my EC2 instance to retrieve a specific object from the bucket.
I want to build a transparent caching proxy with NGINX, so the AWS CLI won't work for this.
The EC2 instance (a Linux machine) acts as a proxy server running NGINX that forwards HTTP requests to the bucket for caching purposes; I do not have an SSL cert on this instance.
The bucket only contains images.
The curl request looks like this:
curl my-bucket.s3.eu-west-1.amazonaws.com/1450/1349/5467_1012.jpg
But I get an Access Denied error.
I have attached a full read access policy to my EC2 instance role.
Here is my bucket policy:
{
    "Version": "2012-10-17",
    "Id": "MediaStorageBucketPolicy",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::${current_account}:root"
            },
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "${media_storage_bucket_arn}",
                "${media_storage_bucket_arn}/*"
            ]
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::${current_account}:root"
            },
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "${media_storage_bucket_arn}",
                "${media_storage_bucket_arn}/*"
            ]
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::<account number>:role/ssm-ec2-service-role"
                ]
            },
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "${media_storage_bucket_arn}",
                "${media_storage_bucket_arn}/*"
            ]
        }
    ]
}
Can this be achieved without making the bucket public?
The solution I found was to create an IAM user with S3 read access and then use that user's credentials on the EC2 instance to sign requests and pull data from the bucket; see the following doc:
http://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html
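As a quick illustration of the same idea (signed requests instead of anonymous ones): recent versions of curl (7.75.0 and later) can compute the SigV4 signature themselves, so the proxy does not have to implement the signing described in that doc by hand. A sketch, reusing the bucket and region from the question and assuming the credentials are exported as environment variables:
# Signed GET using curl's built-in SigV4 support (requires curl >= 7.75.0).
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY come from the IAM user created above.
curl --aws-sigv4 "aws:amz:eu-west-1:s3" \
     --user "${AWS_ACCESS_KEY_ID}:${AWS_SECRET_ACCESS_KEY}" \
     "https://my-bucket.s3.eu-west-1.amazonaws.com/1450/1349/5467_1012.jpg"
If temporary credentials are used instead (for example from an instance role), the session token also has to be passed as an x-amz-security-token header.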

How can I add IP restrictions to an S3 bucket (in the bucket policy) that already has a user restriction

I have a few S3 buckets to which I have given access only to a specific IAM user. I did it by setting the following in the bucket policies:
Effect: "Deny"
NotPrincipal: { "AWS": "<My_IAM_User>" }
I'm able to access the buckets only as that IAM user, so the policy works as expected, but I also want to restrict bucket access to a specific IP. This IP is the EC2 IP address my server is running on. The condition values I've used are:
"Condition": {
"NotIpAddress": {
"aws:SourceIp": [
"<My_EC2_Server_IP_Address>"
]
}
}
I was expecting the above policy to allow only my EC2 server to access the S3 bucket objects, but if I make a call from any other IP (e.g. running the server on my local machine and trying to access the buckets), it still responds with valid objects from the bucket.
The above policy does NOT seem to block requests made from other random IP addresses.
My entire bucket policy looks like:
{
    "Version": "<Version_value>",
    "Statement": [
        {
            "Sid": "<Sid_value>",
            "Effect": "Deny",
            "NotPrincipal": {
                "AWS": "<My_IAM_User>"
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::<My_Bucket_name>/*",
            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": "<My_EC2_Server_IP_Address>"
                }
            }
        }
    ]
}
My References:
1. https://aws.amazon.com/premiumsupport/knowledge-center/block-s3-traffic-vpc-ip/
2. https://medium.com/@devopslearning/100-days-of-devops-day-11-restricting-s3-bucket-access-to-specific-ip-addresses-a46c659b30e2
If your intention is to deny all AWS credentials except a given IAM user and to deny all IP addresses other than a given IP, then I would write that policy as two independent Deny statements.
Something like this:
{
    "Version": "<Version_value>",
    "Statement": [
        {
            "Sid": "deny1",
            "Effect": "Deny",
            "NotPrincipal": {
                "AWS": "<My_IAM_User>"
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::<My_Bucket_name>/*"
        },
        {
            "Sid": "deny2",
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::<My_Bucket_name>/*",
            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": "<My_EC2_Server_IP_Address>"
                }
            }
        }
    ]
}
Be careful with the IP address condition. Unless you are using an Elastic IP, your EC2 instance's IP can change e.g. if you stop then restart the instance.
Also note: you should not be using IAM User credentials on an EC2 instance. Instead, you should be using IAM Roles.
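If the instance does use a role, a rough sketch of what deny1 above could look like when keyed on that role instead of on an IAM user (the account ID and role name below are placeholders) is a Deny conditioned on aws:PrincipalArn:
{
    "Sid": "deny1-role-based",
    "Effect": "Deny",
    "Principal": "*",
    "Action": [
        "s3:GetObject",
        "s3:PutObject"
    ],
    "Resource": "arn:aws:s3:::<My_Bucket_name>/*",
    "Condition": {
        "StringNotLike": {
            "aws:PrincipalArn": "arn:aws:iam::<account_id>:role/<my-ec2-role>"
        }
    }
}
Be aware that a broad Deny like this also blocks every other principal in the account, including administrators, for the listed actions, so any principals that still need access have to be added to the exception list.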

Allow lambda function to access S3 bucket but block external IPs

I am trying to write to an S3 bucket with the help of a Lambda function, but I would like the S3 bucket to be accessible only to IPs inside the office network.
I have used this bucket policy, but it does not allow my Lambda to write to the S3 bucket; when I remove the IP-blocking part, the Lambda function works fine.
How can I change this bucket policy so that it allows the Lambda to write but does not allow external IPs to access the S3 bucket?
Thanks!
{
    "Version": "2012-10-17",
    "Id": "",
    "Statement": [
        {
            "Sid": "AllowSESPuts",
            "Effect": "Allow",
            "Principal": {
                "Service": "ses.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::mybucket.net/*",
            "Condition": {
                "StringEquals": {
                    "aws:Referer": "230513111850"
                }
            }
        },
        {
            "Sid": "AllowECSPuts",
            "Effect": "Allow",
            "Principal": {
                "Service": "ecs-tasks.amazonaws.com"
            },
            "Action": [
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::mybucket.net/*"
        },
        {
            "Sid": "",
            "Effect": "Deny",
            "Principal": "*",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::abc.net/*",
                "arn:aws:s3:::abc.net"
            ],
            "Condition": {
                "StringNotLike": {
                    "aws:userId": [
                        "AROAJIS5E4JXTWB4RTX3I:*",
                        "230513111751"
                    ]
                },
                "NotIpAddress": {
                    "aws:SourceIp": [
                        "81.111.111.111/24"
                    ]
                }
            }
        }
    ]
}
(81.111.111.111/24 is a dummy IP.)
As a general rule, it makes life easier if you can avoid Deny statements in policies.
Therefore, you could configure:
An Amazon S3 bucket with a Bucket Policy that permits access from the desired CIDR range
An IAM Role for the Lambda function that permits access to the Amazon S3 bucket
There should be no need for a Deny statement in the bucket policy since access is denied by default.
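For example, a minimal sketch of the identity policy attached to the Lambda function's role (reusing the mybucket.net name from the question's policy) that grants the write access described above:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowLambdaWrites",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::mybucket.net/*"
        }
    ]
}
Because this is an identity-based policy on the role, no Principal element is needed, and no Deny statement in the bucket policy is required for the Lambda path.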
One typical approach is to place the Lambda function inside a private VPC subnet, attach an S3 gateway VPC endpoint to that VPC, and set the corresponding S3 bucket policy to only allow certain actions when they are performed through the VPC endpoint. [ref]
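As a sketch of the endpoint step, the gateway endpoint can be created with the AWS CLI; the VPC ID and route table ID below are placeholders, and the region is reused from the curl example earlier:
# Create an S3 gateway endpoint in the Lambda function's VPC (IDs are placeholders).
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.eu-west-1.s3 \
    --route-table-ids rtb-0123456789abcdef0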

Access S3 bucket from VPC

I'm running a NodeJS script and using the aws-sdk package to write files to an S3 bucket. This works fine when I run the script locally, but not from an ECS Fargate service; that's when I get Error: AccessDenied: Access Denied.
The service runs in the allowed VPC vpc-05dd973c0e64f7dbc. I've tried adding an Internet Gateway to this VPC, and also an endpoint (as seen in the attached images), but nothing resolves the Access Denied error. Any ideas what I'm missing here?
SOLVED: the problem was me misunderstanding aws:sourceVpce. It requires the VPC endpoint ID and not the VPC ID.
(Screenshots of the VPC endpoint and Internet Gateway configuration were attached to the original question.)
Bucket policy:
{
    "Version": "2008-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3MKW5OAU5CHLI"
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::mywebsite.com/*"
        },
        {
            "Sid": "Stmt1582486025157",
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::mywebsite.com/*",
            "Principal": "*",
            "Condition": {
                "StringEquals": {
                    "aws:sourceVpce": "vpc-05dd973c0e64f7dbc"
                }
            }
        }
    ]
}
Please add a bucket policy that allows access from the VPC endpoint.
Update your bucket policy with a condition that allows users to access the S3 bucket when the request comes from the VPC endpoint that you created. To allow those users to download objects, you can use a bucket policy similar to the following:
Note: For the value of aws:sourceVpce, enter the VPC endpoint ID of the endpoint that you created.
{
    "Version": "2012-10-17",
    "Id": "Policy1314555909999",
    "Statement": [
        {
            "Sid": "Access-to-specific-VPCE-only",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::awsexamplebucket/*"],
            "Condition": {
                "StringEquals": {
                    "aws:sourceVpce": "vpce-1c2g3t4e"
                }
            }
        }
    ]
}

AWS S3 Bucket Policy security

I have a website where users can upload files such as images or PDFs, and I'm storing them in AWS S3. It's working correctly, but for testing I put a "public policy" on the bucket, like this one:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::mybucket/*"
            ]
        }
    ]
}
It works, but I think a malicious user could make a lot of requests and Amazon would charge me for them. So what would be the way to limit access while keeping my web app working correctly?
Thanks in advance.
You could create a time-limited Amazon S3 pre-signed URL for those objects. This grants access to a private object for a limited time.
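A minimal sketch of that approach using the AWS CLI (the bucket and key names are illustrative); the generated URL can then be handed to the browser or fetched with curl until it expires:
# Generate a URL that grants read access to one object for 1 hour (3600 seconds).
aws s3 presign s3://mybucket/uploads/photo-123.jpg --expires-in 3600
In a web app you would typically generate these URLs with the AWS SDK at request time rather than with the CLI.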
You can Restrict Access to a Specific HTTP Referrer by modifying your bucket policy like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::mybucket/*"
            ],
            "Condition": {
                "StringLike": {
                    "aws:Referer": [
                        "http://www.example.com/*",
                        "http://example.com/*"
                    ]
                }
            }
        }
    ]
}
Replace example.com with your website name. This allows the objects to be accessed only when the request's Referer starts with your domain name. Make sure the browsers you use include the HTTP Referer header in the request. For more details, see Restricting Access to a Specific HTTP Referrer.
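As a quick way to verify the policy (the bucket and object names are illustrative), a request can be sent with and without the Referer header using curl:
# Should succeed: the Referer matches the StringLike pattern in the bucket policy.
curl -H "Referer: http://www.example.com/gallery.html" https://mybucket.s3.amazonaws.com/photo-123.jpg

# Should be denied: no Referer header, so the condition does not match.
curl https://mybucket.s3.amazonaws.com/photo-123.jpg
Keep in mind that the Referer header is easy to spoof, so this is a deterrent rather than strong security; pre-signed URLs are the more robust option.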