Private S3 static website accessed only by API Gateway - amazon-web-services

I want to set up a static S3 website that is only accessible via API Gateway, so here's what I've done:
S3 side
1. Enabled static website hosting on the S3 bucket.
2. Blocked all public access.
3. Put in a bucket policy that only allows my VPC Endpoint to access it.
{
    "Version": "2012-10-17",
    "Id": "VPCe",
    "Statement": [
        {
            "Sid": "VPCe",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::my-bucket.com/*",
            "Condition": {
                "StringNotEquals": {
                    "aws:SourceVpce": "vpce-my-vpce"
                }
            }
        }
    ]
}
API Gateway side
1. Mapped that same VPCE to the API
2. Created a proxy resource
3. In the integration request, I set the type to HTTP, put my S3 website URL as the endpoint URL, and set content handling to passthrough.
But when I test this through API Gateway, I get a 403 Access Denied response.
Is there something I'm missing, or am I wrong to expect this to work?

I want to set up a static S3 website that is only accessible via API Gateway,
You can't do this; it's not possible. S3 static website endpoints are only accessible through their public URLs, so they must be reached over the internet.
They can't be accessed from a VPC using private IP addresses or VPC endpoints.
If you want a private static website, you have to host it yourself, for example on a private EC2 instance or an ECS container.

Related

Cross account S3 static website access over VPN only

I'm trying to allow access to an S3 bucket static website over VPN from the network AWS account; the bucket is in the prod account.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": "account-prod",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket/*",
            "Condition": {
                "StringEquals": {
                    "aws:SourceVpc": "vpc-1"
                }
            }
        },
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": "account-network",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket/*",
            "Condition": {
                "StringEquals": {
                    "aws:SourceVpc": "vpc-2" <<<>>> tried SourceVpce as well
                }
            }
        }
    ]
}
I used a VPC interface endpoint in the account where the VPN is set up, and I tried both the SourceVpc and SourceVpce conditions, but neither worked.
I'm using Transit Gateway and AWS Client VPN, and I've allowed the S3 endpoint IPs on the VPN endpoint, plus security groups and authorization rules (the TGW is used with the S3 prefix list, and there's a route entry for the S3 prefix list via the TGW).
The bucket uses object ownership, a private ACL, and a bucket policy, and I tried adding a grantee with the canonical account ID.
Any ideas what I'm doing wrong here?
This currently works in the prod account, as we have another VPN solution running there; we are trying to migrate everything to the network account and move to AWS Client VPN.
Any ideas what I'm doing wrong here?
Yes. S3 bucket static websites can only be accessed over the internet. You can't reach them using private IP addresses from a VPC or VPN. If you use a VPN, you have to set up a proxy that fetches the website over the internet and passes it back to your host.
Make sure that your VPC subnet route table has a route to the S3 endpoint, and that the endpoint policy grants access.
https://tomgregory.com/when-to-use-an-aws-s3-vpc-endpoint/
Next, set up your bucket policy to grant access from the source of your VPC endpoint, not from the VPC itself (note the vpce in the linked policy example).
https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies-vpc-endpoint.html
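Following the pattern in that AWS docs page, a sketch of an Allow statement keyed on the endpoint (the endpoint ID and bucket name below are placeholders):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadFromVpcEndpoint",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
            "Condition": {
                "StringEquals": {
                    "aws:SourceVpce": "vpce-1a2b3c4d"
                }
            }
        }
    ]
}
```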

AWS S3 - Public file access to EC2 thru URL

I am using AWS S3 for storing pdf files, docs and other files and running my Backend java application and front end react application on EC2.
My requirements,
EC2 backend application should have access to the S3 bucket for put and get objects using the API or the Java client
EC2 front end application should be able to access the objects using the S3 URL
EC2 back end application should be able to access the objects using S3 URL
I added an IAM policy giving the EC2 instances access to the S3 bucket, and it works for put and get using the API.
For accessing objects using the S3 URL, I enabled public access but limited it to the front-end domain, so that nobody else can access the URL directly and it only works through the front end.
{
    "Version": "2012-10-17",
    "Id": "http referer policy example",
    "Statement": [
        {
            "Sid": "Allow get requests originating from www.example.com and example.com.",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket-uat/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": [
                        "http://www.example.com:9000/*"
                    ]
                }
            }
        }
    ]
}
This is working as expected: I can now access the files via the S3 URL only from within the front-end app and not outside it.
Now I need the EC2 instances to access S3 objects using the URL as well, and I'm not sure how to add EC2 instance access to this policy. My EC2 instance count can change from 2 to 5, 6, and so on, so I want to make it generic enough that all current and future backend EC2 instances can access the objects through the URL.

Is there any way to host a static website on AWS S3 without giving public access?

I wish to host a static website on Amazon S3 without actually giving public access to the bucket. I am using a client AWS account in which all buckets have public access blocked; when I try to configure my bucket as public, it redirects me to a page where I have to grant public access to all the buckets.
You can front your static site with an Amazon CloudFront distribution. In addition to providing the benefits of an integrated CDN, you can configure an Origin Access Identity that ensures that the bucket can only be accessed through CloudFront, not through public S3.
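With an origin access identity in place, the bucket policy only needs to grant read access to that identity; a sketch (the OAI ID and bucket name are placeholders):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
        }
    ]
}
```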
Similar to what #PaulG said, you can also include a bucket policy with a SourceVpc condition, which lets you set up a VPC endpoint to the bucket and access it only from that VPC. I tested this setup a few months back, and it did restrict website access to the VPC.
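A sketch of that kind of statement (the VPC ID and bucket name are placeholders):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
            "Condition": {
                "StringEquals": {
                    "aws:SourceVpc": "vpc-111bbb22"
                }
            }
        }
    ]
}
```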
Speaking about just one S3 bucket hosting a static site, you can add a bucket policy under the Permissions tab allowing or disallowing IP addresses. There are some great examples in the link below, and I've added a simplified example allowing certain IPs. In this case, granting access to the other account's VPC NAT gateway IP address should work. https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html
{
    "Id": "PolicyId54",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowIPmix",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::DOC-EXAMPLE-BUCKET",
                "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "54.240.143.0/24",
                        "2001:DB8:1234:5678::/64"
                    ]
                }
            }
        }
    ]
}
Note: you still have to turn "Block public access" off when using the above policy.
Yes, it is possible. All you need to do is serve S3 through CloudFront.
Client -> Route53 -> Cloudfront -> S3 (blocked public access)
In CloudFront
Create a CloudFront function (from the left menu); this rewrites requests so
that index.html is appended. For example: example.com/home becomes
example.com/home/index.html
'use strict';
function handler(event) {
    var request = event.request;
    var uri = request.uri;
    // Check whether the URI is missing a file name.
    if (uri.endsWith('/')) {
        request.uri += 'index.html';
    }
    // Check whether the URI is missing a file extension.
    else if (!uri.includes('.')) {
        request.uri += '/index.html';
    }
    return request;
}
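The rewrite logic is plain enough to sanity-check locally with Node.js; this duplicates the handler so it runs outside CloudFront (the example URIs are made up):

```javascript
// Local sanity check of the CloudFront function's URI rewrite logic.
'use strict';

function handler(event) {
    var request = event.request;
    var uri = request.uri;
    // No trailing file name: append index.html.
    if (uri.endsWith('/')) {
        request.uri += 'index.html';
    }
    // No file extension: append /index.html.
    else if (!uri.includes('.')) {
        request.uri += '/index.html';
    }
    return request;
}

console.log(handler({ request: { uri: '/home' } }).uri);    // /home/index.html
console.log(handler({ request: { uri: '/home/' } }).uri);   // /home/index.html
console.log(handler({ request: { uri: '/app.css' } }).uri); // /app.css
```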
Create the origin access control (from the left menu); this will be used in
the distribution's origin
In Distributions
In the Origin tab
Create an origin of S3 type by choosing the S3 bucket
Select the origin access control created in the first step
Edit the general settings and set index.html as the default root object.
Edit Behaviors: under Function associations, select the CloudFront function
for the viewer request. There is no need to use a Lambda function
In S3
In Properties, disable static S3 website hosting
In Permissions
Block all public access
Edit the bucket policy as below:
{
    "Version": "2008-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Sid": "AllowCloudFrontServicePrincipal",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudfront.amazonaws.com"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::BUCKET_NAME/*",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceArn": "arn:aws:cloudfront::ACC_NUMBER:distribution/DISTRIBUTION_ID"
                }
            }
        }
    ]
}
In Route53
Create an alias A record pointing to the CloudFront distribution

How can I deny public access to an AWS API gateway?

Similar to this question, I would like to deny public access to an AWS API Gateway and only allow access when the API is invoked via a specific user account. I have applied the following resource policy to the gateway:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "NotPrincipal": {
                "AWS": [
                    "arn:aws:iam::123456789012:root",
                    "arn:aws:iam::123456789012:user/apitestuser"
                ]
            },
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-east-1:123456789012:abcd123456/*"
        }
    ]
}
But when I run
curl -X GET https://abcd123456.execute-api.us-east-1.amazonaws.com/dev/products
I still receive a success response with data:
[{"id":1,"name":"Product 1"},{"id":2,"name":"Product 2"}]
I am expecting to receive a 4XX response instead.
How can I change the policy to deny public access to the gateway? Or is it not possible to deny public access without using a VPC? Ideally I want to avoid a VPC, since running NAT gateways in multiple regions would be costly. I also want to avoid building in any authentication mechanism, as authentication and authorization take place in other API gateways which proxy to this gateway.
Based on the comments.
The issue was that the stage was not re-deployed after adding/changing the policy.
So the solution was to re-deploy the stage for the policy to take effect.
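For reference, re-deploying a stage can also be done from the CLI; a sketch using the API ID and stage name from the question:

```shell
# Re-deploy the dev stage so the updated resource policy takes effect
aws apigateway create-deployment --rest-api-id abcd123456 --stage-name dev
```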

Restrict access to s3 static website behind a cloudfront distribution

I want to temporarily restrict users from being able to access my static website hosted in s3 which sits behind a cloudfront distribution.
Is this possible, and if so, what methods could I use to implement it?
I've been able to restrict specific access to my s3 bucket by using a condition in the bucket policy which looks something like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "12.34.56.73/32"
                }
            }
        }
    ]
}
which works and restricts my S3 bucket to my IP; however, this means the CloudFront URL gets a 403 Forbidden: Access Denied.
When reading the AWS docs, it suggests that to restrict specific access to s3 resources, use an Origin Access Identity. However they specify the following:
If you don't see the Restrict Bucket Access option, your Amazon S3 origin might be configured as a website endpoint. In that configuration, S3 buckets must be set up with CloudFront as custom origins and you can't use an origin access identity with them.
which suggests to me that I can't use it in this instance. Ideally I'd like to force my distribution or bucket policy to use a specific security group and control it that way, so I can easily add and remove approved IPs.
You can allowlist the CloudFront IP addresses in your bucket policy, because the static website endpoint doesn't support origin access identity.
Here is the list of CloudFront IP addresses:
http://d7uri8nf7uskq.cloudfront.net/tools/list-cloudfront-ips
This link also explains how you can limit access via the Referer header:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/
You can tell CloudFront to add a header to every request and then modify your S3 bucket policy to require that header.
E.g.
{
    "Version": "2012-10-17",
    "Id": "http referer policy example",
    "Statement": [
        {
            "Sid": "Allow get requests originating from www.example.com and example.com.",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::examplebucket/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": "mysecretvalue"
                }
            }
        }
    ]
}
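The matching secret is configured on the CloudFront side as a custom origin header. In a distribution config it sits under the origin's CustomHeaders block, roughly like this (the header value is whatever secret you choose):

```json
"CustomHeaders": {
    "Quantity": 1,
    "Items": [
        {
            "HeaderName": "Referer",
            "HeaderValue": "mysecretvalue"
        }
    ]
}
```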