I would like to configure an AWS CloudFront CDN to serve HTML static content from two AWS S3 buckets. One bucket should host the objects in the root, the second one should host objects in a specific subpath.
S3 config
The first bucket, myapp.home, should host the home page and all other objects directly under "/".
The second bucket, myapp.subpage, should be used for the same purpose but for a specific set of URLs starting with "/subpage/".
Both buckets have been configured with "static website hosting" option enabled and with a default document "index.html", which has been uploaded to both.
Both buckets have been made public using the following policy (in the case of myapp.subpage the Resource has been adapted accordingly):
{
  "Version": "2012-10-17",
  "Id": "Policy1529690634746",
  "Statement": [
    {
      "Sid": "Stmt1529690623267",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::myapp.home/*"
    }
  ]
}
CloudFront config
The CDN is configured to respond to the name "host.domain.tld".
The CDN is configured with 2 origins:

The bucket myapp.home, having these properties:
* Origin Domain Name: myapp.home.s3.amazonaws.com
* Origin Path: empty
* Origin Type: S3 Origin

The bucket myapp.subpage, having these properties:
* Origin Domain Name: myapp.subpage.s3.amazonaws.com
* Origin Path: empty
* Origin Type: S3 Origin

These origins are linked to 2 Cache Behaviors:

First Behavior
* Origin: the bucket myapp.subpage
* Precedence: 0
* Path Pattern: subpage/*

Second Behavior
* Origin: the bucket myapp.home
* Precedence: 1
* Path Pattern: Default (*)
The problem
The myapp.home origin seems to work correctly, but myapp.subpage always returns an AccessDenied error for all of the following URIs:
host.domain.tld/subpage
host.domain.tld/subpage/
host.domain.tld/subpage/index.html
Update: I also tried substituting the origins with the S3 website domains, e.g. myapp.subpage.s3-website-eu-west-1.amazonaws.com, instead of the plain bucket domains: the home page still works, but the subpage this time returns a 404 with Message: "The specified key does not exist" for all of the URIs above.
What am I doing wrong?
Thanks in advance.
The "subpage/*" in first behaviors is the directory in myapp.subpage.
Make a directory named subpage in the bucket, then put index.html into this bucket.
Like below:
* myapp.subpage <bucket name>
* subpage <directory>
* index.html
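CloudFront does not strip the matched path pattern before forwarding the request, so the object key in the bucket must include the subpage/ prefix. As an illustration, a minimal boto3 sketch of the upload (the local file name is an assumption):

import boto3

s3 = boto3.client("s3")

# The key must carry the "subpage/" prefix: CloudFront forwards the
# full "/subpage/index.html" path to the origin.
with open("index.html", "rb") as body:  # assumed local file
    s3.put_object(
        Bucket="myapp.subpage",
        Key="subpage/index.html",
        Body=body,
        ContentType="text/html",
    )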
Related
I'm getting 404 errors on a specific path pattern, say xyz/index.html. How can I return custom HTML content instead of a 404 Not Found error?
Go to the CloudFront distribution for which you want to create a custom error page.
Then, in the CloudFront Management Console, choose Error Responses.
Click the Create Custom Error Response button to get started, then create the error response.
You can also choose the HTTP status code that will be returned along with the response page.
There's a blog post on how to do it step by step here.
Full documentation on generating custom error responses for CloudFront is here.
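If you'd rather script this than click through the console, here is a minimal boto3 sketch (the distribution ID and the error page path are placeholders):

import boto3

cf = boto3.client("cloudfront")

# Fetch the current config and its ETag; the ETag must be passed back
# as IfMatch when updating the distribution.
resp = cf.get_distribution_config(Id="E2EXAMPLE")  # placeholder ID
config = resp["DistributionConfig"]

# Serve /errors/404.html with a 200 status whenever the origin returns 404.
config["CustomErrorResponses"] = {
    "Quantity": 1,
    "Items": [{
        "ErrorCode": 404,
        "ResponsePagePath": "/errors/404.html",  # assumed object path
        "ResponseCode": "200",
        "ErrorCachingMinTTL": 0,
    }],
}

cf.update_distribution(Id="E2EXAMPLE", DistributionConfig=config, IfMatch=resp["ETag"])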
You can also achieve this using Lambda@Edge:
Create a Lambda function in the us-east-1 region (irrespective of your local region).
Update the function role's trust relationships to allow access for CloudFront:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "edgelambda.amazonaws.com",
          "lambda.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Update the function code to handle the origin response.
def lambda_handler(event, context):
    # Origin-response trigger: inspect the response CloudFront received
    # from the origin before it is returned to the viewer.
    response = event['Records'][0]['cf']['response']
    is_404_page = int(response['status']) == 404
    is_html_page = "index.html" in event['Records'][0]['cf']['request']["uri"]
    if is_404_page and is_html_page:
        # Replace the 404 with a generated page and a 200 status.
        response['status'] = 200
        response['statusDescription'] = 'OK'
        response['body'] = """
        <!DOCTYPE html>
        <html>
        <head>
            <title>Custom Response</title>
        </head>
        <body>
            <p>Custom response page.</p>
        </body>
        </html>
        """
    return response
Deploy the function:
Click on Actions -> choose Deploy to Lambda@Edge under Capabilities.
Configure the CloudFront trigger (an origin-response trigger in this case), and deploy it.
It takes around 5 minutes for the CloudFront distribution to update; then you can test your changes.
I have two S3 buckets, images and website.
I set up CloudFront to have two origins for these buckets.
The default behavior routes to website, which holds the build folder of a hello-world React app.
The behavior for the images bucket uses the path pattern images/*. I uploaded an image, test.png, to the images bucket.
For some reason, every route under /images/ takes me back to the website bucket (I know this because a 404 on that bucket routes back to index.html). This means it doesn't even look at the images bucket. I have no clue why it never hits the images bucket.
Could it be because I'm using error handling on it?
Here's the CDK code that creates the CloudFront distribution:
const websiteDistribution = new Distribution(this, 'WebsiteDistribution', {
  defaultBehavior: {
    origin: new S3Origin(this._websiteBucket, {
      originAccessIdentity: oaiWeb,
      originPath: '/website-ui',
    }),
    allowedMethods: AllowedMethods.ALLOW_ALL,
    compress: true,
    viewerProtocolPolicy: ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
  },
  additionalBehaviors: {
    'images/*': {
      origin: new S3Origin(this._imagesBucket, {
        originAccessIdentity: oaiWeb,
      }),
      allowedMethods: AllowedMethods.ALLOW_ALL,
      compress: true,
      viewerProtocolPolicy: ViewerProtocolPolicy.HTTPS_ONLY,
    },
  },
  defaultRootObject: 'index.html',
  domainNames: [`www.${props.domain}`],
  enabled: true,
  priceClass: PriceClass.PRICE_CLASS_100,
  certificate: this._websiteCert,
  logBucket: this._logBucket,
  httpVersion: HttpVersion.HTTP2,
  logFilePrefix: 'logs',
  enableLogging: true,
  errorResponses: [
    {
      httpStatus: 403,
      responseHttpStatus: 200,
      responsePagePath: '/index.html',
      ttl: Duration.seconds(0),
    },
  ],
});
Turns out the full request path was being forwarded to the images bucket.
test.png at the root of the bucket wasn't found; it needed to be inside an images folder in the images bucket.
Example:
1. URL path = cloudfront/images/test.png
2. Routed to the images/* behavior
3. In the images bucket, CloudFront looks for the key /images/test.png
4. The file is not found, since test.png is NOT in the subdirectory images
5. CloudFront now looks at the default behavior (*)
6. The file does not exist in the website bucket, so a 404 error is thrown
I added a folder named images in the images bucket, and the file was found.
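A boto3 sketch of the fix (the bucket name is a placeholder): copy the object under the images/ prefix so the key matches the full path CloudFront forwards.

import boto3

s3 = boto3.client("s3")

# CloudFront forwards the full "/images/test.png" path to the origin,
# so the object key needs the "images/" prefix.
s3.copy_object(
    Bucket="images-bucket",  # placeholder bucket name
    CopySource={"Bucket": "images-bucket", "Key": "test.png"},
    Key="images/test.png",
)
s3.delete_object(Bucket="images-bucket", Key="test.png")  # optional cleanup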
I have a website (say example.com) that is hosted on AWS S3 (bucket name "xyz") and is serving traffic via a CloudFront distribution. The CDN has its origin mapped to the S3 bucket as per usual practice to deliver the content. The DNS (Route 53) record is mapped to this CDN distribution.
I recently deleted an object from this S3 bucket, say xyz/hello/hello-jon
So when users try to hit example.com/hello/hello-jon, they get a 404 error, as expected. I'd like to redirect this to a different page that loads from a different object in the same bucket, say xyz/world/world-right, so that when users try to hit the URL example.com/hello/hello-jon they are redirected to the example.com/world/world-right page.
I referred to several Amazon Docs and finally settled on this one :-
https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-page-redirect.html
I tried the second example, "Example 2: Redirect requests for a deleted folder to a page". The following JSON-based rule was set up in the Redirection Rules of the bucket xyz:
[
  {
    "Condition": {
      "KeyPrefixEquals": "hello/hello-jon/"
    },
    "Redirect": {
      "ReplaceKeyPrefixWith": "world/world-right/"
    }
  }
]
The redirection did work, but the result differed from what I expected. The resulting URL is:
http://S3-bucket-name.S3-bucket-region.amazonaws.com/world/world-right/
Instead of https://www.example.com/world/world-right/
Could you please help me in resolving this issue or provide an alternative that could work in this scenario?
Make these changes:
[
  {
    "Condition": {
      "KeyPrefixEquals": "hello/hello-jon/"
    },
    "Redirect": {
      "HostName": "www.example.com",
      "HttpRedirectCode": "301",
      "Protocol": "https",
      "ReplaceKeyPrefixWith": "world/world-right/"
    }
  }
]
The HostName, Protocol, and HttpRedirectCode properties are covered in the same document, under the redirect rule reference.
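Note that S3 routing rules are only evaluated on the bucket's website endpoint (e.g. xyz.s3-website-<region>.amazonaws.com), so the CloudFront origin must point at the website endpoint rather than the REST endpoint for the redirect to fire. If you prefer to apply the rule programmatically, here is a minimal boto3 sketch (the index document is an assumption; put_bucket_website replaces the whole website configuration, so include any existing settings):

import boto3

s3 = boto3.client("s3")

# Apply the website configuration including the redirect rule.
s3.put_bucket_website(
    Bucket="xyz",  # bucket name from the question
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},  # assumed index document
        "RoutingRules": [
            {
                "Condition": {"KeyPrefixEquals": "hello/hello-jon/"},
                "Redirect": {
                    "HostName": "www.example.com",
                    "HttpRedirectCode": "301",
                    "Protocol": "https",
                    "ReplaceKeyPrefixWith": "world/world-right/",
                },
            }
        ],
    },
)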
I have an S3 bucket (e.g. mybucket) which currently has permissions set as follows:
Block all public access: On
* Block public access to buckets and objects granted through new access control lists (ACLs): On
* Block public access to buckets and objects granted through any access control lists (ACLs): On
* Block public access to buckets and objects granted through new public bucket or access point policies: On
* Block public and cross-account access to buckets and objects through any public bucket or access point policies: On
Inside this bucket I have two folders:
images (e.g. https://mybucket.s3.us-west-2.amazonaws.com/images/puppy.jpg)
private (e.g. https://mybucket.s3.us-west-2.amazonaws.com/private/mydoc.doc)
I want the images folder to be publicly accessible so that I can display images on my site.
I want the private folder to be restricted, accessible only programmatically via an IAM account.
How do I set these permissions? I've tried switching the above permissions off, and I've also selected the images and, under Actions, clicked 'Make public'. I then attempt the following upload:
$image = Image::make($file->getRealPath())->resize(360, 180);
Storage::disk('s3')->put($filePath, $image->stream());
The file gets uploaded, but when I try to display the image as follows, I get a 403 error:
<img src="{{ Storage::disk('s3')->url($file->path) }}" />
And to download private documents I have the following:
$response = [
    'Content-Type' => $file->mime_type,
    'Content-Length' => $file->size,
    'Content-Description' => 'File Transfer',
    'Content-Disposition' => "attachment; filename={$file->name}",
    'Content-Transfer-Encoding' => 'binary',
];
return Response::make(Storage::disk('s3')->get($file->path), 200, $response);
What's the correct way to set up these permissions?
I'm new to S3 storage.
Amazon S3 Block Public Access is a bucket-level configuration that prevents you from making any of the objects in that bucket public. So, if you want to make one or more objects public, e.g. images/*, then you need to disable S3 Block Public Access for this bucket.
That, in and of itself, will not make any of your objects public, of course. S3 buckets are private by default. To make the objects in images/ public, you will need to configure an S3 bucket policy, for example:
{
  "Id": "id101",
  "Statement": [
    {
      "Sid": "publicimages",
      "Action": "s3:GetObject",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::mybucket/images/*",
      "Principal": "*"
    }
  ]
}
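For completeness, a boto3 sketch of the same setup (bucket name as in the question): relax the public-access block so a public bucket policy is allowed, attach the images-only policy, and serve private/ objects through short-lived pre-signed URLs generated with IAM credentials.

import json
import boto3

s3 = boto3.client("s3")

# Allow public bucket policies on this bucket (the console switches above);
# ACL-based public access can stay blocked.
s3.put_public_access_block(
    Bucket="mybucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)

# Attach the policy above, making only images/* public.
policy = {
    "Id": "id101",
    "Statement": [{
        "Sid": "publicimages",
        "Action": "s3:GetObject",
        "Effect": "Allow",
        "Resource": "arn:aws:s3:::mybucket/images/*",
        "Principal": "*",
    }],
}
s3.put_bucket_policy(Bucket="mybucket", Policy=json.dumps(policy))

# Objects under private/ stay private; hand out time-limited
# pre-signed URLs instead of making them public.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "mybucket", "Key": "private/mydoc.doc"},
    ExpiresIn=300,  # URL validity in seconds
)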
I'm trying to use and enforce Amazon S3 server-side encryption.
I followed their documentation and created the following bucket policy:
{
  "Version": "2012-10-17",
  "Id": "PutObjPolicy",
  "Statement": [
    {
      "Sid": "DenyUnEncryptedObjectUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::YourBucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    }
  ]
}
I'm using the Python boto package, and when I add the x-amz-server-side-encryption header it works like a charm.
The problem is that there are several places in the application that use a POST request from an HTML form to upload files to S3.
I've managed to add the x-amz-server-side-encryption header and the files are uploaded. However, when checking in the Amazon console I can see that those files are not encrypted.
Does anybody have an idea what I'm doing wrong? I also tried to pass x-amz-server-side-encryption as a form field, but it doesn't help.
The interesting part is that when I remove the x-amz-server-side-encryption header, the requests fail with an "Access Denied" reason.
The solution was to add x-amz-server-side-encryption to the POST policy object.
For example:
POLICY = """{'expiration': '2016-01-01T00:00:00Z',
'conditions': [
{'bucket': 'my_bucket'},
['starts-with', '$key', '%s/'],
{'acl': 'public-read'},
['starts-with', '$Content-Type', ''],
['content-length-range', 0, 314572800],
{'x-amz-server-side-encryption': 'AES256'}
]
}"""
Then add an x-amz-server-side-encryption form field with the value "AES256". There is no need to add it as a header in this case.
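For reference, a minimal sketch of the same approach with modern boto3 (bucket and key are placeholders): generate_presigned_post puts the SSE value in both the form fields and the policy conditions, so the upload satisfies the Deny statement in the bucket policy.

import boto3

s3 = boto3.client("s3")

# Pre-signed POST for browser uploads; the SSE entry appears both as a
# form field and as a policy condition, so S3 encrypts the object and
# the bucket policy's Deny does not trigger.
post = s3.generate_presigned_post(
    Bucket="my_bucket",               # placeholder
    Key="uploads/${filename}",        # placeholder key pattern
    Fields={"x-amz-server-side-encryption": "AES256"},
    Conditions=[
        {"x-amz-server-side-encryption": "AES256"},
        ["content-length-range", 0, 314572800],
    ],
    ExpiresIn=3600,
)
# post["url"] and post["fields"] supply the form action and hidden inputs.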