I am trying to understand how CloudFront works. Assume the static website is static.com and the dynamic website is dynamic.com. static.com has thousands of HTML files containing img tags referencing images that come from static.com.
dynamic.com is Java based, dynamically generating its HTML, and its img tags and images come from dynamic.com.
Assume images are not manually copied to S3, and no modifications are made to either site for CloudFront other than DNS settings.
Assume the CloudFront URL set up for static.com is mystaticxyzz.cloudfront.net and for dynamic.com it is mydynamicxyz.cloudfront.net.
CloudFront works as a CDN sitting in front of what are called Origins.
These origins are the endpoints that CloudFront forwards traffic to in order to retrieve the response and content. An origin could be a single server, a load balancer, or any other publicly accessible, resolvable hostname.
If you want to split between static and dynamic content, you would create an origin for each type of content within the same distribution. One would be the default origin, whilst the other would be matched based on a path pattern (e.g. /css/* or /images/*).
Each of these origins can have its own cache behaviour, which lets you define whether content should be cached and for how long.
When a user accesses the CloudFront domain, CloudFront routes the request to the appropriate origin depending on the path, or serves the response from the edge cache where possible.
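To make that concrete, here is a rough, purely illustrative sketch (not a complete or valid DistributionConfig; the origin IDs, domain names and the /images/* path pattern are made up) of how a default behavior and a path-matched behavior might map to two origins, written as a plain JavaScript object:

// Illustrative sketch only: origin IDs, domain names and the path pattern are
// hypothetical, and most required CloudFront settings are omitted for brevity.
const distributionSketch = {
    Origins: [
        { Id: 'dynamic-app', DomainName: 'app-load-balancer.example.com' },
        { Id: 'static-assets', DomainName: 'static-assets.s3.amazonaws.com' },
    ],
    // Anything that matches no other behavior goes to the dynamic origin.
    DefaultCacheBehavior: {
        TargetOriginId: 'dynamic-app',
        MinTTL: 0, // effectively uncached
    },
    // Requests under /images/* go to the S3 origin and are cached at the edge.
    CacheBehaviors: [
        {
            PathPattern: '/images/*',
            TargetOriginId: 'static-assets',
            MinTTL: 86400, // cache for a day
        },
    ],
};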
I know this is rather late, but I am just going to add this here for those struggling to cache dynamic and static content.
Firstly, you need to understand your application.
Client Side Rendering
If you have a React.js app, you don't need to worry too much about your caching behavior, as the data will be fetched from an API and rendered client side.
None of the static files/content being delivered to the end user will be changing.
Since the API requests will be coming from a different domain, that data won't be cached by the CDN. Moreover, the data being rendered will update the HTML via JavaScript. If your JavaScript files are continuously updated, you can use invalidations for them.
If you have content that is not stored on the origin and your CSR app is fetching it from a separate domain from your website domain, you will need to set up a separate CDN and point that domain name to it. You won't need to make any changes to your application, as the domain name stays the same.
However, if you have static content that exists in the same origin (e.g. S3), then you would just request the content using the domain name of the CDN, so the request goes from client to CDN to origin (if not cached or expired).
Lastly, assume we have separate origins, like an S3 bucket for the React app and an S3 bucket for images. We can set up a single CDN with multiple origins. This means we can use CloudFront as an aggregator; you will then be able to cache content from different origins by using dedicated paths.
This means that wherever you previously made calls to those origins, i.e. using the S3 domain names, you would need to update them to the single CDN domain name, as the cache behaviors will route the requests to the respective origins.
example:
www.example.com(react-app react s3 bucket)
www.example.com/images (some s3 image bucket)
<img src="https://www.example.com/images/example.jpg" />
CloudFront will make a request to the origin associated with the behavior configured for "/images".
Server Side Rendered
For server-side rendered apps, ideally the default cache behavior on the origin should allow all the HTTP methods, because you will have POST and PUT requests that you will want CloudFront to forward to the origin.
Make sure that you forward all query strings and cookies to the origin using an origin request policy. You can fine-tune it by whitelisting specific query strings or cookies, but forwarding everything will make life easier. Also, the default cache behavior should use a cache policy that disables caching, i.e. min, default, and max TTLs of 0 seconds. This is because the content is dynamic in nature and gets rendered on the server, not client side, so otherwise you may encounter unexpected behavior in your application depending on how it is set up.
If you have static content on different paths like "/img", "/css", or "/web/pages/information", cache those independently of the default behavior, with their own TTLs.
You could also do some cool stuff with the Cache-Control header, which can bypass the cache if you don't want to configure 101 different behaviors.
https://aws.amazon.com/blogs/networking-and-content-delivery/improve-your-website-performance-with-amazon-cloudfront/
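For example, here is a minimal, purely illustrative sketch (plain Node.js, hypothetical paths) of an origin that sets Cache-Control itself, so a single CloudFront behavior that respects origin headers can cache static paths and skip caching for server-rendered pages:

// Illustrative sketch only: a tiny Node.js origin that drives edge caching via
// Cache-Control headers instead of many path-specific behaviors.
const http = require('http');

http.createServer((req, res) => {
    if (req.url.startsWith('/img/') || req.url.startsWith('/css/')) {
        // Static assets: allow CloudFront (and browsers) to cache for a day.
        res.setHeader('Cache-Control', 'public, max-age=86400');
        res.end('/* static asset body */');
    } else {
        // Server-rendered pages: tell CloudFront not to cache them.
        res.setHeader('Cache-Control', 'no-store');
        res.end('<html><body>rendered per request</body></html>');
    }
}).listen(8080);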
Just understand your application and you will be able to leverage the CDN properly.
If you have a web server that does a mixture of server-side and client-side rendering,
just identify which paths are client-side rendered and cache those static files.
For anything that is dynamic in nature and requires the application to make requests to the origin, use the caching-disabled policy within a behavior.
Moreover, any of the patterns mentioned earlier (using a single CDN with a single or multiple origins, or multiple CDNs with different origins) is applicable to server-side rendering if some content, such as images, gets rendered client side.
We would like to serve several test domains off a single S3 bucket using CloudFront as a frontend.
Namely, https://test-1.domain.com/index.html goes to bucket-1.s3.amazonaws.com/test-1/index.html, https://test-2.domain.com/index.html to bucket-1.s3.amazonaws.com/test-2/index.html and so on.
The problem is that our web app is an SPA, so when there is no content in the S3 bucket we should return 200, not 404; say, https://test-2.domain.com/some/url should get bucket-1.s3.amazonaws.com/test-2/index.html without modifying the URL (thus, a 302 is not an option).
It would be perfectly possible using the Error Pages setting of a CloudFront distribution if we were serving just a single domain, but we need to distinguish between test-1. and test-2. and use the index.html files from different subfolders. Is this still possible somehow?
I think this is possible using a Lambda@Edge origin request function.
This is how I would do it, in a somewhat complicated way:
Whitelist the Host header (I know we shouldn't do it for S3).
Write a Lambda@Edge function to read the Host header value and, if it is test-1.domain.com, choose the origin with the path bucket-1.s3.amazonaws.com/test-1/, else bucket-1.s3.amazonaws.com/test-2/.
https://aws.amazon.com/blogs/networking-and-content-delivery/dynamically-route-viewer-requests-to-any-origin-using-lambdaedge/
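A rough sketch of what that origin request function could look like (untested; it assumes the distribution's origin is already bucket-1.s3.amazonaws.com and that the Host header is whitelisted so the viewer's hostname reaches the function):

// Sketch of a Lambda@Edge origin-request handler that picks the origin path
// from the viewer's Host header. Domain and bucket names follow the question;
// adjust authMethod/region and so on if you use an origin access identity.
'use strict';

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const host = request.headers.host[0].value;

    // test-1.domain.com -> /test-1, everything else -> /test-2
    const path = host.startsWith('test-1.') ? '/test-1' : '/test-2';

    if (request.origin && request.origin.s3) {
        request.origin.s3.path = path;
        // S3 expects its own domain name in the Host header it receives.
        request.headers['host'] = [{ key: 'host', value: request.origin.s3.domainName }];
    }

    callback(null, request);
};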
I'm looking for a way to pass the requesting host header on to either API Gateway or a custom endpoint (outside of Amazon) from a CloudFront origin.
Essentially I have multiple domains mapped to a CloudFront catch-all, and I'm trying to pre-render based off the index request on the server while letting all other resources through.
If this is not possible, would Lambda@Edge be able to achieve such a thing?
Thanks!
Until such time as Lambda@Edge leaves preview, here's your workaround:
For each domain name, create a separate CloudFront distribution, and add a unique custom origin header.
If you've configured more than one CloudFront distribution to use the same origin, you can specify different custom headers for the origins in each distribution and use the logs for your web server to distinguish between the requests that CloudFront forwards for each distribution.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/forward-custom-headers.html
It should go without saying that "use the logs for your web server" is only one possible use for this value. You can also use it to identify which domain the request is for, by inspecting the inserted request header.
For example, for the site api-42.example.com, add a custom origin header X-Forwarded-Host with the static value the same as the hostname, api-42.example.com.
CloudFront adds the custom origin header to each request when sending it to the origin server.
If the client, for whatever reason, sends the same header, CloudFront discards what the client sent, before adding your header and value to each request.
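On the origin side, identifying the requesting domain is then just a matter of reading that injected header. A minimal sketch (plain Node.js, hypothetical handler) of what that might look like:

// Illustrative only: the origin reads the X-Forwarded-Host header that the
// distribution was configured to add, and branches on it.
const http = require('http');

http.createServer((req, res) => {
    // Node lower-cases incoming header names.
    const requestedHost = req.headers['x-forwarded-host'] || 'unknown';

    res.setHeader('Content-Type', 'text/plain');
    res.end('This request came in through: ' + requestedHost);
}).listen(8080);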
Since the actual CloudFront distributions themselves are free, there's no real harm in this solution. If you need to create a lot of them, that's easily scripted with aws-cli. By default, accounts can create 200 different distributions, but you can submit a free support request to increase that limit.
You may now be contemplating the impact of this on your cache hit rate, since the different sites wouldn't share a common cache. That's a valid concern, but the impact may not be as substantial as you expect, for a variety of reasons -- not the least of which is that CloudFront's cache is not monolithic. If you have viewers hitting a single distribution but from two different parts of the world, those users are almost certainly connecting to different CloudFront edge locations, thus hitting different cache instances anyway.
How do you set a default root object for subdirectories on a statically hosted website on Cloudfront? Specifically, I'd like www.example.com/subdir/index.html to be served whenever the user asks for www.example.com/subdir. Note, this is for delivering a static website held in an S3 bucket. In addition, I would like to use an origin access identity to restrict access to the S3 bucket to only Cloudfront.
Now, I am aware that CloudFront works differently than S3, and Amazon states specifically:
The behavior of CloudFront default root objects is different from the behavior of Amazon S3 index documents. When you configure an Amazon S3 bucket as a website and specify the index document, Amazon S3 returns the index document even if a user requests a subdirectory in the bucket. (A copy of the index document must appear in every subdirectory.) For more information about configuring Amazon S3 buckets as websites and about index documents, see the Hosting Websites on Amazon S3 chapter in the Amazon Simple Storage Service Developer Guide.
As such, even though CloudFront allows us to specify a default root object, this only works for www.example.com and not for www.example.com/subdir. In order to get around this difficulty, we can change the origin domain name to point to the website endpoint given by S3. This works great and allows the root objects to be specified uniformly. Unfortunately, this doesn't appear to be compatible with origin access identities. Specifically, the above link states:
Change to edit mode:
Web distributions – Click the Origins tab, click the origin that you want to edit, and click Edit. You can only create an origin access identity for origins for which Origin Type is S3 Origin.
Basically, in order to set the correct default root object, we use the S3 website endpoint and not the website bucket itself. This is not compatible with using an origin access identity. As such, my question boils down to either:
Is it possible to specify a default root object for all subdirectories for a statically hosted website on Cloudfront?
Is it possible to setup an origin access identity for content served from Cloudfront where the origin is an S3 website endpoint and not an S3 bucket?
There IS a way to do this. Instead of pointing CloudFront to your bucket by selecting it in the dropdown (www.example.com.s3.amazonaws.com), point it to the static website hosting endpoint of your bucket (e.g. www.example.com.s3-website-us-west-2.amazonaws.com).
Thanks to This AWS Forum thread
(New Feature May 2021) CloudFront Function
Create the simple JavaScript function below:
function handler(event) {
    var request = event.request;
    var uri = request.uri;

    // Check whether the URI is missing a file name.
    if (uri.endsWith('/')) {
        request.uri += 'index.html';
    }
    // Check whether the URI is missing a file extension.
    else if (!uri.includes('.')) {
        request.uri += '/index.html';
    }

    return request;
}
Read here for more info
Activating S3 static website hosting means you have to open the bucket to the world. In my case, I needed to keep the bucket private and use the origin access identity functionality to restrict access to CloudFront only. Like @Juissi suggested, a Lambda function can fix the redirects:
'use strict';
/**
* Redirects URLs to default document. Examples:
*
* /blog -> /blog/index.html
* /blog/july/ -> /blog/july/index.html
* /blog/header.png -> /blog/header.png
*
*/
let defaultDocument = 'index.html';
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    if (request.uri != "/") {
        let paths = request.uri.split('/');
        let lastPath = paths[paths.length - 1];
        let isFile = lastPath.split('.').length > 1;

        if (!isFile) {
            if (lastPath != "") {
                request.uri += "/";
            }
            request.uri += defaultDocument;
        }

        console.log(request.uri);
    }

    callback(null, request);
};
After you publish your function, go to your CloudFront distribution in the AWS console. Go to Behaviors, then choose Origin Request under Lambda Function Associations, and finally paste the ARN of your new function.
I totally agree that it's a ridiculous problem! The fact that CloudFront knows about serving index.html as Default Root Object AND STILL they say it doesn't work for subdirectories (source) is totally strange!
The behavior of CloudFront default root objects is different from the behavior of Amazon S3 index documents. When you configure an Amazon S3 bucket as a website and specify the index document, Amazon S3 returns the index document even if a user requests a subdirectory in the bucket.
I, personally, believe that AWS has made it this way so CloudFront becomes a CDN only (loading assets, with no logic in it whatsoever) and every request to a path in your website should be served from a "Server" (e.g. EC2 Node/Php server, or a Lambda function.)
Whether this limitation exists to enhance security, or keep things apart (i.e. logic and storage separated), or make more money (to enforce people to have a dedicated server, even for static content) is up to debate.
Anyhow, I'm summarizing the possible solutions/workarounds here, with their pros and cons.
1) S3 can be Public - Use Custom Origin.
It's the easiest one, originally posted in @JBaczuk's answer as well as in this GitHub gist. Since S3 already supports serving index.html in subdirectories via Static Website Hosting, all you need to do is:
Go to S3, enable Static Website Hosting
Grab the URL in the form of http://<bucket-name>.s3-website-us-west-2.amazonaws.com
Create a new Origin in CloudFront and enter this as a Custom Origin (and NOT S3 ORIGIN), so CloudFront treats this as an external website when getting the content.
Pros:
Very easy to set up.
It supports /about/, /about, and /about/index.html, and redirects the last two to the first one properly.
Cons:
If your files in the S3 bucket are not in the root of S3 (say they are in /artifacts/*), then going to www.domain.com/about (without the trailing /) will redirect you to www.domain.com/artifacts/about, which is something you don't want at all! Basically the /about to /about/ redirect in S3 breaks if you serve from CloudFront and the path to the files (from the root) doesn't match.
Security and Functionality: You cannot make S3 private. That's because CloudFront's Origin Access Identity is not going to be supported, clearly, because CloudFront is instructed to treat this origin as a random website. It means that users can potentially get the files from S3 directly, which might not be what you want due to security/WAF concerns, and can also break the website if you have JS/HTML that relies on the path being your domain only.
[maybe an issue] The communication between CloudFront and S3 is not done in the recommended, optimized way.
[maybe?] someone has complained that it doesn't work smoothly for more than one Origin in the Distribution (i.e. wanting /blog to go somewhere)
[maybe?] someone has complained that it doesn't preserve the original query params as expected.
2) Official solution - Use a Lambda Function.
It's the official solution (though the doc is from 2017). There is a ready-to-launch 3rd-party Application (JavaScript source in github) and example Python Lambda function (this answer) for it, too.
Technically, by doing this, you create a mini-server (they call it serverless!) that only serves CloudFront's Origin Requests to S3 (so, it basically sits between CloudFront and S3.)
Pros:
Hey, it's the official solution, so probably lasts longer and is the most optimized one.
You can customize the Lambda Function if you want and have control over it. You can support further redirect in it.
If implemented correctly (like the 3rd-party JS one; I don't think the official one does), it supports both /about/ and /about (with a redirect from the latter, without the trailing /, to the former).
Cons:
It's one more thing to set up.
It's one more thing to have an eye, so it doesn't break.
It's one more thing to check when something breaks.
It's one more thing to maintain -- e.g. the third-party one here has open PRs since Jan 2021 (it's April 2021 now.)
The 3rd party JS solution doesn't preserve the query params. So /about?foo=bar is 301 redirected to /about/ and NOT /about/?foo=bar. You need to make changes to that lambda function to make it work.
The 3rd party JS solution keeps /about/ as the canonical version. If you want /about to be the canonical version (i.e. other formats get redirected to it via 301), you have to make changes to the script.
[minor] It only works in us-east-1 (open issue in Github since 2020, still open and an actual problem in April 2021).
[minor] It has its own cost, although given CloudFront's caching, shouldn't be significant.
3) Create fake "Folder File"s in S3 - Use a manual Script.
It's a solution between the first two -- It supports OAI (private S3) and it doesn't require a server. It's a bit nasty though!
What you do here is run a script that, for each subdirectory file like /about/index.html, creates an object in S3 named (with key) /about and copies that HTML file (the content and the content type) into that object.
Example scripts can be found in this Reddit answer and this answer using AWS CLI.
Pros:
Secure: Supports S3 Private and CloudFront OAI.
No additional live piece: The script runs pre-upload to S3 (or one-time) and then the system remains intact with the two pieces of S3 and CF only.
Cons:
[Needs Confirmation] It supports /about but not /about/ with trailing / I believe.
Technically you have two different files being stored. Might look confusing and make your deploys expensive if there are tons of HTML files.
Your script has to manually find all the subdirectories and create a dummy object out of them in S3. That has the potential to break in the future.
PS) Other Tricks
Dirty trick using JavaScript on Custom Error
While it doesn't look like a real thing, this answer deserves some credit, IMO!
You let the Access Denied errors (404s turning into 403s) go through, then catch them and manually, via JavaScript, redirect the user to the right place.
Pros
Again, easy to set up.
Cons
It relies on client-side JavaScript.
It messes with SEO -- especially if the crawler doesn't run JS.
It messes with the user's browser history (i.e. the back button), which possibly could be improved (and get more complicated!) via HTML5 history.replaceState.
There is an "official" guide published on AWS blog that recommends setting up a Lambda#Edge function triggered by your CloudFront distribution:
Of course, it is a bad user experience to expect users to always type index.html at the end of every URL (or even know that it should be there). Until now, there has not been an easy way to provide these simpler URLs (equivalent to the DirectoryIndex Directive in an Apache Web Server configuration) to users through CloudFront. Not if you still want to be able to restrict access to the S3 origin using an OAI. However, with the release of Lambda@Edge, you can use a JavaScript function running on the CloudFront edge nodes to look for these patterns and request the appropriate object key from the S3 origin.
Solution
In this example, you use the compute power at the CloudFront edge to inspect the request as it’s coming in from the client. Then re-write the request so that CloudFront requests a default index object (index.html in this case) for any request URI that ends in ‘/’.
When a request is made against a web server, the client specifies the object to obtain in the request. You can use this URI and apply a regular expression to it so that these URIs get resolved to a default index object before CloudFront requests the object from the origin. Use the following code:
'use strict';
exports.handler = (event, context, callback) => {
    // Extract the request from the CloudFront event that is sent to Lambda@Edge
    var request = event.Records[0].cf.request;

    // Extract the URI from the request
    var olduri = request.uri;

    // Match any '/' that occurs at the end of a URI. Replace it with a default index
    var newuri = olduri.replace(/\/$/, '\/index.html');

    // Log the URI as received by CloudFront and the new URI to be used to fetch from origin
    console.log("Old URI: " + olduri);
    console.log("New URI: " + newuri);

    // Replace the received URI with the URI that includes the index page
    request.uri = newuri;

    // Return to CloudFront
    return callback(null, request);
};
Follow the guide linked above to see all steps required to set this up, including S3 bucket, CloudFront distribution and Lambda@Edge function creation.
There is one other way to get a default file served in a subdirectory, like example.com/subdir/. You can actually (programmatically) store a file with the key subdir/ in the bucket. This file will not show up in the S3 management console, but it actually exists, and CloudFront will serve it.
Johan Gorter and Jeremie indicated index.html can be stored as an object with key subdir/.
I validated that this approach works; an alternative, easy way to do this is with awscli's s3api copy-object:
aws s3api copy-object --copy-source bucket_name/subdir/index.html --key subdir/ --bucket bucket_name
A workaround for the issue is to utilize Lambda@Edge for rewriting the requests. One just needs to set up the Lambda on the CloudFront distribution's viewer request event and rewrite everything that ends with '/' AND is not equal to '/' to the default root document, e.g. index.html.
UPDATE: It looks like I was incorrect! See JBaczuk's answer, which should be the accepted answer on this thread.
Unfortunately, the answer to both your questions is no.
1. Is it possible to specify a default root object for all subdirectories for a statically hosted website on Cloudfront?
No. As stated in the AWS CloudFront docs...
... If you define a default root object, an end-user request for a subdirectory of your distribution does not return the default root object. For example, suppose index.html is your default root object and that CloudFront receives an end-user request for the install directory under your CloudFront distribution:
http://d111111abcdef8.cloudfront.net/install/
CloudFront will not return the default root object even if a copy of index.html appears in the install directory.
...
The behavior of CloudFront default root objects is different from the behavior of Amazon S3 index documents. When you configure an Amazon S3 bucket as a website and specify the index document, Amazon S3 returns the index document even if a user requests a subdirectory in the bucket. (A copy of the index document must appear in every subdirectory.)
2. Is it possible to setup an origin access identity for content served from Cloudfront where the origin is an S3 website endpoint and not an S3 bucket?
Not directly. Your options for origins with CloudFront are S3 buckets or your own server.
It's that second option that does open up some interesting possibilities, though. This probably defeats the purpose of what you're trying to do, but you could set up your own server whose sole job is to be a CloudFront origin server.
When a request comes in for http://d111111abcdef8.cloudfront.net/install/, CloudFront will forward this request to your origin server, asking for /install. You can configure your origin server however you want, including to serve index.html in this case.
Or you could write a little web app that just takes this call and gets it directly from S3 anyway.
But I realize that setting up your own server and worrying about scaling it may defeat the purpose of what you're trying to do in the first place.
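If you did go that route anyway, the origin server can be very small. Here is a rough sketch (untested; AWS SDK for JavaScript v3, with a placeholder bucket name) of a tiny custom origin that maps directory-style URIs to index.html and streams the object from S3:

// Sketch of a minimal custom origin: "/install/" is resolved to
// "install/index.html" and fetched from S3. Bucket name and port are
// placeholders; error handling is deliberately minimal.
const http = require('http');
const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({});
const BUCKET = 'my-private-site-bucket'; // hypothetical

http.createServer(async (req, res) => {
    // Strip the leading slash; append index.html for directory-style paths.
    let key = req.url.split('?')[0].replace(/^\//, '');
    if (key === '' || key.endsWith('/')) {
        key += 'index.html';
    }

    try {
        const obj = await s3.send(new GetObjectCommand({ Bucket: BUCKET, Key: key }));
        res.writeHead(200, { 'Content-Type': obj.ContentType || 'text/html' });
        obj.Body.pipe(res); // in Node the Body is a readable stream
    } catch (err) {
        res.writeHead(404, { 'Content-Type': 'text/plain' });
        res.end('Not found: ' + key);
    }
}).listen(8080);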
Another alternative to using Lambda@Edge is to use CloudFront's error pages. Set up a Custom Error Response to send all 403s to a specific file. Then add JavaScript to that file to append index.html to URLs that end in a /. Sample code:
if (window.location.href.endsWith("/") && !window.location.href.endsWith(".com/")) {
    window.location.href = window.location.href + "index.html";
} else {
    document.write("<Your 403 error message here>");
}
One can use the newly released CloudFront Functions; here is sample code.
Note: If you are using static website hosting, then you do not need any function!
I know this is an old question, but I just struggled through this myself. Ultimately my goal was less to set a default file in a directory, and more to have the end result of a file that was served without .html at the end of it.
I ended up removing .html from the filename and programmatically/manually setting the MIME type to text/html. It is not the traditional way, but it does seem to work, and satisfies my requirements for pretty URLs without sacrificing the benefits of CloudFormation. Setting the MIME type is annoying, but a small price to pay for the benefits, in my opinion.
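For reference, a hedged sketch of that kind of upload (AWS SDK for JavaScript v3; bucket name, key and local file path are placeholders): the object is stored without the .html extension but with an explicit text/html content type so browsers still render it as a page.

// Illustrative sketch: upload "about" (no .html extension) with an explicit
// text/html content type. Bucket, key and local file path are placeholders.
const { readFileSync } = require('fs');
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({});

async function uploadPrettyUrlPage() {
    await s3.send(new PutObjectCommand({
        Bucket: 'my-site-bucket',           // placeholder
        Key: 'about',                       // served at /about, no extension
        Body: readFileSync('./about.html'), // local source file
        ContentType: 'text/html',           // the part that matters here
    }));
}

uploadPrettyUrlPage().catch(console.error);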
@johan-gorter indicated above that CloudFront serves files with keys ending in /.
After investigation, it appears that this option works, and that one can create this type of file in S3 programmatically. Therefore, I wrote a small Lambda that is triggered when a file with the suffix index.html or index.htm is created in S3.
What it does is copy an object dir/subdir/index.html into an object dir/subdir/:
import json
import boto3

s3_client = boto3.client("s3")


def lambda_handler(event, context):
    for f in event['Records']:
        bucket_name = f['s3']['bucket']['name']
        key_name = f['s3']['object']['key']
        source_object = {'Bucket': bucket_name, 'Key': key_name}
        file_key_name = False
        if key_name[-10:].lower() == "index.html" and key_name.lower() != "index.html":
            file_key_name = key_name[0:-10]
        elif key_name[-9:].lower() == "index.htm" and key_name.lower() != "index.htm":
            file_key_name = key_name[0:-9]
        if file_key_name:
            s3_client.copy_object(CopySource=source_object, Bucket=bucket_name, Key=file_key_name)
I'm trying to make CloudFront work on my solution. I'm using Route 53 + CloudFront + ELB.
Consider the following:
1. Route 53 is pointing to CloudFront through a record set alias.
2. CloudFront is pointing to the ELB through an origin domain name.
3. CloudFront has an Alternate Domain Name set to my custom domain (mysite.com)
If I make a request using the CloudFront domain name (d1ngxxxx.cloudfront.net) or the custom domain (mysite.com), the initial request goes to CloudFront, which responds with an HTTP 302. All the subsequent requests (for resources like images, CSS, JS...) are made directly to the ELB domain name, bypassing CloudFront.
What should I do to make all requests go through CloudFront?
Thanks in advance!
I can't come up with a circumstance where CloudFront would issue these redirects.
It seems likely that what's happening is that your server itself is issuing the 302 redirect, because it doesn't like the Host: header it's getting from CloudFront.
Host: CloudFront sets the value to the domain name of the origin that is associated with the requested object.
— http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html
CloudFront is then returning the redirect to the browser.
CloudFront can also cache such a redirect, so be mindful of that as you're troubleshooting. The response headers should indicate whether CloudFront went to the origin for the particular response:
X-Cache: Miss from cloudfront
...or whether cloudfront served the request from cache.
X-Cache: Hit from cloudfront
Two possible approaches to resolve this:
If your legacy code is reacting to the Host: header in a negative way, you might be able to reconfigure the web server to modify that value before the code is able to see it, so the redirection wouldn't occur.
Alternately, you could use something outboard, a reverse-proxying engine like Varnish or HAProxy (which I have touched on elsewhere). In HAProxy, for a simple example:
reqirep ^Host:\ .* Host:\ expected-domain.example.com if { hdr(host) -i unexpected-domain.example.com }
A rule in a form similar to this would replace the Host: unexpected-domain.example.com header with Host: expected-domain.example.com in all incoming requests where that header was present, which should keep your legacy code happy and avoid the redirects. Running HAProxy in front of your legacy system doesn't impose a significant load, since the code is very tight. All of my legacy web systems are now fronted with these systems, to give me the ability to manipulate and modify behavior much more easily than might otherwise be possible.