In my CloudFront origin-request Lambda@Edge function I want to return a response that sets a cookie in the browser and redirects to another page. I do it with the following return statement:
return {
    status: '302',
    statusDescription: 'Found',
    headers: {
        location: [
            { key: 'Location', value: 'my.website.com' },
        ],
        'set-cookie': [
            { key: 'Set-Cookie', value: 'key=value; Max-Age=600' },
        ]
    }
};
Unfortunately CloudFront seems to remove or ignore this Set-Cookie header, and the browser receives a response without it. Interestingly, the exact same code works when placed in the CloudFront viewer-request function. Is there a way to make the origin-request Lambda keep the Set-Cookie header in the response?
The solution turned out to be a cache policy with the Cookies - Include specified cookies option turned on and the proper cookie name whitelisted. The behaviour in the question is caused (as the documentation states) by:
Don’t forward cookies to your origin – CloudFront doesn’t cache your objects based on cookies sent by the viewer. In addition, CloudFront removes cookies before forwarding requests to your origin, and removes Set-Cookie headers from responses before returning responses to your viewers.
To prevent the Set-Cookie header from being cached together with the object once the cookie is whitelisted, add the following header to the response: Cache-Control: no-cache="Set-Cookie".
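For illustration, here is a minimal sketch of the origin-request handler with that header added (the cookie name key, its value and the destination URL are the placeholders from the question; the key cookie is assumed to be whitelisted in the cache policy):
'use strict';

// Origin-request Lambda@Edge handler: short-circuits the request and returns
// a redirect that also sets a cookie. Assumes the "key" cookie is whitelisted
// in the distribution's cache policy so CloudFront keeps the Set-Cookie header.
exports.handler = async (event) => {
    return {
        status: '302',
        statusDescription: 'Found',
        headers: {
            location: [
                { key: 'Location', value: 'https://my.website.com/' },
            ],
            'set-cookie': [
                { key: 'Set-Cookie', value: 'key=value; Max-Age=600' },
            ],
            // Keep caches from storing the Set-Cookie header with the object.
            'cache-control': [
                { key: 'Cache-Control', value: 'no-cache="Set-Cookie"' },
            ],
        },
    };
};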
I have a single page application and I'm trying to prevent clickjacking by adding X-Frame-Options header to the HTML responses. My website is hosted on S3 through CloudFront.
The CloudFront distribution is configured to send index.html by default:
Default root object
index.html
In the Error Pages section I configured 404 page to also point to the index.html. This way all URLs that are not in S3 return the default HTML, i.e. /login, /admin etc.
Update: The 403 status code is also configured in Error Pages to point to index.html.
Then I created a CloudFront function as described here and assigned it to the Viewer response event:
function handler(event) {
    var response = event.response;
    var headers = response.headers;
    headers['x-frame-options'] = {value: 'DENY'};
    return response;
}
This works, but only for /:
curl -v https://<MYSITE.com>
....
< x-frame-options: DENY
For other URLs it doesn't work - the x-frame-options header is missing:
curl -v https://<MYSITE.com>/login
....
< x-cache: Error from cloudfront
My question is: why doesn't my CloudFront function add the header to the error response, and what can I do to add it?
I understand that your questions are:
Q1: Why does the CloudFront function work for /?
Q2: Why doesn't the CloudFront function work for other URL paths?
Please refer to the responses below:
A1: You specified a Default Root Object [1] (e.g. index.html), so CloudFront returns that object when a user requests the root URL. Because CloudFront returns the object with 200 OK, the CloudFront Function is invoked on the viewer response event.
A2: You might not have granted the s3:ListBucket permission in your S3 bucket policy (e.g. to the OAI). As a result, you get Access Denied (403) errors for missing objects instead of 404 Not Found errors. Consequently, the Error Pages entry you configured isn't applied in this case, and the CloudFront Function isn't invoked because the HTTP status code is higher than 399 [2].
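For reference, here is a sketch of the S3 bucket policy statements that grant the OAI both s3:GetObject and s3:ListBucket (the OAI ID and bucket name below are placeholders):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLEID"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::your-bucket-name/*"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLEID"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::your-bucket-name"
        }
    ]
}
With s3:ListBucket granted, S3 returns 404 Not Found instead of 403 Access Denied for missing keys, so the 404 Error Pages mapping from the question applies.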
[Updated] Suggestion:
CloudFront does not invoke edge functions for viewer response events when the origin returns HTTP status code 400 or higher. However, Lambda@Edge functions for origin response events are invoked for all origin responses. In this scenario, I suggest using a Lambda@Edge origin response function instead of a CloudFront Function.
For your convenience, please refer to the following Lambda@Edge sample code:
exports.handler = async (event, context) => {
    const response = event.Records[0].cf.response;
    const headers = response.headers;
    headers['x-frame-options'] = [{
        key: 'X-Frame-Options',
        value: 'DENY',
    }];
    return response;
};
FYI. Here is my curl test result:
# PATH: `/`
$ curl -sSL -D - https://dxxxxxxx.cloudfront.net/
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 12
Connection: keep-alive
ETag: "e59ff97941044f85df5297e1c302d260"
___snipped___
Server: AmazonS3
X-Frame-Options: DENY
___snipped___
# PATH: `/login`
$ curl -sSL -D - https://dxxxxxxx.cloudfront.net/login
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 12
Connection: keep-alive
ETag: "e59ff97941044f85df5297e1c302d260"
___snipped___
Server: AmazonS3
X-Frame-Options: DENY
___snipped___
I have a website (say example.com) that is hosted on AWS S3 (bucket name - "xyz") and is serving traffic via a Cloudfront distribution. The CDN has the Origin mapped to the S3 as per usual practice to deliver the content. The DNS (Route 53) record is mapped to this CDN distribution.
I recently deleted an object from this S3 bucket, say xyz/hello/hello-jon
So when the users are trying to hit example.com/hello/hello-jon, they are getting a 404 error as expected. I'd like to redirect this to a different page that is loading from a different object in the same bucket, say, xyz/world/world-right. So that when the users try to hit the URL example.com/hello/hello-jon they should be redirected to example.com/world/world-right page.
I referred to several Amazon docs and finally settled on this one:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-page-redirect.html
I tried the second example, Example 2: Redirect requests for a deleted folder to a page. The following JSON-based rule was set up in the Redirection Rules of the bucket xyz:
[
    {
        "Condition": {
            "KeyPrefixEquals": "hello/hello-jon/"
        },
        "Redirect": {
            "ReplaceKeyPrefixWith": "world/world-right/"
        }
    }
]
And the redirection did work, but the result was different from what I expected. I'm getting the resulting URL as:
http://S3-bucket-name.S3-bucket-region.amazonaws.com/world/world-right/
Instead of https://www.example.com/world/world-right/
Could you please help me in resolving this issue or provide an alternative that could work in this scenario?
Make these changes:
[
    {
        "Condition": {
            "KeyPrefixEquals": "hello/hello-jon/"
        },
        "Redirect": {
            "HostName": "www.example.com",
            "HttpRedirectCode": "301",
            "Protocol": "https",
            "ReplaceKeyPrefixWith": "world/world-right/"
        }
    }
]
The HostName element is mentioned in the documentation for redirect rules.
I want to redirect my root domain to www.domain.com. The site is published using s3 CloudFront and route53.
I have seen a lot of docs on how to redirect using S3, but I cannot do it because a bucket with my root domain name already exists somewhere, so I cannot create a bucket named after my root domain. Also, I doubt whether this S3 redirection would work for HTTPS requests.
I haven't seen any blog on redirecting without using the s3 bucket.
So how can I redirect the HTTP/HTTPS root domain to the www subdomain in AWS?
You cannot redirect using Route 53 (it is a DNS configuration service after all, whereas redirects are an HTTP operation).
If you cannot use S3, another solution could be to use CloudFront with a Lambda@Edge function.
If the hostname is not the www domain you could perform the redirect to the www domain.
The function might look similar to the below
def lambda_handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    response = event["Records"][0]["cf"]["response"]
    # Generate an HTTP redirect response with a 302 status code and Location header.
    if request['headers']['host'][0]['value'] != 'www.example.com':
        response = {
            'status': '302',
            'statusDescription': 'Found',
            'headers': {
                'location': [{
                    'key': 'Location',
                    'value': 'https://www.example.com{}'.format(request['uri'])
                }]
            }
        }
    return response
This is how I solved it:
Create a CloudFront distribution for your root domain, e.g. domain.com
Set an Application Load Balancer as an origin for this distribution
Create a rule for your ALB listener:
if (all match)
HTTP Host Header is domain.com
Then
Redirect to #{protocol}://www.domain.com:#{port}/#{path}?#{query}
Status code: HTTP_301
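For reference, the same listener rule can probably be created from the CLI with something along these lines (a rough sketch; the listener ARN and priority are placeholders, and the shorthand syntax may need adjusting):
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:REGION:ACCOUNT_ID:listener/app/my-alb/LOAD_BALANCER_ID/LISTENER_ID \
    --priority 10 \
    --conditions Field=host-header,Values=domain.com \
    --actions 'Type=redirect,RedirectConfig={Protocol=HTTPS,Port=443,Host=www.domain.com,Path=/#{path},Query=#{query},StatusCode=HTTP_301}'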
I'm receiving the following error on a couple of Chrome browsers but not all. Not sure entirely what the issue is at this point.
Font from origin https://ABCDEFG.cloudfront.net has been blocked from loading by Cross-Origin Resource Sharing policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin https://sub.domain.example is therefore not allowed access.
I have the following CORS Configuration on S3
<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedHeader>*</AllowedHeader>
        <AllowedMethod>GET</AllowedMethod>
    </CORSRule>
</CORSConfiguration>
The request
Remote Address:1.2.3.4:443
Request URL:https://abcdefg.cloudfront.net/folder/path/icons-f10eba064933db447695cf85b06f7df3.woff
Request Method:GET
Status Code:200 OK
Request Headers
Accept:*/*
Accept-Encoding:gzip,deflate
Accept-Language:en-US,en;q=0.8
Cache-Control:no-cache
Connection:keep-alive
Host:abcdefg.cloudfront.net
Origin:https://sub.domain.example
Pragma:no-cache
Referer:https://abcdefg.cloudfront.net/folder/path/icons-e283e9c896b17f5fb5717f7c9f6b05eb.css
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.94 Safari/537.36
All other requests from Cloudfront/S3 work properly, including JS files.
Add this rule to your .htaccess
Header add Access-Control-Allow-Origin "*"
even better, as suggested by @david thomas, you can use a specific domain value, e.g.
Header add Access-Control-Allow-Origin "your-domain.example"
Since around Sep/Oct 2014, Chrome subjects fonts to the same CORS checks that Firefox already applied: https://code.google.com/p/chromium/issues/detail?id=286681. There is a discussion of this at https://groups.google.com/a/chromium.org/forum/?fromgroups=#!topic/blink-dev/TT9D5-Zfnzw
Given that for fonts the browser may do a preflight check, your S3 policy needs to allow the CORS request headers as well. You can check your page in, say, Safari (which at present doesn't do CORS checking for fonts) and Firefox (which does) to confirm this is the problem described.
See the Stack Overflow answer on Amazon S3 CORS (Cross-Origin Resource Sharing) and Firefox cross-domain font loading for the Amazon S3 CORS details.
NB: because this used to apply to Firefox only, it may help to search for Firefox rather than Chrome.
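To double-check from the command line, a request that includes an Origin header should come back with an Access-Control-Allow-Origin header once the CORS configuration is correct (the URL and Origin below are taken from the question):
curl -sS -D - -o /dev/null \
    -H "Origin: https://sub.domain.example" \
    https://abcdefg.cloudfront.net/folder/path/icons-f10eba064933db447695cf85b06f7df3.woff
If Access-Control-Allow-Origin is missing from the response headers, the problem is in the S3/CloudFront configuration rather than in the browser.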
I was able to solve the problem by simply adding <AllowedMethod>HEAD</AllowedMethod> to the CORS policy of the S3 Bucket.
Example:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
Nginx:
location ~* \.(eot|ttf|woff)$ {
    add_header Access-Control-Allow-Origin '*';
}
AWS S3:
Select your bucket
Click Properties at the top right
Permissions => Edit CORS Configuration => Save
http://schock.net/articles/2013/07/03/hosting-web-fonts-on-a-cdn-youre-going-to-need-some-cors/
On June 26, 2014 AWS released proper Vary: Origin behavior on CloudFront, so now you just need to:
Set a CORS Configuration for your S3 bucket:
<AllowedOrigin>*</AllowedOrigin>
In CloudFront -> Distribution -> Behaviors for this origin, use the Forward Headers: Whitelist option and whitelist the 'Origin' header.
Wait for ~20 minutes while CloudFront propagates the new rule
Now your CloudFront distribution should cache different responses (with proper CORS headers) for different client Origin headers.
The only thing that has worked for me (probably because I had inconsistencies with www. usage):
Paste this in to your .htaccess file:
<IfModule mod_headers.c>
<FilesMatch "\.(eot|font.css|otf|ttc|ttf|woff)$">
Header set Access-Control-Allow-Origin "*"
</FilesMatch>
</IfModule>
<IfModule mod_mime.c>
# Web fonts
AddType application/font-woff woff
AddType application/vnd.ms-fontobject eot
# Browsers usually ignore the font MIME types and sniff the content,
# however, Chrome shows a warning if other MIME types are used for the
# following fonts.
AddType application/x-font-ttf ttc ttf
AddType font/opentype otf
# Make SVGZ fonts work on iPad:
# https://twitter.com/FontSquirrel/status/14855840545
AddType image/svg+xml svg svgz
AddEncoding gzip svgz
</IfModule>
# rewrite www.example.com → example.com
<IfModule mod_rewrite.c>
RewriteCond %{HTTPS} !=on
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^ http://%1%{REQUEST_URI} [R=301,L]
</IfModule>
http://ce3wiki.theturninggate.net/doku.php?id=cross-domain_issues_broken_web_fonts
I had this same problem and this link provided the solution for me:
http://www.holovaty.com/writing/cors-ie-cloudfront/
The short version of it is:
Edit S3 CORS config (my code sample didn't display properly)
Note: This is already done in the original question
Note: the code provided is not very secure, more info in the linked page.
Go to the "Behaviors" tab of your distribution and click to edit
Change "Forward Headers" from “None (Improves Caching)” to “Whitelist.”
Add “Origin” to the "Whitelist Headers" list
Save the changes
Your CloudFront distribution will update, which takes about 10 minutes. After that, all should be well; you can verify by checking that the CORS-related error messages are gone from the browser.
For those using Microsoft products with a web.config file:
Merge this with your web.config.
To allow on any domain replace value="domain" with value="*"
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <system.webServer>
        <httpProtocol>
            <customHeaders>
                <add name="Access-Control-Allow-Origin" value="domain" />
            </customHeaders>
        </httpProtocol>
    </system.webServer>
</configuration>
If you don't have permission to edit web.config, then add this line in your server-side code.
Response.AppendHeader("Access-Control-Allow-Origin", "domain");
For AWS S3, setting the Cross-origin resource sharing (CORS) to the following worked for me:
[
    {
        "AllowedHeaders": [
            "Authorization"
        ],
        "AllowedMethods": [
            "GET",
            "HEAD"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
There is a nice writeup here.
Configuring this in nginx/apache is a mistake.
If you are using a hosting company you can't configure the edge.
If you are using Docker, the app should be self contained.
Note that some examples use connectHandlers, but this only sets headers on the document. Using rawConnectHandlers applies to all assets served (fonts/css/etc.).
// HSTS only the document - don't function over http.
// Make sure you want this as it won't go away for 30 days.
WebApp.connectHandlers.use(function(req, res, next) {
    res.setHeader('Strict-Transport-Security', 'max-age=2592000; includeSubDomains'); // 2592000s / 30 days
    next();
});

// CORS all assets served (fonts/etc)
WebApp.rawConnectHandlers.use(function(req, res, next) {
    res.setHeader('Access-Control-Allow-Origin', '*');
    return next();
});
This would be a good time to look at browser policy like framing, etc.
Late to the party, but I just ran into this problem and solved it with the following settings in my AWS bucket configuration (Permissions tab). The required format is no longer XML but JSON:
[
    {
        "AllowedHeaders": [
            "Content-*"
        ],
        "AllowedMethods": [
            "GET",
            "HEAD"
        ],
        "AllowedOrigins": [
            "https://www.yourdomain.example",
            "https://yourdomain.example"
        ],
        "ExposeHeaders": []
    }
]
If you use Node.js as your server, just add a middleware that sets the Access-Control-Allow-Origin header on every response, like this:
app.use((req, res, next) => {
    res.header('Access-Control-Allow-Origin', '*');
    next();
});
The response needs to carry this header for the requesting origin.
If you want to allow all the fonts from a folder for a specific domain then you can use this:
<location path="assets/font">
<system.webServer>
<httpProtocol>
<customHeaders>
<add name="Access-Control-Allow-Origin" value="http://localhost:3000" />
</customHeaders>
</httpProtocol>
</system.webServer>
</location>
where assets/font is the folder that contains all the fonts and http://localhost:3000 is the origin you want to allow.
Add this to your .htaccess file. This solved my problem.
<FilesMatch "\.(eot|otf|ttf|woff|woff2)$">
    Header always set Access-Control-Allow-Origin "*"
</FilesMatch>
A working solution for Heroku is here: http://kennethjiang.blogspot.com/2014/07/set-up-cors-in-cloudfront-for-custom.html
(quotes follow):
Below is exactly what you can do if you are running your Rails app on Heroku and using CloudFront as your CDN. It was tested on Ruby 2.1 + Rails 4, Heroku Cedar stack.
Add CORS HTTP headers (Access-Control-*) to font assets
Add the font_assets gem to your Gemfile.
bundle install
Add config.font_assets.origin = '*' to config/application.rb. If you want more granular control, you can set different origin values per environment, e.g., in config/environments/production.rb.
curl -I http://localhost:3000/assets/your-custom-font.ttf
Push code to Heroku.
Configure Cloudfront to forward CORS HTTP headers
In CloudFront, select your distribution; under the "Behaviors" tab, select and edit the entry that controls your font delivery (for most simple Rails apps you only have one entry here). Change Forward Headers from "None" to "Whitelist", and add the following headers to the whitelist:
Access-Control-Allow-Origin
Access-Control-Allow-Methods
Access-Control-Allow-Headers
Access-Control-Max-Age
Save it and that's it!
Caveat: I found that sometimes Firefox wouldn't refresh the fonts even after the CORS error was gone. In this case, keep refreshing the page a few times to convince Firefox that you are really determined.