Add X-Frame-Options header to all URLs using CloudFront functions - amazon-web-services

I have a single-page application and I'm trying to prevent clickjacking by adding the X-Frame-Options header to the HTML responses. My website is hosted on S3 and served through CloudFront.
The CloudFront distribution is configured to send index.html by default:
Default root object
index.html
In the Error Pages section I configured the 404 status code to also return index.html. This way all URLs that don't exist in S3 return the default HTML, e.g. /login, /admin, etc.
Update: The 403 status code is also configured the same way.
Then I created a CloudFront function as described here and assigned it to the Viewer response event:
function handler(event) {
    var response = event.response;
    var headers = response.headers;
    headers['x-frame-options'] = {value: 'DENY'};
    return response;
}
This works, but only for /:
curl -v https://<MYSITE.com>
....
< x-frame-options: DENY
For other URLs it doesn't work - the x-frame-options header is missing:
curl -v https://<MYSITE.com>/login
....
< x-cache: Error from cloudfront
My question is: why does my CloudFront function not append the header to the error response, and what can I do to add it?

I understand that your questions are:
Q1: Why does the CloudFront function work for /?
Q2: Why doesn't the CloudFront function work for other URL paths?
Please refer to the responses below:
A1: You have specified a Default Root Object [1] (e.g. index.html), which is the object returned when a user requests the root URL. Because CloudFront returns that object with a 200 OK, the CloudFront Function is invoked on the viewer response event.
A2: You probably have not granted the s3:ListBucket permission in your S3 bucket policy (e.g. to the OAI). As a result, you get Access Denied (403) errors for missing objects instead of 404 Not Found errors. In other words, the Error Pages configuration is not applied in this case, and the CloudFront Function is not invoked because the HTTP status code is higher than 399 [2].
[Updated] Suggestion:
CloudFront does not invoke edge functions for viewer response events when the origin returns HTTP status code 400 or higher. However, Lambda@Edge functions for origin response events are invoked for all origin responses. In this scenario, I suggest using Lambda@Edge instead of CloudFront Functions.
For your convenience, please refer to the sample Lambda@Edge code:
exports.handler = async (event, context) => {
    const response = event.Records[0].cf.response;
    const headers = response.headers;
    headers['x-frame-options'] = [{
        key: 'X-Frame-Options',
        value: 'DENY',
    }];
    return response;
};
FYI, here are my curl test results:
# PATH: `/`
$ curl -sSL -D - https://dxxxxxxx.cloudfront.net/
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 12
Connection: keep-alive
ETag: "e59ff97941044f85df5297e1c302d260"
___snipped___
Server: AmazonS3
X-Frame-Options: DENY
___snipped___
# PATH: `/login`
$ curl -sSL -D - https://dxxxxxxx.cloudfront.net/login
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 12
Connection: keep-alive
ETag: "e59ff97941044f85df5297e1c302d260"
___snipped___
Server: AmazonS3
X-Frame-Options: DENY
___snipped___

Related

returning response with set-cookie header in AWS Cloudfront origin request

In my CloudFront origin-request Lambda@Edge function I want to return a response that sets a cookie in the browser and redirects to another page. I do it with the following return statement:
return {
    status: '302',
    statusDescription: 'Found',
    headers: {
        location: [
            { key: 'Location', value: 'my.website.com' },
        ],
        'set-cookie': [
            { key: 'Set-Cookie', value: 'key=value; Max-Age=600' },
        ]
    }
};
Unfortunately CloudFront seems to remove/ignore this Set-Cookie header, and the browser receives a response without it. Interestingly, the exact same code works when placed in the CloudFront viewer-request function. Is there a way to make the origin-request Lambda keep the Set-Cookie header in the response?
The solution turned out to be a cache policy with the Cookies - Include specified cookies option enabled and the proper cookie name whitelisted. The behaviour in the question is caused (as the documentation states) by:
Don't forward cookies to your origin – CloudFront doesn't cache your objects based on cookies sent by the viewer. In addition, CloudFront removes cookies before forwarding requests to your origin, and removes Set-Cookie headers from responses before returning responses to your viewers.
To prevent caching based on the whitelisted cookie, add the following header to the response: Cache-Control: no-cache="Set-Cookie".
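For reference, here is a minimal sketch of the two pieces put together: the same 302 response from the question, plus the Cache-Control: no-cache="Set-Cookie" header. The redirect target and cookie value are the placeholders from the question, and the cache behavior is assumed to whitelist the key cookie.
'use strict';

// Origin-request Lambda@Edge sketch: return the redirect, set the cookie, and
// ask caches not to store the Set-Cookie header.
exports.handler = async (event) => {
    return {
        status: '302',
        statusDescription: 'Found',
        headers: {
            location: [
                { key: 'Location', value: 'my.website.com' }, // placeholder target
            ],
            'set-cookie': [
                { key: 'Set-Cookie', value: 'key=value; Max-Age=600' },
            ],
            'cache-control': [
                { key: 'Cache-Control', value: 'no-cache="Set-Cookie"' },
            ],
        },
    };
};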

The CORS header is present on PUT requests but not on OPTIONS requests

I have a Google Cloud Storage bucket with the following CORS configuration:
[
    {
        "origin": ["http://localhost:8080"],
        "responseHeader": [
            "Content-Type",
            "Access-Control-Allow-Origin",
            "Origin"
        ],
        "method": ["GET", "HEAD", "DELETE", "POST", "PUT", "OPTIONS"],
        "maxAgeSeconds": 3600
    }
]
I am generating a signed URL with the following code:
let bucket = storage.bucket(bucketName);
let file = bucket.file(key);
const options = {
    version: "v4",
    action: "write",
    expires: Date.now() + 15 * 60 * 1000, // 15 minutes
    contentType: "application/zip"
};
let url = (await file.getSignedUrl(options))[0];
For my requests I am using the following headers:
Origin: http://localhost:8080
Content-Type: application/zip
When I try using a PUT request to upload the data, everything works fine and I get the Access-Control-Allow-Origin header containing my Origin. But when I do an OPTIONS request with the exact same headers, the Access-Control-Allow-Origin header is not returned. I have tried many alterations to my CORS config, but none have worked, such as:
Changing the origins to *
The different changes described in the Stack Overflow answer and its comments.
The different changes described in GitHub Google Storage API Issues
I solved my own problem with some help from my colleagues. When I was testing in Postman, the CORS headers were not returned in response to the OPTIONS request because the request was missing the Access-Control-Request-Method header. When I added this header it worked fine.
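For a quick check outside Postman, here is a minimal sketch (Node 18+ with the global fetch; <SIGNED_URL> is a placeholder for the URL from getSignedUrl above) that reproduces the browser's preflight. Without Access-Control-Request-Method the CORS headers are omitted; with it, Access-Control-Allow-Origin comes back.
// Sketch: send the same preflight the browser would send before the PUT.
const signedUrl = "<SIGNED_URL>"; // placeholder

(async () => {
    const preflight = await fetch(signedUrl, {
        method: "OPTIONS",
        headers: {
            "Origin": "http://localhost:8080",
            "Access-Control-Request-Method": "PUT",
            "Access-Control-Request-Headers": "Content-Type",
        },
    });
    console.log(preflight.status, preflight.headers.get("access-control-allow-origin"));
})();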
Notice that, as stated in the public documentation, you shouldn't specify OPTIONS in your CORS configuration, and note that Cloud Storage only supports DELETE, GET, HEAD, POST, PUT for the XML API and DELETE, GET, HEAD, PATCH, POST, PUT for the JSON API. So I believe that what you are experiencing with the OPTIONS method is expected behavior.

AWS Cloudfront redirects TOO_MANY times

I followed Redirecting Internet Traffic to Another Domain and Redirecting HTTP Requests to HTTPS.
This is my current setup.
S3: 3 buckets for web hosting
1) example.com (contains index.html, has a bucket policy)
2) www.example.com (for request redirection, no policy, redirects to example.com)
3) bucket-for-redirection (for CloudFront, no policy, redirects to example.com, HTTPS protocol)
CloudFront: 1 distribution
CNAMEs: example.com, www.example.com
Origin Domain Name and Path: bucket-for-redirection.s3-website.ap-northeast-2.amazonaws.com
Origin ID: S3-Website-bucket-for-redirection.s3-website.ap-northeast-2.amazonaws.com
Route 53
Type A records for 2 domains
1) example.com: Alias target is CloudFront
2) www.example.com: Alias target is S3
But my site returns ERR_TOO_MANY_REDIRECTS. Is there something I missed?
Solution
I removed all the buckets except one (bucket-for-redirection).
Put the resources (e.g. index.html) in there.
Create the bucket policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::your-bucket-name/*"
        }
    ]
}
Then, in the bucket properties enable "Static Web Hosting", select the first option, and enter 'index.html' (or whatever your index document is).
Make sure the alias target in Route 53 is the CloudFront distribution.
(If you're Korean, please refer to my blog.)
This problem is caused by too many redirects: the user goes to one URL, is then sent to another URL, then to another, and so on. The web browser detects the multiple redirects and displays an error to the user. Otherwise the user could get stuck in a loop, constantly moving from one URL to another and never reaching the desired web page.
There is not enough information in your question about how you have configured S3 and CloudFront, so I will explain how to determine the exact problem.
To debug this problem use curl.
Let's say that your goal is that all users go to https://www.example.com. First verify that this URL does not redirect. Note: some web servers redirect a DNS name to the DNS name plus the home page URL. If this is the case, test with the home page URL (second command).
curl -i https://www.example.com > data.txt
OR (replace with your home page url):
curl -i https://www.example.com/index.html > data.txt
Now open the file data.txt in an editor. The first line should be HTTP/1.1 200 or HTTP/2 200. The key is the 200 (anything between 200 and 299). If instead the number is 301 (Moved Permanently) or 307 (Temporary Redirect), then you are redirecting the user; see my example below. This is most likely the problem. The key is to figure out why your desired DNS name is redirecting and what it is redirecting to. Then find the configuration file / service that is redirecting incorrectly.
If the previous command works correctly, then test the other supported DNS names and see if they redirect correctly to your desired DNS name (https://www.example.com). A common problem is that the redirects go to the wrong page, which then loops back and forth.
Your goal is that the webserver returns the following (which includes both the HTTP headers and HTML body). The important items are the status code (301 or 307) and the redirect location (5th line below). The HTML body is ignored for redirects.
Example correct redirect for everything but the desired DNS name:
HTTP/2 301
date: Fri, 08 Mar 2019 04:17:18 GMT
server: Apache
x-frame-options: SAMEORIGIN
location: https://www.example.com/
content-length: 232
content-type: text/html; charset=iso-8859-1
via: 1.1 google
alt-svc: quic=":443"; ma=2592000; v="46,44,43,39"
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved here.</p>
</body></html>
Use curl and test all supported possibilities:
curl -i http://www.example.com
This should redirect to https://www.example.com
curl -i http://example.com
This should redirect to https://www.example.com
curl -i https://example.com
This should redirect to https://www.example.com
Repeat the above tests using your home page URL and a few sub-pages.
A common problem that I see even on correctly running websites is that the user is redirected more than once. Properly designed redirects should send the user to the correct location in one step, not in multiple steps. Multiple redirects slow down getting to the correct page.
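If you want to automate these checks, here is a minimal sketch in Node (the starting URLs are placeholders; replace them with the names your distribution actually serves). It follows the Location headers manually and prints each hop, so a loop between example.com and www.example.com becomes obvious.
// Sketch: print the redirect chain for each starting URL.
const http = require("http");
const https = require("https");

// Issue a HEAD request and report the status code and Location header.
function head(url) {
    return new Promise((resolve, reject) => {
        const lib = url.startsWith("https") ? https : http;
        const req = lib.request(url, { method: "HEAD" }, (res) => {
            resolve({ status: res.statusCode, location: res.headers.location });
            res.resume(); // discard any body
        });
        req.on("error", reject);
        req.end();
    });
}

// Follow redirects manually, printing each hop, with a cap to avoid infinite loops.
async function printChain(start) {
    let url = start;
    console.log(`Chain for ${start}:`);
    for (let hop = 0; hop < 10; hop++) {
        const { status, location } = await head(url);
        console.log(`  ${status} ${url}`);
        if (status < 300 || status >= 400 || !location) break; // not a redirect
        url = new URL(location, url).toString();
    }
}

(async () => {
    for (const u of [
        "http://example.com",
        "http://www.example.com",
        "https://example.com",
        "https://www.example.com",
    ]) {
        await printChain(u);
    }
})();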

hls.js CORS using AWS Cloudfront issues with Cookies

I'm trying to set up video streaming using CloudFront's HLS capabilities, but I'm having trouble getting Hls.js to send my credential cookies with the requests.
I already have CloudFront configured to forward cookies and Access-Control headers. I have also set my S3 CORS policy to include GET and HEAD.
The problem I'm having is that even though I'm setting xhr.withCredentials = true and the cookies are defined in the session, when I look at the request in the Chrome console I can see that the HLS request has no cookies. As a result I get an error response from CloudFront saying I need to include the credential cookies.
Code:
First I do an AJAX request to my server to generate the cookies. The server returns three Set-Cookie headers, which are stored as session cookies in the browser:
$.ajax({
    type: 'GET',
    url: 'http://subdomain.mydomain.com:8080/service-webapp/rest/resourceurl/cookies/98400738-a415-4e32-898c-9592d48d1ad7',
    success: function (data) {
        playMyVideo();
    },
    headers: { "Authorization": 'Bearer XXXXXX' }
});
Once the cookies are stored, the test function is called to play my video using HLS.js:
function test() {
    if (Hls.isSupported()) {
        var video = document.getElementById('video');
        var config = {
            debug: true,
            xhrSetup: function (xhr, url) {
                xhr.withCredentials = true; // do send cookie
                xhr.setRequestHeader("Access-Control-Allow-Headers", "Content-Type, Accept, X-Requested-With");
                xhr.setRequestHeader("Access-Control-Allow-Origin", "http://sybdomain.domain.com:8080");
                xhr.setRequestHeader("Access-Control-Allow-Credentials", "true");
            }
        };
        var hls = new Hls(config);
        // bind them together
        hls.attachMedia(video);
        hls.on(Hls.Events.MEDIA_ATTACHED, function () {
            console.log("video and hls.js are now bound together !");
            hls.loadSource("http://cloudfrontDomain.net/small.m3u8");
            hls.on(Hls.Events.MANIFEST_PARSED, function (event, data) {
                console.log("manifest loaded, found " + data.levels.length + " quality level");
            });
        });
    }
    video.play();
}
As you can see below, the HLS OPTIONS and GET requests do not include the session cookies:
HLS OPTIONS request:
OPTIONS /hls/98400738-a415-4e32-898c-9592d48d1ad7/small.m3u8 HTTP/1.1
Host: cloudfrontDomain.net
Connection: keep-alive
Access-Control-Request-Method: GET
Origin: subdomain.mydomain.com:8080
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36
Access-Control-Request-Headers: access-control-allow-credentials,access-control-allow-headers,access-control-allow-origin
Accept: */*
Referer: http://subdomain.mydomain.com:8080/play.html
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8,es;q=0.6
CloudFront response:
HTTP/1.1 200 OK
Content-Length: 0
Connection: keep-alive
Date: Fri, 07 Jul 2017 00:16:31 GMT
Access-Control-Allow-Origin: http://subdomain.mydomain.com:8080
Access-Control-Allow-Methods: GET, HEAD
Access-Control-Allow-Headers: access-control-allow-credentials, access-control-allow-headers, access-control-allow-origin
Access-Control-Max-Age: 3000
Access-Control-Allow-Credentials: true
Server: AmazonS3
Vary: Origin,Access-Control-Request-Headers,Access-Control-Request-Method
Age: 845
X-Cache: Hit from cloudfront
Via: 1.1 cloudfrontDomain.net (CloudFront)
X-Amz-Cf-Id: XXXXXX
The subsequent HLS GET request, missing the cookies:
GET /hls/98400738-a415-4e32-898c-9592d48d1ad7/small.m3u8 HTTP/1.1
Host: cloudfrontDomain.net
Connection: keep-alive
Origin: http://subdomain.mydomain.com:8080
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.104 Safari/537.36
Access-Control-Allow-Origin: http://subdomain.mydomain.com:8080
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: Content-Type, Accept, X-Requested-With
Accept: */*
Referer: http://subdomain.mydomain.com:8080/play.html
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8,es;q=0.6
I've spent 4 days trying to figure this out. I've done plenty of research, but I just can't figure out the solution. I'm new to CORS, so maybe I'm not understanding some principle. I thought that if the cookies are stored in the session they would be sent once xhr.withCredentials is enabled, but that doesn't seem to be the case.
Another thing I noticed is that the GET request generated by HLS.js is not setting any of the XMLHttpRequest headers.
Thanks for your help :)
I was finally able to make it work. Thanks Michael for helping! It turned out to be a mix of not understanding how CORS works and not configuring the AWS services properly. The main point is to avoid cross-domain requests by using CloudFront to serve both your web service and your S3 bucket. One important note: any change you make in AWS takes time to propagate. As a new AWS dev I didn't know that and got very frustrated making changes that seemed to have no effect. Here is the solution:
1) Create your S3 bucket.
2) Create a CloudFront distribution.
3) In the distribution, set your web-service domain as the default origin.
4) Add a second origin, and add a behavior in the distribution to forward all .m3u8 and .ts files to your S3 bucket.
5) When you add your bucket origin, make sure you check the restrict access and update bucket policy checkboxes.
6) In your bucket behavior, make sure you whitelist and forward the required headers and cookies. This can all be set in the AWS console.
7) If you are using different ports in your service, make sure you set those in the distribution too.
8) Go to your S3 bucket settings and update the CORS config to the following:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
If you are using HLS.js, it is important to set the following config:
var config = {
    debug: true,
    xhrSetup: function (xhr, url) {
        xhr.withCredentials = true; // do send cookie
        xhr.setRequestHeader("Access-Control-Allow-Headers", "Content-Type, Accept, X-Requested-With");
        xhr.setRequestHeader("Access-Control-Allow-Origin", "http://sybdomain.domain.com:8080");
        xhr.setRequestHeader("Access-Control-Allow-Credentials", "true");
    }
};
var hls = new Hls(config);
Other important notes:
When you serve a cookie with your web service you can set the Path to "/", and it will apply to all requests on your domain; see the sketch below.
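Here is a minimal sketch of that note, assuming (hypothetically) that the cookie endpoint is an Express route and that the credential cookies are CloudFront signed cookies; the route path and cookie values are placeholders.
// Sketch: issue the credential cookies with Path=/ so the browser also sends them
// on the .m3u8/.ts requests served through the same CloudFront domain.
const express = require("express");
const app = express();

app.get("/rest/resourceurl/cookies/:videoId", (req, res) => {
    const opts = { path: "/" }; // Path=/ applies the cookies to the whole domain
    res.cookie("CloudFront-Key-Pair-Id", "<key-pair-id>", opts); // placeholder values
    res.cookie("CloudFront-Policy", "<policy>", opts);
    res.cookie("CloudFront-Signature", "<signature>", opts);
    res.sendStatus(204); // the page only needs the Set-Cookie headers
});

app.listen(8080);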
For anyone who might be having this issue only on Chrome for Android: our problem was that the browser was caching the m3u8 files and giving the same CORS error. The solution was to append a timestamp parameter to the query string of the file URL:
var config = {
    xhrSetup: function (xhr, url) {
        xhr.withCredentials = true; // do send cookies
        url = url + '?t=' + new Date().getTime();
        xhr.open('GET', url, true);
    }
};
var hls = new Hls(config);

Why does my serverless Lambda function reject Cache-Control header?

I'm using FineUploader to upload files to S3. While utilizing the DELETE functionality I get the following error:
XMLHttpRequest cannot load
https://xxxxxxx.execute-api.us-east-1.amazonaws.com/prod/deleteS3File?.
Request header field Cache-Control is not allowed by
Access-Control-Allow-Headers in preflight response.
The lambda function was created using the awesome Serverless Framework with the following configuration:
functions:
  deleteS3File:
    handler: handler.deleteS3File
    events:
      - http:
          path: deleteS3File
          method: POST
          integration: lambda
          cors: true
          response:
            headers:
              Access-Control-Allow-Origin: "*"
Any idea what this error means for a Lambda function and how to tackle it?
The POST request triggers a preflight OPTIONS request that you don't support.
So, you need to create a method for OPTIONS that returns status code 200 (success) with the expected headers.
For both the OPTIONS and POST methods, try the following headers:
Access-Control-Allow-Origin: "*"
Access-Control-Allow-Methods: "GET, HEAD, OPTIONS, POST, PUT, DELETE"
Access-Control-Allow-Headers: "Access-Control-Allow-Headers, Cache-Control, Origin, Accept, X-Requested-With, Content-Type, Access-Control-Request-Method, Access-Control-Request-Headers"
You may fine-tune the headers later to allow just what you need.
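As a sketch of that idea, here is a Node handler that could back the OPTIONS method (the function name is hypothetical; with integration: lambda the headers would normally be mapped in the serverless response template rather than returned directly as below, which is the lambda-proxy shape).
// Sketch: answer the preflight with 200 and the CORS headers listed above.
module.exports.corsPreflight = async () => ({
    statusCode: 200,
    headers: {
        "Access-Control-Allow-Origin": "*",
        "Access-Control-Allow-Methods": "GET, HEAD, OPTIONS, POST, PUT, DELETE",
        "Access-Control-Allow-Headers":
            "Access-Control-Allow-Headers, Cache-Control, Origin, Accept, X-Requested-With, " +
            "Content-Type, Access-Control-Request-Method, Access-Control-Request-Headers",
    },
    body: "",
});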