How to restrict CloudFront access to my domain only? - amazon-web-services

I need to find a solution for this. Basically I have one .m3u8 video and I want to restrict it so it can only be played on my domain. What people are doing right now is stealing my video and playing it on their own sites, which causes a big overload and a lot of bandwidth usage...
d23ek3kf.cloudfront.net/video.m3u8 > mydomain.com > video accessible
d23ek3kf.cloudfront.net/video.m3u8 > randomdomain.com > video not accessible

This solution does not prevent anyone from downloading your content and then uploading it to their own site, but it does prevent other sites from hot-linking to your content.
Create a Lambda@Edge Viewer Request trigger. This allows you to inspect the request before the cache is checked, and either allow processing to continue or return a generated response.
'use strict';

exports.handler = (event, context, callback) => {
    // extract the request object
    const request = event.Records[0].cf.request;

    // extract the HTTP `Referer` header if present,
    // otherwise an empty string to simplify the matching logic
    const referer = (request.headers['referer'] || [{ value: '' }])[0].value;

    // verify that the referring page is yours
    // replace example.com with your domain
    // add other conditions with logical or ||
    if (referer.startsWith('https://example.com/') ||
        referer.startsWith('http://example.com/')) {
        // return control to CloudFront and allow the request to continue normally
        return callback(null, request);
    }

    // if we get here, the referring page is not yours:
    // generate a 403 Forbidden response
    // you can customize the body, but the size is limited to ~40 KB
    return callback(null, {
        status: '403',
        body: 'Access denied.',
        headers: {
            'cache-control': [{ key: 'Cache-Control', value: 'private, no-cache, no-store, max-age=0' }],
            'content-type': [{ key: 'Content-Type', value: 'text/plain' }],
        }
    });
};

The way to do it is using signed URLs. Your website will generate a signed URL for the video the user wants to play, and CloudFront will allow the content to be downloaded. Signed URLs expire after a specified amount of time.
Any other website will only have the plain link to the video, which is not enough to download it. Take a look at the AWS documentation here to understand the details and the mechanism to achieve it: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
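As a rough illustration, here is a minimal sketch of generating a signed URL in Node.js, assuming the @aws-sdk/cloudfront-signer package and a CloudFront key pair you have already created (the key pair ID and environment variable below are placeholders):

const { getSignedUrl } = require("@aws-sdk/cloudfront-signer");

const signedUrl = getSignedUrl({
    url: "https://d23ek3kf.cloudfront.net/video.m3u8",
    keyPairId: "K2JCJMDEHXQW5F", // placeholder: your CloudFront key pair ID
    privateKey: process.env.CLOUDFRONT_PRIVATE_KEY, // PEM-encoded private key
    dateLessThan: new Date(Date.now() + 60 * 60 * 1000).toISOString(), // expires in 1 hour
});
// embed signedUrl in your player; the bare URL will be refused once the
// distribution is configured to require signed requests

One caveat for HLS: the player also fetches the individual segments referenced by the .m3u8, so each of those requests must be authorized too; signed cookies are often a better fit for that case.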

Related

How can I set alias redirection for an AWS S3 website with CloudFront?

I have a main website at the apex domain (xxx.com) working. Now I have a short-term project, a landing page only, built with Gatsby 4 (React), deployed as static files to an S3 bucket, with a proper CloudFront distribution set up for HTTPS.
Now https://landingpage.xxx.com is working, but it's long and hard to remember, so I also want to set a short alias:
when a user enters lp.xxx.com in the address bar, it should do an HTTP 301 redirect to the destination https://landingpage.xxx.com. I have set up lp.xxx.com in Route 53 as a CNAME to landingpage.xxx.com and added lp.xxx.com to the CloudFront alternate names.
So now https://lp.xxx.com is working (as well as https://landingpage.xxx.com), but they work like two different entrances, with no redirection.
I wonder, is there an easy redirection rule setup I missed somewhere? I want to avoid changing the Gatsby 4 (React) code; does this require some Lambda@Edge function?
Using a CloudFront Function redirect resolved it for me,
modified from this sample code: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/example-function-redirect-url.html
function handler(event) {
    var request = event.request;
    var host = request.headers.host.value;
    if (host !== 'landingpage.xxx.com') {
        // use statusCode 301 / 'Moved Permanently' if you want a permanent redirect;
        // append request.uri to the location value to preserve the request path
        var response = {
            statusCode: 302,
            statusDescription: 'Found',
            headers: {
                "location": { "value": 'https://landingpage.xxx.com' }
            }
        };
        return response;
    }
    return request;
}

XMLHttpRequest error while using http post flutter web

I am facing an XMLHttpRequest error while making an HTTP POST call to my API (AWS API Gateway). My current flow is Flutter web -> API Gateway -> Lambda -> RDS.
I know there are already a couple of questions related to this, and one of the answers suggests adding some headers to the Lambda response, but that doesn't work for me.
After doing some research I found out that the problem is related to CORS. Disabling CORS in Chrome is a temporary fix, as suggested in this question.
Other solutions I found after researching suggested enabling CORS in my API, and in the frontend part I have also added headers, but none of them works.
fetchData() async {
  String url = "myUrl";
  Map<String, String> headers = {
    "Access-Control-Allow-Origin": "*", // Required for CORS support to work
  };
  String json = '{"emailId":"emailId"}';
  http.Response response =
      await http.post(Uri.parse(url), headers: headers, body: json);
  print(response.body);
  return response.body;
}
what is the correct way of solving this problem?
1- Go to flutter\bin\cache and remove a file named: flutter_tools.stamp
2- Go to flutter\packages\flutter_tools\lib\src\web and open the file chrome.dart.
3- Find '--disable-extensions'
4- Add '--disable-web-security'
I have solved my problem, and I'm not going to delete this question because there aren't many well-defined solutions to this problem.
For future viewers who are using Flutter web and AWS API Gateway:
if you encounter this problem, it means it comes from the backend side, not from the Flutter side.
The XMLHttpRequest error is caused by CORS.
The solution to the problem: you have to enable CORS in API Gateway; follow this link.
But if you are using proxy integration with Lambda and API Gateway, then enabling CORS alone is not going to help; you have to return the headers from the Lambda function's response, like this:
return {
    statusCode: 200,
    headers: {
        "Access-Control-Allow-Origin": "*", // Required for CORS support to work
        "Access-Control-Allow-Credentials": true, // Required for cookies, authorization headers with HTTPS
        "Access-Control-Allow-Headers": "Origin,Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token,locale",
        "Access-Control-Allow-Methods": "POST, OPTIONS"
    },
    body: JSON.stringify(item)
};
The format needs to be exactly like this. Also, one particular question that helped me a lot to understand this whole issue is this one: link.
Now comes my problem: what I was doing wrong is that I was passing "Access-Control-Allow-Origin": "*" from the frontend, while CORS enabled in API Gateway was also sending similar headers, and that duplication was creating the problem for me:
Access to XMLHttpRequest at 'API-URL' from origin 'http://localhost:63773' has been blocked by CORS policy:
Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. //this particular line
So after changing my function to the following, everything works perfectly fine:
fetchData() async {
  String url = "API-url";
  Map<String, String> headers = {
    "Content-Type": "text/plain",
  };
  Map<String, String> map = Map();
  map["emailId"] = "fake@gmail.com";
  // the response is consumed in the callbacks below, so no assignment is needed
  await http
      .post(Uri.parse(url), headers: headers, body: jsonEncode(map))
      .then((value) {
    print("onThen> " + value.body.toString());
  }).onError((error, stackTrace) {
    print("onError> " +
        error.toString() +
        " stackTrace> " +
        stackTrace.toString());
  });
}
For Flutter web, making the API return the Access-Control-Allow-Origin header might resolve this issue.
If your backend is a PHP file, add this code at the top:
<?php
header("Access-Control-Allow-Origin: *");
Finished!

Google Cloud Storage: CORS settings don't work for signed URLs

The response of a PUT request with a signed URL doesn't contain the Access-Control-Allow-Origin header.
import os
from datetime import timedelta

import requests
from google.cloud import storage

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = <path to google credentials>

client = storage.Client()
bucket = client.get_bucket('my_bucket')

policies = [
    {
        'origin': ['*'],
        'method': ['PUT'],
    }
]
bucket.cors = policies
bucket.update()

blob = bucket.blob('new_file')
url = blob.generate_signed_url(timedelta(days=30), method='PUT')

response = requests.put(url, data='some data')
for header in response.headers.keys():
    print(header)
Output:
X-GUploader-UploadID
ETag
x-goog-generation
x-goog-metageneration
x-goog-hash
x-goog-stored-content-length
x-goog-stored-content-encoding
Vary
Content-Length
Date
Server
Content-Type
Alt-Svc
As you can see, there are no CORS headers. So, can I conclude that GCS doesn't support CORS properly/fully?
Cross-Origin Resource Sharing (CORS) allows interactions between resources from different origins. By default, in Google Cloud Storage it is prohibited/disabled in order to prevent malicious behavior.
You can enable it using the Cloud Libraries, the REST API, or the Cloud SDK, keeping in mind the following rules:
Authenticate using a user/service account that has FULL_CONTROL permissions for Cloud Storage.
To get proper CORS headers via the XML API, use one of the two URLs:
- storage.googleapis.com/[BUCKET_NAME]
- [BUCKET_NAME].storage.googleapis.com
The origin storage.cloud.google.com/[BUCKET_NAME] will not respond with CORS headers.
The request needs a proper Origin header matching the bucket policy's origin configuration, as stated in point 3 of the CORS troubleshooting documentation. In the case of your code:
headers = {
    'ORIGIN': '*'
}
response = requests.put(url, data='some data', headers=headers)
for header in response.headers.keys():
    print(header)
gives the following output:
X-GUploader-UploadID
ETag
x-goog-generation
x-goog-metageneration
x-goog-hash
x-goog-stored-content-length
x-goog-stored-content-encoding
Access-Control-Allow-Origin
Access-Control-Expose-Headers
Content-Length
Date
Server
Content-Type
I had this issue. For me the problem was that I was using POST instead of PUT. Furthermore, I had to set the Content-Type of the upload to match the content type used to generate the signed URL. The default Content-Type in the demo is "application/octet-stream", so I had to change it to whatever the content type of the upload was. When doing the XMLHttpRequest, I just had to send the file directly instead of using FormData.
This is how I got the signed URL:
const options = {
  version: 'v4',
  action: 'write',
  expires: Date.now() + 15 * 60 * 1000, // 15 minutes
  contentType: 'application/octet-stream',
};

// Get a v4 signed URL for uploading the file
const [url] = await storage
  .bucket("lsa-storage")
  .file(upload.id)
  .getSignedUrl(options as any);
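For completeness, here is a minimal sketch of the matching browser-side upload, assuming url is the signed URL from above and file is a File object taken from an <input> element (both names are placeholders). The Content-Type header must equal the contentType used when signing, and the file is sent as the raw request body rather than wrapped in FormData:

const xhr = new XMLHttpRequest();
xhr.open("PUT", url);
// must match the contentType the URL was signed with
xhr.setRequestHeader("Content-Type", "application/octet-stream");
xhr.onload = () => console.log("upload finished with status", xhr.status);
xhr.onerror = () => console.error("upload failed");
xhr.send(file); // the File/Blob itself, not FormData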

Basic User Authentication for Static Site using AWS & S3 Bucket

I am looking to add basic user authentication to a static site I will have up on AWS, so that only those with the proper username + password (which I will supply) have access to see the site. I found s3auth and it seems to be exactly what I am looking for; however, I am wondering if I will need to somehow set the authorization for pages besides index.html. For example, I have 3 pages: index, about, and contact.html. Without authentication set up for about.html, what is stopping an individual from directly accessing the site via www.mywebsite.com/about.html? I am mostly looking for clarification or any resources anyone can provide to explain this!
Thank you for your help!
This is the perfect use for Lambda@Edge.
Because you're hosting your static site on S3, you can easily and very economically (pennies) add some really great features to your site by using CloudFront, AWS's content distribution network, to serve your site to your users. You can learn how to host your site on S3 with CloudFront (including 100% free SSL) here.
While your CloudFront distribution is deploying, you'll have some time to go set up the Lambda that you'll be using to do the basic user auth. If this is your first time creating a Lambda, or creating a Lambda for use @Edge, the process is going to feel really complex, but if you follow my step-by-step instructions below you'll be doing serverless basic auth that is infinitely scalable in less than 10 minutes. I'm going to use us-east-1 for this, and it's important to know that if you're using Lambda@Edge you should author your functions in us-east-1; when they're associated with your CloudFront distribution, they'll automagically be replicated globally. Let's begin...
Head over to Lambda in the AWS console, and click on "Create Function"
Create your Lambda from scratch and give it a name
Set your runtime as Node.js 8.10
Give your Lambda some permissions by selecting "Choose or create an execution role"
Give the role a name
From Policy Templates select "Basic Lambda@Edge permissions (for CloudFront trigger)"
Click "Create function"
Once your Lambda is created, take the following code and paste it into the index.js file of the Function Code section. You can update the username and password you want to use by changing the authUser and authPass variables:
'use strict';

exports.handler = (event, context, callback) => {
    // Get request and request headers
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    // Configure authentication
    const authUser = 'user';
    const authPass = 'pass';

    // Construct the Basic Auth string
    // (Buffer.from replaces the deprecated new Buffer constructor)
    const authString = 'Basic ' + Buffer.from(authUser + ':' + authPass).toString('base64');

    // Require Basic authentication
    if (typeof headers.authorization == 'undefined' || headers.authorization[0].value != authString) {
        const body = 'Unauthorized';
        const response = {
            status: '401',
            statusDescription: 'Unauthorized',
            body: body,
            headers: {
                'www-authenticate': [{key: 'WWW-Authenticate', value: 'Basic'}]
            },
        };
        // return here so we don't fall through and invoke the callback twice
        return callback(null, response);
    }

    // Continue request processing if authentication passed
    callback(null, request);
};
Click "Save" in the upper right hand corner.
Now that your Lambda is saved, it's ready to attach to your CloudFront distribution. In the upper menu, select Actions -> Deploy to Lambda@Edge.
In the modal that appears, select the CloudFront distribution you created earlier from the drop-down menu, leave the Cache Behavior as *, change the CloudFront Event to "Viewer Request", and finally select/tick "Include Body". Select/tick "Confirm deploy to Lambda@Edge" and click "Deploy".
And now you wait. It takes a few minutes (15-20) to replicate your Lambda@Edge function across all regions and edge locations. Go to CloudFront to monitor the deployment of your function. When your CloudFront distribution status says "Deployed", your Lambda@Edge function is ready to use.
Deploying Lambda@Edge is quite difficult to do via the console, so I have created a CDK stack; you just add your own credentials and domain name and deploy.
https://github.com/apoorvmote/cdk-examples/tree/master/password-protect-s3-static-site
I have tested the following function with Node.js 12.x:
exports.handler = async (event, context, callback) => {
    const request = event.Records[0].cf.request
    const headers = request.headers

    const user = 'my-username'
    const password = 'my-password'

    const authString = 'Basic ' + Buffer.from(user + ':' + password).toString('base64')

    if (typeof headers.authorization === 'undefined' || headers.authorization[0].value !== authString) {
        const response = {
            status: '401',
            statusDescription: 'Unauthorized',
            body: 'Unauthorized',
            headers: {
                'www-authenticate': [{key: 'WWW-Authenticate', value: 'Basic'}]
            }
        }
        // return to avoid invoking the callback a second time below
        return callback(null, response)
    }

    callback(null, request)
}
By now, this is also possible with CloudFront Functions, which I like more because it reduces the complexity even further (from what is already not too complex with Lambda). Here's my writeup of what I just did...
It's basically 3 things that need to be done:
Create a CloudFront function to add Basic Auth into the request.
Configure the Origin of the CloudFront distribution correctly in a few places.
Activate the CloudFront function.
That's it, no particular bells & whistles otherwise. Here's what I've done:
First, go to CloudFront, then click on Functions on the left, create a new function with a name of your choice (no region etc. necessary) and then add the following as the code of the function:
function handler(event) {
    var user = "myuser";
    var pass = "mypassword";

    function encodeToBase64(str) {
        var chars =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";
        for (
            // initialize result and counter
            var block, charCode, idx = 0, map = chars, output = "";
            // if the next str index does not exist:
            //   change the mapping table to "="
            //   check if d has no fractional digits
            str.charAt(idx | 0) || ((map = "="), idx % 1);
            // "8 - idx % 1 * 8" generates the sequence 2, 4, 6, 8
            output += map.charAt(63 & (block >> (8 - (idx % 1) * 8)))
        ) {
            charCode = str.charCodeAt((idx += 3 / 4));
            if (charCode > 0xff) {
                // plain Error here; the original snippet threw an undefined InvalidCharacterError
                throw new Error("'btoa' failed: The string to be encoded contains characters outside of the Latin1 range.");
            }
            block = (block << 8) | charCode;
        }
        return output;
    }

    var requiredBasicAuth = "Basic " + encodeToBase64(`${user}:${pass}`);

    var match = false;
    if (event.request.headers.authorization) {
        if (event.request.headers.authorization.value === requiredBasicAuth) {
            match = true;
        }
    }

    if (!match) {
        return {
            statusCode: 401,
            statusDescription: "Unauthorized",
            headers: {
                "www-authenticate": { value: "Basic" },
            },
        };
    }

    return event.request;
}
Then you can test directly in the UI, and assuming it works (and assuming you have customized the username and password), publish the function.
Please note that I found the individual pieces of the function above on the Internet, so this is not my own code (other than piecing it together). I wish I could still find the sources so I could quote them here, but I can't find them anymore. Credits to the creators though! :-)
Next, open your CloudFront distribution and do the following:
Make sure your S3 bucket in the origin is configured as a REST endpoint and not a website endpoint, i.e. it must end in .s3.amazonaws.com and must not have the word "website" in the hostname.
Also in the Origin settings, under "S3 bucket access", select "Yes use OAI (bucket can restrict access to only CloudFront)". In the setting below click on "Create OAI" to create a new OAI (unless you have an existing one and know what you're doing). And select "Yes, update the bucket policy" to allow AWS to add the necessary permissions to your OAI.
Finally, open your Behavior of the CloudFront distribution and scroll to the bottom. Under "Function associations", for "Viewer request" select "CloudFront Function" and select your newly created CloudFront function. Save your changes.
And that should be it. With a bit of luck it's a matter of a couple of minutes (realistically more, I know), and there is no additional complexity once this is all set up.
Thanks for the useful post. An alternative to listing the plain-text user name and password in the code, and to having base64 encoding logic, is to pre-generate the base64-encoded string. One such encoder: https://www.debugbear.com/basic-auth-header-generator
From there the script becomes simpler. The following is for 'user' / 'password':
function handler(event) {
    // base64 of "user:password"
    var base64UserPassword = "dXNlcjpwYXNzd29yZA==";
    if (event.request.headers.authorization &&
        event.request.headers.authorization.value === ("Basic " + base64UserPassword)) {
        return event.request;
    }
    return {
        statusCode: 401,
        statusDescription: "Unauthorized",
        headers: {
            "www-authenticate": { value: "Basic" },
        },
    };
}
An answer above already shows how to use CloudFront Functions, but I want to add an improved version of the function:
The hardcoded credentials are stored as a SHA-256 hash instead of plain text (base64 is effectively the same as plain text), which is more secure.
It is also possible to allow access for whitelisted global IP addresses:
function handler(event) {
    var crypto = require('crypto');
    var headers = event.request.headers;

    var wlist_ips = [
        "1.1.1.1",
        "2.2.2.2"
    ];
    // SHA-256 hash of the expected "Basic <base64>" authorization header value
    var authString = "9c06d532edf0813659ab41d26ab8ba9ca53b985296ee4584a79f34fe9cd743a4";

    if (
        typeof headers.authorization === "undefined" ||
        crypto.createHash('sha256').update(headers.authorization.value).digest('hex') !== authString
    ) {
        if (!wlist_ips.includes(event.viewer.ip)) {
            return {
                statusCode: 401,
                statusDescription: "Unauthorized",
                headers: {
                    "www-authenticate": { value: "Basic" },
                    "x-source-ip": { value: event.viewer.ip }
                }
            };
        }
    }

    return event.request;
}
The command below can be used to get the correct authString hash value for username user and password password:
printf "Basic $(printf 'user:password' | base64 -w 0)" | sha256sum | awk '{print$1}'

Password protect s3 bucket with lambda function in aws

I added website authentication for an S3 bucket using a Lambda function, and then connected the Lambda function to CloudFront using the behavior settings in the distribution settings. It worked fine and added authentication (like htaccess authentication on a simple server). Now I want to change the password for my website authentication. For that, I updated the password and published a new version of the Lambda function, and then in the distribution settings I created a new invalidation to clear the cache. But it didn't work, and the website authentication password didn't change. Below is my Lambda function code that adds the authentication.
'use strict';

exports.handler = (event, context, callback) => {
    // Get request and request headers
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    // Configure authentication
    const authUser = 'user';
    const authPass = 'pass';

    // Construct the Basic Auth string
    // (Buffer.from replaces the deprecated new Buffer constructor)
    const authString = 'Basic ' + Buffer.from(authUser + ':' + authPass).toString('base64');

    // Require Basic authentication
    if (typeof headers.authorization == 'undefined' || headers.authorization[0].value != authString) {
        const body = 'Unauthorized';
        const response = {
            status: '401',
            statusDescription: 'Unauthorized',
            body: body,
            headers: {
                'www-authenticate': [{key: 'WWW-Authenticate', value: 'Basic'}]
            },
        };
        // return here so we don't fall through and invoke the callback twice
        return callback(null, response);
    }

    // Continue request processing if authentication passed
    callback(null, request);
};
Can anyone please help me solve this problem?
Thanks in advance.
In the Lambda function view, after you save your changes (using Firefox could be a safer option, see below for why), you will see a menu item under Configuration -> Designer -> CloudFront.
From there you can publish your change to the CloudFront distribution. Once you publish it, it will automatically start deploying to the CloudFront distribution, which you can monitor in the CloudFront menu.
Also, I would prefer using "Viewer Request" as the CloudFront trigger event (I'm not sure which one you are using), as this avoids CloudFront caching the response. On top of this, Chrome sometimes fails to save changes on Lambda; there seems to be a bug in the AWS console. Try Firefox, just to be safe, when you are editing Lambda functions.
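The underlying issue is that a Lambda@Edge trigger is pinned to a specific numbered function version, so updating the code (or invalidating the CloudFront cache) changes nothing until a new version is published and associated with the distribution. As a rough sketch of the publish step, assuming the AWS SDK for JavaScript v3 and a hypothetical function name basic-auth-fn:

const { LambdaClient, PublishVersionCommand } = require("@aws-sdk/client-lambda");

// Lambda@Edge functions must be authored in us-east-1
const lambda = new LambdaClient({ region: "us-east-1" });

async function publishNewVersion() {
    // Lambda@Edge triggers reference a numbered version, never $LATEST,
    // so saving the function alone does not change what CloudFront runs
    const { Version, FunctionArn } = await lambda.send(
        new PublishVersionCommand({ FunctionName: "basic-auth-fn" })
    );
    console.log(`Published version ${Version}: ${FunctionArn}`);
    // next, associate this versioned ARN with the distribution's
    // viewer-request trigger (console: Actions -> Deploy to Lambda@Edge)
}

publishNewVersion();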