I am looking to add basic user authentication to a static site I will have up on AWS, so that only users with the correct username + password (which I will supply to them) can see the site. I found s3auth and it seems to be exactly what I am looking for; however, I am wondering whether I will need to somehow set the authorization for pages besides index.html. For example, I have 3 pages: index, about and contact.html. Without authentication set up for about.html, what is stopping an individual from directly accessing the site via www.mywebsite.com/about.html? I am mostly looking for clarification, or any resources anyone can provide to explain this!
Thank you for your help!
This is the perfect use for Lambda@Edge.
Because you're hosting your static site on S3, you can easily and very economically (pennies) add some really great features to your site by using CloudFront, AWS's content delivery network, to serve your site to your users. You can learn how to host your site on S3 with CloudFront (including 100% free SSL) here.
While your CloudFront distribution is deploying, you'll have some time to go set up the Lambda that you'll be using to do the basic user auth. If this is your first time creating a Lambda, or creating a Lambda for use @Edge, the process is going to feel really complex, but if you follow my step-by-step instructions below you'll be doing serverless basic auth that is infinitely scalable in less than 10 minutes. I'm going to use us-east-1 for this, and it's important to know that if you're using Lambda@Edge you should author your functions in us-east-1; when they're associated with your CloudFront distribution, they'll automagically be replicated globally. Let's begin...
Head over to Lambda in the AWS console, and click on "Create Function"
Create your Lambda from scratch and give it a name
Set your runtime as Node.js 8.10
Give your Lambda some permissions by selecting "Choose or create an execution role"
Give the role a name
From Policy Templates select "Basic Lambda@Edge permissions (for CloudFront trigger)"
Click "Create function"
Once your Lambda is created, take the following code and paste it into the index.js file in the Function Code section. You can update the username and password you want to use by changing the authUser and authPass variables:
'use strict';
exports.handler = (event, context, callback) => {

    // Get the request and its headers
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    // Configure authentication
    const authUser = 'user';
    const authPass = 'pass';

    // Construct the expected Basic Auth string
    // (Buffer.from replaces the deprecated new Buffer())
    const authString = 'Basic ' + Buffer.from(authUser + ':' + authPass).toString('base64');

    // Require Basic authentication
    if (typeof headers.authorization === 'undefined' || headers.authorization[0].value !== authString) {
        const body = 'Unauthorized';
        const response = {
            status: '401',
            statusDescription: 'Unauthorized',
            body: body,
            headers: {
                'www-authenticate': [{key: 'WWW-Authenticate', value: 'Basic'}]
            },
        };
        // Return here so the 401 is not overridden by the callback below
        return callback(null, response);
    }

    // Continue request processing if authentication passed
    callback(null, request);
};
Click "Save" in the upper right hand corner.
Now that your Lambda is saved, it's ready to attach to your CloudFront distribution. In the upper menu, select Actions -> Deploy to Lambda@Edge.
In the modal that appears, select the CloudFront distribution you created earlier from the drop-down menu, leave the Cache Behavior as *, change the CloudFront Event to "Viewer Request", and select/tick "Include Body". Finally, select/tick "Confirm deploy to Lambda@Edge" and click "Deploy".
And now you wait. It takes a few minutes (15-20) to replicate your Lambda@Edge function across all regions and edge locations. Go to CloudFront to monitor the deployment of your function. When your CloudFront distribution's status says "Deployed", your Lambda@Edge function is ready to use.
Deploying Lambda@Edge is quite difficult to replicate via the console. So I have created a CDK stack where you just add your own credentials and domain name and deploy.
https://github.com/apoorvmote/cdk-examples/tree/master/password-protect-s3-static-site
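For reference, the core wiring of such a stack looks roughly like the sketch below. This is a minimal illustration using aws-cdk-lib v2 in JavaScript, not code taken from the linked repo; the construct names and the lambda/auth-function.js asset are assumptions:

const cdk = require('aws-cdk-lib');
const s3 = require('aws-cdk-lib/aws-s3');
const lambda = require('aws-cdk-lib/aws-lambda');
const cloudfront = require('aws-cdk-lib/aws-cloudfront');
const origins = require('aws-cdk-lib/aws-cloudfront-origins');

class PasswordProtectedSiteStack extends cdk.Stack {
    constructor(scope, id, props) {
        super(scope, id, props);

        const siteBucket = new s3.Bucket(this, 'SiteBucket');

        // EdgeFunction takes care of the us-east-1 requirement for you
        const authFn = new cloudfront.experimental.EdgeFunction(this, 'AuthFn', {
            runtime: lambda.Runtime.NODEJS_18_X,
            handler: 'auth-function.handler', // one of the basic-auth handlers from this page
            code: lambda.Code.fromAsset('lambda'),
        });

        new cloudfront.Distribution(this, 'SiteDistribution', {
            defaultBehavior: {
                origin: new origins.S3Origin(siteBucket),
                edgeLambdas: [{
                    functionVersion: authFn.currentVersion,
                    eventType: cloudfront.LambdaEdgeEventType.VIEWER_REQUEST,
                }],
            },
        });
    }
}

module.exports = { PasswordProtectedSiteStack };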
I have tested the following function with Node.js 12.x:
exports.handler = async (event, context, callback) => {

    const request = event.Records[0].cf.request
    const headers = request.headers

    const user = 'my-username'
    const password = 'my-password'

    const authString = 'Basic ' + Buffer.from(user + ':' + password).toString('base64')

    if (typeof headers.authorization === 'undefined' || headers.authorization[0].value !== authString) {
        const response = {
            status: '401',
            statusDescription: 'Unauthorized',
            body: 'Unauthorized',
            headers: {
                'www-authenticate': [{key: 'WWW-Authenticate', value: 'Basic'}]
            }
        }
        // Return here so the 401 is not overridden by the callback below
        return callback(null, response)
    }

    callback(null, request)
}
By now, this is also possible with CloudFront Functions, which I like more because it reduces the complexity even further (from what is already not too complex with Lambda). Here's my writeup of what I just did...
It's basically 3 things that need to be done:
Create a CloudFront function to add Basic Auth into the request.
Configure the Origin of the CloudFront distribution correctly in a few places.
Activate the CloudFront function.
That's it, no particular bells & whistles otherwise. Here's what I've done:
First, go to CloudFront, then click on Functions on the left, create a new function with a name of your choice (no region etc. necessary) and then add the following as the code of the function:
function handler(event) {
    var user = "myuser";
    var pass = "mypassword";

    // Minimal base64 encoder, since the CloudFront Functions runtime
    // does not provide btoa()
    function encodeToBase64(str) {
        var chars =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";
        for (
            // initialize result and counter
            var block, charCode, idx = 0, map = chars, output = "";
            // if the next str index does not exist:
            //   change the mapping table to "="
            //   check if idx has no fractional digits
            str.charAt(idx | 0) || ((map = "="), idx % 1);
            // "8 - (idx % 1) * 8" generates the sequence 2, 4, 6, 8
            output += map.charAt(63 & (block >> (8 - (idx % 1) * 8)))
        ) {
            charCode = str.charCodeAt((idx += 3 / 4));
            if (charCode > 0xff) {
                throw new Error(
                    "'btoa' failed: The string to be encoded contains characters outside of the Latin1 range."
                );
            }
            block = (block << 8) | charCode;
        }
        return output;
    }

    var requiredBasicAuth = "Basic " + encodeToBase64(`${user}:${pass}`);
    var match = false;
    if (event.request.headers.authorization) {
        if (event.request.headers.authorization.value === requiredBasicAuth) {
            match = true;
        }
    }

    if (!match) {
        return {
            statusCode: 401,
            statusDescription: "Unauthorized",
            headers: {
                "www-authenticate": { value: "Basic" },
            },
        };
    }

    return event.request;
}
Then you can test directly in the console UI and, assuming it works (and you have customized the username and password), publish the function.
Please note that I found the individual pieces of the function above on the Internet, so this is not my own code (other than the piecing together). I wish I could still find the sources so I could cite them here, but I can't find them anymore. Credits to the creators though! :-)
Next, open your CloudFront distribution and do the following:
Make sure the S3 bucket in your origin is configured as a REST endpoint and not a website endpoint, i.e. it must end in .s3.amazonaws.com and must not have the word "website" in the hostname.
Also in the Origin settings, under "S3 bucket access", select "Yes use OAI (bucket can restrict access to only CloudFront)". In the setting below, click "Create OAI" to create a new OAI (unless you have an existing one and know what you're doing), and select "Yes, update the bucket policy" to allow AWS to add the necessary permissions for your OAI.
Finally, open the Behavior of your CloudFront distribution and scroll to the bottom. Under "Function associations", for "Viewer request" select "CloudFront Function" and pick your newly created CloudFront function. Save your changes.
And that should be it. With a bit of luck it's a matter of a couple of minutes (realistically more, I know), and importantly there is no additional complexity once this is all set up.
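To sanity-check the setup once it's deployed, requesting the site with and without credentials should show the difference. Here's a quick sketch using the fetch built into Node.js 18+ (the distribution domain is a placeholder; run it inside an ES module or an async function):

const creds = Buffer.from('myuser:mypassword').toString('base64');

// Without credentials: expect a 401 with a www-authenticate: Basic header
const noAuth = await fetch('https://dxxxxxxxxxxxx.cloudfront.net/');
console.log(noAuth.status); // 401

// With credentials: expect a 200 and the page body
const withAuth = await fetch('https://dxxxxxxxxxxxx.cloudfront.net/', {
    headers: { authorization: 'Basic ' + creds },
});
console.log(withAuth.status); // 200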
Thanks for the useful post. An alternative to listing the plain-text username and password in the code, and to having base64 encoding logic there, is to pre-generate the base64-encoded string. One such encoder: https://www.debugbear.com/basic-auth-header-generator
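If you have Node.js at hand, you can also generate the string locally rather than pasting credentials into a website (a one-liner sketch):

// Prints the value to paste into base64UserPassword below
console.log(Buffer.from('user:password').toString('base64'));
// -> dXNlcjpwYXNzd29yZA==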
From there, the script becomes simpler. The following is for 'user' / 'password':
function handler(event) {
    // base64 of "user:password" (generated as described above)
    var base64UserPassword = "dXNlcjpwYXNzd29yZA=="

    if (event.request.headers.authorization &&
        event.request.headers.authorization.value === ("Basic " + base64UserPassword)) {
        return event.request;
    }

    return {
        statusCode: 401,
        statusDescription: "Unauthorized",
        headers: {
            "www-authenticate": { value: "Basic" },
        },
    }
}
An answer above already shows how to use CloudFront Functions, but I want to add an improved version of the function:
The hardcoded credentials are stored as a SHA-256 hash instead of plain text (base64 is effectively the same as plain text), which is more secure.
It is possible to allow access from whitelisted global IP addresses:
function handler(event) {
    var crypto = require('crypto');
    var headers = event.request.headers;

    // Global IP addresses that may bypass authentication
    var wlist_ips = [
        "1.1.1.1",
        "2.2.2.2"
    ];

    // SHA-256 hash of the full "Basic <base64>" header value
    var authString = "9c06d532edf0813659ab41d26ab8ba9ca53b985296ee4584a79f34fe9cd743a4";

    if (
        typeof headers.authorization === "undefined" ||
        crypto.createHash('sha256').update(headers.authorization.value).digest('hex') !== authString
    ) {
        if (!wlist_ips.includes(event.viewer.ip)) {
            return {
                statusCode: 401,
                statusDescription: "Unauthorized",
                headers: {
                    "www-authenticate": { value: "Basic" },
                    "x-source-ip": { value: event.viewer.ip }
                }
            };
        }
    }

    return event.request;
}
The command below can be used to get the correct authString hash value for the username user and the password password:
printf "Basic $(printf 'user:password' | base64 -w 0)" | sha256sum | awk '{print$1}'
Related
I have a serverless website on AWS S3. But S3 has a limitation that I want to overcome: it doesn't allow me to have friendly URLs.
For example, I would like to replace this URL:
www.mywebsite.com/user.html?login=daniel
with this friendly URL:
www.mywebsite.com/user/daniel
So, I would like to know if I can use Lambda together with API Gateway to achieve this.
My idea is:
API Gateway ---> Lambda function ---> fetch S3 resource
The API Gateway will get ANY request and pass the information to a Lambda function, which will process some logic using the request URL (possibly including a database query) and then fetch the resource from S3.
I know AWS API Gateway's main purpose is to be a gateway to REST APIs, but can we also use it as a proxy for an entire website?
A good option is to use CloudFront as a reverse proxy: you can use a Viewer/Origin Request trigger to invoke a Lambda function and fetch the resource from S3.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html
https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-s3-amazon-cloudfront-a-match-made-in-the-cloud/
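To make this concrete, here is a minimal sketch of an Origin Request trigger implementing the rewrite from the question; the /user/<name> route and the user.html?login=<name> target come from the question, everything else is an assumption:

'use strict';

// Rewrite friendly URLs like /user/daniel to the real object
// user.html?login=daniel before CloudFront contacts the S3 origin
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    const match = request.uri.match(/^\/user\/([^\/]+)$/);
    if (match) {
        request.uri = '/user.html';
        request.querystring = 'login=' + encodeURIComponent(match[1]);
    }

    return callback(null, request);
};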
It is possible to use API Gateway as a reverse proxy for an S3 website.
I was able to do that by following the steps below:
In AWS API Gateway, create a "proxy resource" with resource path = "{proxy+}"
Go to AWS Certificate Manager and request a wildcard certificate for your website (*.mywebsite.com)
AWS will tell you to create a CNAME record in your domain registrar, to verify that you own that domain
After your certificate is validated, go to AWS API Gateway and create a Custom Domain Name (click on "Custom Domain Names" and then "Create Custom Domain Name"). In "domain name" type your domain (www.mywebsite.com) and select the ACM certificate that you just created in the previous step. Create a "Base Path Mapping" with path = "/" and in "destination" select your API and stage.
After that, you will need to add another CNAME record with the CloudFront "Target Domain Name" that was generated for that Custom Domain Name.
In the Lambda, we can route the requests:
'use strict';
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const myBucket = 'myBucket';

exports.handler = async (event) => {
    var responseBody = "";

    if (event.path == "/") {
        responseBody = "<h1>My Landing Page</h1>";
        responseBody += "<a href='/xpto'>link to another page</a>";
        return buildResponse(200, responseBody);
    }

    if (event.path == "/xpto") {
        responseBody = "<h1>Another Page</h1>";
        responseBody += "<a href='/'>home</a>";
        return buildResponse(200, responseBody);
    }

    if (event.path == "/my-s3-resource") {
        var params = {
            Bucket: myBucket,
            Key: 'path/to/my-s3-resource.html',
        };
        const data = await s3.getObject(params).promise();
        return buildResponse(200, data.Body.toString('utf-8'));
    }

    return buildResponse(404, '404 Error');
};

function buildResponse(statusCode, responseBody) {
    var response = {
        "isBase64Encoded": false,
        "statusCode": statusCode,
        "headers": {
            "Content-Type": "text/html; charset=utf-8"
        },
        "body": responseBody,
    };
    return response;
}
A good bet would be to use CloudFront and Lambda@Edge.
Lambda@Edge allows you to run Lambda functions at the edge locations of the CloudFront CDN.
CloudFront gives you the option to hook into various events during its lifecycle and apply logic.
This article looks like it might be describing something similar to what you're talking about.
https://aws.amazon.com/blogs/networking-and-content-delivery/implementing-default-directory-indexes-in-amazon-s3-backed-amazon-cloudfront-origins-using-lambdaedge/
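The heart of that article's approach is a small Origin Request function that rewrites directory-style URIs, roughly like this sketch (adapted from the linked post):

'use strict';

// Append index.html to any URI that ends with a slash, so that
// requests like /about/ resolve to the S3 object /about/index.html
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    request.uri = request.uri.replace(/\/$/, '/index.html');

    return callback(null, request);
};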
I added website authentication for an S3 bucket using a Lambda function, and then connected the Lambda function to CloudFront using the behavior settings in the distribution settings. It worked fine and added authentication (like htaccess authentication on a simple server). Now I want to change the password for my website authentication. To do that, I updated the password and published a new version of the Lambda function, and then, in the distribution settings, I created a new invalidation to clear the cache. But it didn't work, and the website authentication password didn't change. Below is my Lambda function code that adds the authentication.
'use strict';
exports.handler = (event, context, callback) => {

    // Get request and request headers
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    // Configure authentication
    const authUser = 'user';
    const authPass = 'pass';

    // Construct the Basic Auth string
    const authString = 'Basic ' + new Buffer(authUser + ':' + authPass).toString('base64');

    // Require Basic authentication
    if (typeof headers.authorization == 'undefined' || headers.authorization[0].value != authString) {
        const body = 'Unauthorized';
        const response = {
            status: '401',
            statusDescription: 'Unauthorized',
            body: body,
            headers: {
                'www-authenticate': [{key: 'WWW-Authenticate', value: 'Basic'}]
            },
        };
        callback(null, response);
    }

    // Continue request processing if authentication passed
    callback(null, request);
};
Can anyone please help me solve this problem?
Thanks in advance.
In the Lambda function view, after you save your changes (using Firefox could be the safer option; see below if you wonder why), you will see a CloudFront menu item under Configuration -> Designer.
From there you can publish your change to the CloudFront distribution. Once you publish it, the distribution automatically starts deploying, which you can monitor from the CloudFront console. Note that creating an invalidation only clears cached objects; it does not update the Lambda@Edge version associated with the distribution, which is why your password change didn't take effect.
Also, I would prefer "Viewer Request" as the CloudFront trigger event (I'm not sure which one you are using), as this avoids CloudFront caching getting in the way. On top of this, Chrome sometimes fails to save changes to Lambda functions; there seems to be a bug in the AWS console, so try Firefox just to be safe when you are editing Lambda functions.
I need to find a solution for how to do this. Basically, I have one .m3u8 video and I want to restrict it so it can only be played on my domain. What people are doing right now is stealing my video and playing it on their sites, which causes a big overload and a lot of bandwidth...
d23ek3kf.cloudfront.net/video.m3u8 > mydomain.com > video accessible
d23ek3kf.cloudfront.net/video.m3u8 > randomdomain.com > video not accessible
This solution does not prevent anyone from downloading your content and then uploading it to their own site, but it does prevent other sites from hot-linking to your content.
Create a Lambda@Edge Viewer Request trigger. This allows you to inspect the request before the cache is checked, and either allow processing to continue or return a generated response.
'use strict';
exports.handler = (event, context, callback) => {

    // extract the request object
    const request = event.Records[0].cf.request;

    // extract the HTTP `Referer` header if present,
    // otherwise an empty string to simplify the matching logic
    const referer = (request.headers['referer'] || [{ value: '' }])[0].value;

    // verify that the referring page is yours
    // replace example.com with your domain
    // add other conditions with logical or ||
    if (referer.startsWith('https://example.com/') ||
        referer.startsWith('http://example.com/')) {
        // return control to CloudFront and allow the request to continue normally
        return callback(null, request);
    }

    // if we get here, the referring page is not yours.
    // generate a 403 Forbidden response
    // you can customize the body, but the size is limited to ~40 KB
    return callback(null, {
        status: '403',
        body: 'Access denied.',
        headers: {
            'cache-control': [{ key: 'Cache-Control', value: 'private, no-cache, no-store, max-age=0' }],
            'content-type': [{ key: 'Content-Type', value: 'text/plain' }],
        }
    });
};
The way to do it is using signed URLs. Your website generates a signed URL for the video the user wants to play, and CloudFront then allows the content to be downloaded. Signed URLs expire after a specified amount of time.
Any other website will just have the link to the video, which is not enough to download it. Take a look at the AWS documentation to understand the details and the mechanism: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
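For a rough idea of what the generating side looks like, here is a sketch using the CloudFront signer in the Node.js aws-sdk v2; the key pair ID and the private key file are placeholders for your own CloudFront key pair:

const fs = require('fs');
const AWS = require('aws-sdk');

// Placeholders: your CloudFront key pair ID and its private key
const keyPairId = 'APKAEXAMPLE';
const privateKey = fs.readFileSync('private_key.pem', 'utf8');

const signer = new AWS.CloudFront.Signer(keyPairId, privateKey);

// Sign a URL that expires one hour from now
const signedUrl = signer.getSignedUrl({
    url: 'https://d23ek3kf.cloudfront.net/video.m3u8',
    expires: Math.floor(Date.now() / 1000) + 3600,
});
console.log(signedUrl);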
I want to use an AWS API Gateway as a proxy for fetching files from an S3 bucket and returning them to the client. I'm using a Lambda function to talk to S3 and send the file to the client via the API Gateway. I've read that the best way to do this is to use a "Lambda proxy integration" so the entire request gets piped to Lambda without any modification. But if I do that, I can't set up an Integration Response for the result from my Lambda function, so all the client gets is JSON.
It seems there should be a way for the API Gateway to take the JSON and transform the request into the proper response for the client, but I can't figure out how to make that happen. There are lots of examples that point to setting a content-type on the response from the API Gateway manually, but I need to set the Content-Type header to whatever the file type is.
Also, for images and binary formats, my Lambda function returns a base64-encoded string with the property isBase64Encoded set to true. When I go to the "Binary Support" section and specify something like image/* as a content type that should be returned as binary, it doesn't work. I only have success when setting the Binary Support content type to */* (aka everything), which won't work for non-binary content types.
What am I missing and why does this seem so difficult?
Turns out API Gateway isn't the problem. My Lambda function wasn't returning proper headers.
For handling binary responses, I found you need to set the Binary Support content type to */* (aka everything) and then have your Lambda function return the property isBase64Encoded set to true. Responses that are base64-encoded and flagged as such will be decoded and served as binary, while other responses will be returned as-is.
Here's a simple Gist for a Lambda function that takes a given path and reads the file from S3 and returns it via the API Gateway:
/**
 * This is a simple AWS Lambda function that will look for a given file on S3 and return it,
 * passing along all of the headers of the S3 file. To make this available via a URL, use
 * API Gateway with an AWS Lambda Proxy Integration.
 *
 * Set the S3_REGION and S3_BUCKET environment variables in AWS Lambda.
 * Make sure the Lambda function is passed an object with `{ pathParameters : { proxy: 'path/to/file.jpg' } }` set.
 */
var AWS = require('aws-sdk');

exports.handler = function (event, context, callback) {
    var region = process.env.S3_REGION;
    var bucket = process.env.S3_BUCKET;
    var key = decodeURI(event.pathParameters.proxy);

    // Basic server response
    /*
    var response = {
        statusCode: 200,
        headers: {
            'Content-Type': 'text/plain',
        },
        body: "Hello world!",
    };
    callback(null, response);
    */

    // Fetch from S3
    var s3 = new AWS.S3({ region: region });
    return s3.makeUnauthenticatedRequest(
        'getObject',
        { Bucket: bucket, Key: key },
        function (err, data) {
            if (err) {
                // Pass errors back to the caller instead of swallowing them
                return callback(err);
            }

            var isBase64Encoded = false;
            if (data.ContentType.indexOf('image/') > -1) {
                isBase64Encoded = true;
            }

            // Default to utf8 so Buffer.toString() gets a valid encoding
            var encoding = 'utf8';
            if (isBase64Encoded) {
                encoding = 'base64';
            }

            var resp = {
                statusCode: 200,
                headers: {
                    'Content-Type': data.ContentType,
                },
                body: Buffer.from(data.Body).toString(encoding),
                isBase64Encoded: isBase64Encoded
            };

            callback(null, resp);
        }
    );
};
via https://gist.github.com/kingkool68/26aa7a3641a3851dc70ce7f44f589350
For our AWS API Endpoints we use AWS_IAM authorization and want to make a call from Swagger UI.
To make a successful call, there must be two headers, 'Authorization' and 'x-amz-date'. To form 'Authorization' we use the following steps from the AWS docs.
We must change 'x-amz-date' with every call to get through authorization.
The question is: how do we write a script in Swagger to sign the request, which runs every time before the request is sent to AWS?
(We know how to specify both headers once before loading the Swagger page, but this process should be re-run before every call.)
Thanks in advance.
There is built-in support in swagger-js for adding requestInterceptors to do just this. The swagger-ui project uses swagger-js under the hood.
Simply create a request interceptor like this:
requestInterceptor: {
    apply: function (request) {
        // modify the request object here
        return request;
    }
}
and apply it to your swagger instance on creation:
window.swaggerUi = new SwaggerUi({
    url: url,
    dom_id: "swagger-ui-container",
    requestInterceptor: requestInterceptor,
    // ...other options...
});
Here you can set headers on the request object (note: this is not the standard JavaScript HTTP request object; inspect it for details). You do have access to all the headers here, so you can calculate and inject them as needed.
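For example, here is a sketch of an interceptor that stamps each request with a fresh x-amz-date header; the computeSigV4 helper is hypothetical and stands in for the signing steps from the AWS docs mentioned in the question:

var requestInterceptor = {
    apply: function (request) {
        // AWS expects timestamps like 20240101T000000Z
        var amzDate = new Date().toISOString().replace(/[:-]|\.\d{3}/g, '');
        request.headers['x-amz-date'] = amzDate;

        // hypothetical helper: compute the SigV4 Authorization header
        // from the request's method, path, headers and body
        // request.headers['Authorization'] = computeSigV4(request);

        return request;
    }
};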
You can pretty easily monkey-patch signing from the AWS SDK into SwaggerJS (and thus SwaggerUI). See here.
I have a slightly modified SwaggerUI here. Given some AWS credentials and an API ID, it will pull down the Swagger definition, display it in SwaggerUI, and then you can call the API using SigV4.
The Authorizer implementation looks like this:
var AWSSigv4RequestSigner = function (credentialProvider, aws) {
    this.name = "sigv4";
    this.aws = aws;
    this.credentialProvider = credentialProvider;
};

AWSSigv4RequestSigner.prototype.apply = function (options, authorizations) {
    var serviceName = "execute-api";

    // If we are loading the definition itself, then we need to sign for apigateway.
    if (options && options.url.indexOf("apigateway") >= 0) {
        serviceName = "apigateway";
    }

    if (serviceName == "apigateway" || (options.operation && options.operation.authorizations && options.operation.authorizations[0].sigv4)) {
        /**
         * All of the below is an adapter to get this thing into the right form for the AWS JS SDK signer.
         */
        var parts = options.url.split('?');
        var host = parts[0].substr(8, parts[0].indexOf("/", 8) - 8);
        var path = parts[0].substr(parts[0].indexOf("/", 8));
        var querystring = parts[1];

        var now = new Date();

        if (!options.headers) {
            options.headers = {};
        }
        options.headers.host = host;

        if (serviceName == "apigateway") {
            // For the swagger endpoint, apigateway is strict about content-type
            options.headers.accept = "application/json";
        }

        options.pathname = function () {
            return path;
        };
        options.methodIndex = options.method;
        options.search = function () {
            return querystring ? querystring : "";
        };
        options.region = this.aws.config.region || 'us-east-1';

        // AWS uses CAPS for method names, but swagger does not.
        options.method = options.methodIndex.toUpperCase();

        var signer = new this.aws.Signers.V4(options, serviceName);

        // Actually add the Authorization header here
        signer.addAuthorization(this.credentialProvider, now);

        // SwaggerJS/your browser complains if these are still around
        delete options.search;
        delete options.pathname;
        delete options.headers.host;
        return true;
    }
    return false;
};