I have created an AWS custom Lambda authorizer that validates the token and adds claims to the APIGatewayCustomAuthorizerResponse via the Context property.
private APIGatewayCustomAuthorizerResponse AuthorizedResponse(TokenIntrospectionResponse result) // result with claims after validating token
{
    return new APIGatewayCustomAuthorizerResponse()
    {
        PrincipalID = "uniqueid",
        PolicyDocument = new APIGatewayCustomAuthorizerPolicy()
        {
            Statement = new List<APIGatewayCustomAuthorizerPolicy.IAMPolicyStatement>
            {
                new APIGatewayCustomAuthorizerPolicy.IAMPolicyStatement()
                {
                    Effect = "Allow",
                    Resource = new HashSet<string> { "*" },
                    Action = new HashSet<string> { "execute-api:Invoke" }
                }
            }
        },
        Context = PrepareRequestContextFromClaims(result.Claims) // APIGatewayCustomAuthorizerContextOutput
    };
}

private APIGatewayCustomAuthorizerContextOutput PrepareRequestContextFromClaims(IEnumerable<System.Security.Claims.Claim> claims)
{
    APIGatewayCustomAuthorizerContextOutput contextOutput = new APIGatewayCustomAuthorizerContextOutput();
    var claimsGroupsByType = claims.GroupBy(x => x.Type);
    foreach (var claimsGroup in claimsGroupsByType)
    {
        var type = claimsGroup.Key;
        var valuesList = claimsGroup.Select(x => x.Value);
        var values = string.Join(',', valuesList);
        contextOutput[type] = values;
    }
    return contextOutput;
}
I attached this Lambda authorizer to the API Gateway method request.
For the integration request, I configured an HTTP proxy integration that points to an ASP.NET Core 6 Web API.
In the Web API routes I am trying to read the claims that the authorizer added from the request headers, but I am not getting any claims.
_httpContext.HttpContext.Request.Headers
// no claims show up in the headers
_httpContext.HttpContext.Items["LAMBDA_REQUEST_OBJECT"] as APIGatewayProxyRequest
// no claims this way either
Is there any way to achieve this?
You need to configure the claim key/value on API Gateway's Method Request and Integration Request.
For example, if the custom Lambda authorizer validates the token and adds a claim 'role' to the Context of the APIGatewayCustomAuthorizerResponse, you have to add an optional 'role' header under 'Method Request' and also add a header under 'Integration Request' with Name: role and Mapped from: context.authorizer.role.
After that, you will get 'role' from the headers using _httpContext.HttpContext.Request.Headers["role"] in the .NET 6 Web API, as shown in the example below.
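As a minimal sketch of the Web API side (the controller name and route here are hypothetical), reading the mapped header in an ASP.NET Core 6 controller could look like this:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class ClaimsController : ControllerBase
{
    private readonly IHttpContextAccessor _httpContext;

    public ClaimsController(IHttpContextAccessor httpContext)
    {
        _httpContext = httpContext;
    }

    // Reads the 'role' value that the authorizer put into the context and that
    // API Gateway mapped to a request header via context.authorizer.role.
    [HttpGet("role")]
    public IActionResult GetRole()
    {
        var headers = _httpContext.HttpContext!.Request.Headers;
        if (!headers.TryGetValue("role", out var role))
        {
            return Unauthorized("The 'role' header was not forwarded by API Gateway.");
        }

        return Ok(new { Role = role.ToString() });
    }
}

This assumes IHttpContextAccessor is registered with builder.Services.AddHttpContextAccessor(); inside a controller you could just as well read Request.Headers directly.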
I am looking to add basic user authentication to a static site I will have up on AWS, so that only those with the proper username and password (which I will supply to those users) have access to see the site. I found s3auth and it seems to be exactly what I am looking for; however, I am wondering whether I will need to somehow set up the authorization for pages besides index.html. For example, I have three pages: index.html, about.html and contact.html. Without authentication set up for about.html, what is stopping an individual from directly accessing the site via www.mywebsite.com/about.html? I am mostly looking for clarification or any resources anyone can provide to explain this!
Thank you for your help!
This is the perfect use for Lambda@Edge.
Because you're hosting your static site on S3, you can easily and very economically (pennies) add some really great features to your site by using CloudFront, AWS's content delivery network, to serve your site to your users. You can learn how to host your site on S3 with CloudFront (including 100% free SSL) here.
While your CloudFront distribution is deploying, you'll have some time to go set up the Lambda that you'll be using to do the basic user auth. If this is your first time creating a Lambda, or creating a Lambda for use @Edge, the process is going to feel really complex, but if you follow my step-by-step instructions below you'll be doing serverless basic auth that is infinitely scalable in less than 10 minutes. I'm going to use us-east-1 for this, and it's important to know that if you're using Lambda@Edge you should author your functions in us-east-1; when they're associated with your CloudFront distribution they'll automagically be replicated globally. Let's begin...
Head over to Lambda in the AWS console, and click on "Create Function"
Create your Lambda from scratch and give it a name
Set your runtime as Node.js 8.10
Give your Lambda some permissions by selecting "Choose or create an execution role"
Give the role a name
From Policy Templates select "Basic Lambda@Edge permissions (for CloudFront trigger)"
Click "Create function"
Once your Lambda is created, take the following code and paste it into the index.js file in the Function Code section. You can update the username and password you want to use by changing the authUser and authPass variables:
'use strict';

exports.handler = (event, context, callback) => {
    // Get request and request headers
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    // Configure authentication
    const authUser = 'user';
    const authPass = 'pass';

    // Construct the Basic Auth string
    const authString = 'Basic ' + new Buffer(authUser + ':' + authPass).toString('base64');

    // Require Basic authentication
    if (typeof headers.authorization == 'undefined' || headers.authorization[0].value != authString) {
        const body = 'Unauthorized';
        const response = {
            status: '401',
            statusDescription: 'Unauthorized',
            body: body,
            headers: {
                'www-authenticate': [{key: 'WWW-Authenticate', value: 'Basic'}]
            },
        };
        callback(null, response);
    }

    // Continue request processing if authentication passed
    callback(null, request);
};
Click "Save" in the upper right hand corner.
Now that your Lambda is saved it's ready to attach to your CloudFront distribution. In the upper menu, select Actions -> Deploy to Lambda@Edge.
In the modal that appears, select the CloudFront distribution you created earlier from the drop-down menu, leave the Cache Behavior as *, change the CloudFront Event to "Viewer Request", and select/tick "Include Body". Select/tick "Confirm deploy to Lambda@Edge" and click "Deploy".
And now you wait. It takes a few minutes (15-20) to replicate your Lambda@Edge across all regions and edge locations. Go to CloudFront to monitor the deployment of your function. When your CloudFront Distribution Status says "Deployed", your Lambda@Edge function is ready to use.
Deploying Lambda@Edge is quite difficult to replicate via the console, so I have created a CDK stack to which you just add your own credentials and domain name, and deploy.
https://github.com/apoorvmote/cdk-examples/tree/master/password-protect-s3-static-site
I have tested the following function with Node.js 12.x:
exports.handler = async (event, context, callback) => {
    const request = event.Records[0].cf.request
    const headers = request.headers

    const user = 'my-username'
    const password = 'my-password'

    const authString = 'Basic ' + Buffer.from(user + ':' + password).toString('base64')

    if (typeof headers.authorization === 'undefined' || headers.authorization[0].value !== authString) {
        const response = {
            status: '401',
            statusDescription: 'Unauthorized',
            body: 'Unauthorized',
            headers: {
                'www-authenticate': [{key: 'WWW-Authenticate', value: 'Basic'}]
            }
        }
        callback(null, response)
    }

    callback(null, request)
}
By now this is also possible with CloudFront Functions, which I like even more because they reduce the complexity further (from what is already not too complex with Lambda). Here's my write-up of what I just did...
It's basically 3 things that need to be done:
Create a CloudFront function to add Basic Auth into the request.
Configure the Origin of the CloudFront distribution correctly in a few places.
Activate the CloudFront function.
That's it, no particular bells & whistles otherwise. Here's what I've done:
First, go to CloudFront, then click on Functions on the left, create a new function with a name of your choice (no region etc. necessary) and then add the following as the code of the function:
function handler(event) {
    var user = "myuser";
    var pass = "mypassword";

    function encodeToBase64(str) {
        var chars =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";
        for (
            // initialize result and counter
            var block, charCode, idx = 0, map = chars, output = "";
            // if the next str index does not exist:
            //   change the mapping table to "="
            //   check if d has no fractional digits
            str.charAt(idx | 0) || ((map = "="), idx % 1);
            // "8 - idx % 1 * 8" generates the sequence 2, 4, 6, 8
            output += map.charAt(63 & (block >> (8 - (idx % 1) * 8)))
        ) {
            charCode = str.charCodeAt((idx += 3 / 4));
            if (charCode > 0xff) {
                throw new Error(
                    "'btoa' failed: The string to be encoded contains characters outside of the Latin1 range."
                );
            }
            block = (block << 8) | charCode;
        }
        return output;
    }

    var requiredBasicAuth = "Basic " + encodeToBase64(`${user}:${pass}`);

    var match = false;
    if (event.request.headers.authorization) {
        if (event.request.headers.authorization.value === requiredBasicAuth) {
            match = true;
        }
    }

    if (!match) {
        return {
            statusCode: 401,
            statusDescription: "Unauthorized",
            headers: {
                "www-authenticate": { value: "Basic" },
            },
        };
    }

    return event.request;
}
Then you can test the function directly in the UI and, assuming it works and you have customized the username and password, publish the function.
Please note that I found the individual pieces of the function above on the Internet, so this is not my own code (other than piecing it together). I wish I could still find the sources so I could credit them here, but I can't find them anymore. Credits to the creators though! :-)
Next, open your CloudFront distribution and do the following:
Make sure your S3 bucket in the origin is configured as a REST endpoint and not a website endpoint, i.e. it must end in .s3.amazonaws.com and not have the word "website" in the hostname.
Also in the Origin settings, under "S3 bucket access", select "Yes use OAI (bucket can restrict access to only CloudFront)". In the setting below click on "Create OAI" to create a new OAI (unless you have an existing one and know what you're doing). And select "Yes, update the bucket policy" to allow AWS to add the necessary permissions to your OAI.
Finally, open your Behavior of the CloudFront distribution and scroll to the bottom. Under "Function associations", for "Viewer request" select "CloudFront Function" and select your newly created CloudFront function. Save your changes.
And that should be it. With a bit of luck it's a matter of a couple of minutes (realistically more, I know), and above all there is no additional complexity once this is all set up.
Thanks for the useful post. An alternative to putting the plain-text username and password in the code, and to having base64-encoding logic, is to pre-generate the base64-encoded string. One such encoder: https://www.debugbear.com/basic-auth-header-generator
From there the script becomes simpler. The following is for 'user' / 'password':
function handler(event) {
    // Pre-generated value: base64 of "user:password"
    var base64UserPassword = "dXNlcjpwYXNzd29yZA==";

    if (event.request.headers.authorization &&
        event.request.headers.authorization.value === ("Basic " + base64UserPassword)) {
        return event.request;
    }

    return {
        statusCode: 401,
        statusDescription: "Unauthorized",
        headers: {
            "www-authenticate": { value: "Basic" },
        },
    };
}
There is already an answer here describing how to use CloudFront Functions, but I want to add an improved version of the function:
The hardcoded credentials are stored as a SHA256 hash instead of plain text (or base64, which is effectively the same as plain text), which is more secure.
Access can also be allowed for whitelisted global IP addresses:
function handler(event) {
    var crypto = require('crypto');
    var headers = event.request.headers;

    // Global IP addresses that are allowed to bypass Basic authentication
    var wlist_ips = [
        "1.1.1.1",
        "2.2.2.2"
    ];

    // SHA256 hash of the expected "Basic <base64(user:password)>" header value
    var authString = "9c06d532edf0813659ab41d26ab8ba9ca53b985296ee4584a79f34fe9cd743a4";

    if (
        typeof headers.authorization === "undefined" ||
        crypto.createHash('sha256').update(headers.authorization.value).digest('hex') !== authString
    ) {
        if (!wlist_ips.includes(event.viewer.ip)) {
            return {
                statusCode: 401,
                statusDescription: "Unauthorized",
                headers: {
                    "www-authenticate": { value: "Basic" },
                    "x-source-ip": { value: event.viewer.ip }
                }
            };
        }
    }

    return event.request;
}
The command below can be used to get the correct authString hash value for the username user and the password password:
printf "Basic $(printf 'user:password' | base64 -w 0)" | sha256sum | awk '{print$1}'
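If you prefer to generate the same authString value from .NET code instead of the shell, a small sketch (assuming .NET 5+ for Convert.ToHexString) could be:

using System;
using System.Security.Cryptography;
using System.Text;

class BasicAuthHash
{
    static void Main()
    {
        // Same computation as the shell command above:
        // SHA256 of the literal header value "Basic <base64(user:password)>".
        var headerValue = "Basic " + Convert.ToBase64String(Encoding.UTF8.GetBytes("user:password"));
        using var sha256 = SHA256.Create();
        var hash = sha256.ComputeHash(Encoding.UTF8.GetBytes(headerValue));
        Console.WriteLine(Convert.ToHexString(hash).ToLowerInvariant());
    }
}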
I am having some problems attempting to post to an API gateway endpoint.
I have my API Gateway all set up and tested via the console test tool; I am getting results and can verify that the Step Function is in fact executing the request appropriately:
{
    "executionArn": "arn:aws:states:us-east-2:xxxxxxxxxxxx:execution:DevStateMachine-XXXXXXXXXXX:c9047982-e7f8-4b72-98d3-281db0eb4c30",
    "startDate": 1531170720.489
}
I have set up a stage for my dev environment and all looks good there as well; it gives me a URL to post against:
https://xxxxxxxxxx.execute-api.us-east-2.amazonaws.com/dev/assignments
In my C# code I have the HTTP client defined as follows:
public Guid QueueAssignment(AssignmentDTO assignment)
{
    using (var client = new HttpClient())
    {
        var data = JsonConvert.SerializeObject(assignment);
        var content = new StringContent(data);
        var uri = "https://xxxxxxxxxx.execute-api.us-east-2.amazonaws.com/dev/assignments";
        content.Headers.ContentType = new MediaTypeHeaderValue("application/json");

        var response = client.PostAsync(uri, content).Result;
        if (response.IsSuccessStatusCode)
        {
            _logger.Info("Successfully posted to AWS Step Function");
            _logger.Info(response);
        }
        else
        {
            _logger.Error("Error posting to AWS Step Function");
            _logger.Error(response);
        }
    }
}
Every time this post is attempted, I get the following error:
System.Net.WebException: The remote name could not be resolved: 'https://xxxxxxxxxx.execute-api.us-east-2.amazonaws.com'
Is there something I am missing when posting to this URI, or some type of conversion I need to do? I'm kind of at a loss as to where to go on this one.
I want to use an AWS API Gateway as a proxy for fetching files from an S3 bucket and returning them to the client. I'm using a Lambda function to talk to S3 and send the file to the client via the AWS API Gateway. I've read that the best way to do this is to use a "Lambda proxy integration" so the entire request gets piped to Lambda without any modification. But if I do that, then I can't set up an Integration Response for the resulting response from my Lambda function. So all the client gets is JSON.
It seems there should be a way for the API Gateway to take the JSON and transform it into the proper response for the client, but I can't seem to figure out how to make that happen. There are lots of examples that point to manually setting a content type on the response from the API Gateway, but I need to set the Content-Type header to whatever the file type is.
Also for images and binary formats my Lambda function is returning a base64 encoded string and the property isBase64Encoded set to true. When I go to the "Binary Support" section and specify something like image/* as a content type that should be returned as binary, it doesn't work. I only have success by setting the Binary Support content type to */* (aka everything) which won't work for non-binary content types.
What am I missing and why does this seem so difficult?
Turns out API Gateway isn't the problem. My Lambda function wasn't returning proper headers.
For handling binary responses I found you need to set the Binary Support content type to */* (aka everything) and then have your Lambda function return the property isBase64Encoded set to true. Responses that are base64 encoded and indicated as such will be decoded and served as binary, while other responses will be returned as-is.
Here's a simple Gist for a Lambda function that takes a given path and reads the file from S3 and returns it via the API Gateway:
/**
 * This is a simple AWS Lambda function that will look for a given file on S3 and return it
 * passing along all of the headers of the S3 file. To make this available via a URL use
 * API Gateway with an AWS Lambda Proxy Integration.
 *
 * Set the S3_REGION and S3_BUCKET global parameters in AWS Lambda
 * Make sure the Lambda function is passed an object with `{ pathParameters : { proxy: 'path/to/file.jpg' } }` set
 */
var AWS = require('aws-sdk');

exports.handler = function( event, context, callback ) {
    var region = process.env.S3_REGION;
    var bucket = process.env.S3_BUCKET;
    var key = decodeURI( event.pathParameters.proxy );

    // Basic server response
    /*
    var response = {
        statusCode: 200,
        headers: {
            'Content-Type': 'text/plain',
        },
        body: "Hello world!",
    };
    callback( null, response );
    */

    // Fetch from S3
    var s3 = new AWS.S3( Object.assign({ region: region }) );
    return s3.makeUnauthenticatedRequest(
        'getObject',
        { Bucket: bucket, Key: key },
        function(err, data) {
            if (err) {
                return err;
            }

            var isBase64Encoded = false;
            if ( data.ContentType.indexOf('image/') > -1 ) {
                isBase64Encoded = true;
            }

            var encoding = '';
            if ( isBase64Encoded ) {
                encoding = 'base64'
            }

            var resp = {
                statusCode: 200,
                headers: {
                    'Content-Type': data.ContentType,
                },
                body: new Buffer(data.Body).toString(encoding),
                isBase64Encoded: isBase64Encoded
            };

            callback(null, resp);
        }
    );
};
via https://gist.github.com/kingkool68/26aa7a3641a3851dc70ce7f44f589350
I use token auth for my WebApi application.
I have the following ConfigureAuth method in my Startup class:
// Configure the application for OAuth based flow
PublicClientId = "self";
OAuthOptions = new OAuthAuthorizationServerOptions
{
    TokenEndpointPath = new PathString("/Token"),
    Provider = new ApplicationOAuthProvider(PublicClientId),
    AuthorizeEndpointPath = new PathString("/api/Account/ExternalLogin"),
    AccessTokenExpireTimeSpan = TimeSpan.FromDays(14),
    // In production mode set AllowInsecureHttp = false
    AllowInsecureHttp = true
};

// Enable the application to use bearer tokens to authenticate users
app.UseOAuthBearerTokens(OAuthOptions);
and ApplicationOAuthProvider:
public class ApplicationOAuthProvider : OAuthAuthorizationServerProvider
{
    private readonly string _publicClientId;

    public ApplicationOAuthProvider(string publicClientId)
    {
        if (publicClientId == null)
        {
            throw new ArgumentNullException("publicClientId");
        }
        _publicClientId = publicClientId;
    }

    public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
    {
        var userManager = context.OwinContext.GetUserManager<ApplicationUserManager>();
        var user = await userManager.FindAsync(context.UserName, context.Password);
        //ApplicationUser user = new ApplicationUser() { UserName ="a" };

        if (user == null)
        {
            context.SetError("invalid_grant", "The user name or password is incorrect.");
            return;
        }

        ClaimsIdentity oAuthIdentity = await user.GenerateUserIdentityAsync(userManager,
            OAuthDefaults.AuthenticationType);
        ClaimsIdentity cookiesIdentity = await user.GenerateUserIdentityAsync(userManager,
            CookieAuthenticationDefaults.AuthenticationType);

        AuthenticationProperties properties = CreateProperties(user.UserName);
        AuthenticationTicket ticket = new AuthenticationTicket(oAuthIdentity, properties);
        context.Validated(ticket);
        context.Request.Context.Authentication.SignIn(cookiesIdentity);
    }
}
So I should call /Token and pass the credentials to get a token. It works, but I want to create a unit test for it. Is that possible?
The only way to do that is to write an integration test, which asserts the full pipeline from request to response. Before the actual test, you can call the token endpoint to get the token and then use it in the actual test by attaching it to the request. I have a sample which uses MyTested.WebApi here:
Sample
You can do the same without the testing library; this just shows one way to do it. A rough sketch of that approach is below.
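As an illustrative sketch without the testing library (the test user credentials, the /api/Account/UserInfo endpoint, and the NUnit plus Microsoft.Owin.Testing packages are assumptions on my part, not something from the original post), hosting the OWIN pipeline in memory could look like this:

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Owin.Testing;
using Newtonsoft.Json.Linq;
using NUnit.Framework;

[TestFixture]
public class TokenEndpointTests
{
    [Test]
    public async Task Token_Endpoint_Issues_Token_For_Valid_Credentials()
    {
        // Host the whole OWIN pipeline in memory using the application's Startup class.
        using (var server = TestServer.Create<Startup>())
        {
            // The resource owner password grant expects form-urlencoded values.
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "grant_type", "password" },
                { "username", "known-test-user" },      // assumed test user
                { "password", "known-test-password" }   // assumed test password
            });

            var tokenResponse = await server.HttpClient.PostAsync("/Token", form);
            var payload = JObject.Parse(await tokenResponse.Content.ReadAsStringAsync());
            var accessToken = (string)payload["access_token"];

            Assert.IsTrue(tokenResponse.IsSuccessStatusCode);
            Assert.IsNotNull(accessToken);

            // Use the token against a protected endpoint in the same in-memory host.
            var request = new HttpRequestMessage(HttpMethod.Get, "/api/Account/UserInfo");
            request.Headers.Add("Authorization", "Bearer " + accessToken);
            var apiResponse = await server.HttpClient.SendAsync(request);

            Assert.IsTrue(apiResponse.IsSuccessStatusCode);
        }
    }
}

The key point is the same as described above: first post grant_type=password form data to /Token, read access_token from the JSON response, and then attach it as a Bearer header to the request for the protected endpoint.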
I like the idea of pluggable configuration.
For the unit test project, I want to use a specific identity and get predictable data from LDAP. So I use the following line in my unit test method when setting up the HTTP configuration:
config.Filters.Add(new WebApiSetIdentityFilter(config, identityName));
where the filter just "hacks" the identity, replacing the fields I need:
public async Task AuthenticateAsync(HttpAuthenticationContext context, CancellationToken cancellationToken)
{
    // This principal flows throughout the request.
    context.Principal = new GenericPrincipal(new GenericIdentity(this.IdentityName, "LdapAuthentication"), new string[0]);
}
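For completeness, here is a minimal sketch of what the full WebApiSetIdentityFilter might look like. This reconstruction is an assumption based on the IAuthenticationFilter contract (only AuthenticateAsync is shown above, and the unused config parameter merely matches the call shown earlier):

using System.Security.Principal;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http;
using System.Web.Http.Filters;

// Hypothetical reconstruction of the filter referenced above.
public class WebApiSetIdentityFilter : IAuthenticationFilter
{
    public WebApiSetIdentityFilter(HttpConfiguration config, string identityName)
    {
        IdentityName = identityName;
    }

    public string IdentityName { get; }

    public bool AllowMultiple => false;

    public Task AuthenticateAsync(HttpAuthenticationContext context, CancellationToken cancellationToken)
    {
        // This principal flows throughout the request.
        context.Principal = new GenericPrincipal(
            new GenericIdentity(IdentityName, "LdapAuthentication"), new string[0]);
        return Task.CompletedTask;
    }

    public Task ChallengeAsync(HttpAuthenticationChallengeContext context, CancellationToken cancellationToken)
    {
        // Nothing to challenge; the identity is set unconditionally for test runs.
        return Task.CompletedTask;
    }
}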