How to make AWS API Gateway "understand" a trailing slash?

I have an AWS API Gateway configured so that the /auth route calls a Lambda.
However, an existing product calls /auth/ with a trailing slash, which ends up as a 404 error.
What can I do so that /auth/ URL goes to the same route as /auth in the API Gateway?

It turns out that the way to solve it is to configure the path like so (Terraform code from the API Gateway config):
WAS
"GET /auth" = {
integration_type = "AWS_PROXY"
integration_http_method = "POST"
payload_format_version = "2.0"
lambda_arn = module.my-lambda.this_lambda_function_invoke_arn
}
(this makes /auth work)
NOW
"GET /auth/{proxy+}" = {
integration_type = "AWS_PROXY"
integration_http_method = "POST"
payload_format_version = "2.0"
lambda_arn = module.my-lambda.this_lambda_function_invoke_arn
}
(this makes /auth/ work and breaks /auth).

You could configure the route as ANY /{proxy+} so that any HTTP method (GET, POST, PATCH, DELETE) for any route is directed to the same handler. Alternatively, you could specify the HTTP method to narrow it down, like POST /{proxy+}.
So...
What can I do so that /auth/ URL goes to the same route as /auth in the API Gateway?
Technically speaking, this solves your problem, but now it is up to you to differentiate routes and know what to do.
As far as I know, this is the only way to achieve it with API Gateway, since, according to some RFC out there, "/auth" and "/auth/" are actually different routes, and API Gateway complies with that RFC.
This is what I ended up doing (using the ANY /{proxy+}) and, if it is any help, this is the code I have to handle my routes and know what to do:
// Queue.ts
class Queue<T = any> {
private items: T[];
constructor(items?: T[]) {
this.items = items || [];
}
get length(): number {
return this.items.length;
}
enqueue(element: T): void {
this.items.push(element);
}
dequeue(): T | undefined {
return this.items.shift();
}
peek(): T | undefined {
if (this.isEmpty()) return undefined;
return this.items[0];
}
isEmpty(): boolean {
return this.items.length === 0;
}
}
export default Queue;
// PathMatcher.ts
import Queue from "./Queue";
type PathMatcherResult = {
isMatch: boolean;
namedParameters: Record<string, string>;
};
const NAMED_PARAMETER_REGEX = /(?!\w+:)\{(\w+)\}/;
class PathMatcher {
static match(pattern: string, path: string): PathMatcherResult {
const patternParts = new Queue<string>(this.trim(pattern).split("/"));
const pathParts = new Queue<string>(this.trim(path).split("/"));
const namedParameters: Record<string, string> = {};
const noMatch = { isMatch: false, namedParameters: {} };
if (patternParts.length !== pathParts.length) return noMatch;
while (patternParts.length > 0) {
const patternPart = patternParts.dequeue()!;
const pathPart = pathParts.dequeue()!;
if (patternPart === "*") continue;
if (patternPart.toLowerCase() === pathPart.toLowerCase()) continue;
if (NAMED_PARAMETER_REGEX.test(patternPart)) {
const [name, value] = this.extractNamedParameter(patternPart, pathPart);
namedParameters[name] = value;
continue;
}
return noMatch;
}
return { isMatch: true, namedParameters };
}
private static trim(path: string) {
return path.replace(/^[\s\/]+/, "").replace(/[\s\/]+$/, "");
}
private static extractNamedParameter(
patternPart: string,
pathPart: string
): [string, string] {
const name = patternPart.replace(NAMED_PARAMETER_REGEX, "$1");
let value = pathPart;
if (value.includes(":")) value = value.substring(value.indexOf(":") + 1);
return [name, value];
}
}
export default PathMatcher;
export { PathMatcherResult };
Then, in my lambda handler, I do:
const httpMethod = event.requestContext.http.method.toUpperCase();
const currentRoute = `${httpMethod} ${event.rawPath}`;
// This will match both:
// GET /products/asdasdasdas
// GET /products/asdasdasdas/
const match = PathMatcher.match("GET /products/{id}", currentRoute);
if (match.isMatch) {
// Here, the id parameter has been extracted for you
const productId = match.namedParameters.id;
}
Of course you can build a registry of routes and their respective handler functions and automate that matching process and passing of parameters, but that is the easy part.
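If it helps, here is a rough sketch of what such a registry could look like on top of the PathMatcher above (the route table, handler names, and event shape are illustrative, not my exact code):
// Router.ts (sketch)
import PathMatcher from "./PathMatcher";
type Handler = (params: Record<string, string>, event: any) => Promise<unknown>;
// Hypothetical route table mapping "METHOD /pattern" to a handler
const routes: Record<string, Handler> = {
"GET /products/{id}": async (params) => ({ productId: params.id }),
"GET /products": async () => ({ items: [] }),
};
export async function dispatch(event: any) {
const httpMethod = event.requestContext.http.method.toUpperCase();
const currentRoute = `${httpMethod} ${event.rawPath}`;
for (const [pattern, handler] of Object.entries(routes)) {
// Trailing slashes are already stripped by PathMatcher.trim, so /products/123/ matches too
const match = PathMatcher.match(pattern, currentRoute);
if (match.isMatch) return handler(match.namedParameters, event);
}
return { statusCode: 404, body: "Not found" };
}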

Related

Apollo Gateway: Forward Subgraph `set-cookie` Header Through the Gateway to the Client

I have a subgraph microservice that handles sessions. We store our sessions via cookies that the subgraph creates and sets via the set-cookie header. The only issue is that my gateway does not seem to be forwarding the set-cookie header from the subgraph to the client.
Here is the code for my gateway:
const { ApolloServer } = require('apollo-server');
const { ApolloGateway, RemoteGraphQLDataSource } = require('@apollo/gateway');
const { readFileSync } = require('fs');
const supergraphSdl = readFileSync('./gateway/supergraph.graphql').toString();
class CookieDataSource extends RemoteGraphQLDataSource {
didReceiveResponse({ response, request, context }) {
const cookie = response.http.headers.get('set-cookie');
console.log("Cookie:", cookie)
return response;
}
}
const gateway = new ApolloGateway({
supergraphSdl,
buildService({url}) {
return new CookieDataSource({url});
}
});
const server = new ApolloServer({
gateway,
cors: {
origin: ["http://localhost:3000", "https://studio.apollographql.com"],
credentials: true
},
csrfPrevention: true,
});
server.listen().then(({ url }) => {
console.log(`🚀 Gateway ready at ${url}`);
}).catch(err => {console.error(err)});
Version info:
"@apollo/gateway": "^2.1.2",
"apollo-server": "^3.10.2",
I can confirm that the subgraph is sending back a set-cookie header, however, it is not being passed through to the client.
Thank you!
I ended up resolving the issue by creating a gateway data source that stores the subgraph's set-cookie header on the context value, plus a server plugin that passes the header from the context value to the response headers.
import { GatewayGraphQLResponse, GatewayGraphQLRequestContext } from '@apollo/server-gateway-interface';
import { RemoteGraphQLDataSource } from '@apollo/gateway';
import { ApolloServerPlugin, GraphQLRequestContext, GraphQLRequestListener } from '@apollo/server';
interface ServerContext {
passthrough_cookies?: string
}
export class CookieProcessorDataSource extends RemoteGraphQLDataSource {
didReceiveResponse({response, context}: Required<Pick<GatewayGraphQLRequestContext<Record<string, any>>, 'request' | 'response' | 'context'>>): GatewayGraphQLResponse | Promise<GatewayGraphQLResponse> {
context.passthrough_cookies = response.http?.headers.get('set-cookie');
return response;
}
}
export class CookieServerListener implements GraphQLRequestListener<ServerContext> {
public willSendResponse({contextValue, response}: GraphQLRequestContext<ServerContext>): Promise<void> {
if (contextValue.passthrough_cookies !== undefined) {
response.http.headers.set('set-cookie', contextValue.passthrough_cookies);
}
return Promise.resolve()
}
}
export class CookieServerPlugin implements ApolloServerPlugin<ServerContext> {
async requestDidStart() {
return new CookieServerListener();
}
}
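For completeness, this is roughly how the two pieces get wired together (a sketch assuming Apollo Server 4 with startStandaloneServer; the file name and supergraph path are illustrative):
import { readFileSync } from 'fs';
import { ApolloServer } from '@apollo/server';
import { startStandaloneServer } from '@apollo/server/standalone';
import { ApolloGateway } from '@apollo/gateway';
import { CookieProcessorDataSource, CookieServerPlugin } from './cookiePassthrough';
const supergraphSdl = readFileSync('./gateway/supergraph.graphql').toString();
const gateway = new ApolloGateway({
supergraphSdl,
buildService({ url }) {
// Every subgraph call goes through the data source that captures set-cookie
return new CookieProcessorDataSource({ url });
},
});
const server = new ApolloServer<{ passthrough_cookies?: string }>({
gateway,
// The plugin copies the captured cookie onto the outgoing response
plugins: [new CookieServerPlugin()],
});
startStandaloneServer(server, {
listen: { port: 4000 },
// Each request gets a fresh mutable context object the data source can write to
context: async () => ({}),
}).then(({ url }) => {
console.log(`🚀 Gateway ready at ${url}`);
});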

User logging automation via CloudWatch

I have a task at my company where I have to do a monthly user access review via CloudWatch.
This is a manual process where I have to go to CloudWatch > CloudWatch Logs > Log groups > /var/log/example_access > example-instance and then document the logs for a list of users from a randomly generated date. The example instance is a certificate manager box which is linked to all of our production fleet nodes. I also have to document which commands each user ran on specific nodes.
I'm wondering if there is any way I can automate this process and dump it into Word docs? It's getting painful as the list of users/employees keeps increasing. Thanks
Sure there is. I don't reckon you want Word docs though; I'd launch an Elasticsearch instance on AWS and then give the users who want the data Kibana access.
Also, circulating Word docs in an org is bad juju; depending on your Windows/Office version it carries risks.
Add this Lambda function, then go into CloudWatch and add it as a subscription filter on the right log groups (there's a small sketch of doing that with the SDK after the function below).
Note that you may get missing log entries if they're not logged in JSON format or have funky formatting; if you're using a standard log format it should work.
/* eslint-disable */
// Eslint disabled as this is adapted AWS code.
const zlib = require('zlib')
const elasticsearch = require('elasticsearch')
/**
* This is an example function to stream CloudWatch logs to ElasticSearch.
* @param event
* @param context
* @param callback
* @param utils
*/
export default (event, context, callback) => {
context.callbackWaitsForEmptyEventLoop = true
const payload = Buffer.from(event.awslogs.data, 'base64')
const esClient = new elasticsearch.Client({
httpAuth: process.env.esAuth, // your params here
host: process.env.esEndpoint, // your params here.
})
zlib.gunzip(payload, (err, result) => {
if (err) {
return callback(err)
}
const logObject = JSON.parse(result.toString('utf8'))
const elasticsearchBulkData = transform(logObject)
// Control messages produce no bulk body; acknowledge them and stop here.
if (!elasticsearchBulkData) {
return callback(null, 'success')
}
const params = { body: [] }
params.body.push(elasticsearchBulkData)
// Only report success once the bulk request has actually completed.
esClient.bulk(params, (err, resp) => {
if (err) {
return callback(err)
}
callback(null, 'success')
})
})
}
function transform(payload) {
if (payload.messageType === 'CONTROL_MESSAGE') {
return null
}
let bulkRequestBody = ''
payload.logEvents.forEach((logEvent) => {
const timestamp = new Date(1 * logEvent.timestamp)
// index name format: cwl-YYYY.MM.DD
const indexName = [
`cwl-${process.env.NODE_ENV}-${timestamp.getUTCFullYear()}`, // year
(`0${timestamp.getUTCMonth() + 1}`).slice(-2), // month
(`0${timestamp.getUTCDate()}`).slice(-2), // day
].join('.')
const source = buildSource(logEvent.message, logEvent.extractedFields)
source['@id'] = logEvent.id
source['@timestamp'] = new Date(1 * logEvent.timestamp).toISOString()
source['@message'] = logEvent.message
source['@owner'] = payload.owner
source['@log_group'] = payload.logGroup
source['@log_stream'] = payload.logStream
const action = { index: {} }
action.index._index = indexName
action.index._type = 'lambdaLogs'
action.index._id = logEvent.id
bulkRequestBody += `${[
JSON.stringify(action),
JSON.stringify(source),
].join('\n')}\n`
})
return bulkRequestBody
}
function buildSource(message, extractedFields) {
if (extractedFields) {
const source = {}
for (const key in extractedFields) {
if (extractedFields.hasOwnProperty(key) && extractedFields[key]) {
const value = extractedFields[key]
if (isNumeric(value)) {
source[key] = 1 * value
continue
}
const jsonSubString = extractJson(value)
if (jsonSubString !== null) {
source[`$${key}`] = JSON.parse(jsonSubString)
}
source[key] = value
}
}
return source
}
const jsonSubString = extractJson(message)
if (jsonSubString !== null) {
return JSON.parse(jsonSubString)
}
return {}
}
function extractJson(message) {
const jsonStart = message.indexOf('{')
if (jsonStart < 0) return null
const jsonSubString = message.substring(jsonStart)
return isValidJson(jsonSubString) ? jsonSubString : null
}
function isValidJson(message) {
try {
JSON.parse(message)
} catch (e) { return false }
return true
}
function isNumeric(n) {
return !isNaN(parseFloat(n)) && isFinite(n)
}
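If you prefer to wire the subscription filter up programmatically instead of clicking through the console, here is a minimal sketch using the AWS SDK for JavaScript v3 (the region, log group, filter name, and function ARN are placeholders; the function also needs a resource-based permission allowing CloudWatch Logs to invoke it):
import { CloudWatchLogsClient, PutSubscriptionFilterCommand } from "@aws-sdk/client-cloudwatch-logs";
import { LambdaClient, AddPermissionCommand } from "@aws-sdk/client-lambda";
const region = "us-east-1"; // placeholder
const logGroupName = "/var/log/example_access"; // placeholder log group
const functionArn = "arn:aws:lambda:us-east-1:123456789012:function:cwl-to-es"; // placeholder
async function wireSubscription() {
// Allow CloudWatch Logs to invoke the streaming function
await new LambdaClient({ region }).send(new AddPermissionCommand({
FunctionName: functionArn,
StatementId: "cwl-invoke",
Action: "lambda:InvokeFunction",
Principal: `logs.${region}.amazonaws.com`,
}));
// Forward every log event in the group to the function
await new CloudWatchLogsClient({ region }).send(new PutSubscriptionFilterCommand({
logGroupName,
filterName: "stream-to-elasticsearch",
filterPattern: "", // empty pattern = forward everything
destinationArn: functionArn,
}));
}
wireSubscription().catch(console.error);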
Now you should have your logs going into Elasticsearch. Go into Kibana and you can search by date, and you can even write endpoints to allow people to query their own data!
The easy way is to just give stakeholders Kibana access and let them check it out.
Might not be exactly what you wanted, but I reckon it'll work better.

How to remove the .html extension using AWS Lambda & CloudFront

I have my website's source code stored in AWS S3 and I'm using AWS CloudFront to deliver my content.
I want to use AWS Lambda@Edge to remove the .html extension from all the web links that are served through CloudFront.
My required output should be www.example.com/foo instead of www.example.com/foo.html, or example.com/foo1 instead of example.com/foo1.html.
Please help me implement this, as I can't find a clear solution to use. I've referred to point 3 mentioned in this article: https://forums.aws.amazon.com/thread.jspa?messageID=796961&tstart=0. But it's not clear what I need to do.
Please find below the Lambda code; how can I modify it?
const config = {
suffix: '.html',
appendToDirs: 'index.html',
removeTrailingSlash: false,
};
const regexSuffixless = /\/[^/.]+$/; // e.g. "/some/page" but not "/", "/some/" or "/some.jpg"
const regexTrailingSlash = /.+\/$/; // e.g. "/some/" or "/some/page/" but not root "/"
exports.handler = function handler(event, context, callback) {
const { request } = event.Records[0].cf;
const { uri } = request;
const { suffix, appendToDirs, removeTrailingSlash } = config;
// Append ".html" to origin request
if (suffix && uri.match(regexSuffixless)) {
request.uri = uri + suffix;
callback(null, request);
return;
}
// Append "index.html" to origin request
if (appendToDirs && uri.match(regexTrailingSlash)) {
request.uri = uri + appendToDirs;
callback(null, request);
return;
}
// Redirect (301) non-root requests ending in "/" to URI without trailing slash
if (removeTrailingSlash && uri.match(/.+\/$/)) {
const response = {
// body: '',
// bodyEncoding: 'text',
headers: {
'location': [{
key: 'Location',
value: uri.slice(0, -1)
}]
},
status: '301',
statusDescription: 'Moved Permanently'
};
callback(null, response);
return;
}
// If nothing matches, return request unchanged
callback(null, request);
};
Please help me remove the .html extension from my website, and let me know what updated code I need to paste into my AWS Lambda.
Thanks in advance!!

CDK override bind when using LambdaIntegration

When using the LambdaIntegration class, the bind function adds permissions to the Lambda automatically:
bind(method) {
super.bind(method);
const principal = new iam.ServicePrincipal('apigateway.amazonaws.com');
const desc = `${method.restApi.node.uniqueId}.${method.httpMethod}.${method.resource.path.replace(/\//g, '.')}`;
this.handler.addPermission(`ApiPermission.${desc}`, {
principal,
scope: method,
sourceArn: method.methodArn,
});
// add permission to invoke from the console
if (this.enableTest) {
this.handler.addPermission(`ApiPermission.Test.${desc}`, {
principal,
scope: method,
sourceArn: method.testMethodArn,
});
}
}
Currently I create multiple API Gateways, 90% of which trigger the same Lambda function, and this causes the following error:
The final policy size (XXX) is bigger than the limit (20480)
More info here.
My goal is to override the bind function with my own function and handle the permissions myself, with something like this:
arn:aws:execute-api:{AWS_REGION}:{AWS_ACCOUNT}:{API_ID}/*/*/*
I know this is not a best practice but right now this is the only working workaround.
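For reference, granting that wildcard permission once on the function looks roughly like this in CDK (a sketch; myLambda is a placeholder for the shared function and api for the RestApi):
import * as iam from '@aws-cdk/aws-iam';
// One statement covering any stage, method and path of the API,
// instead of one statement per method: arn:aws:execute-api:{region}:{account}:{apiId}/*/*/*
myLambda.addPermission('ApiGatewayInvokeAny', {
principal: new iam.ServicePrincipal('apigateway.amazonaws.com'),
action: 'lambda:InvokeFunction',
sourceArn: api.arnForExecuteApi('*', '/*', '*'),
});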
This is the new class I created:
class customLambdaIntegration extends apigateway.LambdaIntegration{
myHandler: lambda.IFunction;
constructor(handler: lambda.IFunction, options?: LambdaIntegrationOptions) {
super(handler, options);
this.myHandler = handler;
}
bind(method: Method) {
const principal = new iam.ServicePrincipal('apigateway.amazonaws.com');
const desc = `${method.restApi.node.uniqueId}.${method.httpMethod}.${method.resource.path.replace(/\//g, '.')}`;
this.myHandler.addPermission(`ApiPermission.${desc}`, {
principal,
scope: method,
sourceArn: method.methodArn.toString().replace(api.deploymentStage.stageName,'*')
});
}
}
Getting this error when running cdk list:
if (!this.scope) { throw new Error('AwsIntegration must be used in API'); }
The problematic piece of code which throws the error:
class AwsIntegration extends integration_1.Integration {
constructor(props) {
const backend = props.subdomain ? `${props.subdomain}.${props.service}` : props.service;
const type = props.proxy ? integration_1.IntegrationType.AWS_PROXY : integration_1.IntegrationType.AWS;
const { apiType, apiValue } = util_1.parseAwsApiCall(props.path, props.action, props.actionParameters);
super({
type,
integrationHttpMethod: props.integrationHttpMethod || 'POST',
uri: cdk.Lazy.stringValue({ produce: () => {
if (!this.scope) {
throw new Error('AwsIntegration must be used in API');
}
return cdk.Stack.of(this.scope).formatArn({
service: 'apigateway',
account: backend,
resource: apiType,
sep: '/',
resourceName: apiValue,
});
} }),
options: props.options,
});
}
bind(method) {
this.scope = method;
}
}
LambdaIntegration documentation.
Any help will be much appreciated.
In case this is helpful to someone, I opened a feature request to implement my function and manually handle the Lambda permission:
https://github.com/aws/aws-cdk/issues/5774
Found the issue: this['scope'] = method; was missing inside the bind function, since the AwsIntegration class sets this.scope = method in its own bind.
Full code:
class customLambdaIntegration extends apigateway.LambdaIntegration{
// myScope : cdk.IConstruct;
myHandler: lambda.IFunction;
myOptions: apigateway.LambdaIntegrationOptions | undefined;
constructor(handler: lambda.IFunction, options?: LambdaIntegrationOptions) {
super(handler, options);
this.myHandler = handler;
this.myOptions = options;
}
bind(method: Method) {
this['scope'] = method;
const principal = new iam.ServicePrincipal('apigateway.amazonaws.com');
const desc = `${method.restApi.node.uniqueId}.${method.httpMethod}.${method.resource.path.replace(/\//g, '.')}`;
this.myHandler.addPermission(`ApiPermission.${desc}`, {
principal,
scope: method,
sourceArn: method.methodArn.toString().replace(api.deploymentStage.stageName,'*')
});
}
}
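And a short usage sketch (assuming api is an existing RestApi and myLambda the shared function; the names are placeholders):
// The custom integration's bind() now adds the stage-wildcarded permission itself
const integration = new customLambdaIntegration(myLambda);
const auth = api.root.addResource('auth');
auth.addMethod('GET', integration); // GET /auth
auth.addResource('{proxy+}').addMethod('ANY', integration); // anything under /auth/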

How to set content-length-range for S3 browser upload via boto

The Issue
I'm trying to upload images directly to S3 from the browser and am getting stuck applying the content-length-range permission via boto's S3Connection.generate_url method.
There's plenty of information about signing POST forms, setting policies in general and even a heroku method for doing a similar submission. What I can't figure out for the life of me is how to add the "content-length-range" to the signed url.
With boto's generate_url method (example below), I can specify policy headers and have got it working for normal uploads. What I can't seem to add is a policy restriction on max file size.
Server Signing Code
## django request handler
from boto.s3.connection import S3Connection
from django.conf import settings
from django.http import HttpResponse
import mimetypes
import json
conn = S3Connection(settings.S3_ACCESS_KEY, settings.S3_SECRET_KEY)
object_name = request.GET['objectName']
content_type = mimetypes.guess_type(object_name)[0]
signed_url = conn.generate_url(
expires_in = 300,
method = "PUT",
bucket = settings.BUCKET_NAME,
key = object_name,
headers = {'Content-Type': content_type, 'x-amz-acl':'public-read'})
return HttpResponse(json.dumps({'signedUrl': signed_url}))
On the client, I'm using the ReactS3Uploader, which is based on tadruj's s3upload.js script. It shouldn't affect anything, as it seems to just pass along whatever the signed URL covers, but it's copied below for simplicity.
ReactS3Uploader JS Code (simplified)
uploadFile: function() {
new S3Upload({
fileElement: this.getDOMNode(),
signingUrl: '/api/get_signing_url/',
onProgress: this.props.onProgress,
onFinishS3Put: this.props.onFinish,
onError: this.props.onError
});
},
render: function() {
return this.transferPropsTo(
React.DOM.input({type: 'file', onChange: this.uploadFile})
);
}
S3upload.js
S3Upload.prototype.signingUrl = '/sign-s3';
S3Upload.prototype.fileElement = null;
S3Upload.prototype.onFinishS3Put = function(signResult) {
return console.log('base.onFinishS3Put()', signResult.publicUrl);
};
S3Upload.prototype.onProgress = function(percent, status) {
return console.log('base.onProgress()', percent, status);
};
S3Upload.prototype.onError = function(status) {
return console.log('base.onError()', status);
};
function S3Upload(options) {
if (options == null) {
options = {};
}
for (var option in options) {
if (options.hasOwnProperty(option)) {
this[option] = options[option];
}
}
this.handleFileSelect(this.fileElement);
}
S3Upload.prototype.handleFileSelect = function(fileElement) {
this.onProgress(0, 'Upload started.');
var files = fileElement.files;
var result = [];
for (var i=0; i < files.length; i++) {
var f = files[i];
result.push(this.uploadFile(f));
}
return result;
};
S3Upload.prototype.createCORSRequest = function(method, url) {
var xhr = new XMLHttpRequest();
if (xhr.withCredentials != null) {
xhr.open(method, url, true);
}
else if (typeof XDomainRequest !== "undefined") {
xhr = new XDomainRequest();
xhr.open(method, url);
}
else {
xhr = null;
}
return xhr;
};
S3Upload.prototype.executeOnSignedUrl = function(file, callback) {
var xhr = new XMLHttpRequest();
xhr.open('GET', this.signingUrl + '&objectName=' + file.name, true);
xhr.overrideMimeType && xhr.overrideMimeType('text/plain; charset=x-user-defined');
xhr.onreadystatechange = function() {
if (xhr.readyState === 4 && xhr.status === 200) {
var result;
try {
result = JSON.parse(xhr.responseText);
} catch (error) {
this.onError('Invalid signing server response JSON: ' + xhr.responseText);
return false;
}
return callback(result);
} else if (xhr.readyState === 4 && xhr.status !== 200) {
return this.onError('Could not contact request signing server. Status = ' + xhr.status);
}
}.bind(this);
return xhr.send();
};
S3Upload.prototype.uploadToS3 = function(file, signResult) {
var xhr = this.createCORSRequest('PUT', signResult.signedUrl);
if (!xhr) {
this.onError('CORS not supported');
} else {
xhr.onload = function() {
if (xhr.status === 200) {
this.onProgress(100, 'Upload completed.');
return this.onFinishS3Put(signResult);
} else {
return this.onError('Upload error: ' + xhr.status);
}
}.bind(this);
xhr.onerror = function() {
return this.onError('XHR error.');
}.bind(this);
xhr.upload.onprogress = function(e) {
var percentLoaded;
if (e.lengthComputable) {
percentLoaded = Math.round((e.loaded / e.total) * 100);
return this.onProgress(percentLoaded, percentLoaded === 100 ? 'Finalizing.' : 'Uploading.');
}
}.bind(this);
}
xhr.setRequestHeader('Content-Type', file.type);
xhr.setRequestHeader('x-amz-acl', 'public-read');
return xhr.send(file);
};
S3Upload.prototype.uploadFile = function(file) {
return this.executeOnSignedUrl(file, function(signResult) {
return this.uploadToS3(file, signResult);
}.bind(this));
};
module.exports = S3Upload;
Any help would be greatly appreciated here as I've been banging my head against the wall for quite a few hours now.
You can't add it to a signed PUT URL. This only works with the signed policy that goes along with a POST because the two mechanisms are very different.
Signing a URL is a lossy (for lack of a better term) process. You generate the string to sign, then sign it. You send the signature with the request, but you discard and do not send the string to sign. S3 then reconstructs what the string to sign should have been, for the request it receives, and generates the signature you should have sent with that request. There's only one correct answer, and S3 doesn't know what string you actually signed. The signature matches, or doesn't, either because you built the string to sign incorrectly, or your credentials don't match, and it doesn't know which of these possibilities is the case. It only knows, based on the request you sent, the string you should have signed and what the signature should have been.
With that in mind, for content-length-range to work with a signed URL, the client would need to actually send such a header with the request... which doesn't make a lot of sense.
Conversely, with POST uploads, there is more information communicated to S3. It's not only going on whether your signature is valid, it also has your policy document... so it's possible to include directives -- policies -- with the request. They are protected from alteration by the signature, but they aren't encrypted or hashed -- the entire policy is readable by S3 (so, by contrast, we'll call this the opposite, "lossless.")
This difference is why you can't do what you are trying to do with PUT while you can with POST.
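To make that concrete, here is a minimal sketch of the POST-based flow that can carry content-length-range, shown with the AWS SDK for JavaScript v3 since the rest of this upload stack is browser JS (bucket, key, and limits are placeholders; boto can produce the equivalent signed POST policy on the Python side):
import { S3Client } from "@aws-sdk/client-s3";
import { createPresignedPost } from "@aws-sdk/s3-presigned-post";
const client = new S3Client({ region: "us-east-1" }); // placeholder region
export async function getUploadForm(objectName: string) {
// The policy generated here travels with the form, which is what lets S3 enforce the size cap
return createPresignedPost(client, {
Bucket: "my-upload-bucket", // placeholder bucket
Key: objectName,
Conditions: [
["content-length-range", 0, 10 * 1024 * 1024], // reject bodies over 10 MB
{ acl: "public-read" },
],
Fields: { acl: "public-read" },
Expires: 300, // seconds
});
}
The client then POSTs the returned fields plus the file to the returned URL, instead of PUTting to a signed URL.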