When using the LambdaIntegration class, the bind function adds the permission to the Lambda automatically:
bind(method) {
  super.bind(method);
  const principal = new iam.ServicePrincipal('apigateway.amazonaws.com');
  const desc = `${method.restApi.node.uniqueId}.${method.httpMethod}.${method.resource.path.replace(/\//g, '.')}`;
  this.handler.addPermission(`ApiPermission.${desc}`, {
    principal,
    scope: method,
    sourceArn: method.methodArn,
  });
  // add permission to invoke from the console
  if (this.enableTest) {
    this.handler.addPermission(`ApiPermission.Test.${desc}`, {
      principal,
      scope: method,
      sourceArn: method.testMethodArn,
    });
  }
}
Currently, I create multiple API Gateways, about 90% of which trigger the same Lambda function, and this causes the following error:
The final policy size (XXX) is bigger than the limit (20480)
More info here.
My goal is to override the bind function with my own implementation and handle the permissions myself, with a source ARN something like this:
arn:aws:execute-api:{AWS_REGION}:{AWS_ACCOUNT}:{API_ID}/*/*/*
I know this is not a best practice, but right now this is the only working workaround.
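For illustration, adding a single wildcarded permission by hand (placeholder names fn and api, inside the stack; this is just the goal, not the final solution below) would look roughly like this:

// Rough sketch of the intended single, wildcarded permission
// (`fn` and `api` are placeholders for the function and RestApi).
fn.addPermission('ApiGatewayInvoke', {
  principal: new iam.ServicePrincipal('apigateway.amazonaws.com'),
  sourceArn: `arn:aws:execute-api:${this.region}:${this.account}:${api.restApiId}/*/*/*`,
});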
This is the new class I created:
class customLambdaIntegration extends apigateway.LambdaIntegration {
  myHandler: lambda.IFunction;

  constructor(handler: lambda.IFunction, options?: LambdaIntegrationOptions) {
    super(handler, options);
    this.myHandler = handler;
  }

  bind(method: Method) {
    const principal = new iam.ServicePrincipal('apigateway.amazonaws.com');
    const desc = `${method.restApi.node.uniqueId}.${method.httpMethod}.${method.resource.path.replace(/\//g, '.')}`;
    this.myHandler.addPermission(`ApiPermission.${desc}`, {
      principal,
      scope: method,
      sourceArn: method.methodArn.toString().replace(api.deploymentStage.stageName, '*')
    });
  }
}
Getting this error when running cdk list:
if (!this.scope) { throw new Error('AwsIntegration must be used in API'); }
Problematic piece of code which throws the error:
class AwsIntegration extends integration_1.Integration {
  constructor(props) {
    const backend = props.subdomain ? `${props.subdomain}.${props.service}` : props.service;
    const type = props.proxy ? integration_1.IntegrationType.AWS_PROXY : integration_1.IntegrationType.AWS;
    const { apiType, apiValue } = util_1.parseAwsApiCall(props.path, props.action, props.actionParameters);
    super({
      type,
      integrationHttpMethod: props.integrationHttpMethod || 'POST',
      uri: cdk.Lazy.stringValue({ produce: () => {
        if (!this.scope) {
          throw new Error('AwsIntegration must be used in API');
        }
        return cdk.Stack.of(this.scope).formatArn({
          service: 'apigateway',
          account: backend,
          resource: apiType,
          sep: '/',
          resourceName: apiValue,
        });
      } }),
      options: props.options,
    });
  }
  bind(method) {
    this.scope = method;
  }
}
LambdaIntegration documentation.
Any help will be much appreciated.
For whoever this might be helpful to, I opened a feature request to allow implementing my own function and manually handling the Lambda permission:
https://github.com/aws/aws-cdk/issues/5774
Found the issue: this['scope'] = method; was missing inside the bind function, since the AwsIntegration class sets this.scope = method in its own bind.
Full code:
class customLambdaIntegration extends apigateway.LambdaIntegration {
  // myScope: cdk.IConstruct;
  myHandler: lambda.IFunction;
  myOptions: apigateway.LambdaIntegrationOptions | undefined;

  constructor(handler: lambda.IFunction, options?: LambdaIntegrationOptions) {
    super(handler, options);
    this.myHandler = handler;
    this.myOptions = options;
  }

  bind(method: Method) {
    // AwsIntegration stores the bound method as this.scope, so set it here as well
    this['scope'] = method;
    const principal = new iam.ServicePrincipal('apigateway.amazonaws.com');
    const desc = `${method.restApi.node.uniqueId}.${method.httpMethod}.${method.resource.path.replace(/\//g, '.')}`;
    // Wildcard the stage so one permission covers every deployment of the API
    this.myHandler.addPermission(`ApiPermission.${desc}`, {
      principal,
      scope: method,
      sourceArn: method.methodArn.toString().replace(api.deploymentStage.stageName, '*')
    });
  }
}
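For context, a minimal sketch of wiring this class into a REST API might look like the following (api and fn are placeholder names for the RestApi and the shared Lambda function, not from the original post):

// Minimal usage sketch: `api` and `fn` are placeholders defined elsewhere in the stack.
// Every method reuses the same integration, so the function's resource policy only
// grows with one wildcarded statement per method rather than one per stage.
const integration = new customLambdaIntegration(fn);

const items = api.root.addResource('items');
items.addMethod('GET', integration);
items.addMethod('POST', integration);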
I am trying to invoke Lambda B via another Lambda A. The call to Lambda A is triggered via an API Gateway endpoint. Using curl, the call is made as below:
curl "$#" -L --cookie ~/.midway/cookie --cookie-jar ~/.midway/cookie -X GET -H "Content-Type: application/json" -s https://us-west-2.beta.api.ihmsignage.jihmcdo.com/api/getSignInstances
The above invokes Lambda A, which handles the request and calls the main handler. Logic for the main handler:
const main = (event: any, context: any, lambdaCallback: Function) => {
  console.log(JSON.stringify(event, null, 2));
  console.log(JSON.stringify(process.env, null, 2));
  if (event.path.startsWith('/getUserInfo')) {
    const alias = event.headers['X-FORWARDED-USER'];
    const userData = JSON.stringify({ alias });
    console.info('UserData: ', userData);
    return sendResponse(200, userData, lambdaCallback); // This works perfectly fine, with API Gateway returning a proper response
  } else if (event.path.startsWith('/api')) {
    console.info('Invoke lambda initiate');
    invokeLambda(event, context, lambdaCallback); // This somehow invokes lambda B twice
  } else {
    return sendResponse(404, '{"message": "Resource not found"}', lambdaCallback);
  }
};
There is also a wrapper to make sure a proper response is sent back to API Gateway:
export const handler = (event: any, context: any, lambdaCallback: Function) => {
  const wrappedCallback = (error: any, success: any) => {
    success.headers['Access-Control-Allow-Origin'] = getAllowedOrigin(event);
    success.headers['Access-Control-Allow-Credentials'] = true;
    success.headers['Access-Control-Allow-Methods'] = 'GET,PUT,DELETE,HEAD,POST,OPTIONS';
    success.headers['Access-Control-Allow-Headers'] =
      'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token,Access-Control-Allow-Origin,Access-Control-Allow-Methods,X-PINGOVER';
    success.headers['Vary'] = 'Accept-Encoding, Origin';
    console.info('Logging success--', success);
    return lambdaCallback(error, success);
  };
  // Append headers
  return main(event, context, wrappedCallback);
};
And finally, this is how Lambda B is invoked within Lambda A:
const invokeLambda = async (event: any, context: any, lambdaCallback: Function) => {
  context.callbackWaitsForEmptyEventLoop = false;
  if (!process.env.INVOKE_ARN) {
    console.error('Missing environment variable INVOKE_ARN');
    return sendResponse(500, '{"message":"internal server error"}', lambdaCallback);
  }
  const params = {
    FunctionName: process.env.INVOKE_ARN,
    InvocationType: 'RequestResponse',
    Payload: JSON.stringify(event),
  };
  event.headers = event.headers || [];
  const username = event.headers['X-FORWARDED-USER'];
  const token = event.headers['X-CLIENT-VERIFY'];
  if (!username || !token) {
    console.log('No username or token was found');
    return sendResponse(401, '{"message":"You shall not pass"}', lambdaCallback);
  }
  try {
    const data = await lambda.invoke(params).promise();
    console.info('Got Request router lambda data: ', data);
    const invocationResponse = data?.Payload;
    console.info('Got invocationResponse: ', invocationResponse);
    return lambdaCallback(null, JSON.parse(invocationResponse as string));
  } catch (err) {
    console.error('Error while running starlet: ', err);
    return sendResponse(500, '{"message":"internal server error"}', lambdaCallback);
  }
};
Lambda B:
const main = async (event: any = {}) => {
  // Log details
  console.log('Request router lambda invoked');
  console.log(JSON.stringify(event, null, 2));
  return {
    statusCode: 200,
    body: JSON.stringify({ message: 'Hello from RequestRouter Lambda!' }),
    headers: {
      'Content-Type': 'application/json',
    },
    isBase64Encoded: false,
  };
};
export const handler = main;
All of the above works fine (no error logs in CloudWatch for either Lambda); however, it seems that Lambda A's handler is invoked but it doesn't invoke Lambda B's handler, ultimately returning a response to API Gateway that doesn't have the proper headers.
Any pointers are highly appreciated!! Thank you :)
AWS recommends that you don't orchestrate your lambda functions in the code (one function calling another function).
For that use case, you can use AWS Step Functions.
You can create a state machine, define API Gateway as the trigger, and pass the result from one Lambda function to the next Lambda function.
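For illustration, a rough CDK (TypeScript, aws-cdk-lib v2) sketch of that setup might look like the following; lambdaA and lambdaB are placeholders for the two existing functions, and the code is assumed to run inside a Stack:

import * as sfn from 'aws-cdk-lib/aws-stepfunctions';
import * as tasks from 'aws-cdk-lib/aws-stepfunctions-tasks';
import * as apigateway from 'aws-cdk-lib/aws-apigateway';

// Each state invokes one function and forwards only its result to the next state.
const callA = new tasks.LambdaInvoke(this, 'CallLambdaA', {
  lambdaFunction: lambdaA,
  payloadResponseOnly: true,
});
const callB = new tasks.LambdaInvoke(this, 'CallLambdaB', {
  lambdaFunction: lambdaB,
  payloadResponseOnly: true,
});

const stateMachine = new sfn.StateMachine(this, 'Orchestrator', {
  definition: callA.next(callB),
  // Express workflows are required for the synchronous REST API integration below
  stateMachineType: sfn.StateMachineType.EXPRESS,
});

// API Gateway starts the state machine instead of invoking a Lambda directly
new apigateway.StepFunctionsRestApi(this, 'OrchestratorApi', { stateMachine });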
I have an AWS API Gateway configured such that /auth method calls a Lambda.
However, an existing product tries to call /auth/ with trailing slash and it ends up as error 404.
What can I do so that /auth/ URL goes to the same route as /auth in the API Gateway?
Turns out that the way to solve it is to configure the path like so (Terraform code from the API Gateway config):
WAS
"GET /auth" = {
integration_type = "AWS_PROXY"
integration_http_method = "POST"
payload_format_version = "2.0"
lambda_arn = module.my-lambda.this_lambda_function_invoke_arn
}
(this makes /auth work)
NOW
"GET /auth/{proxy+}" = {
integration_type = "AWS_PROXY"
integration_http_method = "POST"
payload_format_version = "2.0"
lambda_arn = module.my-lambda.this_lambda_function_invoke_arn
}
(this makes /auth/ work and breaks /auth).
You could configure the route as ANY /{proxy+} so that any HTTP method (GET, POST, PATCH, DELETE) for any routes are directed to the same handler. Alternatively, you could also specify the HTTP method to narrow it down, like POST /{proxy+}.
So...
What can I do so that /auth/ URL goes to the same route as /auth in the API Gateway?
Technically speaking, this solves your problem, but now it is up to you to differentiate routes and know what to do.
As far as I know, this is the only way to achieve it with API Gateway, since according to some RFC out there routes "/auth" and "/auth/" are actually different routes and API Gateway complies with that RFC.
This is what I ended up doing (using the ANY /{proxy+}) and, if it is any help, this is the code I have to handle my routes and know what to do:
// Queue.ts
class Queue<T = any> {
  private items: T[];

  constructor(items?: T[]) {
    this.items = items || [];
  }

  get length(): number {
    return this.items.length;
  }

  enqueue(element: T): void {
    this.items.push(element);
  }

  dequeue(): T | undefined {
    return this.items.shift();
  }

  peek(): T | undefined {
    if (this.isEmpty()) return undefined;
    return this.items[0];
  }

  isEmpty(): boolean {
    return this.items.length === 0;
  }
}

export default Queue;
// PathMatcher.ts
import Queue from "./Queue";
type PathMatcherResult = {
  isMatch: boolean;
  namedParameters: Record<string, string>;
};
const NAMED_PARAMETER_REGEX = /(?!\w+:)\{(\w+)\}/;
class PathMatcher {
  static match(pattern: string, path: string): PathMatcherResult {
    const patternParts = new Queue<string>(this.trim(pattern).split("/"));
    const pathParts = new Queue<string>(this.trim(path).split("/"));
    const namedParameters: Record<string, string> = {};
    const noMatch = { isMatch: false, namedParameters: {} };

    if (patternParts.length !== pathParts.length) return noMatch;

    while (patternParts.length > 0) {
      const patternPart = patternParts.dequeue()!;
      const pathPart = pathParts.dequeue()!;
      if (patternPart === "*") continue;
      if (patternPart.toLowerCase() === pathPart.toLowerCase()) continue;
      if (NAMED_PARAMETER_REGEX.test(patternPart)) {
        const [name, value] = this.extractNamedParameter(patternPart, pathPart);
        namedParameters[name] = value;
        continue;
      }
      return noMatch;
    }

    return { isMatch: true, namedParameters };
  }

  private static trim(path: string) {
    return path.replace(/^[\s\/]+/, "").replace(/[\s\/]+$/, "");
  }

  private static extractNamedParameter(
    patternPart: string,
    pathPart: string
  ): [string, string] {
    const name = patternPart.replace(NAMED_PARAMETER_REGEX, "$1");
    let value = pathPart;
    if (value.includes(":")) value = value.substring(value.indexOf(":") + 1);
    return [name, value];
  }
}

export default PathMatcher;
export { PathMatcherResult };
Then, in my lambda handler, I do:
const httpMethod = event.requestContext.http.method.toUpperCase();
const currentRoute = `${httpMethod} ${event.rawPath}`;
// This will match both:
// GET /products/asdasdasdas
// GET /products/asdasdasdas/
const match = PathMatcher.match("GET /products/{id}", currentRoute);
if (match.isMatch) {
  // Here, the id parameter has been extracted for you
  const productId = match.namedParameters.id;
}
Of course you can build a registry of routes and their respective handler functions and automate that matching process and passing of parameters, but that is the easy part.
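For illustration only, a minimal sketch of such a registry might look like this (the route pattern and handler are made-up examples built on the PathMatcher above, not from my actual code):

import PathMatcher from "./PathMatcher";

// Hypothetical route registry: each entry pairs a pattern with a handler.
type RouteHandler = (event: any, params: Record<string, string>) => Promise<any>;

const getProduct: RouteHandler = async (_event, params) =>
  ({ statusCode: 200, body: JSON.stringify({ id: params.id }) });

const routes: Array<{ pattern: string; handle: RouteHandler }> = [
  { pattern: "GET /products/{id}", handle: getProduct },
];

export const handler = async (event: any) => {
  const method = event.requestContext.http.method.toUpperCase();
  const currentRoute = `${method} ${event.rawPath}`;

  // First pattern that matches wins; named parameters are passed to the handler.
  for (const route of routes) {
    const { isMatch, namedParameters } = PathMatcher.match(route.pattern, currentRoute);
    if (isMatch) return route.handle(event, namedParameters);
  }
  return { statusCode: 404, body: JSON.stringify({ message: "Not found" }) };
};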
In the AWS re:Invent video, the solution uses a Cognito user pool + identity pool.
It also uses a Lambda authorizer at the API Gateway to validate the token and generate the policy. I was reading How to authenticate API Gateway calls with Facebook?
and it says:
To use a federated identity, you set the API Gateway method to use “AWS_IAM” authorization. You use Cognito to create a role and associate it with your Cognito identity pool. You then use the Identity and Access Management (IAM) service to grant this role permission to call your API Gateway method.
-> If that's the case, how are we using a Lambda authorizer instead of an IAM authorizer while we are also using an identity pool?
-> What's the difference between using an IAM authorizer and generating IAM policies in a custom authorizer, as I see happening here:
https://github.com/aws-quickstart/saas-identity-cognito/blob/96531568b5bd30106d115ad7437b2b1886379e57/functions/source/custom-authorizer/index.js
or
const Promise = require('bluebird');
const jws = require('jws');
const jwkToPem = require('jwk-to-pem');
const request = require('request-promise');
const AWS = require('aws-sdk');
AWS.config.setPromisesDependency(Promise);
const s3 = new AWS.S3({ apiVersion: '2006-03-01' });
const { env: { s3bucket }} = process
// cache for certificates of issuers
const certificates = {};
// time tenant data was loaded
let tenantLoadTime = 0;
// promise containing tenant data
let tenantPromise;
// this function returns tenant data promise
// refreshes the data if older than a minute
function tenants() {
if (new Date().getTime() - tenantLoadTime > 1000 * 60) {
console.log('Tenant info outdated, reloading');
tenantPromise = s3.getObject({
Bucket: s3bucket,
Key: 'tenants.json'
}).promise().then((data) => {
const config = JSON.parse(data.Body.toString());
console.log('Tenant config: %j', config);
const tenantMap = {};
config.forEach((t) => { tenantMap[t.iss] = t.id; });
return tenantMap;
});
tenantLoadTime = new Date().getTime();
}
return tenantPromise;
}
// helper function to load certificate of issuer
function getCertificate(iss, kid) {
if (certificates[iss]) {
// resolve with cached certificate, if exists
return Promise.resolve(certificates[iss][kid]);
}
return request({
url: `${iss}/.well-known/jwks.json`,
method: 'GET'
}).then((rawBody) => {
const { keys } = JSON.parse(rawBody);
const pems = keys.map(k => ({ kid: k.kid, pem: jwkToPem(k) }));
const map = {};
pems.forEach((e) => { map[e.kid] = e.pem; });
certificates[iss] = map;
return map[kid];
});
}
// extract tenant from a payload
function getTenant(payload) {
return tenants().then(config => config[payload.iss]);
}
// Help function to generate an IAM policy
function generatePolicy(payload, effect, resource) {
return getTenant(payload).then((tenant) => {
if (!tenant) {
return Promise.reject(new Error('Unknown tenant'));
}
const authResponse = {};
authResponse.principalId = payload.sub;
if (effect && resource) {
authResponse.policyDocument = {
Version: '2012-10-17',
Statement: [{
Action: 'execute-api:Invoke',
Effect: effect,
Resource: resource
}]
};
}
// extract tenant id from iss
payload.tenant = tenant;
authResponse.context = { payload: JSON.stringify(payload) };
console.log('%j', authResponse);
return authResponse;
});
}
function verifyPayload(payload) {
if (payload.token_use !== 'id') {
console.log('Invalid token use');
return Promise.reject(new Error('Invalid token use'));
}
if (parseInt(payload.exp || 0, 10) * 1000 < new Date().getTime()) {
console.log('Token expired');
return Promise.reject(new Error('Token expired'));
}
// check if iss is a known tenant
return tenants().then((config) => {
if (config[payload.iss]) {
return Promise.resolve();
}
console.log('Invalid issuer');
return Promise.reject();
});
}
function verifyToken(token, alg, pem) {
if (!jws.verify(token, alg, pem)) {
console.log('Invalid Signature');
return Promise.reject(new Error('Token invalid'));
}
return Promise.resolve();
}
exports.handle = function handle(e, context, callback) {
console.log('processing event: %j', e);
const { authorizationToken: token } = e;
if (!token) {
console.log('No token found');
return callback('Unauthorized');
}
const { header: { alg, kid }, payload: rawToken } = jws.decode(token);
const payload = JSON.parse(rawToken);
return verifyPayload(payload)
.then(() => getCertificate(payload.iss, kid))
.then(pem => verifyToken(token, alg, pem))
.then(() => generatePolicy(payload, 'Allow', e.methodArn))
.then(policy => callback(null, policy))
.catch((err) => {
console.log(err);
return callback('Unauthorized');
});
};
I'm using apollo link in schema stitching as an access control layer. I'm not quite sure how to make the link return error response if a user does not have permissions to access a particular operation. I know about such packages as graphql-shield and graphql-middleware but I'm curious whether it's possible to achieve basic access control using apollo link.
Here's what my link looks like:
const link = setContext((request, previousContext) => merge({
  headers: {
    ...headers,
    context: `${JSON.stringify(previousContext.graphqlContext ? _.omit(previousContext.graphqlContext, ['logger', 'models']) : {})}`,
  },
})).concat(middlewareLink).concat(new HttpLink({ uri, fetch }));
The middlewareLink has checkPermissions, which returns true or false depending on the user's role:
const middlewareLink = new ApolloLink((operation, forward) => {
  const { operationName } = operation;
  if (operationName !== 'IntrospectionQuery') {
    const { variables } = operation;
    const context = operation.getContext().graphqlContext;
    const hasAccess = checkPermissions({ operationName, context, variables });
    if (!hasAccess) {
      // ...
    }
  }
  return forward(operation);
});
What should I do if hasAccess is false? I guess I don't need to forward the operation, as at this point it's clear that the user does not have access to it.
UPDATE
I guess what I need to do is to extend the ApolloLink class, but so far I haven't managed to return an error.
Don't know if anyone else needs this, but I was trying to get a NetworkError specifically in the onError callback using TypeScript and React. Finally got this working:
const testLink = new ApolloLink((operation, forward) => {
  let fetchResult: FetchResult = {
    errors: [] // put GraphQL errors here
  };
  let linkResult = Observable.of(fetchResult).map(_ => {
    throw new Error('This is a network error in ApolloClient'); // throw network errors here
  });
  return linkResult;
});
Return GraphQL errors in the observable FetchResult response; throwing an error in the observable callback will produce a NetworkError.
After some digging I've actually figured it out. But I'm not quite sure if my approach is correct.
Basically, I've called forward with a subsequent map where I return an object containing errors and data fields. Again, I guess there's a better way of doing this (maybe by extending the ApolloLink class).
const middlewareLink = new ApolloLink((operation, forward) => {
  const { operationName } = operation;
  if (operationName !== 'IntrospectionQuery') {
    const { variables } = operation;
    const context = operation.getContext().graphqlContext;
    try {
      checkPermissions({ operationName, context, variables });
    } catch (err) {
      return forward(operation).map(() => {
        const error = new ForbiddenError('Access denied');
        return { errors: [error], data: null };
      });
    }
  }
  return forward(operation);
});
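If you'd rather not forward the operation at all, a rough sketch (not verified against the exact setup above) is to return a new Observable that emits the error result directly; it assumes the Observable re-exported by apollo-link and the same checkPermissions helper as above:

import { ApolloLink, Observable, FetchResult } from 'apollo-link';
import { GraphQLError } from 'graphql';

// Sketch only: short-circuit the link chain instead of forwarding the operation.
const accessControlLink = new ApolloLink((operation, forward) => {
  const { operationName, variables } = operation;
  const context = operation.getContext().graphqlContext;

  if (operationName !== 'IntrospectionQuery') {
    try {
      checkPermissions({ operationName, context, variables });
    } catch (err) {
      // Emit an error payload directly, without ever hitting the downstream HttpLink
      return new Observable<FetchResult>(observer => {
        observer.next({ data: null, errors: [new GraphQLError('Access denied')] });
        observer.complete();
      });
    }
  }
  return forward(operation);
});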
I am creating a CloudWatch event which, at a specific time in the future, is supposed to call an AWS Lambda function. I am using the AWS Node.js SDK as described here: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/index.html
The code block to create the CloudWatch event looks like this:
module.exports.createReservationReminder = function (reservationModel, user, restaurant) {
return new Promise(function (resolve, reject) {
const ruleName = "rsv_" + reservationModel.reservationId;
const description = "Reservation reminder of `" + user.name + "` # `" + restaurant.title + "` on `" + reservationModel.time + "`";
let reservationTime = reservationModel.time;
let lambdaFunctionName = module.exports.buildLamdaFunctionArn("restaurant")
let alertTime = moment(reservationTime).tz(AppConfig.defaultTimezone).subtract( // Create alert 45 minute before a reservation
45,
'minutes'
);
let lambda = new AWS.Lambda({
accessKeyId: AppConfig.accessKeyId,
secretAccessKey: AppConfig.secretAccessKey,
region: AppConfig.region
});
let scheduleExpression1 = "cron(" + alertTime.utc().format('m H D MMM ? YYYY') + ')';
let ruleParams = {
Name: ruleName,
Description: description,
ScheduleExpression: scheduleExpression1,
State: 'ENABLED',
};
cloudwatchevents.deleteRule({Name: ruleName}, function (err, deleteRuleData) { //remove if a previous rule was created halfway
cloudwatchevents.putRule(ruleParams, function (err, ruleData) { //create the rule
if (err) {
reject(err)
}
else {
let lambdaPermission = {
FunctionName: lambdaFunctionName,
StatementId: ruleName,
Action: 'lambda:InvokeFunction',
Principal: 'events.amazonaws.com',
SourceArn: ruleData.RuleArn
};
let removePermission = {
FunctionName: lambdaFunctionName,
StatementId: ruleName,
}
//now to create the rule's target, need to add permission to lambda
lambda.removePermission(removePermission, function (err, removeLambdaData) { //remove if rule of same name was added as permission to this lambda before, ignore if rule not found error is thrown
lambda.addPermission(lambdaPermission, function (err, lamdaData) { //now add the permission
if (err) {
reject(err) // FAIL : throws error PolicyLengthExceededException after ~50 cloudwatch events are registered to this lambda function
}
else {
let targetParams = {
Rule: ruleName,
Targets: [
{
Arn: module.exports.buildLamdaFunctionArn("restaurant"),
Id: ruleName,
Input: JSON.stringify({
func: "notifyUserOfUpcomingReservation",
data: {
reservationId: reservationModel.reservationId
}
}),
},
]
};
cloudwatchevents.putTargets(targetParams, function (err, targetData) {
if (err) {
reject(err)
}
else {
resolve(targetData)
}
})
}
})
})
}
});
})
})
}
The above function works fine for the first ~50 times (so I can easily create reminders for 50 reservations). However, it will always fail eventually with:
PolicyLengthExceededException
Lambda function access policy is limited to 20 KB.
HTTP Status Code: 400
Which makes sense, as the policy document cannot grow indefinitely.
So what is the correct way to approach this problem: creating an unlimited number of CloudWatch event reminders with a Lambda function target?
Create a role and add the required policy or permission to that role, and then your Lambda can assume the role and run.
You can use the AWS STS module for that.
Rather than creating and removing a permission each time, STS will assume the role temporarily and then execute the code.
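As a rough illustration of the STS part of that idea only (placeholder names, not verified against the CloudWatch Events trigger in the question), a function could assume a pre-created role that is allowed to invoke the target and use the temporary credentials directly, instead of adding a new resource-policy statement per rule:

import * as AWS from 'aws-sdk';

// Sketch only: ROLE_ARN and TARGET_FUNCTION are placeholders configured elsewhere;
// the role is assumed to allow lambda:InvokeFunction on the target.
const sts = new AWS.STS();

async function invokeWithAssumedRole(payload: unknown) {
  const { Credentials } = await sts.assumeRole({
    RoleArn: process.env.ROLE_ARN!,
    RoleSessionName: 'reservation-reminder',
  }).promise();

  // Build a Lambda client from the temporary credentials returned by STS
  const lambda = new AWS.Lambda({
    accessKeyId: Credentials!.AccessKeyId,
    secretAccessKey: Credentials!.SecretAccessKey,
    sessionToken: Credentials!.SessionToken,
    region: process.env.AWS_REGION,
  });

  return lambda.invoke({
    FunctionName: process.env.TARGET_FUNCTION!,
    InvocationType: 'Event',
    Payload: JSON.stringify(payload),
  }).promise();
}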