Direct traffic to Lambda function with API Gateway in CDK - amazon-web-services

I am trying to create a REST API to return data to my front-end using a Lambda function, all done in CDK.
Basically, my API Gateway should route traffic from /uploads to my Lambda function. However, I'm having a bit of difficulty wiring this up.
const s3UploaderUrlLambda = new lambda.Function(
  // defined my Lambda function
);
const api = new apigateway.LambdaRestApi(this, 's3uploader', {
  handler: s3UploaderUrlLambda, // I believe this handler means that it will target this
                                // Lambda for every single route, but I only want it for /uploads
  proxy: false
});
const uploads = api.root.addResource('uploads');
uploads.addMethod('GET');
Can anyone help?

Define a default integration for the resource:
const uploads = api.root.addResource('uploads', {
  defaultIntegration: new apigateway.LambdaIntegration(s3UploaderUrlLambda)
});
or directly on the method:
uploads.addMethod(
  'GET',
  new apigateway.LambdaIntegration(s3UploaderUrlLambda)
);
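Putting it together, a minimal sketch, assuming s3UploaderUrlLambda is defined as in the question; with proxy: false, only the routes you declare explicitly are created:

// Sketch: attach the Lambda only to GET /uploads.
const api = new apigateway.LambdaRestApi(this, 's3uploader', {
  handler: s3UploaderUrlLambda,
  proxy: false,
});

const uploads = api.root.addResource('uploads');
uploads.addMethod('GET', new apigateway.LambdaIntegration(s3UploaderUrlLambda));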

Related

How to reuse the API Gateway TokenAuthorizer

I want to reuse the TokenAuthorizer which I have created in another stack. If I do the below, it gives an error that it already exists, and if I change the authorizerName it creates a new one.
Is there a way I can reuse this resource?
const authzHandler = lambda.Function.fromFunctionName(this, 'AuthHandlerLookup', 'auth-handler');
const authorizer = new apigateway.TokenAuthorizer(this, 'WebApiTokenAuthorizer', {
  handler: authzHandler,
  resultsCacheTtl: Duration.seconds(600),
  authorizerName: 'test-Authorizer',
  assumeRole: lambdaExecutionRole
});
test.addMethod('GET', new apigateway.LambdaIntegration(TestLambda, { proxy: true }), {
  authorizer
});
I am able to get the authorizer information in the CLI, but not sure how to do the same using CDK:
aws apigateway get-authorizer --rest-api-id wrrt25mzme0m --authorizer-id vffawds
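One possible approach (a sketch, not a verified answer): since the authorizer already exists, reference it by its ID through the IAuthorizer interface instead of constructing a second TokenAuthorizer. The ID below is the placeholder value returned by the CLI call above.

// Sketch (assumption): reference the already-deployed authorizer by ID in the second stack.
const importedAuthorizer: apigateway.IAuthorizer = {
  authorizerId: 'vffawds',                                // ID reported by `aws apigateway get-authorizer`
  authorizationType: apigateway.AuthorizationType.CUSTOM, // token authorizers are CUSTOM authorizers
};

test.addMethod('GET', new apigateway.LambdaIntegration(TestLambda, { proxy: true }), {
  authorizer: importedAuthorizer,
});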

Get generated API key from AWS AppSync API created with CDK

I'm trying to access data from my stack where I'm creating an AppSync API. I want to be able to use the generated stack's url and apiKey, but I'm running into issues with them being encoded/tokenized.
In my stack I'm setting some fields to the outputs of the deployed stack:
this.ApiEndpoint = graphAPI.url;
this.Authorization = graphAPI.graphqlApi.apiKey;
When trying to access these properties I get something like ${Token[TOKEN.209]} and not the values.
If I'm trying to resolve the token like so: this.resolve(graphAPI.graphqlApi.apiKey) I instead get { 'Fn::GetAtt': [ 'AppSyncAPIApiDefaultApiKey537321373E', 'ApiKey' ] }.
But I would like to retrieve the key itself as a string, like da2-10lksdkxn4slcrahnf4ka5zpeemq5i.
How would I go about actually extracting the string values for these properties?
The actual values of such Tokens are available only at deploy-time. Before then you can safely pass these token properties between constructs in your CDK code, but they are opaque placeholders until deployed. Depending on your use case, one of these options can help retrieve the deploy-time values:
If you define CloudFormation Outputs for a variable, CDK will (apart from creating it in CloudFormation) print its value to the console after cdk deploy, and optionally write it to a JSON file you pass with the --outputs-file flag.
// AppsyncStack.ts
new cdk.CfnOutput(this, 'ApiKey', {
  value: this.api.apiKey ?? 'UNDEFINED',
  exportName: 'api-key',
});
// at deploy-time, if you use a flag: --outputs-file cdk.outputs.json
{
  "AppsyncStack": {
    "ApiKey": "da2-ou5z5di6kjcophixxxxxxxxxx",
    "GraphQlUrl": "https://xxxxxxxxxxxxxxxxx.appsync-api.us-east-1.amazonaws.com/graphql"
  }
}
Alternatively, you can write a script to fetch the data post-deploy using the listGraphqlApis and listApiKeys commands from the appsync JS SDK client. You can run the script locally or, for advanced use cases, wrap the script in a CDK Custom Resource construct for deploy-time integration.
Thanks to @fedonev I was able to extract the API key and URL like so:
// Assumed imports: AWS SDK v3 AppSync client; flatMap assumed to be lodash's.
// sendSlackMessage is the poster's own helper (not shown).
import { AppSyncClient, ListGraphqlApisCommand, ListApiKeysCommand } from "@aws-sdk/client-appsync";
import { flatMap } from "lodash";

const client = new AppSyncClient({ region: "eu-north-1" });
const command = new ListGraphqlApisCommand({ maxResults: 1 });
const res = await client.send(command);
if (res.graphqlApis) {
  const apiKeysCommand = new ListApiKeysCommand({
    apiId: res.graphqlApis[0].apiId,
  });
  const apiKeyResponse = await client.send(apiKeysCommand);
  const urls = flatMap(res.graphqlApis[0].uris);
  if (apiKeyResponse.apiKeys && res.graphqlApis[0].uris) {
    sendSlackMessage(urls[1], apiKeyResponse.apiKeys[0].id || "");
  }
}

I am learning to create AWS Lambdas. I want to create a "chain": S3 -> 4 chained Lambdas -> RDS. I can't get the first Lambda to call the second

I really tried everything. Surprisingly, Google doesn't have many answers when it comes to this.
When a certain .csv file is uploaded to an S3 bucket, I want to parse it and place the data into an RDS database.
My goal is to learn the lambda serverless technology, this is essentially an exercise. Thus, I over-engineered the hell out of it.
Here is how it goes:
S3 Trigger when the .csv is uploaded -> call lambda (this part fully works)
AAA_Thomas_DailyOverframeS3CsvToAnalytics_DownloadCsv downloads the .csv from S3 and finishes with essentially the plaintext of the file. It is then supposed to pass it to the next Lambda. The way I am trying to do this is by setting the second Lambda as the destination. The function works, but the second Lambda is never called and I don't know why.
AAA_Thomas_DailyOverframeS3CsvToAnalytics_ParseCsv gets the plaintext as input and returns a javascript object with the parsed data.
AAA_Thomas_DailyOverframeS3CsvToAnalytics_DecryptRDSPass only connects to KMS, gets the encrypted RDS password, and passes it along with the data it received as input to the last Lambda.
AAA_Thomas_DailyOverframeS3CsvToAnalytics_PutDataInRds then finally puts the data in RDS.
I created a custom VPC with custom subnets, route tables, gateways, peering connections, etc. I don't know if this is relevant, but function 2. only has access to the S3 endpoint, 3. does not have any internet access whatsoever, 4. is the only one that has normal internet access (it's the only way to connect to KMS), and 5. only has access to the peered VPC which hosts the RDS.
This is the code of the first lambda:
// dependencies
const AWS = require('aws-sdk');
const util = require('util');
const s3 = new AWS.S3();
let region = process.env;

exports.handler = async (event, context, callback) =>
{
    var checkDates = process.env.CheckDates == "false" ? false : true;
    var ret = [];

    var checkFileDate = function (actualFileName)
    {
        if (!checkDates)
            return true;
        var d = new Date();
        var expectedFileName = 'Overframe_-_Analytics_by_Day_Device_' + d.getUTCFullYear() + '-' + (d.getUTCMonth().toString().length == 1 ? "0" + d.getUTCMonth() : d.getUTCMonth()) + '-' + (d.getUTCDate().toString().length == 1 ? "0" + d.getUTCDate() : d.getUTCDate());
        return expectedFileName == actualFileName.substr(0, expectedFileName.length);
    };

    for (var i = 0; i < event.Records.length; ++i)
    {
        var record = event.Records[i];
        try {
            if (record.s3.bucket.name != process.env.S3BucketName)
            {
                console.error('Unexpected notification, unknown bucket: ' + record.s3.bucket.name);
                continue;
            }
            if (!checkFileDate(record.s3.object.key))
            {
                console.error('Unexpected file, or date is not today\'s: ' + record.s3.object.key);
                continue;
            }
            const params = {
                Bucket: record.s3.bucket.name,
                Key: record.s3.object.key
            };
            var csvFile = await s3.getObject(params).promise();
            var allText = csvFile.Body.toString('utf-8');
            console.log('Loaded data:', { Bucket: params.Bucket, Filename: params.Key, Text: allText });
            ret.push(allText);
        } catch (error) {
            console.log("Couldn't download CSV from S3", error);
            return { statusCode: 500, body: error };
        }
    }

    // I've been randomly trying different ways to return the data, none works. The data itself is correct, I checked with console.log()
    const response = {
        statusCode: 200,
        body: { "Records": ret }
    };
    return ret;
};
A screenshot showed how the Lambda was set up, especially its destination (image not reproduced here).
I haven't posted on Stackoverflow in 7 years. That's how desperate I am. Thanks for the help.
Rather than getting each Lambda to call the next one, take a look at AWS Step Functions, the managed state machine service, which can handle this workflow for you.
By defining inputs and outputs you can pass one function's output to the next, with retry logic built in.
If you don't have much experience, AWS has a tutorial on setting up a Step Functions state machine that chains Lambdas.
By using this you also will not need to account for configuration issues such as Lambda timeouts. In addition, it keeps your code more modular, which makes it easier to test individual pieces of functionality and to isolate issues.
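For illustration, a minimal CDK v2 sketch of such a state machine, assuming the four functions are already defined in the same stack under the placeholder names downloadCsvFn, parseCsvFn, decryptPassFn and putDataFn:

import { aws_stepfunctions as sfn, aws_stepfunctions_tasks as tasks } from 'aws-cdk-lib';

// Each LambdaInvoke state forwards its function's return value ($.Payload) as the next state's input.
const definition = new tasks.LambdaInvoke(this, 'DownloadCsv', {
  lambdaFunction: downloadCsvFn,
  outputPath: '$.Payload',
})
  .next(new tasks.LambdaInvoke(this, 'ParseCsv', {
    lambdaFunction: parseCsvFn,
    outputPath: '$.Payload',
  }))
  .next(new tasks.LambdaInvoke(this, 'DecryptRdsPass', {
    lambdaFunction: decryptPassFn,
    outputPath: '$.Payload',
  }))
  .next(new tasks.LambdaInvoke(this, 'PutDataInRds', {
    lambdaFunction: putDataFn,
    outputPath: '$.Payload',
  }));

new sfn.StateMachine(this, 'CsvToRdsStateMachine', { definition });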
The execution roles of all Lambda functions whose destinations include other Lambda functions must have the lambda:InvokeFunction IAM permission in one of their attached IAM policies.
Here's a snippet from Lambda documentation:
To send events to a destination, your function needs additional permissions. Add a policy with the required permissions to your function's execution role. Each destination service requires a different permission, as follows:
Amazon SQS – sqs:SendMessage
Amazon SNS – sns:Publish
Lambda – lambda:InvokeFunction
EventBridge – events:PutEvents
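If the functions were defined in CDK rather than in the console, a minimal sketch of wiring the destination and the permission could look like this (downloadCsvFn, parseCsvFn and the asset path are placeholders):

import { aws_lambda as lambda, aws_lambda_destinations as destinations } from 'aws-cdk-lib';

// Declare the second function as the first one's asynchronous on-success destination.
const downloadCsvFn = new lambda.Function(this, 'DownloadCsv', {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset('./download-csv'),   // placeholder path
  onSuccess: new destinations.LambdaDestination(parseCsvFn),
});

// Make sure the first function's execution role may invoke the second one.
parseCsvFn.grantInvoke(downloadCsvFn);

Note that Lambda destinations only fire for asynchronous invocations; S3 event notifications invoke the function asynchronously, so that part of the setup is compatible.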

How to get the AWS IoT custom endpoint in CDK?

I want to pass the IoT custom endpoint as an env var to a Lambda declared in CDK.
I'm talking about the IoT custom endpoint shown in the AWS IoT console settings (screenshot omitted).
How do I get it in the context of CDK?
You can refer to the AWS sample code:
https://github.com/aws-samples/aws-iot-cqrs-example/blob/master/lib/querycommandcontainers.ts
const getIoTEndpoint = new customResource.AwsCustomResource(this, 'IoTEndpoint', {
  onCreate: {
    service: 'Iot',
    action: 'describeEndpoint',
    physicalResourceId: customResource.PhysicalResourceId.fromResponse('endpointAddress'),
    parameters: {
      "endpointType": "iot:Data-ATS"
    }
  },
  policy: customResource.AwsCustomResourcePolicy.fromSdkCalls({ resources: customResource.AwsCustomResourcePolicy.ANY_RESOURCE })
});
const IOT_ENDPOINT = getIoTEndpoint.getResponseField('endpointAddress')
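To cover the original ask, a small follow-up sketch that passes the resolved endpoint to a Lambda as an environment variable (the function name and asset path are placeholders):

// Pass the deploy-time-resolved endpoint to a Lambda as an environment variable.
const iotConsumerFn = new lambda.Function(this, 'IotConsumer', {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset('./lambda'),   // placeholder path
  environment: {
    IOT_ENDPOINT,                            // e.g. xxxxxxxx-ats.iot.<region>.amazonaws.com
  },
});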
AFAIK the only way to retrieve it is by using Custom Resources (Lambda), for example (IoTThing): https://aws.amazon.com/blogs/iot/automating-aws-iot-greengrass-setup-with-aws-cloudformation/

create request body and template API GATEWAY CDK

Please tell me two things:
1. How to configure the request body via CDK
2. How to configure a mapping template for pulling a path or query parameter, converting it to JSON, and then passing it to the Lambda
This is all in API Gateway and via CDK.
Assume you have the following setup
const restapi = new apigateway.RestApi(this, "myapi", {
  // details omitted
});

const helloWorld = new lambda.Function(this, "hello", {
  runtime: lambda.Runtime.PYTHON_3_8,
  handler: 'index.handler',
  code: lambda.Code.fromAsset('.')   // directory containing index.py
});

restapi.root.addResource("test").addMethod("POST", new apigateway.LambdaIntegration(helloWorld));
and inside the lambda function (in python)
def handler(event, context):
request_body = event['body']
parameters = event[queryStringParameters]
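The proxy integration above simply hands the raw body and query string to the handler. For the second part of the question, here is a hedged sketch of a non-proxy integration with a request mapping template, reusing the restapi and helloWorld constructs from above (the resource and parameter names are placeholders):

// Non-proxy integration: API Gateway applies the mapping template and sends the
// resulting JSON document to the Lambda instead of the full proxy event.
const templatedIntegration = new apigateway.LambdaIntegration(helloWorld, {
  proxy: false,
  requestTemplates: {
    // $input.params() looks up path, query string and header parameters by name.
    'application/json': '{ "id": "$input.params(\'id\')", "limit": "$input.params(\'limit\')" }',
  },
  integrationResponses: [{ statusCode: '200' }],
});

restapi.root
  .addResource('items')
  .addResource('{id}')
  .addMethod('GET', templatedIntegration, {
    requestParameters: { 'method.request.querystring.limit': false },
    methodResponses: [{ statusCode: '200' }],
  });

With this mapping, the Python handler receives the mapped document directly, e.g. event['id'] and event['limit'].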