How to get path params in CDK + API Gateway + Lambda - amazon-web-services

So, it turns out I had it all along but was logging it out incorrectly. I had been doing an Object.keys(event).forEach and console logging each key and value, which didn't display the value properly because it's a nested object. Using JSON.stringify, as per @robC3's answer, shows all the nested objects and values properly and is easier too! TL;DR: just use curly braces in your gateway paths and the values will be present in event.pathParameters.whateverYouCalledThem.
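For anyone who hit the same logging pitfall, the difference looks roughly like this (a minimal sketch inside the handler):

    // Only prints top-level keys and however the console chooses to render each value
    Object.keys(event).forEach((key) => console.log(key, event[key]));

    // Prints the full nested structure, pathParameters included
    console.log(JSON.stringify(event, null, 2));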
I'm used to express land where you just write /stuff/:things in your route and then req.params.things becomes available in your handler for 'stuff'.
I'm struggling to get the same basic functionality in CDK. I have a RestAPI called 'api' and resources like so...
const api = new apigateway.RestApi(this, "image-cache-api", { /* options */ })
const stuff = api.root.addResource("stuff")
const stuffWithId = stuff.addResource("{id}")
stuffWithId.addMethod("GET", new apigateway.LambdaIntegration(stuffLambda, options))
Then I deploy the function and call it at https://<api path>/stuff/1234
Then in my lambda I check event.pathParameters and it is this: {id: undefined}
I've had a look through the event object and the only place I can see 1234 is in the path /stuff/1234 and while I could just parse it out of that I'm sure that's not how it's supposed to work.
:/
Most of the things I have turned up while googling mention "mapping templates". That seems overly complicated for such a common use case so I had been working to the assumption there would be some sort of default mapping. Now I'm starting to think there isn't. Do I really have to specify a mapping template just to get access to path params and, if so, where should it go in my CDK stack code?
I tried the following...
stuffWithId.addMethod("GET", new apigateway.LambdaIntegration(stuffLambda, {
    requestTemplate: {
        "id": "$input.params('id')",
    }
}))
But got the error...
error TS2559: Type '{ requestTemplate: { id: string; }; }' has no properties in common with type 'LambdaIntegrationOptions'.
I'm pretty confused as to whether I need requestTemplate, requestParameters, or something else entirely, as all the examples I have found so far are for the console rather than the CDK.

This works fine, and you can see where the full path, path params, query params, etc., are in the event structure when you test it in a browser.
// lambdas/handler.ts
// This code uses @types/aws-lambda for TypeScript convenience.
// Build first, and then deploy the .js handler.
import { APIGatewayProxyHandler, APIGatewayProxyResult } from 'aws-lambda';

export const main: APIGatewayProxyHandler = async (event, context, callback) => {
    return <APIGatewayProxyResult>{
        body: JSON.stringify([event, context], null, 4),
        statusCode: 200,
    };
};
// apig-lambda-proxy-demo-stack.ts
import * as path from 'path';
import { aws_apigateway, aws_lambda, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';

export class ApigLambdaProxyDemoStack extends Stack {
    constructor(scope: Construct, id: string, props?: StackProps) {
        super(scope, id, props);

        const stuffLambda = new aws_lambda.Function(this, 'stuff-lambda', {
            code: aws_lambda.Code.fromAsset(path.join('dist', 'lambdas')),
            handler: 'handler.main',
            runtime: aws_lambda.Runtime.NODEJS_14_X,
        });

        const api = new aws_apigateway.RestApi(this, 'image-cache-api');
        const stuff = api.root.addResource('stuff');
        const stuffWithId = stuff.addResource('{id}');
        stuffWithId.addMethod('GET', new aws_apigateway.LambdaIntegration(stuffLambda));
    }
}
This sample query:
https://[id].execute-api.[region].amazonaws.com/prod/stuff/1234?q1=foo&q2=bar
gives a response echoing the full event and context (the original excerpt is not reproduced here).
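An abbreviated sketch of the relevant fields, assuming the standard API Gateway proxy event shape:

    {
        "resource": "/stuff/{id}",
        "path": "/stuff/1234",
        "httpMethod": "GET",
        "pathParameters": { "id": "1234" },
        "queryStringParameters": { "q1": "foo", "q2": "bar" }
    }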
If you want to handle arbitrary paths at a certain point in your API, you'll want to explore the IResource.addProxy() CDK method. For example,
api.root.addProxy({
    defaultIntegration: new aws_apigateway.LambdaIntegration(stuffLambda),
});
That creates a {proxy+} resource at the API root and forwards all requests to the lambda. Rather than configuring every single endpoint in API Gateway, you can handle them all in the same handler.
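With a greedy proxy, the matched subpath arrives in event.pathParameters.proxy, so in-handler routing could look something like this (a sketch; the route names are hypothetical):

    import { APIGatewayProxyHandler } from 'aws-lambda';

    export const main: APIGatewayProxyHandler = async (event) => {
        // For GET /stuff/1234, event.pathParameters.proxy is "stuff/1234"
        const [resource, id] = (event.pathParameters?.proxy ?? '').split('/');
        if (resource === 'stuff' && id) {
            return { statusCode: 200, body: JSON.stringify({ id }) };
        }
        return { statusCode: 404, body: 'Not found' };
    };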

First thing to note is that the call API Gateway makes to Lambda is always a POST - even when your API method is a GET, the integration HTTP method must be POST or the invocation won't work. LambdaIntegration handles that for you; if you wire up the integration manually, as below, you have to specify it yourself.
Now, I have only done this in Python, but hopefully you can get the idea:
my_rest_api = apigateway.RestApi(
    self, "MyAPI",
    retain_deployments=True,
    deploy_options=apigateway.StageOptions(
        logging_level=apigateway.MethodLoggingLevel.INFO,
        stage_name="Dev"
    )
)

a_resource = apigateway.Resource(
    self, "MyResource",
    parent=my_rest_api.root,
    path_part="Stuff"
)

my_method = apigateway.Method(
    self, "MyMethod",
    http_method="POST",
    resource=a_resource,
    integration=apigateway.AwsIntegration(
        service="lambda",
        integration_http_method="POST",
        path="my:function:arn"
    )
)
Your Resource construct defines your path - you can chain multiple resources together if you want to have methods off each level: define resourceA, then use it as the parent of resourceB, which gets you resourceAPathPart/resourceBPathPart/ to access your lambda.
Or you can put it all together in resourceA with path_part="stuff/path/etc".
I used the AwsIntegration method here instead of LambdaIntegration because, in the full code, I'm using stage variables to dynamically pick different lambdas depending on what stage I'm in, but the effect is much the same.
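For reference, the equivalent chaining in TypeScript (a sketch reusing api and stuffLambda from the earlier answer; resourceForPath is a CDK convenience that creates each intermediate resource for you):

    const resourceA = api.root.addResource('resourceA');
    const resourceB = resourceA.addResource('resourceB'); // serves /resourceA/resourceB
    // or let CDK create the whole chain in one call:
    const deep = api.root.resourceForPath('/stuff/path/etc');
    deep.addMethod('GET', new aws_apigateway.LambdaIntegration(stuffLambda));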

Related

Can't update AWS lambda functions

I'm trying to update a method in a file in an AWS Lambda function. Previously, the method accepted URLs such as
https://yh74mgrokc.execute-api.us-west-1.amazonaws.com/dev/ga/answer/qn_type?tag=alexa
I updated the function to accept URL like the ones below, as well as new tags like to and from.
https://yh74mgrokc.execute-api.us-west-1.amazonaws.com/dev/ga/answer/qn_type?tag=alexa&from=2020-05-29&to=2021-05-29
So, I modified one of the methods as follows:
exports.getAnswerQnTypeSchema = [
    query('tag')
        .exists()
        .withMessage('tag is required'),
    query().custom((query) => {
        const allowedKeys = [
            "id",
            "name",
            "tag",
            "from",
            "to"
        ];
        for (const key of Object.keys(query)) {
            if (!allowedKeys.includes(key)) {
                throw new Error(`Unknown property: ${key}, please resolve`);
            }
        }
        return true;
    })
];
to allow for 'from' and 'to' tags. Then I redeployed the function and uploaded the updated lambda function, but it still behaves like the previous version. When I send the request
https://yh74mgrokc.execute-api.us-west-1.amazonaws.com/dev/ga/answer/qn_type?tag=alexa&from=2020-05-29&to=2021-05-29
it does not support the from and to tags, and an error message is returned (the screenshot is not reproduced here).
I'm not sure why this is the case. I'm new to AWS services, and any assistance with this matter would be greatly appreciated.

Get generated API key from AWS AppSync API created with CDK

I'm trying to access data from my stack where I'm creating an AppSync API. I want to be able to use the generated Stack's url and apiKey, but I'm running into issues with them being encoded/tokenized.
In my stack I'm setting some fields to the outputs of the deployed stack:
this.ApiEndpoint = graphAPI.url;
this.Authorization = graphAPI.graphqlApi.apiKey;
When trying to access these properties I get something like ${Token[TOKEN.209]} and not the values.
If I'm trying to resolve the token like so: this.resolve(graphAPI.graphqlApi.apiKey) I instead get { 'Fn::GetAtt': [ 'AppSyncAPIApiDefaultApiKey537321373E', 'ApiKey' ] }.
But I would like to retrieve the key itself as a string, like da2-10lksdkxn4slcrahnf4ka5zpeemq5i.
How would I go about actually extracting the string values for these properties?
The actual values of such Tokens are available only at deploy-time. Before then you can safely pass these token properties between constructs in your CDK code, but they are opaque placeholders until deployed. Depending on your use case, one of these options can help retrieve the deploy-time values:
If you define CloudFormation Outputs for a variable, CDK will (apart from creating it in CloudFormation) print its value to the console after cdk deploy, and optionally write it to a JSON file you pass with the --outputs-file flag.
// AppsyncStack.ts
new cdk.CfnOutput(this, 'ApiKey', {
    value: this.api.apiKey ?? 'UNDEFINED',
    exportName: 'api-key',
});

// at deploy-time, if you use a flag: --outputs-file cdk.outputs.json
{
    "AppsyncStack": {
        "ApiKey": "da2-ou5z5di6kjcophixxxxxxxxxx",
        "GraphQlUrl": "https://xxxxxxxxxxxxxxxxx.appsync-api.us-east-1.amazonaws.com/graphql"
    }
}
Alternatively, you can write a script to fetch the data post-deploy using the listGraphqlApis and listApiKeys commands from the appsync JS SDK client. You can run the script locally or, for advanced use cases, wrap the script in a CDK Custom Resource construct for deploy-time integration.
Thanks to @fedonev I was able to extract the API key and url like so:
// Assumes the AWS SDK v3 AppSync client and lodash; sendSlackMessage is my own helper.
import { AppSyncClient, ListGraphqlApisCommand, ListApiKeysCommand } from "@aws-sdk/client-appsync";
import { flatMap } from "lodash";

const client = new AppSyncClient({ region: "eu-north-1" });
const command = new ListGraphqlApisCommand({ maxResults: 1 });
const res = await client.send(command);
if (res.graphqlApis) {
    const apiKeysCommand = new ListApiKeysCommand({
        apiId: res.graphqlApis[0].apiId,
    });
    const apiKeyResponse = await client.send(apiKeysCommand);
    const urls = flatMap(res.graphqlApis[0].uris);
    if (apiKeyResponse.apiKeys && res.graphqlApis[0].uris) {
        sendSlackMessage(urls[1], apiKeyResponse.apiKeys[0].id || "");
    }
}

Can a CDK pipeline stack avoid referring to a specific repo and Github connection?

My CDK pipeline stack has this code:
const pipeline = new CodePipeline(this, id, {
    pipelineName: id,
    synth: new CodeBuildStep("Synth", {
        input: CodePipelineSource.connection("user/example4-be", "main", {
            connectionArn: "arn:aws:codestar-connections:us-east-1:111...1111:connection/1111-1111.....1111",
        }),
        installCommands: [],
        commands: []
    }),
})
which makes the code tightly coupled to the repository it is in (user/example4-be) and the Github connection it's using to access it (arn:aws:codestar-connections:...). This would break if someone forks the repo and wants to have a parallel pipeline. I feel like these two values should be configuration and not part of the code.
Is there a way using CDK and CodePipeline for this to be external variables? I guess the variables should be per-pipeline if possible? I'm not entirely sure.
Subclass Stack and accept the source configuration input as a custom prop type.(1)
// SourceConfigPipelineStack.ts
interface SourceConfigPipelineStackProps extends cdk.StackProps {
source: pipelines.CodePipelineSource;
}
export class SourceConfigPipelineStack extends cdk.Stack {
constructor(
scope: Construct,
id: string,
props: SourceConfigPipelineStackProps
) {
super(scope, id);
const pipeline = new pipelines.CodePipeline(this, id, {
pipelineName: id,
synth: new pipelines.CodeBuildStep('Synth', {
input: props.source,
installCommands: [],
commands: [],
}),
});
}
}
Pipeline consumers then pass their own source as configuration:
// app.ts
new SourceConfigPipelineStack(app, 'MyPipelineStack', {
    env,
    source: pipelines.CodePipelineSource.connection('user/example4-be', 'main', {
        connectionArn:
            'arn:aws:codestar-connections:us-east-1:111...1111:connection/1111-1111.....1111',
    }),
});
Edit: Is it "bad" to put ARN configuration in code?
Not according to AWS. The CDK "best practices" doc says it's reasonable to hardcode cross-stack ARNs:
When the two stacks are in different AWS CDK apps, use a static from method to import an externally-defined resource based on its ARN ... (for example, Table.fromArn() for a DynamoDB table). Use the CfnOutput construct to print the ARN or other required value in the output of cdk deploy, or look in the AWS console. Or the second app can parse the CloudFormation template generated by the first app and retrieve that value from the Outputs section.
Hardcoding ARNs in code is sometimes worse, sometimes better than the alternatives like Parameter, Secret or CfnOutput.
Edit: Handle multi-environment config with a Configuration Factory
All apps have app-level config items (e.g. defaultInstanceSize), which often differ by environment: prod accounts need full-powered resources, dev accounts don't. Consider encapsulating (non-secret) config in a Configuration Factory. The constructor receives an account and region and returns a plaintext configuration object. Stacks receive the config as props.
// app.ts
const { env, isProd, retainOnDelete, enableDynamoCache, defaultInstanceSize, repoName, branchName, githubConnectionArn } =
    // the config factory is using the account and region from the --profile flag
    new EnvConfigurator('SuperApp', process.env.CDK_DEFAULT_ACCOUNT, process.env.CDK_DEFAULT_REGION).config;

new SourceConfigPipelineStack(app, 'MyPipelineStack', {
    env,
    source: pipelines.CodePipelineSource.connection(repoName, branchName, {
        connectionArn: githubConnectionArn,
    }),
    stackTerminationProtection: isProd,
});
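EnvConfigurator itself is not shown here; a minimal sketch of one possible shape (covering a subset of the fields destructured above; the prod account id and instance sizes are placeholders):

    // EnvConfigurator.ts - hypothetical; maps account/region to plaintext config
    const PROD_ACCOUNT_ID = '111111111111'; // placeholder

    interface EnvConfig {
        env: { account?: string; region?: string };
        isProd: boolean;
        defaultInstanceSize: string;
        repoName: string;
        branchName: string;
        githubConnectionArn: string;
    }

    export class EnvConfigurator {
        readonly config: EnvConfig;

        constructor(appName: string, account?: string, region?: string) {
            const isProd = account === PROD_ACCOUNT_ID;
            this.config = {
                env: { account, region },
                isProd,
                defaultInstanceSize: isProd ? 'm5.large' : 't3.small',
                repoName: 'user/example4-be',
                branchName: 'main',
                githubConnectionArn: isProd
                    ? 'arn:aws:codestar-connections:us-east-1:111...1111:connection/prod-conn'
                    : 'arn:aws:codestar-connections:us-east-1:222...2222:connection/dev-conn',
            };
        }
    }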
The local config pattern has several advantages:
Config values are easily discoverable and centralised in a single place
Callers can be allowed to provide type-constrained overrides
Easily assert against configuration values (see the test sketch below)
Config values are under version control
Pipeline-friendly: avoid cross-account permission headaches
Local config can be used alongside Parameter, CfnOutput and Secret, which have complementary advantages. Apps typically use a mix of all of them. Reasonable people can disagree about where exactly to draw the boundaries.
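The "assert against configuration values" point can be as simple as a unit test over the factory (hypothetical, assuming Jest and the EnvConfigurator sketch above):

    // config.test.ts
    test('prod accounts get full-powered resources', () => {
        const { isProd, defaultInstanceSize } =
            new EnvConfigurator('SuperApp', '111111111111', 'us-east-1').config;
        expect(isProd).toBe(true);
        expect(defaultInstanceSize).toBe('m5.large');
    });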
(1) The fundamental CDK pattern is Construct composition: "Composition is the key pattern for defining higher-level abstractions through constructs... In general, composition is preferred over inheritance when developing AWS CDK constructs." In this case, it makes sense to subclass Stack rather than the Construct base class, because the OP use case is a cloned repo with, presumably, the deploy stages non-optionally encapsulated in the stack.
If you want to keep this information out of the repo, you can create SSM parameters in a separate stack, deploy it and populate the parameters, then do a synth-time lookup in the pipeline.
Here's how it would look in python:
# Assumes CDK v1-style imports for the modules used below
from aws_cdk import (
    aws_codestarconnections as csc,
    aws_ssm as ssm,
    core as cdk,
)

class ParametersStack(cdk.Stack):
    def __init__(self, scope: cdk.Construct, construct_id: str, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        codestar_connection = csc.CfnConnection(
            self, "my_connection", connection_name="my_connection", provider_type="GitHub"
        )
        ssm.StringParameter(
            self,
            "codestar_arn",
            string_value=codestar_connection.ref,
            parameter_name="/codestar/connection_arn",
        )
        ssm.StringParameter(
            self,
            "repo_owner",
            string_value="REPO_OWNER",
            parameter_name="/github/repo_owner",
        )
        ssm.StringParameter(
            self,
            "main_repo_name",
            string_value="MAIN_REPO_NAME",
            parameter_name="/github/repo_name",
        )
You'd then deploy this stack, set up the connection, and populate the repo owner and name parameters.
In the pipeline stack:
github_repo_owner = ssm.StringParameter.value_from_lookup(
    self, "/github/repo_owner"
)
github_repo_name = ssm.StringParameter.value_from_lookup(
    self, "/github/repo_name"
)

# The following is needed because during the first synth, the values will be
# filled with dummy values that are incompatible, so just replace them with
# dummy values that will synth.
# See https://github.com/aws/aws-cdk/issues/8699
if "dummy" in github_repo_owner:
    github_repo_owner = "dummy"
if "dummy" in github_repo_name:
    github_repo_name = "dummy"

repo_string = f"{github_repo_owner}/{github_repo_name}"

codestar_connection_arn = ssm.StringParameter.value_from_lookup(
    self, "/codestar/connection_arn"
)

source = pipelines.CodePipelineSource.connection(
    repo_string=repo_string,
    branch=branch_name,
    connection_arn=codestar_connection_arn,
)
You also need to give the pipeline the right to perform the lookups during synth, by allowing the role for the synth action to assume the lookup role:
synth_step = pipelines.CodeBuildStep(
    "synth",
    install_commands=[
        "npm install -g aws-cdk",
        "pip install -r requirements.txt",
    ],
    commands=[
        "cdk synth",
    ],
    input=source,
    role_policy_statements=[
        iam.PolicyStatement(
            effect=iam.Effect.ALLOW,
            actions=["sts:AssumeRole"],
            resources=["*"],
            conditions={
                "StringEquals": {
                    "iam:ResourceTag/aws-cdk:bootstrap-role": "lookup"
                }
            },
        ),
    ],
)
The looked-up values will be saved in cdk.context.json. If you don't commit it to your VCS, the pipeline will do the lookup and fetch the actual values every time.
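For reference, the cached lookups in cdk.context.json look roughly like this (account, region and values are placeholders; the exact key format is CDK-internal):

    {
        "ssm:account=111111111111:parameterName=/github/repo_owner:region=us-east-1": "REPO_OWNER",
        "ssm:account=111111111111:parameterName=/github/repo_name:region=us-east-1": "MAIN_REPO_NAME",
        "ssm:account=111111111111:parameterName=/codestar/connection_arn:region=us-east-1": "arn:aws:codestar-connections:..."
    }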

AWS Lambda using Winston logging loses Request ID

When using console.log to add log rows to AWS CloudWatch, the Lambda Request ID is added to each row, as described in the docs.
A simplified example based on the above-mentioned doc:
exports.handler = async function(event, context) {
    console.log("Hello");
    return context.logStreamName;
};
Would produce output such as
START RequestId: c793869b-ee49-115b-a5b6-4fd21e8dedac Version: $LATEST
2019-06-07T19:11:20.562Z c793869b-ee49-115b-a5b6-4fd21e8dedac INFO Hello
END RequestId: c793869b-ee49-115b-a5b6-4fd21e8dedac
REPORT RequestId: c793869b-ee49-115b-a5b6-4fd21e8dedac Duration: 170.19 ms Billed Duration: 200 ms Memory Size: 128 MB Max Memory Used: 73 MB
The relevant detail here regarding this question is the Request ID, c793869b-ee49-115b-a5b6-4fd21e8dedac which is added after the timestamp on the row with "Hello".
The AWS documentation states
To output logs from your function code, you can use methods on the console object, or any logging library that writes to stdout or stderr.
The Node.js runtime logs the START, END, and REPORT lines for each invocation, and adds a timestamp, request ID, and log level to each entry logged by the function.
When using Winston as a logger, the Request ID is lost; it could be an issue with formatters or transports. The logger is created like
const { createLogger, format, transports } = require('winston');
const { combine, timestamp, printf } = format;

const logger = createLogger({
    level: 'debug',
    format: combine(
        timestamp(),
        printf(
            ({ timestamp, level, message }) => `${timestamp} ${level}: ${message}`
        )
    ),
    transports: [new transports.Console()]
});
I also tried the simple() formatter instead of printf(), but that has no effect on whether the Request ID is present. Removing formatting altogether still prints the plain text, i.e. no timestamp or request ID.
I also checked the source code of Winston Console transport, and it uses either console._stdout.write if present, or console.log for writing, which is what the AWS documentation said to be supported.
Is there some way to configure Winston to keep the AWS Lambda Request ID as part of the message?
P.S. There are separate Winston transports for AWS CloudWatch that I am aware of, but they require other setup functionality that I'd like to avoid if possible. And since the Request ID is readily available, they seem like overkill.
P.P.S. The Request ID can also be fetched from the Lambda context and a custom logger object initialized with it, but I'd like to avoid that too, pretty much for the same reasons: extra work for something that should be readily available.
The issue is with the usage of console._stdout.write() / process.stdout.write(), which Winston's built-in Console transport uses when present.
For some reason, lines written directly to stdout go to CloudWatch as-is; the timestamp and request ID are not added to log rows as they are with console.log() calls.
There is a discussion on GitHub about making this a constructor option that could be selected on transport creation, but it was closed as a problem related to specific IDEs and how they handle stdout logs. The issue with AWS Lambdas is mentioned only as a side note in the discussion.
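The difference is easy to demonstrate from inside a handler (a two-line sketch; only the first line gets the timestamp and request ID prepended by the Node runtime):

    console.log('via console.log');             // 2019-06-07T19:11:20.562Z c793869b-... INFO via console.log
    process.stdout.write('via stdout.write\n'); // via stdout.write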
My solution was to make a custom transport for Winston which always uses console.log() to write the messages, leaving the timestamp and request ID to be filled in by the AWS Lambda Node runtime.
Addition 5/2020:
Below is an example of my solution. Unfortunately I cannot remember many details of the implementation, but I pretty much looked at the Winston sources on GitHub, took the bare minimum implementation, and forced the use of console.log.
'use strict';
const { createLogger, format } = require('winston');
const { combine, printf } = format;
const TransportStream = require('winston-transport');

const MESSAGE = Symbol.for('message');

class SimpleConsole extends TransportStream {
    constructor(options = {}) {
        super(options);
        this.name = options.name || 'simple-console';
    }

    log(info, callback) {
        setImmediate(() => this.emit('logged', info));
        // Always use console.log so the Lambda runtime prepends timestamp and request ID
        console.log(info[MESSAGE]);
        if (callback) {
            callback();
        }
    }
}

const logger = createLogger({
    level: 'debug',
    format: combine(
        printf(({ level, message }) => `${level.toUpperCase()}: ${message}`)
    ),
    transports: [new SimpleConsole()]
});

const debug = (...args) => logger.debug(args);
// ... and similar definitions for the other logging levels: info, warn, error etc.

module.exports = {
    debug
    // Also export the other logging levels..
};
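Usage from a handler is then just (module path hypothetical):

    const log = require('./simple-logger');
    log.debug('Hello'); // goes through console.log, so the runtime adds timestamp and request ID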
Another option
As pointed out by @sanrodari in the comments, the same can be achieved by directly overriding the log method of the built-in Console transport to force the use of console.log.
const winston = require('winston');
const LEVEL = Symbol.for('level');
const MESSAGE = Symbol.for('message');

const logger = winston.createLogger({
    transports: [
        new winston.transports.Console({
            log(info, callback) {
                setImmediate(() => this.emit('logged', info));
                if (this.stderrLevels[info[LEVEL]]) {
                    console.error(info[MESSAGE]);
                    if (callback) {
                        callback();
                    }
                    return;
                }
                console.log(info[MESSAGE]);
                if (callback) {
                    callback();
                }
            }
        })
    ]
});
See full example for more details
I know the OP said they would like to avoid using the Lambda context object to add the request ID, but I wanted to share my solution with others who may not have this requirement. While the other answers require defining a custom transport or overriding the log method of the Console transport, for this solution you just need to add one line at the top of your handler function.
import { APIGatewayTokenAuthorizerEvent, Callback, Context } from "aws-lambda";
import { createLogger, format, transports } from "winston";

const logger = createLogger({
    level: "debug",
    format: format.json({ space: 2 }),
    transports: new transports.Console()
});

export const handler = (
    event: APIGatewayTokenAuthorizerEvent,
    context: Context,
    callback: Callback
): void => {
    // Add this line to add the requestId to logs
    logger.defaultMeta = { requestId: context.awsRequestId };

    logger.info("This is an example log message"); // prints:
    // {
    //   "level": "info",
    //   "message": "This is an example log message",
    //   "requestId": "ac1de841-ca30-4a09-9950-dd4fe7e37af8"
    // }
};
Documentation for Lambda context object in Node.js
For other Winston formats like printf, you will need to add the requestId property to the format string. Not only is this approach more concise, it also lets you customize where the request ID appears in your log output, rather than always prepending it like CloudWatch does.
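For example, with printf the request ID lands wherever the format string puts it (a sketch building on the defaultMeta approach above):

    import { createLogger, format, transports } from "winston";

    const logger = createLogger({
        level: "debug",
        // requestId arrives via defaultMeta, set in the handler as shown above
        format: format.printf(
            ({ requestId, level, message }) => `${requestId} ${level}: ${message}`
        ),
        transports: new transports.Console()
    });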
As already mentioned by @kaskelloti, AWS does not transform messages logged by console._stdout.write() and console._stderr.write().
Here is my modified solution, which respects levels in AWS logs:
const winston = require('winston');
const LEVEL = Symbol.for('level');
const MESSAGE = Symbol.for('message');

const logger = winston.createLogger({
    transports: [
        new winston.transports.Console({
            log(logPayload, callback) {
                setImmediate(() => this.emit('logged', logPayload));
                const message = logPayload[MESSAGE];
                switch (logPayload[LEVEL]) {
                    case "debug":
                        console.debug(message);
                        break;
                    case "info":
                        console.info(message);
                        break;
                    case "warn":
                        console.warn(message);
                        break;
                    case "error":
                        console.error(message);
                        break;
                    default:
                        // TODO: handle missing levels
                        break;
                }
                if (callback) {
                    callback();
                }
            }
        })
    ],
})
According to the AWS docs:
To output logs from your function code, you can use methods on the console object, or any logging library that writes to stdout or stderr.
I ran a quick test using the following Winston setup in a lambda:
const path = require('path');
const { createLogger, format, transports } = require('winston');
const { combine, errors, timestamp } = format;

const baseFormat = combine(
    timestamp({ format: 'YYYY-MM-DD HH:mm:ss' }),
    errors({ stack: true }),
    format((info) => {
        info.level = info.level.toUpperCase();
        return info;
    })(),
);

const splunkFormat = combine(
    baseFormat,
    format.json(),
);

const prettyFormat = combine(
    baseFormat,
    format.prettyPrint(),
);

const createCustomLogger = (moduleName) => createLogger({
    level: process.env.LOG_LEVEL,
    format: process.env.PRETTY_LOGS ? prettyFormat : splunkFormat,
    defaultMeta: { module: path.basename(moduleName) },
    transports: [
        new transports.Console(),
    ],
});

module.exports = createCustomLogger;
and in CloudWatch I am NOT getting my Request ID. I am getting a timestamp from my own logs, so I'm less concerned about that; not getting the Request ID is what bothers me.

How do we access and respond to CloudFormation custom resources using an AWS Lambda function written in Java?

I have an AWS Lambda function written in Java that I would like to use as part of a response to an AWS CloudFormation custom resource. Amazon provides two detailed examples of how to create a CloudFormation custom resource that returns its value based on an AWS Lambda function written in Node.js, but I have been having difficulty translating the Lambda examples into Java. How can we set up our AWS Java function so that it reads the value of the pre-signed S3 URL passed in as a parameter to the Lambda function from CloudFormation and sends back our desired response to the waiting CloudFormation template?
After a back-and-forth conversation with AWS, here are some code samples I've created that accomplish this.
First of all, assuming you want to leverage the predefined interfaces for creating handlers, you can implement RequestHandler and define the handleRequest method like so:
public class MyCloudFormationResponder implements RequestHandler<Map<String, Object>, Object> {
    public Object handleRequest(Map<String, Object> input, Context context) {
        ...
    }
}
The Map<String, Object> is a Map of the values sent from your CloudFormation resource to the Lambda function. An example CF resource:
"MyCustomResource": {
"Type" : "Custom::String",
"Version" : "1.0",
"Properties": {
"ServiceToken": "arn:aws:lambda:us-east-1:xxxxxxx:function:MyCloudFormationResponderLambdaFunction",
"param1": "my value1",
"param2": ["t1.micro", "m1.small", "m1.large"]
}
}
can be analyzed with the following code
String responseURL = (String)input.get("ResponseURL");
context.getLogger().log("ResponseURLInput: " + responseURL);
context.getLogger().log("StackId Input: " + input.get("StackId"));
context.getLogger().log("RequestId Input: " + input.get("RequestId"));
context.getLogger().log("LogicalResourceId Context: " + input.get("LogicalResourceId"));
context.getLogger().log("Physical Context: " + context.getLogStreamName());

@SuppressWarnings("unchecked")
Map<String, Object> resourceProps = (Map<String, Object>)input.get("ResourceProperties");
context.getLogger().log("param 1: " + resourceProps.get("param1"));

@SuppressWarnings("unchecked")
List<String> myList = (ArrayList<String>)resourceProps.get("param2");
for (String s : myList) {
    context.getLogger().log(s);
}
The key things to point out here, beyond what is explained in the Node.js examples in the AWS documentation, are:
(String)input.get("ResponseURL") is the pre-signed S3 URL that you need to respond back to (more on this later)
(Map<String,Object>)input.get("ResourceProperties") returns the map of your CloudFormation custom resource "Properties" passed into the Lambda function from your CF template. I provided a String and ArrayList as two examples of object types that can be returned, though several others are possible
In order to respond back to the CloudFormation template custom resource instantiation, you need to execute an HTTP PUT call back to the ResponseURL previously mentioned and include most of the following fields in the variable cloudFormationJsonResponse. Below is how I've done this:
try {
    URL url = new URL(responseURL);
    HttpURLConnection connection = (HttpURLConnection)url.openConnection();
    connection.setDoOutput(true);
    connection.setRequestMethod("PUT");
    OutputStreamWriter out = new OutputStreamWriter(connection.getOutputStream());

    JSONObject cloudFormationJsonResponse = new JSONObject();
    try {
        cloudFormationJsonResponse.put("Status", "SUCCESS");
        cloudFormationJsonResponse.put("PhysicalResourceId", context.getLogStreamName());
        cloudFormationJsonResponse.put("StackId", input.get("StackId"));
        cloudFormationJsonResponse.put("RequestId", input.get("RequestId"));
        cloudFormationJsonResponse.put("LogicalResourceId", input.get("LogicalResourceId"));
        cloudFormationJsonResponse.put("Data", new JSONObject().put("CFAttributeRefName", "some String value useful in your CloudFormation template"));
    } catch (JSONException e) {
        e.printStackTrace();
    }

    out.write(cloudFormationJsonResponse.toString());
    out.close();
    int responseCode = connection.getResponseCode();
    context.getLogger().log("Response Code: " + responseCode);
} catch (IOException e) {
    e.printStackTrace();
}
Of particular note is the node "Data" above which references an additional com.amazonaws.util.json.JSONObject in which I include any attributes that are required in my CloudFormation template. In this case, it would be retrieved in CF template with something like { "Fn::GetAtt": [ "MyCustomResource", "CFAttributeRefName" ] }
Finally, you can simply return null, since nothing is returned from this function; it's the HttpURLConnection that actually responds to the CF call.
Neil,
I really appreciate your great documentation here. I would add a few things that I found useful:
input.get("RequestType") - This comes back as "Create", "Delete", etc. You can use this value to determine what to do when a stack is created, deleted, etc..
As far as security, I uploaded the Lambda functions and set the VPC, subnets, and security group (default) manually so I can reuse them with several CloudFormation scripts. That seems to be working okay.
I created one Lambda function that gets called by the CF scripts and one I can run manually in case the first one fails.
This excellent Gradle AWS plugin makes it easy to upload Java Lambda functions to AWS: Gradle AWS Plugin.