I'm facing an issue when trying to write unit tests for my CDK project.
The stack it creates is a pretty simple one:
-> APIGateway (Rest)
-> POST endpoint pointing to lambda
-> Lambda
I have a very simple unit test:
describe("Test WebhookProxyStack", () => {
it("template must be defined", () => {
const app = new cdk.App();
const processorStack = new WebhookProxyStack(app, "dev" as never);
const template = Template.fromStack(processorStack);
expect(template).toBeDefined();
});
});
The Lambda is defined as:
const lambda = new NodejsFunction(scope, name, {
  runtime: Runtime.NODEJS_14_X,
  handler: `handler`,
  entry: require.resolve(
    "#webhook-proxy/src/XXX.ts",
  ),
});
When I deploy the CDK app, everything bundles fine (via local esbuild) with no errors, but when I try to run this unit test I get an error like:
Error: Failed to bundle asset WebhookProxy-dev/lambda-name/Code/Stage, bundle output is located at /private/var/folders/g_/7s34q40s3rg40280qhrvx5fm0000gn/T/cdk.outQhujdM/bundling-temp-1309d84e2e3633714893bccef1ab36748a1c6088468eb4607da3325bcd2d7058-error: Error: bash -c yarn run esbuild --bundle "{OUTPUT_PATH}" --target=node14 --platform=node --outfile="/private/var/folders/g_/7s34q40s3rg40280qhrvx5fm0000gn/T/cdk.outQhujdM/bundling-temp-1309d84e2e3633714893bccef1ab36748a1c6088468eb4607da3325bcd2d7058/index.js" --external:aws-sdk run in directory {PROJECT_PATH} exited with status 127
at AssetStaging.bundle (/Users/XXX/Sites/XXX/webhook-proxy/node_modules/aws-cdk-lib/core/lib/asset-staging.ts:395:13)
at AssetStaging.stageByBundling (/Users/XXX/Sites/XXX/webhook-proxy/node_modules/aws-cdk-lib/core/lib/asset-staging.ts:243:10)
at stageThisAsset (/Users/XXX/Sites/XXX/webhook-proxy/node_modules/aws-cdk-lib/core/lib/asset-staging.ts:134:35)
at Cache.obtain (/Users/XXX/Sites/XXX/webhook-proxy/node_modules/aws-cdk-lib/core/lib/private/cache.ts:24:13)
at new AssetStaging (/Users/XXX/Sites/XXX/webhook-proxy/node_modules/aws-cdk-lib/core/lib/asset-staging.ts:159:44)
at new Asset (/Users/XXX/Sites/XXX/webhook-proxy/node_modules/aws-cdk-lib/aws-s3-assets/lib/asset.ts:72:21)
at AssetCode.bind (/Users/XXX/Sites/XXX/webhook-proxy/node_modules/aws-cdk-lib/aws-lambda/lib/code.ts:180:20)
at new Function (/Users/XXX/Sites/XXX/webhook-proxy/node_modules/aws-cdk-lib/aws-lambda/lib/function.ts:348:29)
at new NodejsFunction (/Users/XXX/Sites/XXX/webhook-proxy/node_modules/aws-cdk-lib/aws-lambda-nodejs/lib/function.ts:50:5)
Any hint/help on what could be wrong with bundling this Lambda during unit tests, and how to get past it?
Another question: why must this Lambda be built at all when executing unit tests?
Info:
AWS CDK v2
Esbuild error in test when building Lambda
Not a fix, but a diagnostic/workaround: pass bundling: { forceDockerBundling: true } in the Lambda props to bundle with Docker instead of a local esbuild install. Exit status 127 generally means "command not found", so the test runner's environment probably can't see your local esbuild.
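A minimal sketch of that workaround applied to the question's NodejsFunction (only the bundling option is new; everything else mirrors the code above):

import { Runtime } from "aws-cdk-lib/aws-lambda";
import { NodejsFunction } from "aws-cdk-lib/aws-lambda-nodejs";

// Sketch only: force Docker bundling so the test run does not depend on a local esbuild binary.
const lambda = new NodejsFunction(scope, name, {
  runtime: Runtime.NODEJS_14_X,
  handler: "handler",
  entry: require.resolve("#webhook-proxy/src/XXX.ts"),
  bundling: {
    forceDockerBundling: true, // use the Docker bundling image instead of local esbuild
  },
});

Note that this makes every test run spin up Docker, so it is slower; it is mainly useful to confirm that the failure is specific to the local esbuild setup.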
Why is the lambda built when executing units?
Stacks need to be synth-ed to be tested. CDK generates a hash for assets as part of the synth process. The source hash is "used at construction time to determine whether the contents of an asset have changed."
Related
I have a CDK Pipelines pipeline that is handling the self mutation and deployment of my application on ECS and I am having a tough time figuring out how to implement database migrations.
My migration files, as well as the migration command, reside inside the Docker container that is built and deployed in the pipeline. Below are two things I've tried so far:
My first thought was just creating a pre step on the stage, but I believe there is a chicken-and-egg situation: since the migration command requires the database to exist (as well as having the endpoint and credentials) and the migration step is pre, the stack doesn't exist yet when this command would run...
const pipeline = new CodePipeline(this, "CdkCodePipeline", {
  // ...
  // ...
});

pipeline.addStage(applicationStage).addPre(new CodeBuildStep("MigrateDatabase", {
  input: pipeline.cloudAssemblyFileSet,
  buildEnvironment: {
    environmentVariables: {
      DB_HOST: { value: databaseProxyEndpoint },
      // ...
      // ...
    },
    privileged: true,
    buildImage: LinuxBuildImage.fromAsset(this, 'Image', {
      directory: path.join(__dirname, '../../docker/php'),
    }),
  },
  commands: [
    'cd /var/www/html',
    'php artisan migrate --force',
  ],
}));
In the above code, databaseProxyEndpoint has been everything from a CfnOutput and an SSM parameter to a plain old TypeScript reference, but every attempt failed because the value was empty, missing, or not generated yet.
I felt this was close, since it works perfectly fine until I try to reference databaseProxyEndpoint.
My second attempt was to create an init container in ECS.
const migrationContainer = webApplicationLoadBalancer.taskDefinition.addContainer('init', {
  image: ecs.ContainerImage.fromDockerImageAsset(webPhpDockerImageAsset),
  essential: false,
  logging: logger,
  environment: {
    DB_HOST: databaseProxy.endpoint,
    // ...
    // ...
  },
  secrets: {
    DB_PASSWORD: ecs.Secret.fromSecretsManager(databaseSecret, 'password')
  },
  command: [
    "sh",
    "-c",
    [
      "php artisan migrate --force",
    ].join(" && "),
  ]
});

// Make sure migrations run and our init container returns success
serviceContainer.addContainerDependencies({
  container: migrationContainer,
  condition: ecs.ContainerDependencyCondition.SUCCESS,
});
This worked, but I am not a fan of it at all. The migration command should run once in the CI/CD pipeline on a deploy, not whenever the ECS service starts, restarts, or scales. My migrations failed once and it locked up CloudFormation because the health check failed both on the deploy and then, naturally, on the rollback as well, causing a completely broken loop of pain.
Any ideas or suggestions on how to pull this off would save me from losing the remaining hair I have left!
You can run your migrations (1) within a stack's deployment with a Custom Resource construct, (2) after a stack's or stage's deployment with a post Step, (3) or after the pipeline has run with an EventBridge rule.
1. Within a stack: Migrations as a Custom Resource
One option is to define your migrations as a CustomResource. It's a CloudFormation feature for executing user-defined code (typically in a Lambda) during the stack deployment lifecycle. See @mchlfchr's answer for an example. Also consider the CDK Trigger construct, a higher-level Custom Resource implementation.
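If you go the Trigger route, a minimal sketch might look like the following (the handler, asset path, and the databaseCluster dependency are placeholders, not the OP's actual resources):

import * as lambda from "aws-cdk-lib/aws-lambda";
import * as triggers from "aws-cdk-lib/triggers";

// Sketch: run a migrations Lambda once per deployment via the Trigger construct.
new triggers.TriggerFunction(this, "MigrateTrigger", {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: "index.handler",                          // placeholder handler
  code: lambda.Code.fromAsset("lambda/migrations"),  // placeholder asset path
  executeAfter: [databaseCluster],                   // run only after the database resources exist
});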
2. After a stack or stage: "post" Step
If you split your application into, say, a StatefulStack (database) and StatelessStack (application containers), you can run your migrations code as a post Step between the two. This is the approach attempted in the OP.
In your StatefulStack, the variable producer, expose a CfnOutput instance variable for the environment variable values: readonly databaseProxyEndpoint: CfnOutput. Then consume the variables in a pipeline migration action by passing them to a post step as envFromCfnOutputs. The CDK will synth them into CodePipeline Variables:
pipeline.addStage(myStage, { // myStage includes the StatefulStack and StatelessStack instances
  stackSteps: [
    {
      stack: statefulStack,
      post: [
        new pipelines.CodeBuildStep("Migrate", {
          commands: ['cd /var/www/html', 'php artisan migrate --force'],
          envFromCfnOutputs: { DB_HOST: statefulStack.databaseProxyEndpoint },
          // ... other step config
        }),
      ],
    },
  ],
  post: [], // steps to run after the stage
});
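On the producer side, a minimal sketch of how the StatefulStack might expose that output (the proxy construct and names are illustrative, not the OP's code):

import { CfnOutput, Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";

export class StatefulStack extends Stack {
  // exposed so a pipeline step can consume it via envFromCfnOutputs
  readonly databaseProxyEndpoint: CfnOutput;

  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    // ... create the database and proxy here (omitted) ...
    this.databaseProxyEndpoint = new CfnOutput(this, "DbProxyEndpoint", {
      value: databaseProxy.endpoint, // assumes an rds.DatabaseProxy created above
    });
  }
}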
The addStage method's stackSteps option runs post steps after a specific stack in a stage. The post option works similarly, but runs after the whole stage.
3. After the Pipeline execution: EventBridge rule
Although it's likely not the best option, you could run migrations after the pipeline executes. CodePipeline emits events during pipeline execution. With an EventBridge rule, listen for CodePipeline Pipeline Execution State Change events where "state": "SUCCEEDED".
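A rough sketch of such a rule (the pipeline name and the target migrations function are assumptions):

import * as events from "aws-cdk-lib/aws-events";
import * as targets from "aws-cdk-lib/aws-events-targets";

// Sketch: invoke a migrations Lambda whenever the pipeline execution succeeds.
const onPipelineSuccess = new events.Rule(this, "RunMigrationsOnPipelineSuccess", {
  eventPattern: {
    source: ["aws.codepipeline"],
    detailType: ["CodePipeline Pipeline Execution State Change"],
    detail: {
      state: ["SUCCEEDED"],
      pipeline: ["MyAppPipeline"], // assumed pipeline name
    },
  },
});
onPipelineSuccess.addTarget(new targets.LambdaFunction(migrationFunction)); // assumed handler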
Note on failure modes: The three options have different failure modes. If the migrations fail as a Custom Resource, the StatefulStack deployment will fail (with changes rolled back) and the pipeline execution will fail. If the migrations are implemented as a step, the pipeline execution will fail but the StatefulStack won't roll back. Finally, if migrations are event-triggered, a failed migration will affect neither the stack nor execution, as they will already be finished when the migrations run.
I wouldn't solve this within a build step of a CDK Pipeline.
Rather, I'd go for the CustomResource approach.
With Custom Resources, especially in CDK, you're always aware of the dependencies and of when you need to run them.
That awareness gets completely lost in a CDK Pipeline context, where you have to figure it out and implement it yourself.
So, what does a Custom Resource look like?
// this lambda function is an example definition, where you would run your actual migration commands
const migrationFunction = new lambda.Function(this, 'MigrationFunction', {
  runtime: lambda.Runtime.PROVIDED_AL2,
  code: lambda.Code.fromAsset('path/to/migration.ts'),
  layers: [
    // find the layers here:
    // https://bref.sh/docs/runtimes/#lambda-layers-in-details
    // https://bref.sh/docs/runtimes/#layer-version-
    lambda.LayerVersion.fromLayerVersionArn(this, 'BrefPHPLayer', 'arn:aws:lambda:us-east-1:209497400698:layer:php-80:21')
  ],
  timeout: cdk.Duration.seconds(30),
  memorySize: 256,
});

const migrationFunctionProvider = new Provider(this, 'MigrationProvider', {
  onEventHandler: migrationFunction,
});

new CustomResource(this, 'MigrationCustomResource', {
  serviceToken: migrationFunctionProvider.serviceToken,
  properties: {
    date: new Date(Date.now()).toUTCString(),
  },
});

// grant your migration lambda the policies to read secrets for your DB connection etc.
// migration.ts
import child_process from 'child_process';
import { promisify } from 'util';
import AWS from 'aws-sdk';

const exec = promisify(child_process.exec);
const sm = new AWS.SecretsManager();

export const handler = async (event, context) => {
  // custom resource properties arrive under event.ResourceProperties;
  // passing them on the event provides more flexibility than env vars
  const { dbName, secretName } = event.ResourceProperties;

  // Retrieve the database credentials from AWS Secrets Manager
  const secret = await sm.getSecretValue({ SecretId: secretName }).promise();
  const { username, password } = JSON.parse(secret.SecretString);

  // Run the migration command with the database credentials
  // (await it so the custom resource doesn't report success before the migration finishes)
  const command = `php artisan migrate --database=mysql --host=your-database-host --port=3306 --database=${dbName} --username=${username} --password=${password}`;
  const { stdout, stderr } = await exec(command);
  console.log(`stdout: ${stdout}`);
  console.error(`stderr: ${stderr}`);
};
The Custom Resource takes your migration Lambda function as its event handler.
The Lambda runs the actual command to do your database migration.
The Custom Resource is applied on every deployment: because the date property changes with each synth, CloudFormation sees an update and invokes the handler again.
More generally, you can control when it executes by altering any property of the CustomResource.
I'm building an application with AWS CDK that uses CodePipeline. So there are essentially two stacks: one sets up the CodePipeline and the other sets up the application (and it's triggered by the pipeline).
I'm working from what is built in https://cdkworkshop.com/, so in my project I have a cdk.json file with an app entry pointing to a specific TypeScript file (example4-be is the application name):
{
"app": "npx ts-node --prefer-ts-exts bin/example4-be.ts",
This file builds the CodePipeline stack:
#!/usr/bin/env node
import * as cdk from "aws-cdk-lib"
import {PipelineStack} from "../lib/pipeline-stack"
const app = new cdk.App()
new PipelineStack(app, "Example4BePipeline")
So when I try to use sam to run the application locally, it fails saying there are no Lambda functions. I believe that's because it's building the CodePipeline stack and not the application stack. If I change example4-be.ts to this:
#!/usr/bin/env node
import * as cdk from "aws-cdk-lib"
import {Example4BeStack} from "../lib/example4-be-stack";
const app = new cdk.App()
new Example4BeStack(app, "Example4BePipeline")
it works. Example4BeStack is the application stack. But obviously if I commit this, the CodePipeline will stop working.
How can I have both things working at the same time?
The commands I run to have sam run the application locally are:
cdk synth --no-staging | out-file template.yaml -encoding utf8
sam local start-api
Create two cdk.App chains in your codebase, one for the pipeline and one for standalone development/testing with sam local or cdk deploy. Your "application" stacks will be part of both chains. Here's a simplified example of the pattern I use:
Pipeline deploy (app-pipeline.ts): ApiStack and DatabaseStack are children of a cdk.Stage, grandchildren of the PipelineStack, and great-grandchildren of a cdk.App.
Development deploys (app.ts): ApiStack and DatabaseStack are children of a cdk.App. Use with sam local and cdk deploy for dev and testing.
bin/
app.ts # calls makeAppStacks to add the stacks; runs frequently during development
app-pipeline.ts # adds the PipelineStack to an App
lib/
ApiStack.ts
DatabaseStack.ts
PipelineStack.ts # adds DeployStage to the pipeline
DeployStage.ts # subclasses cdk.Stage; calls makeAppStacks.ts to add the stacks
makeAppStacks.ts # adds the Api and Db stacks to either an App or a Stage
A makeAppStacks wrapper function instantiates the actual stacks.
// makeAppStacks.ts
export const makeAppStacks = (scope: cdk.App | DeployStage, appName: string, account: string, region: string): void => {
  const {table} = new DatabaseStack(scope, 'MyDb', ...)
  new ApiStack(scope, 'MyApi', {table, ...})
};
makeAppStacks gets called in two places. DeployStage.ts and app.ts are generic and rarely change:
// DeployStage.ts
export class DeployStage extends cdk.Stage {
  constructor(scope: Construct, id: string, props: DeployStageProps) {
    super(scope, id, props);
    makeAppStacks(this, props.appName, props.env.account, props.env.region);
  }
}

// app.ts
const app = new cdk.App();
const account = process.env.AWS_ACCOUNT;
makeAppStacks(app, 'MyApp', account, 'us-east-1');
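For completeness, the pipeline chain can be sketched like this (the repository source, connection ARN, and account/region values are placeholders, not a definitive implementation):

// app-pipeline.ts / PipelineStack.ts (sketch)
import * as cdk from "aws-cdk-lib";
import * as pipelines from "aws-cdk-lib/pipelines";
import { Construct } from "constructs";
import { DeployStage } from "../lib/DeployStage";

class PipelineStack extends cdk.Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    const pipeline = new pipelines.CodePipeline(this, "Pipeline", {
      synth: new pipelines.ShellStep("Synth", {
        input: pipelines.CodePipelineSource.connection("org/repo", "main", {
          connectionArn: "arn:aws:codestar-connections:...", // placeholder
        }),
        commands: ["npm ci", "npx cdk synth --app 'ts-node ./bin/app-pipeline.ts'"],
      }),
    });
    // DeployStage calls makeAppStacks, so the pipeline deploys the same stacks as app.ts
    pipeline.addStage(new DeployStage(this, "Deploy", {
      appName: "MyApp",
      env: { account: "123456789012", region: "us-east-1" }, // placeholders
    }));
  }
}

const app = new cdk.App();
new PipelineStack(app, "MyAppPipeline");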
Add some scripts for convenience:
"scripts": {
"---- app (sandbox env) ----": "",
"deploy-sandbox:cdk": "AWS_ACCOUNT=<Sandbox Acct> npx cdk deploy '*' --app 'ts-node ./bin/app.ts' --profile sandbox --outputs-file cdk.outputs.json",
"deploy-sandbox": "build && test && deploy-sandbox:cdk",
"destroy-sandbox": ...,
"synth-sandbox": ...,
"---- app-pipeline (pipeline env) ----": "",
"deploy-pipeline:cdk": "npx cdk deploy '*' --app 'ts-node ./bin/app-pipeline.ts' --profile pipeline",
"deploy-pipeline": "build && deploy-pipeline:cdk",
}
I am trying to debug a Lambda function locally using the SAM CLI and AWS CDK, and I am getting a "function module not found" error. Any idea why? I have taken this project from GitHub: https://github.com/mavi888/cdk-serverless-get-started
function.js:
exports.handler = async function (event) {
  console.log("request:", JSON.stringify(event));
  // return response back to upstream caller
  return sendRes(200, "HELLLOOO");
};

const sendRes = (status, body) => {
  var response = {
    statusCode: status,
    headers: {
      "Content-Type": "text/html",
    },
    body: body,
  };
  return response;
};
Inside the lib folder:
// lambda function
const dynamoLambda = new lambda.Function(this, "DynamoLambdaHandler", {
  runtime: lambda.Runtime.NODEJS_12_X,
  code: lambda.Code.asset("functions"),
  handler: "function.handler",
  environment: {
    HELLO_TABLE_NAME: table.tableName,
  },
});
I am using the cdk synth > template.yaml command, which generates the CloudFormation template.yaml file. I then find the function's logical ID (e.g. myFunction12345678; in my case it is the DynamoLambdaHandler function) and try to debug it locally with sam local invoke myFunction12345678. I get a "function module not found" error. Any idea what I am missing?
Code is available on github: https://github.com/mavi888/cdk-serverless-get-started
The issue is that sam runs a Docker container with a volume mount from the current directory. It's not finding the Lambda code because the code path in the CloudFormation template that CDK creates does not include the cdk.out directory in which cdk stages the assets.
You have two options:
Run your sam command with a defined volume mount: sam local invoke -v cdk.out
Run the command from within the cdk.out directory and pass the JSON template as an argument, since cdk writes a JSON template: sam local invoke -t <StackNameTemplate.json>
I'd recommend the latter because you're working within the framework that CDK creates and not creating additional files.
Is it possible to run an external build command as part of a CDK stack sequence? Intention: 1) create a rest API, 2) write rest URL to config file, 3) build and deploy a React app:
import apigateway = require('@aws-cdk/aws-apigateway');
import cdk = require('@aws-cdk/core');
import fs = require('fs');
import s3 = require('@aws-cdk/aws-s3');
import s3deployment = require('@aws-cdk/aws-s3-deployment');

export class MyStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    const restApi = new apigateway.RestApi(this, ..);
    fs.writeFileSync('src/app-config.json',
      JSON.stringify({ "api": restApi.deploymentStage.urlForPath('/myResource') }));

    // TODO locally run 'npm run build', create 'build' folder incl rest api config
    const websiteBucket = new s3.Bucket(this, ..);
    new s3deployment.BucketDeployment(this, .. {
      sources: [s3deployment.Source.asset('build')],
      destinationBucket: websiteBucket
    });
  }
}
Unfortunately, it is not possible, as the necessary references only become available after deployment, which is after you try to write the file (so the file would contain unresolved CDK tokens).
I personally have solved this problem by telling cdk to output the API Gateway URLs to a file and then parsing it after the deploy to upload a config file to an S3 bucket. To do it you need to:
deploy with the outputs-file option, for example:
cdk deploy -O ./cdk.out/deploy-output.json
In ./cdk.out/deploy-output.json you will find a JSON object with a key for each stack that produced an output (e.g. your stack that contains an API gateway)
manually parse that JSON to get your apigateway url
create your configuration file and upload it to S3 (you can do it via aws-sdk)
Of course, you'd put the last steps in a custom script, which means wrapping your cdk deploy. I suggest doing so with a Node.js script so that you can leverage aws-sdk to upload your file to S3 easily.
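A minimal sketch of such a wrapper script (the stack name, output key, and bucket name are placeholders you'd replace with your own):

// upload-config.ts -- run after: cdk deploy -O ./cdk.out/deploy-output.json
import * as fs from "fs";
import * as AWS from "aws-sdk";

async function main() {
  // parse the outputs file produced by cdk deploy -O
  const outputs = JSON.parse(fs.readFileSync("./cdk.out/deploy-output.json", "utf8"));
  const apiUrl = outputs["MyApiStack"]["MyRestURL"]; // placeholder stack/output names

  // build the React app config and upload it next to the site assets
  const config = JSON.stringify({ api: { invokeUrl: apiUrl } });
  await new AWS.S3()
    .putObject({ Bucket: "my-website-bucket", Key: "app-config.json", Body: config })
    .promise();
}

main().catch((err) => { console.error(err); process.exit(1); });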
Accepting that cdk doesn't support this, I split the logic into two cdk scripts, accessed the API Gateway URL as a cdk output via the CLI, then wrapped everything in a bash script.
AWS CDK:
// API gateway
const api = new apigateway.RestApi(this, 'my-api', ..)
// output url
const myResourceURL = api.deploymentStage.urlForPath('/myResource');
new cdk.CfnOutput(this, 'MyRestURL', { value: myResourceURL });
Bash:
# deploy api gw
cdk deploy --app (..)
# read url via cli with --query
export rest_url=`aws cloudformation describe-stacks --stack-name (..) --query "Stacks[0].Outputs[?OutputKey=='MyRestURL'].OutputValue" --output text`
# configure React app
echo "{ \"api\" : { \"invokeUrl\" : \"$rest_url\" } }" > src/app-config.json
# build React app with url
npm run build
# run second cdk app to deploy React built output folder
cdk deploy --app (..)
Is there a better way?
I solved a similar issue:
Needed to build and upload react-app as well
Supported dynamic configuration reading from react-app - look here
Released my react-app with specific version (in a separate flow)
Then, during CDK deployment of my app, it took a specific version of my react-app (the version retrieved from local configuration) and uploaded its zip file to an S3 bucket using CDK BucketDeployment.
Then, using AwsCustomResource I generated a configuration file with references to Cognito and API-GW and uploaded this file to S3 as well:
// create s3 bucket for react-app
const uiBucket = new Bucket(this, "ui", {
  bucketName: this.stackName + "-s3-react-app",
  blockPublicAccess: BlockPublicAccess.BLOCK_ALL
});

let confObj = {
  "myjsonobj": {
    "region": `${this.region}`,
    "identity_pool_id": `${props.CognitoIdentityPool.ref}`,
    "myBackend": `${apiGw.deploymentStage.urlForPath("/")}`
  }
};
const dataString = JSON.stringify(confObj, null, 4);

const bucketDeployment = new BucketDeployment(this, this.stackName + "-app", {
  destinationBucket: uiBucket,
  sources: [Source.asset(`reactapp-v1.zip`)]
});
bucketDeployment.node.addDependency(uiBucket);

const s3Upload = new custom.AwsCustomResource(this, 'config-json', {
  policy: custom.AwsCustomResourcePolicy.fromSdkCalls({ resources: custom.AwsCustomResourcePolicy.ANY_RESOURCE }),
  onCreate: {
    service: "S3",
    action: "putObject",
    parameters: {
      Body: dataString,
      Bucket: `${uiBucket.bucketName}`,
      Key: "app-config.json",
    },
    physicalResourceId: PhysicalResourceId.of(`${uiBucket.bucketName}`)
  }
});
s3Upload.node.addDependency(bucketDeployment);
As others have mentioned, this isn't supported within CDK. So this how we solved it in SST: https://github.com/serverless-stack/serverless-stack
On the CDK side, allow defining React environment variables using the outputs of other constructs.
// Create a React.js app
const site = new sst.ReactStaticSite(this, "Site", {
  path: "frontend",
  environment: {
    // Pass in the API endpoint to our app
    REACT_APP_API_URL: api.url,
  },
});
Spit out a config file while starting the local environment for the backend.
Then start React using sst-env -- react-scripts start, where we have a simple CLI that reads from the config file and loads the values as build-time environment variables in React.
While deploying, replace these environment variables inside a custom resource based on the outputs.
We wrote about it here: https://serverless-stack.com/chapters/setting-serverless-environments-variables-in-a-react-app.html
And here's the source for the ReactStaticSite and StaticSite constructs for reference.
In my case, I'm using Python for CDK. I have a Makefile which I invoke directly from my app.py like this:
os.system("make"). I use make to build a layer zip file per the AWS docs. Technically you can invoke whatever you'd like; you must import the os package, of course. Hope this helps.
I have tried the following code:
var exec = require('child_process').execFile;

var runCmd = 'java -jar ' + process.env.LAMBDA_TASK_ROOT + '/src/' + 'myjar.jar';
exec(runCmd,
  function (err, resp) {
    if (err) {
      cb(null, { err: err });
    } else {
      cb(null, { resp: resp });
    }
  }
);
Here, I have put my jar file in the root folder and also in the src folder, but it gives me the following error. I have already packaged the .jar file with the code:
"err": {
"code": "ENOENT",
"errno": "ENOENT",
"syscall": "spawn java -jar /var/task/src/myjar.jar",
"path": "java -jar /var/task/src/myjar.jar",
"spawnargs": [],
"cmd": "java -jar /var/task/src/myjar.jar"
}
So how can I execute this .jar file in the AWS Lambda environment?
Please help me.
With Lambda Layers you can now bring in multiple runtimes.
https://github.com/lambci/yumda and https://github.com/mthenw/awesome-layers both have a lot of prebuilt packages that you can use to create a layer so you have a second runtime available in your environment.
For instance, I'm currently working on a project that uses the Ruby 2.5 runtime on top of a custom layer built from lambci/yumda to provide Java.
mkdir dependencies
docker run --rm -v "$PWD"/dependencies:/lambda/opt lambci/yumda:1 yum install -y java-1.8.0-openjdk-devel.x86_64
cd dependencies
zip -yr ../javaLayer .
Upload javaLayer.zip to AWS Lambda as a layer.
Add the layer to your function.
Within your function, java will be located at /opt/lib/jvm/{YOUR_SPECIFIC_JAVA_VERSION}/jre/bin/java.
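From a Node.js handler, calling that layer-provided binary might look roughly like this sketch (the JVM path keeps the placeholder above, and the jar location is an assumption):

// Sketch only: spawn the layer-provided java binary from a Node.js Lambda handler.
import { execFile } from "child_process";

export const handler = async (): Promise<string> =>
  new Promise((resolve, reject) => {
    execFile(
      "/opt/lib/jvm/{YOUR_SPECIFIC_JAVA_VERSION}/jre/bin/java", // check your layer for the exact path
      ["-jar", `${process.env.LAMBDA_TASK_ROOT}/myjar.jar`],    // assumed jar location
      (err, stdout) => (err ? reject(err) : resolve(stdout)),
    );
  });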
AWS Lambda lets you select a runtime when you create a Lambda function, and you can change it later.
So, as you are running the Lambda function with the Node.js runtime, the container will not have a Java runtime available to it.
You can only have one runtime per container in AWS Lambda.
So, create a separate Lambda with the jar file you want to run, using Java as the runtime, and then trigger that Lambda function from your current Node.js Lambda function, if that's what you ultimately want.
Following is an example of how you can call another Lambda function using NodeJS
var aws = require('aws-sdk');

var lambda = new aws.Lambda({
  region: 'put_your_region_here'
});

lambda.invoke({
  FunctionName: 'lambda_function_name',
  Payload: JSON.stringify(event, null, 2)
}, function(error, data) {
  if (error) {
    context.done('error', error);
  }
  if (data.Payload) {
    context.succeed(data.Payload);
  }
});
You can refer to the official documentation for more details.
In addition to the other answers: Since 2020 December, Lambda supports container images: https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/
Ex.: I created a container image using AWS's open-source base image for Python, adding a line to install Java. One thing my Python code did was execute a .jar file using a sys call.