AWS CDK - post deployment actions

Is anyone aware of a method to execute post-deploy functionality? Following is a sample of a typical CDK app.
app = core.App()
Stack(app, ...)
app.synth()
What I am looking for is a way to apply some logic after the template is deployed. The problem is that the app completes before the cdk tool starts deploying the template.
thanks

Edit: CDK now has https://github.com/cdklabs/cdk-triggers, which allows calling Lambda functions before/after resource/stack creation
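For illustration, here is a minimal sketch using the triggers module that was later folded into aws-cdk-lib; the handler asset path is hypothetical:
import { App, Stack } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as triggers from 'aws-cdk-lib/triggers';

const app = new App();
const stack = new Stack(app, 'MyStack');

// TriggerFunction deploys the handler and invokes it once during deployment;
// pass executeAfter: [...] to run it after specific resources are created.
new triggers.TriggerFunction(stack, 'PostDeploy', {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset('./post-deploy'), // hypothetical handler directory
});

app.synth();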
You can't do that from CDK at the moment. See https://github.com/awslabs/aws-cdk/issues/2849. Maybe add your +1 there, let them know you'd like to see this feature.
What you can do is wrap cdk deploy in a shell script that will run whatever you need after the CDK is done. Something like:
#!/bin/sh
cdk deploy "$@"
success=$?
if [ $success -ne 0 ]; then
  exit $success
fi
run_post_deploy_with_arguments.sh "$@"
This will run deploy with the given arguments, then call a shell script with the same arguments if the deployment was successful. It is a very crude example.

Instead of wrapping the cdk deploy command in a bash script, I find it more convenient to add pre- and post-deployment scripts in a cdk_hooks.sh file and call it before and after the CDK deployment command via the cdk.json file. This way you can keep using the cdk deploy command without calling custom scripts manually.
cdk.json
{
  "app": "sh cdk_hooks.sh pre && npx ts-node bin/stacks.ts && sh cdk_hooks.sh post",
  "context": {
    "@aws-cdk/core:enableStackNameDuplicates": "true",
    "aws-cdk:enableDiffNoFail": "true"
  }
}
and cdk_hooks.sh
#!/bin/bash
PHASE=$1

case "$PHASE" in
  pre)
    # Do something
    ;;
  post)
    # Do something
    ;;
  *)
    echo "Please provide a valid cdk_hooks phase"
    exit 64
    ;;
esac

You can use a CustomResource to run some code in a Lambda (which, unfortunately, you will also need to deploy). The Lambda receives the custom resource lifecycle event (Create, Update, Delete), so you can handle different scenarios: say you want to seed some table after deploy; this way you will also be able to clean up the data on destroy.
Here is a pretty good post about it.
Personally I couldn't find a more elegant way to do this.
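As a rough illustration, here is a minimal sketch of the pattern in TypeScript (CDK v2), using the provider framework from aws-cdk-lib/custom-resources; the construct names and the inline handler body are hypothetical:
import { Stack, StackProps, CustomResource, Duration } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as cr from 'aws-cdk-lib/custom-resources';
import { Construct } from 'constructs';

export class SeedStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Lambda that receives the Create/Update/Delete lifecycle events
    const onEvent = new lambda.Function(this, 'SeedHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      timeout: Duration.minutes(5),
      code: lambda.Code.fromInline(`
        exports.handler = async (event) => {
          if (event.RequestType === 'Create') { /* seed the table */ }
          if (event.RequestType === 'Delete') { /* clean up the data */ }
          return { PhysicalResourceId: 'seed-data' };
        };
      `),
    });

    // The provider framework wires the Lambda up as a CloudFormation provider
    const provider = new cr.Provider(this, 'SeedProvider', { onEventHandler: onEvent });

    new CustomResource(this, 'SeedData', { serviceToken: provider.serviceToken });
  }
}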

Short answer: you can't. I've been waiting for this feature as well.
What you can do is wrap your deployment in a custom script that performs all your other logic, which also makes sense given that what you want to do is probably not strictly a "deploy thing" but more like "configure this and that now that the deploy is finished".
Another solution would be to rely on CodeBuild to perform your deploys and define there all your steps and which custom scripts to run after a deploy (I personally use this solution, with a specific stack to deploy this particular CodeBuild project; see the sketch below).
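A minimal sketch of what that could look like, assuming a CDK-defined CodeBuild project and a hypothetical post_deploy.sh script:
import { App, Stack } from 'aws-cdk-lib';
import * as codebuild from 'aws-cdk-lib/aws-codebuild';

const app = new App();
const stack = new Stack(app, 'DeployProjectStack');

// CodeBuild project that runs the deploy, then the custom post-deploy steps
new codebuild.Project(stack, 'DeployProject', {
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    phases: {
      build: { commands: ['npx cdk deploy --require-approval never'] },
      post_build: { commands: ['./post_deploy.sh'] }, // hypothetical script
    },
  }),
});

app.synth();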

Related

Watching TypeScript CDK Builds

I need to be able to watch for my TypeScript lambda function changes within my CDK app. I'm using SAM to locally invoke the API and do not want to deploy to the cloud each time changes happen. So using something such as SAM Accelerate, for example, is not an option.
Currently, I must run cdk build and sam local start-api manually each time I change a single line in my function code, and it painfully takes a long time to start.
Any solutions or workarounds for this?
You need a TypeScript watch feature with a hook to run arbitrary post-compile commands.* TypeScript's tsc --watch can't do it (open issue), but the tsc-watch package can:
tsc-watch --onSuccess "./start-api.sh"
tsc-watch will call start-api.sh after each successful compile, synthing a SAM-friendly template version and starting the local testing API:
# start-api.sh
STACK_NAME=MyStack
npx cdk synth $STACK_NAME -a 'ts-node ./bin/app.ts' --no-staging --no-validation --quiet --output cdk.local
sam local start-api --template cdk.local/$STACK_NAME.template.json
* cdk watch (an alias of cdk deploy --watch) won't work in your case, because you don't want to deploy on each change.

Run command conditionally while deploying with amazon ECS

I have a Django app deployed on ECS. On my first deployment I need to run fixtures, a management command that has to be run inside the container.
I want to be able to run fixtures conditionally, not only on the first deployment. One way I was thinking of is to maintain a variable in my environment and run fixtures from entrypoint.sh accordingly.
Is this a good way to go about it? Also, what are some other standard ways to do the same?
Let me know if I have missed some details you might need to understand my problem.
You probably need to handle it in your entrypoint.sh script. As far as my experience goes, you won't be able to run commands conditionally on ECS without such a script.

AWS Deploy ECS with Updated Image

It appears that one must provide a full new task definition for each service update, even though most of the time new deployments consist exclusively of an update to one of the underlying docker images.
While this is understandable as a core architectural choice, it is quite cumbersome. Is there a command-line option that makes this easier, given that the full JSON spec for task definitions is quite complex?
Right now developers need to provide complex scripts and deployment orchestrations to achieve this relatively routine task in their CI/CD processes.
I see attempts at this Here and Here. These solutions do not appear to work in all cases (for example, for Fargate launches).
I know that this problem is made easier if the updated image re-uses the same tag, but in dev cultures that value reproducibility and auditability that is simply not a reasonable option.
Is there no other option than to leverage both the AWS API & JSON manipulation libraries?
EDIT It appears this project does a fairly good job https://github.com/fabfuel/ecs-deploy
I found a few approaches:
- As mentioned in my comment, use the ecs-deploy script per the GitHub link.
- Create a task definition via the --generate-cli-skeleton option of the awscli:
  - Fill out all details except for execution-role-arn, task-role-arn, and image; these cannot be filled out because they change per commit or per environment you deploy to.
  - Commit this skeleton to git so it is part of your workspace on the CI.
  - Then, at build time on the CI, use a JSON traversing/parsing library or a utility such as https://jqplay.org/ to fill in the roleArn and image name (see the sketch after this list).
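For instance, a small Node/TypeScript build step could do the substitution instead of jq; the file names and environment variables here are hypothetical:
import * as fs from 'fs';

// Load the committed skeleton and fill in the values that change per build
const taskDef = JSON.parse(fs.readFileSync('taskdef.skeleton.json', 'utf8'));
taskDef.executionRoleArn = process.env.EXECUTION_ROLE_ARN;
taskDef.taskRoleArn = process.env.TASK_ROLE_ARN;
taskDef.containerDefinitions[0].image = `my-image:${process.env.GIT_SHA}`;

fs.writeFileSync('taskdef.json', JSON.stringify(taskDef, null, 2));
// Then register it: aws ecs register-task-definition --cli-input-json file://taskdef.json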
Use https://github.com/fabfuel/ecs-deploy.
If you want to update only the tag of an existing task definition:
ecs deploy <CLUSTER NAME> <SERVICE NAME> --region <REGION NAME> --tag <NEW TAG>
e.g. ecs deploy default web-service --region us-east-1 --tag v2.0
In your CI/CD you can use the git hash as the tag:
git rev-parse HEAD returns a hash like d63c16cd4d0c9a30524c682fe4e7d417faae98c9
docker build -t image-name:$(git rev-parse HEAD) .
docker push image-name:$(git rev-parse HEAD)
And use the same tag on task:
ecs deploy default web-service --region us-east-1 --tag $(git rev-parse HEAD)

How do I run my CDK app?

I created and built a new CDK project:
mkdir myproj
cd myproj
cdk init --language typescript
npm run build
If I try to run the resulting javascript, I see the following:
PS C:\repos\myproj> node .\bin\myproj.js
CloudExecutable/1.0
Usage:
C:\repos\myproj\bin\myproj.js REQUEST
REQUEST is a JSON-encoded request object.
What is the right way to run my app?
You don't need to run your CDK programs directly; use the CDK Toolkit instead.
To synthesize an AWS CloudFormation template from your app:
cdk synth --app "node .\bin\myproj.js"
To avoid re-typing the --app switch every time, you can set up a cdk.json file with:
{ "app": "node bin/myproj.js" }
Note: A default cdk.json is created by cdk init, so you should already see it under C:\repos\myproj.
You can also use the toolkit to deploy your app into an AWS environment:
cdk deploy
Or list all the stacks in your app:
cdk ls
The CDK application expects a request to be provided as a positional CLI argument when you're using the low-level API (aka running the app directly), for example:
node .\bin\myproj.js '{"type":"list"}'
It can also be passed as a Base64-encoded blob instead (that can make quoting the JSON less painful in a number of cases) - the Base64 needs to be prefixed with base64: in this case.
node .\bin\myproj.js base64:eyAidHlwZSI6ICJsaXN0IiB9Cg==
To determine which APIs are available and what arguments they expect, you can refer to the @aws-cdk/cx-api specification.

Running 'git' in AWS lambda

I am trying to run git in AWS lambda to make a checkout of a repository.
This is my setup:
I am using nodejs 4.3
I am not using nodegit because I want to use the "--depth=1" parameter, which is not supported by nodegit.
I have copied the git and ssh executables from the correct AWS AMI and placed them in a "bin" folder in the zip I upload.
I added them to PATH with this:
process.env['PATH'] = process.env['LAMBDA_TASK_ROOT'] + "/bin:" + process.env['PATH'];
The input variables are set like this:
"checkout_url": "git#...",
"branch":"master
Now I do this (for brevity, I mixed some pseudo-code in):
downloadDeploymentKeyFromS3Sync('/tmp/ssh_key');
fs.chmodSync("/tmp/ssh_key",0600);
process.env['GIT_SSH_COMMAND'] = 'ssh -o StrictHostKeyChecking=no -i /tmp/ssh_key';
execSync("git clone --depth=1 " + checkout_url + " --branch " + branch + " /tmp/checkout");
Running this in my local computer using lambda-local everything works fine! But when I test it in lambda, I get:
warning: templates not found /usr/share/git-core/templates
PRIV_END: seteuid: Operation not permitted\r
fatal: Could not read from remote repository.
The "warning" is of course, because I did not install git but just copied the binary. Is that a reason why this should not work?
Why is git needing "setuid"? I read that in some shells, that is disabled for security reasons. So it makes sense that it does not work in lambda. Can git somehow be instructed to not "need" this command?
Yep, this is definitely possible; I've created a Lambda Layer that achieves just this. No need to mess with any env variables, it should work out of the box:
https://github.com/lambci/git-lambda-layer
As stated in the README, all you need to do is add a layer with the following ARN:
arn:aws:lambda:<region>:553035198032:layer:git:<version>
(replace <region> and <version>, check README for latest version)
The issue is that you cannot copy just the git binary. You need a portable version of git, and even with that you're going to have a bad time, because you cannot guarantee that the OS the Lambda function runs on will be compatible with the binary.
Stepping back, I would walk away from this approach completely. I would clone and build the package elsewhere, then just download it at runtime, pretty much the same way you do with downloadDeploymentKeyFromS3Sync.
You might consider this a non-answer, but I've found the easiest way to run arbitrary binaries from Lambda is... not to. If I cannot do the work from within a platform-independent, non-binary approach, I integrate Docker into the workflow, managing Docker containers from the Lambda function.
On AWS one way to do this is to use the Elastic Container Service (ECS) to spawn a task that runs git.
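A minimal sketch of that hand-off from the Lambda side, using AWS SDK v3; the cluster, task definition, and subnet names are hypothetical:
import { ECSClient, RunTaskCommand } from '@aws-sdk/client-ecs';

const ecs = new ECSClient({});

export const handler = async () => {
  // Spawn a one-off Fargate task whose container image has git installed
  await ecs.send(new RunTaskCommand({
    cluster: 'my-cluster',            // hypothetical
    taskDefinition: 'git-clone-task', // hypothetical
    launchType: 'FARGATE',
    networkConfiguration: {
      awsvpcConfiguration: {
        subnets: ['subnet-12345678'], // hypothetical
        assignPublicIp: 'ENABLED',
      },
    },
  }));
};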
If you stand up a Docker Swarm instance or integrate another Docker-API compatible service such as Rackspace Carina or Joyent's Triton, then you could use a project I personally put together specifically for integrating AWS Lambda with Docker: "Dockaless".
Good luck!