AWS-CDK Resources - amazon-web-services

I'm using the CDK to create KMS keys (and other resources, for that matter) for my project and want to ensure I'm handling the resources properly.
During development I might do a deploy, do some development work, then issue a cdk destroy to clean up the project, as I know I won't be back to it for some days.
If I don't wrap the import in a try/catch, I find duplicate keys being created, or for some resources like DynamoDB the deploy fails because the resource already exists:
try {
  const keyRef = kms.Alias.fromAliasName(this, 'SomeKey', 'SomeKey');
} catch {
  const keyRef = new kms.Key(this, 'SomeKey', {
    description: 'Some descriptive text',
    enableKeyRotation: true,
    trustAccountIdentities: true
  });
  keyRef.grantEncryptDecrypt(lambdaFunc);
}
Can anyone suggest a better way of handling this or is this expected?
While developing my projects I don't like to leave resources in play until the solution is at least at Alpha stage.

When creating a KMS key, you can define a removalPolicy:
The default value is RETAIN, meaning the KMS key will stay in your account even after you delete your stack. This is useful for production environments, where you would normally want to keep keys that might be used by resources outside your stack.
In your dev environment you can set it to DESTROY and the key will be deleted with your stack.
You should capture this logic in your code. Something like
const keyRef = new kms.Key(this, 'SomeKey', {
  description: 'Some descriptive text',
  enableKeyRotation: true,
  trustAccountIdentities: true,
  // define a method to check if it's a dev environment
  // and set removalPolicy accordingly
  removalPolicy: isDevEnv() ? cdk.RemovalPolicy.DESTROY : cdk.RemovalPolicy.RETAIN,
});
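isDevEnv() isn't shown above; here is a minimal sketch of one way it could work, reading an environment variable at synth time (CDK_ENV is an invented name, not a CDK convention):

// Hypothetical helper: treat anything that isn't explicitly 'prod' as a dev environment.
// Evaluated at synth time, e.g. CDK_ENV=prod cdk deploy
function isDevEnv(): boolean {
  return process.env.CDK_ENV !== 'prod';
}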

Lambda SnapStart with Serverless Framework

So AWS announced Lambda SnapStart very recently, and I tried to give it a go since my application has a cold start time of ~4s.
I was able to do this by adding the following under resources:
- extensions:
    NodeLambdaFunction:
      Properties:
        SnapStart:
          ApplyOn: PublishedVersions
Now, when I actually go to the said lambda, this is what I see:
So far so good!
But the issue is that when I check my CloudWatch logs, there's no trace of a Restore Time, just the good old Init Duration for cold starts, which means SnapStart isn't working properly.
I dug deeper: SnapStart only works for versioned ARNs. But the thing is, Serverless already claims that:
By default, the framework creates function versions for every deploy.
And on checking the logs, I see that the log streams have the prefix 2022/11/30/[$LATEST].
When I check the Versions tab in the console, I see version number 240. So I would expect that 240 is the latest version of this lambda function and that this is the function version being invoked every time.
However, clicking on the version number opens a lambda function with 240 attached to its ARN, and testing that function with SnapStart works perfectly fine.
So I am confused: are the $LATEST version and version number 240 (in my case) different?
If not, why isn't SnapStart automatically activated for $LATEST?
If they are, how do I make sure they are the same?
SnapStart is only available for published versions of a Lambda function. It cannot be used with $LATEST.
Using Versions is pretty hard for Serverless Framework, SAM, CDK, and basically any other IaC tool today, because by default they will all use $LATEST to integrate with API Gateway, SNS, SQS, DynamoDB, EventBridge, etc.
You need to update the integration with API Gateway (or whatever service you're using) to point to the Lambda Version you publish, after that Lambda deployment has completed. This isn't easy to do using Serverless Framework (and other tools). You may be able to achieve this using this traffic-shifting plugin.
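Concretely, in the synthesized CloudFormation that means replacing references to the function's unqualified ARN with the published version's ARN. A sketch with made-up resource names (Ref on an AWS::Lambda::Version resource returns the version-qualified ARN):

# before: the SQS event source mapping resolves to the $LATEST ARN
FunctionName: !GetAtt NodeLambdaFunction.Arn
# after: it resolves to the published version's ARN
FunctionName: !Ref NodeLambdaFunctionVersionAbc123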
In case you use Step Functions to call your lambda function, you can set useExactVersion: true in
stepFunctions:
  stateMachines:
    yourStateMachine:
      useExactVersion: true
      ...
      definition:
        ...
This will reference the latest version of the function you just deployed.
This has got to be one of the worst feature launches I have seen in a long time. How the AWS team could put in all the time and effort required to bring this feature to market, while simultaneously rendering it useless because we can't script the thing, is beyond me.
We were ready to jump on this and start migrating apps to Lambda, but now we are back in limbo. Even knowing there was a fix coming down the line would be something. Hopefully somebody from the AWS Lambda team can provide some insights...
Here is a working POC of a Serverless plugin that updates the lambda references to use the most recent version. This fixes the resulting CloudFormation template and was tested with both SQS and API Gateway.
'use strict'

class SetCycle {
  constructor (serverless, options) {
    this.hooks = {
      // this is where we declare the hook we want our code to run on
      'before:package:finalize': function () { snapShotIt(serverless) }
    }
  }
}

function traverse (jsonObj, functionVersionMap) {
  if (jsonObj !== null && typeof jsonObj === 'object') {
    Object.entries(jsonObj).forEach(([key, value]) => {
      if (key === 'Fn::GetAtt' && value.hasOwnProperty('length') && value.length === 2 && value[1] === 'Arn' && functionVersionMap.get(value[0])) {
        console.log(jsonObj)
        let newVersionedMethod = functionVersionMap.get(value[0])
        delete jsonObj[key]
        jsonObj.Ref = newVersionedMethod
        console.log('--becomes')
        console.log(jsonObj)
      } else {
        // key is either an array index or an object key
        traverse(value, functionVersionMap)
      }
    })
  } else {
    // jsonObj is a number or a string; nothing to do
  }
}

function snapShotIt (serverless) {
  resetLambdaReferencesToVersionedVariant(serverless)
}

function resetLambdaReferencesToVersionedVariant (serverless) {
  const functionVersionMap = new Map()
  let rsrc = serverless.service.provider.compiledCloudFormationTemplate.Resources
  // build a map of all the lambda functions and their associated versioned resources
  for (let key in rsrc) {
    if (rsrc[key].Type === 'AWS::Lambda::Version') {
      functionVersionMap.set(rsrc[key].Properties.FunctionName.Ref, key)
    }
  }
  // loop through all the resources and replace the non-versioned with the versioned lambda ARN reference
  for (let key in rsrc) {
    if (!(rsrc[key].Type === 'AWS::Lambda::Version' || rsrc[key].Type === 'AWS::Lambda::Function')) {
      console.log('--' + key)
      traverse(rsrc[key], functionVersionMap)
    }
  }
  // add the SnapStart syntax to every function
  for (let key in rsrc) {
    if (rsrc[key].Type === 'AWS::Lambda::Function') {
      console.log(rsrc[key].Properties)
      rsrc[key].Properties.SnapStart = { ApplyOn: 'PublishedVersions' }
      console.log('--becomes')
      console.log(rsrc[key].Properties)
    }
  }
  // prints the function/version map
  // for (let [key, value] of functionVersionMap) {
  //   console.log(key + ' : ' + value)
  // }
}

// now we need to make our plugin object available to the framework to execute
module.exports = SetCycle
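To try the plugin, save it locally (the filename below is only an example) and register it in serverless.yml:

plugins:
  - ./set-cycle.js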
I was able to achieve this by updating my Serverless version to 3.26.0 and adding the property snapStart: true to the functions that I have created. Currently Serverless creates version numbers, and as soon as a new version is published, SnapStart gets enabled for that latest version.
ApiName:
  handler: org.springframework.cloud.function.adapter.aws.SpringBootApiGatewayRequestHandler
  events:
    - httpApi:
        path: /end/point
        method: post
  environment:
    FUNCTION_NAME: ApiName
  runtime: java11
  memorySize: 4096
  snapStart: true

Using the CfnOutput created inside a LambdaRestApi in AWS CDK

I'm creating a LambdaRestApi as follows
this.gateway = new apigw.LambdaRestApi(this, "Endpoint", {
  handler: hello,
  endpointExportName: "MainURL"
})
and I'd like to get at the CfnOutput it generates. Is that possible? I want to pass it to other functions, and I want to avoid creating a new one.
Specifically, the situation I'm tackling is this: I have a post-deployment stage that verifies things are working, and it uses the CfnOutput:
deployStage.addPost(
  new CodeBuildStep("VerifyAPIGatewayEndpoint", {
    envFromCfnOutputs: {
      ENDPOINT_URL: deploy.hcEndpoint
    },
    commands: [
      "curl -Ssf $ENDPOINT_URL",
      "curl -Ssf $ENDPOINT_URL/hello",
      "curl -Ssf $ENDPOINT_URL/test"
    ]
  })
)
That deploy.hcEndpoint is a CfnOutput that I'm manually creating after the LambdaRestApi is created:
const gateway = new LambdaRestApi(this, "Endpoint", {handler: hello})
this.hcEndpoint = new CfnOutput(this, "GatewayUrl", {value: gateway.url})
and then making sure that every construct makes it available to its parent.
Using CfnOutputs in the post-deployment step makes sense. I am trying to learn the proper way of doing things, and also to keep my stacks clean. With only one Lambda function it's no big deal, but with tens or hundreds it might be. And since LambdaRestApi already creates the output, it feels like I'm repeating myself by creating an identical one.
Assuming you are using the following code for your LambdaRestApi:
this.gateway = new apigw.LambdaRestApi(this, "Endpoint", {
  handler: hello,
  endpointExportName: "MainURL"
});
Referencing in the same stack as the LambdaRestApi
const outputValue = this.gateway.urlForPath("/");
Looking at the source code, the output value is just a call to urlForPath. The method is public, so you can use it directly.
Referencing from another stack
You can use cross stack references to get a reference to the output value of the stack.
import { Fn } from 'aws-cdk-lib';
const outputValue = Fn.importValue("MainURL");
If you try to use the first method in another stack, CDK will just generate a cross stack reference dynamically by adding extra outputs, so it is better to import the value directly.
I'd like to get to the CfnOutput it generates, is it possible?
Yes. Use the escape hatch syntax to get a reference to the CfnOutput that RestApi creates for the endpointExportName:
const urlCfnOutput = this.gateway.node.findChild('Endpoint') as cdk.CfnOutput;
console.log(urlCfnOutput.exportName);
// MainURL
console.log(urlCfnOutput.value);
// https://${Token[TOKEN.258]}.execute-api.us-east-1.${Token[AWS.URLSuffix.3]}/${Token[TOKEN.277]}/
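That reference can then be passed around like any other CfnOutput. For instance, a sketch of wiring it into the question's post-deployment step (assuming deploy exposes the gateway):

const urlCfnOutput = deploy.gateway.node.findChild('Endpoint') as cdk.CfnOutput;

deployStage.addPost(
  new CodeBuildStep("VerifyAPIGatewayEndpoint", {
    // reuse the output LambdaRestApi already created instead of a duplicate
    envFromCfnOutputs: { ENDPOINT_URL: urlCfnOutput },
    commands: ["curl -Ssf $ENDPOINT_URL"],
  })
);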
Prefer standard CDK
As their name suggests, "escape hatches" are for "emergencies" when the CDK's standard solutions fail. Your use case may be one such instance, I don't know. But as @Kaustubh Khavnekar points out, you don't need the CfnOutput to get the url token value.
console.log(this.gateway.url)
// https://${Token[TOKEN.258]}.execute-api.us-east-1.${Token[AWS.URLSuffix.3]}/${Token[TOKEN.277]}/

How can I disable transition in codepipeline via CDK?

I am using the Node.js CDK to deploy CodePipeline to AWS. Below is the code:
const pipeline = new codepipeline.Pipeline(this, this.projectName, {
  pipelineName: this.projectName,
  role: this.pipelineRole,
  stages,
  artifactBucket: s3.Bucket.fromBucketName(
    this,
    'deploymentS3Bucket',
    cdk.Fn.importValue(this.s3Bucket)
  ),
});
It has all stages defined inside the stages array. My question is how to disable the transition into one of the stages of this pipeline.
I tried below code:
const primaryDeployStage: codepipeline.CfnPipeline = pipeline.node.findChild('Approve') as codepipeline.CfnPipeline;
const stageTransitionProperty: codepipeline.CfnPipeline.StageTransitionProperty = {
  reason: 'reason',
  stageName: 'stageName',
};
primaryDeployStage.addPropertyOverride('DisableInboundStageTransitions', stageTransitionProperty);
but it fails with a "no such method addOverride" error.
As of CDK v2.1, the codepipeline.Pipeline class does not expose this property, but the Level1 CfnPipeline class it builds on does (github issue).
Option 1: Quick and dirty workaround: reach into codepipeline.Pipeline's implementation to get a reference to its CfnPipeline (this is the approach you tried):
// pipeline is a codepipeline.Pipeline
// DANGER - 'Resource' is the CfnPipeline construct's id, assigned in the Pipeline's constructor implementation
const cfnPipeline = pipeline.node.findChild('Resource') as codepipeline.CfnPipeline;
cfnPipeline.addPropertyOverride('DisableInboundStageTransitions', [
  {
    StageName: 'Stage2',
    Reason: 'No particular reason',
  },
]);
Option 2: instantiate a Level1 CfnPipeline, which accepts a disableInboundStageTransitions prop:
new codepipeline.CfnPipeline(this, 'Pipeline', {
  // ...the rest of the CfnPipelineProps...
  disableInboundStageTransitions: [{
    reason: 'reason',
    stageName: 'stageName',
  }],
});
Edit: Explain that Resource is the name of the CfnPipeline child node
We disable stage transitions by passing stage names to an L1 CfnPipeline. Approach #2 does this directly by creating one.
But we'd rather use an L2 Pipeline, because it's easier. This is Approach #1, the one you are taking. Luckily for us, our pipeline has a CfnPipeline child node named 'Resource'. How do we know this? We look at the Pipeline constructor's source code on github.
Once we have a reference to the CfnPipeline using pipeline.node.findChild('Resource'), we add the disabled stages to it as a property override, in the same {StageName, Reason} format as in #2.

Pass CDK context values per deployment environment

I am using context to pass values to CDK. Is there currently a way to define a project context file per deployment environment (dev, test), so that as the number of values I have to pass grows, they will be easier to manage than passing them on the command line:
cdk synth --context bucketName1=my-dev-bucket1 --context bucketName2=my-dev-bucket2 MyStack
It would be possible to use one cdk.json context file and only pass the environment as a context value on the command line, selecting the correct values depending on it:
{
  ...
  "context": {
    "devBucketName1": "my-dev-bucket1",
    "devBucketName2": "my-dev-bucket2",
    "testBucketName1": "my-test-bucket1",
    "testBucketName2": "my-test-bucket2"
  }
}
But preferably, I would like to split it into separate files, e.g. cdk.dev.json and cdk.test.json, which would contain their corresponding values, and use the correct one depending on the environment.
According to the documentation, CDK will look for context in one of several places. However, there's no mention of defining multiple/additional files.
The best solution I've been able to come up with is to use JSON to separate context out per environment:
"context": {
"dev": {
"bucketName": "my-dev-bucket"
}
"prod": {
"bucketName": "my-prod-bucket"
}
}
This allows you to access the different values programmatically depending on which environment CDK is deploying to.
const myEnv = "dev" // This could be passed in as a property of the class instead and accessed via props.myEnv
const myBucket = new s3.Bucket(this, "MyBucket", {
  bucketName: app.node.tryGetContext(myEnv).bucketName
})
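Rather than hardcoding it, myEnv could itself come from the command line, e.g. cdk deploy -c env=dev. A sketch, where the env context key is just an example name:

// Hypothetical: pick the environment block from the command line, defaulting to 'dev'
const myEnv = app.node.tryGetContext('env') ?? 'dev';
const envConfig = app.node.tryGetContext(myEnv); // e.g. { bucketName: 'my-dev-bucket' }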
You can also do this programmatically in your code.
For instance, I have a context variable of deploy_tag: cdk deploy Stack\* -c deploy_tag=PROD
Then in my code, I retrieve that deploy_tag variable and make the decisions there, such as (using Python, but the idea is the same):
bucket_name = BUCKET_NAME_PROD if deploy_tag == 'PROD' else BUCKET_NAME_DEV
This can give you a lot more control, and if you set up a constants file in your code you can keep that up to date with far less in your cdk.json, which may become very cluttered with larger stacks and multiple environments. If you go this route you can have your Prod and Dev constants files, and your context variable can inform your CDK which file to load for a given deployment.
I also tend to create a new class object with all my deployment properties, either assigned or derived, and pass that object into each stack, retrieving what I need out of there, as sketched below.
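A minimal TypeScript sketch of that last pattern, with invented names throughout (DeployProps, MyStack, the bucket names):

import * as cdk from 'aws-cdk-lib';

// Hypothetical bag of deployment properties, resolved once at synth time
interface DeployProps extends cdk.StackProps {
  deployTag: string;
  bucketName: string;
}

const app = new cdk.App();
const deployTag = app.node.tryGetContext('deploy_tag') ?? 'DEV';

const props: DeployProps = {
  deployTag,
  // constants like these could live in per-environment constants files instead
  bucketName: deployTag === 'PROD' ? 'my-prod-bucket' : 'my-dev-bucket',
};

// every stack receives the same props object and reads what it needs
new MyStack(app, 'MyStack', props);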

Enable AWS Glue Continuous Logging from create_job

I'm creating a Glue job using boto3 create_job. I was interested in passing a parameter to enable Continuous Logging (no filter) for this new job.
Unfortunately, neither here nor here can I find any useful parameter to enable it.
Any suggestions?
Found a way to do it. It's similar to the other arguments: you just need to pass the log-related arguments in DefaultArguments, like:
glueClient.create_job(
    Name="testBoto",
    Role="Role_name",
    Command={
        'Name': "some_name",
        'ScriptLocation': "some_location"
    },
    DefaultArguments={
        "--enable-continuous-cloudwatch-log": "true",
        "--enable-continuous-log-filter": "true"
    }
)