Pass CDK context values per deployment environment

I am using context to pass values to CDK. Is there currently a way to define project context file per deployment environment (dev, test) so that when the number of values that I have to pass grow, they will be easier to manage compared to passing the values in the command-line:
cdk synth --context bucketName1=my-dev-bucket1 --context bucketName2=my-dev-bucket2 MyStack
One option would be to use a single cdk.json context file, pass only the environment as a context value on the command line, and select the correct values depending on its value:
{
  ...
  "context": {
    "devBucketName1": "my-dev-bucket1",
    "devBucketName2": "my-dev-bucket2",
    "testBucketName1": "my-test-bucket1",
    "testBucketName2": "my-test-bucket2"
  }
}
But preferably, I would like to split it into separate files, e.g. cdk.dev.json and cdk.test.json, each containing the values for its environment, and use the correct one depending on the environment.

According to the documentation, CDK will look for context in one of several places. However, there's no mention of defining multiple/additional files.
The best solution I've been able to come up with is to make use of JSON to separate context out per environment:
"context": {
"dev": {
"bucketName": "my-dev-bucket"
}
"prod": {
"bucketName": "my-prod-bucket"
}
}
This allows you to access the different values programmatically depending on which environment CDK is deploying to.
let myEnv = "dev"; // This could be passed in as a property of the class instead and accessed via props.myEnv
const myBucket = new s3.Bucket(this, "MyBucket", {
  bucketName: this.node.tryGetContext(myEnv).bucketName
});

You can also handle this programmatically in your code.
For instance, I have a context variable of deploy_tag: cdk deploy Stack\* -c deploy_tag=PROD
Then in my code, I retrieve that deploy_tag variable and make the decisions there, such as (using Python, but the idea is the same):
bucket_name = BUCKET_NAME_PROD if deploy_tag == 'PROD' else BUCKET_NAME_DEV
This can give you a lot more control, and if you set up a constants file in your code you can keep that up to date with far less in your cdk.json, which can become very cluttered with larger stacks and multiple environments. If you go this route, you can have separate Prod and Dev constants files, and your context variable can tell the CDK which one to load for a given deployment.
I also tend to create a new class object with all my deployment properties either assigned or derived, and pass that object into each stack, retrieving what I need from there.
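A minimal sketch of that constants-file pattern in Python. The module names (constants_dev, constants_prod, my_stacks) and the BUCKET_NAME attribute are illustrative assumptions, not from the original answer:

# app.py -- pick a constants module based on the deploy_tag context variable
import aws_cdk as cdk

app = cdk.App()

# cdk deploy Stack\* -c deploy_tag=PROD  (falls back to DEV when the flag is absent)
deploy_tag = app.node.try_get_context("deploy_tag") or "DEV"

if deploy_tag == "PROD":
    import constants_prod as constants  # hypothetical: BUCKET_NAME = "my-prod-bucket"
else:
    import constants_dev as constants   # hypothetical: BUCKET_NAME = "my-dev-bucket"

from my_stacks import MyStack  # hypothetical stack module that accepts bucket_name

MyStack(app, "MyStack", bucket_name=constants.BUCKET_NAME)
app.synth()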

Related

Using the CfnOutput created inside a LambdaRestApi in AWS CDK

I'm creating a LambdaRestApi as follows:
this.gateway = new apigw.LambdaRestApi(this, "Endpoint", {
  handler: hello,
  endpointExportName: "MainURL"
})
and I'd like to get the CfnOutput it generates. Is that possible? I want to pass it to other functions, and I want to avoid creating a new one.
Specifically, the situation I'm tackling is this: I have a post stage that verifies things are working, and it uses the CfnOutput:
deployStage.addPost(
  new CodeBuildStep("VerifyAPIGatewayEndpoint", {
    envFromCfnOutputs: {
      ENDPOINT_URL: deploy.hcEndpoint
    },
    commands: [
      "curl -Ssf $ENDPOINT_URL",
      "curl -Ssf $ENDPOINT_URL/hello",
      "curl -Ssf $ENDPOINT_URL/test"
    ]
  })
);
That deploy.hcEndpoint is a CfnOutput that I'm manually creating after the LambdaRestApi is created:
const gateway = new LambdaRestApi(this, "Endpoint", {handler: hello})
this.hcEndpoint = new CfnOutput(this, "GatewayUrl", {value: gateway.url})
and then making sure that every construct makes it available to its parent.
Using CfnOutputs in the post-deployment step makes sense. I am trying to learn the proper way of doing things, and also to keep my stacks clean. With only one Lambda function it's no big deal, but with tens or hundreds it could be. And since LambdaRestApi already creates the output, it feels like I'm repeating myself by creating an identical one.
Assuming you are using the following code for your LambdaRestApi:
this.gateway = new apigw.LambdaRestApi(this, "Endpoint", {
  handler: hello,
  endpointExportName: "MainURL"
});
Referencing in same stack as LambdaRestApi
const outputValue = this.gateway.urlForPath("/");
Looking at the source code, the output value is just a call to urlForPath. The method is public, so you can use it directly.
Referencing from another stack
You can use cross stack references to get a reference to the output value of the stack.
import { Fn } from 'aws-cdk-lib';
const outputValue = Fn.importValue("MainURL");
If you try to use the first method in another stack, CDK will just generate a cross stack reference dynamically by adding extra outputs, so it is better to import the value directly.
I'd like to get to the CfnOutput it generates, is it possible?
Yes. Use the escape hatch syntax to get a reference to the CfnOutput that RestApi creates for the endpointExportName:
const urlCfnOutput = this.gateway.node.findChild('Endpoint') as cdk.CfnOutput;
console.log(urlCfnOutput.exportName);
// MainURL
console.log(urlCfnOutput.value);
// https://${Token[TOKEN.258]}.execute-api.us-east-1.${Token[AWS.URLSuffix.3]}/${Token[TOKEN.277]}/
Prefer standard CDK
As their name suggests, "escape hatches" are for "emergencies" when the CDK's standard solutions fail. Your use case may be one such instance, I don't know. But as @Kaustubh Khavnekar points out, you don't need the CfnOutput to get the url token value.
console.log(this.gateway.url)
// https://${Token[TOKEN.258]}.execute-api.us-east-1.${Token[AWS.URLSuffix.3]}/${Token[TOKEN.277]}/

How can I fix: Terraform error refreshing state: state snapshot was created by Terraform v0.14.5, which is newer than current v0.13.0

I am trying to upgrade my Terraform version from 0.12 to 0.13, but had previously run init and plan with a globally installed Terraform 0.14.5.
I'm struggling to understand how this affects the snapshot and how I can remove this error. The remote state hasn't changed, so where is it getting this from? I have removed any .terraform directories in the project.
Terraform holds its state either in a remote backend or in a local one.
If you have no block that looks like the following in your configuration files (the backend type, and therefore the name in "...", may vary):
terraform {
  backend "..." {
  }
}
Then it is safe to assume you have a local JSON state file named terraform.tfstate and, since your project existed before the upgrade, a file terraform.tfstate.backup.
If you peek into those files, you will see the version of Terraform that created the state near the beginning of the file.
For example:
{
  "version": 4,
  "terraform_version": "0.14.5",
  ...
}
From there, and with all the caution in the world, ensuring you indeed didn't change anything in the remote state, you have some options:
if your terraform.tfstate.backup still has "terraform_version": "0.13.0", you can simply roll back by removing terraform.tfstate and renaming terraform.tfstate.backup to terraform.tfstate
you can try to "hack" the actual terraform.tfstate and change the version there by editing the line "terraform_version": "0.14.5"
As advised in the link below, you could create a state version using the API, overriding the state by manually specifying the expected terraform_version
My advice would still be to diff terraform.tfstate against terraform.tfstate.backup to see what may have changed, or to use a versioning tool if your terraform.tfstate is under version control.
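If you want to check quickly which version wrote each file before touching anything, a small script along these lines can help (a sketch, assuming local state files in the current directory):

# check_state_version.py -- print the Terraform version recorded in local state files
import json
from pathlib import Path

for name in ("terraform.tfstate", "terraform.tfstate.backup"):
    path = Path(name)
    if path.exists():
        state = json.loads(path.read_text())
        print(f"{name}: terraform_version={state.get('terraform_version')}")
    else:
        print(f"{name}: not found")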
Useful read: https://support.hashicorp.com/hc/en-us/articles/360001147287-Downgrading-Terraform-Version-in-the-State

AWS-CDK Resources

I'm using the CDK to create KMS keys (and other resources, for that matter) for my project and want to ensure I'm handling the resources properly.
During development I might deploy, do some development work, then issue a cdk destroy to clean up the project, as I know I won't be back to it for some days.
If I don't wrap the code in an import, I find duplicate keys being created, or for some resources like DynamoDB the deploy fails because the resource already exists:
try {
  const keyRef = kms.Alias.fromAliasName(this, 'SomeKey', 'SomeKey');
} catch {
  const keyRef = new kms.Key(this, 'SomeKey', {
    description: 'Some descriptive text',
    enableKeyRotation: true,
    trustAccountIdentities: true
  });
  keyRef.grantEncryptDecrypt(lambdaFunc);
}
Can anyone suggest a better way of handling this or is this expected?
While developing my projects I don't like to leave resources in play until the solution is at least at Alpha stage.
When creating a KMS key, you can define a RemovalPolicy:
The default value is RETAIN, meaning the KMS key will stay in your account even after you delete your stack. This is useful for production environments, where you would normally want to keep keys that might be used by resources outside your stack.
In your dev environment you can set it to DESTROY and it will be deleted with your stack.
You should capture this logic in your code. Something like:
const keyRef = new kms.Key(this, 'SomeKey', {
  description: 'Some descriptive text',
  enableKeyRotation: true,
  trustAccountIdentities: true,
  // define a method to check if it's a dev environment
  // and set removalPolicy accordingly
  removalPolicy: isDevEnv() ? cdk.RemovalPolicy.DESTROY : cdk.RemovalPolicy.RETAIN,
});

Enable AWS Glue Continuous Logging from create_job

I'm creating a Glue job using boto3 create_job. I'd like to pass a parameter to enable Continuous Logging (no filter) for this new job.
Unfortunately, neither here nor here can I find any useful parameter to enable it.
Any suggestions?
Found a way to do it. It's similar to other arguments; you just need to pass the log-related arguments in DefaultArguments, like so:
glueClient.create_job(
    Name="testBoto",
    Role="Role_name",
    Command={
        'Name': "some_name",
        'ScriptLocation': "some_location"
    },
    DefaultArguments={
        # enable continuous logging to CloudWatch
        "--enable-continuous-cloudwatch-log": "true",
        # "true" applies the standard filter; use "false" for no filter
        "--enable-continuous-log-filter": "true"
    }
)
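If you would rather not bake these flags into the job definition, the same arguments can also be passed per run through start_job_run. A sketch, carrying over the job name and client from the answer above:

import boto3

glueClient = boto3.client("glue")

# Supply or override the continuous-logging arguments for a single run
glueClient.start_job_run(
    JobName="testBoto",
    Arguments={
        "--enable-continuous-cloudwatch-log": "true",
        "--enable-continuous-log-filter": "false"  # "false" = no filter
    }
)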

Set or modify an AWS Lambda environment variable with Python boto3

I want to set or modify an environment variable in my Lambda script.
I need to save a value for the next call of my script.
For example, I create an environment variable in the AWS Lambda console and leave the value unset. After that I try this:
import boto3
import os

if os.environ['ENV_VAR']:
    print(os.environ['ENV_VAR'])

os.environ['ENV_VAR'] = "new value"
In this case my value will never print.
I tried with os.putenv(), but it's the same result.
Do you know why this environment variable is not set? Thank you!
Consider using the boto3 Lambda command update_function_configuration to update the environment variable.
import boto3

client = boto3.client('lambda')

response = client.update_function_configuration(
    FunctionName='test-env-var',
    Environment={
        'Variables': {
            'env_var': 'hello'
        }
    }
)
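One caveat worth adding (not from the original answer): Environment.Variables replaces the function's whole variable map, so to modify a single variable without dropping the others, read the current configuration and merge first. A sketch:

import boto3

client = boto3.client('lambda')

# Read the existing variables so the update doesn't discard them
config = client.get_function_configuration(FunctionName='test-env-var')
variables = config.get('Environment', {}).get('Variables', {})
variables['env_var'] = 'hello'  # change just the one key

client.update_function_configuration(
    FunctionName='test-env-var',
    Environment={'Variables': variables}
)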
I need to save a value for the next call of my script.
That's not how environment variables work, nor is it how lambda works. Environment variables cannot be set in a child process for the parent - a process can only set environment variables in its own and child process environments.
This may be confusing to you if you set environment variables at the shell, but in that case, the shell is the long running process setting and getting your environment variables, not the programs it calls.
Consider this example:
from os import environ

print(environ['A'])
environ['A'] = "Set from python"
print(environ['A'])
This will only set A for its own process. If you run it several times, the initial value of A is always the shell's value, never the value Python set.
$ export A="set from bash"
$ python t.py
set from bash
Set from python
$ python t.py
set from bash
Set from python
Further, even if that wasn't the case, it wouldn't work reliably with aws lambda. Lambda runs your code on whatever compute resources are available at the time; it will typically cache runtimes for frequently executed functions, so in these cases data could be written to the filesystem to preserve it. But if the next invocation wasn't run in that runtime, your data would be lost.
For your needs, you want to preserve your data outside the Lambda. Some obvious options are: write to S3, write to DynamoDB, or write to SQS. The next invocation would read from that location, achieving the desired result.
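A minimal sketch of the DynamoDB option, assuming a hypothetical table named lambda-state with a string partition key "name" (the table and helper names are illustrative):

import boto3

dynamodb = boto3.client('dynamodb')
TABLE = 'lambda-state'  # hypothetical table, partition key "name"

def save_value(name, value):
    # Persist a value for the next invocation to read
    dynamodb.put_item(
        TableName=TABLE,
        Item={'name': {'S': name}, 'value': {'S': value}}
    )

def load_value(name, default=None):
    # Fetch the value written by a previous invocation
    resp = dynamodb.get_item(TableName=TABLE, Key={'name': {'S': name}})
    item = resp.get('Item')
    return item['value']['S'] if item else default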
AWS Lambda just executes the piece of code with a given set of inputs. Once executed, it returns the output and that's all. If you want to preserve the output for your next call, you probably need to store it in a DB or a queue, as Dan said. I personally use SQS in conjunction with SNS, which sends me notifications about the current state. You can even store the end result, like success or failure, in SQS and use it for the next trigger. Just throwing the options out here; the rest depends on your requirements.