I'm importing an ARN from another stack with the cdk.Fn.importValue method. This works fine if I know that the output value is always present, but I don't know how to handle the case when the value I try to import is optional.
How can I get something similar to the following, checking whether the value exists before importing it?
if(value exists) {
cdk.Fn.importValue("value")
}
AFAIK there is currently no way in CDK to look up CloudFormation exports at synthesis time.
If you don't want to fiddle around with CloudFormation API calls via the aws-sdk before creating the CDK stack, in my opinion the most elegant way to share conditional values between stacks is to use SSM parameters instead of CloudFormation exports.
SSM parameters can be looked up during synthesis time. See docs: https://docs.aws.amazon.com/cdk/v2/guide/get_ssm_value.html
So with StringParameter.valueFromLookup you are able to use the value only if it exists (IIRC the method throws an error if the parameter doesn't exist, so try-catch is your friend here, but I'm not 100% sure).
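A minimal sketch of that approach, assuming the throws-on-missing behaviour described above (the parameter name /shared/optional-arn and the stack class are made up for illustration):

import * as cdk from 'aws-cdk-lib';
import * as ssm from 'aws-cdk-lib/aws-ssm';
import { Construct } from 'constructs';

export class ConsumerStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Resolved at synthesis time via the SSM context provider.
    let sharedArn: string | undefined;
    try {
      sharedArn = ssm.StringParameter.valueFromLookup(this, '/shared/optional-arn');
    } catch {
      sharedArn = undefined; // treat a missing parameter as "value not present"
    }

    if (sharedArn) {
      // ...create the resources that depend on the optional ARN...
    }
  }
}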
In the app.js, I want to require a different "config" file depending on the stage/account.
for example:
dev account: const config = require("config-dev.json")
prod account: const config = require("config-prod.json")
At first I tried passing it using sam build --container-env-var-file, but after getting undefined from process.env.myVar I think that env file is only used at the build stage and has nothing to do with my function, though I could use it in the template creation stage.
So I'm looking now at sam deploy, and there are a few different things that seem relevant, but it's quite confusing to choose which one fits my use case.
There is the config file, but I have no idea how to configure it since I'm in a pipeline context; where would I instruct my process to use the correct JSON?
There are also parameters and mappings.
My JSON is not just a few vars; it's a somewhat complex object. Nothing crazy, but not simple enough to pass the vars one by one.
So I thought a single parameter containing the filename I want to use could do the job.
But I have no idea how to tell which stage of deployment I'm currently in, or how to pass that value so the Lambda function can access it.
I also faced this issue while executing an AWS Lambda function locally. Building the project first with the sam build command solved it for me.
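For the stage-specific config itself, one pattern worth sketching (the STAGE variable and file layout are assumptions, not official SAM guidance) is to set an environment variable on the function in template.yaml, e.g. driven by a template parameter per account, and pick the file at runtime:

// handler file -- STAGE is assumed to be set under the function's
// Environment Variables in template.yaml, one value per account/stage.
const stage = process.env.STAGE ?? 'dev';

// Picks config-dev.json or config-prod.json, matching the question;
// both files must be packaged with the function.
const config = require(`./config-${stage}.json`);

exports.handler = async () => {
  console.log(`loaded config for stage ${stage}`, config);
};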
In CloudFormation we have the ability to output some values from a template so that they can be retrieved by other processes, stacks, etc. This is typically the name of something, maybe a URL or something generated during stack creation (deployment), etc.
We also have the ability to 'export' from a template. What is the difference between returning a value as an 'output' vs as an 'export'?
Regular output values can't be referenced from other stacks. They are useful when you chain or nest your stacks, and their scope/visibility is local. Exported outputs are visible globally within an account and region, and can be used by any future stack you deploy there.
Chaining
When you chain your stacks, you deploy one stack, take its outputs, and use them as input parameters to the second stack you are going to deploy.
For example, let's say you have two templates called instance.yaml and eip.yaml. instance.yaml outputs its instance-id (no export), while eip.yaml takes the instance id as an input parameter.
To deploy them both, you have to chain them:
Deploy instance.yaml and wait for its completion.
Note its output values (i.e. the instance-id); this is usually done programmatically, not manually (see the sketch after these steps).
Deploy eip.yaml and pass instance-id as its input parameter.
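A sketch of the programmatic hand-off using the AWS SDK for JavaScript v3 (the stack name instance-stack and the output key InstanceId are assumptions):

import {
  CloudFormationClient,
  DescribeStacksCommand,
} from '@aws-sdk/client-cloudformation';

// Read the first stack's outputs so the instance-id can be passed as an
// input parameter when deploying eip.yaml.
async function getInstanceId(): Promise<string | undefined> {
  const cfn = new CloudFormationClient({});
  const { Stacks } = await cfn.send(
    new DescribeStacksCommand({ StackName: 'instance-stack' })
  );
  return Stacks?.[0].Outputs?.find((o) => o.OutputKey === 'InstanceId')
    ?.OutputValue;
}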
Nesting
When you nest stacks, you have a parent template and a child template. The child stack is created from inside the parent stack, and it produces some outputs (not exports) for the parent stack to use.
For example, let's use instance.yaml and eip.yaml again, but this time eip.yaml will be the parent and instance.yaml the child. Also, eip.yaml does not take any input parameters, while instance.yaml outputs its instance-id (not an export).
In this case, to deploy them you do the following:
Upload the child template (instance.yaml) to S3.
In eip.yaml, create the child instance stack using AWS::CloudFormation::Stack and the S3 URL from step 1.
This way eip.yaml will be able to access the instance-id from the outputs of the nested stack using GetAtt.
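For CDK users, the same parent/child wiring can be sketched with NestedStack (CDK's wrapper around AWS::CloudFormation::Stack; the resource creation is elided and the names are illustrative):

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';

// Child: exposes its instance id as a plain property (a local output,
// never an export).
class InstanceStack extends cdk.NestedStack {
  public readonly instanceId: string;
  constructor(scope: Construct, id: string) {
    super(scope, id);
    // ...create the EC2 instance here...
    this.instanceId = 'i-placeholder'; // in real code: instance.instanceId
  }
}

// Parent: consumes the child's output directly.
class EipStack extends cdk.Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    const child = new InstanceStack(this, 'Instance');
    // ...attach the EIP using child.instanceId...
  }
}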
Cross-referencing
When you cross-reference stacks, you have one stack that exports its outputs so that they can be used by any other stack in the same region and account.
For example, let's use instance.yaml and eip.yaml again. instance.yaml is going to export its output (the instance-id). To use it, eip.yaml will have to use Fn::ImportValue in its template, without the need for any input parameters or nested stacks (a CDK sketch follows the deployment steps below).
In this case, to deploy them you do the following:
Deploy instance.yaml and wait till it completes.
Deploy eip.yaml which will import the instance-id.
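In CDK terms, the export/import pair can be sketched like this (the export name shared-instance-id is an assumption; each snippet lives inside the respective stack's constructor):

import * as cdk from 'aws-cdk-lib';

// In instance.yaml's stack: export the id under a well-known name.
new cdk.CfnOutput(this, 'InstanceId', {
  value: instance.instanceId, // the EC2 instance construct created above
  exportName: 'shared-instance-id',
});

// In eip.yaml's stack: import it -- no input parameters, no nesting.
const instanceId = cdk.Fn.importValue('shared-instance-id');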
Although cross-referencing seems very useful, it has one major issue: it is very difficult to update or delete cross-referenced stacks:
After another stack imports an output value, you can't delete the stack that is exporting the output value or modify the exported output value. All of the imports must be removed before you can delete the exporting stack or modify the output value.
This is very problematic if you are starting your design and your templates can change often.
When to use which?
Use cross-references (exported values) when you have some global resources that are going to be shared among many stacks in a given region and account. Also they should not change often as they are difficult to modify. Common examples are: a global bucket for centralized logging location, a VPC.
Use nested stacks (plain outputs, not exports) when you have common components that you deploy often but that differ a bit each time. Examples: an ALB, a bastion host instance, a VPC interface endpoint.
Finally, chained stacks (not exported outputs) are useful for designing loosely-coupled templates, where you can mix and match templates based on new requirements.
Short answer from here: use export between stacks, and use output with nested stacks.
Export
To share information between stacks, export a stack's output values. Other stacks that are in the same AWS account and region can import the exported values.
Output
With nested stacks, you deploy and manage all resources from a single stack. You can use outputs from one stack in the nested stack group as inputs to another stack in the group. This differs from exporting values.
I'm trying in vain to do this. Here is the scenario:
We are creating a CloudFormation stack that will generate a CodePipeline, which will pull another stack definition from git and deploy it using the CloudFormationCreateUpdateStackAction.
The repo, branch, etc. are provided as CfnParameters, and we would like to base the resulting stack name on a concatenation of the repo name and branch name.
However in some cases the repo might be named with an underscore or other special character that is invalid for a stackName.
I've attempted to manipulate the parameter strings using macros, but I didn't get anywhere near something useful, and running a .replace() on the repoStr.valueAsString property modifies the CDK's "Token" placeholder, not the resulting parameter value, which is only resolved at deploy time.
Any ideas?
There are two options I can see:
First is to read the actual parameter value at synthesis time; then you can use the .replace function on the parameter value. https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-ssm.StringParameter.html#static-value-wbr-from-wbr-lookupscope-parametername
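A sketch of this first option (the SSM parameter name /pipeline/repo-name is an assumption):

import * as ssm from 'aws-cdk-lib/aws-ssm';

// valueFromLookup resolves during synthesis, so ordinary string methods
// work on the result.
const repoName = ssm.StringParameter.valueFromLookup(this, '/pipeline/repo-name');

// Replace anything that isn't valid in a stack name with a hyphen.
const safeStackName = repoName.replace(/[^A-Za-z0-9-]/g, '-');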
Second is to use the CloudFormation intrinsic functions to split the parameter and join the fragments back together in an allowed manner. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-split.html https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-join.html
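And a sketch of the second option, handling the underscore case from the question (repoStr being the CfnParameter):

import * as cdk from 'aws-cdk-lib';

// Fn::Split / Fn::Join are evaluated by CloudFormation at deploy time,
// so they work on the unresolved parameter token.
const safeName = cdk.Fn.join('-', cdk.Fn.split('_', repoStr.valueAsString));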
Using the AWS CDK, I have a CodePipeline that produces build artifacts for 5 different Lambda functions, and then passes those artifacts as parameters to a CloudFormation template. The basic setup is the same as this example, and the CloudFormation deploy action looks basically like this:
new CloudFormationCreateUpdateStackAction({
  actionName: 'Lambda_CFN_Deploy',
  templatePath: cdkBuildOutput.atPath('LambdaStack.template.json'),
  stackName: 'LambdaDeploymentStack',
  adminPermissions: true,
  parameterOverrides: {
    ...props.lambdaCode.assign(lambdaBuildOutput.s3Location),
    // more parameter overrides here
  },
  extraInputs: [lambdaBuildOutput],
})
However, when I try to deploy, I get this error:
1 validation error detected: Value at 'pipeline.stages.3.member.actions.1.member.configuration' failed to satisfy constraint:
Map value must satisfy constraint:
[Member must have length less than or equal to 1000, Member must have length greater than or equal to 1]
The CodePipeline documentation specifies that values in the Configuration property of the ActionDeclaration can be up to 1000 characters. If I look at the YAML output from cdk synth, the ParameterOverrides property comes out to 1351 characters. So that's a problem.
How can I work around this issue? I may need to add more Lambda functions in the future, so this problem will only get worse. Part of the problem is that the CDK code inserts 'LambdaSourceBucketNameParameter' and 'LambdaSourceObjectKeyParameter' in each bucket/object pair name in the configuration output, putting me at 61 * 5 = 305 characters lost just to being verbose. Could I get part of the way there by overriding those generated names?
I got some assistance from a CDK maintainer here, which let me get well under the 1000-character limit. Reproducing the workaround here:
LambdaSourceBucketNameParameter and LambdaSourceObjectKeyParameter are just the default parameter names. You can create your own:
import { CfnParameter } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';

lambda.Code.fromCfnParameters({
  bucketNameParam: new CfnParameter(this, 'A'),
  objectKeyParam: new CfnParameter(this, 'B'),
});
You can also name Artifacts explicitly, thus saving a lot of characters over the defaults:
const sourceOutput = new codepipeline.Artifact('S');
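Putting the two tips together, a condensed sketch (a fragment assumed to live inside a Stack; the short names 'B', 'LB', and 'LK' are arbitrary):

import { CfnParameter } from 'aws-cdk-lib';
import * as codepipeline from 'aws-cdk-lib/aws-codepipeline';
import * as lambda from 'aws-cdk-lib/aws-lambda';

// Short artifact and parameter names keep the rendered ParameterOverrides
// well under the 1000-character configuration limit.
const lambdaBuildOutput = new codepipeline.Artifact('B');
const lambdaCode = lambda.Code.fromCfnParameters({
  bucketNameParam: new CfnParameter(this, 'LB'),
  objectKeyParam: new CfnParameter(this, 'LK'),
});

// Later, in the deploy action:
//   parameterOverrides: { ...lambdaCode.assign(lambdaBuildOutput.s3Location) }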
EDIT 10-Jan-2020
I finally got a response from AWS Support regarding the issue:
I've queried the CodePipeline team and searched through the current development workflows and couldn't find any current activity related to increasing the limit for parameters passed to a CloudFormation stack, or any alternative method for this action, so we have issued a feature request based on your request for our development team.
I'm not able to provide an estimated time for this feature to be available, but you can follow the release on new features through the CloudFormation and CodePipeline official pages to check when the new feature will be available.
So for now, it looks like the CfnParameter workaround is the best option.
I would like to know if the CloudFormation (CFT) APIs or CloudTrail logs can provide an intermediate CFT before or while the resources are being created. By "intermediate CFT" I mean this: a CFT can be dynamic, with parameters, conditions, mappings, and functions that are evaluated at run time. I would like to know whether CloudFormation can generate the processed CFT (with all the rules, input parameters, and functions already evaluated), so that it looks static, before the resource creation process starts. This would really help us validate the real CFT that is going to be executed, with all the values replaced. I'm looking for another CFT API, something like:
String staticCFT = cftClient.getActualCFT("cft_location\cft.json","parameters"...);
If this feature is available, it would really save time; we wouldn't have to wait until all the resources are created with wrong values because of wrong logic in the CFT.
What you can actually create is what I call a "dummy template". I use it at work as a stand-in for an actual template with real resources, which would take time to execute.

The dummy template has only one resource, which does not actually do anything: a CustomResource that invokes a "HelloWorld" Lambda function. This gets around the restriction that a CFT must have at least one resource. The template also has a bunch of parameters, and all those parameters are simply piped directly to the Outputs section.

The execution of this template hardly takes a few seconds, and based on the parameters and outputs you can figure out whether your top-level template is passing in the expected parameter values. You can invoke the dummy template from within the top-level template.
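A sketch of such a dummy template, written here as a TypeScript object since JSON templates are valid CloudFormation (the parameter, output, and Lambda ARN are illustrative):

// The template echoes each parameter straight back out as an output.
const dummyTemplate = {
  Parameters: {
    InstanceType: { Type: 'String' },
  },
  Resources: {
    // A template must declare at least one resource; this custom resource
    // just invokes a pre-existing no-op "HelloWorld" Lambda (ARN assumed).
    NoOp: {
      Type: 'AWS::CloudFormation::CustomResource',
      Properties: {
        ServiceToken: 'arn:aws:lambda:us-east-1:123456789012:function:HelloWorld',
      },
    },
  },
  Outputs: {
    ResolvedInstanceType: { Value: { Ref: 'InstanceType' } },
  },
};

You would upload JSON.stringify(dummyTemplate) in place of the real child template while testing, then inspect the stack's outputs to verify the parameter wiring.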