Best Method to Manipulate a CfnParameter String in aws-cdk

I'm trying in vain to do this. Here is the scenario:
We are creating a CloudFormation stack that generates a CodePipeline, which pulls another stack definition from git and deploys it using the CloudFormationCreateUpdateStackAction.
The repo, branch, etc. are provided as CfnParameters, and we would like to base the resulting stack name on a concatenation of the repo name and branch name.
However, in some cases the repo name might contain an underscore or another special character that is invalid in a stackName.
I've attempted to manipulate the parameter strings using macros, but I didn't get anywhere near something useful, and running .replace() on the repoStr.valueAsString property modifies the CDK's "Token" pointer, not the resulting parameter, which is only resolved at deploy time.
Any ideas?

There are two options I can see (a sketch of both follows after the links below):
First is to read the actual parameter value at synthesis time; then you can use .replace() on the real string value. https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-ssm.StringParameter.html#static-value-wbr-from-wbr-lookupscope-parametername
Second is to use the CloudFormation intrinsic functions to split the parameter and join the fragments back in an allowed manner. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-split.html https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-join.html
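A minimal CDK TypeScript sketch of both options (the parameter name, the SSM path, and the underscore-to-hyphen substitution are assumptions for illustration):

import { Stack, CfnParameter, Fn } from 'aws-cdk-lib';
import * as ssm from 'aws-cdk-lib/aws-ssm';
import { Construct } from 'constructs';

class PipelineStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // Option 1: resolve the real string at synthesis time from SSM, after
    // which plain JavaScript string methods behave as expected.
    const repoFromSsm = ssm.StringParameter.valueFromLookup(this, '/ci/repo-name');
    const safeNameSynth = repoFromSsm.replace(/_/g, '-');

    // Option 2: keep the CfnParameter and sanitize it at deploy time with
    // intrinsics. Fn.split/Fn.join operate on the deploy-time value, unlike
    // .replace(), which only mangles the CDK token.
    const repoStr = new CfnParameter(this, 'RepoName', { type: 'String' });
    const safeNameDeploy = Fn.join('-', Fn.split('_', repoStr.valueAsString));
  }
}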

Related

AWS CDK conditional ImportValue

I'm importing an ARN from another stack with the cdk.Fn.importValue method. This works fine if I know that the output value is always present, but I don't know how to handle the case when the value I try to import is optional.
How can I get something similar to: (checking if the value exists before importing it)
if (value exists) {
  cdk.Fn.importValue("value")
}
AFAIK there is currently no way in CDK to perform a lookup of CloudFormation exports at synthesis time.
If you don't want to fiddle around with performing CloudFormation API calls with the aws-sdk before creating the CDK stack, in my opinion the most elegant way to share conditional values between stacks is to use SSM parameters instead of CloudFormation exports.
SSM parameters can be looked up at synthesis time. See docs: https://docs.aws.amazon.com/cdk/v2/guide/get_ssm_value.html
So with StringParameter.valueFromLookup you can use the value only if it exists (IIRC the method throws an error if the parameter doesn't exist, so try-catch is your friend here, though I'm not 100% sure).
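A sketch of that approach (the parameter name is an assumption, and as noted above I'm not certain the lookup fails by throwing):

import { Stack } from 'aws-cdk-lib';
import * as ssm from 'aws-cdk-lib/aws-ssm';

// Returns the shared value if the SSM parameter exists, otherwise undefined.
function lookupOptionalValue(scope: Stack): string | undefined {
  try {
    // Resolved at synthesis time via the CDK context provider.
    return ssm.StringParameter.valueFromLookup(scope, '/shared/optional-arn');
  } catch {
    // Assumed failure mode: the lookup errors when the parameter is missing.
    return undefined;
  }
}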

Where do I tell AWS SAM which file to chose depending on the stage/environment?

In the app.js, I want to require a different "config" file depending on the stage/account.
for example:
dev account: const config = require("config-dev.json")
prod account: const config = require("config-prod.json")
At first I tried passing it using build --container-env-var-file, but after getting undefined when using process.env.myVar, I think that env file is used at the build stage and has nothing to do with my function, though I could use it in the template creation stage.
So I'm now looking at deploy, and there are a few different things that seem relevant, but it's quite confusing to choose which one applies to my use case.
There is the config file, but I have no idea how to configure it since I'm in a pipeline context, so where would I instruct my process to use the correct JSON?
There are also parameters and mappings.
My JSON is not just a few vars; it's a somewhat complex object. Nothing crazy, but not simple enough to pass the vars one by one.
So I thought a single parameter containing the filename I want to use could do the job.
But I have no idea how to tell which stage of deployment I'm currently in, or how to pass that value so I can access it from the Lambda function.
I also faced this issue while executing an AWS Lambda function locally. Running the sam build command solved it for me, so try configuring your file as part of the sam build step.
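One pattern that can work here (a sketch, assuming you surface the stage to the function as an environment variable, e.g. a STAGE variable defined in the template's function environment and overridden per deployment):

// Hypothetical app.js: pick the config file from a STAGE environment
// variable that the template is assumed to set ("dev", "prod", ...).
const stage = process.env.STAGE ?? 'dev';

// Resolves to ./config-dev.json or ./config-prod.json at cold start.
const config = require(`./config-${stage}.json`);

exports.handler = async () => {
  console.log(`Running with ${stage} config`, config);
};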

How to use optional Parameter Store parameters in a CloudFormation template?

I'd like to make CloudFormation look in Parameter Store for a particular parameter - and if not found, look for a different one.
The actual use case is that stacks are deployed for different branches and we'd like to have top-level parameters for all branches that can optionally be overridden by creating a branch-specific parameter. Something like this:
dev-param = 120 <-- top-level, applies if branch-specific parameter doesn't exist
dev-param.mybranch = 60 <-- branch-specific parameter
I have tried a couple of ways, but got an error when deploying the stack in both cases - see below.
When attempting to use dynamic references:
Parameters: [ssm:dev-param.mybranch] cannot be found.
When attempting to use CloudFormation SSM parameter types:
Template format error: Every Default member must be a string.
For the latter, the Default: field specifies the Parameter Store key name. This needs to be generated dynamically from other CloudFormation parameters, e.g. there is a parameter for the environment type so development key names begin with dev- and production keys begin with prod-.
Is there another way to achieve this?
You can't do that without a custom resource or a macro.
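For the custom-resource route, the handler's core logic could look roughly like this (a sketch; the parameter names and the branch-suffix convention come from the question, and the CloudFormation response plumbing is omitted):

import { SSMClient, GetParameterCommand, ParameterNotFound } from '@aws-sdk/client-ssm';

const ssm = new SSMClient({});

// Try the branch-specific override first, then fall back to the top-level key.
async function resolveParam(name: string, branch: string): Promise<string> {
  try {
    const res = await ssm.send(new GetParameterCommand({ Name: `${name}.${branch}` }));
    return res.Parameter!.Value!;
  } catch (err) {
    if (!(err instanceof ParameterNotFound)) throw err;
    const res = await ssm.send(new GetParameterCommand({ Name: name }));
    return res.Parameter!.Value!;
  }
}

// e.g. resolveParam('dev-param', 'mybranch') -> 60 if the override exists,
// otherwise the top-level value 120.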

Difference between an Output & an Export

In CloudFormation we have the ability to output some values from a template so that they can be retrieved by other processes, stacks, etc. This is typically the name of something, maybe a URL or something generated during stack creation (deployment), etc.
We also have the ability to 'export' from a template. What is the difference between returning a value as an 'output' vs as an 'export'?
Regular output values can't be referenced from other stacks. They can be useful when you chain or nest your stacks, and their scope/visibility is local. Exported outputs are visible globally within an account and region, and can be used by any future stack you deploy.
Chaining
When you chain your stacks, you deploy one stack, take its outputs, and use them as input parameters to the second stack you deploy (a programmatic sketch follows after the steps below).
For example, let's say you have two templates called instance.yaml and eip.yaml. instance.yaml outputs its instance-id (no export), while eip.yaml takes the instance id as an input parameter.
To deploy them both, you have to chain them:
Deploy instance.yaml and wait for its completion.
Note its output values (i.e. instance-id) - usually done programmatically, not manually.
Deploy eip.yaml and pass instance-id as its input parameter.
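A sketch of those steps with the AWS SDK (stack names, the output key, and the template location are illustrative):

import {
  CloudFormationClient,
  DescribeStacksCommand,
  CreateStackCommand,
} from '@aws-sdk/client-cloudformation';

const cfn = new CloudFormationClient({});

async function chainStacks() {
  // Step 2: read the instance stack's output values.
  const res = await cfn.send(new DescribeStacksCommand({ StackName: 'instance' }));
  const instanceId = res.Stacks?.[0].Outputs?.find(
    (o) => o.OutputKey === 'InstanceId',
  )?.OutputValue;

  // Step 3: pass instance-id to eip.yaml as an input parameter.
  await cfn.send(new CreateStackCommand({
    StackName: 'eip',
    TemplateURL: 'https://example-bucket.s3.amazonaws.com/eip.yaml', // assumed location
    Parameters: [{ ParameterKey: 'InstanceId', ParameterValue: instanceId }],
  }));
}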
Nesting
When you nest stacks, you have a parent template and a child template. The child stack is created from inside the parent stack. In this case, the child stack produces some outputs (not exports) for the parent stack to use (see the sketch after the steps below).
For example, let's again use instance.yaml and eip.yaml, but this time eip.yaml will be the parent and instance.yaml will be the child. Also, eip.yaml does not take any input parameters, while instance.yaml outputs its instance-id (not an export).
In this case, to deploy them you do the following:
Upload the child template (instance.yaml) to S3.
In eip.yaml, create the child instance stack using AWS::CloudFormation::Stack and the S3 URL from step 1.
This way eip.yaml will be able to access the instance-id from the outputs of the nested stack using GetAtt.
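In CDK TypeScript terms, the same nesting pattern looks roughly like this (names and the S3 URL are illustrative):

import { Stack, CfnOutput } from 'aws-cdk-lib';
import { CfnStack } from 'aws-cdk-lib/aws-cloudformation';
import { Construct } from 'constructs';

class EipStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // The child instance stack, created from the template uploaded in step 1.
    const child = new CfnStack(this, 'InstanceChild', {
      templateUrl: 'https://example-bucket.s3.amazonaws.com/instance.yaml', // assumed
    });

    // GetAtt on the nested stack surfaces its (non-exported) output.
    new CfnOutput(this, 'ChildInstanceId', {
      value: child.getAtt('Outputs.InstanceId').toString(),
    });
  }
}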
Cross-referencing
When you cross-reference stacks, you have one stack that exports its outputs so that they can be used by any other stack in the same region and account.
For example, let's again use instance.yaml and eip.yaml. instance.yaml is going to export its output (instance-id). To use the instance-id, eip.yaml uses ImportValue in its template, without the need for any input parameters or nested stacks.
In this case, to deploy them you do the following:
Deploy instance.yaml and wait till it completes.
Deploy eip.yaml which will import the instance-id.
Although cross-referencing seems very useful, it has one major issue: it's very difficult to update or delete cross-referenced stacks:
After another stack imports an output value, you can't delete the stack that is exporting the output value or modify the exported output value. All of the imports must be removed before you can delete the exporting stack or modify the output value.
This is very problematic if you are starting your design and your templates can change often.
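For comparison, the same cross-referencing pattern in CDK TypeScript (the export name and the placeholder value are illustrative):

import { App, Stack, CfnOutput, Fn } from 'aws-cdk-lib';

const app = new App();

// Producer: the instance stack exports its instance id. The exportName is
// what turns a plain Output into a global Export.
const instanceStack = new Stack(app, 'InstanceStack');
new CfnOutput(instanceStack, 'InstanceId', {
  value: 'i-0123456789abcdef0', // stand-in for the real instance-id token
  exportName: 'instance-id',
});

// Consumer: the eip stack imports the value by export name; no input
// parameters and no nesting required.
const eipStack = new Stack(app, 'EipStack');
const importedId = Fn.importValue('instance-id');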
When to use which?
Use cross-references (exported values) when you have some global resources that are going to be shared among many stacks in a given region and account. They should also not change often, as they are difficult to modify. Common examples: a global bucket for a centralized logging location, a VPC.
Use nested stacks (not exported outputs) when you have common components that you deploy often, but which can differ a bit each time. Examples: an ALB, a bastion host instance, a VPC interface endpoint.
Finally, chained stacks (not exported outputs) are useful for designing loosely-coupled templates, where you can mix and match templates based on new requirements.
Short answer from here: use exports between stacks, and use outputs with nested stacks.
Export
To share information between stacks, export a stack's output values. Other stacks that are in the same AWS account and region can import the exported values.
Output
With nested stacks, you deploy and manage all resources from a single stack. You can use outputs from one stack in the nested stack group as inputs to another stack in the group. This differs from exporting values.

Intermediate CloudFormation template with values filled from a dynamic CFT

I'd like to know whether CloudFormation APIs or CloudTrail logs can provide the intermediate CFT before or while the resources are created. By intermediate CFT(s) I mean the following: a CFT can be dynamic, with parameters/conditions/mappings/functions that are evaluated at run time. I would like to know whether CloudFormation can generate the processed CFT (with all the rules, input parameters, and functions resolved), so that it looks static to the resource-creation process. This would really help us validate the real CFT that is going to be executed, with all the values replaced. I'm just looking for another CFT API, something like:
String staticCFT = cftClient.getActualCFT("cft_location\cft.json","parameters"...);
If this feature were available, it would really save time: we wouldn't have to wait until all the resources are created with wrong values because of wrong logic in the CFT.
What you can actually create is what I call a "dummy template". I use it at work as a stand-in for an actual template with real resources, which would take time to execute. The dummy template has only one resource, which does not actually do anything: I use a CustomResource to invoke a "HelloWorld" Lambda function. This gets around the restriction that a CFT must have at least one resource. The template also has a bunch of parameters, and all those parameters are simply passed straight through to the Outputs section. The execution of this template takes hardly a few seconds, and based on the parameters and outputs you can figure out whether your top-level template is passing in the expected parameter values. You can invoke the dummy template from within the top-level template.
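A sketch of such a dummy template in CDK TypeScript (all names are illustrative): a parameter is echoed straight to the outputs, and a no-op Lambda-backed custom resource satisfies the at-least-one-resource requirement.

import { App, Stack, CfnParameter, CfnOutput, CustomResource } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as cr from 'aws-cdk-lib/custom-resources';

const app = new App();
const stack = new Stack(app, 'DummyStack');

// The parameter whose resolved value you want to inspect.
const repoName = new CfnParameter(stack, 'RepoName', { type: 'String' });

// "HelloWorld" no-op function; the provider framework handles the
// CloudFormation response plumbing.
const helloWorld = new lambda.Function(stack, 'HelloWorld', {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: 'index.handler',
  code: lambda.Code.fromInline('exports.handler = async () => ({});'),
});
const provider = new cr.Provider(stack, 'Provider', { onEventHandler: helloWorld });
new CustomResource(stack, 'Dummy', { serviceToken: provider.serviceToken });

// Echo the parameter to the outputs so the resolved value is visible after
// the (fast) deployment completes.
new CfnOutput(stack, 'RepoNameResolved', { value: repoName.valueAsString });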