I'd like to know if the CloudFormation APIs, or logs from CloudTrail, can provide any intermediate CFT before or while the resources are created. By intermediate CFT(s) I mean the following: a CFT can be dynamic, in the form of parameters/conditions/mappings/functions that are evaluated at run time. I'd like to know whether CloudFormation can generate the processed CFT (with all rules, input parameters, and functions resolved), so that it looks static for the resource creation process. This would really help us validate the real CFT that is going to be executed, with all the values replaced. I'm just looking for another CFT API, something like:
String staticCFT = cftClient.getActualCFT("cft_location\\cft.json", "parameters", ...);
If this feature is available, it would save real time: we wouldn't have to wait until all the resources are created with wrong values because of wrong logic in the CFT.
What you can actually create is what I call a "dummy template". I use it at work as a stand-in for an actual template whose real resources would take time to create. The dummy template has only one resource, which does not actually do anything: I use a CustomResource to invoke a "HelloWorld" Lambda function, purely to get around the restriction that a CFT must have at least one resource. The template also has a bunch of parameters, and all of those parameters are simply passed straight through to the Outputs section. Executing this template takes only a few seconds, and based on the parameters and outputs you can figure out whether your top-level template is passing in the expected parameter values. You can invoke the dummy template from within the top-level template; a sketch follows below.
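A minimal sketch of such a dummy template, assuming a no-op "HelloWorld" Lambda already exists and its ARN is passed in (the parameter and resource names here are illustrative):

AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  EchoFunctionArn:   # ARN of the pre-existing no-op Lambda (assumption)
    Type: String
  ValueToCheck:      # whatever value the top-level template passes in
    Type: String
Resources:
  NoOpResource:      # satisfies the "at least one resource" rule
    Type: AWS::CloudFormation::CustomResource
    Properties:
      ServiceToken: !Ref EchoFunctionArn
Outputs:
  ValueToCheck:      # echoed back so you can inspect what was received
    Value: !Ref ValueToCheck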
I'm importing an ARN from another stack with the cdk.Fn.importValue method. This works fine if I know that the output value is always present, but I don't know how to handle the case when the value I try to import is optional.
How can I get something similar to the following (checking whether the value exists before importing it)?
if (value exists) {
  cdk.Fn.importValue("value")
}
AFAIK there is currently no way in CDK to look up a CloudFormation export at synthesis time.
If you don't want to fiddle around with CloudFormation API calls via the aws-sdk before creating the CDK stack, in my opinion the most elegant way to share conditional values between stacks is to use SSM parameters instead of CloudFormation exports.
SSM parameters can be looked up during synthesis time. See docs: https://docs.aws.amazon.com/cdk/v2/guide/get_ssm_value.html
So, with StringParameter.valueFromLookup you are then able to use the value only if it exists (IIRC the method throws an error if the parameter doesn't exist, so try-catch is your friend here, but I'm not 100% sure).
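A minimal sketch of that idea, assuming the throws-on-missing behaviour described above actually holds (the parameter name /shared/optional-arn and the helper function are illustrative):

import * as cdk from 'aws-cdk-lib';
import * as ssm from 'aws-cdk-lib/aws-ssm';

// Returns the shared value if the SSM parameter exists, otherwise undefined.
function tryGetSharedArn(stack: cdk.Stack): string | undefined {
  try {
    return ssm.StringParameter.valueFromLookup(stack, '/shared/optional-arn');
  } catch {
    return undefined; // lookup failed; treat the optional value as absent
  }
}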
I am putting together a POC to see if AWS Step Functions would be a good choice to orchestrate a flow we have.
The basic idea is that we will have a flow with a number of steps, and at various points we will trigger a sub-workflow (a separate state machine). I am comfortable with how to do that.
However, we will have multiple versions of each sub-workflow (each of which would exist as a separate set of steps), and depending on where the initial request came from, we would want the flow to trigger a specific version of the sub-workflow.
I thought this might be possible by naming each version of the workflow in a way that would enable us to have a Lambda build up the ARN of the state machine to trigger based on the incoming request, store it as a variable, and pass that variable into the StateMachineArn field, something like this:
"Parameters": {
"StateMachineArn": $.arnToTrigger
But when I tried this it didn't work. Is anyone able to advise me whether what I want to do is possible? I would like to avoid using a Choice step, as there will be a lot of possibilities, and one of the requirements is to add more without having to alter the Step Functions config.
This would be easier to answer with a fuller code example, but I suspect you should check the way you are passing the parameter from the previous step.
In this case you wrote:
"Parameters": { "StateMachineArn": $.arnToTrigger
But you should append ".$" to the key, like:
"Parameters": { "StateMachineArn.$": $.arnToTrigger
This could be causing your issues.
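For context, a minimal sketch of a Task state that starts a child execution from a dynamically supplied ARN (the state name and the input paths are illustrative):

{
  "InvokeSubWorkflow": {
    "Type": "Task",
    "Resource": "arn:aws:states:::states:startExecution.sync",
    "Parameters": {
      "StateMachineArn.$": "$.arnToTrigger",
      "Input.$": "$.subWorkflowInput"
    },
    "End": true
  }
}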
In CloudFormation we have the ability to output some values from a template so that they can be retrieved by other processes, stacks, etc. This is typically the name of something, maybe a URL or something generated during stack creation (deployment), etc.
We also have the ability to 'export' from a template. What is the difference between returning a value as an 'output' vs as an 'export'?
Regular output values can't be referenced from other stacks. They can be useful when you chain or nest your stacks, and their scope/visibility is local. Exported outputs are visible globally within the account and region, and can be used by any future stack you deploy.
Chaining
When you chain your stacks, you deploy one stack, take its outputs, and use them as input parameters to the second stack you are going to deploy.
For example, let's say you have two templates called instance.yaml and eip.yaml. The instance.yaml outputs its instance-id (no export), while eip.yaml takes the instance id as an input parameter; the connecting fragments are sketched after the steps below.
To deploy them both, you have to chain them:
Deploy instance.yaml and wait for its completion.
Note its output values (i.e. the instance-id); this is usually done programmatically, not manually.
Deploy eip.yaml and pass instance-id as its input parameter.
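The connecting pieces of the two templates could look like this (the resource names are illustrative):

# instance.yaml (fragment): plain output, no Export
Outputs:
  InstanceId:
    Value: !Ref MyInstance

# eip.yaml (fragment): the same value arrives as an input parameter
Parameters:
  InstanceId:
    Type: AWS::EC2::Instance::Id
Resources:
  MyEIP:
    Type: AWS::EC2::EIP
    Properties:
      InstanceId: !Ref InstanceId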
Nesting
When you nest stacks, you have a parent template and a child template. The child stack is created from inside the parent stack. In this case the child stack produces some outputs (not exports) for the parent stack to use.
For example, let's use instance.yaml and eip.yaml again, but this time eip.yaml will be the parent and instance.yaml will be the child. Also, eip.yaml does not take any input parameters, while instance.yaml outputs its instance-id (not an export).
In this case, to deploy them you do the following:
Upload the child template (instance.yaml) to S3.
In eip.yaml, create the child instance stack using AWS::CloudFormation::Stack and the S3 URL from step 1.
This way eip.yaml will be able to access the instance-id from the outputs of the nested stack using GetAtt.
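The parent-side wiring could look like this (the bucket URL is illustrative):

# eip.yaml (fragment): create the child stack, then read its output
Resources:
  InstanceStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-bucket/instance.yaml
  MyEIP:
    Type: AWS::EC2::EIP
    Properties:
      InstanceId: !GetAtt InstanceStack.Outputs.InstanceId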
Cross-referencing
When you cross-reference stacks, you have one stack that exports its outputs so that they can be used by any other stack in the same region and account.
For example, let's use instance.yaml and eip.yaml again. instance.yaml is going to export its output (instance-id). To use the instance-id, eip.yaml will have to use ImportValue in its template, without the need for any input parameters or nested stacks; the fragments are sketched after the steps below.
In this case, to deploy them you do the following:
Deploy instance.yaml and wait till it completes.
Deploy eip.yaml which will import the instance-id.
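The corresponding fragments (the export name my-instance-id is illustrative):

# instance.yaml (fragment): export the output under an account/region-wide name
Outputs:
  InstanceId:
    Value: !Ref MyInstance
    Export:
      Name: my-instance-id

# eip.yaml (fragment): import it; no parameters or nested stacks needed
Resources:
  MyEIP:
    Type: AWS::EC2::EIP
    Properties:
      InstanceId: !ImportValue my-instance-id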
Although cross-referencing seems very useful, it has one major issue: it's very difficult to update or delete cross-referenced stacks:
After another stack imports an output value, you can't delete the stack that is exporting the output value or modify the exported output value. All of the imports must be removed before you can delete the exporting stack or modify the output value.
This is very problematic if you are starting your design and your templates can change often.
When to use which?
Use cross-references (exported values) when you have global resources that are going to be shared among many stacks in a given region and account. They should not change often, as they are difficult to modify. Common examples: a global bucket for a centralized logging location, or a VPC.
Use nested stacks (not exported outputs) when you have common components that you deploy often, but which can differ a bit each time. Examples: an ALB, a bastion host instance, a VPC interface endpoint.
Finally, chained stacks (not exported outputs) are useful for designing loosely-coupled templates, where you can mix and match templates based on new requirements.
Short answer: use exports between stacks, and use outputs with nested stacks.
Export
To share information between stacks, export a stack's output values. Other stacks that are in the same AWS account and region can import the exported values.
Output
With nested stacks, you deploy and manage all resources from a single stack. You can use outputs from one stack in the nested stack group as inputs to another stack in the group. This differs from exporting values.
I'm trying in vain to do this. Here is the scenario:
We are creating a CloudFormation stack that will generate a CodePipeline, which will pull another stack definition from git and deploy it using the CloudFormationCreateUpdateStackAction.
The repo, branch, etc. are provided as CfnParameters, and we would like to base the resulting stack name on a concatenation of the repo name and branch name.
However in some cases the repo might be named with an underscore or other special character that is invalid for a stackName.
I've attempted to manipulate the parameter strings using macros, but I didn't get anywhere near something useful, and running a .replace() on the repoStr.valueAsString property modifies the CDK's "Token" pointer, not the resulting parameter, which is declared at runtime.
Any ideas?
There are two options I can see:
The first is to read the actual parameter value at synthesis time; then you can use .replace() on the parameter value: https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-ssm.StringParameter.html#static-value-wbr-from-wbr-lookupscope-parametername
The second is to use CloudFormation intrinsic functions to split the parameter and join the fragments back in an allowed manner: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-split.html https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-join.html
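A sketch of both options (the SSM parameter name '/ci/repo-name' and the construct IDs are assumptions, not part of the question):

import { Stack, StackProps, Fn, CfnParameter } from 'aws-cdk-lib';
import * as ssm from 'aws-cdk-lib/aws-ssm';
import { Construct } from 'constructs';

class PipelineStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Option 1: resolve the value at synthesis time, so ordinary string
    // methods work on it instead of on a Token.
    const repoName = ssm.StringParameter.valueFromLookup(this, '/ci/repo-name');
    const nameFromLookup = repoName.replace(/_/g, '-');

    // Option 2: keep the deploy-time CfnParameter and rewrite it with
    // intrinsic functions: split on '_', then join the pieces with '-'.
    const repoParam = new CfnParameter(this, 'RepoName', { type: 'String' });
    const nameFromIntrinsics = Fn.join('-', Fn.split('_', repoParam.valueAsString));
  }
}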
I am trying to create a nested topology from 4 existing templates. These templates do the following:
1: deploys a policy and a role.
2: deploys an EC2 instance.
3: deploys an ELB.
4: deploys an RDS instance.
All of them are "linked" by using outputs, and all of the parameters are also contained within these templates.
Now I want to create a fifth (master) template and treat the other 4 templates as children.
However I am not too sure about the minimum code that I need in the master template:
Parameters: these are defined within the children, so I don't need them here, do I?
Resources: point to the 4 child templates by providing the S3 URLs where they're stored.
DependsOn clause: I need this, as the child templates need to be deployed in sequential order.
Outputs: I'm not sure what to include here; shall I leave the outputs in the children and define only the master's here?
I think the master should be small, but I'm not sure whether I'm missing something. Another question: do I need to change anything in the child templates?
Any help would be much appreciated.
A handful of questions here, so I'll address what I can :)
For the master, or parent template, I'd recommend including all Parameters that the child stacks will need.
When you want to make any updates in the future to any of the child stacks, you'll want to initiate that from the parent stack.
According to the docs:
Certain stack operations, such as stack updates, should be initiated from the root stack rather than performed directly on nested stacks themselves.
So your parent template could have a lot of parameters depending on how many parameters need to be passed directly to the child templates.
Depending on how the child stacks use the Outputs from the other child stacks, you may not need DependsOn to enforce ordering, since CloudFormation is smart enough to figure out implicit dependencies (see the docs discussing DependsOn). It certainly won't hurt to include it, but the DependsOn attribute isn't needed in most situations.
You'll want to make sure the child stacks have an Outputs section so that other child stacks can use their values. Pay close attention to the return values for AWS::CloudFormation::Stack. A sketch of a minimal parent follows below.
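For illustration, a minimal parent might look like this (the bucket URLs, parameter names, and output names are all assumptions):

Parameters:
  KeyName:
    Type: String
Resources:
  RoleStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-bucket/role.yaml
  InstanceStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-bucket/instance.yaml
      Parameters:
        KeyName: !Ref KeyName
        # Referencing RoleStack's output creates an implicit dependency,
        # so an explicit DependsOn is unnecessary here.
        RoleName: !GetAtt RoleStack.Outputs.RoleName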
If you have many dependent stacks, it is much easier to run everything from, for example, Ansible. Add outputs to each CF template, then write a simple playbook that runs your templates in the desired order. Please take a look at https://docs.ansible.com/ansible/devel/modules/cloudformation_module.html
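A sketch of such a playbook, assuming the amazon.aws.cloudformation module and illustrative stack and template names; the registered stack_outputs of the first task feed the second:

- hosts: localhost
  connection: local
  tasks:
    - name: Deploy the instance stack and wait for completion
      amazon.aws.cloudformation:
        stack_name: instance
        template: templates/instance.yaml
      register: instance_stack

    - name: Deploy the EIP stack, fed by the first stack's outputs
      amazon.aws.cloudformation:
        stack_name: eip
        template: templates/eip.yaml
        template_parameters:
          InstanceId: "{{ instance_stack.stack_outputs.InstanceId }}"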