CloudFormation nested stack name - amazon-web-services

I need to set a nested stack name explicitly in a CloudFormation template, but I don't see such an option in the AWS documentation. Is there a way to achieve this?
I can specify the stack name when launching the parent stack, but all nested stacks get a randomly generated name based on the resource that creates them, like:
VPC:
  Type: AWS::CloudFormation::Stack
  Properties:
    TemplateURL: https://s3-eu-west-1.amazonaws.com/cf-templates-wtmg/vpc.yaml
    Parameters:
      EnvironmentName: !Ref AWS::StackName
This generates a nested stack name of the form parent_stack_name-VPC-random_hash.

I was looking for the same thing too, but currently it's not available.
I think the reason you want a specific stack name is to refer to its outputs?
What you can do (and what I did) was:
1) For consumers within the same parent stack, output the value from the nested stack and then refer to it directly, like !GetAtt NestedStack1.Outputs.Output1
2) For consumers outside the parent stack, you will need to output twice: once in the nested stack and once in the parent stack. Then you can refer to the parent stack's output, as in the sketch below.
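A minimal sketch of the output-twice pattern (logical names, the output name, and the bucket URL are all made up for illustration):

# child.yaml - the nested stack outputs the value
Outputs:
  Output1:
    Value: !Ref SomeResource

# parent.yaml - re-expose the nested stack's output
Resources:
  NestedStack1:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-bucket/child.yaml  # hypothetical bucket

Outputs:
  Output1:
    Value: !GetAtt NestedStack1.Outputs.Output1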
Hope this will help.

I ran into the same thing just today.
From the official AWS documentation, supporting the original answer to this question:
You can add output values from a nested stack within the containing template. You use the GetAtt function with the nested stack's logical name and the name of the output value in the nested stack in the format Outputs.NestedStackOutputName
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-stack.html
It looks like we still cannot reference the stack name any more easily. The first answer to the question on this page still stands.

Related

Does CloudFormation support update links?

To help people create a CloudFormation stack, I can generate links to parameterized CloudFormation templates:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-create-stacks-quick-create-links.html
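For reference, a quick-create link from that docs page looks roughly like this (region, bucket, stack name, and the param_-prefixed template parameter are all made up):

https://console.aws.amazon.com/cloudformation/home?region=eu-west-1#/stacks/create/review?templateURL=https://s3.amazonaws.com/my-bucket/template.yaml&stackName=ABC&param_InstanceType=t2.micro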
Question: Is there a way to generate another link to update the previously created Cloudformation stack?
What I tried and what doesn't work:
I generated a link to create stack "ABC" using template Template1.
I generated a link to create stack "ABC" with a slightly modified template, Template2.
What happens is that CloudFormation creates the first "ABC" stack based on Template1, but then complains that the "ABC" stack already exists when the Template2 link tries to create a stack with the same name.
I kind of expected this behavior, but I assumed that, since stack names are unique, it would offer to apply updates instead of trying (and failing) to create the stack.

Difference between an Output & an Export

In CloudFormation we have the ability to output some values from a template so that they can be retrieved by other processes, stacks, etc. This is typically the name of something, maybe a URL or something generated during stack creation (deployment), etc.
We also have the ability to 'export' from a template. What is the difference between returning a value as an 'output' vs as an 'export'?
Regular output values can't be referenced from other stacks. They can be useful when you chain or nest your stacks, and their scope/visibility is local. Exported outputs are visible globally within the account and region, and can be used by any future stack you deploy.
Chaining
When you chain your stacks, you deploy one stack, take its outputs, and use them as input parameters to the second stack you are going to deploy.
For example, let's say you have two templates called instance.yaml and eip.yaml. instance.yaml outputs its instance ID (no export), while eip.yaml takes the instance ID as an input parameter.
To deploy them both, you have to chain them:
Deploy instance.yaml and wait for its completion.
Note its output values (i.e. the instance ID); this is usually done programmatically, not manually.
Deploy eip.yaml and pass instance-id as its input parameter.
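As a rough sketch, the two chained templates could look like this (resource properties are trimmed and the AMI ID is a placeholder):

# instance.yaml - outputs its instance ID, no export
Resources:
  Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678  # placeholder

Outputs:
  InstanceId:
    Value: !Ref Instance

# eip.yaml - takes the instance ID as an input parameter
Parameters:
  InstanceId:
    Type: String

Resources:
  EIP:
    Type: AWS::EC2::EIP
    Properties:
      InstanceId: !Ref InstanceId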
Nesting
When you nest stacks you have a parent template and a child template. The child stack is created from inside the parent stack. In this case the child stack produces some outputs (not exports) for the parent stack to use.
For example, let's again use instance.yaml and eip.yaml, but this time eip.yaml will be the parent and instance.yaml the child. Also, eip.yaml does not take any input parameters, while instance.yaml outputs its instance ID (not an export).
In this case, to deploy them you do the following:
Upload the child template (instance.yaml) to S3.
In eip.yaml, create the child instance stack using AWS::CloudFormation::Stack and the S3 URL from step 1.
This way eip.yaml can access the instance ID from the outputs of the nested stack using GetAtt.
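A minimal sketch of the parent template under these assumptions (the bucket URL is hypothetical):

# eip.yaml - parent; creates the child stack and reads its output
Resources:
  InstanceStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-bucket/instance.yaml  # hypothetical bucket

  EIP:
    Type: AWS::EC2::EIP
    Properties:
      InstanceId: !GetAtt InstanceStack.Outputs.InstanceId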
Cross-referencing
When you cross-reference stacks, you have one stack that exports its outputs so that they can be used by any other stack in the same region and account.
For example, let's again use instance.yaml and eip.yaml. instance.yaml is going to export its output (the instance ID). To use it, eip.yaml uses ImportValue in its template, without the need for any input parameters or nested stacks.
In this case, to deploy them you do the following:
Deploy instance.yaml and wait till it completes.
Deploy eip.yaml which will import the instance-id.
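Sketched out, with an assumed export name (export names must be unique within the account and region):

# instance.yaml - exports its instance ID
Outputs:
  InstanceId:
    Value: !Ref Instance
    Export:
      Name: shared-instance-id  # assumed export name

# eip.yaml - imports it, no input parameters needed
Resources:
  EIP:
    Type: AWS::EC2::EIP
    Properties:
      InstanceId: !ImportValue shared-instance-id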
Although cross-referencing seems very useful, it has one major issue: it is very difficult to update or delete cross-referenced stacks:
After another stack imports an output value, you can't delete the stack that is exporting the output value or modify the exported output value. All of the imports must be removed before you can delete the exporting stack or modify the output value.
This is very problematic if you are starting your design and your templates can change often.
When to use which?
Use cross-references (exported values) when you have some global resources that are going to be shared among many stacks in a given region and account. They should also not change often, as they are difficult to modify. Common examples: a global bucket for a centralized logging location, or a VPC.
Use nested stacks (not exported outputs) when you have some common components that you often deploy, but that can differ a bit each time. Examples: an ALB, a bastion host instance, a VPC interface endpoint.
Finally, chained stacks (not exported outputs) are useful for designing loosely coupled templates, where you can mix and match templates based on new requirements.
Short answer, from here: use exports between stacks, and use outputs with nested stacks.
Export
To share information between stacks, export a stack's output values. Other stacks that are in the same AWS account and region can import the exported values.
Output
With nested stacks, you deploy and manage all resources from a single stack. You can use outputs from one stack in the nested stack group as inputs to another stack in the group. This differs from exporting values.

AWS CDK generated resource identifiers are horrible and not readable. Any way to fix this?

Anyone that has used AWS CDK suffers from horrible resource identifiers.
(Examples of generated stack/nested-stack names and resource names were screenshots in the original post, omitted here.)
These identifiers are horrible to read. Is there any workaround to override them?
I have tried to set the ids / names / identifiers / aliases of the resources. However, it seems that CDK or CloudFormation itself is generating these strings.
Thank you for suggestions!
All resources (or at least most that I know of) can be named manually.
For AWS::EC2::SecurityGroup that would be Properties -> GroupName
AWS::CloudWatch::Alarm - Properties -> AlarmName
AWS::Lambda::Function - Properties -> FunctionName
etc.
But for some of them this has consequences: you won't be able to update some resources, because they may need recreation (and the name is already occupied). So in general it's not good practice.
And obviously you won't be able to create a full duplicate of the environment without changing some parameter of the generated name, like this:
FunctionName: !Sub '${InstanceName}-your-resourse-constant-name-${Environment}'
If you don't specify a name, it creates one like this:
${stackName}-${resourceNameInCF}-${someHashCode}
But in your case it seems you have nested stacks, and it becomes pretty unreadable, especially with long names, because the names chain.
Yeah, this is a good question. There are two types of IDs here:
Logical ID
Physical ID
Typically the Physical ID can be specified as a property on the resources. For instance, if you are using the CDK, you can set the functionName property when creating your Lambda (as below).
The Logical ID is also set when creating the resource; however, as you mentioned, it is derived from a combination of what you specify and where the resource sits within your stack. So, for example, if you have a stack that uses constructs, this ID will be prefixed with the construct's Logical ID as well... and it's definitely not very readable.
I'd be very careful changing these IDs, especially if you have already deployed the stack, but if you really want to override them then you could do something like this in the CDK (TypeScript):
import * as path from 'path';
import { CfnResource } from '@aws-cdk/core';
import { Function, Runtime, Code } from '@aws-cdk/aws-lambda';

const consumerLambda = new Function(this, 'LogicalIdOnResource', {
  runtime: Runtime.NODEJS_12_X,
  handler: 'index.handler',
  code: Code.fromAsset(path.join(__dirname, 'lambda-handler')),
  functionName: 'ds-di-kafka-consumer-lambda', // PhysicalIdOnResource
});

// Override the Logical ID
(consumerLambda.node.defaultChild as CfnResource).overrideLogicalId(
  'Consumer'
);
Which looks like this on CloudFormation (screenshot omitted).

Google Deployment Manager - BigTable example

I have been trying this example provided in Google's Deployment Manager GitHub project.
It works, yet I am not sure what the purpose is of creating three instances named instance_create, instance_update and instance_delete.
For example, taken from the link:
instance_create = {
    'name': 'instance_create',
    'action': 'gcp-types/bigtableadmin-v2:bigtableadmin.projects.instances.create',
    'properties': {
        'parent': project_path,
        'instanceId': instance_name,
        'clusters': copy.deepcopy(initial_cluster),
        'instance': context.properties['instance']
    },
    'metadata': {
        'runtimePolicy': ['CREATE']
    }
}
What is the purpose of `action` and `metadata.runtimePolicy`? I have tried to find them in the documentation but failed miserably.
Why are there three `BigTable` instances there?
You are right, the documentation is missing the information that would answer your questions about these parameters.
However, it helps to know what's going on in the Deployment Manager example you linked.
First of all, the following lines in the config.yaml are where things get tricky:
resources:
- name: my-bigtable
  type: bigtable.py
The type: bigtable.py line calls the bigtable.py Python file, which sets the deployment's resources to those defined in it, under the GenerateConfig function. See how this is done here.
The resources are returned as {'resources': resources} at the end of it, where the resources variable is a list of the templates created there.
These templates have different name identifiers, which are set by the "name" tag.
So you are not creating three different instances named instance_create, instance_update and instance_delete in this file; you are creating three templates with those names, which are appended to the resources list and later returned to the config.yaml resources type.
These templates will then be built and executed sequentially by Deployment Manager once the create command is used. Note that they might appear out of order; this is due to not using a schema.
It's easier to see this structure in .yaml format. For example, built with Jinja, the template you posted would be:
resources:
- action: gcp-types/bigtableadmin-v2:bigtableadmin.projects.instances.create
  name: instance_create
  metadata:
    runtimePolicy:
    - CREATE
  properties:
    clusters:
      initial:
        defaultStorageType: HDD
        location: projects/<PROJECT_ID>/locations/<PROJECT_LOCATION>
        serveNodes: 4
    instance:
      displayName: My BigTable Instance.
      type: PRODUCTION
    instanceId: my-instance
    parent: projects/<PROJECT_ID>
Notice that the parameters under properties are the fields of the request body to bigtableadmin.projects.instances.create (which nests the parameters of a clusters object and an instance object). Note that the instanceId under properties is always the same, hence the BigTable instance on which the templates do their calls is always the same one.
The thing is that not only does the example you linked create various templates to be run in the same script, but the resource type of each template is a call to the BigTable API.
Normally template resources are specified with the type tag, but since you are calling a resource that directly runs an API call (i.e. instead of just specifying gcp-types/bigtableadmin-v2, you are specifying bigtableadmin-v2:bigtableadmin.projects.instances.create), the action tag is used. I haven't found this difference in usage documented anywhere, but it needs to be specified like that.
You will know you are calling an API 'endpoint' directly if the resource ends with create/update/delete.
Finally, I have investigated on my side, and metadata.runtimePolicy is tied to the fact that the resource type is an API call (as in the previous point). Once again, I haven't found this documented anywhere.
However, since this is a requirement, you will always have to set the correct value in this field. It basically boils down to setting metadata.runtimePolicy to these values, depending on which type of API call you make:
create -> ['CREATE']
update -> ['UPDATE_ON_CHANGE']
delete -> ['DELETE']
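For illustration, a hedged sketch of how the matching delete template from the example might look in the same .yaml form (the full instance path is inferred from the parent and instanceId values above, not copied from the example):

resources:
- action: gcp-types/bigtableadmin-v2:bigtableadmin.projects.instances.delete
  name: instance_delete
  metadata:
    runtimePolicy:
    - DELETE
  properties:
    # instances.delete takes the full instance path in its 'name' field
    name: projects/<PROJECT_ID>/instances/my-instance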
Summarizing:
You are not creating three different instances, but three different templates, which do the work on the same BigTable instance.
You need to change the resource type flag to action if you are calling an API endpoint (create/update/delete), instead of just naming the base API.
The metadata.runtimePolicy value is a requirement when making a call to one of the aforementioned endpoints.

CloudFormation: conditional parameters

Building a CloudFormation stack template, I have a setup where, upon instantiation, I want to reference either the name of another CloudFormation stack or a non-CloudFormation-managed database as a parameter.
Is there a way to represent this constellation in my template? I.e. "Parameter DatabaseHost is mandatory if Parameter DatabaseStack is blank"?
Maybe it wasn't possible at the time of the question, but now you can include conditions in a CloudFormation template. See the docs.
In this example, I use one value or another depending on the environment:
InfrastructurePipelineStack:
  Type: AWS::CloudFormation::Stack
  Properties:
    TemplateURL: !Sub "https://<masked>.yml"
    Parameters:
      ProjectName: !Ref ProjectName
      ...
      LambdaNotifications: !If [isDev, !GetAtt NotificationsStack.Outputs.LambdaNotifications, !Ref LambdaNotifications]
If the environment is Development (the "isDev" condition), I use the output of another CloudFormation stack as the value. If not, I use a provided fixed value (a non-CloudFormation value).
In this case, "isDev" plays the role of "parameter DatabaseStack is blank" from the OP's question.
I'm not aware of a native option in CloudFormation to make one template parameter conditional on a second template parameter.
Possible workarounds might be:
make both optional, and tell the user to supply one of them (see the sketch after this list)
use two templates, one for each of the two use cases
programmatically generate your template after asking the user for parameters
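For the first workaround, a hedged sketch combining optional parameters with a condition (all names, the template URL, and the ${DatabaseStack}-DatabaseHost export naming convention are assumptions):

Parameters:
  DatabaseStack:
    Type: String
    Default: ""  # blank means: fall back to DatabaseHost
  DatabaseHost:
    Type: String
    Default: ""

Conditions:
  UseDatabaseStack: !Not [!Equals [!Ref DatabaseStack, ""]]

Resources:
  App:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-bucket/app.yaml  # hypothetical
      Parameters:
        DatabaseHost: !If
          - UseDatabaseStack
          - Fn::ImportValue: !Sub "${DatabaseStack}-DatabaseHost"  # assumed export name
          - !Ref DatabaseHost

This assumes the database stack exports its host under a predictable name; adjust the import to match however the exporting stack actually names its export.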