Here is the readme for the serverless-plugin-nested-stacks plugin. It makes it possible to include nested stacks in the main one. But how do I pass values between stacks? For example, if I create a resource in one nested stack, how do I pass its ARN to another stack (nested or the main one)?
First you will need to export the resources from the corresponding nested stack like this:
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  ...
Resources:
  ...
Outputs:
  o1:
    Description: ...
    Value: <your_resource_arn>
    Export:
      Name: <your_export_name>
To import the resource in another stack, use the intrinsic function Fn::ImportValue, like this:
Fn::ImportValue: <your_export_name>
For more information, check the AWS documentation.
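To make the import concrete, here is a minimal sketch of a consuming stack (nested or main) that uses the exported ARN inside a resource property; the role, policy name, and IAM action below are hypothetical, and the export name must match the one declared above:

Resources:
  ConsumerRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal: { Service: lambda.amazonaws.com }
            Action: sts:AssumeRole
      Policies:
        - PolicyName: access-shared-resource
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action: dynamodb:GetItem              # hypothetical action; adjust to your resource type
                Resource:
                  Fn::ImportValue: <your_export_name>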
Related
I'm trying to figure out how to pass the output of a CloudFormation stack as a parameter to another CloudFormation stack, particularly via the Parameters section of the CloudFormation definition.
Say StackA is exporting an output:
Outputs:
  TargetGroupArn:
    Description: "Target Group ARN"
    Export: {Name: TargetGroupArn}
    Value: {Ref: TargetGroup}
Can StackB contain a parameter like this in its definition:
Parameters:
  TargetGroupArn:
    Type: String
    Default:
      Fn::ImportValue: TargetGroupArn
Note: I'm aware that TargetGroupArn can be fetched wherever required in the Resources section via Fn::ImportValue. I'm specifically interested in importing in the parameters section.
No, you cannot import the value as the parameter default.
As per the documentation:
You can use intrinsic functions only in specific parts of a template. Currently, you can use intrinsic functions in resource properties, outputs, metadata attributes, and update policy attributes. You can also use intrinsic functions to conditionally create stack resources.
Parameters are not among the parts that allow intrinsic functions, and since Fn::ImportValue is an intrinsic function, a parameter default cannot be imported.
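What does work, as the question already notes, is importing the value in the Resources section of StackB. A hedged sketch, assuming StackB defines a load balancer named MyLoadBalancer elsewhere in its template:

Resources:
  MyListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref MyLoadBalancer             # assumed to exist elsewhere in StackB
      Port: 80
      Protocol: HTTP
      DefaultActions:
        - Type: forward
          TargetGroupArn: !ImportValue TargetGroupArn  # the export from StackA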
I'm facing a decision to Use Cross-Stack References to Export Shared Resources or to Use Nested Stacks to Reuse Common Template Patterns following AWS CloudFormation best practices.
However, they seem the same to me apart from a few differences:
cross-stack references use Fn::ImportValue, and the templates live in one folder.
nested stacks must be stored in S3 and use the type AWS::CloudFormation::Stack with a TemplateURL.
There are no clear pros and cons between them as far as I could find.
My goal is to create a parent stack that passes some core variables, like stackName, to the child stacks; the child stacks then create the resources, sharing some values between them (such as ARNs or policies) and using stackName to name their resources, e.g. stackNameDynamoDBTable.
You should use cross-stack references, as they were created for your use case of passing values between stacks.
While nested stacks would work, their primary purpose is the reuse of modular components, like a template for a resource you use in lots of stacks, to save copy-pasting and updating each stack independently.
Nested stacks: if you need to manage your stacks from a single point, you should use nested stacks.
Example: assume you have a load balancer configuration that you use for most of your stacks. Instead of copying and pasting the same configuration into your templates, you can create a dedicated template for the load balancer.
Cross-stack: alternatively, if you need to manage your stacks as separate entities, you should use cross-stack references. (For example, AWS limits the number of VPCs you can create in a region to five by default, so you may want to share one VPC rather than create a new one per stack.)
Example: you might have a network stack that includes a VPC, a security group, and a subnet. You want all public web apps to use these resources. By exporting the resources, you allow all stacks with public web applications to use them.
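A hedged sketch of that cross-stack example (all names and export keys here are hypothetical, and the network stack's resource definitions are elided):

# Network stack (sketch): exports the shared networking resources.
# The VPC, PublicSubnet, and WebSecurityGroup resources are elided.
Outputs:
  VpcId:
    Value: !Ref VPC
    Export:
      Name: network-stack-VpcId
  PublicSubnetId:
    Value: !Ref PublicSubnet
    Export:
      Name: network-stack-PublicSubnetId
  WebSecurityGroupId:
    Value: !Ref WebSecurityGroup
    Export:
      Name: network-stack-WebSecurityGroupId

# Web app stack (sketch): imports the shared resources instead of creating its own.
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678                # placeholder AMI ID
      InstanceType: t3.micro
      SubnetId: !ImportValue network-stack-PublicSubnetId
      SecurityGroupIds:
        - !ImportValue network-stack-WebSecurityGroupId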
There is a way to get the best of both worlds. The trick is to use cross-stack resource sharing, but make it depend on a parameter that is passed by the nested stack setup.
Here's an example of how I used this. Consider two stacks, IAMRoleStack and ComputeStack. The former contains all the necessary IAM roles and the latter contains a bunch of Lambda functions that those roles are applied to.
Resources:
  IAMCustomAdminRoleForLambda:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        ...
      Policies:
        ...
Outputs:
  IAMRoleArnForLambda:
    Description: Returns the Amazon Resource Name for the newly created IAM Custom Role for Lambda function
    Value: !GetAtt 'IAMCustomAdminRoleForLambda.Arn'
    Export:
      Name: !Sub '${AWS::StackName}-IAMRoleArnForLambda'
  StackName:
    Description: Returns name of stack after deployment
    Value: !Sub ${AWS::StackName}
As you can see, I've exported the IAM role, but its export Name depends on the stack name, which is only known once the stack is deployed. You can read more about exporting outputs in the docs.
In the ComputeStack, I use this role by importing it.
Resources:
  LambdaForCompute:
    Type: AWS::Lambda::Function
    Properties:
      Role: !ImportValue
        Fn::Sub: ${StackNameOfIAMRole}-IAMRoleArnForLambda
The parent stack that "nests" both ComputeStack and IAMRoleStack orchestrates passing the stack name parameter.
Resources:
  IAMRoleStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: !Ref IAMRoleStackURL
  ComputeStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: !Ref ComputeStackURL
      Parameters:
        StackNameOfIAMRole: !GetAtt IAMRoleStack.Outputs.StackName
I can't attest to this being best practice, but this style lets me pick and choose where I want orchestrated deployment and where I want to do deployments individually.
I also want to point out that this kind of modularization based on resource type is not very feasible with nested stacks alone. For example, in this scenario, if I had 10 different roles for 10 different Lambda functions, I would have to pass each of those 10 roles through parameters. Using this hybrid style, I only need to pass one parameter: the stack name.
With cross-stack references, you pass a reference to a bunch of existing components X to stacks A and B when you want A and B to reuse those very same existing components. With nested stacks, when you nest a stack Y in stacks C and D, Y creates a new set of the components it describes, separately for C and for D.
It is similar to the concepts of 'passing by reference' and 'passing by value' in programming.
I'm new to AWS and SAM, so this may be an obvious question, but I just can't find an answer to it. I'm trying to construct a SAM template that allows the user to inject a parameter that will affect the names of all resources within. Specifically, an "environment" parameter can be passed in, which will then be used to qualify all resource names:
AWSTemplateFormatVersion: "2010-09-09"
Transform: "AWS::Serverless-2016-10-31"
Parameters:
  EnvironmentParameter:
    Type: "String"
    Default: "default"
Resources:
  GetTermsAndConditionsFunction:
    Type: "AWS::Serverless::Function"
    Properties:
      # TODO: prepend the environment somehow so I get "$ENVIRONMENT_MyFunction" instead
      FunctionName: "MyFunction"
      Handler: "..."
      ...
How can I dynamically construct a FunctionName using EnvironmentParameter?
All the CloudFormation intrinsic functions work in SAM templates as well, so you can use the Fn::Sub function to substitute EnvironmentParameter into your FunctionName:
FunctionName: !Sub "${EnvironmentParameter}_MyFunction"
See the documentation on the Fn::Sub function for more details.
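Applied to the question's template, the function definition would then look roughly like this (other properties elided as in the original):

Resources:
  GetTermsAndConditionsFunction:
    Type: "AWS::Serverless::Function"
    Properties:
      FunctionName: !Sub "${EnvironmentParameter}_MyFunction"
      Handler: "..."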
Hope this helps!
Is there a way to access auto-generated URLs for deployed resources before the deployment is finished? (like db host, lambda function URL, etc.)
I can access them after the deployment is finished, but sometimes I need to access them while building my stack. (E.g. use them in other resources).
What is a good solution to handle this use-case? I was thinking about outputting them into the SSM parameter store from CloudFormation template, but I'm not sure if this is even possible.
Thanks for any suggestion or guidance!
If "use them in other resources" means another serverless service or another CloudFormation stack, then use CloudFormation Outputs to export the values you are interested in. Then use CloudFormation ImportValue function to reference that value in another stack.
See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/outputs-section-structure.html and https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-importvalue.html
Within Serverless Framework, you can access a CloudFormation Output value using https://serverless.com/framework/docs/providers/aws/guide/variables/#reference-cloudformation-outputs
If you want to use the autogenerated value within the same stack, then just use CloudFormation GetAtt function. See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-getatt.html.
For example, I have a CloudFormation stack that outputs the URL for an ElasticSearch cluster.
Resources:
  Search:
    Type: AWS::Elasticsearch::Domain
    Properties: <redacted>
Outputs:
  SearchUrl:
    Value: !GetAtt Search.DomainEndpoint
    Export:
      Name: myapp:search-url
Assuming that the CloudFormation stack name is "mystack", then in my Serverless service, I can reference the SearchUrl by:
custom:
  searchUrl: ${cf:mystack.SearchUrl}
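If the consumer were another CloudFormation stack rather than a Serverless service, the same export could be pulled in with Fn::ImportValue. A minimal sketch (the SSM parameter is just a hypothetical consumer, and it incidentally shows that writing such a value into SSM Parameter Store from a template, as the question considered, is possible):

Resources:
  SearchUrlParam:
    Type: AWS::SSM::Parameter
    Properties:
      Name: /myapp/search-url                 # hypothetical parameter name
      Type: String
      Value: !ImportValue "myapp:search-url"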
To add to bwinant's answer, ${cf:<stack name>.<output name>} does not work if you want to reference a variable in another stack which is located in another region. There is a plugin to achieve this, serverless-plugin-cloudformation-cross-region-variables. You can use it like so:
plugins:
  - serverless-plugin-cloudformation-cross-region-variables

custom:
  myVariable: ${cfcr:ca-central-1:my-other-stack:MyVariable}
I have a YAML CloudFormation template (A) for an AWS CodePipeline build, and I want to make a variation of it in another template (B).
The original (A) has a repository as one of its resources, which it created when initially run through CloudFormation. I'd like the variation template (B) to use the same ECR repository generated by the original (A) for its CodeBuild.
Is there a way I can have template (B) use the ECR resource created in (A), by passing in the repository resource value as a parameter or something similar?
For example, the resource in (A) that I want to reuse (not recreate) in (B) is something like:
Repository:
  Type: AWS::ECR::Repository
  Properties:
    RepositoryName: !Sub comp/${ServiceName}
    RepositoryPolicyText:
      Version: 2012-10-17
      Statement:
        ...
I am not sure from your question which resources you are referring to, but in general you can export any value from one stack into another using the Export field of the Outputs section.
From Exporting Stack Output Values
To share information between stacks, export a stack's output values. Other stacks that are in the same AWS account and region can import the exported values. For example, you might have a single networking stack that exports the IDs of a subnet and security group for public web servers. Stacks with a public web server can easily import those networking resources. You don't need to hard code resource IDs in the stack's template or pass IDs as input parameters.
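Applied to the ECR case, a hedged sketch: template A could export the repository name, and template B could import it wherever its CodeBuild project needs it instead of declaring its own AWS::ECR::Repository (the export name, service role parameter, and environment variable below are hypothetical):

# Template A: export the repository defined above.
Outputs:
  RepositoryName:
    Description: Name of the shared ECR repository
    Value: !Ref Repository                            # Ref returns the repository name
    Export:
      Name: shared-ecr-repository-name                # hypothetical export name

# Template B: reuse the repository in a CodeBuild project instead of recreating it.
Resources:
  Build:
    Type: AWS::CodeBuild::Project
    Properties:
      ServiceRole: !Ref CodeBuildServiceRoleArn       # assumed to be a parameter of template B
      Artifacts:
        Type: CODEPIPELINE
      Source:
        Type: CODEPIPELINE
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/standard:7.0
        EnvironmentVariables:
          - Name: ECR_REPOSITORY_NAME                 # hypothetical; consumed by the buildspec
            Value: !ImportValue shared-ecr-repository-name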