I'm trying to figure out how to pass the output of one CloudFormation stack as a parameter to another CloudFormation stack, specifically via the Parameters section of the template.
Say StackA exports an output:
```yaml
Outputs:
  TargetGroupArn:
    Description: "Target Group ARN"
    Export: {Name: TargetGroupArn}
    Value: {Ref: TargetGroup}
```
Can StackB contain a parameter like this in its definition:
```yaml
Parameters:
  TargetGroupArn:
    Type: String
    Default:
      Fn::ImportValue: TargetGroupArn
```
Note: I'm aware that TargetGroupArn can be fetched wherever required in the Resources section via Fn::ImportValue. I'm specifically interested in importing it in the Parameters section.
No, you cannot import the value as the parameter default.
As per the documentation (emphasis added),
You can use intrinsic functions only in specific parts of a template.
Currently, you can use intrinsic functions in resource properties,
outputs, metadata attributes, and update policy attributes. You can
also use intrinsic functions to conditionally create stack resources.
Parameters are not one of the parts that allow the use of intrinsic functions; and as Fn::ImportValue is an intrinsic function, a parameter value cannot be imported.
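What is supported, as the question already notes, is importing the value at the point of use in the Resources section; alternatively, you can resolve the export outside CloudFormation (e.g. with the CLI) and pass it in as an ordinary parameter value. A minimal sketch of the first approach (the listener rule resource and its names are hypothetical):

```yaml
# StackB: import StackA's export directly where the value is used,
# rather than as a parameter default.
Resources:
  MyListenerRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      ListenerArn: !Ref MyListener
      Priority: 1
      Conditions:
        - Field: path-pattern
          PathPatternConfig:
            Values: [/api/*]
      Actions:
        - Type: forward
          TargetGroupArn: !ImportValue TargetGroupArn
```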
Related
I have a CloudFormation stack that creates an EC2 instance and gives it a name tag.
I want to create a CloudWatch alarm, and reference the EC2 instance's name in the alarm's name - something like AlarmName: !Sub "Status Check Alarm - ${EC2InstanceName}".
!Ref will allow me to reference the CloudFormation script's parameters, but I don't want to parameterize the EC2 instance name - I don't want or need that to be customizable, and I don't want users to have the ability to choose a custom name for the server.
I tried outputting the EC2 instance name so I could !Ref that, but I got an Invalid template resource property 'Outputs' error, so I don't know if my approach even works:
```yaml
EC2Instance:
  Properties: ...
  Type: AWS::EC2::Instance
Outputs:
  EC2InstanceName:
    Description: The server's name.
    Value: !GetAtt EC2Instance.Tags.Name
    Export:
      Name: "EC2InstanceName"
```
How do I reference the EC2 instance's name without parameterizing the name at the top-level of the script?
EDIT:
I ended up using parameters anyway so I could !Ref them. I guess you could also set up an "allowed values" list containing only a single value that matches the default. It's lame but it works, I guess.
```yaml
Parameters:
  EC2InstanceName:
    Type: String
    Default: "web-server-blah"
    Description: The name of the EC2 instance.
```
You can use !GetAtt only for the attributes that are specifically named in the documentation: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html. Tags are not among them.
But if you give your instance a tag with a constant value, you can refer to that value directly without even exporting it.
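For example (a sketch; the Mappings keys and the name value are assumptions), the constant could live in a Mappings section and be interpolated into both the tag and the alarm name:

```yaml
Mappings:
  Constants:
    Instance:
      Name: web-server-blah
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      Tags:
        - Key: Name
          Value: !FindInMap [Constants, Instance, Name]
  StatusCheckAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      # Fn::Sub with a substitution map can pull the same constant
      AlarmName: !Sub
        - 'Status Check Alarm - ${InstanceName}'
        - InstanceName: !FindInMap [Constants, Instance, Name]
      # ...remaining alarm properties
```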
I see what you are trying to do, but AWS does not support everything you would like to work out of the box. One way I imagine it can be done - and you may not like it - is via either a macro or a custom resource (a Lambda function).
Can't you just use !Ref EC2Instance? I realize it won't be the friendly "Name" tag value, but it could be more useful, especially if you have duplicates of the same "Name". It would make your alarm something like "Status Check Alarm - i-123456789".
Whereas if you use the name, you might end up with 10 alarms that all read "Status Check Alarm - WWWServer" - but then which WWWServer?
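If that trade-off is acceptable, the alarm name can be built directly from the logical ID, since ${EC2Instance} inside Fn::Sub resolves the same way as !Ref (to the instance ID):

```yaml
StatusCheckAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmName: !Sub 'Status Check Alarm - ${EC2Instance}'
    # ...remaining alarm properties
```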
```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: hello
Resources:
  ApiGatewayApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: stage
      TracingEnabled: true
  FunctionA:
    ...
    Environment:
      Variables:
        TEST: !Ref ApiGatewayApi
    Events:
      GetUsers:
        Type: Api
        Properties:
          Path: /account
          Method: get
          RestApiId:
            Ref: ApiGatewayApi
  FunctionB:
    ...
    Environment:
      Variables:
        API_URL: !GetAtt ApiGatewayApi.RootResourceId
    Events:
      OrderEvent:
        Type: SQS
        Properties:
          Queue: !GetAtt OrderServiceQueue.Arn
```
This leads to a circular dependency. If I use !Ref in a function that does not have an API event, it does not complain. I have read the AWS premium support article, blogs, and other Stack Overflow questions, but they are not similar to my question.
FunctionB successfully refers to the API Gateway ID while FunctionA does not.
I create the API outside the function, so I think it should be possible to !Ref the endpoint in it. Is there something else?
The circular reference is created by how AWS SAM uses the events in order to create the definition of the API. This basically means that it needs the ARNs of the lambda functions to construct this definition before it can create the API. But since you are needing IDs of the API in order to create the lambda, you end up with a circular reference since neither can be created without the other one already existing.
The first way to solve this problem is by deploying your stacks in multiple steps. You could first deploy an empty API, which would allow you to reference the API IDs when adding the lambdas. The significant drawback of this approach appears, of course, when you want to easily replicate this stack on another account or redeploy the API for some reason, since you'd have to use this trick again each time.
Another way, if you really want to have this value as an environment variable, would be to manually create the definition body for the API (in which you construct the ARNs of the lambda, not reference them) and presumably, you'll also need to manually create the permissions in order to allow your API Gateway resource to execute the lambda functions.
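A rough sketch of that approach (the function name and the path details are assumptions): giving the function a fixed FunctionName lets you write its invocation ARN into the DefinitionBody with Fn::Sub instead of referencing the function, which breaks the cycle:

```yaml
Resources:
  ApiGatewayApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: stage
      DefinitionBody:
        swagger: '2.0'
        paths:
          /account:
            get:
              x-amazon-apigateway-integration:
                type: aws_proxy
                httpMethod: POST
                # ARN constructed, not referenced - no dependency on FunctionA
                uri: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:get-users-function/invocations
  FunctionA:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: get-users-function  # fixed name so the ARN above is predictable
      Environment:
        Variables:
          TEST: !Ref ApiGatewayApi      # now safe: the API no longer depends on FunctionA
      # ...handler, runtime, etc.
```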
However, a better way I feel would be to use the lambda proxy integration (which is used by default I think, but I could not find any documentation to verify this). When using the lambda proxy integration, the incoming event in lambda contains all the information about the API. You could easily extract the information you need from that event instead of having it as an environment variable, but this depends on your exact use case.
Hi, I have 2 AWS::ElasticLoadBalancingV2::Listener resources named Listener1 and Listener2. I have a condition under which either Listener1 or Listener2 is deployed.
I have created an ECS service which I want to depend on the listener.
```yaml
Service:
  Type: AWS::ECS::Service
  DependsOn: !If [Condition, Listener1, Listener2]
  Properties:
```
When deployed, it gives me the error: Template format error: DependsOn must be a string or list of strings.
Sadly, you can't do the following:
DependsOn: !If [Condition, Listener1, Listener2]
As the error message says, DependsOn takes only a string value or a list of strings, not a function, e.g.:
DependsOn: [SomeExistingResource1, SomeExistingResource2]
Also, Fn::If can only be used in metadata attributes, update policy attributes, and property values. From the docs:
Currently, AWS CloudFormation supports the Fn::If intrinsic function in the metadata attribute, update policy attribute, and property values in the Resources section and Outputs sections of a template.
Thus you can't use Fn::If in DependsOn.
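One common workaround is to move the conditional reference into a property value, where Fn::If is allowed; because !Ref on a listener returns its ARN, CloudFormation records the same implicit dependency. A sketch (reusing the condition and listener names from the question; the tag key is arbitrary):

```yaml
Service:
  Type: AWS::ECS::Service
  Properties:
    # Referencing the conditionally created listener in a property
    # makes the service implicitly depend on whichever one exists.
    Tags:
      - Key: listener-dependency
        Value: !If [Condition, !Ref Listener1, !Ref Listener2]
    # ...other service properties
```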
Here is the readme for the serverless-plugin-nested-stacks plugin. It makes it possible to include nested stacks in the main one. But how do you pass values between stacks? For example, I create a resource in one nested stack - how do I pass its ARN to another stack (nested or the main one)?
First you will need to export the resources from the corresponding nested stack like this:
```yaml
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  ...
Resources:
  ...
Outputs:
  o1:
    Description: ...
    Value: <your_resource_arn>
    Export:
      Name: <your_export_name>
```
To import the resource in another stack, you will need to use the intrinsic function Fn::ImportValue, like this:
```yaml
Fn::ImportValue: <your_export_name>
```
For more information check the AWS documentation
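For example, the exported ARN could be consumed in a resource property of another (nested or main) stack like this (the resource name is a placeholder):

```yaml
Resources:
  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      Role:
        Fn::ImportValue: <your_export_name>
      # ...handler, runtime, code, etc.
```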
I'm facing a decision to Use Cross-Stack References to Export Shared Resources or to Use Nested Stacks to Reuse Common Template Patterns following AWS CloudFormation best practices.
However, they seem the same to me apart from a few differences:
cross-stack uses Fn::ImportValue, templates are in one folder.
nested-stack must be in S3, uses type AWS::CloudFormation::Stack and TemplateURL.
There are no clear pros and cons between them as far as I could find.
My goal is to create a parent stack that passes some core variables like stackName to the child stacks, then the child stacks create the resources sharing some variables between them like ARN or Policies, using the stackName to name their resources like stackNameDynamoDBTable.
You should use cross-stack references, as they were created for your use case of passing values between stacks.
While nested stacks would work, their primary purpose is the reuse of modular components - like a template for a resource you use in lots of stacks, to save copy-pasting and to allow updating the stacks independently.
Nested stacks: if you need to manage your stacks from a single point, you should use nested stacks.
example: assume that you have load balancer configuration that you use for most of your stacks. Instead of copying and pasting the same configurations into your templates you can create a dedicated template for load balancer.
cross-stack: Alternatively, if you need to manage your stacks as separate entities, you should use cross-stack references. (AWS limits the number of VPCs you can create in an AWS region to five.)
example : You might have a network stack that includes a VPC, a security group, and a subnet. You want all public web apps to use these resources. By exporting the resources, you allow all stacks with public web applications to use them.
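Sketching that example (stack and export names are assumptions):

```yaml
# network-stack template: export the shared VPC
Outputs:
  VpcId:
    Value: !Ref VPC
    Export:
      Name: !Sub '${AWS::StackName}-VpcId'

# web-app stack template: import it
Resources:
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow web traffic
      VpcId: !ImportValue network-stack-VpcId
```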
There is a way to get the best of both worlds. The trick is to use cross-stack resource sharing but make it depend on a parameter that is passed using Nested stack.
Here's an example of how I used this. Consider two stacks, IAMRoleStack and ComputeStack. The former contains all the necessary IAM roles and the latter contains a bunch of Lambda functions that those roles are applied to.
```yaml
Resources:
  IAMCustomAdminRoleForLambda:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
      Policies:
Outputs:
  IAMRoleArnForLambda:
    Description: Returns the Amazon Resource Name for the newly created IAM Custom Role for Lambda function
    Value: !GetAtt 'IAMCustomAdminRoleForLambda.Arn'
    Export:
      Name: !Sub '${AWS::StackName}-IAMRoleArnForLambda'
  StackName:
    Description: Returns name of stack after deployment
    Value: !Sub ${AWS::StackName}
```
As you can see, I've exported the IAM role, but its export name depends on the stack name, which is determined once the stack is deployed. You can read more about exporting outputs in the docs.
In the ComputeStack, I use this role by importing it.
```yaml
Resources:
  LambdaForCompute:
    Type: AWS::Lambda::Function
    Properties:
      Role: !ImportValue
        Fn::Sub: ${StackNameOfIAMRole}-IAMRoleArnForLambda
```
The parent stack that "nests" both ComputeStack and IAMRoleStack orchestrates passing the stack name parameter.
```yaml
Resources:
  IAMRoleStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: !Ref IAMRoleStackURL
  ComputeStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: !Ref ComputeStackURL
      Parameters:
        StackNameOfIAMRole: !GetAtt IAMRoleStack.Outputs.StackName
```
I can't attest to best practice but this style allows me to pick and choose where I want orchestrated deployment and where I want to do the deployments individually.
I also want to point out that this kind of modularization based on the type of resource is not very feasible with plain nested stacks. For example, in this scenario, if I had 10 different roles for 10 different Lambda functions, I would have to pass each of those 10 roles through parameters. Using this hybrid style, I only need to pass one parameter: the stack name.
With cross-stacks, you pass a reference to a set of existing components X to stacks A and B when you want A and B to reuse those same existing components. With nested stacks, when you nest a stack Y in stacks C and D, Y creates a new set of the components it describes, separately for C and for D.
It is similar to the concepts of 'passing by reference' and 'passing by value' in programming.