I have a situation where I want to pass an output from one stack to the Parameters of another stack. I know about the Outputs section and Fn::ImportValue, but I have to declare a separate parameter in stack #2 or else I get errors.
In Stack 1:
Parameters:
  EnvironmentName:
    Type: String
    Default: production
    Description: "A friendly environment name that will be used for namespacing all cluster resources. Example: staging, qa, or production"
...
Outputs:
  EnvironmentName:
    Description: The deployment mode of this and subsequent stacks.
    Value: !Ref EnvironmentName
    Export:
      Name: !Ref EnvironmentName
In stack #2 I have:
Parameters:
  EnvironmentName:
    Type: String
    Default: production
    Description: "A friendly environment name that will be used for namespacing all cluster resources. Example: staging, qa, or production"
I would like to have the Environment name not having to be declared twice. But I can't do:
Parameters:
  EnvironmentName:
    Type: String
    Default: Fn::ImportValue {$EnvironmentName}
    Description: "A friendly environment name that will be used for namespacing all cluster resources. Example: staging, qa, or production"
There are a few things I could try. One post suggested using Fn::FindInMap and putting the environment names in a Mapping, but that seems odd and I haven't tried it. The other option is nested stacks, but I don't want to end up with one huge template file.
You can't do that. It is simply not possible: CloudFormation does not support intrinsic functions in Parameter defaults or in Mappings.
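A common workaround is to drop the parameter from stack #2 entirely and use Fn::ImportValue directly wherever the value is needed in Resources or Outputs. A minimal sketch, assuming stack #1 exports under the fixed name EnvironmentName (note that with Name: !Ref EnvironmentName the export name is actually the parameter's value, e.g. production, so adjust the import to match); the SSM parameter is just an illustrative consumer:

Resources:
  EnvironmentNameParam:
    Type: AWS::SSM::Parameter
    Properties:
      Name: /cluster/environment-name   # hypothetical consumer of the imported value
      Type: String
      Value: !ImportValue EnvironmentName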
I'm trying to define some common resources (specifically, a couple of IAM roles) that will be shared between two environments, via a nested stack. The first environment to use the nested stack's resources creates ok, but the second one fails when trying to run the nested stack. Am I not understanding something about how nested stacks work, or am I just doing something wrong?
My nested stack is defined as:
AWSTemplateFormatVersion: '2010-09-09'
Description: Defines common resources shared between environments.
Parameters:
  ParentStage:
    Type: String
    Description: The Stage or environment name.
    Default: ""
  ParentVpcId:
    Type: "AWS::EC2::VPC::Id"
    Description: VpcId of your existing Virtual Private Cloud (VPC)
    Default: ""
Resources:
  LambdaFunctionSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Identifies Lambda functions to VPC resources
      GroupName: BBA-KTDO-SG-LambdaFunction
      VpcId: !Ref ParentVpcId
  RdsAccessSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allows Lambda functions access to RDS instances
      GroupName: BBA-KTDO-SG-RDS
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 3306
          ToPort: 3306
          SourceSecurityGroupId: !Ref LambdaFunctionSG
      VpcId: !Ref ParentVpcId
I've uploaded that YAML file to an S3 bucket, and I'm then trying to use it in two separate stack files (i.e. app_dev.yaml and app_prod.yaml) as:
Resources:
  CommonResources:
    Type: 'AWS::CloudFormation::Stack'
    Properties:
      TemplateURL: "https://my-buildbucket.s3-eu-west-1.amazonaws.com/common/common.yaml"
      Parameters:
        ParentStage: !Ref Stage
        ParentVpcId: !Ref VpcId
And referring to its outputs as (e.g):
VpcConfig:
  SecurityGroupIds:
    - !GetAtt [ CommonResources, Outputs.LambdaFunctionSGId ]
The first environment creates fine, including the nested resources. When I try to run the second environment, it fails with error:
Embedded stack
arn:aws:cloudformation:eu-west-1:238165151424:stack/my-prod-stack-CommonResources-L94ZCIP0UD9W/f9d06dd0-994d-11eb-9802-02554f144c21
was not successfully created: The following resource(s) failed to
create: [LambdaExecuteRole, LambdaFunctionSG].
Is it not possible to share a single resource definition between two separate stacks like this, or have I just missed something in the implementation?
As @jasonwadsworth mentioned, that's correct: stack names are always amended with a random string at the end (see the return values of AWS::CloudFormation::Stack). Use GetAtt to get the name of the nested stack and construct the output from it. See also: How do I pass values between nested stacks within the same parent stack in AWS CloudFormation?
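A rough sketch of that pattern in the parent template, reusing the stack and output names from the question (AppStack and its parameter name are assumptions):

Resources:
  CommonResources:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: common.yaml   # local path, rewritten to an S3 URL by the package command below
      Parameters:
        ParentStage: !Ref Stage
        ParentVpcId: !Ref VpcId
  AppStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: app.yaml      # assumed second nested stack that consumes the security group
      Parameters:
        LambdaFunctionSGId: !GetAtt CommonResources.Outputs.LambdaFunctionSGId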
Also, use the aws cloudformation package command for packaging the nested stacks; there is no need to manually upload them to an S3 bucket.
Something like:
aws cloudformation package \
  --template-file /path_to_template/template.json \
  --s3-bucket bucket-name \
  --output-template-file packaged-template.json
Take a look at CloudFormation output exports as well, in case you are curious: Difference between an Output & an Export.
I have been refactoring what has become a rather large stack because it is brushing up against size limits for CloudFormation scripts on AWS. In doing so I have had to resolve some dependencies (typically using Outputs) but I've run into a situation that I have never run into before...
How do I use a resource created in one nested stack (A) in another nested stack (B) when using DependsOn?
This question looks like a duplicate, but the answer there does not fit: it doesn't actually resolve the issue I have, and it takes a different approach based on that particular user's needs.
Here is the resource in nested stack A:
EndpointARestApi:
  Type: AWS::ApiGateway::RestApi
  Properties:
    Body:
      Fn::Transform:
        Name: 'AWS::Include'
        Parameters:
          Location: !Join ['/', [ 's3:/', !Ref SharedBucketName, !Ref WorkspacePrefix, 'endpoint.yaml' ]]
And here is the DependsOn request in stack B:
EndpointUserPoolResourceServer:
  Type: Custom::CognitoUserPoolResourceServer
  DependsOn:
    - EndpointARestApi
    - CustomResource ## this resource is in the same stack and resolves properly
This occurs with one other resource I have in this stack so I am hoping that I can do this easily. If not, I believe I would have to refactor some more.
As suggested in the comments, I moved the DependsOn statement up to the primary CFN script, onto the resource requiring the dependency, and made sure the dependency was on the sibling stack resource rather than on the resource nested inside it, like this:
Primary
  ResourceA
  ResourceB
    DependsOn: ResourceA
Which ends up looking like this in the CloudFormation script:
EndpointUserPoolResourceServer:
  Type: "AWS::CloudFormation::Stack"
  DependsOn:
    - EndpointARestApiResource
  Properties:
    Parameters:
      AppName: !Ref AppName
      Environment: !Ref Environment
      DeveloperPrefix: !Ref DeveloperPrefix
      DeployPhase: !Ref DeployPhase
I want to use a macro in my CloudFormation template; however, I would like to use the macro only in the production environment.
I see I can use a conditional statement; however, the Transform requires a key name.
This is my template:
AWSTemplateFormatVersion: "2010-09-09"
Transform: [
  "AWS::Serverless-2016-10-31",
  !If [IsProduction, AddCloudWatchAlarms, !Ref "AWS::NoValue" ]
]
Parameters:
  Environment:
    Type: String
    AllowedValues:
      - prod
      - stag
      - dev
    ConstraintDescription: invalid environment, only [prod, stag, dev] are allowed
Conditions:
  IsProduction: !Equals [ !Ref Environment, prod ]
As you can see I am passing AWS::NoValue.
This works perfectly when I deploy the prod environment; however, when I try to deploy to dev I get this error:
Error: Failed to create changeset for the stack: server-hosting-rust-dev, An error occurred (ValidationError) when calling the CreateChangeSet operation: Transforms defined as maps require Name key.
How can I achieve this?
I could modify the code of the macro to skip its processing when the environment is dev, but it is third-party open source, so it would take quite a while for my pull request to get approved, or I would have to create a fork (which I will eventually do). So I am asking whether there is a faster way to achieve this.
Thanks
I have a parameter in an AWS CloudFormation template:
Parameters:
  ExecRole:
    Type: String
    Description: Required. Lambda exec role ARN
    Default: arn:aws:iam::123456789:role/lambdaExecRole
Assuming the 123456789 is the AccountId, I want to use the pseudo parameter reference instead, but I cannot get it to work. I tried the following without success:
Default: arn:aws:iam::!Ref{AWS::AccountId}:role/exLambdaExecRole
Default: !Sub 'arn:aws:iam::${AWS::AccountId}:role/exLambdaExecRole'
The last case throws this error:
Default member must be a string.
It seems that intrinsic functions (e.g. !Sub) are not supported in the default values of Parameters.
Here's a workaround we're using.
We have a separate stack called Parameters which exports whatever parameters are needed in other stacks. For instance:
Outputs:
  VpcId:
    Description: Id of the VPC.
    Value: !Ref VpcId
    Export:
      Name: !Sub 'stk-${EnvType}-${EnvId}-VpcId'
In other stacks we simply import these exported values:
VpcId: !ImportValue
  'Fn::Sub': 'stk-${EnvType}-${EnvId}-VpcId'
EnvType and EnvId are the same for all the stacks of one environment.
With roles you might want to do the following. Create a separate Roles template, implement your roles there and export their ARNs:
Outputs:
  LambdaExecutionRoleArn:
    Description: ARN of the execution role for the log-and-pass function.
    Value: !GetAtt
      - LambdaExecutionRole
      - Arn
    Export:
      Name: !Sub 'stk-${EnvType}-${EnvId}-roles-LambdaExecutionRole-Arn'
Again, in other stacks you can simply use ImportValue:
Role: !ImportValue
  'Fn::Sub': 'stk-${EnvType}-${EnvId}-roles-LogAndPassFunctionExecutionRole-Arn'
Assuming this will always be a role, why not ask for just the role name to be passed in as a parameter, and then use the Sub intrinsic function to build the full ARN in the Resources section of your CloudFormation template?
That way the arn:aws:iam::${AWS::AccountId}:role/ part of the ARN would not need to be part of the parameter.
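A minimal sketch of that approach, with an illustrative Lambda function as the consumer (only the role name is passed in; the ARN is assembled with !Sub):

Parameters:
  ExecRoleName:
    Type: String
    Description: Name of the Lambda exec role (not the full ARN)
    Default: lambdaExecRole
Resources:
  ExampleFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.12
      Handler: index.handler
      # The account id comes from the pseudo parameter, so only the role name is parameterised
      Role: !Sub 'arn:aws:iam::${AWS::AccountId}:role/${ExecRoleName}'
      Code:
        ZipFile: "def handler(event, context): return 'ok'"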
I have a CloudFormation template that creates a Launch Configuration:
Resources:
  # Launch Configuration for the instances in the Autoscaling Group
  LaunchConfiguration:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      AssociatePublicIpAddress: false
      ImageId: !Ref EC2AMI
      InstanceType: !Ref EC2InstanceType
      KeyName: !Ref EC2Key
      IamInstanceProfile: !ImportValue EC2RoleInstanceProfileARN
      LaunchConfigurationName: jxt-private-asg-launch-config
      SecurityGroups:
        - !ImportValue PrivateSecurityGroupId
When I try to update the stack I get the below error:
CloudFormation cannot update a stack when a custom-named resource
requires replacing
I am running this script via TeamCity, so it is not possible for the user to change the Launch Configuration's name each time. What can I do to get rid of this error?
One solution is to omit the LaunchConfigurationName, since it is not mandatory.
Copied from the AWS::AutoScaling::LaunchConfiguration documentation:
The name of the launch configuration. This name must be unique per Region per account. [...]
Update requires: Replacement
The problem you are facing is that you have made a change which requires the replacement of the launch configuration. Typically, CloudFormation creates a new resource (in case the existing resource cannot be updated in place), points any dependent resources to the new resource, and then deletes the old resource. However, this operation fails if the resource uses a static name, because the new resource then conflicts with the unique name constraint mentioned in the docs.
You can either:
Do what @matsev recommended and not use names for resources that don't require them (probably the best option); names will be generated based on the stack name. See the sketch after this list.
Add a variable into your Resource name, such as a parameter which passes in commit-id or date or something along those lines. This will ideally make your Resource name unique.
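For the first option, a sketch based on the template above, simply dropping the static name so CloudFormation can generate a unique one during replacement:

LaunchConfiguration:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    AssociatePublicIpAddress: false
    ImageId: !Ref EC2AMI
    InstanceType: !Ref EC2InstanceType
    KeyName: !Ref EC2Key
    IamInstanceProfile: !ImportValue EC2RoleInstanceProfileARN
    # LaunchConfigurationName omitted: CloudFormation generates a name, so replacement no longer conflicts
    SecurityGroups:
      - !ImportValue PrivateSecurityGroupId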