I'm trying to define some common resources (specifically, a couple of IAM roles) that will be shared between two environments, via a nested stack. The first environment to use the nested stack's resources creates ok, but the second one fails when trying to run the nested stack. Am I not understanding something about how nested stacks work, or am I just doing something wrong?
My nested stack is defined as:
AWSTemplateFormatVersion: '2010-09-09'
Description: Defines common resources shared between environments.
Parameters:
  ParentStage:
    Type: String
    Description: The Stage or environment name.
    Default: ""
  ParentVpcId:
    Type: "AWS::EC2::VPC::Id"
    Description: VpcId of your existing Virtual Private Cloud (VPC)
    Default: ""
Resources:
  LambdaFunctionSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Identifies Lambda functions to VPC resources
      GroupName: BBA-KTDO-SG-LambdaFunction
      VpcId: !Ref ParentVpcId
  RdsAccessSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allows Lambda functions access to RDS instances
      GroupName: BBA-KTDO-SG-RDS
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 3306
          ToPort: 3306
          SourceSecurityGroupId: !Ref LambdaFunctionSG
      VpcId: !Ref ParentVpcId
I've uploaded that YAML file to an S3 bucket, and I'm then trying to use it in two separate stack files (i.e. app_dev.yaml and app_prod.yaml) as:
Resources:
  CommonResources:
    Type: 'AWS::CloudFormation::Stack'
    Properties:
      TemplateURL: "https://my-buildbucket.s3-eu-west-1.amazonaws.com/common/common.yaml"
      Parameters:
        ParentStage: !Ref Stage
        ParentVpcId: !Ref VpcId
And referring to its outputs as (e.g.):
VpcConfig:
  SecurityGroupIds:
    - !GetAtt [ CommonResources, Outputs.LambdaFunctionSGId ]
The first environment creates fine, including the nested resources. When I try to run the second environment, it fails with error:
Embedded stack
arn:aws:cloudformation:eu-west-1:238165151424:stack/my-prod-stack-CommonResources-L94ZCIP0UD9W/f9d06dd0-994d-11eb-9802-02554f144c21
was not successfully created: The following resource(s) failed to
create: [LambdaExecuteRole, LambdaFunctionSG].
Is it not possible to share a single resource definition between two separate stacks like this, or have I just missed something in the implementation?
As @jasonwadsworth mentioned, that's correct: the names of nested stacks are always amended with a random string at the end (check the return values of AWS::CloudFormation::Stack). Use GetAtt to get the name of the stack and construct the output. See also: How do I pass values between nested stacks within the same parent stack in AWS CloudFormation?
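Note also that both security groups in the nested template carry hardcoded GroupName values, and the failing LambdaExecuteRole presumably has a fixed RoleName; such names must be unique (per VPC for security groups, per account for IAM roles), so the second environment's copy of the nested stack collides with the first. A minimal sketch of one way around this, assuming you suffix the names with the stage (the -${ParentStage} suffix is my addition, not part of the original template):

LambdaFunctionSG:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Identifies Lambda functions to VPC resources
    # Make the hardcoded name unique per environment so a second
    # parent stack can create its own copy without a name clash.
    GroupName: !Sub 'BBA-KTDO-SG-LambdaFunction-${ParentStage}'
    VpcId: !Ref ParentVpcId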
Plus, use the aws cloudformation package command for packaging the nested stacks; there is no need to manually upload them to an S3 bucket.
Something like:
aws cloudformation package \
--template-file /path_to_template/template.json \
--s3-bucket bucket-name \
--output-template-file packaged-template.json
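Once packaged, the output template can be deployed directly; a minimal sketch (the stack name is a placeholder):

aws cloudformation deploy \
  --template-file packaged-template.json \
  --stack-name my-dev-stack \
  --capabilities CAPABILITY_IAM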
Take a look at CloudFormation output exports as well in case you are curious: Difference between an Output & an Export.
I'm starting my AWS journey and today got a chance to create a CloudFormation stack for creating a file system on AWS. I was able to spin up the file system, however I have a few doubts about some values and functions/attributes, as those were given by someone on the team who is on a long vacation, so I'm asking here for help.
Below is the CloudFormation stack, which works just fine.
CloudFormation stack:
---
Description: "Create FSxN filesystem"
Resources:
  MytestCluster:
    Type: "AWS::FSx::FileSystem"
    Properties:
      FileSystemType: "ONTAP"
      StorageCapacity: "1024"
      SubnetIds: ['subnet-0f349h6eee098b0pg']
      OntapConfiguration:
        DeploymentType: "SINGLE_AZ_1"
        PreferredSubnetId: "subnet-0f349h6eee098b0pg"
        ThroughputCapacity: "128"
        FsxAdminPassword: '{{resolve:secretsmanager:fsx_admin_password}}'
      SecurityGroupIds:
        - !ImportValue 'KPCL-FSxforONTAPsgID'
      Tags:
        - Key: "Backup"
          Value: "None"
  MytestSVM:
    Type: "AWS::FSx::StorageVirtualMachine"
    Metadata:
      cfn-lint:
        config:
          ignore_checks:
            - E3001
    Properties:
      FileSystemId: !Ref MytestCluster
      Name: svmdemo
      RootVolumeSecurityStyle: "UNIX"
      SvmAdminPassword: '{{resolve:secretsmanager:svm_admin_password}}'
      Tags:
        - Key: "Backup"
          Value: "None"
  fsxndemovolume:
    Type: "AWS::FSx::Volume"
    Metadata:
      cfn-lint:
        config:
          ignore_checks:
            - E3001
    Properties:
      Name: myTestVol001
      OntapConfiguration:
        JunctionPath: /myVolume001
        SizeInMegabytes: 1536000
        StorageEfficiencyEnabled: true
        StorageVirtualMachineId: !Ref MytestSVM
      VolumeType: "ONTAP"
      Tags:
        - Key: "Backup"
          Value: "None"
Outputs:
  FileSystemId:
    Value: !Ref "MytestCluster"
  SvmId:
    Value: !Ref "MytestSVM"
...
I would like to understand:
I have a few doubts I would like to clear up; I tried to understand them from the documentation but couldn't comprehend it well, hence I'm asking for expert suggestions.
First one: under SecurityGroupIds below, what does !ImportValue mean here?
SecurityGroupIds:
  - !ImportValue 'KPCL-FSxforONTAPsgID'
Second one: what does Outputs mean here?
Outputs:
  FileSystemId:
    Value: !Ref "MytestCluster"
  SvmId:
    Value: !Ref "MytestSVM"
Last one: what are ignore_checks: and its value - E3001 here?
ignore_checks:
  - E3001
Please help me to understand.
First one: under SecurityGroupIds, what does !ImportValue mean here?
The following:
SecurityGroupIds:
  - !ImportValue 'KPCL-FSxforONTAPsgID'
means that in the current stack you are going to import a security group ID which was exported by some other stack.
This export/import functionality allows you to decouple and reuse your infrastructure. Instead of having everything in one stack, you can have one stack with network resources (it's a common setup), such as security groups, subnets and VPCs, and other stacks that actually use those resources.
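For illustration, the stack that owns the security group presumably exported its ID under that name; a sketch of what such an export might look like (the FSxSecurityGroup logical ID is an assumption, not taken from your template):

Outputs:
  FSxSecurityGroupId:
    Description: ID of the security group for the FSx filesystem.
    Value: !Ref FSxSecurityGroup  # hypothetical resource in the other stack
    Export:
      Name: 'KPCL-FSxforONTAPsgID'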
Second one: what does Outputs mean here?
Outputs allow you to return values from your stacks. You can think of them as a type of return value, like those from functions in common programming languages.
Output values have lots of use cases. For example, they can be exported and then imported in other stacks. They can also be queried programmatically, in case your stacks are part of some CI/CD pipeline or other application. They can also be used as input parameters to other stacks, again as part of some CI/CD pipeline; this is an alternative to the export/import functionality.
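As an example of querying outputs programmatically, the AWS CLI can read them back after deployment (the stack name here is a placeholder):

aws cloudformation describe-stacks \
  --stack-name my-fsx-stack \
  --query "Stacks[0].Outputs"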
Last one: what are ignore_checks: and its value - E3001 here?
This is extra metadata not related to CloudFormation itself. It is a hint to the CloudFormation Linter (cfn-lint), for example through its Visual Studio Code extension cfn-lint-visual-studio-code, to ignore some of the automatic checks it performs; E3001 is the specific rule being suppressed.
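You can also run the linter yourself to see which checks would fire without that hint; something like:

pip install cfn-lint
cfn-lint template.yaml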
Outputs with an Export in a stack create exports in CloudFormation, which can be listed in the AWS Console; the !ImportValue directive is used to reference an export from another stack.
The cfn-lint section in Metadata is used to silence errors in the CloudFormation Linter tool and has no impact on the resource itself.
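For reference, the same list of exports is also available from the CLI with aws cloudformation list-exports.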
I have two CloudFormation files and I want to reference already created resources from one template in another template. For example: in the first one I create an ECS cluster. In the second one I want to reference this cluster and build a service in it. How can I do it?
To do this you have to export stack output values from the first template. Presumably this would be the ECS cluster name and/or its ARN:
Resources:
  MyCluster:
    Type: AWS::ECS::Cluster
    Properties:
      #....
Outputs:
  MyClusterName:
    Value: !Ref MyCluster
    Export:
      Name: ECSClusterName
Then in the second template you would use ImportValue to reference the exported output:
MyECSService:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !ImportValue ECSClusterName
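Two caveats worth keeping in mind: export names must be unique within an account and region, and CloudFormation will not let you delete the exporting stack (or change the exported value) while another stack still imports it.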
I have been refactoring what has become a rather large stack because it is brushing up against size limits for CloudFormation scripts on AWS. In doing so I have had to resolve some dependencies (typically using Outputs) but I've run into a situation that I have never run into before...
How do I use a resource created in one nested stack (A) in another nested stack (B) when using DependsOn?
This question looks like a duplicate, but the answer there does not fit because it doesn't actually resolve the issue I have; it takes a different approach based on that particular user's needs.
Here is the resource in nested stack A:
EndpointARestApi:
  Type: AWS::ApiGateway::RestApi
  Properties:
    Body:
      Fn::Transform:
        Name: 'AWS::Include'
        Parameters:
          Location: !Join ['/', [ 's3:/', !Ref SharedBucketName, !Ref WorkspacePrefix, 'endpoint.yaml' ]]
And here is the DependsOn request in stack B:
EndpointUserPoolResourceServer:
  Type: Custom::CognitoUserPoolResourceServer
  DependsOn:
    - EndpointARestApi
    - CustomResource ## this resource is in the same stack and resolves properly
This occurs with one other resource I have in this stack so I am hoping that I can do this easily. If not, I believe I would have to refactor some more.
As suggested in the comments, I moved the DependsOn statement up to the primary CFN script, on the resource requiring the dependency, and made sure the dependency was on the other nested stack resource, not the resource inside it, like this:
Primary
  ResourceA
  ResourceB
    DependsOn: ResourceA
Which ends up looking like this in the CloudFormation script:
EndpointUserPoolResourceServer:
  Type: "AWS::CloudFormation::Stack"
  DependsOn:
    - EndpointARestApiResource
  Properties:
    Parameters:
      AppName: !Ref AppName
      Environment: !Ref Environment
      DeveloperPrefix: !Ref DeveloperPrefix
      DeployPhase: !Ref DeployPhase
I have a parameter in an AWS CloudFormation template:
Parameters:
  ExecRole:
    Type: String
    Description: Required. Lambda exec role ARN
    Default: arn:aws:iam::123456789:role/lambdaExecRole
Assuming 123456789 is the AccountId, I want to use the pseudo parameter reference instead, but I cannot get it to work. I tried the following without success:
Default: arn:aws:iam::!Ref{AWS::AccountId}:role/exLambdaExecRole
Default: !Sub 'arn:aws:iam::${AWS::AccountId}:role/exLambdaExecRole'
The last case throws an error:
Default member must be a string.
It seems like functions (e.g. !Sub) are not supported in default values of Parameters.
Here's a workaround we're using.
We have a separate stack called Parameters which exports whatever parameters are needed in other stacks. For instance:
Outputs:
  VpcId:
    Description: Id of the VPC.
    Value: !Ref VpcId
    Export:
      Name: !Sub 'stk-${EnvType}-${EnvId}-VpcId'
In other stacks we simply import these exported values:
VpcId: !ImportValue
  'Fn::Sub': 'stk-${EnvType}-${EnvId}-VpcId'
EnvType and EnvId are the same for all the stacks of one environment.
With roles you might want to do the following. Create a separate Roles template, implement your roles there and export their ARNs:
Outputs:
  LambdaExecutionRoleArn:
    Description: ARN of the execution role for the log-and-pass function.
    Value: !GetAtt
      - LambdaExecutionRole
      - Arn
    Export:
      Name: !Sub 'stk-${EnvType}-${EnvId}-roles-LambdaExecutionRole-Arn'
Again, in the other stack you could simply use ImportValue:
Role: !ImportValue
  'Fn::Sub': 'stk-${EnvType}-${EnvId}-roles-LogAndPassFunctionExecutionRole-Arn'
Assuming this will always be a role, why can't you ask for just the role name to be passed in as a parameter and then use the Sub intrinsic function to build the ARN in the Resources section of your CloudFormation template?
That way the arn:aws:iam::${AWS::AccountId}:role/ part of the ARN would not need to be part of the parameter.
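A minimal sketch of that suggestion, assuming the parameter carries only the role name (the ExecRoleName parameter and MyFunction resource are illustrative, not from the question):

Parameters:
  ExecRoleName:
    Type: String
    Description: Name (not ARN) of the Lambda exec role
    Default: lambdaExecRole
Resources:
  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      # Build the full ARN here, so the account ID never has to
      # appear in a parameter default.
      Role: !Sub 'arn:aws:iam::${AWS::AccountId}:role/${ExecRoleName}'
      Handler: index.handler
      Runtime: python3.12
      Code:
        ZipFile: "def handler(event, context): return event"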
I have a simple question. I am testing the export/import of values in CloudFormation.
Question: how do I create resources based on conditions linked from another stack?
I think I should import the value from the other stack, but I don't know how.
This is my "export-test-stack"
AWSTemplateFormatVersion: '2010-09-09'
Description: Export
Parameters:
  EnvType:
    Description: How many Instances you want to deploy?
    Default: two
    Type: String
    AllowedValues:
      - two
      - three
    ConstraintDescription: must specify number of deployed Instances
Conditions:
  Deploy3EC2: !Equals [ !Ref EnvType, three ]
Resources:
  Ec2Instance1:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      SecurityGroupIds:
        - sg-5d011027
      ImageId: ami-0b33d91d
  Ec2Instance2:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      SecurityGroupIds:
        - sg-5d011027
      ImageId: ami-0b33d91d
  Ec2Instance3:
    Type: AWS::EC2::Instance
    Condition: Deploy3EC2
    Properties:
      InstanceType: t2.micro
      SecurityGroupIds:
        - sg-5d011027
      ImageId: ami-0b33d91d
Outputs:
  EC2Conditions:
    Description: Number of deployed instances
    Value: !Ref EnvType
    Export:
      Name: blablabla
This is my "import-test-stack"
AWSTemplateFormatVersion: '2010-09-09'
Description: Import
Resources:
  Ec2Instance1:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      SecurityGroupIds:
        - sg-7309dd0a
      ImageId: ami-70edb016
  Ec2Instance2:
    Type: AWS::EC2::Instance
    Condition: ?????? <<<<<<<<<
    Properties:
      InstanceType: t2.micro
      SecurityGroupIds:
        - sg-7309dd0a
      ImageId: ami-70edb016
It's about cross-stack references: I want to deploy Ec2Instance2 in the import-test-stack only if I chose to deploy three instances in the previous export-test-stack.
So if I choose to deploy three instances there, I want to use a condition in the import stack to deploy another two instances; if I choose two, it should deploy only one instance in the import stack.
I know how conditions work, but I still haven't found a way to use them across cross-referenced stacks.
I know it's a contrived example, but I just wanted to test this on as simple a template as possible.
You have two choices: continue with separate stacks, or combine them to create a nested stack.
With nested stacks you can use outputs from one stack as inputs to another stack.
If you want to keep using separate stacks, use the Fn::ImportValue function to import output values exported from another stack.
Both angles are covered on the Exporting Stack Output Values page. Also, the cross-stack reference walkthrough might help you if you choose to use Fn::ImportValue.
This will import the value (note that Fn::ImportValue takes the export Name, which in your template is blablabla, not the output's logical ID EC2Conditions):
Fn::ImportValue: blablabla
You can also use rules, and make the rule based on the value of your output.
We cannot use an imported value here, as CloudFormation does not allow intrinsic functions in a parameter's default value. But there is the option of using SSM (AWS Systems Manager Parameter Store) parameters, which allows us to use a parameter in stack B that was created in stack A.
Please check the AWS Knowledge Center article linked below:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-systems-manager-parameter/
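A minimal sketch of that approach, assuming stack A writes the value into Parameter Store and stack B reads it back through an SSM parameter type (the parameter name /export-test-stack/env-type is illustrative):

# In stack A: publish the chosen EnvType as an SSM parameter
Resources:
  EnvTypeParam:
    Type: AWS::SSM::Parameter
    Properties:
      Name: /export-test-stack/env-type
      Type: String
      Value: !Ref EnvType

# In stack B: resolve the SSM parameter and feed it into a condition
Parameters:
  EnvType:
    Type: AWS::SSM::Parameter::Value<String>
    Default: /export-test-stack/env-type
Conditions:
  Deploy3EC2: !Equals [ !Ref EnvType, three ]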