I have a CloudFormation stack which is made up of 3 nested stacks:
Resources:
  ParamsSetup:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: a-params.yaml
  ResourcePrep:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: b-prep.yaml
  Services:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: c-service.yaml
I realized the ResourcePrep nested stack was unnecessary, so I moved the only important resource in that stack into the Services stack and removed the stack from my main template:
Resources:
  ParamsSetup:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: a-params.yaml
  Services:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: c-service.yaml
Now I have a problem. CloudFormation updates fail because the resource in Services already belongs to ResourcePrep, but ResourcePrep shouldn't exist anymore.
I had expected that CloudFormation would be smart enough to delete the removed stack, but it isn't. The removed stack is still there, and I don't know how to get rid of it. Everything I've read says you should never delete a nested stack manually.
You have a couple of options here, none of which is as elegant as what you're hoping for.
1. Remove the stack and leave the resource you want commented out (or deleted) for the CloudFormation update. After the update succeeds with the stack removed, re-add (or uncomment) the resource.
2. If the resource needs to be preserved, add a deletion policy of Retain to the resource, run the update, then delete the entire stack. After the update completes, re-add/re-associate the existing resource with the stack of your choosing.
3. Create an identical resource with a different name in the stack of your choosing and delete the leftover stack.
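As a sketch of the second option, the resource in b-prep.yaml would get a deletion policy before the stack is removed (the logical ID PrepBucket and the bucket type are hypothetical stand-ins for your actual resource):

```yaml
# In b-prep.yaml, before removing the ResourcePrep stack:
PrepBucket:
  Type: AWS::S3::Bucket   # hypothetical resource type
  DeletionPolicy: Retain  # CloudFormation leaves the resource behind on stack delete
```

With Retain in place, deleting the ResourcePrep stack leaves the underlying resource intact, so it can later be associated with the Services stack.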
When using cfn-lint in nested stacks, it seems that the parameters of the root stack are not passed over to the child stack, so the check doesn't work properly.
cfn-lint will not find any errors even if you enter the wrong parameters in the ROOT stack.
root stack
Resources:
  VPC:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: dir/child.yml
      Parameters:
If I put an S3 path in TemplateURL, the child template is not checked at all; with a local path, cfn-lint returns error W3002.
The stack itself deploys normally.
Can you help me?
I have been refactoring what has become a rather large stack because it is brushing up against size limits for CloudFormation scripts on AWS. In doing so I have had to resolve some dependencies (typically using Outputs) but I've run into a situation that I have never run into before...
How do I use a resource created in one nested stack (A) in another nested stack (B) when using DependsOn?
This question looks like a duplicate, but the answer there doesn't fit: it takes a different approach based on that particular user's needs and doesn't actually resolve the issue I have.
Here is the resource in nested stack A:
EndpointARestApi:
  Type: AWS::ApiGateway::RestApi
  Properties:
    Body:
      Fn::Transform:
        Name: 'AWS::Include'
        Parameters:
          Location: !Join ['/', [ 's3:/', !Ref SharedBucketName, !Ref WorkspacePrefix, 'endpoint.yaml' ]]
And here is the DependsOn request in stack B:
EndpointUserPoolResourceServer:
  Type: Custom::CognitoUserPoolResourceServer
  DependsOn:
    - EndpointARestApi
    - CustomResource ## this resource is in the same stack and resolves properly
This occurs with one other resource I have in this stack so I am hoping that I can do this easily. If not, I believe I would have to refactor some more.
As suggested in comments I moved the DependsOn statement up to the primary CFN script in the resource requiring the dependency and made sure the dependency was on the other resource, not the nested resource, like this:
Primary
  ResourceA
  ResourceB
    DependsOn: ResourceA
Which ends up looking like this in the CloudFormation script:
EndpointUserPoolResourceServer:
  Type: "AWS::CloudFormation::Stack"
  DependsOn:
    - EndpointARestApiResource
  Properties:
    Parameters:
      AppName: !Ref AppName
      Environment: !Ref Environment
      DeveloperPrefix: !Ref DeveloperPrefix
      DeployPhase: !Ref DeployPhase
I have a SAM cloudformation template:
Transform: AWS::Serverless-2016-10-31
Description: Create SNS with a sub
Parameters:
  NotificationEmail:
    Type: String
    Description: Email address to subscribe to SNS topic
Resources:
  NotificationTopic:
    Type: AWS::SNS::Topic
    DeletionPolicy: Retain
    Properties:
      TopicName: sam-test-sns
      Subscription:
        - Endpoint: !Ref NotificationEmail
          Protocol: email
Outputs:
  SNSTopic:
    Value: !Ref NotificationTopic
So I want to keep the topic sam-test-sns around since there are several subscribers already, and I don't want subscribers to tediously re-subscribe if I tear down the service and bring it back up.
Tearing down the service with Retain keeps the topic around, so that's fine. But when I try to deploy the template again, it fails because the topic already exists.
So what is the right approach to use an existing SNS topic?
Keeping the NotificationTopic resource in the template after deleting the stack (while retaining the topic) instructs CloudFormation to create the topic again when (re)creating the stack, which will always fail.
Since you are just referencing an existing topic, you should remove the resource from the template and replace the references to it with the ARN/name.
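For example, instead of declaring the topic as a resource, the template could take the existing ARN as a parameter (the name ExistingTopicArn is hypothetical):

```yaml
Parameters:
  ExistingTopicArn:
    Type: String
    Description: ARN of the already-existing sam-test-sns topic

# Then, wherever the template previously used !Ref NotificationTopic,
# reference !Ref ExistingTopicArn instead.
```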
The output alone doesn't make the value available to other stacks; I'm going to assume you want to use this resource in another stack. First you need to export the value, for example:
Outputs:
  SNSTopic:
    Value: !Ref NotificationTopic
    Export:
      Name: !Sub "${AWS::StackName}-SNSTopic"
Add a parameter SNSStackName to your new stack, where you pass in the SNS stack's name (within the current region).
Then, from within your new stack, reference the exported value like this:
Fn::ImportValue:
  Fn::Sub: "${SNSStackName}-SNSTopic"
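As a usage sketch, a resource in the consuming stack could use the imported value like this (the subscription resource and MyQueue are hypothetical):

```yaml
MyQueueSubscription:
  Type: AWS::SNS::Subscription
  Properties:
    Protocol: sqs
    Endpoint: !GetAtt MyQueue.Arn   # hypothetical queue in this stack
    TopicArn:
      Fn::ImportValue:
        Fn::Sub: "${SNSStackName}-SNSTopic"
```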
I have a Lambda with a log group, say LG-1, whose retention is set to Never Expire (the default). I need to change Never Expire to 1 month, and I'm doing this through CloudFormation. Because the log group already exists, when I try to deploy my Lambda again with the template changed to:
LambdaFunctionLogGroup:
  Type: 'AWS::Logs::LogGroup'
  DependsOn: MyLambda
  Properties:
    RetentionInDays: 30
    LogGroupName: !Join
      - ''
      - - /aws/lambda/
        - !Ref MyLambda
the update fails with the error:
[LogGroup Name] already exists.
One possible solution is to delete the log group and then again create it with new changes as shown above which works perfectly well.
But I need to do it without deleting the log group as it will result in the deletion of all the previous logs that I have.
Is there any workaround possible?
@ttulka answered:
".. it is impossible to manipulate resources from CF which already exist out of the stack."
But actually the problem is more general than that and applies to resources created inside the stack. It has to do with AWS CloudFormation's resource "replacement policy". For some resources, the way CloudFormation "updates" the resource is to create a new resource and then delete the old one (this is called the "Replacement" update policy). This means there is a period of time when you have two resources of the same type with many of the same properties existing at the same time. But if a certain resource property has to be unique, the two resources can't exist at the same time with the same value for that property, so ... CloudFormation blows up.
AWS::Logs::LogGroup.LogGroupName property is one such property. AWS::CloudWatch::Alarm.AlarmName is another example.
A workaround is to unset the name so that a random name is used, perform an update, then set the name back to its predictable fixed value and update again.
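A minimal sketch of that two-step workaround, using the log-group example (the !Sub name in the second step is an assumption for the final fixed value):

```yaml
# Update 1: omit LogGroupName so CloudFormation generates a random name,
# avoiding the name collision during replacement.
LambdaFunctionLogGroup:
  Type: AWS::Logs::LogGroup
  Properties:
    RetentionInDays: 30

# Update 2 (a second deploy): set the name back to the fixed value.
# LambdaFunctionLogGroup:
#   Type: AWS::Logs::LogGroup
#   Properties:
#     RetentionInDays: 30
#     LogGroupName: !Sub '/aws/lambda/${MyLambda}'
```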
Rant: It's an annoying problem that really shouldn't exist. I.e. AWS CF should be smart enough to not have to use this weird clunky resource replacement implementation. But ... that's AWS CF for you ...
I think it is impossible to manipulate resources from CloudFormation that already exist outside of the stack.
One workaround would be to change the name of the Lambda like my-lambda-v2 to keep the old log group together with the new one.
After one month you can delete the old one.
Use a custom-resource-backed Lambda within your CloudFormation template. The custom resource is triggered automatically the first time and updates the retention policy of the existing log group. If you need the custom resource Lambda to be triggered on every deploy, use a templating engine like Jinja2.
import boto3

client = boto3.client('logs')

# Set the retention policy on the existing log group
# (e.g. logGroupName='/aws/lambda/my-lambda', retentionInDays=30).
response = client.put_retention_policy(
    logGroupName='string',
    retentionInDays=123
)
You can basically make your CF template do (almost) anything you want using a custom resource.
More information (this is Boto3; you can find the corresponding call in the SDK for the language you use): https://boto3.amazonaws.com/v1/documentation/api/1.9.42/reference/services/logs.html#CloudWatchLogs.Client.put_retention_policy
EDIT: Within the CloudFormation Template, it would look something like the following:
LogRetentionSetFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: src
    Handler: set_retention_period.handler
    Role: !GetAtt LambdaRole.Arn
    DeploymentPreference:
      Type: AllAtOnce

PermissionForLogRetentionSetup:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:invokeFunction
    FunctionName:
      Fn::GetAtt: [ LogRetentionSetFunction, Arn ]
    Principal: lambda.amazonaws.com

InvokeLambdaFunctionToSetLogRetention:
  DependsOn: [PermissionForLogRetentionSetup]
  Type: Custom::SetLogRetention
  Properties:
    ServiceToken: !GetAtt LogRetentionSetFunction.Arn
    StackName: !Ref AWS::StackName
    AnyVariable: "Choose whatever you want to send"
    Tags:
      'owner': !Ref owner
      'task': !Ref task
The lambda function would have the code which sets up the log retention as per the code which I already specified before.
For more information, please google "custom resource backed lambda". Also, to give you a head start, I have added the link below:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html
I'm looking to use CloudFormation to build my AWS stack, which includes an API Gateway with usage plans. I'd like to specify my usage plans in my main CloudFormation template rather than having to add them as a change set after the initial stack create. The problem is that the stack fails to create when I include the usage plan, because (I think) the API Gateway has not finished deploying when CloudFormation tries to create the usage plans: I get an error saying that the stage "prod" does not exist. My CloudFormation template (extract) looks like this:
Api:
  Properties:
    CacheClusterEnabled: true
    CacheClusterSize: '0.5'
    DefinitionUri: {MYS3URL}
    StageName: prod
  Type: AWS::Serverless::Api
ApiFreeUsagePlan:
  DependsOn: Api
  Properties:
    ApiStages:
      - ApiId:
          Ref: Api
        Stage: prod
    Description: Free usage plan
    UsagePlanName: Free
  Type: AWS::ApiGateway::UsagePlan
I thought adding DependsOn: Api to the usage plan definition would fix it, but it doesn't, so I'm out of ideas.
It seems like my DependsOn statement should be on the ApiDeployment, which I can see in the stack create events is still in progress when CloudFormation tries to create the usage plan.
The only way I've found to do this is by setting the DependsOn property of the usage plan to the logical API stage name, which is {LogicalApiName}{StageName}Stage; for example, in my case:
Api:
  Properties:
    CacheClusterEnabled: true
    CacheClusterSize: '0.5'
    DefinitionUri: {MYS3URL}
    StageName: prod
  Type: AWS::Serverless::Api
ApiFreeUsagePlan:
  DependsOn: ApiprodStage
I don't like this, as it relies on a logical stage naming convention that I don't believe is officially documented in the AWS CloudFormation docs; however, it appears to be the only reliable option.