Update existing Log Group using CloudFormation

I have a Lambda with a log group, say LG-1, whose retention is set to Never Expire (the default). I need to change Never Expire to 1 month. I am doing this using CloudFormation. As the log group already exists, when I try to deploy my Lambda again with this change in the template:
LambdaFunctionLogGroup:
  Type: 'AWS::Logs::LogGroup'
  DependsOn: MyLambda
  Properties:
    RetentionInDays: 30
    LogGroupName: !Join
      - ''
      - - /aws/lambda/
        - !Ref MyLambda
the update fails with the error:
[LogGroup Name] already exists.
One possible solution is to delete the log group and then create it again with the new settings shown above, which works perfectly well.
But I need to do this without deleting the log group, as deleting it would also wipe out all the previous logs I have.
Is there any possible workaround?

#ttulka answered:
".. it is impossible to manipulate resources from CF which already exist out of the stack."
But actually the problem is more general than that and applies to resources created inside the stack as well. It has to do with the AWS CloudFormation resource "replacement" update behaviour. For some resources, the way CloudFormation "updates" the resource is to create a new resource and then delete the old one (this is the "Replacement" update policy). This means there is a period of time where two resources of the same type, with many of the same properties, exist at the same time. But if a certain resource property has to be unique, the two resources can't exist at the same time with the same value for that property, so ... CloudFormation blows up.
The AWS::Logs::LogGroup.LogGroupName property is one such property; AWS::CloudWatch::Alarm.AlarmName is another example.
A workaround is to unset the name so that a random name is used, perform an update, then set the name back to its predictable fixed value and update again.
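Applied to the log group above, that two-step dance might look like this (a sketch only; each step is a separate deploy, and !Sub is just shorthand for the !Join in the question):
# Deploy 1: leave the name unset so CloudFormation generates one
LambdaFunctionLogGroup:
  Type: 'AWS::Logs::LogGroup'
  Properties:
    RetentionInDays: 30
    # LogGroupName intentionally omitted for this update

# Deploy 2: restore the fixed name
LambdaFunctionLogGroup:
  Type: 'AWS::Logs::LogGroup'
  Properties:
    RetentionInDays: 30
    LogGroupName: !Sub '/aws/lambda/${MyLambda}'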
Rant: It's an annoying problem that really shouldn't exist. I.e. AWS CF should be smart enough to not have to use this weird clunky resource replacement implementation. But ... that's AWS CF for you ...

I think it is impossible to manipulate resources from CF which already exist out of the stack.
One workaround would be to change the name of the Lambda, e.g. to my-lambda-v2, so the old log group is kept alongside the new one; a sketch of this follows below.
After one month you can delete the old one.
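A minimal sketch of that rename (assuming a SAM AWS::Serverless::Function whose name is set explicitly via FunctionName; the rest of the function's properties stay unchanged):
MyLambda:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: my-lambda-v2  # new name, so Lambda writes to a fresh /aws/lambda/my-lambda-v2 log group
    # ...handler, runtime and code properties as before...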

Use a custom-resource-backed Lambda within your CloudFormation template. The custom resource would be triggered automatically the first time and update the retention policy of the existing log group. If you need your custom resource Lambda to be triggered on every deployment, use a templating engine like Jinja2.
import boto3

client = boto3.client('logs')

# Set the retention policy on the existing log group
# (replace the placeholder values with your log group name and retention period)
response = client.put_retention_policy(
    logGroupName='string',
    retentionInDays=123
)
You can basically make your CloudFormation template do (almost) anything you want using a custom resource.
More information (this is Boto3; you can find the corresponding call in the SDK for the language you use): https://boto3.amazonaws.com/v1/documentation/api/1.9.42/reference/services/logs.html#CloudWatchLogs.Client.put_retention_policy
EDIT: Within the CloudFormation Template, it would look something like the following:
LogRetentionSetFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: src
    Handler: set_retention_period.handler
    Role: !GetAtt LambdaRole.Arn
    DeploymentPreference:
      Type: AllAtOnce

PermissionForLogRetentionSetup:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:invokeFunction
    FunctionName:
      Fn::GetAtt: [ LogRetentionSetFunction, Arn ]
    Principal: lambda.amazonaws.com

InvokeLambdaFunctionToSetLogRetention:
  DependsOn: [PermissionForLogRetentionSetup]
  Type: Custom::SetLogRetention
  Properties:
    ServiceToken: !GetAtt LogRetentionSetFunction.Arn
    StackName: !Ref AWS::StackName
    AnyVariable: "Choose whatever you want to send"
    Tags:
      'owner': !Ref owner
      'task': !Ref task
The Lambda function would contain the code that sets the log retention, as in the snippet I specified before.
For more information, please google "custom resource backed lambda". Also, to give you a head start, I have added the link below:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html

Related

Expose SNSTopic TopicArn in AWS CloudFormation Template: How might I expose my TopicArn in my CloudFormation script for my SNS Topic?

I'd like to expose the TopicArn Value (referenced in the outputs section at the bottom of my code snippet) of my SNStopic via Cloudformation template in the outputs tab of my stack in a similar manner to the way it's exposed in the resources when I create an SNStopic through the service catalog. I tried to access it by referencing it in the outputs section of my yaml script using dot notation but have been unsuccessful thus far. How might I be able to do so? I'm looking to do this so others using my script in the future won't have to go searching for the TopicArn in another place in order to subscribe to it.
Another important thing to note is that the provisioned product ID below, under the Properties section of the resource block, generates an SNS topic.
Resources:
  LabTrainingSnsTopic:
    Type: "AWS::ServiceCatalog::CloudFormationProvisionedProduct"
    Properties:
      ProductId: prod-4iafsjovqrsrm # Sns Topic
      ProvisioningArtifactName: "v1.1" # Must be an actual version number.
      ProvisionedProductName: !Ref ProvisionedProductName
      ...
Outputs:
  AccountID:
    Description: The account in which this was built.
    Value: !Ref 'AWS::AccountId'
  TopicArn:
    Description: Arn of the topic we created
    Value: !GetAtt LabTrainingHigSnsTopic.ProvisionedProductName.Resources.SNSTopic
[screenshots: Service Catalog and CloudFormation consoles]

Can AWS CloudFormation resources call !GetAtt on themselves?

I am trying to set up the Inventory configuration for an S3 bucket with CloudFormation. I want to get daily inventories of data in one subfolder, and have the inventories written to a different subfolder in the same bucket. I have defined the bucket as follows:
S3Bucket:
  Type: AWS::S3::Bucket
  Properties:
    # ...other properties...
    InventoryConfigurations:
      - Id: runs
        Enabled: true
        Destination:
          BucketAccountId: !Ref AWS::AccountId
          BucketArn: !GetAtt S3Bucket.Arn
          Format: CSV
          Prefix: inventory/runs/
        IncludedObjectVersions: Current
        OptionalFields: [ETag, Size, BucketKeyStatus]
        Prefix: runs/
        ScheduleFrequency: Daily
Unfortunately, the !GetAtt S3Bucket.Arn line seems to be failing, causing an error message like "Error: Failed to create changeset for the stack: , ex: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state: For expression "Status" we matched expected path: "FAILED" Status: FAILED. Reason: Circular dependency between resource". If I use the actual ARN of the bucket in place of !GetAtt S3Bucket.Arn (it already exists from a previous version of the stack), then the deploy succeeds, so I know buckets can write Inventories to themselves.
So I guess my question is, is there a way to let Cfn resources call !GetAtt on themselves, so I don't have to hard-code the bucket ARN in InventoryConfigurations? Thanks in advance!
Can AWS CloudFormation resources call !GetAtt on themselves?
Unfortunately no: !GetAtt is used to reference other resources in the stack, as you've experienced ("other" meaning resources that have already been created).
However, in your case, considering you know the bucket name, you could just construct the bucket ARN yourself directly.
Format:
arn:aws:s3:::bucket_name
e.g. if the name is test, you can use arn:aws:s3:::test
Destination:
  BucketAccountId: !Ref AWS::AccountId
  BucketArn: 'arn:aws:s3:::test'
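If the bucket name comes from a template parameter, the same ARN can be built with !Sub so it isn't hard-coded (a sketch only, assuming a BucketName parameter; other inventory properties as in the question):
Parameters:
  BucketName:
    Type: String
    Default: test

Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref BucketName
      InventoryConfigurations:
        - Id: runs
          Enabled: true
          Destination:
            BucketAccountId: !Ref AWS::AccountId
            # ARN built from the known name instead of !GetAtt on the bucket itself
            BucketArn: !Sub 'arn:aws:s3:::${BucketName}'
            Format: CSV
            Prefix: inventory/runs/
          IncludedObjectVersions: Current
          ScheduleFrequency: Daily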

Re-using an existing SNS Topic in a Cloudformation template

I have a SAM cloudformation template:
Transform: AWS::Serverless-2016-10-31
Description: Create SNS with a sub
Parameters:
  NotificationEmail:
    Type: String
    Description: Email address to subscribe to SNS topic
Resources:
  NotificationTopic:
    Type: AWS::SNS::Topic
    DeletionPolicy: Retain
    Properties:
      TopicName: sam-test-sns
      Subscription:
        - Endpoint: !Ref NotificationEmail
          Protocol: email
Outputs:
  SNSTopic:
    Value: !Ref NotificationTopic
So I want to keep the topic sam-test-sns around since there are several subscribers already, and I don't want subscribers to tediously re-subscribe if I tear down the service and bring it back up.
Tearing down the service with Retain keeps the topic around, so that's fine. But when I try to deploy the template again, it fails because the topic already exists.
So what is the right approach to use an existing SNS topic?
Keeping the "Ec2NotificationTopic" resource in the template after removing the stack but keeping the topic around, will instruct CloudFormation to also create the topic when (re)creating the stack, which will always fail.
Since you are just referencing an existing topic, you should remove the resource from the template and replace the references to it with the ARN/name.
An output on its own does not export the value for use by other stacks. I am going to assume you want to reference this resource in another stack.
First you need to export the value, for example:
Outputs:
  SNSTopic:
    Value: !Ref NotificationTopic
    Export:
      Name:
        Fn::Sub: "${AWS::StackName}-SNSTopic"
Add a parameter SNSStackName to your new stack, where you would pass in the SNS stack's name (within the current region).
Then, from within your new stack, you would reference the exported value like below:
Fn::ImportValue:
  Fn::Sub: "${SNSStackName}-SNSTopic"

Serverless: Deployment error S3 Bucket already exists in stack

I am trying to deploy a serverless project which has S3 bucket creation CloudFormation in the serverless.yml file, but the problem is that when I try to deploy, it says the S3 bucket already exists and the deployment fails.
I know an S3 bucket name should be globally unique, and I am quite sure the name I am using is unique; even after changing it to something else, it still says the same.
The CloudFormation stack in which it says the S3 bucket exists is actually the newly created stack. I am not sure how to fix this issue. Can anyone help me out and tell me how to fix the deployment issue and what is causing it? :)
Thanks in advance.
The issue I had was that one of the Lambdas used the above-mentioned bucket as its event source. When a bucket is added as an event source, Serverless creates that bucket as well, so when the explicit bucket-creation CloudFormation then runs, it says the bucket already exists.
So I fixed it by keeping only the event source and removing the explicit declaration of that bucket.
If you add existing: true to the S3 event config in your serverless.yml file, it won't try to create the S3 bucket, like below:
funcName:
  handler: handler
  events:
    - s3:
        bucket: 'my-bucket-name'
        event: s3:ObjectCreated:*
        existing: true
        rules:
          - suffix: .pdf
          - prefix: documents
Anything involving CloudFormation (or any other infrastructure-as-code) is fussy, and the error messages can mislead, meaning there are a ton of things that can cause this problem (see issues on GitHub like this one).
But in my experience, the most common causes of these kinds of problems are not a pre-existing bucket, but problems with AWS credentials, permissions, or region that give misleading error messages. To fix these, or at least rule them out:
Make sure your serverless.yml is set to the region you already deployed the stack in. Example:
custom:
  stage: dev
  region: us-east-2
Override any latent credentials from, for example, ~/.aws/credentials, by explicitly setting your credentials in the shell you'll use to deploy. Example from the Serverless docs:
export AWS_ACCESS_KEY_ID=<your access key here>
export AWS_SECRET_ACCESS_KEY=<your access secret here>
Make sure those AWS credentials have the roles and permissions they need.
But, as I mentioned, CloudFormation is fussy. There may be other problems to solve, but try these first. You may try them and still be beating your head against the wall, but it'll more likely be the right wall. Hope this helps.
Try using Conditions, controlled by a Parameter, to decide whether to create the bucket or not:
AWSTemplateFormatVersion: 2010-09-09
Parameters:
  EnvType:
    Description: Environment type.
    Default: test
    Type: String
    AllowedValues:
      - prod
      - test
    ConstraintDescription: must specify prod or test.
Conditions:
  CreateProdResources: !Equals
    - !Ref EnvType
    - prod
Resources:
  EC2Instance:
    Type: 'AWS::EC2::Instance'
    Properties:
      ImageId: ami-0ff8a91507f77f867
  MountPoint:
    Type: 'AWS::EC2::VolumeAttachment'
    Condition: CreateProdResources
    Properties:
      InstanceId: !Ref EC2Instance
      VolumeId: !Ref NewVolume
      Device: /dev/sdh
  NewVolume:
    Type: 'AWS::EC2::Volume'
    Condition: CreateProdResources
    Properties:
      Size: 100
      AvailabilityZone: !GetAtt
        - EC2Instance
        - AvailabilityZone
Follow the sample condition flow to decide whether to create a resource or not.
See this for more details
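Adapted to the bucket from the question, the same pattern might look like this (a sketch; the parameter, condition and resource names are illustrative):
Parameters:
  CreateBucket:
    Type: String
    Default: 'false'
    AllowedValues:
      - 'true'
      - 'false'
Conditions:
  ShouldCreateBucket: !Equals
    - !Ref CreateBucket
    - 'true'
Resources:
  AttachmentsBucket:
    Type: 'AWS::S3::Bucket'
    Condition: ShouldCreateBucket
    Properties:
      BucketName: my-bucket-name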
When deploying, the BucketName must be globally unique across all AWS accounts and regions. So if anyone has already created a bucket named "local-bucket-dev", it will throw:
An error occurred: AttachmentsBucket - local-bucket-dev already exists.
Try setting the BucketName to something unique.
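One common way to get a unique name is to mix in account- or region-specific values (a sketch; the base name is just the example from the error above):
AttachmentsBucket:
  Type: AWS::S3::Bucket
  Properties:
    # Account ID and region make the name unique across accounts and regions
    BucketName: !Sub 'local-bucket-dev-${AWS::AccountId}-${AWS::Region}'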
I hope that helps.

AWS cloudformation error: Template validation error: Template error: resource NotificationsTopic does not support attribute type Arn in Fn::GetAtt

I am trying to create an AWS cloudformation stack using a yaml template.
The goal is to create a sns topic for some notifications.
I want to output the topic arn, to be able to subscribe multiple functions to that topic by just specifying the topic arn.
However I am getting an error when I try to create the stack from the aws console:
"Template validation error: Template error: resource NotificationsTopic does not support attribute type Arn in Fn::GetAtt"
I have done exactly the same for S3 buckets and DynamoDB tables, and it all works fine, but for some reason, with the SNS topic I cannot get the ARN.
I want to avoid hardcoding the topic ARN in all functions that are subscribed, because if one day the topic ARN changes, I'll need to change all the functions. Instead I want to import the topic ARN in all functions and use it; this way I will have to modify nothing if for any reason I have a new topic ARN in the future.
This is the template:
Parameters:
  stage:
    Type: String
    Default: dev
    AllowedValues:
      - dev
      - int
      - uat
      - prod
Resources:
  NotificationsTopic:
    Type: AWS::SNS::Topic
    Properties:
      DisplayName: !Sub 'notifications-${stage}'
      Subscription:
        - SNS Subscription
      TopicName: !Sub 'notifications-${stage}'
Outputs:
  NotificationsTopicArn:
    Description: The notifications topic Arn.
    Value: !GetAtt NotificationsTopic.Arn
    Export:
      Name: !Sub '${AWS::StackName}-NotificationsTopicArn'
  NotificationsTopicName:
    Description: Notifications topic name.
    Value: !Sub 'notifications-${stage}'
    Export:
      Name: !Sub '${AWS::StackName}-NotificationsTopicName'
Not all resources are the same; always check the documentation for the particular resource. It has a "Return Values" section, and you can easily verify that for an SNS topic the Ref value is the topic ARN, so you don't have to use the GetAtt function at all.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sns-topic.html
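Applied to the template in the question, the output could then be written like this (a sketch based on the question's own Outputs section):
Outputs:
  NotificationsTopicArn:
    Description: The notifications topic ARN.
    Value: !Ref NotificationsTopic   # Ref returns the ARN for AWS::SNS::Topic
    Export:
      Name: !Sub '${AWS::StackName}-NotificationsTopicArn'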
Edit: Thanks for the comment which points out that not every resource provides its ARN. A notable example is the Auto Scaling group. Sure, the key point of my answer was "check the documentation for each resource", and this is an example that not every resource has every attribute.
Having said that, the missing ARN for the ASG output is a really strange thing. It also cannot be constructed easily, because the ARN contains a GroupId which is a random hash. There is probably some effort to solve this, at least for the use case of ECS Capacity Providers (https://github.com/aws-cloudformation/aws-cloudformation-coverage-roadmap/issues/548 and https://github.com/aws/containers-roadmap/issues/631#issuecomment-648377011), but I think it is a significant enough issue that it should be mentioned here.
For resources that don't directly return the ARN, I found a workaround which consists of building the ARN myself.
For instance, to get the ARN of my CodePipeline:
!Join [ ':', [ "arn:aws:codepipeline", !Ref AWS::Region, !Ref AWS::AccountId, !Ref StackDeletePipeline ] ]
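The same ARN can also be written with Fn::Sub, which some find easier to read (an equivalent sketch; Ref on a pipeline resource returns the pipeline name):
!Sub 'arn:aws:codepipeline:${AWS::Region}:${AWS::AccountId}:${StackDeletePipeline}'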