Can AWS CloudFormation resources call !GetAtt on themselves?

I am trying to set up the Inventory configuration for an S3 bucket with CloudFormation. I want to get daily inventories of data in one subfolder, and have the inventories written to a different subfolder in the same bucket. I have defined the bucket as follows:
S3Bucket:
  Type: AWS::S3::Bucket
  Properties:
    # ...other properties...
    InventoryConfigurations:
      - Id: runs
        Enabled: true
        Destination:
          BucketAccountId: !Ref AWS::AccountId
          BucketArn: !GetAtt S3Bucket.Arn
          Format: CSV
          Prefix: inventory/runs/
        IncludedObjectVersions: Current
        OptionalFields: [ETag, Size, BucketKeyStatus]
        Prefix: runs/
        ScheduleFrequency: Daily
Unfortunately, the !GetAtt S3Bucket.Arn line seems to be failing, causing an error message like "Error: Failed to create changeset for the stack: , ex: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state: For expression "Status" we matched expected path: "FAILED" Status: FAILED. Reason: Circular dependency between resource". If I use the actual ARN of the bucket in place of !GetAtt S3Bucket.Arn (it already exists from a previous version of the stack), then the deploy succeeds, so I know buckets can write Inventories to themselves.
So I guess my question is, is there a way to let Cfn resources call !GetAtt on themselves, so I don't have to hard-code the bucket ARN in InventoryConfigurations? Thanks in advance!

Can AWS CloudFormation resources call !GetAtt on themselves?
Unfortunately no. As you've experienced, !GetAtt is used to reference other resources in the stack (other as in concrete resources that have already been created).
However, in your case, considering you know the bucket name, you could just construct the bucket ARN yourself directly.
Format:
arn:aws:s3:::bucket_name
e.g. if the name is test, you can use arn:aws:s3:::test
Destination:
  BucketAccountId: !Ref AWS::AccountId
  BucketArn: 'arn:aws:s3:::test'
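If you'd rather not hard-code the name, a variation is to drive both the bucket and the ARN from a parameter, the same trick the queue-policy answer below uses (a sketch; the BucketName parameter is an assumption, not part of the original answer):

Parameters:
  BucketName:
    Type: String
Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref BucketName
      InventoryConfigurations:
        - Id: runs
          Enabled: true
          Destination:
            BucketAccountId: !Ref AWS::AccountId
            # Built from the parameter, not the resource, so no circular dependency
            BucketArn: !Sub arn:aws:s3:::${BucketName}
            Format: CSV
            Prefix: inventory/runs/
          IncludedObjectVersions: Current
          Prefix: runs/
          ScheduleFrequency: Daily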

Related

Cloudformation fails with 'failed validation constraint for keyword [pattern]'

I am trying to create a Workflow object using AWS CloudFormation. This workflow will be used with AWS File Transfer Family so that files get copied to S3 upon uploading.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  SftpToS3Workflow:
    Type: AWS::Transfer::Workflow
    Properties:
      Description: 'Workflow used by AWS File Transfer Family. Copies the files to S3'
      Steps:
        - Type: COPY
          CopyStepDetails:
            Name: copt-to-s3-wf-step
            DestinationFileLocation:
              S3FileLocation:
                Bucket: !ImportValue GenesysS3BucketName
                Key: "genesys/"
            OverwriteExisting: 'TRUE'
Outputs:
  SftpToS3WorkflowId:
    Description: 'Id of the Workflow'
    Value: !GetAtt SftpToS3Workflow.WorkflowId
    Export:
      Name: SftpToS3WorkflowId
Unfortunately, this script fails with the below error. The error does not say what property is failing validation. Can someone help, please? I could not find even one single example on GitHub.
Properties validation failed for resource SftpToS3Workflow with message: #/Description: failed validation constraint for keyword [pattern]
I have used this CloudFormation schema to write the code:
https://github.com/APIs-guru/openapi-directory/blob/0380216a44c364b4517b31a93295089a6f4f23b9/APIs/amazonaws.com/transfer/2018-11-05/openapi.yaml
The Description property can only match the pattern
^[\w- ]*$
i.e. word characters, hyphens, and spaces. The period in "Family. Copies" is what fails validation, so it should be:
Description: 'Workflow used by AWS File Transfer Family - Copies the files to S3'

Create then change a resource in Cloud Formation

To preface this, I'm very new to Cloud Formation. I'm trying to build a template that will deploy a fairly simple environment, with two services.
I need to have an S3 bucket that triggers a message to SQS whenever an object is created. When creating these assets, the S3 configuration must include a pointer to the SQS queue. But the SQS Queue must have a policy that specifically allows the S3 bucket permission. This creates a circular dependency. In order to break this circle I would like to do the following:
Create S3 bucket
Create SQS queue, reference S3 bucket
Modify the S3 bucket to reference SQS queue.
When I try this I get an error telling me it can't find the SQS queue. When I put a DependsOn command in #3 it errors out in a circular dependency.
Can you declare a resource, then re-declare it with new parameters later in the template? If so, how would you do that? Am I approaching this wrong?
What leads to circular dependencies in such scenarios is the use of intrinsic functions like Ref or Fn::GetAtt, which require the referenced resource to exist. To avoid this, you can specify a resource ARN without referring to the resource itself. Here is an example template where CloudFormation does the following:
Create a queue
Add a queue policy to grant permissions to a non-existent bucket
Create the bucket
Template:
Parameters:
  BucketName:
    Description: S3 Bucket name
    Type: String
    Default: mynewshinybucket
Resources:
  Queue:
    Type: AWS::SQS::Queue
  QueuePolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      Queues:
        - !Ref Queue
      PolicyDocument:
        Statement:
          - Effect: Allow
            Action: SQS:SendMessage
            Resource: !GetAtt Queue.Arn
            Principal:
              AWS: '*'
            Condition:
              ArnLike:
                # Specify bucket ARN by referring to a parameter instead of the
                # actual bucket resource, which does not yet exist
                aws:SourceArn: !Sub arn:aws:s3:::${BucketName}
  Bucket:
    Type: AWS::S3::Bucket
    # Create the bucket after the queue policy to avoid
    # "Unable to validate the following destination configurations" errors
    DependsOn: QueuePolicy
    Properties:
      BucketName: !Ref BucketName
      NotificationConfiguration:
        QueueConfigurations:
          - Event: 's3:ObjectCreated:Put'
            Queue: !GetAtt Queue.Arn
Edit:
When using Ref, GetAtt, or Sub to retrieve values from another resource, all of them require that resource to be available.
CloudFormation will make sure that the resource that uses the function is always created after the referenced resource. This is how circular dependencies are detected.
Sub is used for string substitution but works exactly as a Ref when used with parameters or resources (Source).
The point is that we are referring to a parameter (and not a resource), which are always available.
Using Sub is a bit simpler in this case, because using Ref would require an additional Join. For example this would give you the same result:
aws:SourceArn: !Join
  - ''
  - - 'arn:aws:s3:::'
    - !Ref BucketName
Another way would be to hard-code the bucket ARN without using any intrinsic functions. The important thing is not to reference the bucket itself to avoid the circular dependency.
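For completeness, the fully hard-coded form would just be the literal ARN string (here matching the parameter's default value from the template above):

aws:SourceArn: arn:aws:s3:::mynewshinybucket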

CloudFormation template for lambda-based service, S3Key does not exist

I am attempting to create a CloudFormation template for an AWS lambda service and I'm running into a "chicken or the egg" scenario between the s3 bucket holding my lambda code, and the lambda function calling said bucket.
The intent is for our lambda code to be built to a jar, which will be hosted in an S3 Bucket, and our lambda function will reference that bucket. However when I run the template (using the CLI aws cloudformation create-stack --template-body "file://template.yaml"), I run into the following error creating the lambda function:
CREATE_FAILED Error occurred while GetObject. S3 Error Code: NoSuchKey. S3 Error Message: The specified key does not exist. (Service: AWSLambdaInternal; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: ...; Proxy: null)
I believe this is happening because cloudformation is building both the bucket and lambda in the same transaction, and I can't stop it in the middle to push content into the brand new bucket.
I can't be the only one that has this problem, so I'm wondering if there's a common practice for tackling it? I'd like to keep all my configuration in a single template file if possible, but the only solutions I'm coming up with would require splitting the stack creation into multiple steps. (e.g. build the bucket first, deploy my code to it, then create the rest of the stack.) Is there a better way to do this?
template.yaml (the relevant bits)
...
myS3Bucket:
  Type: AWS::S3::Bucket
  Properties:
    BucketName: !Sub "${AWS::StackName}"
    BucketEncryption:
      ServerSideEncryptionConfiguration:
        - ServerSideEncryptionByDefault:
            SSEAlgorithm: AES256
    AccessControl: Private
    PublicAccessBlockConfiguration:
      BlockPublicAcls: true
      BlockPublicPolicy: true
      IgnorePublicAcls: true
      RestrictPublicBuckets: true
    VersioningConfiguration:
      Status: Enabled
myLambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    FunctionName: !Sub "${AWS::StackName}-dec"
    Handler: "lambda.Handler"
    Role: !GetAtt myLambdaExecutionRole.Arn
    Code:
      S3Bucket: !Ref myS3Bucket
      S3Key: "emy-lambda-fn.jar"
    Runtime: "java8"
    Timeout: 90
    MemorySize: 384
    Environment:
      Variables:
        stackName: !Sub "${AWS::StackName}"
...
I'm coming up with would require splitting the stack creation into multiple steps. [...] Is there a better way to do this?
Splitting the template into two is the most logical and easiest way of doing what you are trying to do.
There are some alternatives that would allow you to keep everything in one template, but they are more difficult to implement, manage, and simply use. One alternative would be to develop a custom resource. The resource would be in the form of a lambda function that gets invoked after the bucket creation. The lambda would wait and check for the existence of your emy-lambda-fn.jar in the bucket, and when the key is uploaded (within 15 min max, the Lambda timeout limit), the function returns and your stack creation continues. This means that your myLambdaFunction would be created only after the custom resource returns, ensuring that emy-lambda-fn.jar exists. A sketch of this approach follows.
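A minimal sketch of such a custom resource, assuming inline Python code (the resource names JarWaiterFunction and WaitForJar are illustrative, the role must be able to read the bucket, and the cfnresponse module is only auto-available to inline ZipFile code):

JarWaiterFunction:
  Type: AWS::Lambda::Function
  Properties:
    Runtime: python3.9
    Handler: index.handler
    Timeout: 900  # allow up to 15 minutes of polling
    Role: !GetAtt myLambdaExecutionRole.Arn  # assumes this role can read the bucket
    Code:
      ZipFile: |
        import boto3, cfnresponse

        def handler(event, context):
            try:
                if event['RequestType'] != 'Delete':
                    props = event['ResourceProperties']
                    # Poll every 15 seconds, up to ~14 minutes
                    boto3.client('s3').get_waiter('object_exists').wait(
                        Bucket=props['Bucket'], Key=props['Key'],
                        WaiterConfig={'Delay': 15, 'MaxAttempts': 56})
                cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
            except Exception:
                cfnresponse.send(event, context, cfnresponse.FAILED, {})

WaitForJar:
  Type: Custom::S3ObjectWaiter
  Properties:
    ServiceToken: !GetAtt JarWaiterFunction.Arn
    Bucket: !Ref myS3Bucket
    Key: emy-lambda-fn.jar

You would then add DependsOn: WaitForJar to myLambdaFunction so it is only created once the jar is in place.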

Serverless: Deployment error S3 Bucket already exists in stack

I am trying to deploy a serverless project which has S3 bucket creation CloudFormation in the serverless.yml file, but when I try to deploy, it says the S3 bucket already exists and the deployment fails.
I know an S3 bucket name should be globally unique, and I am damn sure the name I am using is unique; even if I change it to something else, it still says the same.
The CloudFormation stack it says the S3 bucket exists in is actually the newly created stack, so I am not sure how to fix this. Can anyone help me out with this issue and tell me how to fix the deployment and what is causing it? :)
Thanks in advance.
The issue I had was that, for one of the lambdas, I had the above-mentioned bucket as the event source. When a bucket is added as an event source, Serverless actually creates that bucket as well, so when the CloudFormation that explicitly declares the bucket runs, it says the bucket already exists.
So I fixed it by keeping only the event source and removing the separate declaration of that bucket.
If you add existing: true to the S3 config in your serverless.yml file, it won't try to create the S3 bucket, like below:

funcName:
  handler: handler
  events:
    - s3:
        bucket: 'my-bucket-name'
        event: s3:ObjectCreated:*
        existing: true
        rules:
          - suffix: .pdf
          - prefix: documents
Anything involving CloudFormation (or any other infrastructure-as-code tool) is fussy, and the error messages can mislead, meaning there are a ton of things that can cause this problem (see issues on GitHub like this one).
But in my experience, the most common causes of these kinds of problems are not a pre-existing bucket, but problems with AWS credentials, permissions, or region that give misleading error messages. To fix these, or at least rule them out:
Make sure your serveless.yml is set to the region you already deployed the stack in. Example:
custom:
  stage: dev
  region: us-east-2
Override any latent credentials from, for example, ~/.aws/credentials, by explicitly setting your credentials in the shell you'll use to deploy. Example from the Serverless docs:
export AWS_ACCESS_KEY_ID=<your access key here>
export AWS_SECRET_ACCESS_KEY=<your access secret here>
Make sure those AWS credentials have the roles and permissions they need.
But, as I mentioned, CloudFormation is fussy. There may be other problems to solve, but try these first. You may try them and still be beating your head against the wall, but it'll more likely be the right wall. Hope this helps.
Try using a Condition, driven by a Parameter, to decide whether to create the bucket or not:
AWSTemplateFormatVersion: 2010-09-09
Parameters:
  EnvType:
    Description: Environment type.
    Default: test
    Type: String
    AllowedValues:
      - prod
      - test
    ConstraintDescription: must specify prod or test.
Conditions:
  CreateProdResources: !Equals
    - !Ref EnvType
    - prod
Resources:
  EC2Instance:
    Type: 'AWS::EC2::Instance'
    Properties:
      ImageId: ami-0ff8a91507f77f867
  MountPoint:
    Type: 'AWS::EC2::VolumeAttachment'
    Condition: CreateProdResources
    Properties:
      InstanceId: !Ref EC2Instance
      VolumeId: !Ref NewVolume
      Device: /dev/sdh
  NewVolume:
    Type: 'AWS::EC2::Volume'
    Condition: CreateProdResources
    Properties:
      Size: 100
      AvailabilityZone: !GetAtt
        - EC2Instance
        - AvailabilityZone
Follow the sample condition flow to decide whether to create a resource or not.
See this for more details
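Applied to the bucket from the question, the same pattern might look like this (a sketch; the CreateBucket condition and parameter names are illustrative, not from the original answer):

Parameters:
  CreateBucketParam:
    Type: String
    Default: 'false'
    AllowedValues: ['true', 'false']
Conditions:
  CreateBucket: !Equals [!Ref CreateBucketParam, 'true']
Resources:
  AttachmentsBucket:
    Type: AWS::S3::Bucket
    # Only created when the parameter is 'true'; otherwise deploy against the existing bucket
    Condition: CreateBucket
    Properties:
      BucketName: local-bucket-dev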
When deploying, the BucketName must be globally unique across all regions and accounts. So if anyone has already created a bucket named "local-bucket-dev", it will throw:
An error occurred: AttachmentsBucket - local-bucket-dev already
exists.
Try setting the BucketName to something unique.
I hope that helps.

Update existing Log Group using CloudFormation

I have a lambda which has a log group, say LG-1, for which retention is set to Never Expire (default). I need to change this Never Expire to 1 month. I am doing this using CloudFormation. As the log group already exists, when I am trying to deploy my lambda again with the changes in template as :
LambdaFunctionLogGroup:
  Type: 'AWS::Logs::LogGroup'
  DependsOn: MyLambda
  Properties:
    RetentionInDays: 30
    LogGroupName: !Join
      - ''
      - - /aws/lambda/
        - !Ref MyLambda
the update fails with the error:
[LogGroup Name] already exists.
One possible solution is to delete the log group and then again create it with new changes as shown above which works perfectly well.
But I need to do it without deleting the log group as it will result in the deletion of all the previous logs that I have.
Is there any workaround possible?
@ttulka answered:
".. it is impossible to manipulate resources from CF which already exist out of the stack."
But actually the problem is more general than that and applies to resources created inside the stack. It has to do with the CloudFormation resource "replacement" update policy. For some resources, the way CloudFormation "updates" the resource is to create a new resource, then delete the old one. This means there is a period of time where two resources of the same type, with many of the same properties, exist at the same time. But if a certain resource property has to be unique, the two resources can't exist at the same time with the same value for that property, so CloudFormation blows up.
AWS::Logs::LogGroup.LogGroupName property is one such property. AWS::CloudWatch::Alarm.AlarmName is another example.
A workaround is to unset the name so that a random name is used, perform an update, then set the name back to its predictable fixed value and update again, as in the sketch below.
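A sketch of that two-deploy sequence, using the log group from the question (the !Sub form here is equivalent to the original !Join):

# Deploy 1: omit LogGroupName so CloudFormation generates a random name
LambdaFunctionLogGroup:
  Type: 'AWS::Logs::LogGroup'
  Properties:
    RetentionInDays: 30

# Deploy 2: restore the fixed, predictable name and update again
LambdaFunctionLogGroup:
  Type: 'AWS::Logs::LogGroup'
  Properties:
    RetentionInDays: 30
    LogGroupName: !Sub /aws/lambda/${MyLambda}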
Rant: It's an annoying problem that really shouldn't exist. I.e. AWS CF should be smart enough to not have to use this weird clunky resource replacement implementation. But ... that's AWS CF for you ...
I think it is impossible to manipulate resources from CF which already exist out of the stack.
One workaround would be to change the name of the Lambda (like my-lambda-v2) to keep the old log group together with the new one.
After one month you can delete the old one.
Use a custom-resource-backed Lambda within your CloudFormation template. The custom resource would be triggered automatically the first time and update the retention policy of the existing log group. If you need your custom resource Lambda to be triggered on every deploy, then use a templating engine like Jinja2.
import boto3

client = boto3.client('logs')
response = client.put_retention_policy(
    logGroupName='string',
    retentionInDays=123
)
You can basically make your CF template do (almost) anything you want using Custom Resource
More information (Boto3, you can find corresponding SDK for the language you use) - https://boto3.amazonaws.com/v1/documentation/api/1.9.42/reference/services/logs.html#CloudWatchLogs.Client.put_retention_policy
EDIT: Within the CloudFormation Template, it would look something like the following:
LogRetentionSetFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: src
    Handler: set_retention_period.handler
    Role: !GetAtt LambdaRole.Arn
    DeploymentPreference:
      Type: AllAtOnce
PermissionForLogRetentionSetup:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:invokeFunction
    FunctionName:
      Fn::GetAtt: [LogRetentionSetFunction, Arn]
    Principal: lambda.amazonaws.com
InvokeLambdaFunctionToSetLogRetention:
  DependsOn: [PermissionForLogRetentionSetup]
  Type: Custom::SetLogRetention
  Properties:
    ServiceToken: !GetAtt LogRetentionSetFunction.Arn
    StackName: !Ref AWS::StackName
    AnyVariable: "Choose whatever you want to send"
    Tags:
      'owner': !Ref owner
      'task': !Ref task
The lambda function would contain the code which sets up the log retention, as per the boto3 call I already specified before; a sketch of such a handler follows.
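A minimal sketch of what src/set_retention_period.py could look like (the log group name here is hypothetical; a real handler would take it from event['ResourceProperties']; the response logic follows the standard custom-resource protocol):

# src/set_retention_period.py
import json
import urllib.request

import boto3

def handler(event, context):
    status = 'SUCCESS'
    try:
        if event['RequestType'] in ('Create', 'Update'):
            boto3.client('logs').put_retention_policy(
                logGroupName='/aws/lambda/my-function',  # hypothetical log group name
                retentionInDays=30,
            )
    except Exception:
        status = 'FAILED'
    # Custom resources must report the outcome to the pre-signed ResponseURL,
    # otherwise the stack hangs until it times out.
    body = json.dumps({
        'Status': status,
        'Reason': 'See CloudWatch log stream: ' + context.log_stream_name,
        'PhysicalResourceId': context.log_stream_name,
        'StackId': event['StackId'],
        'RequestId': event['RequestId'],
        'LogicalResourceId': event['LogicalResourceId'],
        'Data': {},
    }).encode()
    req = urllib.request.Request(event['ResponseURL'], data=body, method='PUT')
    urllib.request.urlopen(req)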
For more information, please google "custom resource backed lambda". Also, to get you a head start, I have added the link below:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html