How to reference Fn::Transform in a CloudFormation template

I'm trying to write a transform macro in CloudFormation that will format a given string into an S3 bucket name prefix. I wrote the script as a Lambda and created the macro, but I get an error saying that my transform macro name does not exist.
BucketNameFormattingMacro:
  Type: AWS::CloudFormation::Macro
  Properties:
    Description: Changes strings to be formatted properly to be added to S3 bucket names.
    FunctionName: !GetAtt BucketNameFormattingScript.Arn
TransformFunctionPermissions:
  Type: AWS::Lambda::Permission
  Properties:
    Action: 'lambda:InvokeFunction'
    FunctionName: !GetAtt BucketNameFormattingScript.Arn
    Principal: 'cloudformation.amazonaws.com'
Bucket:
  Type: AWS::S3::Bucket
  Properties:
    BucketName:
      Fn::Transform:
        - Name: BucketNameFormattingMacro
          Parameters:
            InputString: !Sub '${ENV}-bucket'
Running the script throws this:
"No transform named 699790013825::BucketNameFormattingMacro found.. Rollback requested by user."
There is a string of numbers before the macro name that I didn't put there, which I suspect is part of the issue. Why are those numbers there, and how do I properly reference the transform to use it within my bucket name?
Edit: Unfortunately, you can't use a macro in the same CloudFormation template that creates it.
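For what it's worth, the 12-digit number prefixed to the macro name in the error is the AWS account ID; CloudFormation scopes macro names per account. Since a macro cannot be used in the template that creates it, here is a minimal sketch of a two-template approach (deploy the first stack before the second; note the Name property is required on AWS::CloudFormation::Macro, and BucketNameFormattingScript is assumed to be the Lambda from the question):
# Template 1 (deploy this stack first): registers the macro
Resources:
  BucketNameFormattingMacro:
    Type: AWS::CloudFormation::Macro
    Properties:
      Name: BucketNameFormattingMacro
      FunctionName: !GetAtt BucketNameFormattingScript.Arn
# Template 2 (deployed afterwards): can now invoke the macro by name
Resources:
  Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName:
        Fn::Transform:
          Name: BucketNameFormattingMacro
          Parameters:
            InputString: !Sub '${ENV}-bucket'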


CloudFormation Error: 'CodeUri' requires Bucket and Key properties to be specified

I'm creating a Lambda through CloudFormation. The Function code path must be dynamic.
Here's my template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Parameters:
  LambdaBucketName:
    Type: String
    Description: The name of the S3 bucket containing the Lambda function code
Resources:
  FUNC:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: 'my-lambda-func'
      Handler: index.handler
      Runtime: nodejs18.x
      CodeUri:
        FunctionCode:
          Bucket: !Ref LambdaBucketName
          Key: my-lambda-func.zip
      # etc...
When it deploys, I get this Cfn error message:
ROLLBACK_IN_PROGRESS : 'CodeUri' requires Bucket and Key properties to be specified.
But the documentation for AWS::Serverless::Function says it's OK to do this:
CodeUri
The function code's Amazon S3 URI, path to local folder, or FunctionCode object.
If I use just this:
CodeUri: s3://my-bucket/my-lambda-func.zip
It's fine because it's not dynamic. But if I try making it dynamic with !Ref, it won't work; it complains about the pattern.
If I try:
CodeUri:
  Bucket: !Ref LambdaBucketName
  Key: my-lambda-func.zip
Then I get a pattern error on Bucket. The ref'd bucket name is just a normal short string.
How can I get this to work?
Since the following works:
CodeUri: s3://my-bucket/my-lambda-func.zip
you can make it dynamic using:
CodeUri: !Sub "s3://${LambdaBucketName}/my-lambda-func.zip"
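If the object key needs to be dynamic as well, the same pattern extends to a second parameter (LambdaKey below is a hypothetical addition, not from the original question):
Parameters:
  LambdaBucketName:
    Type: String
  LambdaKey:
    Type: String
    Default: my-lambda-func.zip  # hypothetical parameter for the object key
Resources:
  FUNC:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: index.handler
      Runtime: nodejs18.x
      CodeUri: !Sub "s3://${LambdaBucketName}/${LambdaKey}"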

AWS-Serverless template reference Globals environment variables within the template

I am trying to create an AWS SAM template.yaml that creates a Lambda accessing a DynamoDB table resource, and to define the IAM permissions of this Lambda.
I want the table name to be in an environment variable, and I want it to be different depending on the deployment stage (dev, prod, etc.).
However, I cannot find a way to define the permission ARN. I have the following template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Lambda to API-Gateway endpoints mapper for Pantrimony
Parameters:
  stage:
    Type: String
    Default: dev
  region:
    Type: String
    Default: eu-central-1
Globals:
  Function:
    Environment:
      Variables:
        VICTUALS_TABLE: !Sub "Victuals-${stage}"
Resources:
  GetVictuals:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: Pantrymony.back::Pantrymony.back.Lambda.ApiFunctions::GetVictuals
      Runtime: dotnet6
      Description: Lambda handler for API Gateway
      MemorySize: 512
      Timeout: 60
      Events:
        GetVictualsApi:
          Type: Api
          Properties:
            Path: /victuals
            Method: get
      Policies:
        - Statement:
            - Sid: DescribeTablesPolicy
              Effect: Allow
              Action:
                - dynamodb:DescribeTable
              Resource: !Sub 'arn:aws:dynamodb:eu-central-1:926574008145:table/${self:Globals.Function.Environment.Variables.VICTUALS_TABLE}'
How can I reference a Globals variable in the rest of the template YAML?
As you can see, I tried the ${self:...} syntax, which works in Serverless Framework templates (for variables defined under provider.environment), but it doesn't seem to work in AWS SAM.
Is there another way to define some global variable which will be accessible throughout the template?
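One workaround (a sketch, not an official SAM mechanism): SAM provides no intrinsic for reading values back out of Globals, so repeat the !Sub expression wherever the policy needs it, reusing the existing stage and region parameters:
      Policies:
        - Statement:
            - Sid: DescribeTablesPolicy
              Effect: Allow
              Action:
                - dynamodb:DescribeTable
              Resource: !Sub 'arn:aws:dynamodb:${region}:${AWS::AccountId}:table/Victuals-${stage}'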

How to dynamically pass codeUri in SAM template

I am trying to deploy a Lambda from a zip that contains a jar file. If a static artifact name is given in CodeUri it works fine, but the artifact is not static in nature, i.e. the version of the jar file (along with its name, e.g. abc-<1.x.x>-prod.jar) changes with every new build.
So, I want to pass the artifact name in CodeUri as a dynamic value rather than a static value.
I tried splitting it into Bucket and Key and passing the values as parameters, but deployment fails with NoSuchKey.
Edit: Adding Sample Template
Transform: AWS::Serverless-2016-10-31
Description: engine-service
Parameters:
  Environment:
    Type: String
    Default: ""
  SecurityGroupIds:
    Type: String
    Default: ""
  SubnetIds1:
    Type: String
    Default: ""
  SubnetIds2:
    Type: String
    Default: ""
  DBSubnetGroupName:
    Type: String
    Default: ""
  RDSSecret:
    Type: String
    Default: ""
  RDSInstance:
    Type: String
    Default: ""
  API:
    Type: String
    Default: ""
Globals:
  Function:
    Timeout: 120
Resources:
  TranslationEngineLambda:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: !Sub "translation-engine-service-${Environment}"
      CodeUri: target/abc-1.0.0-SNAPSHOT-prod.jar
      Handler: com.abc.Main
      Runtime: java11
      MemorySize: 1024
      Environment:
        Variables:
          BUCKET_NAME: "abc-dummy"
          DB_SECRET: "abc-dummy"
          FUNCTION_NAME: TranslateFunction
          SPRING_PROFILES_ACTIVE: db
          TEXT_EXTRACT_LAMBDA: !Sub arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:text-extract-service-${Environment}
          TRANSLATE_OPTION: AWS
      VpcConfig:
        SecurityGroupIds:
          - !Ref SecurityGroupIds
        SubnetIds:
          - !Ref SubnetIds1
          - !Ref SubnetIds2
      Policies:
        - AWSLambda_FullAccess
        - AmazonEC2FullAccess
        - SecretsManagerReadWrite
        - AmazonS3ReadOnlyAccess
        - AmazonRDSFullAccess
  TranslationEngineLambdaInvoke:
    Type: "AWS::Lambda::Permission"
    Properties:
      Action: "lambda:InvokeFunction"
      FunctionName: !GetAtt "TranslationEngineLambda.Arn"
      Principal: "apigateway.amazonaws.com"
      SourceArn: !Join ['', ['arn:aws:execute-api:MyRegion:MyAccountNumber:', Fn::ImportValue: !Ref API, '/*/POST/language-translator/v1/translate']]
Outputs:
  TranslationEngineLambda:
    Description: "Hello World Lambda Function ARN"
    Value: !GetAtt TranslationEngineLambda.Arn
  TranslationEngineLambdaIamRole:
    Description: "Implicit IAM Role created for Hello World function"
    Value: !GetAtt TranslationEngineLambdaRole.Arn
Your question encompasses a few things.
First, if you're using CodeUri as you do in the template, with a relative path, AWS SAM will search for the required files relative to the directory in which the template resides. If you're using Bucket/Key, AWS SAM will look in the specified S3 bucket for that key. This is of course an entirely different way of working and assumes that you've already uploaded the artefact to that location yourself. You've presumably not done this, which results in the NoSuchKey error.
One of the more useful things about AWS SAM is that you can not only use it to deploy your code, but also to build the artefacts themselves. In that case, you point your CodeUri at the root of the folder in which your Lambda function code resides. AWS SAM will then, in the build step, create the necessary artefact (be it a jar or a zip). During deployment, it will upload those artefacts to S3, update the CodeUri values to reflect this, and deploy the CloudFormation stack.
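For example, a sketch under the assumption that the template sits in the project root next to the build file:
  TranslationEngineLambda:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .              # sam build packages this folder into the artefact
      Handler: com.abc.Main
      Runtime: java11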
I don't think you can use CloudFormation parameters (with !Sub, !Join or similar) in a relative CodeUri path, since parameters are only interpreted in the cloud and not during the AWS SAM build or package steps. So if you do not want to rely on AWS SAM to build your artefacts, you're probably better off also uploading them yourself.
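If you do upload the artefact yourself (for example with aws s3 cp target/abc-1.0.0-SNAPSHOT-prod.jar s3://my-artifact-bucket/engine/abc-1.0.0-SNAPSHOT-prod.jar), you can then pass the location in at deploy time, since Bucket/Key values are resolved in the cloud. A sketch with hypothetical ArtifactBucket and ArtifactKey parameters:
Parameters:
  ArtifactBucket:
    Type: String
  ArtifactKey:
    Type: String
Resources:
  TranslationEngineLambda:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri:
        Bucket: !Ref ArtifactBucket   # bucket the jar was uploaded to
        Key: !Ref ArtifactKey         # e.g. engine/abc-1.0.0-SNAPSHOT-prod.jar
      Handler: com.abc.Main
      Runtime: java11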

Instead of referring an existing AWS S3 bucket, Cloud Formation is trying to create the bucket

I'm stuck on a weird issue. I have created an AWS S3 bucket using the following CloudFormation template:
AWSTemplateFormatVersion: '2010-09-09'
Metadata:
  License: Unlicensed
Description: >
  This template creates a globally unique S3 bucket in a specific region.
  The bucket name is formed from the environment, account id and region
Parameters:
  # https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
  Environment:
    Description: This parameter will accept the environment details from the user
    Type: String
    Default: sbx
    AllowedValues:
      - sbx
      - dev
      - qa
      - e2e
      - prod
    ConstraintDescription: Invalid environment. Please select one of the given environments only
Resources:
  # https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket.html
  MyS3Bucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Properties:
      BucketName: !Sub 'global-bucket-${Environment}-${AWS::Region}-${AWS::AccountId}' # https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/pseudo-parameter-reference.html
      AccessControl: Private
      LoggingConfiguration:
        DestinationBucketName: !Ref 'LoggingBucket'
        LogFilePrefix: 'access-logs'
      Tags:
        - Key: name
          Value: globalbucket
        - Key: department
          Value: engineering
  LoggingBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Properties:
      BucketName: !Sub 'global-loggings-${Environment}-${AWS::Region}-${AWS::AccountId}'
      AccessControl: LogDeliveryWrite
Outputs:
  GlobalS3Bucket:
    Description: A private S3 bucket with deletion policy as retain and logging configuration
    Value: !Ref MyS3Bucket
    Export:
      Name: global-bucket
Note that in the Outputs section of the template above I'm exporting this S3 bucket under the name global-bucket.
Now, my intention is to refer to this existing bucket going forward in my AWS account whenever a new resource like a Lambda needs an S3 bucket. Here is an example using AWS SAM (Serverless Application Model): I'm trying to create an AWS Lambda and refer to this existing S3 bucket using !ImportValue and the export name global-bucket, as shown below:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  hellolambda
  Sample SAM Template for hellolambda
# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello-world/
      Handler: app.lambdaHandler
      Runtime: nodejs12.x
      Events:
        HelloLambdaEvent:
          Type: S3
          Properties:
            Bucket: !Ref SrcBucket
            Events: s3:ObjectCreated:*
  SrcBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !ImportValue global-bucket
Now, the problem: when I run sam build and then sam deploy --guided, selecting the same region (where my previous CloudFormation stack's output exists), I get the following error:
global-bucket-sbx-ap-southeast-1-088853283839 already exists in stack arn:aws:cloudformation:ap-southeast-1:088853283839:stack/my-s3-global-bucket/aabd20e0-f57d-11ea-80bf-06f1487f6a64
The problem is AWS CloudFormation is trying to create the S3 bucket rather than referring to the existing one.
But if I try to update this SAM template and then execute sam deploy, I get the following error:
Waiting for changeset to be created..
Error: Failed to create changeset for the stack: my-lambda-stack, ex: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document. Number of errors found: 1. Resource with id [HelloWorldFunction] is invalid. Event with id [HelloLambdaEvent] is invalid. S3 events must reference an S3 bucket in the same template.
I'm blocked at both ends. I would really appreciate it if someone could guide me in writing the SAM template correctly so that I can refer to the existing bucket instead of creating a new one.
Thank you
Any items listed under the Resources section are resources the stack is responsible for creating and maintaining.
When you list SrcBucket, you are asking CloudFormation to create a new S3 bucket whose name is the value of !ImportValue global-bucket, which is the name of an S3 bucket you have already created.
Assuming that this is the bucket name, you can simply reference it in your template as shown below.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  hellolambda
  Sample SAM Template for hellolambda
# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello-world/
      Handler: app.lambdaHandler
      Runtime: nodejs12.x
      Events:
        HelloLambdaEvent:
          Type: S3
          Properties:
            Bucket: !ImportValue global-bucket
            Events: s3:ObjectCreated:*

Cloud Formation: separate cloudformation template of S3 bucket and Lambda

I have created a CloudFormation template that configures an S3 bucket with an event notification calling a Lambda function. The Lambda is triggered whenever a new object is created in the bucket.
The problem I have is that when I delete the stack, the bucket is also deleted. For debugging and testing purposes I had to delete the stack.
AWSTemplateFormatVersion: '2010-09-09'
Description: Upload an object to an S3 bucket, triggering a Lambda event, returning the object key as a Stack Output.
Parameters:
  Body:
    Description: Stack to create s3 bucket and the lambda trigger
    Type: String
    Default: Test
  BucketName:
    Description: S3 Bucket name
    Type: String
    Default: image-process-bucket
Resources:
  ImageProcessorExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: 'sts:AssumeRole'
      Path: /
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
      Policies:
        - PolicyName: S3Policy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 's3:PutObject'
                  - 'S3:DeleteObject'
                Resource: !Sub "arn:aws:s3:::${BucketName}/*"
  ImageProcessor:
    Type: AWS::Lambda::Function
    Properties:
      Description: Prints the filename
      Handler: imageProcessor.handler
      Role: !GetAtt ImageProcessorExecutionRole.Arn
      Code: .
      Runtime: nodejs12.x
      Environment:
        Variables:
          BucketName:
            Ref: BucketName
  Bucket:
    Type: AWS::S3::Bucket
    DependsOn: BucketPermission
    Properties:
      BucketName: !Ref BucketName
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: 's3:ObjectCreated:*'
            Function: !GetAtt ImageProcessor.Arn
  BucketPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: 'lambda:InvokeFunction'
      FunctionName: !Ref ImageProcessor
      Principal: s3.amazonaws.com
      SourceAccount: !Ref "AWS::AccountId"
      SourceArn: !Sub "arn:aws:s3:::${BucketName}"
To resolve this, I separated the two resources into separate templates using Outputs. The problem with this is that I cannot delete the Lambda function stack because it is being referenced by the bucket stack.
I want to know what the right approach is. Is it really required to separate these two resources? I believe the Lambda function will need to change frequently.
If yes, what is the correct way to do it?
If not, how should I handle the necessity to make changes?
The approach using Outputs and Imports will always create the dependencies and will not allow deletion. This is generic behavior for any resource. How do we deal with deleting in this case? Is it good to use this approach?
Description: Upload an object to an S3 bucket, triggering a Lambda event, returning the object key as a Stack Output.
Parameters:
  Body:
    Description: Stack to create s3 bucket and the lambda trigger
    Type: String
    Default: Test
  BucketName:
    Description: S3 Bucket name
    Type: String
    Default: image-process-bucket
Resources:
  ImageProcessorExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: 'sts:AssumeRole'
      Path: /
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
      Policies:
        - PolicyName: S3Policy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 's3:PutObject'
                  - 'S3:DeleteObject'
                Resource: !Sub "arn:aws:s3:::${BucketName}/*"
  ImageProcessor:
    Type: AWS::Lambda::Function
    Properties:
      Description: Prints the filename
      Handler: imageProcessor.handler
      Role: !GetAtt ImageProcessorExecutionRole.Arn
      Code: .
      Runtime: nodejs12.x
      Environment:
        Variables:
          BucketName:
            Ref: BucketName
Outputs:
  ImageProcessingARN:
    Description: ARN of the function
    Value:
      Fn::Sub: ${ImageProcessor.Arn}
    Export:
      Name: ImageProcessingARN
  ImageProcessingName:
    Description: Name of the function
    Value: !Ref ImageProcessor
    Export:
      Name: ImageProcessingName
AWSTemplateFormatVersion: '2010-09-09'
Description: Test
Parameters:
  BucketName:
    Description: Name of the bucket
    Type: String
    Default: imageprocess-bucket
Resources:
  Bucket:
    Type: AWS::S3::Bucket
    DependsOn: BucketPermission
    Properties:
      BucketName: !Ref BucketName
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: 's3:ObjectCreated:*'
            Function:
              Fn::ImportValue: ImageProcessingARN
  BucketPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: 'lambda:InvokeFunction'
      FunctionName:
        Fn::ImportValue: ImageProcessingName
      Principal: s3.amazonaws.com
      SourceAccount: !Ref "AWS::AccountId"
      SourceArn: !Sub "arn:aws:s3:::${BucketName}"
There is no such thing as the right approach; it almost always depends on your unique situation. Strictly speaking, it is not required to separate the resources into different CloudFormation templates. A Lambda function that changes a lot is also not, by itself, a sufficient reason for separating the resources.
You seem to be separating the resources correctly in two different stacks. You just do not like that you have to delete the S3 bucket first because it makes debugging more difficult.
If my assumption is correct that you want to delete or update the Lambda CloudFormation stack frequently while not wanting to delete the S3 bucket, then there are at least two solutions to this problem:
Put a DeletionPolicy and an UpdateReplacePolicy on your S3 bucket (a minimal sketch follows this list). By adding these policies you can delete the CloudFormation stack while keeping the S3 bucket. This allows you to keep the S3 bucket and the Lambda function in one CloudFormation template. To create the new stack again, remove the S3 bucket resource from the template and manually import the resource back into the CloudFormation stack later.
Use Queue Configuration as Notification Configuration. This is a good approach if you plan on separating the CloudFormation Template in a S3 bucket template and a Lambda function template (a decision based on frequency of change and dependencies between the two templates). Put an SQS queue in the S3 bucket template. Create the CloudFormation stack based on the S3 bucket template. Use the SQS arn (as a CloudFormation template configuration parameter or use the ImportValue intrinsic function) in the Lambda function stack and let SQS trigger the Lambda function. I think this is the best approach since you can now remove the Lambda function stack without having to delete the S3 bucket stack. This way you effectively reduce the coupling between the two CloudFormation stacks since you make the SQS in the S3 bucket stack unaware of potential Lambda function listeners.
Finally, I think it is still possible to delete the S3 bucket CloudFormation stack first and the Image Processing Lambda CloudFormation stack second, although I assume this is not something you typically want to do.
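A minimal sketch of the first option (the bucket definition is illustrative; DeletionPolicy and UpdateReplacePolicy are standard CloudFormation resource attributes):
  Bucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain        # keep the bucket when the stack is deleted
    UpdateReplacePolicy: Retain   # keep the old bucket if an update forces replacement
    Properties:
      BucketName: !Ref BucketName
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: 's3:ObjectCreated:*'
            Function: !GetAtt ImageProcessor.Arn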