I have added one layer to my Lambda function through CloudFormation. Now I have a requirement to add one more layer to my function. Basically, I need two layers in my existing Lambda function. Is it possible? I tried searching the AWS docs but I don't see it.
Resources:
  LambdaLayer:
    Type: "AWS::Lambda::LayerVersion"
    Properties:
      CompatibleRuntimes:
        - python3.8
      Content:
        S3Bucket: !Sub "hello-${AWS::Region}"
        S3Key: !Sub "myapp/layer1.zip"
      LayerName: "layer1"
  LambdaFunction:
    Type: "AWS::Lambda::Function"
    Properties:
      Code:
        S3Bucket: hello
        S3Key: "myapp/function.zip"
      FunctionName: "hello-function"
      Handler: "hello-function.lambda_handler"
      Layers:
        - !Ref LambdaLayer
Yes, it is possible. Add additional layers in the same way you added the first one, appending numbers to the resource names to distinguish them:
Resources:
  LambdaLayer1:
    Type: "AWS::Lambda::LayerVersion"
    Properties:
      CompatibleRuntimes:
        - python3.8
      Content:
        S3Bucket: !Sub "hello-${AWS::Region}"
        S3Key: !Sub "myapp/layer1.zip"
      LayerName: "layer1"
  LambdaLayer2:
    Type: "AWS::Lambda::LayerVersion"
    Properties:
      ...
      LayerName: "layer2"
  LambdaFunction:
    Type: "AWS::Lambda::Function"
    Properties:
      Code:
        S3Bucket: hello
        S3Key: "myapp/function.zip"
      FunctionName: "hello-function"
      Handler: "hello-function.lambda_handler"
      Layers:
        - !Ref LambdaLayer1
        - !Ref LambdaLayer2
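If one of the layers already exists outside this template, you can also list its layer version ARN directly in Layers next to the !Ref (a sketch; the ARN below is a placeholder, not a real layer):

      Layers:
        - !Ref LambdaLayer1
        # Placeholder ARN of a layer version published elsewhere
        - "arn:aws:lambda:us-east-1:123456789012:layer:layer2:1"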
I have tried to use two different endpoints for different purposes. It works fine when I use only one endpoint, but the deployment fails with an error when I use two endpoints. Here is my code in the template.yml file:
DataConditionApiLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: !Sub ${Environment}-${Application}-condition-api
    Description: Lambda function behind the POST, GET, DELETE schedule api endpoints
    Timeout: 90
    Runtime: nodejs14.x
    MemorySize: 128
    Handler: !Sub src/handlers/condition/api.eventHandler
    Policies:
      - AWSLambdaBasicExecutionRole
      - DynamoDBCrudPolicy:
          TableName:
            !Ref DataConditionTable
      - SSMParameterReadPolicy:
          ParameterName: 'acf/*'
    Environment:
      Variables:
        Environment: !Ref Environment
        conditionTableName: !Ref DataConditionTable
        GreenhouseApiKeySSMPath: !Ref GreenhouseApiKeySSMPath
        ConsensysKeySSMPath: !Ref ConsensysKeySSMPath
        ConsensysBaseUri: !Ref ConsensysBaseUri
    Tags:
      Environment: !Ref Environment
      Application: !Ref Application
      CodePath: !Ref CodePath
    Events:
      HttpApiEvent:
        Type: HttpApi
        Properties:
          - Path: /datafilter/{dataConditionId}
            Method: GET
          - Path: /datafilter/condition
            Method: POST
Can anyone suggest a better way to implement multiple endpoints?
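One likely cause (a sketch, not a confirmed diagnosis): in SAM, each endpoint is normally declared as its own named event under Events, each with a single Path and Method, rather than a list under one event's Properties. The event names below (GetConditionApiEvent, PostConditionApiEvent) are illustrative:

    Events:
      GetConditionApiEvent:
        Type: HttpApi
        Properties:
          Path: /datafilter/{dataConditionId}
          Method: GET
      PostConditionApiEvent:
        Type: HttpApi
        Properties:
          Path: /datafilter/condition
          Method: POST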
So far I have this template.yml:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  My Lambda for doing something
Resources:
  FirstLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: FirstLayer
      Description: First layer of dependencies
      ContentUri: layers/first-layer/
      CompatibleRuntimes:
        - nodejs14.x
    Metadata:
      BuildMethod: nodejs14.x
  SecondLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: SecondLayer
      Description: Second layer of dependencies
      ContentUri: layers/second-layer/
      CompatibleRuntimes:
        - nodejs14.x
    Metadata:
      BuildMethod: nodejs14.x
  MyLambda:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: "MyLambda"
      Policies:
        - AmazonS3FullAccess
      CodeUri: src/
      Handler: lambda.handler
      Timeout: 30
      MemorySize: 2048 # Chrome will require higher memory
      Runtime: nodejs14.x
      Layers:
        - !Ref FirstLayer
        - !Ref SecondLayer
With this template I am able to start and invoke MyLambda locally and also deploy it to AWS. The problem is that I would like to reuse these same layers in other Lambdas as well. To do that, I could extract the layers into another yml file, deploy them separately, and then reference the layer ARNs in the Layers property of my Lambda. But then, how can I run it locally with sam? I wouldn't like to maintain two template.yml files for my Lambda: one that includes the layers in its Resources (like the one I already have) for running locally, and another with refs to the actual layer ARNs for deploying to AWS. That's the only solution I am seeing right now, though.
The first question you need to ask is whether those Lambdas belong to the same application. If that's not the case, you should use different templates so that you deploy different stacks and keep the environments isolated.
However, if you want to share resources, you have two very similar options:
Configure the layer in the parent template and pass the ARN as a parameter.
template.yml
Resources:
  SharedLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: shared_layer
      Description: Some code to share with the other lambda functions
      ContentUri: ./layer
      CompatibleRuntimes:
        - nodejs14.x
      RetentionPolicy: Delete
  Application:
    Type: "AWS::Serverless::Application"
    Properties:
      Location: "./app.template.yml"
      Parameters:
        SharedLayer: !Ref SharedLayer
app.template.yml
Parameters:
  SharedLayer:
    Type: String
    Description: ARN of the SharedLayer
Resources:
  LambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./
      Handler: index.handler
      Layers:
        - !Ref SharedLayer
Configure the layers in a nested template, set the ARN as an output, and then pass its output as a parameter to the other templates.
layers.template.yml
Resources:
  SharedLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: shared_layer
      Description: Some code to share with the other lambda functions
      ContentUri: ./layer
      CompatibleRuntimes:
        - nodejs14.x
      RetentionPolicy: Delete
Outputs:
  SharedLayerARN:
    Description: ARN of the Shared Layer
    Value: !Ref SharedLayer
template.yml
Resources:
  Layer:
    Type: "AWS::Serverless::Application"
    Properties:
      Location: "./layers.template.yml"
  Application:
    Type: "AWS::Serverless::Application"
    Properties:
      Location: "./app.template.yml"
      Parameters:
        SharedLayer: !GetAtt Layer.Outputs.SharedLayerARN
Both scenarios are supported by AWS SAM.
In a CloudFormation template, I define two S3 buckets.
Resources:
  Bucket1:
    Type: AWS::S3::Bucket
    Properties:
      ...
  Bucket2:
    Type: AWS::S3::Bucket
    Properties:
      ...
Outputs:
  Bucket1:
    Description: S3 Bucket
    Value: !Ref Bucket1
    Export:
      Name: !Sub "${AWS::StackName}:Bucket1"
  Bucket2:
    Description: S3 Bucket
    Value: !Ref Bucket2
    Export:
      Name: !Sub "${AWS::StackName}:Bucket2"
I use these exported buckets in two different CloudFormation templates.
Template 1
Parameters:
  LoaderCodeBucket:
    Type: String
Resources:
  MyLambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        S3Bucket:
          Fn::ImportValue:
            !Sub "${LoaderCodeBucket}:Bucket1"
Template 2
Parameters:
  ProcessorCodeBucket:
    Type: String
Resources:
  MyOtherLambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        S3Bucket:
          Fn::ImportValue:
            !Sub "${ProcessorCodeBucket}:Bucket2"
Template 1 passes aws cloudformation validate-template --template-body ... while Template 2 fails due to
Template error: the attribute in Fn::ImportValue must not depend on any resources, imported values, or Fn::GetAZs.
The only difference is that the Lambda function in template 2 is used in an AWS analytics application that is also defined in template 2.
I know for sure it's the S3 bucket section causing the issue, because when I remove that section of code, the template passes the validation check.
I have been using this site to try to debug this, but none of the existing questions seem to answer this particular issue.
This is all in the same region and the same account.
My question is:
Why is this particular section of code (template 2) throwing a template error when template 1 passes with no error?
This is a working example.
Template 1:
AWSTemplateFormatVersion: "2010-09-09"
Description: "Test"
Resources:
  MyBucketOne:
    Type: "AWS::S3::Bucket"
    Properties:
      BucketName: bucket-one-12341234
  MyBucketTwo:
    Type: "AWS::S3::Bucket"
    Properties:
      BucketName: bucket-two-12341234
Outputs:
  MyBucketOneOutput:
    Description: "Bucket Name of BucketOne"
    Value: !Ref MyBucketOne
    Export:
      Name: !Sub "${AWS::StackName}-BucketOne"
  MyBucketTwoOutput:
    Description: "Bucket Name of BucketTwo"
    Value: !Ref MyBucketTwo
    Export:
      Name: !Sub "${AWS::StackName}-BucketTwo"
Template 2: assuming the first stack is named my-s3, we can import the bucket as !ImportValue my-s3-BucketOne
AWSTemplateFormatVersion: "2010-09-09"
Description: "Test"
Resources:
  MyLambda:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x
      FunctionName: "test-s3-import"
      Code:
        S3Bucket: !ImportValue my-s3-BucketOne
        S3Key: "index.zip"
      Description: "Test Lambda"
      MemorySize: 128
      Timeout: 60
      Role: test-role-arn
If you do want to take the export name prefix from a Parameter, it becomes Fn::ImportValue: !Sub ${BucketExportNamePrefix}-BucketOne
AWSTemplateFormatVersion: "2010-09-09"
Description: "Test"
Parameters:
  BucketExportNamePrefix:
    Type: String
    Default: "my-s3"
Resources:
  MyLambda:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler: index.handler
      Runtime: nodejs12.x
      FunctionName: "test-s3-import"
      Code:
        S3Bucket:
          Fn::ImportValue: !Sub ${BucketExportNamePrefix}-BucketOne
        S3Key: "index.zip"
      Description: "Test Lambda"
      MemorySize: 128
      Timeout: 60
      Role: test-role-arn
I am trying to achieve something similar to the below in an AWS CloudFormation YAML file:
AWSTemplateFormatVersion: 2010-09-09
testAttribute = "test"
Resources:
  Lambda:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.7
      Role: !GetAtt iam.Arn
      MemorySize: 128
      Timeout: 10
      Handler: lambda_function.lambda_handler
      FunctionName: "testName"+${testAttribute}
      Description: 'This is my lambda'
      Code:
        S3Bucket: myBucket
        S3Key: "lambda/testName"+${testAttribute}+".zip"
I know that the above isn't quite correct, but I can't find a good answer when searching for how to achieve it. Does anyone have some guidance on this matter?
It depends on the use case, but if the "variable" is static and you don't need to change it when deploying the stack, I would suggest an alternative solution: use the Mappings section.
This allows you to define some static values without sending them when deploying the stack (you will have much cleaner deploy commands, and the logic would be on the template side instead of the deploy side).
In this case, I'm using the !Sub intrinsic function with a mapping (you can set multiple variables to be substituted using !Sub):
AWSTemplateFormatVersion: 2010-09-09
Mappings:
  attributes:
    lambda:
      testAttribute: "test"
Resources:
  Lambda:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.7
      Role: !GetAtt iam.Arn
      MemorySize: 128
      Timeout: 10
      Handler: lambda_function.lambda_handler
      FunctionName: !Sub
        - "testName${attr}"
        - {attr: !FindInMap [attributes, lambda, testAttribute]}
      Description: 'This is my lambda'
      Code:
        S3Bucket: myBucket
        S3Key: !Sub
          - "lambda/testName${attr}.zip"
          - {attr: !FindInMap [attributes, lambda, testAttribute]}
Note: Mappings have a mandatory three-level nesting; take this into consideration while designing your solution.
You could use Parameters with a default value, and Sub later in the template:
AWSTemplateFormatVersion: 2010-09-09
Parameters:
  testAttribute:
    Type: String
    Default: test
Resources:
  Lambda:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.7
      Role: !GetAtt iam.Arn
      MemorySize: 128
      Timeout: 10
      Handler: lambda_function.lambda_handler
      FunctionName: !Sub "testName${testAttribute}"
      Description: 'This is my lambda'
      Code:
        S3Bucket: myBucket
        S3Key: !Sub "lambda/testName${testAttribute}.zip"
The following AWS CloudFormation gives a circular dependency error. My understanding is that the dependencies flow like this: rawUploads -> generatePreview -> previewPipeline -> rawUploads. Although it doesn't seem like rawUploads depends on generatePreview, I guess CF needs to know what lambda to trigger when creating the bucket, even though the trigger is defined in the lambda part of the CloudFormation template.
I've found some resources online that talk about a similar issue, but they don't seem to apply here: https://aws.amazon.com/premiumsupport/knowledge-center/unable-validate-circular-dependency-cloudformation/
What are my options for breaking this circular dependency chain? Scriptable solutions are viable, but multiple deployments with manual changes are not for my use case.
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
  rawUploads:
    Type: 'AWS::S3::Bucket'
  previewAudioFiles:
    Type: 'AWS::S3::Bucket'
  generatePreview:
    Type: AWS::Serverless::Function
    Properties:
      Handler: generatePreview.handler
      Runtime: nodejs6.10
      CodeUri: .
      Environment:
        Variables:
          PipelineId: !Ref previewPipeline
      Events:
        BucketrawUploads:
          Type: S3
          Properties:
            Bucket: !Ref rawUploads
            Events: 's3:ObjectCreated:*'
  previewPipeline:
    Type: Custom::ElasticTranscoderPipeline
    Version: '1.0'
    Properties:
      ServiceToken:
        Fn::Join:
          - ":"
          - - arn:aws:lambda
            - Ref: AWS::Region
            - Ref: AWS::AccountId
            - function
            - aws-cloudformation-elastic-transcoder-pipeline-1-0-0
      Name: transcoderPipeline
      InputBucket:
        Ref: rawUploads
      OutputBucket:
        Ref: previewAudioFiles
One way is to give the S3 buckets explicit names so that later, instead of relying on Ref: bucketname, you can simply use the bucket name. That's obviously problematic if you want auto-generated bucket names and in those cases it's prudent to generate the bucket name from some prefix plus the (unique) stack name, for example:
InputBucket: !Join ["-", ['rawuploads', Ref: 'AWS::StackName']]
Another option is to use a single CloudFormation template but in 2 stages - the 1st stage creates the base resources (and whatever refs are not circular) and then you add the remaining refs to the template and do a stack update. Not ideal, obviously, so I would prefer the first approach.
You can also use the first technique in cases when you need a reference to an ARN, for example:
!Join ['/', ['arn:aws:s3:::logsbucket', 'AWSLogs', Ref: 'AWS::AccountId', '*']]
When using this technique, you may want to also consider using DependsOn because you have removed an implicit dependency which can sometimes cause problems.
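For instance, a minimal sketch of that last point, using the resource names from the question (assumptions: the bucket gets the stack-name prefix shown above, and the rest of the pipeline's properties stay as in the question):

  rawUploads:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: !Join ["-", ['rawuploads', Ref: 'AWS::StackName']]
  previewPipeline:
    Type: Custom::ElasticTranscoderPipeline
    # InputBucket is now a plain string, so CloudFormation no longer infers
    # that the bucket must exist first; declare the ordering explicitly.
    DependsOn: rawUploads
    Properties:
      # ServiceToken and the other properties stay as in the question
      Name: transcoderPipeline
      InputBucket: !Join ["-", ['rawuploads', Ref: 'AWS::StackName']]
      OutputBucket: !Ref previewAudioFiles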
This post helped me out in the end: https://aws.amazon.com/premiumsupport/knowledge-center/unable-validate-destination-s3/
I ended up configuring an SNS topic in CloudFormation. The bucket would push events on this topic, and the Lambda function listens to this topic. This way the dependency graph is as follows:
S3 bucket -> SNS topic -> SNS topic policy
Lambda function -> SNS topic
Lambda function -> transcoder pipeline
Something along the lines of this (some policies omitted)
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
  SNSTopic:
    Type: AWS::SNS::Topic
  SNSTopicPolicy:
    Type: AWS::SNS::TopicPolicy
    Properties:
      PolicyDocument:
        Id: MyTopicPolicy
        Version: '2012-10-17'
        Statement:
          - Sid: Statement-id
            Effect: Allow
            Principal:
              AWS: "*"
            Action: sns:Publish
            Resource:
              Ref: SNSTopic
            Condition:
              ArnLike:
                aws:SourceArn:
                  !Join ["-", ['arn:aws:s3:::rawuploads', Ref: 'AWS::StackName']]
      Topics:
        - Ref: SNSTopic
  rawUploads:
    Type: 'AWS::S3::Bucket'
    DependsOn: SNSTopicPolicy
    Properties:
      BucketName: !Join ["-", ['rawuploads', Ref: 'AWS::StackName']]
      NotificationConfiguration:
        TopicConfigurations:
          - Topic:
              Ref: "SNSTopic"
            Event: 's3:ObjectCreated:*'
  previewAudioFiles:
    Type: 'AWS::S3::Bucket'
  generatePreview:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: !Join ["-", ['generatepreview', Ref: 'AWS::StackName']]
      Handler: generatePreview.handler
      Runtime: nodejs6.10
      CodeUri: .
      Environment:
        Variables:
          PipelineId: !Ref previewPipeline
      Events:
        BucketrawUploads:
          Type: SNS
          Properties:
            Topic: !Ref "SNSTopic"
  previewPipeline:
    Type: Custom::ElasticTranscoderPipeline
    DependsOn: 'rawUploads'
    Version: '1.0'
    Properties:
      ServiceToken:
        Fn::Join:
          - ":"
          - - arn:aws:lambda
            - Ref: AWS::Region
            - Ref: AWS::AccountId
            - function
            - aws-cloudformation-elastic-transcoder-pipeline-1-0-0
      Name: transcoderPipeline
      InputBucket:
        !Join ["-", ['arn:aws:s3:::rawuploads', Ref: 'AWS::StackName']]
      OutputBucket:
        Ref: previewAudioFiles