I want to create a Lambda function that is triggered from an S3 bucket created within the same CloudFormation stack, but I cannot get the syntax quite right.
The event should only be fired when an object is uploaded to /uploads. I also need to specify some bucket properties (CORS).
S3 bucket definition in resources:
resources:
  Resources:
    myBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: my-bucket
        # CORS properties...
Event in function definition:
events:
  - s3:
      bucket: myBucket
      event: s3:ObjectCreated:Put
      rules:
        - prefix: uploads/
I do not want to use existing: true because it creates additional helper resources for this simple task. I cannot find any documentation or examples that fit my case.
The existing: true flag only applies to S3 buckets created outside of your serverless project, i.e. buckets that already exist, which is not the case here.
The situation you face is that you can't use the typical Serverless Framework convenience of defining the bucket inline in the Lambda event trigger, like this:
functions:
  users:
    handler: users.handler
    events:
      - s3:
          bucket: photos
          event: s3:ObjectRemoved:*
The reason you can't use that shorthand is that it creates the photos bucket itself and does not allow you to supply additional bucket configuration, e.g. CORS rules or a bucket policy.
The solution to this is to create the S3 bucket in the provider's s3 configuration, with the CORS policy, and then refer to the bucket from your Lambda function event configuration. For example:
provider:
  s3:
    photosBucket:
      name: photos
      versioningConfiguration:
        Status: Enabled
      corsConfiguration:
        CorsRules:
          - # rule1 here
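You then point the function's event at that bucket via its provider-level key. A minimal sketch (the function name and handler are placeholders):
functions:
  photoHandler:
    handler: photos.handler
    events:
      - s3:
          bucket: photosBucket # refers to the key under provider.s3, not the bucket name
          event: s3:ObjectCreated:Put
          rules:
            - prefix: uploads/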
Related
I am trying to create a Lambda function whose code lives in an S3 bucket. Below is my template.
This template creates the Lambda but not the S3 bucket it references. I am looking for assistance to create the S3 bucket through this template.
Resources:
  ProducerLambda:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: !Sub '${ApplicationId}-${env}-Producer-Lambda-${AWS::AccountId}-${AWS::Region}'
      Handler: index.handler
      Runtime: nodejs14.x
      CodeUri:
        Bucket: s3bucket
        Key: s3bucketref.zip
      Role: 'arn:aws:iam::${AWS::AccountId}:role/Producer-lambda-trigger-role'
      VpcConfig:
        SecurityGroupIds: !Ref SecurityGroup
        SubnetIds: !Ref VPCSubnetId
      Environment:
        Variables:
          Region: !Sub '${AWS::Region}'
CodeUri specifies the location of the function's code: an Amazon S3 URI, the path to a local folder, or a FunctionCode object.
It is not used to create S3 buckets.
If the packaged function does not exist at s3bucket/s3bucketref.zip, you will have to create the bucket yourself and upload the package.
Alternatively, sam build will build the Lambda for you, and sam deploy will then automatically create a deployment S3 bucket for you:
Deploying AWS Lambda functions through AWS CloudFormation requires an Amazon Simple Storage Service (Amazon S3) bucket for the Lambda deployment package. The AWS SAM CLI creates and manages this Amazon S3 bucket for you.
The latter is much simpler to manage.
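For reference, a minimal sketch of that workflow (on the first run, sam deploy --guided prompts for deployment settings and saves them to samconfig.toml for later runs):
sam build
sam deploy --guided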
To preface this, I'm very new to CloudFormation. I'm trying to build a template that will deploy a fairly simple environment with two services.
I need an S3 bucket that sends a message to SQS whenever an object is created. When creating these assets, the S3 configuration must include a pointer to the SQS queue, but the SQS queue must have a policy that specifically grants the S3 bucket permission to send messages. This creates a circular dependency. To break this cycle I would like to do the following:
Create S3 bucket
Create SQS queue, reference S3 bucket
Modify the S3 bucket to reference SQS queue.
When I try this, I get an error telling me it can't find the SQS queue. When I add a DependsOn attribute in step 3, it errors out with a circular dependency.
Can you declare a resource, then re-declare it with new properties later in the template? If so, how would you do that? Am I approaching this wrong?
What leads to circular dependencies in such scenarios is the use of intrinsic functions like Ref or Fn::GetAtt, which require the referenced resources to be available. To avoid this, you can construct a resource ARN without referring to the resource itself. Here is an example template where CloudFormation does the following:
Create a queue
Add a queue policy to grant permissions to a non-existent bucket
Create the bucket
Template:
Parameters:
  BucketName:
    Description: S3 Bucket name
    Type: String
    Default: mynewshinybucket
Resources:
  Queue:
    Type: AWS::SQS::Queue
  QueuePolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      Queues:
        - !Ref Queue
      PolicyDocument:
        Statement:
          - Effect: Allow
            Action: SQS:SendMessage
            Resource: !GetAtt Queue.Arn
            Principal:
              AWS: '*'
            Condition:
              ArnLike:
                # Specify the bucket ARN by referring to the parameter instead of the actual bucket resource, which does not exist yet
                aws:SourceArn: !Sub arn:aws:s3:::${BucketName}
  Bucket:
    Type: AWS::S3::Bucket
    # Create the bucket after the queue policy to avoid "Unable to validate the following destination configurations" errors
    DependsOn: QueuePolicy
    Properties:
      BucketName: !Ref BucketName
      NotificationConfiguration:
        QueueConfigurations:
          - Event: 's3:ObjectCreated:Put'
            Queue: !GetAtt Queue.Arn
Edit:
When using Ref/GetAtt/Sub to retrieve values from another resource, all of them require that resource to be available.
CloudFormation will make sure that the resource that uses the function is always created after the referenced resource. This is how circular dependencies are detected.
Sub is used for string substitution but works exactly like Ref when used with parameters or resources (source).
The point is that we are referring to a parameter (and not a resource); parameters are always available.
Using Sub is a bit simpler in this case, because using Ref would require an additional Join. For example, this would give you the same result:
aws:SourceArn: !Join
  - ''
  - - 'arn:aws:s3:::'
    - !Ref BucketName
Another way would be to hard-code the bucket ARN without using any intrinsic functions. The important thing is not to reference the bucket itself to avoid the circular dependency.
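For instance, reusing the default bucket name from the template above (hard-coded, so it must stay in sync with the real bucket name):
aws:SourceArn: arn:aws:s3:::mynewshinybucket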
When writing the AWS CloudFormation template to create a Lambda function, the 'Code' field is required.
I found the documentation here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html
The document says you can specify the source of your Lambda function as a zip file in an S3 bucket. And in the S3Bucket field, it says "You can specify a bucket from another AWS account as long as the Lambda function and the bucket are in the same region."
If you put a bucket name in the S3Bucket field, it will try to find the bucket in the same AWS account. So my question is how can I specify a bucket from another AWS account?
A YAML snippet I created for the CloudFormation template:
MyLambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    Handler: index.handler
    Runtime: nodejs6.10
    Role: !GetAtt LambdaRole.Arn
    FunctionName: 'MyLambda'
    MemorySize: 1024
    Timeout: 30
    Code:
      S3Bucket: 'my-bucket'
      S3Key: 'my-key'
An S3 bucket is an S3 bucket. It doesn't matter which AWS account it is in. If you have permission to access the bucket then you can access it.
Simply provide the name of the S3 bucket (it must be in the same region as the Lambda function in this case) and make sure the credentials you are deploying with are allowed to access it.
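For the cross-account case, the bucket owner typically grants your account read access with a bucket policy. A sketch, expressed as an AWS::S3::BucketPolicy in the bucket owner's account (the account ID and names are placeholders):
BucketAccessPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: my-bucket
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            AWS: arn:aws:iam::111122223333:root # the account that deploys the Lambda stack
          Action: s3:GetObject
          Resource: arn:aws:s3:::my-bucket/*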
If you are deploying your CloudFormation stack in multiple AWS regions, you can quickly create identical S3 buckets in each of these regions using a tool like cfs3-uploader.
I am trying to create a Lambda function from a CloudFormation template based on this example:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/quickref-lambda.html
As can be seen from this link:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html
there is no way to add a trigger for the Lambda function (like an S3 upload trigger).
Is there a workaround to specify the trigger while writing the template?
You can use a CloudWatch Events rule to trigger your Lambda function:
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  MyCloudWatchRule:
    Type: "AWS::Events::Rule"
    Properties:
      Description: "Rule to trigger lambda"
      Name: "MyCloudWatchRule"
      EventPattern: <Provide valid JSON event pattern>
      State: "ENABLED"
      Targets:
        - Arn: "arn:aws:lambda:us-west-2:12345678:function:MyLambdaFunction"
          Id: "1234567-acvd-awse-kllpk-123456789"
References:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatchEventsandEventPatterns.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-events-rule.html#aws-resource-events-rule-syntax
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-events-rule-target.html
It's been a while so I imagine you've solved the problem, but I'll put in my 2 cents to help others.
It's best to use SAM (Serverless Application Model) for this kind of thing, so use AWS::Serverless::Function instead of AWS::Lambda::Function:
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-function.html
In there, you can specify event sources; the event Type accepts the following values (a sketch follows the list):
S3
SNS
Kinesis
DynamoDB
SQS
Api
Schedule
CloudWatchEvent
CloudWatchLogs
IoTRule
AlexaSkill
Cognito
HttpApi
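For the original question (an S3 upload trigger), a minimal sketch; the logical IDs, handler, and runtime are placeholders, and note that SAM's S3 event requires the bucket to be declared in the same template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  UploadBucket:
    Type: AWS::S3::Bucket
  ProcessUpload:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs18.x
      CodeUri: ./src
      Events:
        OnUpload:
          Type: S3
          Properties:
            Bucket: !Ref UploadBucket # must reference a bucket defined in this template
            Events: s3:ObjectCreated:*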
SAM does the rest of the work. Follow this guide for the rest of the details:
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-deploying.html
Nowadays, this issue is fixed by Amazon:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-events-rule.html#aws-resource-events-rule--examples
Just create the Lambda permission like in that example; a sketch follows.
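A sketch of that permission, wired up to the MyCloudWatchRule example above (logical IDs assumed from that snippet):
PermissionForEventsToInvokeLambda:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: MyLambdaFunction
    Action: lambda:InvokeFunction
    Principal: events.amazonaws.com
    SourceArn: !GetAtt MyCloudWatchRule.Arn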
A Lambda function can be triggered by several AWS resources, such as S3, SNS, SQS, API Gateway, etc. Check out the full list in the AWS docs.
I suggest you use Altostra Designer, which lets you create and configure a Lambda function quickly and choose what will trigger it.
You need to add a NotificationConfiguration to the S3 bucket definition. However, this will lead to a circular dependency where the S3 bucket refers to the Lambda function and the Lambda function refers to the S3 bucket.
To avoid this circular dependency, create all resources (including the S3 bucket and the Lambda function) without specifying the notification configuration. Then, after you have created your stack, update the template with a notification configuration and then update the stack.
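The notification configuration added in that second update would look roughly like this (logical IDs are placeholders); an AWS::Lambda::Permission allowing s3.amazonaws.com to invoke the function is also required:
MyBucket:
  Type: AWS::S3::Bucket
  Properties:
    NotificationConfiguration:
      LambdaConfigurations:
        - Event: s3:ObjectCreated:*
          Function: !GetAtt MyLambdaFunction.Arn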
Here is a SAM-based YAML example for a CloudWatch log group trigger:
lambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri:
      Bucket: someBucket
      Key: someKey
    Description: someDescription
    Handler: function.lambda_handler
    MemorySize:
      Ref: MemorySize
    Runtime: python3.7
    Role: !GetAtt 'iamRole.Arn'
    Timeout:
      Ref: Timeout
    Events:
      NRSubscription0:
        Type: CloudWatchLogs
        Properties:
          LogGroupName: 'someLogGroupName'
          FilterPattern: "" # match everything
For an S3 example event, see https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-property-function-s3.html
I want to add a trigger event on a Lambda function for an already existing bucket, and for that I am using the configuration below:
events:
  - s3:
      bucket: serverlesstest
      event: s3:ObjectCreated:*
      rules:
        - prefix: uploads/
        - suffix: .pdf
where the bucket serverlesstest already exists on S3.
This configuration throws the error:
An error occurred while provisioning your stack: S3BucketServerlesstest - serverlesstest already exists.
How can I resolve this error using the Serverless Framework?
It's not currently possible in the core framework because of how CloudFormation behaves.
But you can use this plugin.
https://github.com/matt-filion/serverless-external-s3-event
After installing serverless-plugin-existing-s3 with npm install serverless-plugin-existing-s3, add the plugin to serverless.yml:
plugins:
  - serverless-plugin-existing-s3
Give your deployment permission to access the bucket:
provider:
  name: aws
  runtime: nodejs4.3
  iamRoleStatements:
    ...
    - Effect: "Allow"
      Action:
        - "s3:PutBucketNotification"
      Resource:
        Fn::Join:
          - ""
          - - "arn:aws:s3:::BUCKET_NAME or *"
Then use the existingS3 event type, not plain s3:
functions:
  someFunction:
    handler: index.handler
    events:
      - existingS3:
          bucket: BUCKET_NAME
          events:
            - s3:ObjectCreated:*
          rules:
            - prefix: images/
            - suffix: .jpg
After running sls deploy, you can attach the event with the sls s3deploy command.
There is also a feature proposal for native support, so it may be added someday:
https://github.com/serverless/serverless/issues/4241
This is possible as of serverless version v1.47.0, by adding the existing: true flag to your event configuration: https://serverless.com/framework/docs/providers/aws/events/s3/
Example from the source:
functions:
  users:
    handler: users.handler
    events:
      - s3:
          bucket: legacy-photos
          event: s3:ObjectCreated:*
          rules:
            - prefix: uploads/
            - suffix: .jpg
          existing: true # <- this makes it work with existing buckets
The source provides the following caveats:
IMPORTANT: You can only attach 1 existing S3 bucket per function.
NOTE: Using the existing config will add an additional Lambda function and IAM role to your stack. The Lambda function backs the custom S3 resource that is used to support existing S3 buckets.
Unfortunately, you can't specify an existing S3 bucket to trigger the Lambda function, because the Serverless Framework* can't change existing infrastructure using CloudFormation. This configuration requires that you create a new bucket.
You can read more in the following issues that were open on GitHub:
Can't subscribe to events of existing S3 bucket
s3 events can't refer to existing bucket
* I would try to configure this trigger using the AWS Console or the SDK instead of the Serverless Framework.
serverless.yml is very sensitive to indentation. For me this advice was helpful.
If your config looks like this:
functions:
  hello:
    handler: handler.main
    events:
      - s3:
        bucket: codepipeline-us-east-1-213458767560
        event: s3:ObjectCreated:*
        rules:
          - prefix: test/MyAppBuild
you need to indent bucket, event, and rules two more spaces:
functions:
  hello:
    handler: handler.main
    events:
      - s3:
          bucket: codepipeline-us-east-1-213458767560
          event: s3:ObjectCreated:*
          rules:
            - prefix: test/MyAppBuild
If the bucket was created by Serverless elsewhere in the stack, then you could use:
- s3:
    bucket: { Ref: serverlesstest }
Otherwise you'll have to construct the name or ARN yourself.