I want to create an S3 bucket and trigger a Lambda function whenever a file is uploaded to the 'uploads' folder in that bucket. I want to create these resources with the Serverless Framework on AWS.
I have defined my S3 bucket configuration under 'provider.s3', and I am trying to reference that bucket under functions.hello.events.bucket.
However, I am getting the following error when I run sls package:
Serverless Error ----------------------------------------
MyS3Bucket - Bucket name must conform to pattern (?!^(\d{1,3}\.){3}\d{1,3}$)(^(([a-z0-9]|[a-z0-9][a-z0-9-]*[a-z0-9])\.)*([a-z0-9]|[a-z0-9][a-z0-9-]*[a-z0-9])$). Please check provider.s3.MyS3Bucket and/or s3 events of function "hello".
serverless.yml
service: some-service
frameworkVersion: '2'
useDotenv: true
provider:
name: aws
runtime: python3.8
lambdaHashingVersion: 20201221
s3:
MyS3Bucket:
bucketName: ${env:MY_BUCKET_NAME}
accessControl: Private
lifecycleConfiguration:
Rules:
- Id: ExpireRule
Status: Enabled
ExpirationInDays: '7'
package:
individually: true
functions:
hello:
name: my-lambda-function
handler: function.handler
memorySize: 128
timeout: 900
events:
- s3:
bucket: MyS3Bucket
event: s3:ObjectCreated:*
rules:
- prefix: uploads/
My next attempt was to define the S3 bucket under 'resources' and reference that bucket in the Lambda trigger. I am still getting the following warning message:
Serverless: Configuration warning at 'functions.hello.events[0].s3.bucket': should be string
serverless.yml
service: some-service
frameworkVersion: '2'
useDotenv: true
provider:
name: aws
runtime: python3.8
lambdaHashingVersion: 20201221
package:
individually: true
functions:
hello:
name: my-lambda-function
handler: handler.handler
memorySize: 128
timeout: 900
events:
- s3:
bucket:
Ref: MyS3Bucket
event: s3:ObjectCreated:*
rules:
- prefix: uploads/
existing: true
resources:
Resources:
MyS3Bucket:
Type: AWS::S3::Bucket
Properties:
AccessControl: Private
BucketName: 'test.bucket'
OwnershipControls:
Rules:
- ObjectOwnership: ObjectWriter
LifecycleConfiguration:
Rules:
- Id: ExpireRule
Status: Enabled
ExpirationInDays: '7'
You should use your actual bucket name, not the MyS3Bucket key:
events:
- s3:
bucket: ${env:MY_BUCKET_NAME}
Or create a custom S3 bucket name variable, e.g.
custom:
bucket: foo-thumbnails
And use it in the events:
events:
- s3:
bucket: ${self:custom.bucket}
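Putting those pieces together, a minimal sketch of the first serverless.yml (assuming the bucket name still comes from the MY_BUCKET_NAME environment variable) could look like this; the Framework creates the bucket itself and wires up the notification:
service: some-service
frameworkVersion: '2'
useDotenv: true

custom:
  bucket: ${env:MY_BUCKET_NAME}

provider:
  name: aws
  runtime: python3.8
  lambdaHashingVersion: 20201221

functions:
  hello:
    name: my-lambda-function
    handler: function.handler
    events:
      - s3:
          bucket: ${self:custom.bucket}   # a bucket name, not a logical resource ID
          event: s3:ObjectCreated:*
          rules:
            - prefix: uploads/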
Related
I have created an AWS Lambda function that runs when a new file is created in S3 at a specific path, and it works perfectly.
service: redshift
frameworkVersion: '2'
custom:
bucket: extapp
path_prefix: 'xyz'
database: ABC
schema: xyz_dbo
table_prefix: shipmentlog
user: admin
password: "#$%^&*(*&^%$%"
port: 5439
endpoint: "*********.redshift.amazonaws.com"
role: "arn:aws:iam::*****:role/RedshiftFileTransfer"
provider:
name: aws
runtime: python3.8
stage: prod
region: us-west-2
stackName: redshift-prod-copy
stackTags:
Service: "it"
lambdaHashingVersion: 20201221
memorySize: 128
timeout: 900
logRetentionInDays: 14
environment:
S3_BUCKET: ${self:custom.bucket}
S3_BUCKET_PATH_PREFIX: ${self:custom.path_prefix}
REDSHIFT_DATABASE: ${self:custom.database}
REDSHIFT_SCHEMA: ${self:custom.schema}
REDSHIFT_TABEL_PREFIX: ${self:custom.table_prefix}
REDSHIFT_USER: ${self:custom.user}
REDSHIFT_PASSWORD: ${self:custom.password}
REDSHIFT_PORT: ${self:custom.port}
REDSHIFT_ENDPOINT: ${self:custom.endpoint}
REDSHIFT_ROLE: ${self:custom.role}
iam:
role:
name: s3-to-redshift-copy
statements:
- Effect: Allow
Action:
- s3:GetObject
Resource: "arn:aws:s3:::${self:custom.bucket}/*"
functions:
copy:
handler: handler.run
events:
- s3:
bucket: ${self:custom.bucket}
event: s3:ObjectCreated:*
rules:
- prefix: ${self:custom.path_prefix}/
- suffix: .json
existing: true
package:
exclude:
- node_modules/**
- package*.json
- README.md
plugins:
- serverless-python-requirements
But when I deployed this function, another function named redshift-prod-custom-resource-existing-s3 was also deployed, which is a Node.js function. I want to understand why this second function is necessary for triggering the primary Lambda function when a new file is created in the S3 bucket at the specific path.
It's the Serverless Framework's way of attaching the trigger that invokes your Lambda to the S3 bucket. Because the event uses existing: true, the notification cannot be added through plain CloudFormation on a bucket the stack does not create, so the Framework deploys a CloudFormation Custom Resource (that Node.js helper function) to add the notification configuration to the existing bucket.
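If the bucket can be owned by this stack instead of pre-existing, one way to avoid the helper function is to drop existing: true and let the Framework create the bucket from the event definition, e.g. (a sketch reusing the custom variables from the question):
events:
  - s3:
      bucket: ${self:custom.bucket}
      event: s3:ObjectCreated:*
      rules:
        - prefix: ${self:custom.path_prefix}/
        - suffix: .json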
I am trying to call the detectText method from the Rekognition SDK, and it fails to access the S3 bucket. I am not sure how to grant the roles in the SAM template. Below is my SAM template:
GetTextFunction:
Type: AWS::Serverless::Function
Properties:
CodeUri: gettextfn/
Handler: text.handler
Runtime: nodejs12.x
Timeout: 3
MemorySize: 128
Environment:
Variables:
imagebucket: !Ref s3bucket
Events:
TextApiEvent:
Type: HttpApi
Properties:
Path: /gettext
Method: get
ApiId: !Ref myapi
It looks like your Lambda needs RekognitionDetectOnlyPolicy, and it also looks like you are missing the policy to read/write data from the S3 bucket. Have a look at the Policies: block below, added after Environment:
Environment:
Variables:
imagebucket: !Ref s3bucket
Policies:
- S3ReadPolicy:
BucketName: !Ref s3bucket
- RekognitionDetectOnlyPolicy: {}
Events:
You can refer to the complete list of AWS SAM policy templates here: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-policy-templates.html
Also have a look at a sample template here
https://github.com/rollendxavier/serverless_computing/blob/main/template.yaml
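For reference, Policies sits directly under Properties as a sibling of Environment and Events; a sketch of the full function from the question with those policies in place:
GetTextFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: gettextfn/
    Handler: text.handler
    Runtime: nodejs12.x
    Timeout: 3
    MemorySize: 128
    Environment:
      Variables:
        imagebucket: !Ref s3bucket
    Policies:
      - S3ReadPolicy:
          BucketName: !Ref s3bucket
      - RekognitionDetectOnlyPolicy: {}
    Events:
      TextApiEvent:
        Type: HttpApi
        Properties:
          Path: /gettext
          Method: get
          ApiId: !Ref myapi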
I am trying to get my Lambda to run when an image is added to a "folder" in an S3 bucket. Here is the template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: 1. Creates the S3 bucket that stores the images from the camera.\n
2. Resizes the images when a new image shows up from a camera.\n
3. Adds a record of the image in the DB.
Globals:
Function:
Timeout: 10
Parameters:
DeploymentStage:
Type: String
Default: production
Resources:
CameraImagesBucket:
Type: 'AWS::S3::Bucket'
Properties:
BucketName: !Sub
- com.wastack.camera.images.${stage}
- { stage: !Ref DeploymentStage }
CreateThumbnailFunction:
Type: AWS::Serverless::Function
Properties:
CodeUri: image_resize/
Handler: app.lambda_handler
Runtime: python3.8
Description: Creates a thumbnail of images in the camera_images bucket
Policies:
- S3ReadPolicy:
BucketName: !Sub
- com.wastack.camera.images.${stage}
- { stage: !Ref DeploymentStage }
- S3WritePolicy:
BucketName: !Sub
- com.wastack.camera.images.${stage}
- { stage: !Ref DeploymentStage }
Events:
CameraImageEvent:
Type: S3
Properties:
Bucket:
Ref: CameraImagesBucket
Events:
- 's3:ObjectCreated:*'
Filter:
S3Key:
Rules:
- Name: prefix
Value: camera_images
When I look at the Lambda created in the AWS console, I do not see the trigger, even in the Lambda visualiser. The Lambda doesn't even have the S3 read and write policies attached to it.
The S3 bucket and the Lambda are created, but the policies and triggers that are supposed to connect them are not.
I did not get any errors when I ran sam deploy.
Question: why did it not attach the S3 trigger event or the S3 access policies to the Lambda function?
Regarding the S3 policies: the template itself is straightforward. If you deploy the full template as posted, does it work? If that also fails, check the permissions of the identity you are running SAM as. There is also an open ticket on GitHub that appears to match your issue; see the comments there.
I'm trying to create a serverless automation: an S3 bucket that triggers an event every time a CSV file is uploaded and sends it to SQS. Right now I am able to create all the resources just fine; even the S3 bucket is created. However, the block of code under NotificationConfiguration is what is actually crashing.
Right now this is what I have:
service: ms-test
provider:
name: aws
runtime: nodejs12.x
stage: ${opt:stage, 'dev'}
region: us-east-1
resources:
Resources:
testQueue:
Type: AWS::SQS::Queue
Properties:
DelaySeconds: 0
MaximumMessageSize: 262144
MessageRetentionPeriod: 1209600
QueueName: ${self:provider.stage}-testConsumer
ReceiveMessageWaitTimeSeconds: 0
VisibilityTimeout: 5400
S3EventQueuePolicy:
Type: AWS::SQS::QueuePolicy
DependsOn: testQueue
Properties:
PolicyDocument:
Id: SQSPolicy
Statement:
- Effect: Allow
Action: SQS:SendMessage
Resource:
Fn::GetAtt:
- "testQueue"
- "Arn"
Condition:
ArnLike:
aws:SourceArn: arn:aws:s3:us-east-1:::${self:provider.stage}-testBucket
Queues:
- Ref: testQueue
testBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: ${self:provider.stage}-testBucket
AccessControl: Private
CorsConfiguration:
CorsRules:
-
AllowedMethods:
- "PUT"
- "POST"
- "GET"
- "DELETE"
- "HEAD"
AllowedOrigins:
- "*"
NotificationConfiguration:
QueueConfigurations:
- Event: s3:ObjectCreated:*
Filter:
S3Key:
Rules:
- Name: suffix
Value: .csv
Queue: arn:aws:sqs:us-east-1:302082700830:${self:provider.stage}-testConsumer
However, I am receiving this error: An error occurred: testBucket - Unable to validate the following destination configurations (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument; Request ID: D1359608708E5F4D; S3 Extended Request ID: aD6Lx0/DHbejSHoXcYdFluaIcaNTZ5UByv+W+4WiWjtipLeLHPb0kmX6xX/n1pxBtECPc3FzE9g=; Proxy: null).
Can anyone help with this?
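Two things stand out in the configuration above and may explain the validation failure (an observation, not a tested fix): the queue policy statement has no Principal allowing the S3 service to send messages, and the SourceArn is not a valid S3 ARN, since S3 bucket ARNs contain no region or account ID (and bucket names must be lowercase). A sketch of a corrected queue policy, keeping the names from the question:
S3EventQueuePolicy:
  Type: AWS::SQS::QueuePolicy
  DependsOn: testQueue
  Properties:
    PolicyDocument:
      Id: SQSPolicy
      Statement:
        - Effect: Allow
          Principal:
            Service: s3.amazonaws.com        # allow the S3 service to deliver to the queue
          Action: SQS:SendMessage
          Resource:
            Fn::GetAtt: [testQueue, Arn]
          Condition:
            ArnLike:
              aws:SourceArn: arn:aws:s3:::${self:provider.stage}-testbucket   # no region/account in S3 ARNs, lowercase name
    Queues:
      - Ref: testQueue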
I was following this guide to deploy my Lambda. The solution used their own .template file to deploy it. However, I needed to make some code changes to that Lambda, so I uploaded my changed Lambda code to my own bucket, and changed the .template to work with my own bucket.
The original template
CustomResource:
S3Bucket: solutions
S3Key: >-
serverless-image-handler/v3.0.0/serverless-image-handler-custom-resource.zip
Name: serverless-image-handler-custom-resource
Handler: image_handler_custom_resource/cfn_custom_resource.lambda_handler
Description: >-
Serverless Image Handler: CloudFormation custom resource function
invoked during CloudFormation create, update, and delete stack
operations.
Runtime: python2.7
Timeout: '60'
MemorySize: '128'
My Customized Template (uses my bucket)
CustomResource:
S3Bucket: my-bucket
S3Key: >-
serverless-image-handler/serverless-image-handler-custom-resource.zip
Name: serverless-image-handler-custom-resource
Handler: image_handler_custom_resource/cfn_custom_resource.lambda_handler
Description: >-
Serverless Image Handler: CloudFormation custom resource function
invoked during CloudFormation create, update, and delete stack
operations.
Runtime: python2.7
Timeout: '60'
MemorySize: '128'
Of course, in my bucket I put the package under the correct path serverless-image-handler/serverless-image-handler-custom-resource.zip. However, when trying to deploy, I'm getting the following error.
Error occurred while GetObject. S3 Error Code: NoSuchBucket. S3 Error Message: The specified bucket does not exist (Service: AWSLambda; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: 6b666b56-dc62-11e8-acb0-8df0d82e071b)
It's like it can't "see" my own bucket, but it sees the bucket solutions. How can I make it see my bucket?
EDIT
Part of the template where the bucket is defined.
Parameters:
OriginS3Bucket:
Description: S3 bucket that will source your images.
Default: original-images-bucket-name
Type: String
ConstraintDescription: Must be a valid S3 Bucket.
MinLength: '1'
MaxLength: '64'
AllowedPattern: '[a-zA-Z][a-zA-Z0-9-.]*'
OriginS3BucketRegion:
Description: S3 bucket Region that will source your images.
Default: eu-central-1
Type: String
AllowedValues:
- ap-south-1
- ap-northeast-1
- ap-northeast-2
- ap-southeast-1
- ap-southeast-2
- ca-central-1
- eu-central-1
- eu-west-1
- eu-west-2
- eu-west-3
- sa-east-1
- us-east-1
- us-east-2
- us-west-1
- us-west-2
Part of the template that threw an error.
CustomResource:
Type: 'AWS::Lambda::Function'
DependsOn:
- CustomResourceLoggingPolicy
- CustomResourceDeployPolicy
Properties:
Code:
S3Bucket: !Join
- ''
- - !FindInMap
- Function
- CustomResource
- S3Bucket
- '-'
- !Ref 'AWS::Region'
S3Key: !FindInMap
- Function
- CustomResource
- S3Key
MemorySize: !FindInMap
- Function
- CustomResource
- MemorySize
Handler: !FindInMap
- Function
- CustomResource
- Handler
Role: !GetAtt
- CustomResourceRole
- Arn
Timeout: !FindInMap
- Function
- CustomResource
- Timeout
Runtime: !FindInMap
- Function
- CustomResource
- Runtime
Description: !FindInMap
- Function
- CustomResource
- Description
Environment:
Variables:
LOG_LEVEL: INFO
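Note that the Code.S3Bucket value above is not used verbatim: the !Join appends '-' plus the deploying region to the mapped name, so with S3Bucket: my-bucket in eu-central-1 the stack looks for a bucket literally named my-bucket-eu-central-1, which would explain the NoSuchBucket error. One option (an assumption about the intended fix) is to create your bucket with that region suffix in its name; another is to drop the join so the mapping value is used as-is, sketched below:
Code:
  S3Bucket: !FindInMap
    - Function
    - CustomResource
    - S3Bucket
  S3Key: !FindInMap
    - Function
    - CustomResource
    - S3Key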