I have created an AWS Lambda function that runs when a new file is created in S3 at a specific path, and it works perfectly.
service: redshift
frameworkVersion: '2'

custom:
  bucket: extapp
  path_prefix: 'xyz'
  database: ABC
  schema: xyz_dbo
  table_prefix: shipmentlog
  user: admin
  password: "#$%^&*(*&^%$%"
  port: 5439
  endpoint: "*********.redshift.amazonaws.com"
  role: "arn:aws:iam::*****:role/RedshiftFileTransfer"

provider:
  name: aws
  runtime: python3.8
  stage: prod
  region: us-west-2
  stackName: redshift-prod-copy
  stackTags:
    Service: "it"
  lambdaHashingVersion: 20201221
  memorySize: 128
  timeout: 900
  logRetentionInDays: 14
  environment:
    S3_BUCKET: ${self:custom.bucket}
    S3_BUCKET_PATH_PREFIX: ${self:custom.path_prefix}
    REDSHIFT_DATABASE: ${self:custom.database}
    REDSHIFT_SCHEMA: ${self:custom.schema}
    REDSHIFT_TABEL_PREFIX: ${self:custom.table_prefix}
    REDSHIFT_USER: ${self:custom.user}
    REDSHIFT_PASSWORD: ${self:custom.password}
    REDSHIFT_PORT: ${self:custom.port}
    REDSHIFT_ENDPOINT: ${self:custom.endpoint}
    REDSHIFT_ROLE: ${self:custom.role}
  iam:
    role:
      name: s3-to-redshift-copy
      statements:
        - Effect: Allow
          Action:
            - s3:GetObject
          Resource: "arn:aws:s3:::${self:custom.bucket}/*"

functions:
  copy:
    handler: handler.run
    events:
      - s3:
          bucket: ${self:custom.bucket}
          event: s3:ObjectCreated:*
          rules:
            - prefix: ${self:custom.path_prefix}/
            - suffix: .json
          existing: true

package:
  exclude:
    - node_modules/**
    - package*.json
    - README.md

plugins:
  - serverless-python-requirements
But when I deployed this function, another function was also deployed, named redshift-prod-custom-resource-existing-s3, which is a Node.js function. I want to understand why this second function is necessary for triggering the primary Lambda function when a new file is created in the S3 bucket at the specific path.
It's the Serverless Framework's way of adding the trigger that calls your Lambda to the existing S3 bucket. Because the bucket is not created in this stack (existing: true), CloudFormation cannot attach the notification configuration directly, so the Framework deploys a helper Lambda as a CloudFormation Custom Resource that calls the S3 API to set up the notification for you.
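For comparison, here is a minimal sketch (assuming the bucket can be created and owned by this stack) in which Serverless provisions the bucket itself; without existing: true, the bucket becomes part of the CloudFormation template and no helper Custom Resource function is deployed:

functions:
  copy:
    handler: handler.run
    events:
      - s3:
          bucket: ${self:custom.bucket} # created by this stack, not pre-existing
          event: s3:ObjectCreated:*
          rules:
            - prefix: ${self:custom.path_prefix}/
            - suffix: .json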
I'm trying to generate presigned URLs to upload objects to an S3 bucket, but whenever I execute PUT requests against the presigned URLs I get 'Access Denied' errors. This is the code I am using to generate the URLs (it runs inside a Node.js Lambda function):
// Express-style route inside the Lambda; assumes the AWS SDK v2 and uuid
// packages are bundled, and that BucketName is defined elsewhere (e.g. env).
const AWS = require('aws-sdk');
const { v4: uuid } = require('uuid');

const s3 = new AWS.S3();

app.post('/upload', async (req, res) => {
  const filename = `${uuid()}.jpg`;
  // Presign a PUT; the client must upload with the same Content-Type and ACL
  const url = s3.getSignedUrl('putObject', {
    Bucket: BucketName,
    Key: filename,
    Expires: 3600,
    ContentType: 'image/jpeg',
    ACL: 'public-read',
  });
  res.status(200).json({ url, filename });
});
And here are the permissions from my serverless.yml file:
service: my-service

provider:
  name: aws
  runtime: nodejs12.x
  profile: my-profile
  region: eu-west-2
  iamRoleStatements:
    - Effect: 'Allow'
      Action:
        - 's3:PutObject'
        - 's3:PutObjectAcl'
      Resource:
        - !GetAtt galleryBucket.Arn
I was applying the PutObject and PutObjectAcl permissions to the galleryBucket itself instead of to the objects within it. You must apply the s3:PutObject permission to the objects inside the bucket, not to the bucket itself.
I updated my permissions to this and the PUT requests succeeded:
service: my-service

provider:
  name: aws
  runtime: nodejs12.x
  profile: my-profile
  region: eu-west-2
  iamRoleStatements:
    - Effect: 'Allow'
      Action:
        - 's3:PutObject'
        - 's3:PutObjectAcl'
      Resource:
        - Fn::Join: ['', ['arn:aws:s3:::', !Ref galleryBucket, '/*']]
Note the /* at the end of the resource identifier.
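As a general pattern (a sketch reusing the galleryBucket name from above): bucket-level actions such as s3:ListBucket are granted on the bucket ARN itself, while object-level actions such as s3:PutObject are granted on the bucket ARN with /* appended:

iamRoleStatements:
  # Bucket-level action: targets the bucket ARN itself
  - Effect: 'Allow'
    Action:
      - 's3:ListBucket'
    Resource:
      - !GetAtt galleryBucket.Arn
  # Object-level actions: target the objects, hence the /* suffix
  - Effect: 'Allow'
    Action:
      - 's3:PutObject'
      - 's3:PutObjectAcl'
    Resource:
      - Fn::Join: ['', [!GetAtt galleryBucket.Arn, '/*']]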
I want to create an S3 bucket and trigger a Lambda function whenever a file is uploaded to the 'uploads' folder in the bucket. I want to create those resources using the Serverless Framework in AWS.
I have defined my S3 bucket configuration under 'provider.s3', and I am referencing that bucket under functions.hello.events.bucket.
However, I am getting the following error when I run sls package:
Serverless Error ----------------------------------------
MyS3Bucket - Bucket name must conform to pattern (?!^(\d{1,3}\.){3}\d{1,3}$)(^(([a-z0-9]|[a-z0-9][a-z0-9-]*[a-z0-9])\.)*([a-z0-9]|[a-z0-9][a-z0-9-]*[a-z0-9])$). Please check provider.s3.MyS3Bucket and/or s3 events of function "hello".
serverless.yml
service: some-service
frameworkVersion: '2'
useDotenv: true

provider:
  name: aws
  runtime: python3.8
  lambdaHashingVersion: 20201221
  s3:
    MyS3Bucket:
      bucketName: ${env:MY_BUCKET_NAME}
      accessControl: Private
      lifecycleConfiguration:
        Rules:
          - Id: ExpireRule
            Status: Enabled
            ExpirationInDays: '7'

package:
  individually: true

functions:
  hello:
    name: my-lambda-function
    handler: function.handler
    memorySize: 128
    timeout: 900
    events:
      - s3:
          bucket: MyS3Bucket
          event: s3:ObjectCreated:*
          rules:
            - prefix: uploads/
My next try was defining the S3 bucket under 'resources' and using a reference to the bucket in the Lambda trigger. I am still getting warning messages:
Serverless: Configuration warning at 'functions.hello.events[0].s3.bucket': should be string
serverless.yml
service: some-service
frameworkVersion: '2'
useDotenv: true

provider:
  name: aws
  runtime: python3.8
  lambdaHashingVersion: 20201221

package:
  individually: true

functions:
  hello:
    name: my-lambda-function
    handler: handler.handler
    memorySize: 128
    timeout: 900
    events:
      - s3:
          bucket:
            Ref: MyS3Bucket
          event: s3:ObjectCreated:*
          rules:
            - prefix: uploads/
          existing: true

resources:
  Resources:
    MyS3Bucket:
      Type: AWS::S3::Bucket
      Properties:
        AccessControl: Private
        BucketName: 'test.bucket'
        OwnershipControls:
          Rules:
            - ObjectOwnership: ObjectWriter
        LifecycleConfiguration:
          Rules:
            - Id: ExpireRule
              Status: Enabled
              ExpirationInDays: '7'
You should use your bucket name, not MyS3Bucket:
events:
  - s3:
      bucket: ${env:MY_BUCKET_NAME}
Alternatively, create a custom bucket-name variable, e.g.

custom:
  bucket: foo-thumbnails

and use it in the events:

events:
  - s3:
      bucket: ${self:custom.bucket}
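Putting it together, a minimal sketch (assuming the bucket name still comes from MY_BUCKET_NAME as in the first attempt) where the event refers to the bucket by its name string rather than by a CloudFormation Ref:

custom:
  bucket: ${env:MY_BUCKET_NAME}

functions:
  hello:
    handler: handler.handler
    events:
      - s3:
          bucket: ${self:custom.bucket} # plain name string, not a Ref
          event: s3:ObjectCreated:*
          rules:
            - prefix: uploads/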
I have the below configuration in serverless.yml, but it doesn't deploy the websocket connection. I wonder what could be wrong with my configuration. I have followed these instructions: https://www.serverless.com/framework/docs/providers/aws/events/websocket/
service:
  name: ${opt:componentName}-api

plugins:
  - '#hewmen/serverless-plugin-typescript'

provider:
  name: aws
  runtime: nodejs12.x
  region: ap-southeast-2
  websocketsApiName: custom-websockets-api-name
  websocketApiRouteSelectionExpression: $request.body.action
  stackName: ${opt:stage}-${self:service.name}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - logs:Create*
        - logs:Get*
      Resource: "*"
    - Effect: Allow
      Action:
        - dynamodb:*
      Resource: "*"

functions:
  wsHandler:
    handler: src/websocketLambda.handleWebSocket
    name: ${self:provider.stackName}-ws
    evnets:
      - websocket: $default
The output of serverless deploy is:
Serverless: Stack update finished...
Service Information
service: device-api-transactions-api
stage: dev
region: ap-southeast-2
stack: dev-device-api-transactions-api
resources: 6
api keys:
  None
endpoints:
  None
functions:
  wsHandler: dev-device-api-transactions-api-ws
layers:
  None
Serverless: Removing old service artifacts from S3...
Serverless: Run the "serverless" command to setup monitoring, troubleshooting and testing.
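For what it's worth, the likely culprit in the configuration above is a typo: the function's events key is spelled evnets, so the Framework ignores it (Framework versions without schema validation silently drop unknown keys) and deploys the function with no websocket route, which is why the output shows endpoints: None. A corrected sketch of the function block:

functions:
  wsHandler:
    handler: src/websocketLambda.handleWebSocket
    name: ${self:provider.stackName}-ws
    events: # was misspelled as 'evnets'
      - websocket: $default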
I am trying to deploy with a VPC, and this is my serverless.yaml:
vpcSettings: &vpcSettings
  vpc: ${self:custom.allVpcSettings.${self:provider.stage}.vpc}

provider:
  name: aws
  runtime: nodejs10.x
  stage: ${opt:stage, 'local'}
  region: us-west-1
  memorySize: 256
  timeout: 30
  deploymentPrefix: fs-sls-${self:provider.stage}-deploy
  deploymentBucket: fs-serverless-deployment
  variables: ${file(.env.${opt:stage, self:provider.stage}.json)}
  environment:
    NODE_ENV: ${self:provider.variables.NODE_ENV}

functions:
  ping:
    handler: src/handler.ping
    description: Let us know if the service is up and running
    events:
      - http:
          path: ping
          method: get
          cors: true
  graphql:
    handler: src/handler.graphqlHandler
    <<: *vpcSettings
    description: One function where all GQL requests come in
    memorySize: 1024
    events:
      - http:
          path: graphql
          method: post
          cors: true
      - http:
          path: graphql
          method: get
          cors: true

plugins:
  - serverless-offline

custom:
  serverless-offline:
    port: 6000
  allVpcSettings:
    local:
      vpc: 'This is a dummy value that should be ignored'
    dev:
      vpc:
        securityGroupIds:
          - sg-xxxxxxxxxxxxxxx
        subnetIds:
          - subnet-xxxxxxxxxxxxxxx
          - subnet-xxxxxxxxxxxxxxx
    prod:
      vpc:
        securityGroupIds:
          - sg-xxxxxxxxxxxxxxx
        subnetIds:
          - subnet-xxxxxxxxxxxxxxx
          - subnet-xxxxxxxxxxxxxxx
It fails with the following error:
Serverless Error ---------------------------------------
The provided execution role does not have permissions to call CreateNetworkInterface on EC2
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: darwin
Node Version: 10.16.0
Framework Version: 1.52.0
Plugin Version: 2.0.0
SDK Version: 2.1.1
The user that I created for this purpose has AdministratorAccess as well as AWSLambdaVPCAccessExecutionRole in its permissions. What else is expected here?
So I fixed it. The error means the Lambda's execution role doesn't have the required permission, so it boils down to granting that role the right policy. First, confirm which role the function uses: open the function in the Lambda console and look for its execution role name. Then go to IAM -> Roles, search for that role name, and attach AWSLambdaVPCAccessExecutionRole to it. This gives it the required permission. Now try deploying with sls again and it should work.
Although the user that you've created to deploy this Lambda function has administrator access, the Lambda function itself needs networking permissions if you're deploying it into a VPC.
Try adding these permissions in the provider block of your serverless.yml template:
iamRoleStatements:
  - Effect: Allow
    Action:
      - ec2:DescribeNetworkInterfaces
      - ec2:CreateNetworkInterface
      - ec2:DeleteNetworkInterface
      - ec2:DescribeInstances
      - ec2:AttachNetworkInterface
    Resource:
      - "*"
If that works, you'll want to deploy a more limited permission structure for your production environment.
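If you'd rather not hand-maintain that action list, one option (a sketch, assuming a Framework version that supports provider-level managed policies) is to attach AWS's managed policy for VPC-attached functions, which bundles the same network-interface permissions:

provider:
  name: aws
  iamManagedPolicies:
    # AWS-managed policy covering the ENI permissions a VPC Lambda needs
    - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole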
I'm using Serverless to create a web application that serves its static content, e.g. a web font, from an S3 bucket. The S3 bucket is configured as a resource in my serverless.yml file. Its CORS configuration has an AllowOrigin set to a wildcard.
I want to change this to have an AllowOrigin with the http endpoint of the service as created by Serverless, e.g. 31alib51b6.execute-api.eu-west-1.amazonaws.com.
I wondered if it's possible to configure this in the serverless.yml file itself.
My example serverless.yml file:
service: example-service

provider:
  name: aws
  runtime: nodejs4.3
  region: eu-west-1

functions:
  web:
    handler: handler.handler
    name: ${self:service}-${self:provider.stage}
    description: ${self:service} web application - ${self:provider.stage}
    events:
      - http:
          path: web
          method: get
      - http:
          path: web/{proxy+}
          method: get

resources:
  Resources:
    S3Assets:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:service}-${self:provider.stage}-assets
        CorsConfiguration:
          CorsRules:
            - AllowedMethods:
                - GET
                - HEAD
              AllowedOrigins:
                - "*"
You can define the AllowedOrigin with the following statement:
CorsConfiguration:
  CorsRules:
    - AllowedMethods:
        - GET
        - HEAD
      AllowedOrigins:
        - Fn::Join:
            - ""
            - - "https://"
              - Ref: ApiGatewayRestApi
              - ".execute-api.eu-west-1.amazonaws.com"
"Ref: ApiGatewayRestApi" references the internal name of the generated API.