So I'm trying to run a Lambda on AWS, and I finally narrowed down the error by testing the Lambda in Amazon's testing console. The error I got is this:
{
  "errorMessage": "Please install mysql2 package manually",
  "errorType": "Error",
  "stackTrace": [
    "new MysqlDialect (/var/task/node_modules/sequelize/lib/dialects/mysql/index.js:14:30)",
    "new Sequelize (/var/task/node_modules/sequelize/lib/sequelize.js:234:20)",
    "Object.exports.getSequelizeConnection (/var/task/src/twilio/twilio.js:858:20)",
    "Object.<anonymous> (/var/task/src/twilio/twilio.js:679:25)",
    "__webpack_require__ (/var/task/src/twilio/twilio.js:20:30)",
    "/var/task/src/twilio/twilio.js:63:18",
    "Object.<anonymous> (/var/task/src/twilio/twilio.js:66:10)",
    "Module._compile (module.js:570:32)",
    "Object.Module._extensions..js (module.js:579:10)",
    "Module.load (module.js:487:32)",
    "tryModuleLoad (module.js:446:12)",
    "Function.Module._load (module.js:438:3)",
    "Module.require (module.js:497:17)",
    "require (internal/module.js:20:19)"
  ]
}
Easy enough: I just have to install mysql2. So I added it to my package.json file:
{
  "name": "test-api",
  "version": "1.0.0",
  "description": "",
  "main": "handler.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 0"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "aws-sdk": "^2.153.0",
    "babel-core": "^6.26.0",
    "babel-loader": "^7.1.2",
    "babel-plugin-transform-runtime": "^6.23.0",
    "babel-preset-es2015": "^6.24.1",
    "babel-preset-stage-3": "^6.24.1",
    "serverless-domain-manager": "^1.1.20",
    "serverless-dynamodb-autoscaling": "^0.6.2",
    "serverless-webpack": "^4.0.0",
    "webpack": "^3.8.1",
    "webpack-node-externals": "^1.6.0"
  },
  "dependencies": {
    "babel-runtime": "^6.26.0",
    "mailgun-js": "^0.13.1",
    "minimist": "^1.2.0",
    "mysql": "^2.15.0",
    "mysql2": "^1.5.1",
    "qs": "^6.5.1",
    "sequelize": "^4.31.2",
    "serverless": "^1.26.0",
    "serverless-plugin-scripts": "^1.0.2",
    "twilio": "^3.10.0",
    "uuid": "^3.1.0"
  }
}
However, I noticed that when I run sls deploy, it seems to only package some of the modules:
Serverless: Package lock found - Using locked versions
Serverless: Packing external modules: babel-runtime@^6.26.0, twilio@^3.10.0, qs@^6.5.1, mailgun-js@^0.13.1, sequelize@^4.31.2, minimist@^1.2.0, uuid@^3.1.0
Serverless: Packaging service...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
................................
Serverless: Stack update finished...
I think this is why it's not working. In short, how do I get the mysql2 library to be packaged correctly with Serverless so my Lambda function will work with the sequelize library?
Please note that when I test locally, my code works fine.
My serverless.yml file is below:
service: testapi

# Use serverless-webpack plugin to transpile ES6/ES7
plugins:
  - serverless-webpack
  - serverless-plugin-scripts
#  - serverless-domain-manager

custom:
  # Define the stage or default to staging.
  stage: ${opt:stage, self:provider.stage}
  webpackIncludeModules: true
  # Define databases here
  databaseName: "${self:service}-${self:custom.stage}"
  # Define bucket names here
  uploadBucket: "${self:service}-uploads-${self:custom.stage}"
  # Custom script setup
  scripts:
    hooks:
      # The script below will run schema changes against the database as necessary and update it according to stage.
      'deploy:finalize': node database-schema-update.js --stage ${self:custom.stage}
  # Domain setup
#  customDomain:
#    basePath: "/"
#    domainName: "api-${self:custom.stage}.test.com"
#    stage: "${self:custom.stage}"
#    certificateName: "*.test.com"
#    createRoute53Record: true

provider:
  name: aws
  runtime: nodejs6.10
  stage: staging
  region: us-east-1
  environment:
    DOMAIN_NAME: "api-${self:custom.stage}.test.com"
    DATABASE_NAME: ${self:custom.databaseName}
    DATABASE_USERNAME: ${env:RDS_USERNAME}
    DATABASE_PASSWORD: ${env:RDS_PASSWORD}
    UPLOAD_BUCKET: ${self:custom.uploadBucket}
    TWILIO_ACCOUNT_SID: ""
    TWILIO_AUTH_TOKEN: ""
    USER_POOL_ID: ""
    APP_CLIENT_ID: ""
    REGION: "us-east-1"
    IDENTITY_POOL_ID: ""
    RACKSPACE_API_KEY: ""
  # Below controls permissions for lambda functions.
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeTable
        - dynamodb:UpdateTable
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: "arn:aws:dynamodb:us-east-1:*:*"

functions:
  create_visit:
    handler: src/visits/create.main
    events:
      - http:
          path: visits
          method: post
          cors: true
          authorizer: aws_iam
  get_visit:
    handler: src/visits/get.main
    events:
      - http:
          path: visits/{id}
          method: get
          cors: true
          authorizer: aws_iam
  list_visit:
    handler: src/visits/list.main
    events:
      - http:
          path: visits
          method: get
          cors: true
          authorizer: aws_iam
  update_visit:
    handler: src/visits/update.main
    events:
      - http:
          path: visits/{id}
          method: put
          cors: true
          authorizer: aws_iam
  delete_visit:
    handler: src/visits/delete.main
    events:
      - http:
          path: visits/{id}
          method: delete
          cors: true
          authorizer: aws_iam
  twilio_send_text_message:
    handler: src/twilio/twilio.send_text_message
    events:
      - http:
          path: twilio/sendtextmessage
          method: post
          cors: true
          authorizer: aws_iam
  # This function handles incoming calls and where to route them.
  twilio_incoming_call:
    handler: src/twilio/twilio.incoming_calls
    events:
      - http:
          path: twilio/calls
          method: post
  twilio_failure:
    handler: src/twilio/twilio.twilio_failure
    events:
      - http:
          path: twilio/failure
          method: post
  twilio_statuschange:
    handler: src/twilio/twilio.statuschange
    events:
      - http:
          path: twilio/statuschange
          method: post
  twilio_incoming_message:
    handler: src/twilio/twilio.incoming_message
    events:
      - http:
          path: twilio/messages
          method: post
  twilio_whisper:
    handler: src/twilio/twilio.whisper
    events:
      - http:
          path: twilio/whisper
          method: post
      - http:
          path: twilio/whisper
          method: get
  twilio_start_call:
    handler: src/twilio/twilio.start_call
    events:
      - http:
          path: twilio/startcall
          method: post
      - http:
          path: twilio/startcall
          method: get

resources:
  Resources:
    uploadBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.uploadBucket}
    RDSDatabase:
      Type: AWS::RDS::DBInstance
      Properties:
        Engine: mysql
        MasterUsername: ${env:RDS_USERNAME}
        MasterUserPassword: ${env:RDS_PASSWORD}
        DBInstanceClass: db.t2.micro
        AllocatedStorage: '5'
        PubliclyAccessible: true
        # TODO: The value of stage is also available as a tag automatically, which I may use instead of setting this manually.
        Tags:
          - Key: "Name"
            Value: ${self:custom.databaseName}
      DeletionPolicy: Snapshot
    DNSRecordSet:
      Type: AWS::Route53::RecordSet
      Properties:
        HostedZoneName: test.com.
        Name: database-${self:custom.stage}.test.com
        Type: CNAME
        TTL: '300'
        ResourceRecords:
          - {"Fn::GetAtt": ["RDSDatabase", "Endpoint.Address"]}
      DependsOn: RDSDatabase
UPDATE: I confirmed that running sls package --stage dev produces the zip that would eventually be uploaded to AWS, and mysql2 is not included in it. This confirms that Serverless is not packaging the mysql2 reference correctly for some reason. Why is this?
Webpack config file, as requested:
const slsw = require("serverless-webpack");
const nodeExternals = require("webpack-node-externals");

module.exports = {
  entry: slsw.lib.entries,
  target: "node",
  // Since 'aws-sdk' is not compatible with webpack,
  // we exclude all node dependencies
  externals: [nodeExternals()],
  // Run babel on all .js files and skip those in node_modules
  module: {
    rules: [
      {
        test: /\.js$/,
        loader: "babel-loader",
        include: __dirname,
        exclude: /node_modules/
      }
    ]
  }
};
Thanks to dashmug's comment and some investigation on this page (https://github.com/serverless-heaven/serverless-webpack), I found that there is a section on Forced Inclusion. I'll paraphrase it here:
Forced inclusion: Sometimes it might happen that you use dynamic requires in your code, i.e. you require modules that are only known at runtime. Webpack is not able to detect such externals and the compiled package will miss the needed dependencies. In such cases you can force the plugin to include certain modules by setting them in the forceInclude array property. However, the module must appear in your service's production dependencies in package.json.
# serverless.yml
custom:
  webpackIncludeModules:
    forceInclude:
      - module1
      - module2
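For context, the reason webpack misses mysql2 here is that sequelize loads its dialect driver with exactly this kind of dynamic require. Roughly like this (an illustrative sketch of the pattern, not sequelize's actual source):

// Simplified sketch of a dynamic dialect-module load; webpack cannot
// statically trace this dependency, so it never lands in the bundle.
function loadDialectModule(moduleName) {
  try {
    // moduleName is only known at runtime, so webpack skips it
    return require(moduleName);
  } catch (err) {
    throw new Error(`Please install ${moduleName} package manually`);
  }
}

const mysql2 = loadDialectModule("mysql2");

That failing require is where the "Please install mysql2 package manually" error at the top of this post comes from.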
So I simply did this...

webpackIncludeModules:
  forceInclude:
    - mysql
    - mysql2
Now it works! Hope this helps someone else with the same issue.
None of the previous answers helped me; I used this solution instead: https://github.com/sequelize/sequelize/issues/9489#issuecomment-493304014
The trick is to use the dialectModule property to override the module sequelize loads:
import Sequelize from 'sequelize';
import mysql2 from 'mysql2'; // Needed to fix sequelize issues with WebPack

const sequelize = new Sequelize(
  process.env.DB_NAME,
  process.env.DB_USER,
  process.env.DB_PASSWORD,
  {
    dialect: 'mysql',
    dialectModule: mysql2, // Needed to fix sequelize issues with WebPack
    host: process.env.DB_HOST,
    port: process.env.DB_PORT
  }
)

export async function connectToDatabase() {
  console.log('Trying to connect via sequelize')
  await sequelize.sync()
  await sequelize.authenticate()
  console.log('=> Created a new connection.')
  // Do something
}
So far this works with MySQL, but it is not working with Postgres.
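For anyone who wants to try the equivalent for Postgres, a minimal sketch would look like this (untested; it assumes sequelize accepts the pg driver through dialectModule the same way it accepts mysql2 above):

import Sequelize from 'sequelize';
import pg from 'pg'; // assumption: the Postgres driver can be injected like mysql2 above

const sequelize = new Sequelize(
  process.env.DB_NAME,
  process.env.DB_USER,
  process.env.DB_PASSWORD,
  {
    dialect: 'postgres',
    dialectModule: pg, // same override, different driver
    host: process.env.DB_HOST,
    port: process.env.DB_PORT
  }
);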
Related
I am deploying to AWS through a CI/CD pipeline using GitHub Actions. The deployment itself succeeds, but when I run a GraphQL query in AppSync it reports that @aws-sdk/client-dynamodb was not found.
I added the DynamoDB package to package.json:
{
  "name": "Serverless-apix",
  "type": "module",
  "version": "1.0.0",
  "description": "",
  "scripts": {
    "start": "sls offline start --stage local",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@aws-sdk/client-dynamodb": "^3.215.0",
    "ramda": "^0.28.0",
    "serverless": "^3.24.1",
    "serverless-appsync-plugin": "^1.14.0",
    "serverless-iam-roles-per-function": "^3.2.0"
  },
  "devDependencies": {}
}
The controller where I call DynamoDB:

import { PutItemCommand } from "@aws-sdk/client-dynamodb";
import { ddbClient } from "../../dynamodb/db.js";
The db file:

import { DynamoDBClient } from "@aws-sdk/client-dynamodb";

const REGION = "us-east-1"; // e.g. "us-east-1"

// const client = new DynamoDBClient({
//   // region: "us-east-1",
//   // accessKeyId: "AKIA272VV64HPNKION2R",
//   // secretAccessKeyId: "7IARIpa5q8ZEf0NQKKsUPTDP+oszFaw+Dd5v4s7N",
//   // endpoint: "http://localhost:8000"
//   region: 'localhost',
//   endpoint: 'http://localhost:8000'
// });

const ddbClient = new DynamoDBClient({ region: REGION });
export { ddbClient };
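For completeness, a minimal use of that client and command would look something like this (a hypothetical sketch; the table name comes from the DYNAMODB environment variable defined in the serverless file below, and the item attributes are illustrative):

import { PutItemCommand } from "@aws-sdk/client-dynamodb";
import { ddbClient } from "../../dynamodb/db.js";

// Hypothetical item; the attributes are placeholders.
const command = new PutItemCommand({
  TableName: process.env.DYNAMODB,
  Item: {
    id: { S: "123" },
    name: { S: "example" },
  },
});

// "type": "module" is set in package.json, so top-level await is available.
const result = await ddbClient.send(command);
console.log(result.$metadata.httpStatusCode);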
The serverless file:

service: image-base-serverless-api

provider:
  name: aws
  runtime: nodejs14.x
  stage: ${opt:stage, 'dev'}
  region: us-east-1
  environment:
    # DYNAMODB_TABLE_NAME: ${self:custom.usersTableName}
    STAGE: ${self:provider.stage}
    REGION: ${self:provider.region}
    APPSYNC_NAME: "${self:custom.defaultPrefix}-appsync"
    SERVICE_NAME: ${self:service}-${self:provider.stage}
    DYNAMODB: ${self:service}-${self:provider.stage}
    TABLE_NAME:
      Ref: usersTable
  iam:
    role:
      statements: # permissions for all of your functions can be set here
        - Effect: Allow
          Action: # Gives permission to DynamoDB tables in a specific region
            - dynamodb:*
            - lambda:*
            - s3:*
          Resource:
            - arn:aws:dynamodb:${self:provider.region}:*:*
            - arn:aws:lambda:${self:provider.region}:*:*
            - "Fn::GetAtt": [usersTable, Arn]

plugins: ${file(plugins/plugins-${self:provider.stage}.yml)}

package:
  exclude:
    - node_modules/**
    - venv/**

custom:
  usersTableName: users-table-${self:provider.stage}
  dynamodb:
    stages:
      - local
    start:
      port: 8000
      inMemory: false
      dbPath: "dynamodb_local_data"
      migrate: true
  appSync:
    name: image-base-serverless-backened-api-${self:provider.stage}
    schema: schema.graphql
    authenticationType: API_KEY
    serviceRole: "AppSyncServiceRole"
    mappingTemplates: ${file(appsync/mappingtemplate.yml)}
    dataSources: ${file(appsync/datasource.yml)}
  appsync-offline: # appsync-offline configuration
    port: 62222
    dynamodb:
      client:
        endpoint: "http://localhost:8000"
        region: localhost
  defaultPrefix: ${self:service}-${self:provider.stage}

functions:
  - ${file(src/adminusers/admin-user-route.yml)}

resources:
  # Roles
  - ${file(resources/roles.yml)}
  # DynamoDB tables
  - ${file(resources/dynamodb-tables.yml)}
  # - ${file(resources/dynamodb.yml)}
  # - ${file(resources/iam.yml)}
I would be very thankful if you could help me solve this error.
I downgraded the Node version, tried deploying to AWS directly, and also deployed the code through the CI/CD pipeline, but it shows the same error every time.
The AWS Lambda request URL is:
https://<id>.<region>.amazonaws.com/$default/apispec.json
The URL should be https://<id>.<region>.amazonaws.com/apispec.json
It's fine when I manually remove the $default though.
This is bugging us, so if anyone could help, it would be much appreciated.
Swagger Config:
swagger_config['swagger_ui_bundle_js'] = '//unpkg.com/swagger-ui-dist@3/swagger-ui-bundle.js'
swagger_config['swagger_ui_standalone_preset_js'] = '//unpkg.com/swagger-ui-dist@3/swagger-ui-standalone-preset.js'
swagger_config['jquery_js'] = '//unpkg.com/jquery@2.2.4/dist/jquery.min.js'
swagger_config['swagger_ui_css'] = '//unpkg.com/swagger-ui-dist@3/swagger-ui.css'
# swagger_config['specs'][0] = {'endpoint': '/cms-api/apispec', 'route': '/cms-api/apispec.json'}

Swagger(app, config=swagger_config, template=template)
swagger_config
swagger_config = {
    "headers": [],
    "specs": [
        {
            "endpoint": 'apispec',
            "route": '/apispec.json',
            "rule_filter": lambda rule: True,  # all in
            "model_filter": lambda tag: True,  # all in
        }
    ],
    "static_url_path": "/flasgger_static",
    "swagger_ui": True,
    "specs_route": "/cms-api"
}
serverless.yml
service: cms-backend
frameworkVersion: '3'

custom:
  wsgi:
    app: src.__init__.app

provider:
  name: aws
  stage: ${opt:stage, 'dev'}
  runtime: python3.8
  logs:
    httpApi: true
  httpApi:
    metrics: true
    cors: true
  region: ap-southeast-1

functions:
  app:
    handler: wsgi_handler.handler
    events:
      - httpApi: '*'

plugins:
  - serverless-wsgi
  - serverless-python-requirements
I encountered a similar issue, and it took me longer than I wanted to figure out, but it turned out to be related to API Gateway stages. Serverless doesn't use this feature; it just deploys a new function for a different stage.
If you're not using API Gateway stages, try leaving them out by adding the STRIP_STAGE_PATH environment variable, as mentioned in the serverless-wsgi documentation. It should leave the $default stage variable out of your path:
provider:
  environment:
    STRIP_STAGE_PATH: yes
(This is a snippet from the source code of the serverless_wsgi module):
def get_script_name(headers, request_context):
    strip_stage_path = os.environ.get("STRIP_STAGE_PATH", "").lower().strip() in [
        "yes",
        "y",
        "true",
        "t",
        "1",
    ]

    if "amazonaws.com" in headers.get("Host", "") and not strip_stage_path:
        script_name = "/{}".format(request_context.get("stage", ""))
    else:
        script_name = ""

    return script_name
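In other words, when STRIP_STAGE_PATH is set to one of those truthy values, the stage name is no longer prepended to the script name, so the $default segment disappears from the generated URLs.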
After a lot of attempts I'm surrendering and asking this. This is my serverless.yml:
console: true
service: notifications
frameworkVersion: '3'
useDotenv: true

plugins:
  - serverless-offline
  - serverless-dotenv-plugin
  - serverless-plugin-typescript
  - serverless-middleware

custom:
  middleware:
    folderName: middleware # defaults to '.middleware'
    cleanFolder: false # defaults to 'true'

provider:
  name: aws
  runtime: nodejs14.x
  versionFunctions: false
  region: ${opt:region, 'sa-east-1'}

functions:
  findAllLeads:
    handler: src/server.findAllLeads
    events:
      - http:
          path: leads
          method: get
          cors: true # <-- CORS!
  findLead:
    handler: src/server.findLead
    middleware:
      pre:
        - middleware/middleware.myF
    events:
      - http:
          path: leads/{id}
          method: get
          cors:
            origin: "*"
  createLead:
    handler: src/server.createLead
    middleware:
      pre:
        - middleware/middleware.myF
    events:
      - http:
          path: leads
          method: post
          cors: true # <-- CORS!
and my project structure is something like:

middleware/
  middleware.ts
src/
  server.ts // with the handlers
and my middleware.ts is:
// export const middleware = (event: any) => {
//   console.log(event.body);
// };

export const myF = (event) => {
  // will do some logic if I make it work
  console.log(event.body);
};
and all I'm getting is
{"message": "Internal server error"}
as a response and in the logs.
Does anyone know how to fix this?
I'm trying to set up an Amazon MSK cluster and connect to it from a Lambda function. The Lambda function will be a producer of messages, not a consumer.
I am using the Serverless framework to provision everything, and in my serverless.yml I have added the following, which seems to be working fine:
MSK:
  Type: AWS::MSK::Cluster
  Properties:
    ClusterName: kafkaOne
    KafkaVersion: 2.2.1
    NumberOfBrokerNodes: 3
    BrokerNodeGroupInfo:
      InstanceType: kafka.t3.small
      ClientSubnets:
        - Ref: PrivateSubnet1
        - Ref: PrivateSubnet2
        - Ref: PrivateSubnet3
But when trying to connect to this cluster to actually send messages, I am unsure how to get the connection string. I presume it should be the ZookeeperConnectString?
I'm new to Kafka/MSK, so maybe I am not seeing something obvious.
Any advice much appreciated. Cheers.
I don't know what kind of code base you are using, so I will add my code, which I wrote in Go.
In essence, you should connect to an MSK cluster the same way you would connect to a standalone Kafka instance. We use the brokers for "connecting", or better said writing, to the MSK cluster.
I'm using the segmentio/kafka-go library. My function for sending an event to the MSK cluster looks like this:
// Add event
func addEvent(ctx context.Context, requestBody RequestBodyType) (bool, error) {
    // Prepare dialer
    dialer := &kafka.Dialer{
        Timeout:   2 * time.Second,
        DualStack: true,
    }

    brokers := []string{os.Getenv("KAFKA_BROKER_1"), os.Getenv("KAFKA_BROKER_2"), os.Getenv("KAFKA_BROKER_3"), os.Getenv("KAFKA_BROKER_4")}

    // Prepare writer config
    kafkaConfig := kafka.WriterConfig{
        Brokers:  brokers,
        Topic:    os.Getenv("KAFKA_TOPIC"),
        Balancer: &kafka.Hash{},
        Dialer:   dialer,
    }

    // Prepare writer
    w := kafka.NewWriter(kafkaConfig)

    // Convert struct to JSON string
    event, err := json.Marshal(requestBody)
    if err != nil {
        fmt.Println("Convert struct to json for writing to KAFKA failed")
        panic(err)
    }

    // Write message
    writeError := w.WriteMessages(ctx,
        kafka.Message{
            Key:   []byte(requestBody.Event),
            Value: event,
        },
    )
    if writeError != nil {
        fmt.Println("ERROR WRITING EVENT TO KAFKA")
        panic("could not write message " + writeError.Error())
    }

    return true, nil
}
My serverless.yml:
The code above (addEvent) belongs to functions -> postEvent in serverless.yml. If you are consuming from Kafka, you should check functions -> processEvent. Consuming events is fairly simple, but setting everything up for producing to Kafka is crazy. We have probably been working on this for a month and a half and are still figuring out how everything should be set up. Sadly, Serverless does not do everything for you, so you will have to "click through" some things manually in AWS, but we compared it to other frameworks and Serverless is still the best right now.
provider:
  name: aws
  runtime: go1.x
  stage: dev
  profile: ${env:AWS_PROFILE}
  region: ${env:REGION}
  apiName: my-app-${sls:stage}
  lambdaHashingVersion: 20201221
  environment:
    ENV: ${env:ENV}
    KAFKA_TOPIC: ${env:KAFKA_TOPIC}
    KAFKA_BROKER_1: ${env:KAFKA_BROKER_1}
    KAFKA_BROKER_2: ${env:KAFKA_BROKER_2}
    KAFKA_BROKER_3: ${env:KAFKA_BROKER_3}
    KAFKA_BROKER_4: ${env:KAFKA_BROKER_4}
    KAFKA_ARN: ${env:KAFKA_ARN}
    ACCESS_CONTROL_ORIGINS: ${env:ACCESS_CONTROL_ORIGINS}
    ACCESS_CONTROL_HEADERS: ${env:ACCESS_CONTROL_HEADERS}
    ACCESS_CONTROL_METHODS: ${env:ACCESS_CONTROL_METHODS}
    BATCH_SIZE: ${env:BATCH_SIZE}
    SLACK_API_TOKEN: ${env:SLACK_API_TOKEN}
    SLACK_CHANNEL_ID: ${env:SLACK_CHANNEL_ID}
  httpApi:
    cors: true
  apiGateway:
    resourcePolicy:
      - Effect: Allow
        Action: '*'
        Resource: '*'
        Principal: '*'
  vpc:
    securityGroupIds:
      - sg-*********
    subnetIds:
      - subnet-******
      - subnet-*******

functions:
  postEvent:
    handler: bin/postEvent
    package:
      patterns:
        - bin/postEvent
    events:
      - http:
          path: event
          method: post
          cors:
            origin: ${env:ACCESS_CONTROL_ORIGINS}
            headers:
              - Content-Type
              - Content-Length
              - Accept-Encoding
              - Origin
              - Referer
              - Authorization
              - X-CSRF-Token
              - X-Amz-Date
              - X-Api-Key
              - X-Amz-Security-Token
              - X-Amz-User-Agent
            allowCredentials: false
            methods:
              - OPTIONS
              - POST
  processEvent:
    handler: bin/processEvent
    package:
      patterns:
        - bin/processEvent
    events:
      - msk:
          arn: ${env:KAFKA_ARN}
          topic: ${env:KAFKA_TOPIC}
          batchSize: ${env:BATCH_SIZE}
          startingPosition: LATEST

resources:
  Resources:
    GatewayResponseDefault4XX:
      Type: 'AWS::ApiGateway::GatewayResponse'
      Properties:
        ResponseParameters:
          gatewayresponse.header.Access-Control-Allow-Origin: "'*'"
          gatewayresponse.header.Access-Control-Allow-Headers: "'*'"
        ResponseType: DEFAULT_4XX
        RestApiId:
          Ref: 'ApiGatewayRestApi'
    myDefaultRole:
      Type: AWS::IAM::Role
      Properties:
        Path: /
        RoleName: my-app-dev-eu-serverless-lambdaRole-${sls:stage} # required if you want to use 'serverless deploy --function' later on
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
              Action: sts:AssumeRole
        # note that these rights are needed if you want your function to be able to communicate with resources within your vpc
        ManagedPolicyArns:
          - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
          - arn:aws:iam::aws:policy/service-role/AWSLambdaMSKExecutionRole
I must warn you that we spent a lot of time figuring out how to properly set up the VPC and other networking/permission stuff. My colleague will write a blog post once he arrives back from vacation. :) I hope this helps you somehow. Best of luck ;)
UPDATE
If you are using JavaScript, you would connect to Kafka similarly to this:
const { Kafka } = require('kafkajs')

const kafka = new Kafka({
  clientId: 'order-app',
  brokers: [
    'broker1:port',
    'broker2:port',
  ],
  ssl: true, // false
})
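From there, producing a message mirrors the Go code above. A minimal sketch (assuming the kafka client configured above and the same KAFKA_TOPIC environment variable):

// Minimal kafkajs producer sketch; the payload shape is illustrative.
const producer = kafka.producer()

async function addEvent(requestBody) {
  await producer.connect()
  await producer.send({
    topic: process.env.KAFKA_TOPIC,
    messages: [
      { key: requestBody.event, value: JSON.stringify(requestBody) },
    ],
  })
  await producer.disconnect()
}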
The connection string, which is called the broker bootstrap string, can be found by making an API call like aws kafka get-bootstrap-brokers --cluster-arn ClusterArn.
See example here: https://docs.aws.amazon.com/msk/latest/developerguide/msk-get-bootstrap-brokers.html
Also, here is a step-by-step walkthrough on how to produce/consume data: https://docs.aws.amazon.com/msk/latest/developerguide/produce-consume.html
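You can also fetch the bootstrap string programmatically. A sketch with the AWS SDK for JavaScript v2 (assuming the cluster ARN is available in the KAFKA_ARN environment variable, as in the serverless.yml above):

// Look up the broker bootstrap string for an MSK cluster at runtime.
const AWS = require('aws-sdk')

const kafkaApi = new AWS.Kafka({ region: process.env.REGION })

kafkaApi.getBootstrapBrokers({ ClusterArn: process.env.KAFKA_ARN }, (err, data) => {
  if (err) throw err
  // Comma-separated host:port pairs, ready to split into a brokers array
  console.log(data.BootstrapBrokerString)
})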
I am using the Serverless framework for the backend. How can I implement request validation? (I do not want to write validation inside the lambda functions.)
To implement request validation using serverless you need to do a couple of things:
Include your model/header definitions in your stack, and then tell API gateway to use them for request validation.
You'll need to install the following packages:
serverless-aws-documentation
serverless-reqvalidator-plugin
And then you'll need to include them in your serverless.yml:
plugins:
  - serverless-reqvalidator-plugin
  - serverless-aws-documentation
Note: below is only a quick run-down of how to incorporate the packages. Visit the packages' documentation pages for more comprehensive examples...
Provide API gateway with a description of your models / headers.
You can import json schemas for your models, and declare http headers using the serverless-aws-documentation plugin.
Here's how you'd add a model to your serverless.yml:
custom:
  documentation:
    api:
      info:
        version: v0.0.0
        title: Some API title
        description: Some API description
    models:
      - name: SomeLambdaRequest
        contentType: application/json
        schema: ${file(models/SomeLambdaRequest.json)} # reference to your model's json schema file. You can also declare the model inline.
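For reference, a minimal JSON schema for such a model could look like the following (a hypothetical models/SomeLambdaRequest.json; API Gateway models use JSON Schema draft-04, and the properties here are purely illustrative):

{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "SomeLambdaRequest",
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "count": { "type": "integer", "minimum": 1 }
  },
  "required": ["name"]
}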
And here's how you'd reference the model in your lambda definition:
functions:
  someLambda:
    handler: src/someLambda.handler
    events:
      - http:
          # ... snip ...
          documentation:
            summary: some summary
            description: some description
            requestBody:
              description: some description
            requestModels:
              application/json: SomeLambdaRequest
You can also declare request headers against your lambda definition like so:
functions:
  someLambda:
    handler: src/someLambda.handler
    events:
      - http:
          # ... snip ...
          documentation:
            summary: some summary
            description: some description
            requestHeaders:
              - name: x-some-header
                description: some header value
                required: true # true or false
              - name: x-another-header
                description: some header value
                required: false # true or false
Tell API gateway to actually use the models for validation
This part makes use of the serverless-reqvalidator-plugin package, and you need to add AWS::ApiGateway::RequestValidator resources to your serverless.yml file.
You can specify whether you want to validate request body, request headers, or both.
resources:
  Resources:
    onlyBody:
      Type: AWS::ApiGateway::RequestValidator
      Properties:
        Name: 'only-body'
        RestApiId:
          Ref: ApiGatewayRestApi
        ValidateRequestBody: true # true or false
        ValidateRequestParameters: false # true or false
And then on individual functions you can make use of the validator like so:
functions:
  someLambda:
    handler: src/someLambda.handler
    events:
      - http:
          # ... snip ...
          reqValidatorName: onlyBody # reference and use the 'only-body' request validator
Put all together, your lambda definition would end up looking a little like this:
functions:
  someLambda:
    handler: src/someLambda.handler
    events:
      - http:
          # ... snip ...
          reqValidatorName: onlyBody # reference and use the 'only-body' request validator
          documentation:
            summary: some summary
            description: some description
            requestBody:
              description: some description
            requestModels:
              application/json: SomeLambdaRequest
            requestHeaders:
              - name: x-some-header
                description: some header value
                required: true # true or false
              - name: x-another-header
                description: some header value
                required: false # true or false
This is now supported by the Serverless framework, so there is no need to use external plugins.
To enable request validation, one only needs to add the following to serverless.yml:
HttpHandler:
  handler: src/lambda/http/create.handler
  events:
    - http:
        method: post
        path: items
        request:
          schemas:
            application/json: ${file(models/create-todo-model.json)}
Instead of putting the file location directly under application/json, you can also use the name of a model after defining it under the apiGateway section of the serverless.yml file (link to documentation).
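For example, based on the linked documentation (the model key and schema file below are illustrative):

provider:
  apiGateway:
    request:
      schemas:
        create-todo-model:
          name: CreateTodoModel
          schema: ${file(models/create-todo-model.json)}
          description: "Validation model for creating todos"

functions:
  create:
    handler: posts.create
    events:
      - http:
          path: posts/create
          method: post
          request:
            schemas:
              application/json: create-todo-model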
Kindly note that, as of Feb 2022, the serverless-offline plugin does not validate http.request.schemas locally, though it does support the deprecated http.request.schema variant.
As Ivan indicated, there is no need for external plugins as this is supported by the Serverless framework. However, I think the way to configure this has changed.
functions:
  create:
    handler: posts.create
    events:
      - http:
          path: posts/create
          method: post
          request:
            schema:
              application/json: ${file(create_request.json)}
This example was taken from:
https://www.serverless.com/framework/docs/providers/aws/events/apigateway/#request-schema-validators
In case you are like me and don't want to add the plugins suggested in https://stackoverflow.com/questions/49133294/request-validation-using-serverless-framework:
If you set parameters as required and want to validate them, you must add a request validator to your serverless.yml:
Resources:
  ParameterRequestValidator:
    Type: AWS::ApiGateway::RequestValidator
    Properties:
      Name: ParameterRequestValidator
      RestApiId:
        Ref: ApiGatewayRestApi
      ValidateRequestBody: false
      ValidateRequestParameters: true

  ApiGatewayMethodNameOfYourApiLookItUpInYourTemplate:
    Properties:
      RequestValidatorId:
        Ref: ParameterRequestValidator
The method you want to validate will be named something like ApiGateway<Method><Get | Post | Patch | Put | Delete>. You can look the name up in the template files created when you package your serverless functions.
Credit for this solution goes to https://github.com/serverless/serverless/issues/5034#issuecomment-581832806
Request validation using serverless
plugins:
  - serverless-python-requirements
  - serverless-wsgi
  - serverless-reqvalidator-plugin
  - serverless-aws-documentation

provider:
  name: aws
  runtime: python3.8
  region: us-east-1

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: /
          method: get
  likes:
    handler: handler.likes
    events:
      - http:
          path: /likes
          method: get
          integration: lambda
          reqValidatorName: xMyRequestValidator
          request:
            passThrough: NEVER
            parameters:
              querystrings:
                userid: true
                activityid: true
            template:
              application/json: '{ "userid":"$input.params(''userid'')","activityid":"$input.params(''activityid'')"}'
          response:
            headers:
              Content-Type: "'application/json'"

custom:
  wsgi:
    app: handler.app
    pythonBin: python # Some systems with Python3 may require this
    packRequirements: false
  pythonRequirements:
    dockerizePip: non-linux

resources:
  Resources:
    xMyRequestValidator:
      Type: "AWS::ApiGateway::RequestValidator"
      Properties:
        Name: 'my-req-validator'
        RestApiId:
          Ref: ApiGatewayRestApi
        ValidateRequestBody: true
        ValidateRequestParameters: true
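With this in place, API Gateway should reject a GET /likes request that is missing the userid or activityid query string with a 400 error before the lambda is ever invoked, since both query strings are marked required and the validator has ValidateRequestParameters enabled.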