AWS Lambda adds $default to the path of flasgger apispec - flask

The AWS Lambda request URL is:
https://<id>.<region>.amazonaws.com/$default/apispec.json
but it should be:
https://<id>.<region>.amazonaws.com/apispec.json
It works fine when I manually remove the $default, though.
This is bugging us, so any help would be much appreciated.
Swagger Config:
swagger_config['swagger_ui_bundle_js'] = '//unpkg.com/swagger-ui-dist@3/swagger-ui-bundle.js'
swagger_config['swagger_ui_standalone_preset_js'] = '//unpkg.com/swagger-ui-dist@3/swagger-ui-standalone-preset.js'
swagger_config['jquery_js'] = '//unpkg.com/jquery@2.2.4/dist/jquery.min.js'
swagger_config['swagger_ui_css'] = '//unpkg.com/swagger-ui-dist@3/swagger-ui.css'
# swagger_config['specs'][0] = {'endpoint':'/cms-api/apispec','route':'/cms-api/apispec.json'}
Swagger(app, config=swagger_config, template=template)
swagger_config
swagger_config = {
    "headers": [],
    "specs": [
        {
            "endpoint": 'apispec',
            "route": '/apispec.json',
            "rule_filter": lambda rule: True,  # all in
            "model_filter": lambda tag: True,  # all in
        }
    ],
    "static_url_path": "/flasgger_static",
    "swagger_ui": True,
    "specs_route": "/cms-api"
}
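(For context, a minimal sketch of how this config is wired into the app, assuming flasgger is installed; the `template` dict here is a stub, since the question's real template is not shown:)

# Minimal sketch; `swagger_config` is the dict above, `template` is a stub.
from flask import Flask
from flasgger import Swagger

app = Flask(__name__)
template = {"info": {"title": "CMS API", "version": "1.0"}}  # hypothetical stub

Swagger(app, config=swagger_config, template=template)
# Locally the spec is served at /apispec.json; behind the HTTP API it shows
# up at /$default/apispec.json, which is the problem described above.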
serverless.yml
service: cms-backend
frameworkVersion: '3'

custom:
  wsgi:
    app: src.__init__.app

provider:
  name: aws
  stage: ${opt:stage, 'dev'}
  runtime: python3.8
  logs:
    httpApi: true
  httpApi:
    metrics: true
    cors: true
  region: ap-southeast-1

functions:
  app:
    handler: wsgi_handler.handler
    events:
      - httpApi: '*'

plugins:
  - serverless-wsgi
  - serverless-python-requirements

I encountered a similar issue and it took me longer than I wanted to figure out, but it turned out to be related to API Gateway stages. Serverless doesn't use this feature; it just deploys a new function for each stage.
If you're not using API Gateway stages, opt out of them by adding the STRIP_STAGE_PATH environment variable, as mentioned in the serverless-wsgi documentation. It leaves the $default stage variable out of your path:
provider:
  environment:
    STRIP_STAGE_PATH: yes
(For reference, this snippet from the serverless_wsgi source shows how the variable is handled:)
def get_script_name(headers, request_context):
    strip_stage_path = os.environ.get("STRIP_STAGE_PATH", "").lower().strip() in [
        "yes",
        "y",
        "true",
        "t",
        "1",
    ]

    if "amazonaws.com" in headers.get("Host", "") and not strip_stage_path:
        script_name = "/{}".format(request_context.get("stage", ""))
    else:
        script_name = ""

    return script_name
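To see the effect concretely, here is a quick, hypothetical check of the snippet above (it assumes the serverless-wsgi package is installed so that get_script_name is importable; the host value is made up):

import os
from serverless_wsgi import get_script_name  # the function quoted above

headers = {"Host": "abc123.ap-southeast-1.amazonaws.com"}  # hypothetical host
request_context = {"stage": "$default"}

# Without STRIP_STAGE_PATH, the stage is prepended as the WSGI script name,
# which is exactly where the unwanted "$default" prefix comes from.
os.environ.pop("STRIP_STAGE_PATH", None)
print(get_script_name(headers, request_context))  # -> "/$default"

# With STRIP_STAGE_PATH set to a truthy value, the prefix disappears.
os.environ["STRIP_STAGE_PATH"] = "yes"
print(get_script_name(headers, request_context))  # -> ""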

Related

"Cannot find package '#aws-sdk/client-dynamodb' on Appsync Amazon

I am deploying to AWS through a CI/CD pipeline using GitHub Actions. The deployment succeeds, but when I run a GraphQL query on AppSync it says that @aws-sdk/client-dynamodb cannot be found.
(Screenshots: the error in the AppSync console, and the successful deployment in GitHub Actions.)
I added the DynamoDB package to package.json:
{
  "name": "Serverless-apix",
  "type": "module",
  "version": "1.0.0",
  "description": "",
  "scripts": {
    "start": "sls offline start --stage local",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@aws-sdk/client-dynamodb": "^3.215.0",
    "ramda": "^0.28.0",
    "serverless": "^3.24.1",
    "serverless-appsync-plugin": "^1.14.0",
    "serverless-iam-roles-per-function": "^3.2.0"
  },
  "devDependencies": {}
}
Controller where I call DynamoDB:
import { PutItemCommand } from "@aws-sdk/client-dynamodb";
import { ddbClient } from "../../dynamodb/db.js";
Db file
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";

const REGION = "us-east-1"; // e.g. "us-east-1"
// const client = new DynamoDBClient({
//   // region: "us-east-1",
//   // accessKeyId: "<redacted>",
//   // secretAccessKeyId: "<redacted>",
//   // endpoint: "http://localhost:8000"
//   region: 'localhost',
//   endpoint: 'http://localhost:8000'
// });
const ddbClient = new DynamoDBClient({ region: REGION });
export { ddbClient };
Serverless file
service: image-base-serverless-api
provider:
  name: aws
  runtime: nodejs14.x
  stage: ${opt:stage, 'dev'}
  region: us-east-1
  environment:
    # DYNAMODB_TABLE_NAME: ${self:custom.usersTableName}
    STAGE: ${self:provider.stage}
    REGION: ${self:provider.region}
    APPSYNC_NAME: "${self:custom.defaultPrefix}-appsync"
    SERVICE_NAME: ${self:service}-${self:provider.stage}
    DYNAMODB: ${self:service}-${self:provider.stage}
    TABLE_NAME:
      Ref: usersTable
  iam:
    role:
      statements: # permissions for all of your functions can be set here
        - Effect: Allow
          Action: # gives permission to DynamoDB tables in a specific region
            - dynamodb:*
            - lambda:*
            - s3:*
          Resource:
            - arn:aws:dynamodb:${self:provider.region}:*:*
            - arn:aws:lambda:${self:provider.region}:*:*
            - "Fn::GetAtt": [usersTable, Arn]
plugins: ${file(plugins/plugins-${self:provider.stage}.yml)}
package:
  exclude:
    - node_modules/**
    - venv/**
custom:
  usersTableName: users-table-${self:provider.stage}
  dynamodb:
    stages:
      - local
    start:
      port: 8000
      inMemory: false
      dbPath: "dynamodb_local_data"
      migrate: true
  appSync:
    name: image-base-serverless-backened-api-${self:provider.stage}
    schema: schema.graphql
    authenticationType: API_KEY
    serviceRole: "AppSyncServiceRole"
    mappingTemplates: ${file(appsync/mappingtemplate.yml)}
    dataSources: ${file(appsync/datasource.yml)}
  appsync-offline: # appsync-offline configuration
    port: 62222
    dynamodb:
      client:
        endpoint: "http://localhost:8000"
        region: localhost
  defaultPrefix: ${self:service}-${self:provider.stage}
functions:
  - ${file(src/adminusers/admin-user-route.yml)}
resources:
  # Roles
  - ${file(resources/roles.yml)}
  # DynamoDB tables
  - ${file(resources/dynamodb-tables.yml)}
  # - ${file(resources/dynamodb.yml)}
  # - ${file(resources/iam.yml)}
I shall be very thankful if you can help me solve this error. I downgraded the Node version, tried deploying to AWS directly, and also deployed the code through the CI/CD pipeline, but it shows the same error on every attempt.

Is there no setting for AWS API Gateway REST API to disable execute-api endpoint in CloudFormation template?

I have set up an API Gateway (v1, not v2) REST API resource using a CloudFormation template. Recently I noticed that the default execute-api endpoint is also created, which I can disable in the settings.
The type of this API is AWS::ApiGateway::RestApi.
Naturally, I would like this to be done through the template, so the question is: can this setting be defined in the CloudFormation template, rather than having to be clicked manually in the AWS Console? This option is available for the API Gateway V2 API resource (AWS::ApiGatewayV2::Api) but not for the API Gateway V1 REST API resource (AWS::ApiGateway::RestApi) in CloudFormation templates, even though it can be changed manually for the V1 REST API in the console.
There is also a CLI way of doing this for the AWS::ApiGateway::RestApi.
Here are some links I have used to search for this setting:
AWS::ApiGatewayV2::API
AWS::ApiGateway::RestApi
Disabling default api-execute endpoint via CLI
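(For reference, the CLI way mentioned above boils down to a single patch operation. A minimal boto3 sketch of the same call, with a hypothetical API id, looks like this:)

import boto3

client = boto3.client("apigateway")

# "abc123" is a hypothetical REST API id; this is the same patch operation
# the console applies when you disable the default endpoint manually.
client.update_rest_api(
    restApiId="abc123",
    patchOperations=[{
        "op": "replace",
        "path": "/disableExecuteApiEndpoint",
        "value": "True",
    }],
)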
Support for disabling the default execute-api endpoint has recently been added to the AWS::ApiGateway::RestApi CloudFormation resource: DisableExecuteApiEndpoint
MyRestApi:
  Type: 'AWS::ApiGateway::RestApi'
  Properties:
    DisableExecuteApiEndpoint: true
You can also disable it through a simple custom resource. Below is a fully working example template that does that:
Resources:
  MyRestApi:
    Type: 'AWS::ApiGateway::RestApi'
    Properties:
      Description: A test API
      Name: MyRestAPI
  LambdaBasicExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      Path: /
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonAPIGatewayAdministrator
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
  MyCustomResource:
    Type: Custom::DisableDefaultApiEndpoint
    Properties:
      ServiceToken: !GetAtt 'MyCustomFunction.Arn'
      APIId: !Ref 'MyRestApi'
  MyCustomFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.lambda_handler
      Description: "Disable default API endpoint"
      Timeout: 30
      Role: !GetAtt 'LambdaBasicExecutionRole.Arn'
      Runtime: python3.7
      Code:
        ZipFile: |
          import json
          import logging
          import cfnresponse
          import boto3

          logger = logging.getLogger()
          logger.setLevel(logging.INFO)
          client = boto3.client('apigateway')

          def lambda_handler(event, context):
              logger.info('got event {}'.format(event))
              try:
                  responseData = {}
                  if event['RequestType'] in ["Create"]:
                      APIId = event['ResourceProperties']['APIId']
                      response = client.update_rest_api(
                          restApiId=APIId,
                          patchOperations=[
                              {
                                  'op': 'replace',
                                  'path': '/disableExecuteApiEndpoint',
                                  'value': 'True'
                              }
                          ]
                      )
                      logger.info(str(response))
                      cfnresponse.send(event, context,
                                       cfnresponse.SUCCESS, responseData)
                  else:
                      logger.info('Unexpected RequestType!')
                      cfnresponse.send(event, context,
                                       cfnresponse.SUCCESS, responseData)
              except Exception as err:
                  logger.error(err)
                  responseData = {"Data": str(err)}
                  cfnresponse.send(event, context,
                                   cfnresponse.FAILED, responseData)
              return
In case anyone using CDK stumbles across this answer: this can be done concisely (without defining a Lambda function) using the AwsCustomResource construct:
const restApi = new apigw.RestApi(...);

const executeApiResource = new cr.AwsCustomResource(this, "execute-api-resource", {
  functionName: "disable-execute-api-endpoint",
  onCreate: {
    service: "APIGateway",
    action: "updateRestApi",
    parameters: {
      restApiId: restApi.restApiId,
      patchOperations: [{
        op: "replace",
        path: "/disableExecuteApiEndpoint",
        value: "True"
      }]
    },
    physicalResourceId: cr.PhysicalResourceId.of("execute-api-resource")
  },
  policy: cr.AwsCustomResourcePolicy.fromStatements([new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["apigateway:PATCH"],
    resources: ["arn:aws:apigateway:*::/*"],
  })])
});
executeApiResource.node.addDependency(restApi);
You can also disable it in AWS CDK by finding the underlying CloudFormation resource and setting the property to true:
const api = new apigateway.RestApi(this, 'api');
(api.node.children[0] as apigateway.CfnRestApi).addPropertyOverride('DisableExecuteApiEndpoint', 'true');
Here is a Python variant of the answer provided by snorberhuis.
rest_api = apigateway.RestApi(self,...)
cfn_apigw = rest_api.node.default_child
cfn_apigw.add_property_override('DisableExecuteApiEndpoint', True)
Amazon's docs on "Abstractions and Escape Hatches" are very good for understanding what's going on here.

AWS CloudFormation template for a CodePipeline with a Jenkins build stage

I need to write a CFT for a pipeline with Jenkins integration for build/test. I found this documentation for setting up the ActionTypeId for the Jenkins stage, but the doc does not specify how to set the server URL of the Jenkins server. It is also not clear to me where to give the Jenkins provider name: in the ActionTypeId or in the configuration properties?
I could not find any example for this use case on the internet either.
Please provide a proper example of setting up a Jenkins action provider for AWS CodePipeline using an AWS CloudFormation template.
The following is a section from the sample CFT I wrote based on the doc above:
"stages": [
{
"name": "Jenkins",
"actions": [
...
{
"name": "Jenkins Build",
"actionTypeId": {
"category": "Build",
"owner": "Custom",
"provider": "Jenkins",
"version": "1"
},
"runOrder": 2,
"configuration": {
???
},
...
}
]
},
...
]
The piece of information that was missing for me was that I needed to create a Custom Action to use Jenkins as the action provider for my CodePipeline.
First I added the custom action as below:
JenkinsCustomActionType:
  Type: AWS::CodePipeline::CustomActionType
  Properties:
    Category: Build
    Provider: !Ref JenkinsProviderName
    Version: 1
    ConfigurationProperties:
      - Description: "The name of the build project must be provided when this action is added to the pipeline."
        Key: true
        Name: ProjectName
        Queryable: false
        Required: true
        Secret: false
        Type: String
    InputArtifactDetails:
      MaximumCount: 5
      MinimumCount: 0
    OutputArtifactDetails:
      MaximumCount: 5
      MinimumCount: 0
    Settings:
      EntityUrlTemplate: !Join ['', [!Ref JenkinsServerURL, "/job/{Config:ProjectName}/"]]
      ExecutionUrlTemplate: !Join ['', [!Ref JenkinsServerURL, "/job/{Config:ProjectName}/{ExternalExecutionId}/"]]
    Tags:
      - Key: Name
        Value: custom-jenkins-action-type
The Jenkins server URL is given in the Settings of the Custom Action, and the Jenkins provider name is given as the Provider; these were the two things I was missing initially.
Then configure the pipeline stage as follows:
DevPipeline:
  Type: AWS::CodePipeline::Pipeline
  DependsOn: JenkinsCustomActionType
  Properties:
    Name: Dev-CodePipeline
    RoleArn:
      Fn::GetAtt: [CodePipelineRole, Arn]
    Stages:
      ...
      - Name: DevBuildVerificationTest
        Actions:
          - Name: JenkinsDevBVT
            ActionTypeId:
              Category: Build
              Owner: Custom
              Version: 1
              Provider: !Ref JenkinsProviderName
            Configuration:
              # JenkinsDevBVTProjectName - Jenkins Job name defined as a parameter in the CFT
              ProjectName: !Ref JenkinsDevBVTProjectName
            RunOrder: 4
The Custom Action has to be created before the pipeline, hence DependsOn: JenkinsCustomActionType.

Swagger definition for an AWS Api-Gateway Lambda Proxy endpoint

FYI: I've checked similar issues related to this, but none solves my problem.
I'm trying to create the Swagger definition for a number of APIs under AWS API Gateway. I was able to do this successfully for other (POST, GET) endpoints from an auto-generated YAML configuration I downloaded from the API stage.
But I ran into issues when I tried to do the same for an API Gateway endpoint with Lambda proxy integration (screenshot: error from editor.swagger.io).
Below is my YAML definition for the failing endpoint:
swagger: "2.0"
info:
version: "2018-04-18T17-09-07Z"
title: "XXX API"
host: "api.xxx.io"
schemes:
- "https"
parameters:
stage:
name: stage
in: path
type: string
enum: [ staging, production]
required: true
paths:
/env/{stage}/{proxy+}:
x-amazon-apigateway-any-method:
produces:
- "application/json"
parameters:
- $ref: '#/parameters/stage'
- name: "proxy"
in: "path"
required: true
type: "string"
responses: {}
x-amazon-apigateway-integration:
uri: "arn:aws:apigateway:eu-central-1:lambda:path/2015-03-31/functions/arn:aws:lambda:eu-central-1:xxxxxxxxx:function:environment/invocations"
responses:
default:
statusCode: "200"
passthroughBehavior: "when_no_match"
httpMethod: "POST"
cacheNamespace: "4vbcjm"
cacheKeyParameters:
- "method.request.path.proxy"
contentHandling: "CONVERT_TO_TEXT"
type: "aws_proxy"
This is in line with the AWS documentation.
Please, what am I missing?
At a glance, I believe you have an error in your parameters block. If you include a $ref, anything in that block that follows it is discarded, so your proxy parameter is getting dropped. I have a similar setup with API Gateway proxying all calls to a Lambda, and this is my parameters block:
parameters:
  - name: "proxy"
    in: "path"
    required: true
    type: "string"
Additionally, you may want an authorizer if you're at all worried about DDoS or serving secure data. That's done by adding a security array as a sibling of parameters, and a securityDefinitions block as a sibling of paths:
security:
  - authorizer: []
securityDefinitions:
  authorizer:
    type: "apiKey"
    name: "Authorization"
    in: "header"
    x-amazon-apigateway-authtype: "custom"
    x-amazon-apigateway-authorizer: {
      type: "request",
      authorizerUri: "arn:aws:apigateway:${region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${region}:${account_id}:function:${authorizer_function_name}/invocations",
      authorizerResultTtlInSeconds: 58,
      identitySource: "method.request.header.authorization",
    }
*Note: I'm publishing the swagger as a Terraform template, hence the ${} substitutions.

Getting Sequelize.js library to work on Amazon Lambda

So I'm trying to run a Lambda on AWS, and I finally narrowed down the error by testing the Lambda in Amazon's testing console.
The error I got is this:
{
  "errorMessage": "Please install mysql2 package manually",
  "errorType": "Error",
  "stackTrace": [
    "new MysqlDialect (/var/task/node_modules/sequelize/lib/dialects/mysql/index.js:14:30)",
    "new Sequelize (/var/task/node_modules/sequelize/lib/sequelize.js:234:20)",
    "Object.exports.getSequelizeConnection (/var/task/src/twilio/twilio.js:858:20)",
    "Object.<anonymous> (/var/task/src/twilio/twilio.js:679:25)",
    "__webpack_require__ (/var/task/src/twilio/twilio.js:20:30)",
    "/var/task/src/twilio/twilio.js:63:18",
    "Object.<anonymous> (/var/task/src/twilio/twilio.js:66:10)",
    "Module._compile (module.js:570:32)",
    "Object.Module._extensions..js (module.js:579:10)",
    "Module.load (module.js:487:32)",
    "tryModuleLoad (module.js:446:12)",
    "Function.Module._load (module.js:438:3)",
    "Module.require (module.js:497:17)",
    "require (internal/module.js:20:19)"
  ]
}
Easy enough: I have to install mysql2. So I added it to my package.json file:
{
  "name": "test-api",
  "version": "1.0.0",
  "description": "",
  "main": "handler.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 0"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "aws-sdk": "^2.153.0",
    "babel-core": "^6.26.0",
    "babel-loader": "^7.1.2",
    "babel-plugin-transform-runtime": "^6.23.0",
    "babel-preset-es2015": "^6.24.1",
    "babel-preset-stage-3": "^6.24.1",
    "serverless-domain-manager": "^1.1.20",
    "serverless-dynamodb-autoscaling": "^0.6.2",
    "serverless-webpack": "^4.0.0",
    "webpack": "^3.8.1",
    "webpack-node-externals": "^1.6.0"
  },
  "dependencies": {
    "babel-runtime": "^6.26.0",
    "mailgun-js": "^0.13.1",
    "minimist": "^1.2.0",
    "mysql": "^2.15.0",
    "mysql2": "^1.5.1",
    "qs": "^6.5.1",
    "sequelize": "^4.31.2",
    "serverless": "^1.26.0",
    "serverless-plugin-scripts": "^1.0.2",
    "twilio": "^3.10.0",
    "uuid": "^3.1.0"
  }
}
I noticed when I do sls deploy, however, that it seems to package only some of the modules:
Serverless: Package lock found - Using locked versions
Serverless: Packing external modules: babel-runtime@^6.26.0, twilio@^3.10.0, qs@^6.5.1, mailgun-js@^0.13.1, sequelize@^4.31.2, minimist@^1.2.0, uuid@^3.1.0
Serverless: Packaging service...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
................................
Serverless: Stack update finished...
I think this is why it's not working. In short: how do I get the mysql2 library to be packaged correctly with Serverless so my Lambda function will work with the sequelize library?
Please note that my code works fine when I test locally.
My serverless file is below:
service: testapi

# Use serverless-webpack plugin to transpile ES6/ES7
plugins:
  - serverless-webpack
  - serverless-plugin-scripts
#  - serverless-domain-manager

custom:
  # Define the Stage or default to Staging.
  stage: ${opt:stage, self:provider.stage}
  webpackIncludeModules: true
  # Define Databases Here
  databaseName: "${self:service}-${self:custom.stage}"
  # Define Bucket Names Here
  uploadBucket: "${self:service}-uploads-${self:custom.stage}"
  # Custom Script setup
  scripts:
    hooks:
      # The script below will run schema changes to the database as necessary
      # and update according to stage.
      'deploy:finalize': node database-schema-update.js --stage ${self:custom.stage}
  # Domain Setup
  # customDomain:
  #   basePath: "/"
  #   domainName: "api-${self:custom.stage}.test.com"
  #   stage: "${self:custom.stage}"
  #   certificateName: "*.test.com"
  #   createRoute53Record: true

provider:
  name: aws
  runtime: nodejs6.10
  stage: staging
  region: us-east-1
  environment:
    DOMAIN_NAME: "api-${self:custom.stage}.test.com"
    DATABASE_NAME: ${self:custom.databaseName}
    DATABASE_USERNAME: ${env:RDS_USERNAME}
    DATABASE_PASSWORD: ${env:RDS_PASSWORD}
    UPLOAD_BUCKET: ${self:custom.uploadBucket}
    TWILIO_ACCOUNT_SID: ""
    TWILIO_AUTH_TOKEN: ""
    USER_POOL_ID: ""
    APP_CLIENT_ID: ""
    REGION: "us-east-1"
    IDENTITY_POOL_ID: ""
    RACKSPACE_API_KEY: ""
  # Below controls permissions for lambda functions.
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeTable
        - dynamodb:UpdateTable
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: "arn:aws:dynamodb:us-east-1:*:*"

functions:
  create_visit:
    handler: src/visits/create.main
    events:
      - http:
          path: visits
          method: post
          cors: true
          authorizer: aws_iam
  get_visit:
    handler: src/visits/get.main
    events:
      - http:
          path: visits/{id}
          method: get
          cors: true
          authorizer: aws_iam
  list_visit:
    handler: src/visits/list.main
    events:
      - http:
          path: visits
          method: get
          cors: true
          authorizer: aws_iam
  update_visit:
    handler: src/visits/update.main
    events:
      - http:
          path: visits/{id}
          method: put
          cors: true
          authorizer: aws_iam
  delete_visit:
    handler: src/visits/delete.main
    events:
      - http:
          path: visits/{id}
          method: delete
          cors: true
          authorizer: aws_iam
  twilio_send_text_message:
    handler: src/twilio/twilio.send_text_message
    events:
      - http:
          path: twilio/sendtextmessage
          method: post
          cors: true
          authorizer: aws_iam
  # This function handles incoming calls and where to route them.
  twilio_incoming_call:
    handler: src/twilio/twilio.incoming_calls
    events:
      - http:
          path: twilio/calls
          method: post
  twilio_failure:
    handler: src/twilio/twilio.twilio_failure
    events:
      - http:
          path: twilio/failure
          method: post
  twilio_statuschange:
    handler: src/twilio/twilio.statuschange
    events:
      - http:
          path: twilio/statuschange
          method: post
  twilio_incoming_message:
    handler: src/twilio/twilio.incoming_message
    events:
      - http:
          path: twilio/messages
          method: post
  twilio_whisper:
    handler: src/twilio/twilio.whisper
    events:
      - http:
          path: twilio/whisper
          method: post
      - http:
          path: twilio/whisper
          method: get
  twilio_start_call:
    handler: src/twilio/twilio.start_call
    events:
      - http:
          path: twilio/startcall
          method: post
      - http:
          path: twilio/startcall
          method: get

resources:
  Resources:
    uploadBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.uploadBucket}
    RDSDatabase:
      Type: AWS::RDS::DBInstance
      Properties:
        Engine: mysql
        MasterUsername: ${env:RDS_USERNAME}
        MasterUserPassword: ${env:RDS_PASSWORD}
        DBInstanceClass: db.t2.micro
        AllocatedStorage: '5'
        PubliclyAccessible: true
        # TODO: The value of stage is also available as a tag automatically,
        # which I may use to replace this manually being put here.
        Tags:
          - Key: "Name"
            Value: ${self:custom.databaseName}
      DeletionPolicy: Snapshot
    DNSRecordSet:
      Type: AWS::Route53::RecordSet
      Properties:
        HostedZoneName: test.com.
        Name: database-${self:custom.stage}.test.com
        Type: CNAME
        TTL: '300'
        ResourceRecords:
          - {"Fn::GetAtt": ["RDSDatabase", "Endpoint.Address"]}
      DependsOn: RDSDatabase
UPDATE: I confirmed that running sls package --stage dev produces the zip that would eventually be uploaded to AWS, and it confirms that Serverless is not creating the package correctly with the mysql2 reference for some reason. Why is this?
The webpack config file, as requested:
const slsw = require("serverless-webpack");
const nodeExternals = require("webpack-node-externals");

module.exports = {
  entry: slsw.lib.entries,
  target: "node",
  // Since 'aws-sdk' is not compatible with webpack,
  // we exclude all node dependencies
  externals: [nodeExternals()],
  // Run babel on all .js files and skip those in node_modules
  module: {
    rules: [
      {
        test: /\.js$/,
        loader: "babel-loader",
        include: __dirname,
        exclude: /node_modules/
      }
    ]
  }
};
Thanks to dashmug's comment, after some investigation on this page (https://github.com/serverless-heaven/serverless-webpack) I found that there is a section on Forced Inclusion. I'll paraphrase it here:
Forced inclusion: Sometimes it might happen that you use dynamic requires in your code, i.e. you require modules that are only known at runtime. Webpack is not able to detect such externals, and the compiled package will miss the needed dependencies. In such cases you can force the plugin to include certain modules by setting them in the forceInclude array property. However, the module must appear in your service's production dependencies in package.json.
# serverless.yml
custom:
  webpackIncludeModules:
    forceInclude:
      - module1
      - module2
So I simply did this...
webpackIncludeModules:
  forceInclude:
    - mysql
    - mysql2
Now it works! Hope this helps someone else with the same issue.
None of the previous answers helped me; I used this solution: https://github.com/sequelize/sequelize/issues/9489#issuecomment-493304014
The trick is to use the dialectModule property to override sequelize's default:
import Sequelize from 'sequelize';
import mysql2 from 'mysql2'; // Needed to fix sequelize issues with WebPack

const sequelize = new Sequelize(
  process.env.DB_NAME,
  process.env.DB_USER,
  process.env.DB_PASSWORD,
  {
    dialect: 'mysql',
    dialectModule: mysql2, // Needed to fix sequelize issues with WebPack
    host: process.env.DB_HOST,
    port: process.env.DB_PORT
  }
);

export async function connectToDatabase() {
  console.log('Trying to connect via sequelize');
  await sequelize.sync();
  await sequelize.authenticate();
  console.log('=> Created a new connection.');
  // Do something
}
So far this works with MySQL, but it is not working with Postgres.