aws-cdk: Associate Lambda Function with CloudFront Web Distribution

I'm trying to create a CloudFront Web Distribution using aws-cdk. I'm able to successfully create the web distribution, but I haven't been able to figure out how to associate a Lambda function with it yet.
Below is a snippet of my TypeScript aws-cdk code that creates the CloudFront Web Distribution. I removed some of the code that's not relevant.
new cloudfront.CloudFrontWebDistribution(this, 'RetsFilesCDN', {
  originConfigs: [
    {
      s3OriginSource: {
        originAccessIdentity: cfAccess, /* A CfnCloudFrontOriginAccessIdentity object created in earlier code */
        s3BucketSource: files /* S3 bucket created in earlier code */
      },
      behaviors: [
        {
          compress: true,
          defaultTtlSeconds: 172800,
          isDefaultBehavior: true,
          maxTtlSeconds: 31536000,
          minTtlSeconds: 0
        }
      ]
    }
  ]
});
The CloudFormation code that I'm trying to get it to generate is something like this:
RetsFilesCDNCFDistribution6F414E1A:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      CacheBehaviors: []
      Comment: CDN for files from the Real Estate RETS services that BranchCMS integrates with
      DefaultCacheBehavior:
        AllowedMethods:
          - GET
          - HEAD
        CachedMethods:
          - GET
          - HEAD
        Compress: true
        DefaultTTL: 172800
        ForwardedValues:
          Cookies:
            Forward: none
          QueryString: false
        MaxTTL: 259200
        MinTTL: 172800
        LambdaFunctionAssociations:
          - EventType: origin-response
            LambdaFunctionARN: lambdaFunctionArnHere
        TargetOriginId: origin1
        ViewerProtocolPolicy: redirect-to-https
      DefaultRootObject: index.html
      Enabled: true
      HttpVersion: http2
      IPV6Enabled: true
      Origins:
        - DomainName:
            Fn::GetAtt:
              - RetsFilesC9F78E92
              - DomainName
          Id: origin1
          S3OriginConfig:
            OriginAccessIdentity:
              Fn::Join:
                - ""
                - - origin-access-identity/cloudfront/
                  - Ref: RetsFilesAccess
      PriceClass: PriceClass_100
      ViewerCertificate:
        AcmCertificateArn: arn:aws:acm:us-east-1:666445282096:certificate/25d4967c-c29a-4d11-983f-86d709769372
        SslSupportMethod: sni-only
The exact part that I can't seem to generate is:
LambdaFunctionAssociations:
  - EventType: origin-response
    LambdaFunctionARN: lambdaFunctionArnHere
Thank you in advance for your help.

I'm not sure if this is the best technique, but the following worked for me.
import cdk = require('@aws-cdk/cdk');
import cloudfront = require('@aws-cdk/aws-cloudfront');
import lambda = require('@aws-cdk/aws-lambda');

export class MyStack extends cdk.Stack {
  constructor(parent: cdk.App, name: string, props?: cdk.StackProps) {
    super(parent, name, props);

    // Create the Lambda function
    const lambdaFunc = new lambda.Function(this, 'MyLambda', {
      YOUR_LAMBDA_PROPERTIES: HERE
    });

    // Create the CloudFront Web Distribution
    const cf = new cloudfront.CloudFrontWebDistribution(this, 'MyCDN', {
      YOUR_CLOUDFRONT_PROPERTIES: HERE
    });

    /**
     * THIS IS THE BEGINNING OF THE SOLUTION
     */
    // Get the CloudFront Distribution object to add the LambdaFunctionAssociations to
    const cfDist = cf.findChild('CFDistribution') as cloudfront.CfnDistribution;

    // Manually add the LambdaFunctionAssociations by adding an override.
    // Note: Lambda@Edge requires a versioned function ARN, hence the ':2' suffix here.
    cfDist.addOverride('Properties.DistributionConfig.DefaultCacheBehavior.LambdaFunctionAssociations', [{
      EventType: 'origin-response',
      LambdaFunctionARN: lambdaFunc.functionArn + ':2'
    }]);
    /**
     * END OF SOLUTION
     */
  }
}
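Note: later CDK releases added first-class support for this, so the override should no longer be needed there; behaviors accept a lambdaFunctionAssociations property directly (the related question further down uses exactly that). A minimal sketch, assuming a recent CDK version, where bucket, oai, and edgeFunction are placeholders for your own resources:
// Sketch only: bucket, oai and edgeFunction stand in for your own resources.
const cfNative = new cloudfront.CloudFrontWebDistribution(this, 'MyCDN', {
  originConfigs: [{
    s3OriginSource: {
      s3BucketSource: bucket,
      originAccessIdentity: oai,
    },
    behaviors: [{
      isDefaultBehavior: true,
      lambdaFunctionAssociations: [{
        eventType: cloudfront.LambdaEdgeEventType.ORIGIN_RESPONSE,
        // currentVersion yields the versioned reference Lambda@Edge requires
        lambdaFunction: edgeFunction.currentVersion,
      }],
    }],
  }],
});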

Is it possible to associate a newly created function (Lambda@Edge / CloudFront Function) with an existing CloudFront distribution?
The above code only seems to work for a newly created distribution.

You can use this property to associate a Lambda function with your CloudFront cache behaviour:
import { CfnDistribution } from '@aws-cdk/aws-cloudfront';
// the relevant field is CfnDistribution.CacheBehaviorProperty.lambdaFunctionAssociations
See the CfnDistribution.CacheBehaviorProperty documentation for more information.
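For illustration, a hedged sketch of the L1 shape that field describes (pathPattern and versionedArn are placeholders; the ARN must point at a specific published function version):
const behavior: cloudfront.CfnDistribution.CacheBehaviorProperty = {
  pathPattern: '/files/*', // placeholder
  targetOriginId: 'origin1',
  viewerProtocolPolicy: 'redirect-to-https',
  forwardedValues: { queryString: false },
  lambdaFunctionAssociations: [{
    eventType: 'origin-response',
    lambdaFunctionArn: versionedArn, // placeholder: versioned Lambda@Edge ARN
  }],
};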

Related

How to enable managed response header policy of SecurityHeaders with CloudFrontWebDistribution in AWS CDK?

I have a CloudFrontWebDistribution in my AWS CDK infrastructure code in TypeScript:
const cloudFrontDistribution = new cloudfront.CloudFrontWebDistribution(this, 'distribution', {
  originConfigs: [
    {
      s3OriginSource: {
        s3BucketSource: webBucket,
        originAccessIdentity: originAccessIdentity,
      },
      behaviors: [
        {
          isDefaultBehavior: true,
          defaultTtl: Duration.seconds(1),
          lambdaFunctionAssociations: [
            {
              eventType: LambdaEdgeEventType.VIEWER_REQUEST,
              lambdaFunction: midwayEdgeFunction.currentVersion,
            },
          ],
        },
      ],
    },
  ],
  defaultRootObject: 'index.html',
  viewerCertificate: cloudfront.ViewerCertificate.fromAcmCertificate(props.certificate, {
    aliases: [props.stageProps.cloud_front_domain_name],
    sslMethod: cloudfront.SSLMethod.SNI,
    securityPolicy: cloudfront.SecurityPolicyProtocol.TLS_V1_2_2019,
  }),
  viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.HTTPS_ONLY,
  loggingConfig: {
    bucket: logBucket,
    includeCookies: true,
    prefix: 'cflogs/',
  },
});
I want to enable the SecurityHeaders managed response headers policy (see here) on this distribution. However, I only see AWS CDK documentation for doing so on a Distribution object, not on a CloudFrontWebDistribution object.
How do I enable the managed SecurityHeaders response headers policy on a CloudFrontWebDistribution object in AWS CDK?
Get an escape hatch reference to the underlying L1 CfnDistribution construct. Then manually set the ResponseHeadersPolicyId property on DefaultCacheBehavior, making use of the ResponseHeadersPolicy.SECURITY_HEADERS static property:
const cfnDistribution = cloudFrontDistribution.node.defaultChild as cloudfront.CfnDistribution;

cfnDistribution.addPropertyOverride(
  'DistributionConfig.DefaultCacheBehavior.ResponseHeadersPolicyId',
  cloudfront.ResponseHeadersPolicy.SECURITY_HEADERS.responseHeadersPolicyId
);
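Alternatively, if migrating to the newer Distribution construct is an option, the managed policy can be passed directly as a behavior option. A minimal sketch, assuming aws-cdk-lib (v2) imports and the same webBucket / originAccessIdentity as in the question:
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';

// Sketch: the Distribution construct accepts the managed policy directly.
const distribution = new cloudfront.Distribution(this, 'distribution', {
  defaultBehavior: {
    origin: new origins.S3Origin(webBucket, { originAccessIdentity }),
    viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.HTTPS_ONLY,
    responseHeadersPolicy: cloudfront.ResponseHeadersPolicy.SECURITY_HEADERS,
  },
  defaultRootObject: 'index.html',
});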

Serverless Lambda function runs into a timeout when running Nuxt

I just managed to deploy my Nuxt application via Serverless on AWS. Basically everything works as expected, but in some cases the Lambda function just runs into a timeout and can't serve my Nuxt application. Since my application is a SPA, the timeout only happens during a refresh of the browser window or when I visit my page in a new tab, and only sometimes. I already increased the Lambda timeout to 30s (matching the API Gateway timeout), which should be enough, but the timeout still occurs.
Here's my serverless.yml:
service:
  name: test-app

plugins:
  - serverless-nuxt-plugin
  - serverless-dotenv-plugin
  - serverless-domain-manager

resources:
  Resources:
    AssetsBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.nuxt.bucketName}
        CorsConfiguration:
          CorsRules:
            - AllowedMethods:
                - GET
                - HEAD
              AllowedOrigins:
                - "*"

provider:
  name: aws
  region: eu-central-1 # this field is used for the assets files s3 path.
  stage: ${env:APP_ENV}
  runtime: nodejs12.x
  environment:
    NODE_ENV: ${env:APP_ENV}
  tags: # Optional service wide function tags
    usecase: test-app
    environment: ${self:provider.stage}
    domain: ${env:DEPLOY_DOMAIN}

custom:
  nuxt:
    version: app-${self:provider.stage}-v1
    bucketName: test-app-static-${self:provider.stage}
    cdnPath: https://cdn.XXX.com
  customDomain:
    domainName: ${env:DEPLOY_DOMAIN}
    certificateName: ${'*.'}${env:DEPLOY_DOMAIN}
    createRoute53Record: true
    endpointType: 'regional'

functions:
  nuxt:
    handler: lambda-handler.render
    memorySize: 512 # in MB with steps of 64
    timeout: 30 # in seconds
    events:
      - http: ANY /
      - http: ANY /{proxy+}
And my Lambda handler:
const awsServerlessExpress = require('aws-serverless-express');
const express = require('express');
const { Nuxt } = require('nuxt-start'); // eslint-disable-line
const nuxtConfig = require("./nuxt.config.js");

const app = express();
const nuxt = new Nuxt({
  ...nuxtConfig,
  dev: false,
  _start: true,
});

app.use(async (req, res) => {
  if (nuxt.ready) {
    await nuxt.ready();
  }
  nuxt.render(req, res);
});

const server = awsServerlessExpress.createServer(app, void 0, [
  'application/javascript',
  'application/json',
  'application/manifest+json',
  'application/octet-stream',
  'application/xml',
  'font/eot',
  'font/opentype',
  'font/otf',
  'image/gif',
  'image/jpeg',
  'image/png',
  'image/svg+xml',
  'image/x-icon', // for favicon
  'text/comma-separated-values',
  'text/css',
  'text/html',
  'text/javascript',
  'text/plain',
  'text/text',
  'text/xml',
  'application/rss+xml',
  'application/atom+xml',
]);

module.exports.render = (event, context) => {
  awsServerlessExpress.proxy(server, event, context);
};
Additionally, I set up a CloudFront distribution in front of my API Gateway to redirect HTTP traffic to HTTPS. So nothing really special, I guess.
Here's an example of my CloudWatch logs that shows an example timeout: [log screenshot not included]
So the Lambda durations vary widely and I can't really understand why. I've even seen durations of 100ms, but they can go all the way up to the 30s timeout.
Is there anything wrong in my setup, or something I missed? I'm aware of the cold-start bottleneck for Lambdas, but these timed-out calls are not caused by a cold start.
I really appreciate your help!
I solved the issue for now by increasing the memory limit, first from 512MB to 1024MB and then, in a second step, from 1024MB to 2048MB.
See the CloudWatch diagram here (blue line).
I guess that my application is just too large, with too many dependencies and modules that need to be loaded when running the Lambda. However, I'm still not sure whether a memory leak or something else is causing the issue and increasing the memory limit is just hiding it. But if anyone has the same issue, increasing the memory seems to be a good temporary fix to at least keep your application available.
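For reference, the change amounts to a one-line edit in serverless.yml, sketched here with the final value that worked for me (Lambda also allocates CPU in proportion to the memory setting, which is likely why this helps):
functions:
  nuxt:
    handler: lambda-handler.render
    memorySize: 2048 # raised from 512 in two steps
    timeout: 30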

Creating Cognito User Pool With Custom Domain name from AWS CDK

I'm trying to create a Cognito user pool with a custom domain name through the AWS CDK. I managed to get everything working until the point where I needed to create an A record in the Route53 hosted zone. I searched through all the documents but couldn't find a way to do that. Following is my code. Any help would be much appreciated.
const cfnUserPool = new CfnUserPool(this, 'MyCognitoUserPool', {
  userPoolName: 'MyCognitoUserPool',
  adminCreateUserConfig: {
    allowAdminCreateUserOnly: false
  },
  policies: {
    passwordPolicy: {
      minimumLength: 8,
      requireLowercase: true,
      requireNumbers: true,
      requireSymbols: true,
      requireUppercase: true,
      temporaryPasswordValidityDays: 30
    }
  },
  usernameAttributes: [
    UserPoolAttribute.EMAIL
  ],
  schema: [
    {
      attributeDataType: 'String',
      name: UserPoolAttribute.EMAIL,
      mutable: true,
      required: true
    },
    {
      attributeDataType: 'String',
      name: UserPoolAttribute.FAMILY_NAME,
      mutable: false,
      required: true
    },
    {
      attributeDataType: 'String',
      name: UserPoolAttribute.GIVEN_NAME,
      mutable: false,
      required: true
    }
  ]
});

const cognitoAppDomain = new CfnUserPoolDomain(this, 'PigletAuthDomainName', {
  domain: authDomainName,
  userPoolId: cfnUserPool.ref,
  customDomainConfig: {
    certificateArn: 'ACM Certificate arn'
  }
});

/*
 * TODO: Create an A record from the created CfnUserPoolDomain
 */
Everything works up until this point. Now the question is how to create an A record using the CfnUserPoolDomain.
Any help is much appreciated.
Update May 2020
The UserPoolDomain construct has been extended and a UserPoolDomainTarget was added to provide this functionality.
Now, all you need to do is the following:
import * as cognito from '@aws-cdk/aws-cognito';
import * as route53 from '@aws-cdk/aws-route53';
import * as route53_targets from '@aws-cdk/aws-route53-targets';

const userPoolDomain = new cognito.UserPoolDomain(this, 'UserPoolDomain', {
  userPool,
  customDomain: {
    domainName: authDomainName,
    certificate,
  },
});

new route53.ARecord(this, 'UserPoolCloudFrontAliasRecord', {
  zone: hostedZone,
  recordName: authDomainName,
  target: route53.RecordTarget.fromAlias(new route53_targets.UserPoolDomainTarget(userPoolDomain)),
});
I had the same problem. It looks like CloudFormation does not have a return parameter for the CfnUserPoolDomain alias target, which means the CDK cannot provide this parameter either.
I ended up implementing it using the AWS SDK (npm install aws-sdk) and getting the value using the APIs.
Update: The better solution is to use the AwsCustomResource. You can see a detailed example in aws/aws-cdk (#6787):
import * as customResources from '@aws-cdk/custom-resources';

const userPoolDomainDescription = new customResources.AwsCustomResource(this, 'user-pool-domain-description', {
  onCreate: {
    physicalResourceId: 'user-pool-domain-description',
    service: 'CognitoIdentityServiceProvider',
    action: 'describeUserPoolDomain',
    parameters: {
      Domain: userPoolDomain.domain
    }
  }
});

const dnsName = userPoolDomainDescription.getData('DomainDescription.CloudFrontDistribution').toString();

// Route53 alias record for the UserPoolDomain CloudFront distribution
new route53.ARecord(this, 'UserPoolDomainAliasRecord', {
  recordName: userPoolDomain.domain,
  target: route53.RecordTarget.fromAlias({
    bind: _record => ({
      hostedZoneId: 'Z2FDTNDATAQYW2', // CloudFront zone ID, fixed for all distributions
      dnsName: dnsName,
    }),
  }),
  zone,
});
Here's how to get around it. Assuming you have a stack.yaml that you deploy with a CI tool, say through bash:
THE_STACK_NAME="my-cognito-stack"
THE_DOMAIN_NAME="auth.yourveryowndomain.org"
# get the alias target
# notice that it will be empty upon first launch (chicken and the egg problem)
ALIAS_TARGET=$(aws cognito-idp describe-user-pool-domain --domain ${THE_DOMAIN_NAME} | grep CloudFrontDistribution | cut -d \" -f4)
# create/update the deployment CloudFormation stack
# notice the AliasTarget parameter (which can be empty, it's okay!)
aws cloudformation deploy --stack-name ${THE_STACK_NAME} --template-file stack.yaml --parameter-overrides AliasTarget=${ALIAS_TARGET} DomainName=${THE_DOMAIN_NAME}
The stack.yaml minimal version (remember to fill the UserPool config):
---
AWSTemplateFormatVersion: 2010-09-09
Parameters:
DomainName:
Type: String
Default: auth.yourveryowndomain.org
Description: The domain name to use to serve this project.
ZoneName:
Type: String
Default: yourveryowndomain.org
Description: The hosted zone name coming along with the DomainName used.
AliasTarget: # no default value, can be empty
Type: String
Description: The UserPoolDomain alias target.
Conditions: # here's "the trick"
HasAliasTarget: !Not [!Equals ['', !Ref AliasTarget]]
Resources:
Certificate:
Type: "AWS::CertificateManager::Certificate"
Properties:
DomainName: !Ref ZoneName
DomainValidationOptions:
- DomainName: !Ref ZoneName
ValidationDomain: !Ref ZoneName
SubjectAlternativeNames:
- !Ref DomainName
UserPool:
Type: AWS::Cognito::UserPool
Properties:
[... fill that with your configuration! ...]
UserPoolDomain:
Type: AWS::Cognito::UserPoolDomain
Properties:
UserPoolId: !Ref UserPool
Domain: !Ref DomainName
CustomDomainConfig:
CertificateArn: !Ref Certificate
DnsRecord: # if AliasTarget parameter is empty, well we just can't do that one!
Condition: HasAliasTarget # and here's how we don't do it when we can't
Type: AWS::Route53::RecordSet
Properties:
HostedZoneName: !Sub "${ZoneName}."
AliasTarget:
DNSName: !Ref AliasTarget
EvaluateTargetHealth: false
# HostedZoneId value for CloudFront is always this one
# see https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-route53-aliastarget.html
HostedZoneId: Z2FDTNDATAQYW2
Name: !Ref DomainName
Type: A
Be aware that CloudFormation conditions are not "a trick" at all: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html. We simply use one, together with the fact that the first launch won't do it all, to get around our scenario.
Kinda weird, but only for the first run! Launch it again: everything is fine.
PS: can't wait to avoid all that by simply having the CloudFront distribution alias target directly in the AWS::Cognito::UserPoolDomain return values!

Getting Sequelize.js library to work on Amazon Lambda

So I'm trying to run a Lambda on Amazon and finally narrowed down the error by testing the Lambda in Amazon's testing console.
The error I got is this:
{
  "errorMessage": "Please install mysql2 package manually",
  "errorType": "Error",
  "stackTrace": [
    "new MysqlDialect (/var/task/node_modules/sequelize/lib/dialects/mysql/index.js:14:30)",
    "new Sequelize (/var/task/node_modules/sequelize/lib/sequelize.js:234:20)",
    "Object.exports.getSequelizeConnection (/var/task/src/twilio/twilio.js:858:20)",
    "Object.<anonymous> (/var/task/src/twilio/twilio.js:679:25)",
    "__webpack_require__ (/var/task/src/twilio/twilio.js:20:30)",
    "/var/task/src/twilio/twilio.js:63:18",
    "Object.<anonymous> (/var/task/src/twilio/twilio.js:66:10)",
    "Module._compile (module.js:570:32)",
    "Object.Module._extensions..js (module.js:579:10)",
    "Module.load (module.js:487:32)",
    "tryModuleLoad (module.js:446:12)",
    "Function.Module._load (module.js:438:3)",
    "Module.require (module.js:497:17)",
    "require (internal/module.js:20:19)"
  ]
}
Easy enough, so I have to install mysql2. So I added it to my package.json file.
{
  "name": "test-api",
  "version": "1.0.0",
  "description": "",
  "main": "handler.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 0"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "aws-sdk": "^2.153.0",
    "babel-core": "^6.26.0",
    "babel-loader": "^7.1.2",
    "babel-plugin-transform-runtime": "^6.23.0",
    "babel-preset-es2015": "^6.24.1",
    "babel-preset-stage-3": "^6.24.1",
    "serverless-domain-manager": "^1.1.20",
    "serverless-dynamodb-autoscaling": "^0.6.2",
    "serverless-webpack": "^4.0.0",
    "webpack": "^3.8.1",
    "webpack-node-externals": "^1.6.0"
  },
  "dependencies": {
    "babel-runtime": "^6.26.0",
    "mailgun-js": "^0.13.1",
    "minimist": "^1.2.0",
    "mysql": "^2.15.0",
    "mysql2": "^1.5.1",
    "qs": "^6.5.1",
    "sequelize": "^4.31.2",
    "serverless": "^1.26.0",
    "serverless-plugin-scripts": "^1.0.2",
    "twilio": "^3.10.0",
    "uuid": "^3.1.0"
  }
}
I noticed, however, that when I do sls deploy it seems to only be packaging some of the modules:
Serverless: Package lock found - Using locked versions
Serverless: Packing external modules: babel-runtime@^6.26.0, twilio@^3.10.0, qs@^6.5.1, mailgun-js@^0.13.1, sequelize@^4.31.2, minimist@^1.2.0, uuid@^3.1.0
Serverless: Packaging service...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
................................
Serverless: Stack update finished...
I think this is why it's not working. In short: how do I get the mysql2 library to be packaged correctly with Serverless so that my Lambda function will work with the Sequelize library?
Please note that when I test locally, my code works fine.
My serverless file is below
service: testapi

# Use serverless-webpack plugin to transpile ES6/ES7
plugins:
  - serverless-webpack
  - serverless-plugin-scripts
  # - serverless-domain-manager

custom:
  # Define the Stage or default to Staging.
  stage: ${opt:stage, self:provider.stage}
  webpackIncludeModules: true
  # Define Databases Here
  databaseName: "${self:service}-${self:custom.stage}"
  # Define Bucket Names Here
  uploadBucket: "${self:service}-uploads-${self:custom.stage}"
  # Custom Script setup
  scripts:
    hooks:
      # Script below will run schema changes to the database as necessary and update according to stage.
      'deploy:finalize': node database-schema-update.js --stage ${self:custom.stage}
  # Domain Setup
  # customDomain:
  #   basePath: "/"
  #   domainName: "api-${self:custom.stage}.test.com"
  #   stage: "${self:custom.stage}"
  #   certificateName: "*.test.com"
  #   createRoute53Record: true

provider:
  name: aws
  runtime: nodejs6.10
  stage: staging
  region: us-east-1
  environment:
    DOMAIN_NAME: "api-${self:custom.stage}.test.com"
    DATABASE_NAME: ${self:custom.databaseName}
    DATABASE_USERNAME: ${env:RDS_USERNAME}
    DATABASE_PASSWORD: ${env:RDS_PASSWORD}
    UPLOAD_BUCKET: ${self:custom.uploadBucket}
    TWILIO_ACCOUNT_SID: ""
    TWILIO_AUTH_TOKEN: ""
    USER_POOL_ID: ""
    APP_CLIENT_ID: ""
    REGION: "us-east-1"
    IDENTITY_POOL_ID: ""
    RACKSPACE_API_KEY: ""
  # Below controls permissions for lambda functions.
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeTable
        - dynamodb:UpdateTable
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: "arn:aws:dynamodb:us-east-1:*:*"

functions:
  create_visit:
    handler: src/visits/create.main
    events:
      - http:
          path: visits
          method: post
          cors: true
          authorizer: aws_iam
  get_visit:
    handler: src/visits/get.main
    events:
      - http:
          path: visits/{id}
          method: get
          cors: true
          authorizer: aws_iam
  list_visit:
    handler: src/visits/list.main
    events:
      - http:
          path: visits
          method: get
          cors: true
          authorizer: aws_iam
  update_visit:
    handler: src/visits/update.main
    events:
      - http:
          path: visits/{id}
          method: put
          cors: true
          authorizer: aws_iam
  delete_visit:
    handler: src/visits/delete.main
    events:
      - http:
          path: visits/{id}
          method: delete
          cors: true
          authorizer: aws_iam
  twilio_send_text_message:
    handler: src/twilio/twilio.send_text_message
    events:
      - http:
          path: twilio/sendtextmessage
          method: post
          cors: true
          authorizer: aws_iam
  # This function handles incoming calls and where to route it to.
  twilio_incoming_call:
    handler: src/twilio/twilio.incoming_calls
    events:
      - http:
          path: twilio/calls
          method: post
  twilio_failure:
    handler: src/twilio/twilio.twilio_failure
    events:
      - http:
          path: twilio/failure
          method: post
  twilio_statuschange:
    handler: src/twilio/twilio.statuschange
    events:
      - http:
          path: twilio/statuschange
          method: post
  twilio_incoming_message:
    handler: src/twilio/twilio.incoming_message
    events:
      - http:
          path: twilio/messages
          method: post
  twilio_whisper:
    handler: src/twilio/twilio.whisper
    events:
      - http:
          path: twilio/whisper
          method: post
      - http:
          path: twilio/whisper
          method: get
  twilio_start_call:
    handler: src/twilio/twilio.start_call
    events:
      - http:
          path: twilio/startcall
          method: post
      - http:
          path: twilio/startcall
          method: get

resources:
  Resources:
    uploadBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.uploadBucket}
    RDSDatabase:
      Type: AWS::RDS::DBInstance
      Properties:
        Engine: mysql
        MasterUsername: ${env:RDS_USERNAME}
        MasterUserPassword: ${env:RDS_PASSWORD}
        DBInstanceClass: db.t2.micro
        AllocatedStorage: '5'
        PubliclyAccessible: true
        # TODO: The value of Stage is also available as a TAG automatically, which I may use to replace this manually being put here.
        Tags:
          - Key: "Name"
            Value: ${self:custom.databaseName}
      DeletionPolicy: Snapshot
    DNSRecordSet:
      Type: AWS::Route53::RecordSet
      Properties:
        HostedZoneName: test.com.
        Name: database-${self:custom.stage}.test.com
        Type: CNAME
        TTL: '300'
        ResourceRecords:
          - {"Fn::GetAtt": ["RDSDatabase", "Endpoint.Address"]}
      DependsOn: RDSDatabase
UPDATE: So I confirmed that running sls package --stage dev seems to create this in the zip folder that would eventually be uploaded to AWS. This confirms that serverless is not creating the package correctly with the mysql2 reference for some reason. Why is this?
webpack config file as requested
const slsw = require("serverless-webpack");
const nodeExternals = require("webpack-node-externals");

module.exports = {
  entry: slsw.lib.entries,
  target: "node",
  // Since 'aws-sdk' is not compatible with webpack,
  // we exclude all node dependencies
  externals: [nodeExternals()],
  // Run babel on all .js files and skip those in node_modules
  module: {
    rules: [
      {
        test: /\.js$/,
        loader: "babel-loader",
        include: __dirname,
        exclude: /node_modules/
      }
    ]
  }
};
Thanks to dashmug's comment and some investigation on this page (https://github.com/serverless-heaven/serverless-webpack), there is a section on Forced Inclusion. I'll paraphrase it here:
Forced inclusion: Sometimes it might happen that you use dynamic requires in your code, i.e. you require modules that are only known at runtime. Webpack is not able to detect such externals and the compiled package will miss the needed dependencies. In such cases you can force the plugin to include certain modules by setting them in the forceInclude array property. However, the module must appear in your service's production dependencies in package.json.
# serverless.yml
custom:
  webpackIncludeModules:
    forceInclude:
      - module1
      - module2
So I simply did this...
webpackIncludeModules:
  forceInclude:
    - mysql
    - mysql2
Now it works! Hope this helps someone else with the same issue.
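Heads-up: on newer serverless-webpack versions (5.x+), this configuration appears to have moved under custom.webpack; check the plugin README for your version. A sketch of the equivalent:
custom:
  webpack:
    includeModules:
      forceInclude:
        - mysql
        - mysql2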
None of the previous answers helped me; I used this solution: https://github.com/sequelize/sequelize/issues/9489#issuecomment-493304014
The trick is to use the dialectModule property and override sequelize's default:
import Sequelize from 'sequelize';
import mysql2 from 'mysql2'; // Needed to fix sequelize issues with WebPack

const sequelize = new Sequelize(
  process.env.DB_NAME,
  process.env.DB_USER,
  process.env.DB_PASSWORD,
  {
    dialect: 'mysql',
    dialectModule: mysql2, // Needed to fix sequelize issues with WebPack
    host: process.env.DB_HOST,
    port: process.env.DB_PORT
  }
);

export async function connectToDatabase() {
  console.log('Trying to connect via sequelize');
  await sequelize.sync();
  await sequelize.authenticate();
  console.log('=> Created a new connection.');
  // Do something
}
The above works for MySQL so far, but it is not working with Postgres.
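For completeness, a hypothetical Lambda entry point wiring up the helper above ('./db' is a placeholder for wherever you export connectToDatabase from):
import { connectToDatabase } from './db'; // hypothetical module path

export async function handler(event) {
  await connectToDatabase();
  // ... run your sequelize queries here ...
  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
}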

Setting up caching for APIGateway Methods

I have the following CloudFormation template:
{
  "Conditions": {
    "CreatedProdStage": {...}
  },
  ...
  "Resources": {
    "GetMethod": {
      ...
    },
    "ApiDeployement": {
      ...
    },
    "ProdStage": {
      "Type": "AWS::ApiGateway::Stage",
      "Condition": "CreatedProdStage",
      "Properties": {
        "DeploymentId": "...",
        "RestApiId": "...",
        "MethodSettings": [{
          "CachingEnabled": true,
          "HttpMethod": {"Ref": "GetMethod"},
          "ResourcePath": "/"
        }]
      }
    }
  }
}
And I am getting this error:
Invalid method setting path: /~1/st-GetMetho-xxxAUMMRWxxx/caching/enabled. Must be one of: [/deploymentId, /description, /cacheClusterEnabled, /cacheClusterSize, /clientCertificateId, /{resourcePath}/{httpMethod}/metrics/enabled, /{resourcePath}/{httpMethod}/logging/dataTrace, /{resourcePath}/{httpMethod}/logging/loglevel, /{resourcePath}/{httpMethod}/throttling/burstLimit, /{resourcePath}/{httpMethod}/throttling/rateLimit, /{resourcePath}/{httpMethod}/caching/ttlInSeconds, /{resourcePath}/{httpMethod}/caching/enabled, /{resourcePath}/{httpMethod}/caching/dataEncrypted, /{resourcePath}/{httpMethod}/caching/requireAuthorizationForCacheControl, /{resourcePath}/{httpMethod}/caching/unauthorizedCacheControlHeaderStrategy, /*/*/metrics/enabled, /*/*/logging/dataTrace, /*/*/logging/loglevel, /*/*/throttling/burstLimit, /*/*/throttling/rateLimit, /*/*/caching/ttlInSeconds, /*/*/caching/enabled, /*/*/caching/dataEncrypted, /*/*/caching/requireAuthorizationForCacheControl, /*/*/caching/unauthorizedCacheControlHeaderStrategy, /va
Am I missing something? I thought ResourcePath and HttpMethod were the only required attributes.
You first need to enable caching on the stage with the CacheClusterEnabled property. This will allow you to set up caching for methods as you have done in your MethodSettings:
...
"ProdStage": {
  "Type": "AWS::ApiGateway::Stage",
  "Condition": "CreatedProdStage",
  "Properties": {
    "DeploymentId": "...",
    "RestApiId": "...",
    "CacheClusterEnabled": true,
    "MethodSettings": [{
      "CachingEnabled": true,
      "HttpMethod": {"Ref": "GetMethod"},
      "ResourcePath": "/"
    }]
  }
}
Then you will need to fix the given error: your ResourcePath must match one of those listed in the error output. Those paths are not listed in the documentation, so it's a bit confusing what you need to use. What you currently have is set up for the root path only. If you want all paths, use "/*".
See the ResourcePath property in the AWS::ApiGateway::Stage MethodSetting doc:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-stage-methodsetting.html
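For example, to apply the cache setting to every method on every path (note the wildcards, which the next answer stresses must be quoted in YAML), a sketch:
"MethodSettings": [{
  "CachingEnabled": true,
  "HttpMethod": "*",
  "ResourcePath": "/*"
}]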
If anyone is still arriving at this but is NOT using caching, I have provided an example for setting throttling and logging on the whole API. I could not figure it out until I started playing around with the ResourcePath and HttpMethod and noticed the error changing.
Please note that I used * for both path and method and quoted the values. It will fail without the quotation marks.
ProdStage:
  Type: AWS::ApiGateway::Stage
  Properties:
    StageName: Prod
    RestApiId: !Ref StunningDisco
    DeploymentId: !Ref StunningDiscoDeployment
    MethodSettings:
      - ResourcePath: '/*'
        HttpMethod: '*'
        LoggingLevel: INFO
        DataTraceEnabled: True
        ThrottlingBurstLimit: '10'
        ThrottlingRateLimit: '10.0'

StunningDiscoDomainMapping:
  Type: 'AWS::ApiGateway::BasePathMapping'
  DependsOn: ProdStage
  Properties:
    DomainName: !Ref StunningDiscoDomain
    RestApiId: !Ref StunningDisco
    Stage: !Ref ProdStage

StunningDiscoDeployment:
  Type: AWS::ApiGateway::Deployment
  DependsOn: [StunningDiscoRootEndpoint, LightsInvokeEndpoint]
  Properties:
    RestApiId: !Ref StunningDisco
Try setting the HttpMethod to a string instead of a reference:
"MethodSettings":[{
"CachingEnabled":true,
"HttpMethod": "GET",
"ResourcePath":"/"
}]
}