Serverless Lambda function runs into a timeout when running Nuxt

I just managed to deploy my Nuxt application via Serverless on AWS. Basically everything works as expected, but in some cases the Lambda function just runs into a timeout and can't serve my Nuxt application. Since my application is a SPA, the timeout only happens during a refresh of the browser window or when I visit my page in a new tab, and only sometimes. I already increased the Lambda timeout to 30s (matching the API Gateway timeout), which should be more than enough, but the timeouts still occur.
Here's my serverless.yml:
service:
  name: test-app

plugins:
  - serverless-nuxt-plugin
  - serverless-dotenv-plugin
  - serverless-domain-manager

resources:
  Resources:
    AssetsBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.nuxt.bucketName}
        CorsConfiguration:
          CorsRules:
            - AllowedMethods:
                - GET
                - HEAD
              AllowedOrigins:
                - "*"

provider:
  name: aws
  region: eu-central-1 # this field is used for the assets files s3 path.
  stage: ${env:APP_ENV}
  runtime: nodejs12.x
  environment:
    NODE_ENV: ${env:APP_ENV}
  tags: # Optional service wide function tags
    usecase: test-app
    environment: ${self:provider.stage}
    domain: ${env:DEPLOY_DOMAIN}

custom:
  nuxt:
    version: app-${self:provider.stage}-v1
    bucketName: test-app-static-${self:provider.stage}
    cdnPath: https://cdn.XXX.com
  customDomain:
    domainName: ${env:DEPLOY_DOMAIN}
    certificateName: ${'*.'}${env:DEPLOY_DOMAIN}
    createRoute53Record: true
    endpointType: 'regional'

functions:
  nuxt:
    handler: lambda-handler.render
    memorySize: 512 # in MB with steps of 64
    timeout: 30 # in seconds
    events:
      - http: ANY /
      - http: ANY /{proxy+}
And my Lambda handler:
const awsServerlessExpress = require('aws-serverless-express');
const express = require('express');
const { Nuxt } = require('nuxt-start'); // eslint-disable-line
const nuxtConfig = require('./nuxt.config.js');

const app = express();
const nuxt = new Nuxt({
  ...nuxtConfig,
  dev: false,
  _start: true,
});

app.use(async (req, res) => {
  if (nuxt.ready) {
    await nuxt.ready();
  }
  nuxt.render(req, res);
});

const server = awsServerlessExpress.createServer(app, void 0, [
  'application/javascript',
  'application/json',
  'application/manifest+json',
  'application/octet-stream',
  'application/xml',
  'font/eot',
  'font/opentype',
  'font/otf',
  'image/gif',
  'image/jpeg',
  'image/png',
  'image/svg+xml',
  'image/x-icon', // for favicon
  'text/comma-separated-values',
  'text/css',
  'text/html',
  'text/javascript',
  'text/plain',
  'text/text',
  'text/xml',
  'application/rss+xml',
  'application/atom+xml',
]);

module.exports.render = (event, context) => {
  awsServerlessExpress.proxy(server, event, context);
};
Additionally, I set up a CloudFront distribution in front of my API Gateway to redirect HTTP traffic to HTTPS. So nothing really special, I guess.
Here's an example from my CloudWatch logs showing a timeout:
The Lambda durations are widely spread and I can't really explain why. I've seen durations as low as 100 ms, but they can climb all the way up to the 30 s timeout.
Is there anything wrong with my setup, or something I missed? I'm aware of the cold-start bottleneck for Lambdas, but these timed-out calls are not caused by cold starts.
I really appreciate your help!

I currently solved the issue by increasing the memory limit, first from 512 MB to 1024 MB and then, in a second step, from 1024 MB to 2048 MB.
See the CloudWatch diagram here (blue line).
I guess that my application is just too large, with too many dependencies and modules that need to be loaded when running the Lambda. However, I'm still not sure whether a memory leak or something else is causing the issue and increasing the memory limit is just hiding it. But if anyone has the same issue, increasing the memory seems to be a good temporary fix to at least keep your application available.
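To check whether the slow invocations come from Nuxt's initialization rather than the render itself, you can time the individual steps and let the durations show up in CloudWatch. A minimal sketch (timeAsync is a hypothetical helper, not part of nuxt-start or aws-serverless-express):

```javascript
// Hypothetical helper: measures how long an async step takes, so a slow
// nuxt.ready() or render becomes visible in the CloudWatch logs.
async function timeAsync(label, fn) {
  const start = Date.now();
  const result = await fn();
  const ms = Date.now() - start;
  console.log(`${label} took ${ms}ms`);
  return { result, ms };
}

// Inside the Express middleware it would be used roughly like:
//   await timeAsync('nuxt.ready', () => nuxt.ready());
```

If ready/render times stay low while total duration spikes, the time is being lost elsewhere (e.g. in module loading during init).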

Related

Why is my AWS Lambda function ending before finishing with no timeout message?

I've been using AWS Lambda and testing with SAM local for nearly a year with no major issues. However, I've written a Lambda function which modifies some files with the S3 API.
The function ends with a 502: Invalid lambda response received: Lambda returned <class 'NoneType'> instead of dict
This is before my function has had a chance to finish...
I've managed to condense the code to the following:
exports.handler = async (event, context) => {
  console.log("Goldi");
  await fish(event, context);
  console.log("Locks");
  return { statusCode: 200, body: "Finished!" };
};
No matter whether I run this in SAM Local or upload to AWS Lambda, I get this output:
START RequestId: 6a30e157-3e9b-465e-a945-3e9f7fa2cd7e Version: $LATEST
2022-01-12T18:36:27.601Z 6a30e157-3e9b-465e-a945-3e9f7fa2cd7e INFO Goldi
2022-01-12T18:36:27.603Z 6a30e157-3e9b-465e-a945-3e9f7fa2cd7e INFO Some output from fish()...
END RequestId: 6a30e157-3e9b-465e-a945-3e9f7fa2cd7e
REPORT RequestId: 6a30e157-3e9b-465e-a945-3e9f7fa2cd7e Init Duration: 0.18 ms Duration: 12600.03 ms Billed Duration: 12700 ms Memory Size: 512 MB Max Memory Used: 512 MB
Invalid lambda response received: Lambda returned <class 'NoneType'> instead of dict
2022-01-12 18:36:38 127.0.0.1 - - [12/Jan/2022 18:36:38] "POST / HTTP/1.1" 502 -
I've configured this Lambda function to have a timeout of several minutes and I do not call any functions in 'context'
I've sunk several hours into trying to figure out how a Lambda function can end without any error message (from my code) or a timeout notice.
Is this a known behaviour? Does anyone know how I can find out what causes the function to suddenly stop with no output?
What is your memory size configuration? If the timeout setting is correct, it might be memory that is the problem: the REPORT line shows Max Memory Used: 512 MB, equal to the configured Memory Size: 512 MB, which suggests the function ran out of memory and was killed before it could return.
Add async to the function definition to fix the issue:
module.exports.default = async () => {
  return {
    statusCode: 200,
  };
};
This is possible when the Lambda exits abruptly in one of the code paths, something along the lines of System.exit() in Java.
In JS, Lambdas run on an event loop to consume events.
If your fish function tears down the Lambda environment, for example by ending the runtime or closing the socket on which the response is supposed to be sent, the Lambda will finish right there without sending a response or a timeout notice.
req.on('socket', function (socket) { socket.unref() })
So, from the following tutorial covering how to get Lambdas and API Gateway working using the CDK, I managed to isolate that omitting the following line results in the 502 Bad Gateway error described, with the suggested return type. It goes in the props of new apigateway.RestApi:
defaultCorsPreflightOptions: {
  ...
  allowOrigins: ['http://localhost:3000'],
},
The OP doesn't specify his infrastructure provisioning method. If not using the CDK but CloudFormation YAML, then it's probably related to the equivalent in the expanded YAML (although the net result of the expansion is beyond my competency), i.e. method.response.header.Access-Control-Allow-Origin:
BrokerAPItest41BB435C:
  Type: AWS::ApiGateway::Resource
  Properties:
    ParentId: !GetAtt 'BrokerAPID825C3CC.RootResourceId'
    PathPart: test
    RestApiId: !Ref 'BrokerAPID825C3CC'
  Metadata:
    aws:cdk:path: BrokerAwsDeployStack/BrokerAPI/Default/test/Resource
BrokerAPItestOPTIONS843EE5C3:
  Type: AWS::ApiGateway::Method
  Properties:
    HttpMethod: OPTIONS
    ResourceId: !Ref 'BrokerAPItest41BB435C'
    RestApiId: !Ref 'BrokerAPID825C3CC'
    AuthorizationType: NONE
    Integration:
      IntegrationResponses:
        - ResponseParameters:
            method.response.header.Access-Control-Allow-Headers: '''Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token,X-Amz-User-Agent'''
            method.response.header.Access-Control-Allow-Origin: '''http://localhost:3000'''
            method.response.header.Vary: '''Origin'''
            method.response.header.Access-Control-Allow-Methods: '''OPTIONS,GET,PUT,POST,DELETE,PATCH,HEAD'''
          StatusCode: '204'
      RequestTemplates:
        application/json: '{ statusCode: 200 }'
      Type: MOCK
    MethodResponses:
      - ResponseParameters:
          method.response.header.Access-Control-Allow-Headers: true
          method.response.header.Access-Control-Allow-Origin: true
          method.response.header.Vary: true
          method.response.header.Access-Control-Allow-Methods: true
        StatusCode: '204'
  Metadata:

Connect to Amazon MSK cluster

I’m trying to setup an Amazon MSK cluster and connect to it from a lambda function. The lambda function will be a producer of messages, not a consumer.
I am using the serverless framework to provision everything and in my serverless.yml I have added the following and that seems to be working fine.
MSK:
Type: AWS::MSK::Cluster
Properties:
ClusterName: kafkaOne
KafkaVersion: 2.2.1
NumberOfBrokerNodes: 3
BrokerNodeGroupInfo:
InstanceType: kafka.t3.small
ClientSubnets:
- Ref: PrivateSubnet1
- Ref: PrivateSubnet2
- Ref: PrivateSubnet3
But when trying to connect to this cluster to actually send messages, I am unsure how to get the connection string. I presume it should be the ZookeeperConnectString?
I’m new to kafka/msk so maybe I am not seeing something obvious.
Any advice much appreciated. Cheers.
I don't know what kind of code base you are using, so I'll add my code, which I wrote in Go.
In essence, you connect to an MSK cluster the same way you would connect to a standalone Kafka instance: you use the broker endpoints for "connecting" to, or rather writing to, the MSK cluster.
I'm using the segmentio/kafka-go library. My function for sending an event to the MSK cluster looks like this:
// Add event
func addEvent(ctx context.Context, requestBody RequestBodyType) (bool, error) {
	// Prepare dialer
	dialer := &kafka.Dialer{
		Timeout:   2 * time.Second,
		DualStack: true,
	}

	brokers := []string{os.Getenv("KAFKA_BROKER_1"), os.Getenv("KAFKA_BROKER_2"), os.Getenv("KAFKA_BROKER_3"), os.Getenv("KAFKA_BROKER_4")}

	// Prepare writer config
	kafkaConfig := kafka.WriterConfig{
		Brokers:  brokers,
		Topic:    os.Getenv("KAFKA_TOPIC"),
		Balancer: &kafka.Hash{},
		Dialer:   dialer,
	}

	// Prepare writer
	w := kafka.NewWriter(kafkaConfig)

	// Convert struct to json string
	event, err := json.Marshal(requestBody)
	if err != nil {
		fmt.Println("Convert struct to json for writing to KAFKA failed")
		panic(err)
	}

	// Write message
	writeError := w.WriteMessages(ctx,
		kafka.Message{
			Key:   []byte(requestBody.Event),
			Value: []byte(event),
		},
	)
	if writeError != nil {
		fmt.Println("ERROR WRITING EVENT TO KAFKA")
		panic("could not write message " + writeError.Error())
	}
	return true, nil
}
My serverless.yml:
The code above (addEvent) belongs to functions -> postEvent in serverless.yml. If you are consuming from Kafka, then you should check functions -> processEvent. Consuming events is fairly simple, but setting everything up for producing to Kafka is crazy. We have probably been working on this for a month and a half and are still figuring out how everything should be set up. Sadly, Serverless does not do everything for you, so you will have to "click through" manually in AWS, but we compared it to other frameworks and Serverless is still the best right now.
provider:
  name: aws
  runtime: go1.x
  stage: dev
  profile: ${env:AWS_PROFILE}
  region: ${env:REGION}
  apiName: my-app-${sls:stage}
  lambdaHashingVersion: 20201221
  environment:
    ENV: ${env:ENV}
    KAFKA_TOPIC: ${env:KAFKA_TOPIC}
    KAFKA_BROKER_1: ${env:KAFKA_BROKER_1}
    KAFKA_BROKER_2: ${env:KAFKA_BROKER_2}
    KAFKA_BROKER_3: ${env:KAFKA_BROKER_3}
    KAFKA_BROKER_4: ${env:KAFKA_BROKER_4}
    KAFKA_ARN: ${env:KAFKA_ARN}
    ACCESS_CONTROL_ORIGINS: ${env:ACCESS_CONTROL_ORIGINS}
    ACCESS_CONTROL_HEADERS: ${env:ACCESS_CONTROL_HEADERS}
    ACCESS_CONTROL_METHODS: ${env:ACCESS_CONTROL_METHODS}
    BATCH_SIZE: ${env:BATCH_SIZE}
    SLACK_API_TOKEN: ${env:SLACK_API_TOKEN}
    SLACK_CHANNEL_ID: ${env:SLACK_CHANNEL_ID}
  httpApi:
    cors: true
  apiGateway:
    resourcePolicy:
      - Effect: Allow
        Action: '*'
        Resource: '*'
        Principal: '*'
  vpc:
    securityGroupIds:
      - sg-*********
    subnetIds:
      - subnet-******
      - subnet-*******

functions:
  postEvent:
    handler: bin/postEvent
    package:
      patterns:
        - bin/postEvent
    events:
      - http:
          path: event
          method: post
          cors:
            origin: ${env:ACCESS_CONTROL_ORIGINS}
            headers:
              - Content-Type
              - Content-Length
              - Accept-Encoding
              - Origin
              - Referer
              - Authorization
              - X-CSRF-Token
              - X-Amz-Date
              - X-Api-Key
              - X-Amz-Security-Token
              - X-Amz-User-Agent
            allowCredentials: false
            methods:
              - OPTIONS
              - POST
  processEvent:
    handler: bin/processEvent
    package:
      patterns:
        - bin/processEvent
    events:
      - msk:
          arn: ${env:KAFKA_ARN}
          topic: ${env:KAFKA_TOPIC}
          batchSize: ${env:BATCH_SIZE}
          startingPosition: LATEST

resources:
  Resources:
    GatewayResponseDefault4XX:
      Type: 'AWS::ApiGateway::GatewayResponse'
      Properties:
        ResponseParameters:
          gatewayresponse.header.Access-Control-Allow-Origin: "'*'"
          gatewayresponse.header.Access-Control-Allow-Headers: "'*'"
        ResponseType: DEFAULT_4XX
        RestApiId:
          Ref: 'ApiGatewayRestApi'
    myDefaultRole:
      Type: AWS::IAM::Role
      Properties:
        Path: /
        RoleName: my-app-dev-eu-serverless-lambdaRole-${sls:stage} # required if you want to use 'serverless deploy --function' later on
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
              Action: sts:AssumeRole
        # note that these rights are needed if you want your function to be able to communicate with resources within your vpc
        ManagedPolicyArns:
          - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
          - arn:aws:iam::aws:policy/service-role/AWSLambdaMSKExecutionRole
I must warn you that we spent a lot of time figuring out how to properly set up the VPC and other networking/permission stuff. My colleague will write a blog post once he arrives back from vacation. :) I hope this helps you somehow. Best of luck ;)
UPDATE
If you are using JavaScript, then you would connect to Kafka similar to this:
const { Kafka } = require('kafkajs')

const kafka = new Kafka({
  clientId: 'order-app',
  brokers: [
    'broker1:port',
    'broker2:port',
  ],
  ssl: true, // false
})
The connection string, which is called the broker bootstrap string, can be found by making an API call like aws kafka get-bootstrap-brokers --cluster-arn ClusterArn
See example here: https://docs.aws.amazon.com/msk/latest/developerguide/msk-get-bootstrap-brokers.html
Also here is a step by step walk through on how produce/consume data: https://docs.aws.amazon.com/msk/latest/developerguide/produce-consume.html
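Note that get-bootstrap-brokers returns the brokers as a single comma-separated string (e.g. in the BootstrapBrokerStringTls field), so you typically split it before handing it to a Kafka client. A minimal sketch (the broker hostnames below are made up):

```javascript
// Turn the comma-separated bootstrap string returned by
// get-bootstrap-brokers into the broker array clients like kafkajs expect.
function parseBootstrapBrokers(bootstrapString) {
  return bootstrapString
    .split(',')
    .map((b) => b.trim())
    .filter((b) => b.length > 0);
}

// Example with made-up broker hostnames:
const brokers = parseBootstrapBrokers(
  'b-1.kafkaone.example.amazonaws.com:9094, b-2.kafkaone.example.amazonaws.com:9094'
);
// brokers is now an array of host:port strings for the Kafka client.
```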

AWS Lambda Functions and AWS API Gateway(custom domain name) path redundancy/conflict

I am trying to remove the redundant path which is used in both my serverless configuration and aws api gateway mapping.
Problem:
Login serverless yaml
serverless.yml
frameworkVersion: '>1.8'
service: ${stage}-login
provider:
  name: aws
  runtime: nodejs10.x
  timeout: 12
functions:
  login:
    name: login
    handler: login.handler
    events:
      - http:
          path: login
          cors: true
          integration: lambda
          request:
            passThrough: WHEN_NO_MATCH
            template:
              application/json:
                <response omitted>
plugins:
  - serverless-offline
API mapping to my custom domain
API - login-dev
Stage - dev
Path(optional) - login
Goal:
Lambda Functions :
login - {base url}/dev/login
register - {base url}/dev/register
What happened:
login - {base url}/dev/login/login
register - {base url}/dev/register/register
Actions taken:
Tried to remove the optional Path, but it would not allow me to add another Lambda function if the path is omitted.
Tried a proxy (unsure if this works the way I understand it), but it doesn't allow it because an error shows that {login} is used in one of my Lambda function parameters.
Removed the path in the serverless YAML configuration file and replaced it with blank or / - but that's not an option for me because I need to keep the existing configuration.
Any help is very much appreciated.
Have you tried this:
functions:
  login:
    name: login
    handler: login.handler
    events:
      - http:
          path: /login
................
By adding a "/" at the start of the path.
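Another angle worth checking, since both serverless and the custom-domain mapping contribute a path segment: the doubled /login/login usually appears when the domain mapping's base path is login while the function's http event also uses path: login. If serverless-domain-manager is in use, a sketch of keeping the function path and emptying the mapping instead (domain name is hypothetical):

```yaml
custom:
  customDomain:
    domainName: example.com # hypothetical domain
    basePath: ''            # empty base path, so /login is mapped only once
    stage: dev
```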

how to be able to not return json in express in aws lambda and aws sam when using express.static

I am making a serverless website using AWS Lambda and the SAM CLI tool from AWS (mostly just to test making real requests to the API). I want to serve assets with the express.static function, but I have a problem: when I use it, I get an error about it not returning JSON, and the error says that it needs to do that to work. I have two functions for now: views (to serve the EJS files) and assets (to serve static files like CSS and frontend JS). Here is my template.yml:
# This is the SAM template that represents the architecture of your serverless application
# https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-template-basics.html

# The AWSTemplateFormatVersion identifies the capabilities of the template
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/format-version-structure.html
AWSTemplateFormatVersion: 2010-09-09
Description: >-
  [Description goes here]

# Transform section specifies one or more macros that AWS CloudFormation uses to process your template
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/transform-section-structure.html
Transform:
  - AWS::Serverless-2016-10-31

# Resources declares the AWS resources that you want to include in the stack
# https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resources-section-structure.html
Resources:
  assets:
    Type: AWS::Serverless::Function
    Properties:
      Handler: amplify/backend/function/assets/src/index.handler
      Runtime: nodejs14.x
      MemorySize: 512
      Timeout: 100
      Description: serves the assets
      Events:
        Api:
          Type: Api
          Properties:
            Path: /assets/{folder}/{file}
            Method: GET
  views:
    Type: AWS::Serverless::Function
    Properties:
      Handler: amplify/backend/function/views/src/index.handler
      Runtime: nodejs14.x
      MemorySize: 512
      Timeout: 100
      Description: serves the views
      Events:
        Api:
          Type: Api
          Properties:
            Path: /
            Method: GET

Outputs:
  WebEndpoint:
    Description: "API Gateway endpoint URL for Prod stage"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/"
And my code for the assets function:
index.js:
const awsServerlessExpress = require('aws-serverless-express');
const app = require('./app');
const server = awsServerlessExpress.createServer(app);

exports.handler = (event, context) => {
  console.log(`EVENT: ${JSON.stringify(event)}`);
  return awsServerlessExpress.proxy(server, event, context, 'PROMISE').promise;
};
app.js:
const express = require('express'),
  app = express()

app.use(express.json())
app.use('/assets', express.static('assets'))

app.listen(3000);

module.exports = app
Is there some config option for the template.yml that I should know or do I have to change my code?
I made my own solution with fs in Node.js. I put a simple piece of code like this in the views function (note: res.status(404), not res.sendStatus(404), because sendStatus already sends the response and chaining .send after it would throw):
app.get('/assets/*', (req, res) => {
  if (!fs.existsSync(__dirname + `/${req.url}`)) {
    res.status(404).send(`CANNOT GET ${req.url}`);
    return;
  }
  res.send(fs.readFileSync(__dirname + `/${req.url}`, 'utf-8'));
})
I also edited the template.yml so that the API with the path /assets/{folder}/{file} points to the views function, deleted the assets function, and moved the assets folder with all the assets into the views function's directory.
EDIT:
For some reason, the Content-Type HTTP header was almost always being set to text/html, but changing the code to this fixes it:
const path = require('path'); // needed for path.basename below

app.get('/assets/*', (req, res) => {
  if (!fs.existsSync(`${__dirname}${req.url}`)) {
    res.status(404).send(`CANNOT GET ${req.url}`);
    return;
  }
  res.contentType(path.basename(req.url))
  res.send(fs.readFileSync(__dirname + `${req.url}`, 'utf-8'));
})
All this does is use the contentType function on the res object: you pass in the name of the file and it automatically finds the right content type.
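Under the hood, Express resolves the type from the file extension. A stripped-down version of that lookup (a sketch with a hand-rolled table, not Express's actual mime database) looks like:

```javascript
// Minimal extension-to-MIME lookup, similar in spirit to what
// res.contentType() does via Express's mime database.
const MIME_TYPES = {
  '.css': 'text/css',
  '.js': 'application/javascript',
  '.json': 'application/json',
  '.png': 'image/png',
  '.svg': 'image/svg+xml',
};

function lookupMime(filename) {
  const dot = filename.lastIndexOf('.');
  const ext = dot === -1 ? '' : filename.slice(dot).toLowerCase();
  // Fall back to octet-stream for unknown extensions.
  return MIME_TYPES[ext] || 'application/octet-stream';
}
```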

Invoke Lambda function by SNS message on local serverless-offline environment

I'm using Serverless Framework & serverless-offline plugin to develop serverless web application locally, and trying to test the following procedure.
User pushes a button, which will call API
API will invoke Lambda function and it will publish a message to SNS topic
Several lambda functions subscribing the SNS topic will be invoked
serverless.yml
plugins:
  - serverless-offline
  - serverless-offline-sns

functions:
  publisher:
    handler: publisher.main
    events:
      - http:
          path: publish
          method: post
          cors: true
          authorizer: aws_iam
  subscriber:
    handler: subscriber.main
    events:
      - sns: test-topic
I tested it on AWS and it worked, but I don't know how to test it locally.
serverless-offline-sns does not support subscription by lambda for now.
serverless-offline-sns supports http, https, and sqs subscriptions. email, email-json, sms, application, and lambda protocols are not supported at this time.
https://www.npmjs.com/package/serverless-offline-sns
I think this is a very common use case for serverless & event-driven architecture. How do you test this on local environment?
I was able to simulate this offline recently using the following code/config
serverless.yml
functions:
  ########## SNS SUBSCRIPTIONS ##########
  newUser:
    memorySize: 128
    timeout: 120
    handler: src/sns-subscribers/newUser.handler
    name: sns-newUser-dev
    events:
      - sns:
          arn: arn:aws:sns:ap-southeast-2:13XXXXXXXXXX:new-user-dev

plugins:
  - serverless-offline-sns
  - serverless-offline

custom:
  serverless-offline-sns:
    port: 4002 # a free port for the sns server to run on
    debug: true
    # host: 0.0.0.0 # Optional, defaults to 127.0.0.1 if not provided to serverless-offline
    # sns-endpoint: http://127.0.0.1:4002 # Optional. Only if you want to use a custom endpoint
    accountId: 13XXXXXXXXXX # Optional
Here's the code that triggers my offline lambda
trigger.js
const AWS = require('aws-sdk');

const sns = new AWS.SNS({
  endpoint: 'http://127.0.0.1:4002',
  region: 'ap-southeast-2',
});

sns.publish(
  {
    Message: 'new user!',
    MessageStructure: 'json',
    TopicArn: `arn:aws:sns:ap-southeast-2:13XXXXXXXXXX:new-user-dev`,
  },
  () => console.log('new user published'),
);
Run the trigger normally
node trigger.js
Note:
In your example, the way you declared the SNS subscription is not yet supported, AFAIK:
events:
  - sns: test-topic # try using an ARN and moving this to the next line
You can check this GitHub issue for more info and updates.