I have a serverless.yaml that used to work before; after updating to a newer version of SLS (2.72.0) I started getting this warning:
Cannot resolve serverless.yaml: Variables resolution errored with:
- Cannot resolve variable at "custom.S3_BUCKET_NAME": Value not found at "self" source
my custom section looks like this:
custom:
  S3_BUCKET_NAME: ${self:service}-data-${opt:stage, self:provider.stage}
  s3Sync:
    - bucketName: ${self:custom.S3_BUCKET_NAME}-website
      localDir: ./dist
      deleteRemoved: true
How can I fix this warning?
There was a slight change in variables resolution, and in your case the best way to resolve it is to use the following syntax:
custom:
  S3_BUCKET_NAME: ${self:service}-data-${sls:stage}
  s3Sync:
    - bucketName: ${self:custom.S3_BUCKET_NAME}-website
      localDir: ./dist
      deleteRemoved: true
for resolving the stage. Alternatively, you can keep the old syntax but provide an explicit fallback value for the stage:
custom:
  S3_BUCKET_NAME: ${self:service}-data-${opt:stage, self:provider.stage, 'dev'}
  s3Sync:
    - bucketName: ${self:custom.S3_BUCKET_NAME}-website
      localDir: ./dist
      deleteRemoved: true
I would recommend going with the ${sls:stage} version. Changing the way you write the stage from:
self:provider.stage
to:
${sls:stage}
should do the trick!
You can find the updated documentation at https://www.serverless.com/framework/docs/providers/aws/guide/variables, or run serverless print for a more detailed report of the problem.
I have a group of lambdas and keep most of their settings in Parameter Store parameters. I have the sub-trees separated by environment.
Example
/prod/type/app1/parameter1
/prod/type/app1/parameter2
/prod/type/app2/parameter1
/dev/type/app1/parameter1
/dev/type/app1/parameter2
/dev/type/app2/parameter1
I would like to reference the path within the environment variables of a template.yml for a Lambda function using the SAM CLI.
I am trying to use !Sub, but I am not getting the results I was hoping for.
Example:
Environment:
  Variables:
    ENV: "DEV"
    SSM_PS_APP1_PATH: !Sub "/${ENV}/type/app1/"
The result I get is:
/ENV/type/app1
My question is: is it possible to reference another variable within the environment-variable declaration using !Sub?
Sadly, it's not possible. You would have to make ENV a CloudFormation parameter as well:
Parameters:
  ENV:
    Type: String
    Default: DEV
and then:
Environment:
  Variables:
    ENV: !Ref ENV
    SSM_PS_APP1_PATH: !Sub "/${ENV}/type/app1/"
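As a side note, the list form of !Sub also takes an explicit substitution map, which achieves the same result with the mapping spelled out (the stage key below is an arbitrary name I chose for illustration):

```yaml
Environment:
  Variables:
    ENV: !Ref ENV
    # list-form Fn::Sub: first element is the template string,
    # second is a map of substitution variables
    SSM_PS_APP1_PATH: !Sub ["/${stage}/type/app1/", { stage: !Ref ENV }]
```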
One of the biggest challenges I've faced using Serverless is deploying AWS Lambda functions in a micro-service fashion (each lambda individually; I've already tried individual packages, Webpack, and so on...).
I'm currently breaking my serverless app into multiple sub-serverless files, and I'm trying to reference a main serverless config file. I'd like to inherit entire object trees so I don't have to retype them one by one (in addition, if there's a change, I can propagate it throughout all the lambdas).
Here's my current structure:
| serverless.yml
| lambda/
|   /planning
|     index.ts
|     serverless.yml
|   /generator
|     index.ts
|     serverless.yml
|   /createStudents
|     index.ts
|     serverless.yml
Content of the main serverless file (Omitted for brevity):
## https://serverless.com/framework/docs/providers/aws/guide/serverless.yml/
service: backend-appsync

provider:
  name: aws
  stage: ${opt:stage, 'dev'}
  runtime: nodejs10.x
  region: us-east-2
  ## https://serverless.com/framework/docs/providers/aws/guide/iam/
  ## https://serverless.com/blog/abcs-of-iam-permissions/
  iamRoleStatements:
    - Effect: Allow
      Action:
        - "dynamodb:BatchGetItem"
        - "dynamodb:BatchWriteItem"
        - "dynamodb:ConditionCheckItem"
        - "dynamodb:GetItem"
        - "dynamodb:DeleteItem"
        - "dynamodb:PutItem"
        - "dynamodb:Query"
      Resource: "arn:aws:dynamodb:us-east-2:747936726382:table/SchonDB"
I'd like to read the entire provider object and insert it into the individual serverless.yml file.
Example: /lambda/planning/serverless.yml
service: "planning"

provider: ${file(../../serverless.yml):provider}

functions:
  planning:
    handler: ./index.handler
    name: ${self:provider.stage}-planning
    description: Handles the Planning of every teacher.
    memorySize: 128
I get the following error:
Serverless Error ---------------------------------------
The specified provider "[object Object]" does not exist.
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: win32
Node Version: 12.14.1
Framework Version: 1.61.2
Plugin Version: 3.2.7
SDK Version: 2.2.1
Components Core Version: 1.1.2
Components CLI Version: 1.4.0
I thought I could reference the entire property. Is this possible? What am I doing wrong?
Thanks :)
Serverless goes nuts when files are imported from outside the project directory.
To solve this problem, you can now use projectDir:
service: "planning"
projectDir: ../..

provider: ${file(../../serverless.yml):provider}

functions:
  planning:
    handler: ./index.handler
    name: ${self:provider.stage}-planning
    description: Handles the Planning of every teacher.
    memorySize: 128
I have tried to upload my application using a Serverless/Lambda function on AWS, but I got this issue:
An error occurred: AppLambdaFunction - Unzipped size must be smaller than 262144000 bytes (Service: AWSLambdaInternal; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: 8ea0d887-5743-4db1-96cd-6c5efa57b081).
What is the best way to resolve it?
Here are my dependencies:
"dependencies": {
  "ethereumjs-tx": "^1.3.7",
  "aws-sdk": "^2.4.52",
  "body-parser": "^1.18.3",
  "compression": "^1.7.4",
  "consign": "^0.1.6",
  "cors": "^2.8.5",
  "express": "^4.16.4",
  "helmet": "^3.16.0",
  "moment": "^2.24.0",
  "openzeppelin-solidity": "^2.3.0",
  "serverless": "^1.48.2",
  "serverless-http": "^1.9.1",
  "serverless-offline": "^4.9.4",
  "truffle": "^5.1.9",
  "truffle-hdwallet-provider": "^1.0.17",
  "web3": "^1.2.5-rc.0"
},
Serverless.yml:
provider:
  name: aws
  runtime: nodejs8.10
  stage: v1
  region: us-east-1
  timeout: 30
  memorySize: 512

package:
  excludeDevDependencies: true
  exclude:
    - .git/**
    - .vscode/**
    - venv/**

functions:
  app:
    handler: handler.run
    events:
      - http:
          path: /
          method: ANY
          cors: true
      - http:
          path: /{proxy+}
          method: ANY
          cors: true

plugins:
  - serverless-offline
Use the exclude directive in your serverless.yml file. In the case of Python, I've been using it as follows:
package:
  exclude:
    - node_modules/**
    - venv/**
The build process will exclude them from the build before sending it to AWS.
Tip I got in this issue at Github. The documentation for this directive is detailed here.
You can use module bundlers to package the code.
Using module bundlers such as webpack
Consider using plugins like serverless-webpack. The serverless-webpack plugin uses webpack to build the project, and it will only include the bare minimum files required to run your application. It will not include the entire node_modules directory, so your deployment package will be smaller.
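As a minimal sketch of what that setup might look like in serverless.yml (the webpack.config.js path and the option values are assumptions; adjust them for your project):

```yaml
plugins:
  - serverless-webpack

custom:
  webpack:
    webpackConfig: webpack.config.js  # assumed location of your webpack config
    includeModules: true              # bundle externals from node_modules

package:
  individually: true  # package each function separately for smaller artifacts
```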
A note about using Lambda layers
As others have mentioned, you can use layers and move some of the libraries and code into a layer. Layers are mainly used to share code between functions. Note that the unzipped deployed package, including layers, still cannot exceed 250 MB.
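As a rough sketch of the layer approach (the layer directory name is an assumption; Node.js layers expect a nodejs/node_modules structure inside the layer path):

```yaml
layers:
  deps:
    path: layer  # assumed directory containing nodejs/node_modules with the heavy libraries

functions:
  app:
    handler: handler.run
    layers:
      # CloudFormation ref generated from the layer name: TitleCase + "LambdaLayer"
      - { Ref: DepsLambdaLayer }
```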
Hope this helps.
References:
https://github.com/serverless-heaven/serverless-webpack
https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html#configuration-layers-path
I've had success resolving this error message using the serverless-esbuild plugin, and configuring it as follows in serverless.yml:
service: service_name
frameworkVersion: '3'

provider:
  name: aws
  runtime: nodejs12.x

plugins:
  - serverless-esbuild

custom:
  esbuild:
    bundle: true
    minify: false
    sourcemap: true
    exclude: 'aws-sdk'
    target: node14
    define:
      'require.resolve': undefined
    platform: node
    concurrency: 10
You can load larger packages into AWS Lambda indirectly using S3:
1. Load your package into a bucket/key on S3.
2. In the Lambda console choose Function Code -> Code Entry Type -> Upload a file from S3.
See my answer here
You can deploy a Lambda function using a Docker image, which bypasses this problem, allowing a function with its dependencies to be as large as 10 GB.
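A minimal sketch of a container-image function in serverless.yml, assuming a Dockerfile in the project root (the image and function names are placeholders):

```yaml
provider:
  name: aws
  ecr:
    images:
      appimage:
        path: ./  # directory containing the Dockerfile; built and pushed to ECR on deploy

functions:
  app:
    image:
      name: appimage  # reference to the image defined under provider.ecr.images
```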
Adding exclude under package is deprecated. We can use patterns to remove node_modules.
Example removing files in serverless.yml:
# ...remaining props
package:
  patterns:
    - '!.git/**'
    - '!test/**'
    - '!e2e/**'
    - '!src/**'
    - '!node_modules/**'
See the docs: Deprecation for exclude, and Patterns.
Recently I faced the same issue: my total package size was more than 40 MB, and it was also including the venv (Python virtual environment) folder that resides in the project directory. I excluded it, the build size got reduced to 16 MB, and the project was deployed successfully. I added the following to serverless.yaml:
package:
  patterns:
    - '!node_modules/**'
    - '!venv/**'
    - '!apienv/**'
    - '!__pycache__/**'
I have the following project tree, where the nodejs folder is a Lambda layer defined in the following serverless.yaml:
service: aws-nodejs # NOTE: update this with your service name

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev

plugins:
  - serverless-offline

layers:
  layer1:
    path: nodejs # required, path to layer contents on disk
    name: ${self:provider.stage}-layerName # optional, Deployed Lambda layer name

functions:
  hello:
    handler: handler.hello
    layers:
      - { Ref: Layer1LambdaLayer }
    events:
      - http:
          path: /dev
          method: get
The layer1 only contains the uuid package.
So when I try to run the lambda locally using the serverless-offline plugin, it says it can't find the module uuid.
But when I deploy the code to AWS, it runs like a charm.
Is there any way to get Lambda layers running locally for testing purposes, and to speed up development?
Or is there any way to dynamically set the node_modules path to point to the layer folder during development, and switch it back to the proper one once I push to production?
OK, after many trials, I figured out a working solution.
I added an npm run command which exports a temporary node_modules path to the list of lookup paths:
"scripts": {
  "offline": "export NODE_PATH=\"${PWD}/nodejs/node_modules\" && serverless offline"
},
This way, Node can look up the node modules inside the subfolders.
I got around this by running serverless-offline in a container and copying my layers into the /opt/ directory with gulp. I set a gulp watch to monitor any layer changes and to copy them to the /opt/ directory.
I use layers in serverless-offline by installing the layer from the local file system as a dev dependency:
npm i <local_path_to_my_layer_package> --save-dev
BTW this issue was fixed in sls 1.49.0.
Just run:
sudo npm i serverless
Then you should specify a package include in the layer section of serverless.yml:
service: aws-nodejs # NOTE: update this with your service name

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev

plugins:
  - serverless-offline

layers:
  layer1:
    path: nodejs # required, path to layer contents on disk
    package:
      include:
        - node_modules/**
    name: ${self:provider.stage}-layerName # optional, Deployed Lambda layer name

functions:
  hello:
    handler: handler.hello
    layers:
      - { Ref: Layer1LambdaLayer }
    events:
      - http:
          path: /dev
          method: get
Tested on nodejs10.x runtime
We have projects created with AWS CodeStar. These were working fine, but since today we have been facing the following issue:
Unable to upload artifact None referenced by CodeUri parameter of GetCompanyRecords resource.
zip does not support timestamps before 1980
Now, when I remove the aws-sdk module it works fine again, but when I add it back the build fails. I am pretty worried about this. Here is my lambda function:
GetCompanyRecords:
  Type: AWS::Serverless::Function
  Properties:
    Handler: index.handler
    Runtime: nodejs6.10
    Role:
      Fn::ImportValue:
        !Join ['-', [!Ref 'ProjectId', !Ref 'AWS::Region', 'LambdaTrustRole']]
    Timeout: 10
    Events:
      PostEvent:
        Type: Api
        Properties:
          Path: /getCompanyRecords
          Method: post
Thanks in advance.
At the moment, the following patch fixed my issue: I added the following lines to buildspec.yml after 'npm install':
- ls $CODEBUILD_SRC_DIR
- find $CODEBUILD_SRC_DIR/node_modules -mtime +10950 -exec touch {} \;
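For context, a hedged sketch of where those lines sit in a buildspec.yml (the phase names are standard CodeBuild; the rest of the file is assumed):

```yaml
version: 0.2
phases:
  install:
    commands:
      - npm install
      - ls $CODEBUILD_SRC_DIR
      # reset any pre-1980 timestamps that break zip
      - find $CODEBUILD_SRC_DIR/node_modules -mtime +10950 -exec touch {} \;
```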
As I was having the issue just by adding aws-sdk, I want AWS to fix this issue. I am really disappointed that aws-sdk is not working with AWS.
You have forgotten to initialize the codebase with git :). It means it's trying to create a zip from git's HEAD but failing:
rm -rf .git
git init
git add .
git commit -am 'First commit'