AWS SAM Rust - Deploy multiple lambda functions from same crate

I'm trying to deploy a basic serverless application that contains two Rust lambda functions. I'm using SAM to deploy the application.
The issue is how to get SAM to pick up the correct "bootstrap" file. Because both functions are built from the same CodeUri path, SAM does not execute both Make commands; instead, it just copies the output of Function1 to Function2 (this seems like a design flaw in SAM?). Thus, both lambdas currently get deployed with the same code.
My build directory is
myapp/
- src/
  - bin/
    - function1.rs (note: function1 & 2 depend on lib.rs)
    - function2.rs
  - lib.rs
- Cargo.toml
- Makefile
- template.yaml
The template.yaml file:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Globals:
  Function:
    Handler: bootstrap.is.the.handler
    Runtime: provided.al2
    Architectures:
      - x86_64
Resources:
  Function1:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
  Function2:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
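(For context, sam build only runs those make targets when each function opts into the Makefile builder via Metadata; the real template presumably declares this and it was trimmed above. A sketch of what that block looks like:)
  Function1:
    Type: AWS::Serverless::Function
    Metadata:
      BuildMethod: makefile
    Properties:
      CodeUri: .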
The Makefile is:
build-Function1:
	cargo lambda build
	cp ./target/lambda/function1/bootstrap $(ARTIFACTS_DIR)

build-Function2: # This never gets run!
	cargo lambda build
	cp ./target/lambda/function2/bootstrap $(ARTIFACTS_DIR)
Commands to build/deploy
sam build
sam deploy
I'm open to other build structures. I've also tried structuring the project using Rust workspaces, but because SAM copies the build source to a separate directory, I cannot find a way to add module dependencies.

After much struggle, I have come up with a hacky solution that I'm sure cannot be the recommended way.
Use Rust workspaces, so the folder structure is:
root/
  common/
    lib.rs
    Cargo.toml
  function1/
    main.rs
    Cargo.toml
    Makefile
  function2/
    main.rs
    Cargo.toml
    Makefile
  Cargo.toml
  template.yaml
root/Cargo.toml:
[workspace]
members = [
    "common",
    "function1",
    "function2",
]
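For completeness, each function crate pulls in the shared code via a path dependency; a minimal sketch of function1/Cargo.toml (crate name, edition, and the exact dependency list are assumptions):
[package]
name = "function1"
version = "0.1.0"
edition = "2021"

[dependencies]
# shared code from the workspace's common crate
common = { path = "../common" }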
Set the template.yaml file to use a different CodeUri for each function:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Globals:
  Function:
    Handler: bootstrap.is.the.handler
    Runtime: provided.al2
    Architectures:
      - x86_64
Resources:
  Function1:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: function1
  Function2:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: function2
(The hack step) In each Makefile, cd back into the source dir so it builds with all the source. (This has the added benefit of sharing the build cache between targets.)
build-Function1:
	cd $(PWD); cargo lambda build --release
	cp $(PWD)/target/lambda/function1/bootstrap $(ARTIFACTS_DIR)
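The Makefile in function2 mirrors this; a sketch, assuming the second binary is named function2:
build-Function2:
	cd $(PWD); cargo lambda build --release
	cp $(PWD)/target/lambda/function2/bootstrap $(ARTIFACTS_DIR)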
This solution is compatible with cargo lambda watch and sam build/sam deploy commands.
Binary size is large, since every lambda duplicates the base library; I'm uncertain whether this is avoidable with Rust.
I have yet to try it with deployment from CI servers.
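On the binary-size point, duplicating the shared library in each binary is inherent to static linking, but a size-oriented release profile at the workspace root usually shrinks things noticeably; a sketch (these settings are assumptions to tune, not taken from the question):
[profile.release]
opt-level = "z"     # optimize for size rather than speed
lto = true          # link-time optimization across crates
codegen-units = 1   # slower builds, smaller code
strip = true        # strip symbols from the binary (Rust 1.59+)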

Related

Lambda function package still large despite using a layer for dependencies

I have a Python Lambda and since I started using AWS X-Ray the package size has ballooned from 445KB to 9.5MB.
To address this and speed up deployments of my code, I have packaged my requirements separately and added a layer to my template. The documentation suggests that this approach should work.
Packaging dependencies in a layer reduces the size of the deployment package that you upload when you modify your code.
pip install --target ../package/python -r requirements.txt
Resources:
  ...
  ProxyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Architectures:
        - x86_64
      CodeUri: proxy/
      Handler: app.lambda_handler
      Layers:
        - !Ref ProxyFunctionLibraries
      Role: !GetAtt ProxyFunctionRole.Arn
      Runtime: python3.8
      Tracing: Active
  ProxyFunctionLibraries:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: proxy-function-lib
      Description: Dependencies for the ProxyFunction.
      ContentUri: package/.
      CompatibleRuntimes:
        - python3.8
However, this doesn't seem to have prevented the Lambda from still packaging everything in the top layer, and every time I deploy the package is still 9.5MB. The new layer for some reason is 11MB in size, but that is only being deployed when a change is made.
How can I reduce the size of the Lambda function package?
Actually the solution here was quite simple, although not obvious to non-Lambda experts.
As described in the question, the first step was to build the package library.
pip install --target ../package/python -r requirements.txt
However, when building the Lambda using sam build -u, the same 'requirements.txt' file is used, and the required dependencies were being installed again, this time as part of the app.
So all I had to do was remove the requirements that I want packaged in the separate layer and rebuild. It does mean that I have to maintain two 'requirements.txt' files, but that is entirely manageable.
I've opened an issue and hopefully AWS will update their documentation.
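In other words, anything that should live in the layer is removed from the function's own requirements file; a sketch of the resulting split (package names and paths are illustrative, not taken from the question):
# package/requirements.txt  -> installed into the layer via pip install --target
aws-xray-sdk
requests

# proxy/requirements.txt    -> picked up by sam build for the function itself
# keep only what must ship inside the function package (often nothing)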
I am struggling with the same problem and solved it differently. Leaving this answer as a note to future question-seekers.
My current lambda structure is the following:
├── events
│   └── event.json
├── ingress
│   ├── app.py            # lambda code
│   └── __init__.py
├── __init__.py
├── lib_layer             # contains self written helpers and python dependencies
│   ├── helper.py
│   └── requirements.txt
├── samconfig.toml
├── template.yaml
└── tests
    ├── ...
In my template.yaml I have the following code snippets:
SQSIngestion:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: ingress/
    Handler: app.lambda_handler
    Runtime: python3.9
    Layers:
      - !Ref PythonLibLayer
    Architectures:
      - x86_64

PythonLibLayer:
  Type: AWS::Serverless::LayerVersion
  Properties:
    ContentUri: lib_layer
    CompatibleRuntimes:
      - python3.9
  Metadata:
    BuildMethod: python3.9
Since the Lambda function already has the layer containing all dependencies plus my helpers attached, the requirements.txt in /ingress can be omitted; the function nevertheless works when invoked in AWS.
Calling sam build will automatically build the dependency layer and skip the function folder when installing dependencies.
As a tip, if you are working with VS Code, add the following line to your settings.json to fix Pylance's import errors for self-written packages:
{
  "python.analysis.extraPaths": ["lib_layer"]
}
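Because the python3.9 layer build places the lib_layer contents under python/ in the layer artifact (which lands on sys.path at /opt/python at runtime), the function can import the shared helper directly; a minimal sketch of ingress/app.py (the helper function name is an assumption):
# ingress/app.py -- sketch; helper.process is a hypothetical function
import helper  # resolved from the attached layer (/opt/python) at runtime


def lambda_handler(event, context):
    # delegate to the shared helper shipped in lib_layer
    result = helper.process(event)
    return {"statusCode": 200, "body": result}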

Why does AWS sam package cause - Unable to import module '' No module named '' - during sam deploy but sam build does not? [closed]

The project I am currently working on creates a lambda layer which contains a file called app.py; within this file is a function named lambda_handler which is intended to be used as the Handler for whatever lambda function includes the layer. The SAM template I use to do this looks as follows:
Resources:
  LamLayer:
    Type: AWS::Serverless::LayerVersion
    LayerName: !Join
      - ''
      - - 'LamLayer'
        - !Ref AWS::StackName
    Properties:
      ContentUri: ./lam_layer
      CompatibleRuntimes:
        - python3.8
    Metadata:
      BuildMethod: python3.8

  LamFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./lam_function
      Runtime: python3.8
      Handler: app.lambda_handler
      Layers:
        - !Ref LamLayer
      Timeout: 60
      AutoPublishAlias: live
Now although the Handler: app.lambda_handler is not present in the lambda function itself, it is present in the included layer.
Now, after creating this setup, I tested it by calling sam build; sam deploy, and it successfully deployed and worked. When I called the LamFunction, it found the Handler and ran it.
The problem arises when I push my changes to the CodePipeline we have set up. The build and deploy succeeded, but when I now call the LamFunction it throws the following error:
Unable to import module 'app': No module named 'app'
After debugging this for a while I seem to have narrowed down the problem to the difference in the way I was building vs. how the pipeline is building the project.
I called: sam build; sam deploy
Whereas the pipeline calls: sam build; sam package --s3-bucket codepipeline-eu-central-1-XXXXXXXXXX --output-template-file packaged-template.yml and then uses the standard pipeline deploy stage to deploy from the S3 bucket.
But although I think this difference is causing the problem, I am not sure what the underlying reason is or what I need to change to fix it.
---- EDIT ----
Here is the buildspec.yml in case this is the culprit:
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.8
  build:
    commands:
      - sam build
      - sam package --s3-bucket codepipeline-eu-central-1-XXXXXXXXXX --output-template-file packaged-template.yml
artifacts:
  files:
    - packaged-template.yml
In the end I managed to trace the issue back to the CodeBuild image used in the pipeline. Due to an oversight during the creation of the pipeline, I had used a managed image running CodeBuild standard 1.0, which does not support building nested stacks/templates. Since the stack described above was being built as a nested stack of a larger template, it was not built, which caused the error with the layer.
After changing to CodeBuild standard 3.0, the stack built and packaged as expected.
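For reference, the change amounts to pointing the CodeBuild project at a newer managed image; a CloudFormation-style sketch of the relevant environment block (values are illustrative):
# Part of an AWS::CodeBuild::Project resource (illustrative values)
Environment:
  Type: LINUX_CONTAINER
  ComputeType: BUILD_GENERAL1_SMALL
  Image: aws/codebuild/standard:3.0   # was aws/codebuild/standard:1.0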

Referencing an entire property in multiple serverless files - [object Object] does not exist

One of the biggest challenges that I've faced using serverless is in deploying AWS Lambda functions in a micro-service fashion (Each lambda individually - I've already tried individual packages, Webpack, and so on...).
I'm currently breaking my serverless app into multiple sub-serverless files, and I'm trying to reference a main serverless config file. I'd like to inherit entire object trees so I don't have to retype them one by one (in addition, if there's a change, I can propagate it throughout all the lambdas).
Here's my current structure:
| serverless.yml
| lambda/
| /planning
| index.ts
| serverless.yml
| /generator
| index.ts
| serverless.yml
| /createStudents
| index.ts
| serverless.yml
Content of the main serverless file (Omitted for brevity):
## https://serverless.com/framework/docs/providers/aws/guide/serverless.yml/
service: backend-appsync

provider:
  name: aws
  stage: ${opt:stage, 'dev'}
  runtime: nodejs10.x
  region: us-east-2

  ## https://serverless.com/framework/docs/providers/aws/guide/iam/
  ## https://serverless.com/blog/abcs-of-iam-permissions/
  iamRoleStatements:
    - Effect: Allow
      Action:
        - "dynamodb:BatchGetItem"
        - "dynamodb:BatchWriteItem"
        - "dynamodb:ConditionCheckItem"
        - "dynamodb:GetItem"
        - "dynamodb:DeleteItem"
        - "dynamodb:PutItem"
        - "dynamodb:Query"
      Resource: "arn:aws:dynamodb:us-east-2:747936726382:table/SchonDB"
I'd like to read the entire provider object and insert it into the individual serverless.yml file.
Example: /lambda/planning/serverless.yml
service: "planning"
provider: ${file(../../serverless.yml):provider}
functions:
planning:
handler: ./index.handler
name: ${self:provider.stage}-planning
description: Handles the Planning of every teacher.
memorySize: 128
I get the following error:
Serverless Error ---------------------------------------

  The specified provider "[object Object]" does not exist.

Get Support --------------------------------------------
  Docs:   docs.serverless.com
  Bugs:   github.com/serverless/serverless/issues
  Issues: forum.serverless.com

Your Environment Information ---------------------------
  Operating System:         win32
  Node Version:             12.14.1
  Framework Version:        1.61.2
  Plugin Version:           3.2.7
  SDK Version:              2.2.1
  Components Core Version:  1.1.2
  Components CLI Version:   1.4.0
I thought I could reference the entire property. Is this possible? What am I doing wrong?
Thanks :)
Serverless goes nuts when files are imported from outside the project directory.
To solve this problem, you can now use projectDir:
service: "planning"
projectDir: ../..
provider: ${file(../../serverless.yml):provider}
functions:
planning:
handler: ./index.handler
name: ${self:provider.stage}-planning
description: Handles the Planning of every teacher.
memorySize: 128

Serverless Offline undefined module when loaded from lambda layer

I have a project tree where the nodejs folder is a lambda layer, defined in the following serverless.yaml:
service: aws-nodejs # NOTE: update this with your service name

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev

plugins:
  - serverless-offline

layers:
  layer1:
    path: nodejs # required, path to layer contents on disk
    name: ${self:provider.stage}-layerName # optional, Deployed Lambda layer name

functions:
  hello:
    handler: handler.hello
    layers:
      - {Ref: Layer1LambdaLayer}
    events:
      - http:
          path: /dev
          method: get
The layer1 only contains the uuid package.
So when I try to run the lambda locally using the serverless-offline plugin, it says it can't find the module uuid.
But when I deploy the code to AWS, it runs like a charm.
Is there any way we can get lambda layers running locally, for testing purposes and for speeding up development?
Or is there any way I can dynamically set the node_modules path to point to the layer folder during development, and change it to the proper path once I need to push to production?
OK, after many trials, I figured out a working solution.
I added an npm run command which exports a temporary node_modules path to the list of paths:
"scripts": {
"offline": "export NODE_PATH=\"${PWD}/nodejs/node_modules\" && serverless offline"
},
This way, Node can look up the node modules inside the subfolders.
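With that script in place, local development is then just (a usage sketch):
npm run offline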
I got around this by running serverless-offline in a container and copying my layers into the /opt/ directory with gulp. I set a gulp watch to monitor any layer changes and to copy them to the /opt/ directory.
I use layers in serverless-offline by installing the layer from the local file system as a dev dependency:
npm i <local_path_to_my_layer_package> --save-dev
BTW this issue was fixed in sls 1.49.0.
Just run:
sudo npm i serverless
Then you should specify a package include in the layers section of serverless.yml:
service: aws-nodejs # NOTE: update this with your service name

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev

plugins:
  - serverless-offline

layers:
  layer1:
    path: nodejs # required, path to layer contents on disk
    package:
      include:
        - node_modules/**
    name: ${self:provider.stage}-layerName # optional, Deployed Lambda layer name

functions:
  hello:
    handler: handler.hello
    layers:
      - {Ref: Layer1LambdaLayer}
    events:
      - http:
          path: /dev
          method: get
Tested on nodejs10.x runtime
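A quick usage sketch once the include is in place (global install shown; a local devDependency works as well):
npm i -g serverless   # pick up a version >= 1.49.0, per the note above
serverless offline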

zip does not support timestamps before 1980

We have projects created with AWS CodeStar. These were working fine, but since today we are facing the following issue:
Unable to upload artifact None referenced by CodeUri parameter of GetCompanyRecords resource.
zip does not support timestamps before 1980
Now, when I remove the aws-sdk module it works fine again, but when I add it back the build fails. I am pretty worried about this. Here is my lambda function:
GetCompanyRecords:
  Type: AWS::Serverless::Function
  Properties:
    Handler: index.handler
    Runtime: nodejs6.10
    Role:
      Fn::ImportValue:
        !Join ['-', [!Ref 'ProjectId', !Ref 'AWS::Region', 'LambdaTrustRole']]
    Timeout: 10
    Events:
      PostEvent:
        Type: Api
        Properties:
          Path: /getCompanyRecords
          Method: post
thanks in advance
At the moment, the following patch fixed my issue:
I added the following lines to buildspec.yml after 'npm install':
- ls $CODEBUILD_SRC_DIR
- find $CODEBUILD_SRC_DIR/node_modules -mtime +10950 -exec touch {} \;
As I was having the issue just by adding aws-sdk, I want AWS to fix this. I am really disappointed that aws-sdk is not working with AWS.
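For context, a sketch of where those lines sit in buildspec.yml (the surrounding phases are illustrative):
version: 0.2
phases:
  build:
    commands:
      - npm install
      # added lines: reset pre-1980 timestamps so zip does not reject them
      - ls $CODEBUILD_SRC_DIR
      - find $CODEBUILD_SRC_DIR/node_modules -mtime +10950 -exec touch {} \;
      # ... remaining package/deploy commands unchanged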
You have forgotten to initialize the codebase with git :). It means it's trying to create a zip from git's HEAD but failing:
rm -rf .git
git init
git add .
git commit -am 'First commit'