I have multiple Lambdas in my AWS serverless project, and they all share the same CodeUri, which is a folder in my project. For example, they all point to src/.
In the basic usage, or at least the way I use it, sam build creates a folder for each Lambda, and all of these folders contain the same code. Then, when running sam deploy, each zip is uploaded to S3. The zips are all identical, so uploading them all wastes a lot of time.
Is there an option "to tell" sam to upload it only once?
I saw that I can build the zip manually and then upload it to S3. How can I then set that location as the CodeUri of the Lambdas? Should I do it with an external parameter, or is there a dedicated way to signal it?
Thank you
After considerable effort, building on David Conde's idea, I managed to find a solution. What we want to achieve is to upload the Lambda's zip (with all of its dependencies) once and point all the Lambdas at this zip.
The process is separated into a couple of steps, which I'll try to describe in as much detail as I can. Some of them might not be relevant to your exact case.
General idea
The general idea is to create a layer that contains the code we want to upload once. Then, for each Lambda, we specify that it uses this layer. Each Lambda's handler points to somewhere in the source directory, so it effectively runs code "from the layer". But we must still attach some code/zip to each Lambda, even if it is not what actually runs. To keep that cheap, we attach an "empty" zip.
Build folder
First, create a build folder where we are going to work, for example: mkdir -p .build/
In addition, define the following variables:
s3_bucket="aaaaaaaa"
s3_prefix="aaaaaaaa"
s3_lambda_zip_suffix="$s3_prefix/lambda.zip"
s3_lambda_zip="s3://$s3_bucket/$s3_lambda_zip_suffix"
Creating the source zip
When a Lambda's zip is extracted, its content is written to the working directory. When a layer is extracted, it goes into /opt, as documented by AWS. Because our Lambda needs to find our source code, which is effectively a dependency, it must find it under /opt. To achieve this, the code needs to end up in /opt/python, which we get by placing everything under a python/ prefix inside the zip.
First, we create the python folder and install the dependencies into it:
mkdir -p .build/lambda_zip/python
pip3 install -q --target .build/lambda_zip/python -r requirements.txt
Then we zip it:
pushd .build/lambda_zip/ > /dev/null
zip --quiet -r ./lambda.zip ./python
popd > /dev/null
Now, you probably want to add your src directory as well:
zip --quiet -r .build/lambda_zip/lambda.zip src
Uploading to S3
Now, we have to upload the zip to S3 for our lambdas to load it:
aws s3 cp ".build/lambda_zip/lambda.zip" "$s3_lambda_zip"
Adding layer to template.yaml
Next, we add the layer to our template.yaml file. You can copy the following snippet after reading the AWS documentation:
Parameters:
  LambdaCodeUriBucket:
    Type: String
  LambdaCodeUriKey:
    Type: String
Resources:
  OnUpHealthLayer:
    Type: AWS::Lambda::LayerVersion
    Properties:
      CompatibleRuntimes:
        - python3.8
      Content:
        S3Bucket: !Sub '${LambdaCodeUriBucket}'
        S3Key: !Sub '${LambdaCodeUriKey}'
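For completeness, a function that consumes this layer might look like the following sketch under the same Resources section. The function name and handler are assumptions; the handler module path must match where your code actually lands under /opt/python at runtime:

```yaml
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .                 # the "empty" zip described below
      Handler: src.app.handler   # assumed module path inside the layer
      Runtime: python3.8
      Layers:
        - !Ref OnUpHealthLayer
```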
Creating an empty zip for the lambdas
CloudFormation must upload a zip for each Lambda, so we want it to create an empty one. However, sam build scans the requirements.txt file in the same directory as template.yaml, and since we want it to package something empty, the build must happen in another folder.
To solve this, I copy template.yaml to an empty directory and add an empty requirements.txt file there. After that, we can run sam build and sam deploy as usual. Notice that we must pass LambdaCodeUriBucket and LambdaCodeUriKey:
#create "empty" environment for the template to be built in
mkdir -p .build/empty_template
cp template.yaml .build/empty_template
pushd .build/empty_template > /dev/null
touch requirements.txt
sam build --template template.yaml
sam deploy \
    --template-file .aws-sam/build/template.yaml \
    --capabilities "CAPABILITY_IAM" \
    --region $region \
    --s3-bucket $s3_bucket \
    --s3-prefix $s3_prefix \
    --stack-name $stack_name \
    --parameter-overrides LambdaCodeUriBucket=$s3_bucket LambdaCodeUriKey=$s3_lambda_zip_suffix
popd > /dev/null
Notice that although we copied template.yaml and ran sam build on the copy, the real zip file is already in S3.
One important thing you must do is specify . as the CodeUri of your Lambdas, because they now use the "empty" zip.
In the future, we will be able to write:
InlineCode: |
  def handler(event, context):
      pass
and not specify the . folder. But currently SAM doesn't support inline code for python3.8, hence we use .. Either way, you would still have to move the code to a separate folder so that its dependencies are not packaged again.
Related
I am trying to use CloudFormation package to include the Glue script and extra Python files from the repo so they are uploaded to S3 during the package step.
For the Glue script it's straightforward; I can use:
Properties:
  Command:
    Name: pythonshell  # glueetl = Spark, pythonshell = Python shell
    PythonVersion: 3
    ScriptLocation: "../glue/test.py"
But how can I do the same for the extra Python files? The following does not work; it seems I could upload the file using the Include transform, but I'm not sure how to reference it back in --extra-py-files:
DefaultArguments:
  "--extra-py-files":
    - "../glue/test2.py"
Sadly, you can't do this. For Glue, package only supports:
Command.ScriptLocation property for the AWS::Glue::Job resource
Packaging DefaultArguments is not supported. This means that you have to do it "manually" (e.g. with a bash script) outside of CloudFormation.
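A minimal sketch of such a script, using a hypothetical bucket and prefix (the actual upload line is commented out because it needs AWS credentials), could build the S3 URI list for --extra-py-files like this:

```shell
# Hypothetical names: my-glue-bucket, glue/extra. The loop collects the
# S3 URIs that would be passed to the job's --extra-py-files argument.
bucket="my-glue-bucket"
prefix="glue/extra"
extra_py_uris=""
for f in test2.py; do
  # aws s3 cp "../glue/$f" "s3://$bucket/$prefix/$f"  # real upload step
  extra_py_uris="$extra_py_uris,s3://$bucket/$prefix/$f"
done
extra_py_uris="${extra_py_uris#,}"   # strip the leading comma
echo "$extra_py_uris"
```

The resulting value would then be wired into DefaultArguments (for example via a template parameter) instead of a local path.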
.. aaaand me again :)
This time with a very interesting problem.
Again an AWS Lambda function: Node.js 12, JavaScript, Ubuntu 18.04 for local development, AWS CLI/AWS SAM/Docker/IntelliJ. Everything works perfectly locally, and it's time to deploy.
So I set up an AWS account for tests, created and assigned an access key/secret, and finally tried to deploy.
Almost at the end, an error popped up, aborting the deployment.
I'm showing the SAM CLI output from a terminal, but the same happens with IntelliJ.
(Of course I mask/change some names.)
From a terminal I go to my local sandbox with the project and then:
$ sam deploy --guided
Configuring SAM deploy
======================
Looking for config file [samconfig.toml] : Not found
Setting default arguments for 'sam deploy'
=========================================
Stack Name [sam-app]: MyActualProjectName
AWS Region [us-east-1]: us-east-2
#Shows you resources changes to be deployed and require a 'Y' to initiate deploy
Confirm changes before deploy [y/N]: y
#SAM needs permission to be able to create roles to connect to the resources in your template
Allow SAM CLI IAM role creation [Y/n]: y
Save arguments to configuration file [Y/n]: y
SAM configuration file [samconfig.toml]: y
SAM configuration environment [default]:
Looking for resources needed for deployment: Not found.
Creating the required resources...
Successfully created!
Managed S3 bucket: aws-sam-cli-managed-default-samclisourcebucket-7qo1hy7mdu9z
A different default S3 bucket can be set in samconfig.toml
Saved arguments to config file
Running 'sam deploy' for future deployments will use the parameters saved above.
The above parameters can be changed by modifying samconfig.toml
Learn more about samconfig.toml syntax at
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-config.html
Error: Unable to upload artifact MyFunctionName referenced by CodeUri parameter of MyFunctionName resource.
ZIP does not support timestamps before 1980
$
I spent quite some time looking around for this problem, but I found only some old threads.
In theory this problem was solved in 2018... but probably some npm libraries I had to use contain something old. How in the world do I fix this?
In one thread I found a kind of workaround. In the buildspec.yml file, somebody suggested adding, AFTER the npm install:
ls $CODEBUILD_SRC_DIR
find $CODEBUILD_SRC_DIR/node_modules -mtime +10950 -exec touch {} \;
Basically the idea is to touch all the files installed by npm install, but the error still happens.
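One way to see whether the workaround even has a chance is to check locally which files actually carry pre-1980 modification times. A self-contained sketch (the demo directory and file names are made up):

```shell
# Create a file with a 1979 mtime to simulate the problem, list all
# offenders, then bump them to the current time.
rm -rf /tmp/ts_demo
mkdir -p /tmp/ts_demo
touch -t 197901010000 /tmp/ts_demo/old_file.js   # pre-1980 timestamp
touch /tmp/ts_demo/new_file.js
find /tmp/ts_demo -type f ! -newermt '1980-01-01' -print   # lists old_file.js
find /tmp/ts_demo -type f ! -newermt '1980-01-01' -exec touch {} \;
```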
This is my buildspec.yml file after the modification:
version: 0.2
phases:
  install:
    commands:
      # Install all dependencies (including dependencies for running tests)
      - npm install
      - ls $CODEBUILD_SRC_DIR
      - find $CODEBUILD_SRC_DIR/node_modules -mtime +10950 -exec touch {} \;
  pre_build:
    commands:
      # Discover and run unit tests in the '__tests__' directory
      - npm run test
      # Remove all unit tests to reduce the size of the package that will be ultimately uploaded to Lambda
      - rm -rf ./__tests__
      # Remove all dependencies not needed for the Lambda deployment package (the packages from devDependencies in package.json)
      - npm prune --production
  build:
    commands:
      # Use AWS SAM to package the application by using AWS CloudFormation
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template template-export.yml
artifacts:
  type: zip
  files:
    - template-export.yml
I will continue to search, but I wonder if somebody here has had this kind of problem and has some suggestions or a methodology for solving it.
Many, many thanks!
Steve
I have an Angular client and a Node.js server deployed into one Elastic Beanstalk environment.
The structure is that I put the Angular client files in the 'html' folder, and the proxy is defined in the .ebextensions folder:
-html
-other serverapp folder
-other serverapp folder
-.ebextensions
....
-package.json
-server.js
Every time I do a release, I build the Angular app, put it into the html folder of the Node app, zip it, and upload it to Elastic Beanstalk.
Now I want to move on to CI/CD. Basically, I want to automate the above steps: use two sources (the Angular and Node apps), run the Angular build, put its output into the html folder of the Node app, and generate a single output artifact.
I've got to the stage where a separate pipeline works for each app. I'm not very familiar with AWS yet; I just have a vague idea that I might need to use AWS Lambda.
Any help would be really appreciated.
The output artifact your CodeBuild job creates can be thought of as a directory location that you ask CodeBuild to zip as the artifact. You can use regular UNIX commands to manipulate this directory before the artifact is packaged. The following buildspec.yml is an example:
version: 0.2
phases:
  build:
    commands:
      # build commands
      #- command
  post_build:
    commands:
      - mkdir /tmp/html
      - cp -R ./ /tmp/html
artifacts:
  files:
    - '**/*'
  base-directory: /tmp/html
Buildspec reference: https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-syntax
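Applied to the Angular + Node setup from the question, the post_build phase could stage the client build into the server's html folder before packaging. This is only a sketch; the paths and the client directory name below are assumptions about the project layout:

```yaml
version: 0.2
phases:
  install:
    commands:
      - npm install                     # server dependencies
      - npm install --prefix client     # assumed Angular app location
  build:
    commands:
      - npm run build --prefix client   # assumed to produce client/dist
  post_build:
    commands:
      - mkdir -p html
      - cp -R client/dist/* html/       # stage client build into html/
artifacts:
  files:
    - '**/*'
```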
I am a .NET developer using a .NET Core 2.x application; I build and upload the release code to an S3 bucket, and later that code will be deployed to an EC2 instance.
I am new to CI/CD on AWS and in the learning phase.
To create CI/CD for my sample project, I went through some AWS tutorials and was able to create the following buildspec.yml file. Using that file I am able to run a successful build.
The problem comes in the UPLOAD_ARTIFACTS phase: I am unable to understand how to create a zip file that will be uploaded to the S3 bucket specified in the build project.
My buildspec.yml contains the following code. Please help me find what is wrong or what I am missing.
version: 0.2
phases:
  build:
    commands:
      - dotnet restore
      - dotnet build
artifacts:
  files:
    - target/cicdrepo.zip
    - .\bin\Debug\netcoreapp2.1\*
I think I have to add a post_build phase and some commands that will generate the zip file, but I don't know the commands.
Following is the output image from the build logs.
Your file is fine. All you need to do is create an S3 bucket, then configure your CodeBuild project to zip (or not) your artifacts for you and store them in S3.
This is the step you need to configure:
Edit:
If you want all your files to be copied to the root of your zip file, you can use:
artifacts:
  files:
    - ...
  discard-paths: yes
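For the .NET build above, one common shape (a sketch only; the output path and configuration name are assumptions) is to add a post_build phase that publishes the app and then point artifacts at the publish directory, letting CodeBuild zip it:

```yaml
version: 0.2
phases:
  build:
    commands:
      - dotnet restore
      - dotnet build
  post_build:
    commands:
      # Publish a deployable output folder that CodeBuild will zip
      - dotnet publish -c Release -o ./publish
artifacts:
  files:
    - '**/*'
  base-directory: ./publish
```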
I am developing a Lambda function which uses the ResumeParser library written in Python 2.7. When I deploy this function, including the library, to AWS, it throws the following error:
Unzipped size must be smaller than 262144000 bytes
Perhaps you did not exclude development packages, which made your package grow that big.
In my case (for Node.js), I was missing the following in my serverless.yml:
package:
  exclude:
    - node_modules/**
    - venv/**
See if there is something similar for Python or your case.
This is a hard limit which cannot be changed:
AWS Lambda Limit Errors
Functions that exceed any of the limits listed in the previous limits tables will fail with an exceeded limits exception. These limits are fixed and cannot be changed at this time. For example, if you receive the exception CodeStorageExceededException or an error message similar to "Code storage limit exceeded" from AWS Lambda, you need to reduce the size of your code storage.
You need to reduce the size of your package. If you have large binaries, place them in S3 and download them at bootstrap. Likewise for dependencies: you can pip install or easy_install them from an S3 location, which will be faster than pulling from the public PyPI repos.
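Before uploading, it can help to check the unzipped size locally against the 262144000-byte limit. A small sketch with a throwaway directory standing in for your package:

```shell
# Simulate a package directory and compare its size to the Lambda limit.
# /tmp/pkg_demo and blob.bin are illustrative stand-ins for your package.
rm -rf /tmp/pkg_demo
mkdir -p /tmp/pkg_demo
dd if=/dev/zero of=/tmp/pkg_demo/blob.bin bs=1024 count=10 2>/dev/null
size=$(du -sb /tmp/pkg_demo | cut -f1)
limit=262144000
[ "$size" -lt "$limit" ] && echo "OK: $size bytes" || echo "Too big: $size bytes"
```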
The best solution to this problem is to deploy your Lambda function using a Docker container that you've built and pushed to AWS ECR. Lambda container images have a limit of 10 GB.
Here's an example using Python flavored AWS CDK
from aws_cdk import aws_lambda as _lambda

self.lambda_from_image = _lambda.DockerImageFunction(
    scope=self,
    id="LambdaImageExample",
    function_name="LambdaImageExample",
    code=_lambda.DockerImageCode.from_image_asset(
        directory="lambda_funcs/LambdaImageExample"
    ),
)
An example Dockerfile contained in the directory lambda_funcs/LambdaImageExample alongside my lambda_func.py and requirements.txt:
FROM amazon/aws-lambda-python:latest
LABEL maintainer="Wesley Cheek"
RUN yum update -y && \
    yum install -y python3 python3-dev python3-pip gcc && \
    rm -Rf /var/cache/yum
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY lambda_func.py ./
CMD ["lambda_func.handler"]
Run cdk deploy and the Lambda function will be automagically bundled into an image along with its dependencies specified in requirements.txt, pushed to an AWS ECR repository, and deployed.
This Medium post was my main inspiration
Edit:
(More details about this solution can be found in my Dev.to post here)
A workaround that worked for me:
Install pyminifier:
pip install pyminifier
Go to the library folder that you want to zip. In my case I wanted to zip the site-packages folder in my virtual env, so I created a site-packages-min folder at the same level as site-packages. Run the following shell script to minify the Python files and recreate the identical structure in the site-packages-min folder. Then zip and upload these files to S3.
#!/bin/bash
for f in $(find site-packages -name '*.py')
do
    ori=$f
    res=${f/site-packages/site-packages-min}
    filename=$(echo $res | awk -F"/" '{print $NF}')
    echo "$filename"
    path=${res%$filename}
    mkdir -p $path
    touch $res
    pyminifier --destdir=$path $ori >> $res || cp $ori $res
done
HTH
As stated by Greg Wozniak, you may just have included useless directories like venv and node_modules.
package.exclude is deprecated and was removed in Serverless Framework 4; you should now use package.patterns instead:
package:
  patterns:
    - '!node_modules/**'
    - '!venv/**'
In case you're using CloudFormation: in your template YAML file, make sure your CodeUri property includes only your necessary code files and does not contain things like the .aws-sam directory (which is big).