How to deploy to AWS Lambda quickly, alternative to the manual upload?

I am getting started with writing an Alexa Skill. My skill requires uploading a .zip file, as it includes the alexa-sdk dependency stored in the node_modules folder.
Is there a more efficient way to upload a new version of my Lambda function and files from my local machine without zipping and manually uploading the same files over and over again? Something like git push, or a different way to deploy from the terminal with a single command?

You can use the update-function-code CLI command.
Note that this operation can only be used on an existing Lambda function and cannot be used to update the function configuration.
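For example, assuming your deployment package is function.zip and your function is named my-function (both names are placeholders):
aws lambda update-function-code \
  --function-name my-function \
  --zip-file fileb://function.zip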

To add to Khalid's answer, I recently created this rudimentary batch script to ease a particular Lambda function's deployment. This example is for a NodeJS Lambda function which has its dependencies located in the node_modules folder.
Prerequisites:
Have 7zip installed. Found here
Have it available in CMD (on the system PATH variable), as explained here
Have your local aws-cli set up with valid credentials that can upload to AWS Lambda.
# Stage only the files the function needs
rm -rf target
mkdir -p target
cp -r index.js package.json node_modules/ target/
# Zip the staged contents
pushd target
7z a zip_file_name.zip -r
popd
# Push the new package to the existing Lambda function
aws lambda update-function-code \
  --function-name YOUR_FUNCTION_NAME \
  --zip-file fileb://target/zip_file_name.zip \
  --region us-east-1

My one-liner for bash:
zip -u f.zip f.py; aws lambda update-function-code --zip-file fileb://f.zip --function-name f

Related

How to do CloudFormation validation for templates in a directory?

I have some templates in my GitLab repo that need to be validated before being transferred to an S3 bucket. How can I have my GitLab CI run CloudFormation validation for the whole templates directory instead of naming each template individually?
I was thinking of something like
for file in /templates/*.yml; do aws cloudformation validate-template --template-body file://$file > validation.log; done
Any ideas on how to do this in the GitLab CI?
You can simplify this with the find/exec command. Also, redirecting logs to a file might be a bad idea, since it's easier to see them in the job output.
Using a shell runner (or Docker runner) with the aws CLI installed, you can use the following script block:
script:
- find path_to_templates -type f -name "*.yml" -exec aws cloudformation validate-template --template-body file://{} \;
Replace path_to_templates with your actual directory in the repo. The command will be executed for every file ending in .yml in that directory.
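One caveat if you want the job to fail when a template is invalid: find's exit status does not necessarily reflect failures of the command run by -exec. A plain loop along the lines of the one in the question makes the failure explicit (the path is illustrative):
# Fail the job as soon as one template does not validate
for file in path_to_templates/*.yml; do
  aws cloudformation validate-template --template-body "file://$file" || exit 1
done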

How can I update my AWS Lambda function from VSCode?

So I have an AWS Lambda function written in NodeJS, but I am tired of coding in the AWS Console or having to manually zip my code in VSCode and upload it in the AWS Console.
I know that I can update my function with aws lambda update-function-code --function-name myFunction --zip-file "fileb://myZipFile". But how can I zip it and launch this command every time I save my work in VSCode?
Also, I am on Windows.
You can't do this without some additional work.
A few options are:
use the Run on Save VS Code extension and configure a custom command to run when a file is saved
create a SAM project and install the AWS Toolkit for VS Code extension to provide deployment assistance
create a package.json that includes a script for zip/deployment and use the NPM extension for VS Code to execute the deploy script
build a CI/CD solution: use VS Code to commit and push your code, then the pipeline takes over and deploys it
use a shell script, or a Makefile with a target, that zips and deploys and then simply execute it, manually or otherwise, in the VS Code in-built terminal
I use a script like the one below and run it when I need to update.
echo "Building zip file"
zip -rq testfunction.zip testfunctionfolder/
echo "update Lambda function"
FUNCTION_ARN=$(aws lambda update-function-code \
--function-name testfunction \
--zip-file fileb://testfunction.zip \
--query 'FunctionArn' \
--output text)
echo "Lambda function updated with ARN ${FUNCTION_ARN}"

How to deploy files from s3 to ec2 instance based on S3 event

I am working on a pipeline where I push some artifacts into S3. I have written a shell script which downloads the folder and copies each file to its desired location on a WildFly server (EC2 instance).
#!/bin/bash
mkdir /home/ec2-user/test-temp
cd /home/ec2-user/test-temp
aws s3 cp s3://deploy-artifacts/test-APP test-APP --recursive --region us-east-1
aws s3 cp s3://deploy-artifacts/test-COMMON test-COMMON --recursive --region us-east-1
cd /home/ec2-user/
sudo mkdir -p /opt/wildfly/modules/system/layers/base/psg/common
sudo cp -rf ./test-temp/test-COMMON/standalone/configuration/standalone.xml /opt/wildfly/standalone/configuration
sudo cp -rf ./test-temp/test-COMMON/modules/system/layers/base/com/microsoft/* /opt/wildfly/modules/system/layers/base/com/microsoft/
sudo cp -rf ./test-temp/test-COMMON/modules/system/layers/base/com/mysql /opt/wildfly/modules/system/layers/base/com/
sudo cp -rf ./test-temp/test-COMMON/modules/system/layers/base/psg/common/* /opt/wildfly/modules/system/layers/base/psg/common
sudo cp -rf ./test-temp/test-APP/standalone/deployments/HS.war /opt/wildfly/standalone/deployments
sudo cp -rf ./test-temp/test-APP/bin/resource /opt/wildfly/bin/resource
sudo cp -rf ./test-temp/test-APP/modules/system/layers/base/psg/* /opt/wildfly/modules/system/layers/base/psg/
sudo cp -rf ./test-temp/test-APP/standalone/deployments/* /opt/wildfly/standalone/deployments/
sudo chown -R wildfly:wildfly /opt/wildfly/
sudo service wildfly start
But every time I push new artifacts into S3, I have to go to the server and run this script manually. Is there a way to automate it? I was reading about Lambda, but once Lambda knows about the change in S3, where am I going to define my shell script to run?
Any guidance will be helpful.
To trigger the Lambda function when a file is uploaded to the S3 bucket, you have to set up an event notification on the bucket.
Steps for setting up the S3 event notification:
1. Your Lambda function and S3 bucket should be in the same region.
2. Go to the Properties tab of the S3 bucket.
3. Open the Events section and provide values for event types such as Put or Copy.
4. Specify the Lambda ARN in the Send to option.
Now create a Lambda function and add the S3 bucket as a trigger. Just make sure your Lambda IAM policy is set up properly.
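If you prefer to configure the notification from the CLI instead of the console, a rough sketch looks like this (the Lambda ARN and account ID are placeholders; the bucket name is taken from the question):
# Attach an ObjectCreated notification to the bucket that invokes the Lambda function
aws s3api put-bucket-notification-configuration \
  --bucket deploy-artifacts \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [{
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:111122223333:function:deploy-artifacts-handler",
      "Events": ["s3:ObjectCreated:*"]
    }]
  }'
Note that S3 also needs permission to invoke the function; adding the bucket as a trigger in the Lambda console (as described above) sets this up for you.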

How to increase the maximum size of the AWS lambda deployment package (RequestEntityTooLargeException)?

I upload my Lambda function sources from AWS CodeBuild. My Python script uses NLTK, so it needs a lot of data. My .zip package is too big and a RequestEntityTooLargeException occurs. I want to know how to increase the size of the deployment package sent via the UpdateFunctionCode command.
I use AWS CodeBuild to transform the source from a GitHub repository to AWS Lambda. Here is the associated buildspec file:
version: 0.2
phases:
  install:
    commands:
      - echo "install step"
      - apt-get update
      - apt-get install zip -y
      - apt-get install python3-pip -y
      - pip install --upgrade pip
      - pip install --upgrade awscli
      # Define directories
      - export HOME_DIR=`pwd`
      - export NLTK_DATA=$HOME_DIR/nltk_data
  pre_build:
    commands:
      - echo "pre_build step"
      - cd $HOME_DIR
      - virtualenv venv
      - . venv/bin/activate
      # Install modules
      - pip install -U requests
      # NLTK download
      - pip install -U nltk
      - python -m nltk.downloader -d $NLTK_DATA wordnet stopwords punkt
      - pip freeze > requirements.txt
  build:
    commands:
      - echo 'build step'
      - cd $HOME_DIR
      - mv $VIRTUAL_ENV/lib/python3.6/site-packages/* .
      - sudo zip -r9 algo.zip .
      - aws s3 cp --recursive --acl public-read ./ s3://hilightalgo/
      - aws lambda update-function-code --function-name arn:aws:lambda:eu-west-3:671560023774:function:LaunchHilight --zip-file fileb://algo.zip
      - aws lambda update-function-configuration --function-name arn:aws:lambda:eu-west-3:671560023774:function:LaunchHilight --environment 'Variables={NLTK_DATA=/var/task/nltk_data}'
  post_build:
    commands:
      - echo "post_build step"
When I launch the pipeline, I get a RequestEntityTooLargeException because there is too much data in my .zip package. See the build logs below:
[Container] 2019/02/11 10:48:35 Running command aws lambda update-function-code --function-name arn:aws:lambda:eu-west-3:671560023774:function:LaunchHilight --zip-file fileb://algo.zip
An error occurred (RequestEntityTooLargeException) when calling the UpdateFunctionCode operation: Request must be smaller than 69905067 bytes for the UpdateFunctionCode operation
[Container] 2019/02/11 10:48:37 Command did not exit successfully aws lambda update-function-code --function-name arn:aws:lambda:eu-west-3:671560023774:function:LaunchHilight --zip-file fileb://algo.zip exit status 255
[Container] 2019/02/11 10:48:37 Phase complete: BUILD Success: false
[Container] 2019/02/11 10:48:37 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: aws lambda update-function-code --function-name arn:aws:lambda:eu-west-3:671560023774:function:LaunchHilight --zip-file fileb://algo.zip. Reason: exit status 255
Everything works correctly when I reduce the NLTK data to download (I tried with only the stopwords and wordnet packages).
Does anyone have an idea to solve this "size limit problem"?
You cannot increase the deployment package size for Lambda. AWS Lambda limits are described in the AWS Lambda developer guide. More information on how those limits work can be seen here. In essence, your unzipped package size has to be less than 250MB (262144000 bytes).
PS: Using layers doesn't solve the sizing problem, though it helps with management and maybe faster cold starts. The package size includes the layers (see Lambda layers):
A function can use up to 5 layers at a time. The total unzipped size of the function and all layers can't exceed the unzipped deployment package size limit of 250 MB.
Update Dec 2020: As per the AWS blog, and as pointed out by user jonnocraig in this answer, you can overcome these restrictions if you build a container for your application and run it on Lambda.
If anyone stumbles across this issue post December 2020, there's been a major update from AWS to support Lambda functions as container images (up to 10GB!!). More info here
AWS Lambda functions can mount EFS. You can load libraries or packages that are larger than the 250 MB package deployment size limit of AWS Lambda using EFS.
Detailed steps on how to set it up are here:
https://aws.amazon.com/blogs/aws/new-a-shared-file-system-for-your-lambda-functions/
On a high level, the changes include:
Create and set up an EFS file system
Use EFS with the Lambda function
Install the pip dependencies inside the EFS access point
Set the PYTHONPATH environment variable to tell Python where to look for the dependencies, as sketched after this list
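A minimal sketch of that last step, assuming the access point is mounted at /mnt/efs and the dependencies were pip-installed into /mnt/efs/lib (both paths and the function name are illustrative):
# Point Python at the packages that live on the EFS mount
aws lambda update-function-configuration \
  --function-name my-function \
  --environment 'Variables={PYTHONPATH=/mnt/efs/lib}'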
The following are hard limits for Lambda (may change in future):
3 MB for in-console editing
50 MB zipped as package for upload
250 MB when unzipped including layers
A sensible way to get around this is to mount EFS from your Lambda. This can be useful not only for loading libraries, but also for other storage.
Have a look through these blogs:
https://aws.amazon.com/blogs/compute/using-amazon-efs-for-aws-lambda-in-your-serverless-applications/
https://aws.amazon.com/blogs/aws/new-a-shared-file-system-for-your-lambda-functions/
I have not tried this myself, but the folks at Zappa describe a trick that might help. Quoting from https://blog.zappa.io/posts/slim-handler:
Zappa zips up the large application and sends the project zip file up to S3. Second, Zappa creates a very minimal slim handler that just contains Zappa and its dependencies and sends that to Lambda.
When the slim handler is called on a cold start, it downloads the large project zip from S3 and unzips it in Lambda’s shared /tmp space. All subsequent calls to that warm Lambda share the /tmp space and have access to the project files; so it is possible for the file to only download once if the Lambda stays warm.
This way you should get 500MB in /tmp.
Update:
I have used the following code in the Lambdas of a couple of projects; it is based on the method Zappa uses, but it can be used directly.
# Based on the code in https://github.com/Miserlou/Zappa/blob/master/zappa/handler.py
# We need to load the layer from an S3 bucket into /tmp, bypassing the normal
# AWS layer mechanism, since it is too large; the AWS unzipped Lambda function size
# including layers is 250MB.
import io
import os
import sys
import zipfile

import boto3

def load_remote_project_archive(remote_bucket, remote_file, layer_name):
    # Puts the project files from S3 in /tmp and adds them to the path
    project_folder = '/tmp/{0!s}'.format(layer_name)
    if not os.path.isdir(project_folder):
        # The project folder doesn't exist in this cold lambda, get it from S3
        boto_session = boto3.Session()
        # Download zip file from S3
        s3 = boto_session.resource('s3')
        archive_on_s3 = s3.Object(remote_bucket, remote_file).get()
        # Unzip from stream
        with io.BytesIO(archive_on_s3["Body"].read()) as zf:
            # Rewind the file
            zf.seek(0)
            # Read the file as a zipfile and process the members
            with zipfile.ZipFile(zf, mode='r') as zipf:
                zipf.extractall(project_folder)
    # Add to project path
    sys.path.insert(0, project_folder)
    return True
This can then be called as follows (I pass the bucket with the layer to the lambda function via an env variable):
load_remote_project_archive(os.environ['MY_ADDITIONAL_LAYERS_BUCKET'], 'lambda_my_extra_layer.zip', 'lambda_my_extra_layer')
At the time I wrote this code, /tmp was also capped, I think at 250MB, but the call to zipf.extractall(project_folder) above can be replaced with extracting directly to memory: unzipped_in_memory = {name: zipf.read(name) for name in zipf.namelist()}
which I did for some machine learning models. I guess the answer of #rahul is more versatile for this, though.
From the AWS documentation:
If your deployment package is larger than 50 MB, we recommend uploading your function code and dependencies to an Amazon S3 bucket. You can create a deployment package and upload the .zip file to your Amazon S3 bucket in the AWS Region where you want to create a Lambda function. When you create your Lambda function, specify the S3 bucket name and object key name on the Lambda console, or using the AWS Command Line Interface (AWS CLI).
You can use the AWS CLI to deploy the package, and instead of using the --zip-file argument to pass the deployment package, you can specify the object in the S3 bucket with the --code parameter. Ex:
aws lambda create-function --function-name my_function --code S3Bucket=my_bucket,S3Key=my_file
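If the function already exists, the same S3-hosted package works with update-function-code (bucket and key names are placeholders, as above):
aws lambda update-function-code \
  --function-name my_function \
  --s3-bucket my_bucket \
  --s3-key my_file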
This AWS Data Wrangler zip file from GitHub (https://github.com/awslabs/aws-data-wrangler/releases) includes many other libraries like pandas and pymysql. In my case it was the only layer I needed, since it has so much other stuff. It might work for some people.
You can try the workaround used in the awesome serverless-python-requirements plugin.
The ideal solution is to use Lambda layers if that solves the purpose. If the total dependency size is greater than 250MB, then you can sideload less frequently used dependencies from an S3 bucket at run time by utilizing the 512 MB provided in the /tmp directory. The zipped dependencies are stored in S3, and Lambda can fetch the files from S3 during initialisation. Unzip the dependency package and add the path to the sys path.
Please note that the Python dependencies need to be built on Amazon Linux, which is the operating system for Lambda containers. I used an EC2 instance to create the zip package.
You can check the code used in serverless-python-requirements here.
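As a rough alternative to a dedicated EC2 instance, the dependencies can also be built inside an Amazon Linux based Docker container, for example one of the community lambci/lambda build images; a hedged sketch (image tag, requirements file, and paths are illustrative):
# Build Linux-compatible packages in a Lambda-like container, then zip them with the handler
docker run --rm -v "$PWD":/var/task lambci/lambda:build-python3.7 \
  pip install -r requirements.txt -t package/
cp handler.py package/
cd package && zip -r ../function.zip . && cd ..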
Before 2021, the best way was to deploy the jar file to S3 and create the AWS Lambda function with it.
Since 2021, AWS Lambda has supported container images. Read here: https://aws.amazon.com/de/blogs/aws/new-for-aws-lambda-container-image-support/
So from now on, you should probably consider packaging and deploying your Lambda functions as container images (up to 10 GB).
The tip for using a large Lambda project on AWS is to use a Docker image stored in the AWS ECR service instead of a ZIP file. You can use a Docker image of up to 10 GB.
The AWS documentation provides an example to help you here:
Create an image from an AWS base image for Lambda
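A hedged sketch of the deployment flow once such an image is built locally (account ID, region, repository, image tag, and function name are all placeholders):
# Push the image to ECR and point the Lambda function at it
aws ecr get-login-password --region eu-west-3 | docker login --username AWS --password-stdin 111122223333.dkr.ecr.eu-west-3.amazonaws.com
docker tag my-lambda:latest 111122223333.dkr.ecr.eu-west-3.amazonaws.com/my-lambda:latest
docker push 111122223333.dkr.ecr.eu-west-3.amazonaws.com/my-lambda:latest
aws lambda update-function-code \
  --function-name my-function \
  --image-uri 111122223333.dkr.ecr.eu-west-3.amazonaws.com/my-lambda:latest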
I may be late to the party, but you can use a Docker image to get around the Lambda layer constraint. This can be done using serverless stack development or just through the console.
You cannot increase the package size, but you can use AWS Lambda layers to store some application dependencies.
https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html#configuration-layers-path
Before layers existed, a commonly used pattern to work around this limitation was to download huge dependencies from S3.
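A hedged sketch of publishing a dependency layer and attaching it to a function (layer name, runtime, and ARNs are placeholders):
# For a Python runtime, the layer zip should contain the packages under a python/ directory
aws lambda publish-layer-version \
  --layer-name my-dependencies \
  --zip-file fileb://layer.zip \
  --compatible-runtimes python3.8
aws lambda update-function-configuration \
  --function-name my-function \
  --layers arn:aws:lambda:eu-west-3:111122223333:layer:my-dependencies:1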

AWS lambda function with python "errorMessage": "Unable to import module 'index'"

I am trying to make a POST call from a Lambda function, but I am not able to run the code in the AWS console, although it is working properly on my system.
You need to install the dependencies in the folder where you have index.py then you need to zip the contents of the folder and upload the zip file to AWS Lambda.
Please note that you need to zip the contents of the folder; do not zip the folder itself.
On Windows, you can install the packages into the folder using the command below:
pip install package-name -t "/path/to/project-dir"
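Then, assuming a Unix-style shell (or Git Bash on Windows), zip the contents of that folder and upload the archive; the names and paths below are illustrative:
# Zip the contents of the project directory (not the directory itself) and deploy it
cd /path/to/project-dir
zip -r ../function.zip .
aws lambda update-function-code --function-name my-function --zip-file fileb://../function.zip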
I had this error today, and this is the first result on Google, so I'll add my answer. In short, I had specified the handler incorrectly on the command line when I uploaded the function.
aws lambda create-function --function-name python-test-lambda --runtime python3.7 --role arn:aws:iam::123123123123:role/service-role/rolearn --handler lambda_function.lambda_handler --zip-file fileb://lambda_function.zip
i.e. this part was incorrect:
--handler
The handler value must be of the form file_name.function_name and match the file and function in your zip; for example, index.lambda_handler for a lambda_handler function defined in index.py.