How to create and zip a Docker container for AWS Lambda

I'm trying to create and then zip a Docker container to upload to S3, to be run by an AWS Lambda function. I was trying to work from an article, but the instructions are sparse (https://github.com/abhisuri97/auto-alt-text-lambda-api).
I've installed Docker and the Amazon Linux image, but I don't know how to create a Docker container that contains the GitHub repo, and then zip it so that it can be accessed by Lambda.
This is what I've tried to piece together from other tutorials:
git clone https://github.com/abhisuri97/auto-alt-text-lambda-api.git
cd auto-alt-text-lambda-api
docker run -v -it amazonlinux:2017.12
zip -r -9 -q ~/main.zip
Any help would be greatly appreciated.

The instructions aren't clear, but I suspect the reference to Docker is just for local testing. You don't need Docker to run an AWS Lambda function. You will, however, need an API Gateway API to invoke the Lambda function over HTTPS.
I'd recommend starting with a CloudFormation stack using the AWS Serverless Application Model (https://docs.aws.amazon.com/lambda/latest/dg/serverless_app.html).
Create an S3 bucket for the zip file and create a CloudFormation template similar to:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  LambdaFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: application.predict
      Runtime: python2.7
      Events:
        HttpGet:
          Type: Api
          Properties:
            Path: '/auto-alt-text-api'
            Method: get
Package the Lambda function with:
aws cloudformation package --template-file template.yaml --output-template-file template-out.yaml --s3-bucket <your-bucket> --s3-prefix <your-prefix>
Then deploy it with:
aws cloudformation deploy --template-file template-out.yaml --stack-name auto-alt-text-lambda-api-stack --capabilities CAPABILITY_IAM
You will probably have to add IAM roles and Lambda permissions to the template for the application to work properly.
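For example, SAM's Policies property can attach managed policies to the function's execution role. This is only a sketch; which policies you actually need depends on what the function calls:

```yaml
# Hypothetical additions to the function above
LambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: application.predict
    Runtime: python2.7
    Policies:
      - AWSLambdaBasicExecutionRole   # CloudWatch Logs permissions
```

SAM expands this into an IAM role for you, so you often don't need to declare a separate AWS::IAM::Role resource.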

Related

Lambda code does not get zipped during `package` command when using substacks

I am using CloudFormation to create Lambda functions. The Lambda function code is stored in a separate file, which is then packaged and uploaded using the aws cloudformation package command. This works fine and the stack deploys successfully:
# Filename: auth/auth.yml
# Lambda JS file: auth/lambda-pre-signup.js
Resources:
  ## Other resources here
  MyPreSignupLambda:
    Type: AWS::Lambda::Function
    Properties:
      Architectures:
        - arm64
      Code: 'lambda-pre-signup.js'
      Handler: 'lambda-pre-signup.handler'
      Runtime: nodejs16.x
      PackageType: Zip
      Role: !GetAtt MyRole.Arn
Command:
aws cloudformation package --template-file auth.yml --s3-bucket my-bucket --output-template-file generated-auth.yml
aws cloudformation deploy --template-file generated-auth.yml --stack-name test-stack --capabilities CAPABILITY_IAM
However, when I create a root stack template and reference lambda, I get an error:
Resource handler returned message: "Could not unzip uploaded file. Please check your file, then try to upload again. (Service: Lambda, Status Code: 400, Request ID: xxxxx)"
When I check the S3 bucket for the uploaded file, the source code is there but it is not zipped (I can download and directly view the code without needing to unzip it).
Here is my current CF template for root stack:
# Filename: root.yml
Resources:
  MyAuth:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: ./auth/auth.yml
Command:
aws cloudformation package --template-file root.yml --s3-bucket my-bucket --output-template-file generated-root.yml
aws cloudformation deploy --template-file generated-root.yml --stack-name test-root-stack --capabilities CAPABILITY_IAM
Is there some option in the package command to make sure that the uploaded lambda code is zipped?
EDIT: Wrote a wrong argument

AWS Cloudformation | How to Configure Lambda to Use Latest Code in S3 Bucket

Tests3bucketLambda:
  Type: "AWS::Lambda::Function"
  Properties:
    Code:
      S3Bucket: TestS3Bucket
      S3Key: Tests3.zip
    FunctionName: "test-lambda-function"
    Handler: lambda-function-s3.lambda_handler
    Role: !GetAtt LambdaExecutionRole.Arn
    Runtime: python3.6
Issue: When I update the code, zip it, and upload it to the S3 bucket during the build, the change is not deployed to the existing Lambda function.
If you deploy new code to the object under the same key, CloudFormation will not treat it as a change, since the template itself hasn't been modified. There are a few ways to mitigate this:
Use bucket versioning and provide object version along with object key: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html
Code:
  S3Bucket: TestS3Bucket
  S3Key: Tests3.zip
  S3ObjectVersion: blablabla....
Modify your object key on each deployment, for example with a timestamp:
Code:
  S3Bucket: TestS3Bucket
  S3Key: Tests3_2021-05-06T17:15:55+00:00.zip
Use automated tools like Terraform or AWS CDK to take care of these things
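Option 2 can be sketched in shell. The bucket and artifact names below follow the snippet above and are illustrative; the AWS commands are shown commented out since they need credentials:

```shell
# Build a unique object key per deployment by embedding a UTC timestamp
STAMP=$(date -u +%Y-%m-%dT%H-%M-%S)
KEY="Tests3_${STAMP}.zip"
echo "$KEY"
# With AWS credentials configured, upload and rewrite the template:
#   aws s3 cp Tests3.zip "s3://TestS3Bucket/${KEY}"
#   sed "s/Tests3.zip/${KEY}/" template.yml > template-out.yml
```

Because the key changes, the rewritten template differs on every run, so CloudFormation sees a change and updates the function.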
If you want the Lambda function to automatically pick up the latest code, that is not possible with CloudFormation alone.
To do that, you could sync the code file to an S3 bucket and then try the approach mentioned here: How can AWS Lambda pick the latest versions of the script from S3. I was able to achieve it and have described the solution there.
Expanding on Oleksii's answer, I'll just add that I use a Makefile and an S3 bucket with versioning to handle this issue. A version-enabled S3 bucket creates a new object and a new version number every time a modified file is uploaded (keeping all the old versions and their version numbers). If you don't want a dependency on make in your build/deploy process, this won't be of interest to you.
Make can examine the filesystem and trigger a target action based on an updated file (as a dependency).
Here's a Makefile for a simple stack with one lambda function. The relevant parts of the Cloudformation (CFN) file will be shown below.
.DEFAULT_GOAL := deploy
# Bucket must exist and be versioning-enabled
lambda_bucket = lambda_uploads

deploy: lambda.zip
	@set -e ;\
	lambda_version=$$(aws s3api list-object-versions \
	  --bucket $(lambda_bucket) --prefix lambda.zip \
	  --query 'Versions[?IsLatest == `true`].VersionId | [0]' \
	  --output text) ;\
	echo "Running aws cloudformation deploy with ZIP version $$lambda_version..." ;\
	aws cloudformation deploy --stack-name zip-lambda \
	  --template-file test.yml \
	  --parameter-overrides ZipVersionId=$$lambda_version \
	  --capabilities CAPABILITY_NAMED_IAM

lambda.zip: lambda/lambda_func.js
	@zip -r lambda.zip lambda
	@aws s3 cp lambda.zip s3://$(lambda_bucket)
The deploy target depends on the lambda.zip target, which itself depends on lambda_func.js. This means the lambda.zip rule must be brought up to date before the rule for deploy can run.
So, if lambda_func.js has a timestamp newer than the lambda.zip file, an updated zip file is created and uploaded. If not, the rule is not executed, because the Lambda function has not been updated.
Now the deploy rule can be run. It:
Uses the AWS CLI to get the version number of the latest (or newest) version of the zip file.
Passes that version number as a parameter to Cloudformation as it deploys the stack, again using the AWS CLI.
Some quirks in the Makefile:
The backslashes and semicolons are required to run the deploy rule as one shell invocation. This is needed to capture the lambda_version variable for use when deploying the stack.
The --query bit is an AWS CLI capability used to extract information from the JSON data returned by the command. jq could also be used here.
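As a local illustration of that extraction (the JSON below is a trimmed, made-up list-object-versions response, and jq must be installed):

```shell
# Fake response in the shape returned by `aws s3api list-object-versions`
cat > versions.json <<'EOF'
{"Versions":[{"VersionId":"abc123","IsLatest":true},{"VersionId":"old999","IsLatest":false}]}
EOF
# jq equivalent of the JMESPath query used in the Makefile
LATEST=$(jq -r '.Versions[] | select(.IsLatest) | .VersionId' versions.json)
echo "$LATEST"
```

Either tool works; the Makefile uses --query simply to avoid an extra dependency.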
The relevant parts of the Cloudformation (YAML) file look like this:
AWSTemplateFormatVersion: 2010-09-09
Description: Test new lambda ZIP upload
Parameters:
  ZipVersionId:
    Type: String
Resources:
  ZipLambdaRole: ...
  ZipLambda:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: zip-lambda
      Role: !GetAtt ZipLambdaRole.Arn
      Runtime: nodejs16.x
      Handler: index.handler
      Code:
        S3Bucket: lambda_uploads
        S3Key: lambda.zip
        S3ObjectVersion: !Ref ZipVersionId
      MemorySize: 128
      Timeout: 3
The zip file is uniquely identified by S3Bucket, S3Key, and S3ObjectVersion. Note that, and this is important, if the zip file was not updated (the version number remains the same as in previous deploys), CloudFormation will not generate a change set; it requires a new version number to do that. This is the desired behavior: there is no new deploy unless the lambda has been updated.
Finally you'll probably want to put a lifecycle policy on the S3 bucket so that old versions of the zip file are periodically deleted.
These answers to other questions informed this answer.
This is a bit old, but it needs a concrete answer for those who are starting off.
Oleksii's answer is a correct guideline. However, the way to implement option 2 would be as follows.
I used Java, but the same logic applies to Python too.
In your case, imagine the CloudFormation template for the Lambda that you pasted is named cloud_formation_lambda.yml.
Now, in the CodeBuild stage where you prepare the artifact you mention (Tests3 in your case), append a unique identifier such as the epoch.
Then all you need to do in either the build phase or the post-build phase is use some simple Linux commands to accommodate those name changes:
First, rename your built artifact to append the unique value such as the epoch.
Then use a sed command to replace the occurrences of Tests3 in your CloudFormation template.
The buildspec.yml that implements this is as follows:
phases:
  install:
    runtime-versions:
      java: corretto17
  build:
    commands:
      - echo $(date +%s) > epoch.txt
      - mvn package
  post_build:
    commands:
      - mv target/Tests3.jar target/Tests3-$(cat epoch.txt).jar
      - sed -i "s/Tests3.jar/Tests3-$(cat epoch.txt).jar/g" cloud_formation_lambda.yml
artifacts:
  files:
    - target/Tests3-*.jar
    - cloud_formation_lambda.yml
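The rename-and-sed steps can be exercised locally with stand-in files (names follow the example above; sed -i as written assumes GNU sed):

```shell
EPOCH=$(date +%s)
echo "$EPOCH" > epoch.txt
# Stand-ins for the Maven artifact and the template
touch Tests3.jar
printf 'Code: Tests3.jar\n' > cloud_formation_lambda.yml
# Rename the artifact and patch the template to match
mv Tests3.jar "Tests3-$(cat epoch.txt).jar"
sed -i "s/Tests3.jar/Tests3-$(cat epoch.txt).jar/g" cloud_formation_lambda.yml
cat cloud_formation_lambda.yml
```

After this, the template references the uniquely named jar, so every build produces a template change that CloudFormation will deploy.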

How do I destroy a aws SAM Local lambda?

Following the README for an example, it says:
# AWS SAM Hello World Example #
A simple AWS SAM template that specifies a single Lambda function.
## Usage ##
To create and deploy the SAM Hello World example, first ensure that you've met the requirements described in the [root README](../../README.md). Then
follow the steps below.
### Test your application locally ###
Use [SAM Local](https://github.com/awslabs/aws-sam-local) to run your Lambda function locally:
sam local invoke "HelloWorldFunction" -e event.json
### Package artifacts ###
Run the following command, replacing `BUCKET-NAME` with the name of your bucket:
sam package --template-file template.yaml --s3-bucket BUCKET-NAME --output-template-file packaged-template.yaml
This creates a new template file, packaged-template.yaml, that you will use to deploy your serverless application.
### Deploy to AWS CloudFormation ###
Run the following command, replacing `MY-NEW-STACK` with a name for your CloudFormation stack.
sam deploy --template-file packaged-template.yaml --stack-name MY-NEW-STACK --capabilities CAPABILITY_IAM
This uploads your template to an S3 bucket and deploys the specified resources using AWS CloudFormation.
Now what is the sam local command to delete the whole stack including s3 bucket and CF stack?
Edit for 2022:
The SAM CLI now has a delete command; see the official docs. You should be able to run it like this now:
sam delete --stack-name MY-NEW-STACK
Thanks @jesusnoseq for the pointer to update 🍻.
Legacy 2018 Answer
There is currently no sam command to delete all of these resources. You would just use the relevant AWS CLI commands, which the SAM CLI is a wrapper around anyway. For example, to delete your CloudFormation stack:
aws cloudformation delete-stack --stack-name MY-NEW-STACK

Importing AWS information from CloudFormation to CodeBuild

I have a pipeline in AWS with Codestar, CodeBuild, CloudFormation, etc.
I am trying to figure out how to get information from the CloudFormation step returned to the CodeBuild step. Let me break it down:
I have a buildspec.yml for CodeBuild
# buildspec.yml
...
phases:
  ...
  build:
    commands:
      - aws cloudformation package --region $REGION --template template.yml --s3-bucket $S3_BUCKET --output-template $OUTPUT_TEMPLATE
The above kicks off a CloudFormation build using our template.yml
# template.yml
...
S3BucketAssets:
  Type: AWS::S3::Bucket
...
At this point, it creates a unique name for an S3 bucket. Awesome. Now, for step 2 in my buildspec.yml for CodeBuild, I want to push items to the S3 bucket just created in the CloudFormation template. BUT, I don't know how to get the dynamically created name of the S3 bucket from the CloudFormation template. I want something similar to:
# buildspec.yml
...
phases:
  ...
  build:
    commands:
      # this is the same step as shown above
      - aws cloudformation package --region $REGION --template template.yml --s3-bucket $S3_BUCKET --output-template $OUTPUT_TEMPLATE
      # this is the new step
      - aws s3 sync dist_files/ s3://{NAME_OF_THE_NEW_S3_BUCKET}
How can I accomplish getting the dynamically named S3 bucket so that I can push to it?
I am aware that within a CloudFormation template you can reference the S3 bucket with something like !GetAtt [ClientWebAssets, WebsiteURL]. But I do not know how to get that information out of the CloudFormation stack and back into the CodeBuild buildspec.
You could move to using CodePipeline. Stage one would deploy the application via CloudFormation, with the stack creation output passed along as an output artifact:
http://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html#action-requirements
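Alternatively (a sketch, not part of the answer above): declare the bucket name as a stack output in template.yml, then read it back with the AWS CLI in a later buildspec command. The output name here is made up:

```yaml
Outputs:
  AssetsBucketName:
    Value: !Ref S3BucketAssets   # Ref on an S3 bucket returns its name
```

A follow-up buildspec command could then run something like `BUCKET=$(aws cloudformation describe-stacks --stack-name <your-stack> --query "Stacks[0].Outputs[?OutputKey=='AssetsBucketName'].OutputValue" --output text)` followed by `aws s3 sync dist_files/ s3://$BUCKET`.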

What is the syntax in serverless yml file to deploy lambda to multiple regions?

I have a requirement to deploy my lambda artifact to 3 different regions. I am using serverless framework.
My .yml file looks like this:
provider:
  name: aws
  runtime: nodejs4.3
  stage: dev
  region: us-east-1
AFAIK it's impossible to configure deployment to multiple regions via serverless.yml. However, you can do it via the cli, one region at a time:
serverless deploy --stage production --region eu-central-1
serverless deploy --stage production --region eu-west-1
...
You may want to automate it using your own script, implement it as a plugin, or submit a feature proposal.
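A minimal script wrapper could simply loop over the regions. The region list here is illustrative, and the deploy commands are collected and printed rather than executed:

```shell
CMDS=""
for region in eu-central-1 eu-west-1 us-east-1; do
  # Collect (or, in a real script, run) one deploy per region
  CMDS="${CMDS}serverless deploy --stage production --region ${region}\n"
done
printf "$CMDS"
```

Dropping the variable and running `serverless deploy` directly inside the loop gives you the automation the answer describes.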