AWS Lambda versioning

I have a question about the lambda functions versioning capabilities.
I know how the standard versioning works out of the box in AWS, but I thought there was a way for the publisher to specify the version number that would tag a specific snapshot of the function. More exactly, what I was thinking of was including a config.json in the uploaded zip file where the version would be specified, and this would afterwards be used by AWS for tagging.
The reason I am asking is that I would like, for example, to keep the version of the Lambda function in sync with the CI job build number that built (zipped) the Lambda.
Any ideas?
Many thanks

A good option would be to store your CI job build number as an environment variable on the Lambda function.
It's not exactly a recommended way to version AWS Lambda functions, but it definitely helps in sticking to typical 1.x.x versioning strategies and keeping them consistent across the pipeline.
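For example, a deploy step could write the build number into the function configuration with boto3. This is only a sketch: the function name and value are placeholders, and note that the Environment argument replaces the function's whole variable set, so merge in anything you already have.

import boto3

lambda_client = boto3.client("lambda")

# Replaces the function's environment variables, so include any existing ones as well.
lambda_client.update_function_configuration(
    FunctionName="my-function",                             # placeholder name
    Environment={"Variables": {"BUILD_NUMBER": "1.42.0"}},  # e.g. the CI build number
)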
Flipping the topic the other way around: could we go with AWS Lambda's own versions 1, 2, 3, ... and then find a way to have our CI builds also use a single-number version? I'm not yet comfortable with this approach, and I like the flexibility of 1.x.x as a versioning scheme to indicate major.minor.patch.
Standard Lambda versioning.
This is the most detailed blog I came across on this topic.
https://www.concurrencylabs.com/blog/configure-your-lambda-function-like-a-champ-sail-smoothly/

When you are deploying the Lambda function through a CLI command or the API, it's not possible to give a custom version number; it is currently a value generated automatically by AWS.
This makes it impossible to map a version number from a configuration file to the Lambda version, so that part of your use case isn't supported directly.

How to use AWS CLI to create a stack from scratch?

The problem
I'm approaching AWS, and the first test project will be a website, but I'm struggling with how to approach the resources and the tools to accomplish this.
AWS documentation is not really beginner-friendly, so to me it feels like being punched in the face at my first boxing training session.
First attempt
I've installed both the AWS and SAM CLI tools. What I would expect is to be able to create an empty stack at first and add resources one by one as the specifications are given/outlined. Instead, what I see is that I need to give the tool a template to create the new stack, which means I need to know how to write it beforehand, and therefore the template specification for each resource type.
Second attempt
This led me to create the stack and the related resources from the online console to get the final stack template. But then I need to test every new or updated resource locally, so I have to copy the template from the online console to my machine and run the CLI tools against it, which is obviously not the desired development flow.
What I expected
Coming from standard/classical web development, I would expect to be able to create the project locally, test the related resources locally, version it, and delegate the deployment to the pipeline.
So what?
All this made me understand that I'm "probably" missing something about how to use the AWS CLI tools and how development for an AWS-hosted application is meant to be done.
I'm not looking for a guide on specific resource types like every single tutorial I've found online, but something at a higher level about how to handle project development on AWS, best practices, and things like that; I can then dig deeper into any resource later when needed.
AWS's Cloud Development Kit ticks the boxes on your specific criteria.
Caveat: the CDK has a learning curve in line with its power and flexibility. There are much easier ways to deploy a web app on AWS, like the higher-level AWS Amplify framework, with abstractions tailored to front-end devs who want to minimise the mental energy spent on the underlying infrastructure.
Each of the squillion AWS and 3rd Party deploy tools is great for somebody. Nevertheless, looking at your explicit requirements in "What I expected", we can get close to the CDK as an objective answer:
Coming from a standard/classical web development
So you know JS/Python. With the CDK, you code infrastructure as functions and classes rather than 500 lines of YAML as with SAM (a minimal Python sketch follows this list). The CDK's reference implementation is in TypeScript; JS/Python are also supported. There are step-by-step AWS online workshops for these and the other supported languages.
create the project locally
Most of your work will be done locally in your language of choice, with a cdk deploy CLI command to bundle the deployment artefacts and send them up to the cloud.
test the related resources locally
The CDK has built-in testing and assertion support.
version it
"Deterministic deploy" is a CDK design goal. Commit your code and the generated deployment artefacts so you have change control over your infrastructure.
delegate the deployment to the pipeline
The CDK has good pipeline support, e.g. a push to the remote main branch can kick off a deploy.
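As promised above, here is a minimal sketch of what "infrastructure as functions and classes" looks like in a CDK v2 app written in Python; the stack and the static-site bucket are purely illustrative, not a prescription for your project:

from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class StaticSiteStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Resources are ordinary objects; cdk synth turns them into a CloudFormation template.
        s3.Bucket(self, "SiteBucket", website_index_document="index.html")

app = App()
StaticSiteStack(app, "StaticSiteStack")
app.synth()

Running cdk synth generates the CloudFormation template for you and cdk deploy pushes it, so you never have to hand-write the template that the plain CLI workflow asked you for up front.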
AWS SAM is actually a good option if you are just trying to get your feet wet with AWS. SAM is an open-source wrapper around the AWS CLI which lets you create AWS resources like Lambda in, say, ~10 lines of code vs ~100 lines if you were to use the AWS CLI directly. Yes, you'll need to learn SAM-specific things like the SAM template and the SAM CLI, but it is pretty straightforward using this doc.
Once you get the hang of it, it becomes easier to look under the hood at what/how SAM is doing and get into the weeds with the AWS CLI if you want, which will then allow you to build custom solutions (using the AWS CLI) for complex use cases that SAM may not support. Caveat: SAM is still pretty new and has open issues that could be a blocker for advanced features/complex use cases.

What is the proper way to build many Lambda functions and update them later?

I want to make a bot that makes other bots on the Telegram platform. I want to use AWS infrastructure; it looks like their Lambda functions are a perfect fit, since you pay for them only when they are active. In my concept, each bot corresponds to one Lambda function, and they all share the same codebase.
At the start, I thought of creating each new Lambda function programmatically, but I think this will bring me problems later, like the need to attach many services programmatically via the AWS SDK: API Gateway, DynamoDB. But the main problem is: how will I update the codebase for these 1000+ functions later? I think a bash script is a bad idea here.
So I moved forward and found SAM (AWS Serverless Application Model) and CloudFormation, which should help me, I guess. But I can't understand the concept. I can make a stack with all the required resources, but how will I make new bots from this one stack? Or should I build a template and create new stacks for each new bot programmatically via the AWS SDK from this template?
Next, how do I update them later? For example, I want to update all bots that have version 1.1 to version 1.2. How will I replace them? Should I make a new stack, or can I update the older ones? I don't see any options for that in the CloudFormation console or any related methods in the AWS SDK.
Thanks
But the main problem is: how will I update the codebase for these 1000+ functions later?
You don't. You use a Lambda alias. This allows you to fully decouple your Lambda versions from your clients. It works because your clients' code (or API Gateway) references an alias of your function, and the alias itself is fixed and does not change.
However, an alias is like a pointer: it can point to different versions of your Lambda function. Therefore, when you publish a new Lambda version, you just point the alias to it. It's fully transparent to your clients, and the alias they use does not require any change.
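As an illustration of that flow (the function and alias names are just placeholders), publishing a version and re-pointing the alias with boto3 looks roughly like this:

import boto3

lambda_client = boto3.client("lambda")

# Freeze the code currently in $LATEST as a new immutable, numbered version.
new_version = lambda_client.publish_version(FunctionName="my-bot")["Version"]

# Re-point the fixed alias that clients / API Gateway reference at that version.
lambda_client.update_alias(
    FunctionName="my-bot",
    Name="live",
    FunctionVersion=new_version,
)

Clients keep invoking the "live" alias the whole time, so nothing changes on their side.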
I agree with @Marcin. It would also be worth checking out the Serverless Framework. It seems like you are still experimenting, so most likely you are deploying using bash scripts with AWS SDK/SAM commands. This is fine, but once you start getting the gist of what your architecture looks like, I think you will appreciate what Serverless can offer. You can deploy/tear down CloudFormation stacks in a matter of seconds. You can also use serverless-offline to run a local build of your AWS Lambda architecture on your machine.
All this has saved me hours of grunt work.

Keep two versions of AWS Lambdas running

Simple explanation of my current infrastructure: I am using AWS Lambdas (running Python code), which are deployed via GitLab CI using the Serverless Framework.
Explanation of the situation: I currently have an AWS Lambda which uses a specific library version (for now, say version 1.x.x). At some point, this Lambda will start using a new version of this library (say 2.x.x), but I want both of these Lambdas to remain deployed and available to handle requests.
If at some later point version 3.x.x of the library comes out, I want to keep the Lambdas using versions 3.x.x and 2.x.x running (basically the current version and current version - 1). Let's call them Lambda_NEW and Lambda_OLD.
AWS Lambdas have the concepts of versions and aliases which could be used, but unfortunately they are not supported by Serverless directly.
Note: Serverless supports multiple versions (which cannot be named), and there is a plugin called serverless-aws-alias which can set aliases for you, but those refer to the actual AWS Lambda versions (see https://github.com/serverless-heaven/serverless-aws-alias/issues/148).
Do you have any idea on how to tackle it?
My only workable thought so far is to keep two branches (NEW and OLD) which use two different versions of the library, with each branch having its own deploy CI. This is very counter-intuitive though, since I don't know how I would then maintain develop and master branches, or when to deploy to which stage, etc.
Also, I somehow want both Lambda_NEW and Lambda_OLD to be deployed at the same time (e.g. when switching to library 5.x.x, I want version 5.x.x in NEW and version 4.x.x in OLD).
I'm not sure from your post, but I gather that you want a way to handle canary deployments, so that you could easily roll back changes? If that's not the case, could you edit your question and provide a bit more clarity?
If that's the case, I'd recommend following this guide and using the canary-deployments plugin, which will automatically create aliases for new versions, and allows you to define how traffic is shifted between deployed versions.
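Roughly speaking, that traffic shifting sits on top of Lambda's alias routing configuration, which you can also drive directly if you ever need to; a boto3 sketch with placeholder function name, alias, and version numbers:

import boto3

lambda_client = boto3.client("lambda")

# Keep 90% of traffic on version 5 and send 10% to the newly published version 6.
lambda_client.update_alias(
    FunctionName="my-function",
    Name="live",
    FunctionVersion="5",
    RoutingConfig={"AdditionalVersionWeights": {"6": 0.10}},
)

Rolling back is then just a matter of removing the extra weight or pointing the alias back, which is the easy-rollback property the canary approach gives you.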

Mapping git hash to deployed lambda

We would like to be able to understand the version of our software that is currently deployed to a particular AWS lambda. What are the best practices for tracking a code commit hash to an AWS lambda? We've looked at AWS tagging and we've also looked at AWS Lambda Aliasing but neither approach seems convenient or user-friendly. Are there other services that AWS provides?
Thanks
Without context and a better understanding of your use case around the commit hash, it's difficult to give a directly useful answer, and as other answers have shown, there are many ways you could accomplish this. That being said, the commit hash of a particular piece of code is ultimately metadata, and AWS has a solution for dealing with resource metadata: tags.
Best practice is to tag your resources with metadata. Almost all, if not all, AWS resources (including Lambda) support tags. As stated in the AWS documentation “tagging allows you to quickly search, filter, and manage resources” and subject to AWS limits your tags can be pretty much any key-value pair you like, including “commit: hash”.
The tagging strategy here would be to assign the commit hash to a tag, “commit” with the value “e63abe27”. You can tag resources manually through the console or you can apply tags as part of your build process.
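As an example of the build-process route, a deploy script could apply the tag through the SDK; a minimal boto3 sketch reusing the example hash and function ARN from this answer:

import boto3

lambda_client = boto3.client("lambda")

# Attach the commit hash as a tag to the function during the build/deploy step.
lambda_client.tag_resource(
    Resource="arn:aws:lambda:us-east-1:123456789012:function:myFunction",
    Tags={"commit": "e63abe27"},
)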
Once tagged, at a high level, you would then be able to identify which commit is being used by listing the tags for the lambda in question. The CLI command would be something like:
aws lambda list-tags --resource arn:aws:lambda:us-east-1:123456789012:function:myFunction
You can learn more about tags and tagging strategy by reviewing the AWS docs here and you can download the Tagging Best Practice whitepaper here.
One alternative could be to generate a file with the Git SHA as part of your build system that is packaged together with the other files in the build artifact. The following script generates a gitSha.json file in the ${outputDir}:
#!/usr/bin/env bash
# Capture the short commit hash of the current HEAD.
gitSha=$(git rev-parse --short HEAD)
# Write it as JSON into the build output directory so it ships with the artifact.
printf '{"gitSha": "%s"}' "${gitSha}" > "${outputDir}/gitSha.json"
Consequently, the gitSha.json may look like:
{"gitSha": "5c805638"}
This file can then be accessed by downloading the package. Alternatively, you can create a function that inspects the file at runtime and returns its value to the caller, writes it to a log, or similar, depending on your use case.
This script was implemented using bash and git rev-parse, but you can use any scripting language in combination with a Git library that you are comfortable with.
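As an illustration of the runtime-inspection option, a Lambda handler could simply read the bundled file back; a sketch, assuming the file was written to the root of the deployment package as above:

import json

def handler(event, context):
    # gitSha.json was written into the package root by the build script above.
    with open("gitSha.json") as f:
        return json.load(f)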
The best way to version a Lambda is to use the Create Version option and add these versions to an alias.
We use this mechanism extensively for mapping a single AWS Lambda to different API Gateway endpoints. We use the environment variables provided by AWS Lambda to move all configuration outside the Lambda function. In each version of the Lambda, we change these environment variables as required and create a new version. This version can be mapped to an alias, which helps keep the API Gateway or the integration points intact (without any change to the integration).
If you are using Serverless, try this in serverless.yml:
provider:
  versionFunctions: true
and
functions:
  MyFunction:
    tags:
      gitSha: <commit hash>  # illustrative key; set the value from your CI
You can also try Serverless plugins; one of them:
serverless plugin install --name serverless-version-tracker
It uses git tags as versions; you then need to manage the git tags yourself.

Alternative to AWS lambda when deployment package is greater than 250MB?

When I want to launch some code serverless, I use AWS Lambda. However, this time my deployment package is greater than 250MB.
So I can't deploy it on a Lambda...
I want to know what the alternatives are in this case.
I'd question your architecture. If you are running into problems with how AWS has designed a service (i.e. Lambda's 250MB max package size), it's likely you are using the service in a way it wasn't intended.
An anti-pattern I often see is people stuffing all their code into one function, similar to how you'd deploy all your code to a single server. This is not really the use case for AWS Lambda.
Does your function do one thing? If not, refactor it out into different functions doing different things. This may help remove dependencies when you split into multiple functions.
Another thing you can look at is whether you can write the function in a different language (another reason to keep functions small). I once had a Lambda function in Python that went over 250MB; when I solved the same problem with Node.js, my function size dropped to 20MB.
One thing you can do is, at the start of the Lambda function, download the dependencies from an S3 bucket into the /tmp folder and then add them to the Python path. That gives you an extra 512MB, although you need to take the download time into consideration for some of the Lambda invocations.
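A rough sketch of that pattern is below; the bucket name, object key, and heavy_dependency module are placeholders, and the download only runs on a cold start because /tmp survives while the execution environment is reused:

import sys
import zipfile

import boto3

DEPS_ZIP = "/tmp/deps.zip"
DEPS_DIR = "/tmp/deps"

s3 = boto3.client("s3")

def handler(event, context):
    # Fetch and unpack the dependency bundle once per container (cold start only).
    if DEPS_DIR not in sys.path:
        s3.download_file("my-bucket", "deps.zip", DEPS_ZIP)
        with zipfile.ZipFile(DEPS_ZIP) as zf:
            zf.extractall(DEPS_DIR)
        sys.path.insert(0, DEPS_DIR)

    import heavy_dependency  # now resolvable from /tmp/deps
    return heavy_dependency.do_work(event)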