Mapping git hash to deployed lambda - amazon-web-services

We would like to be able to understand the version of our software that is currently deployed to a particular AWS lambda. What are the best practices for tracking a code commit hash to an AWS lambda? We've looked at AWS tagging and we've also looked at AWS Lambda Aliasing but neither approach seems convenient or user-friendly. Are there other services that AWS provides?
Thanks

Without more context and a better understanding of your use case around the commit hash, it's difficult to give a directly useful answer, and as other answers have shown, there are many ways you could accomplish this. That said, the commit hash of a particular piece of code is ultimately metadata, and AWS has a solution for dealing with resource metadata: tags.
Best practice is to tag your resources with metadata. Almost all, if not all, AWS resources (including Lambda) support tags. As the AWS documentation states, “tagging allows you to quickly search, filter, and manage resources”, and subject to AWS limits your tags can be pretty much any key-value pair you like, including a commit hash.
The tagging strategy here would be to assign the commit hash to a tag named “commit” with a value such as “e63abe27”. You can tag resources manually through the console, or you can apply tags as part of your build process.
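For example, a build step along these lines could apply the tag automatically (a minimal sketch; the function ARN is the illustrative one used below, and the short hash comes from git rev-parse):
# tag the function with the current short commit hash
gitSha=$(git rev-parse --short HEAD)
aws lambda tag-resource \
    --resource arn:aws:lambda:us-east-1:123456789012:function:myFunction \
    --tags "commit=${gitSha}"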
Once tagged, at a high level, you would then be able to identify which commit is being used by listing the tags for the lambda in question. The CLI command would be something like:
aws lambda list-tags --resource arn:aws:lambda:us-east-1:123456789012:function:myFunction
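The response is a JSON document listing the tags on the function, along these lines (illustrative output):
{
    "Tags": {
        "commit": "e63abe27"
    }
}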
You can learn more about tags and tagging strategy by reviewing the AWS tagging documentation and the AWS Tagging Best Practices whitepaper.

One alternative could be to generate a file containing the Git SHA as part of your build system and package it together with the other files in the build artifact. The following script generates a gitSha.json file in ${outputDir}:
#!/usr/bin/env bash
# capture the short commit hash of HEAD
gitSha=$(git rev-parse --short HEAD)
# write it as JSON into the build output directory
printf '{"gitSha": "%s"}' "${gitSha}" > "${outputDir}/gitSha.json"
The resulting gitSha.json might look like:
{"gitSha": "5c805638"}
This file can then be accessed by downloading the deployment package. Alternatively, you can have the function read the file at runtime and return its value to the caller, write it to a log, or similar, depending on your use case.
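To inspect the file without touching the function itself, something like this works (a sketch, assuming the function is named myFunction and that curl and unzip are available):
# fetch the pre-signed URL of the deployment package and print gitSha.json from it
url=$(aws lambda get-function --function-name myFunction --query 'Code.Location' --output text)
curl -s "$url" -o package.zip
unzip -p package.zip gitSha.json
# {"gitSha": "5c805638"}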
This script uses Bash and git rev-parse, but you can use any scripting language in combination with a Git library you are comfortable with.

The best way to version a Lambda is to use the Create Version option and then add those versions to an Alias.
We use this mechanism extensively to map a single AWS Lambda to different API Gateway endpoints. We use the environment variables provided by AWS Lambda to move all configuration outside the Lambda function. In each version of the Lambda we change these environment variables as required and create a new version. That version can then be mapped to an alias, which helps keep the API Gateway or other integration points intact (no change is needed on the integration side).
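A sketch of that flow with the CLI (the function name, alias name, and version number are illustrative):
# freeze the current code and configuration as an immutable version
aws lambda publish-version --function-name myFunction --description "build 42"
# suppose that returned version 5; point a stable alias at it
aws lambda create-alias --function-name myFunction --name prod --function-version 5
# API Gateway integrates with the alias ARN (...:function:myFunction:prod), which never changes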

If using Serverless Framework, try this in serverless.yml:
provider:
  versionFunctions: true
and
functions:
  MyFunction:
    tags:
      commit: e63abe27  # e.g. the commit hash as a tag
You can also try Serverless plugins, one of them being serverless-version-tracker:
serverless plugin install --name serverless-version-tracker
It uses Git tags as versions; you then need to manage the Git tags yourself.

Related

How to use AWS CLI to create a stack from scratch?

The problem
I'm approaching AWS, and the first test project will be a website, but I'm struggling with how to approach the resources and the tools to accomplish this.
AWS documentation is not really beginner-friendly; to me it is like being punched in the face at your first boxing training session.
First attempt
I've installed both the AWS and SAM CLI tools. What I expected was to be able to create an empty stack at first and add resources one by one as the specifications are given/outlined. Instead, I see that I need to give the tool a template to create the new stack, which means I need to know how to write one beforehand, and therefore the template specification for each resource type.
Second attempt
This led me to create the stack and the related resources from the online console to get the final stack template. But then I need to test every new or updated resource locally, so I have to copy the template from the online console to my machine and run the CLI tools against it, which is obviously not the desired development flow.
What I expected
Coming from standard/classical web development, I would expect to be able to create the project locally, test the related resources locally, version it, and delegate the deployment to the pipeline.
So what?
All this made me understand that I'm "probably" missing something about how to use the AWS CLI tools and how development for an AWS-hosted application is meant to be done.
I'm not seeking a guide on specific resource types like every single tutorial I've found online, but something at a higher level about how to handle project development on AWS, best practices, and things like that. I can then dig deeper into any resource later when needed.
AWS's Cloud Development Kit ticks the boxes on your specific criteria.
Caveat: the CDK has a learning curve in line with its power and flexibility. There are much easier ways to deploy a web app on AWS, like the higher-level AWS Amplify framework, with abstractions tailored to front-end devs who want to minimise the mental energy spent on the underlying infrastructure.
Each of the squillion AWS and 3rd Party deploy tools is great for somebody. Nevertheless, looking at your explicit requirements in "What I expected", we can get close to the CDK as an objective answer:
Coming from a standard/classical web development
So you know JS/Python. With the CDK, you code infrastructure as functions and classes, rather than 500 lines of YAML as with SAM. The CDK's reference implementation is in TypeScript; JS/Python are also supported. There are step-by-step AWS online workshops for these and the other supported languages.
create the project locally
Most of your work will be done locally in your language of choice, with a cdk deploy CLI command to bundle the deployment artefacts and send them up to the cloud.
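A hypothetical first session with the CDK CLI, assuming Node.js is installed:
# install the CDK CLI, then scaffold a new app in TypeScript
npm install -g aws-cdk
cdk init app --language typescript
# preview the changes against what is currently deployed
cdk diff
# bundle the artefacts and deploy the stack
cdk deploy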
test the related resources locally
The CDK has built-in testing and assertion support.
version it
"Deterministic deploy" is a CDK design goal. Commit your code and the generated deployment artefacts so you have change control over your infrastructure.
delegate the deployment to the pipeline
The CDK has good pipeline support: i.e. a push to the remote main branch can kick off a deploy.
AWS SAM is actually a good option if you are just trying to get your feet wet with AWS. SAM is an open-source wrapper around the AWS CLI that lets you create AWS resources like Lambda in, say, ~10 lines of code vs ~100 lines if you were to use the AWS CLI directly. Yes, you'll need to learn SAM-specific things like the SAM template and the SAM CLI, but it is pretty straightforward with the official docs.
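The basic loop looks roughly like this (a sketch of the standard SAM CLI commands):
# scaffold a project from a starter template
sam init
# build the deployment artifacts
sam build
# invoke a function locally to test it
sam local invoke
# interactive first deployment (stores your choices in samconfig.toml)
sam deploy --guided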
Once you get the hang of it, it becomes easier to look under the hood at what/how SAM is doing and get into the weeds with the AWS CLI if you want, which will then let you build custom solutions (using the AWS CLI) for complex use cases that SAM may not support. Caveat: SAM is still pretty new and has open issues that could be a blocker for advanced features/complex use cases.

How to deploy only changed lambda functions in github with aws codepipeline and cloudformation?

I'm using CodePipeline to deploy my CloudFormation templates that contain Lambda functions defined as AWS::Serverless::Function resources.
The CodePipeline is triggered by a commit in my main branch on GitHub.
The Source Stage in the CodePipeline retrieves the source files from GitHub. Zero or more Lambda functions could change in a commit. There are several Lambda Functions in this repository.
I intend to run the CloudFormation templates through taskcat and the Lambda Python code through unit tests during a test stage, and then deploy the CloudFormation templates and Lambda functions to production. The problem is, I can't figure out how to differentiate between changed and unchanged Lambda functions, or how to automate the deployment of these Lambda functions.
I would like to only test and deploy new or update changed Lambda functions along with my CloudFormation templates - what is the best practice for this (ideally without Terraform or hacks)?
Regarding testing: best practice is actually to simply test all Lambda code in the repo on push, before deploying. You might skip some work, for example with GitHub Actions, by testing only the files that have changed, but it definitely takes some scripting and it hardly ever adds much value. Each testing tool has its own way of dealing with this (sometimes you can simply pass the files you want to test as an argument and then it's easy, but sometimes test tools take more of an all-or-nothing approach and it gets complicated real fast).
Also, personally I'm not a big fan of taskcat, since it doesn't really add a lot of value and it's not a very intuitive tool (it's also relatively outdated, IMO). Is there a reason you need to do these types of testing?
Regarding deployment: There are a few considerations when trying to only update lambdas that have changed.
Firstly, CloudFormation already does this automatically: as long as the CloudFormation resource for the Lambda doesn't change, the Lambda will not be updated.
However, SAM has a small problem there: it will re-package the Lambda code on every pipeline run and update the CodeUri property of the Lambda, and thus the Lambda gets updated (even though the code might stay the same).
To work around this, you have several options:
Simply accept that SAM updates your function even though the code might not have changed.
Build SAM locally, and use the --cached and --cache-dir options of sam build in your pipeline. Make sure to persist (push) the folder that you set as cache-dir between runs (see the sketch after this list).
Use a different file packaging tool than SAM: either a custom script or some other tool that only pushes your code to S3 when the files have changed.
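A sketch of the cached build from the second option, assuming the cache directory is kept between pipeline runs:
# reuse previously built artifacts when the source hasn't changed
sam build --cached --cache-dir .aws-sam/cache
sam deploy --no-confirm-changeset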
If you're into programming, I'd suggest you take a look at the CDK. It's a major upgrade from CloudFormation/SAM, and it handles code bundling better (it only updates when files have changed). The testing options are also much wider with the CDK.

What is the proper way to build many Lambda functions and update them later?

I want to make a bot that makes other bots on the Telegram platform. I want to use AWS infrastructure; it looks like their Lambda functions are a perfect fit, since you pay for them only when they are active. In my concept, each bot equals one Lambda function, and they all share the same codebase.
At the starting point, I thought to create each new Lambda function programmatically, but I think this will bring me problems later, like needing to attach many services programmatically via the AWS SDK: API Gateway, DynamoDB. But the main problem is: how will I update the codebase for these 1000+ functions later? I think a bash script is a bad idea here.
So I moved forward and found SAM (AWS Serverless Application Model) and CloudFormation, which I guess should help me. But I can't understand the concept. I can make a stack with all the required resources, but how will I make new bots from this one stack? Or should I build a template and make new stacks for each new bot programmatically via the AWS SDK from this template?
Next, how do I update them later? For example, I want to update all bots that have version 1.1 to version 1.2. How will I replace them? Should I make a new stack, or can I update the older ones? I don't see any options for that in the CloudFormation UI or any related methods in the AWS SDK.
Thanks
But the main problem is: how will I update the codebase for these 1000+ functions later?
You don't. You use a Lambda alias. This allows you to fully decouple your Lambda versions from your clients. It works because your client code (or API Gateway) refers to an alias of your function; the alias is fixed and does not change.
However, an alias is like a pointer: it can point to different versions of your Lambda function. Therefore, when you publish a new Lambda version, you just point the alias to it. It's fully transparent to your clients, and the alias they use does not require any change.
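Rolling all bots forward then looks something like this (a sketch; the function name, alias, and version number are illustrative):
# publish the updated code as a new immutable version
aws lambda publish-version --function-name myBot --description "v1.2"
# repoint the alias the clients use, e.g. from version 5 to version 6
aws lambda update-alias --function-name myBot --name live --function-version 6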
I agree with @Marcin. Also, it would be worth checking out the Serverless Framework. It seems like you are still experimenting, so most likely you are deploying using bash scripts with AWS SDK/SAM commands. This is fine, but once you start getting the gist of what your architecture looks like, I think you will appreciate what Serverless can offer. You can deploy/tear down CloudFormation stacks in a matter of seconds. You can also use serverless-offline to have a local build of your AWS Lambda architecture on your local machine.
All this has saved me hours of grunt work.

Is there a way to do a dry run with serverless CLI?

I want to test the resolution of variables inside my serverless.yaml file; e.g. some come from the command line, some from a file, and others from S3.
e.g.
environment:
  whitelist: ${file(config/forwardproxy.sit.yaml):Common.defaultWhitelist}
I want to do a deploy as a dry run. The --nodeploy option only seems to be available with the Azure provider.
Is there a way to do this with the AWS provider?
I use serverless package for the purpose of testing variable resolution, syntax checking, plugin configuration, etc. If you have no use for the zip it creates, you can just trash it afterwards.
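For example (a sketch; the --package flag just redirects the generated artifacts to a throwaway directory, and the generated template file name may vary by framework version):
# resolve variables and render the CloudFormation template without deploying
serverless package --stage sit --package ./dry-run
# inspect the resolved template, then clean up
cat ./dry-run/cloudformation-template-update-stack.json
rm -rf ./dry-run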
You can use the plugin serverless-offline to run Lambdas locally (if that's what you are trying to do).
Usage: $ serverless offline

AWS lambda versioning

I have a question about Lambda's function versioning capabilities.
I know how the standard versioning works out of the box in AWS, but I thought there might be a way for the publisher to specify the version number that tags a specific snapshot of the function. More exactly, what I was thinking of was including a config.json in the uploaded zip file where the version would be specified, and AWS would afterwards use this for tagging.
The reason I am asking is that I would like, for example, to keep the version of the Lambda function in sync with the CI job build number that built (zipped) the Lambda.
Any ideas?
Many thanks
A good option would be to store your CI job build number as an environment variable on the Lambda function.
It's not exactly a recommended way to version AWS Lambda functions, but it definitely helps in sticking to a typical 1.x.x versioning strategy and keeping it consistent across the pipeline.
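For instance, a CI step could stamp the build number onto the function (a sketch; the variable name BUILD_VERSION is made up, and note that this call replaces the function's whole environment):
# record the CI build number as a function environment variable
aws lambda update-function-configuration \
    --function-name myFunction \
    --environment "Variables={BUILD_VERSION=1.4.2}"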
Flipping the topic the other way around: could we go with AWS Lambda's own version numbers (1, 2, 3, ...) and find a way to have our CI builds also use a single-number version? I'm not yet comfortable with that approach, and I like the flexibility of 1.x.x as a versioning scheme to indicate major.minor.patch.
Standard Lambda versioning.
This is the most detailed blog I came across on this topic.
https://www.concurrencylabs.com/blog/configure-your-lambda-function-like-a-champ-sail-smoothly/
When you deploy a Lambda function through a CLI command or the API, it's not possible to give it a custom version number; the version is currently an automatically generated value assigned by AWS.
That makes it impossible to map a version number from a configuration file to the Lambda version, which your use case would require.