I am new to the Serverless Framework; I used to create and write Lambda code through the AWS console. But I am a little bit confused about the Serverless Framework structure. Do I need to create a serverless.yml for each Lambda function, or can I use a single YML file for all of my AWS Lambda functions? I don't know which is the best way to start, because each API Gateway endpoint will point to a different Lambda function. Please suggest the best way to start.
My two cents here, based on some experience over the last 6 months. I started with a single Git repo containing all the Lambda functions, each in its own folder. For the first 2 months, I had a single YML file to define all the functions.
Then, after some issues, I moved to a separate YML file inside each Lambda function's folder.
You can use plugins specific to each function. In my use case, I had a NodeJS and GraphQL function for which I used Webpack to bundle and compress the package.
Webpack didn't work for my other function, which uses a pre-built binary meant to run specifically on AWS.
Not all functions need to be deployed every time, so managing them in separate folders gave a lot of flexibility in CI/CD and deployment version management. Otherwise, every release would cover all functions. A sketch of that layout follows.
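As an illustration, a per-function config might look roughly like this (the service name, folder layout, and plugin choice are hypothetical):

    # functions/users-api/serverless.yml -- one service per function folder
    service: users-api

    provider:
      name: aws
      runtime: nodejs18.x

    plugins:
      - serverless-webpack   # only this function needs Webpack bundling

    functions:
      usersApi:
        handler: handler.main
        events:
          - http:
              path: users
              method: get

Each folder can then be deployed (or skipped) independently in CI/CD with its own sls deploy.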
I'm using CodePipeline to deploy my CloudFormation templates that contain Lambda functions as AWS::Serverless::Function resources.
The CodePipeline is triggered by a commit in my main branch on GitHub.
The Source stage in the CodePipeline retrieves the source files from GitHub. There are several Lambda functions in this repository, and zero or more of them could change in any given commit.
During a test stage I intend to run taskcat against the CloudFormation templates and unit tests against the Lambda Python code, and then deploy the CloudFormation templates and Lambda functions to production. The problem is, I can't figure out how to differentiate between changed and unchanged Lambda functions, or how to automate their deployment.
I would like to test and deploy only new or changed Lambda functions along with my CloudFormation templates - what is the best practice for this (ideally without Terraform or hacks)?
Regarding testing: best practice is actually to simply test all Lambda code in the repo on push, before deploying. You might skip some work, for example with GitHub Actions, by testing only the files that have changed, but it definitely takes some scripting and it hardly ever adds much value. Each testing tool has its own way of dealing with that (sometimes you can simply pass the files you want to test as an argument and then it's easy, but some test tools are more of an all-or-nothing affair and it gets quite complicated real fast).
Also, personally I'm not a big fan of taskcat, since it doesn't really add a lot of value and it's not a very intuitive tool (it's also relatively outdated, IMO). Is there a reason you need to do these types of testing?
Regarding deployment: There are a few considerations when trying to only update lambdas that have changed.
Firstly, CloudFormation already does this automatically: as long as the CloudFormation resource for the Lambda doesn't change, the Lambda will not be updated.
However, SAM has a small problem there: it re-packages the Lambda code on every pipeline run and updates the CodeUri property of the Lambda, and thus the Lambda gets updated (even though the code might stay the same).
To work around this, you have several options:
Simply accept that SAM updates your function even though the code might not have changed.
Build with sam build locally and use the --cached and --cache-dir options in your pipeline. Make sure to push the folder that you set as the cache dir (see the sketch after this list).
Use a different packaging tool than SAM: either a custom script or some other tool that only pushes your code to S3 when the files have changed.
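A rough sketch of the cached-build option (the stack name and cache path are placeholders):

    # Re-use cached build artifacts so unchanged functions are not rebuilt
    # and re-packaged. The cache dir must survive between pipeline runs
    # (committed to the repo or restored from the pipeline's cache).
    sam build --cached --cache-dir .aws-sam-cache
    sam deploy --stack-name my-stack --capabilities CAPABILITY_IAM --no-confirm-changeset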
If you're into programming, I'd suggest you take a look at CDK. It's a major upgrade from CloudFormation/SAM, and it handles code bundling better (it only updates a function when its files have changed). The testing options are also much wider with CDK.
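For instance, a minimal CDK stack in Python (names and paths are illustrative): Code.from_asset hashes the asset directory, so the function resource only changes when the directory's contents change.

    from aws_cdk import App, Stack
    from aws_cdk import aws_lambda as _lambda
    from constructs import Construct

    class MyServiceStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            # The content hash of ./src becomes part of the synthesized
            # template; unchanged source means an unchanged resource.
            _lambda.Function(
                self, "MyFunction",
                runtime=_lambda.Runtime.PYTHON_3_11,
                handler="app.handler",
                code=_lambda.Code.from_asset("src"),
            )

    app = App()
    MyServiceStack(app, "MyServiceStack")
    app.synth()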
I want to make a bot that makes other bots on the Telegram platform. I want to use AWS infrastructure; it looks like their Lambda functions are a perfect fit, since you pay for them only when they are active. In my concept, each bot equals one Lambda function, and they all share the same codebase.
At the starting point, I thought of creating each new Lambda function programmatically, but I think this will bring me problems later, like needing to attach many services programmatically via the AWS SDK: API Gateway, DynamoDB. But the main problem: how will I update the codebase for these 1000+ functions later? I think a bash script is a bad idea here.
So I moved forward and found SAM (the AWS Serverless Application Model) and CloudFormation, which should help me, I guess. But I can't understand the concept. I can make a stack with all the required resources, but how will I make new bots from this one stack? Or should I build a template and create a new stack for each new bot programmatically via the AWS SDK from this template?
Next, how do I update them later? For example, I want to update all bots that have version 1.1 to version 1.2. How will I replace them? Should I make a new stack, or can I update the older ones? I don't see any options for that in the CloudFormation UI, or any related methods in the AWS SDK.
Thanks
But the main problem: how will I update the codebase for these 1000+ functions later?
You don't. You use a Lambda alias. This allows you to fully decouple your Lambda versions from your clients. It works because your clients' code (or API Gateway) references an alias of your function. The alias is fixed and does not change.
However, an alias is like a pointer: it can point to different versions of your Lambda function. Therefore, when you publish a new Lambda version, you just point the alias to it. It's fully transparent to your clients, and the alias they use does not require any change.
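A minimal sketch of that flow with boto3 (the function and alias names are hypothetical):

    import boto3

    lambda_client = boto3.client("lambda")

    # Publish the current $LATEST code as an immutable, numbered version.
    version = lambda_client.publish_version(FunctionName="my-bot")["Version"]

    # Repoint the fixed alias that clients / API Gateway reference.
    lambda_client.update_alias(
        FunctionName="my-bot",
        Name="prod",
        FunctionVersion=version,
    )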
I agree with @Marcin. Also, it would be worth checking out the Serverless Framework. It seems like you are still experimenting, so most likely you are deploying using bash scripts with AWS SDK/SAM commands. This is fine, but once you start getting the gist of what your architecture looks like, I think you will appreciate what Serverless can offer. You can deploy/tear down CloudFormation stacks in a matter of seconds. You can also use serverless-offline to run a local build of your AWS Lambda architecture on your own machine.
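For example, serverless-offline is just a plugin entry in serverless.yml (a minimal, hypothetical config):

    service: my-bots

    provider:
      name: aws
      runtime: nodejs18.x

    plugins:
      - serverless-offline

    # start the local emulator with: sls offline start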
All this has saved me hours of grunt work.
When I want to run some code serverless, I use AWS Lambda. However, this time my deployment package is greater than 250MB.
So I can't deploy it on a Lambda...
I want to know what the alternatives are in this case.
I'd question your architecture. If you are running into problems with how AWS has designed a service (i.e. Lambda's 250MB max package size), it's likely you are using the service in a way it wasn't intended.
An anti-pattern I often see is people stuffing all their code into one function, similar to how you'd deploy all your code to a single server. This is not really the use case for AWS Lambda.
Does your function do one thing? If not, refactor it into different functions doing different things. Splitting into multiple functions may also let you drop dependencies.
Another thing you can look at is whether you can write the function in a different language (another reason to keep functions small). I once had a Lambda function in Python that went over 250MB. When I solved the same problem in Node.js, the function size dropped to 20MB.
One thing you can do is, before running your Lambda function's main logic, download the dependencies from an S3 bucket to the /tmp folder and then add that folder to the Python path. This gives you an extra 512MB, although you need to take the download time into consideration for some of the Lambda invocations (cold starts).
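A minimal sketch of that trick, assuming a hypothetical my-deps-bucket holding a deps.zip with the extra packages:

    import os
    import sys
    import zipfile

    import boto3

    DEPS_DIR = "/tmp/deps"

    def _load_dependencies():
        # Download only on cold start; warm invocations reuse /tmp.
        if not os.path.isdir(DEPS_DIR):
            s3 = boto3.client("s3")
            s3.download_file("my-deps-bucket", "deps.zip", "/tmp/deps.zip")
            with zipfile.ZipFile("/tmp/deps.zip") as zf:
                zf.extractall(DEPS_DIR)
        if DEPS_DIR not in sys.path:
            sys.path.insert(0, DEPS_DIR)

    _load_dependencies()

    def handler(event, context):
        import numpy  # hypothetical heavy dependency, now importable from /tmp/deps
        return {"statusCode": 200}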
I am working on Serverless. I have created a project using serverless create and then added multiple Lambda functions to it. Whenever I deploy, every function's package contains the entire project tree, which is not what I want. I set individually: true under package, but still no use. Please help me solve this.
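My setup looks roughly like this (simplified; the real function names are omitted):

    # serverless.yml (simplified)
    service: my-service

    package:
      individually: true

    functions:
      hello:
        handler: src/hello/handler.main
      world:
        handler: src/world/handler.main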
Thank you...
We have accomplished a similar need as follows:
"AWS Lambda function code in Java 8, with Maven as the dependency and build tool."
All the functions are defined in a single deployment package, and all dependencies are controlled at the project level through a single Maven build.
This way, the build generates one deployment bundle, and that same bundle is used for all the Lambda functions, with the Lambda handler configured differently for each function.
This has greatly simplified the deployment process.
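For illustration, in a serverless.yml this could look roughly like the following (the service name, artifact path, and handler classes are hypothetical):

    service: my-java-service

    provider:
      name: aws
      runtime: java8

    package:
      artifact: target/my-service-1.0.jar   # the single Maven-built bundle

    functions:
      createOrder:
        handler: com.example.orders.CreateOrderHandler
      getOrder:
        handler: com.example.orders.GetOrderHandler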
I have a Serverless Framework service with (say) five AWS Lambda functions written in Python. Using GitHub, I have created a CodePipeline for CI/CD.
When I push code changes, it deploys all of the functions, even if only one function has changed.
I want to avoid deploying all the functions; the CI/CD should determine which function changed and deploy only that one. The rest of the functions should not be deployed again.
Moreover, is there any way to deal with such problems using AWS SAM? At this stage I still have the option of quitting the Serverless Framework and switching to SAM.
Unfortunately there is no "native" way to do it. You would need to write a bash script that loops through the changed files and calls sls deploy function -f <name> -s production for each of them.
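A rough sketch of such a script, assuming each function lives under functions/<name>/ and the Serverless function name matches its folder name:

    #!/usr/bin/env bash
    set -euo pipefail

    # Folders under functions/ touched by the last commit.
    changed=$(git diff --name-only HEAD~1 HEAD -- functions/ | cut -d/ -f2 | sort -u)

    for fn in $changed; do
      echo "Deploying changed function: $fn"
      # Updates only this function's code, skipping the full stack update.
      sls deploy function -f "$fn" -s production
    done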
I also faced this issue, and eventually it drove me to create an alternative.
Rocketsam takes advantage of sam local to allow deploying only changed functions instead of the entire microservice.
It also supports other cool features such as:
Fetching live logs for each function
Sharing code between functions
Template per function instead of one big template file
Hope it solves your issue :)