Alternative to AWS lambda when deployment package is greater than 250MB? - amazon-web-services

When I want to launch some code serverless, I use AWS Lambda. However, this time my deployment package is greater than 250MB.
So I can't deploy it on a Lambda...
What are the alternatives in this case?

I'd question your architecture. If you are running into problems with how AWS has designed a service (e.g. Lambda's 250 MB max package size), it's likely you are using the service in a way it wasn't intended.
An anti-pattern I often see is people stuffing all their code into one function, similar to how you'd deploy all your code to a single server. That is not really the use case for AWS Lambda.
Does your function do one thing? If not, refactor it out into different functions doing different things. This may help remove dependencies when you split into multiple functions.
Another thing to look at is whether you can write the function in a different language (another reason to keep functions small). I once had a Lambda function in Python that went over 250 MB. When I solved the same problem with Node.js, the function size dropped to 20 MB.

One thing you can do is, before running your handler code, download the dependencies from an S3 bucket to the /tmp folder and add them to the Python path. That gives you an extra 512 MB, although you need to take the download time into account for some of the Lambda invocations.
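A minimal sketch of that pattern in Python (the S3 download itself is shown only as a comment, and the bucket, key, and module names are placeholders; on Lambda the target directory would live under /tmp):

```python
import os
import sys
import tempfile
import zipfile

def ensure_deps(zip_path, target):
    """Unpack a dependency bundle once per container and put it on sys.path.

    In a real Lambda you would first download the bundle from S3, e.g.:
        boto3.client("s3").download_file("my-bucket", "deps.zip", zip_path)
    (bucket and key names here are placeholders).
    """
    if not os.path.isdir(target):            # skip the work on warm invocations
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(target)
    if target not in sys.path:
        sys.path.insert(0, target)

# Local demonstration: build a fake dependency bundle and import from it.
workdir = tempfile.mkdtemp()
zip_path = os.path.join(workdir, "deps.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("fakelib.py", "def greet():\n    return 'loaded from tmp'\n")

ensure_deps(zip_path, os.path.join(workdir, "deps"))
import fakelib
print(fakelib.greet())  # loaded from tmp
```

Because /tmp survives between warm invocations of the same container, the unzip cost is paid once per container rather than once per request.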

Related

Lambda Function as a Zip with Layer as a Docker Image?

My entire Lambda architecture is built around the .zip package type. I create a separate layer.zip for dependencies that are shared across functions, but it recently crossed the 250 MB limit.
I'm not keen on moving my lambda functions to docker container images.
But I can happily docker containerize my layers.
Is there any way to create and use a Docker container image as a layer and attach it to a .zip Lambda function?
Does this serve as a solution to get past the 250 MB layer limit?
Look forward to any feedback and resources! :)
Short answer: no.
Longer answer: they're two different ways of implementing your Lambda. In the "traditional" style, your code is deployed into an AWS-managed runtime that invokes your functions. In the "container" style, your container must implement that runtime.
While I think that 250 MB is an indication that Lambda is the wrong technology choice, you can work around it by downloading additional content into /tmp when your Lambda first starts.
This is easier for languages such as Python, where you can update sys.path on the fly. It's much less easy (but still doable) for languages like Java, where you'd need a "wrapper" function that then creates a classpath for the downloaded JARs.
It's also easier if the bulk of your Lambda is a static resource, like an ML model, rather than code.
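A rough sketch of that download-on-first-start pattern (the path and the fetch callable are placeholders for a real S3 GetObject; on Lambda the cache would sit under /tmp):

```python
import os
import tempfile

def get_resource(path, fetch):
    """Return the bytes at `path`, downloading them only on a cold start.

    `fetch` stands in for the real download (e.g. an S3 GetObject). Because
    /tmp persists between warm invocations of the same container, the
    download cost is paid once per container, not once per request.
    """
    if not os.path.exists(path):
        with open(path, "wb") as f:
            f.write(fetch())
    with open(path, "rb") as f:
        return f.read()

# Demonstration: two "invocations" trigger only one download.
downloads = []
def fake_fetch():
    downloads.append(1)
    return b"model-weights"

cache = os.path.join(tempfile.mkdtemp(), "model.bin")
assert get_resource(cache, fake_fetch) == b"model-weights"
assert get_resource(cache, fake_fetch) == b"model-weights"
print(len(downloads))  # 1
```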

Can I load code from file in a AWS Lambda?

I am thinking of creating 2 generic AWS Lambda functions, one as an "Invoker" to run the other Lambda function. The invoked Lambda function loads the code of the Lambda from a file based on the parameter that is passed to it.
Invoker: Calls the invoked Lambda with a specified parameter, e.g. ID
Invoked: Based on the ID, load the appropriate text file containing
the actual code to run
Can this be done?
The reason for this thinking is that I don't want to have to deploy 100 Lambda functions if I could just save the code in 100 text files in S3 bucket and load them as required.
The code is uploaded constantly by users and so I cannot include it in the lambda. And the code can be in all languages supported by AWS (.NET, NodeJs, Python, etc.)
For security, is there a way to maybe "containerize" running the code?
Any recommendation and ideas are greatly appreciated.
Thanking you in advance.
The very first thing I'd like to mention is that you should pay close attention to the security aspects of your app: you are going to execute code uploaded by users, which means they will potentially be able to access sensitive data.
My example is based on Node.js, but I think something similar may be achieved with other runtimes. There are two main things you need to know:
The AWS Lambda execution environment provides you with a /tmp folder with a capacity of 512 MB, and you are allowed to put any resources needed for the current invocation there.
NodeJS allows you to require modules dynamically at any place in the app.
So, basically, you may download the desired .js file into the /tmp folder and then require it from your code. I am not going to write the real code now as it could be quite big, but here are the general steps just to make things clear:
Lambda receives fileId as a parameter in event.
Lambda searches S3 for the file named fileId and then downloads it to the /tmp folder as fileId.js
Now in the app you may require that file and consider it as a module:
const dynamicModule = require("/tmp/fileId.js");
Use the loaded module.
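The answer above uses Node's require; the same trick works in Python via importlib, for what it's worth. A sketch under the same assumptions (the file name and the handler function are made up, and a locally written file stands in for the S3 download):

```python
import importlib.util
import os
import tempfile

def load_user_module(path, name="user_code"):
    """Load a Python source file (e.g. one downloaded to /tmp) as a module."""
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Demonstration: write a stand-in for the downloaded fileId.py, then load it.
path = os.path.join(tempfile.mkdtemp(), "fileId.py")
with open(path, "w") as f:
    f.write("def handler(event):\n    return event['x'] * 2\n")

mod = load_user_module(path)
print(mod.handler({"x": 21}))  # 42
```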
You certainly won't be able to run Python code, or .NET code, in a Node Lambda. Can you load files and dynamically run the code? Probably. Should you? Probably not. Even if you trust the source of that code, you don't want it all running in the same function: 1) it would all share the same permissions, which means that, at a minimum, it would all have access to the same S3 bucket where the code is stored; 2) it would all log to the same place. Good luck debugging.
We have several hundred lambda functions deployed in our account. I would never even entertain this idea as an alternative.

What is the proper way to build many Lambda functions and updated them later?

I want to make a bot that makes other bots on the Telegram platform. I want to use AWS infrastructure, and it looks like their Lambda functions are a perfect fit: pay for them only when they are active. In my concept, each bot equals one Lambda function, and they all share the same codebase.
At the starting point, I thought to create each new Lambda function programmatically, but I think this will bring me problems later, like needing to attach many services programmatically via the AWS SDK: API Gateway, DynamoDB. But the main problem is: how will I update the codebase for these 1000+ functions later? I think a bash script is a bad idea here.
So, I moved forward and found SAM (AWS Serverless Application Model) and CloudFormation, which I guess should help me. But I can't understand the concept. I can make a stack with all the required resources, but how will I make new bots from this one stack? Or should I build a template and make a new stack for each new bot programmatically via the AWS SDK from this template?
Next, how do I update them later? For example, I want to update all bots that have version 1.1 to version 1.2. How will I replace them? Should I make a new stack, or can I update the older ones? I don't see any options for that in the CloudFormation UI, or any related methods in the AWS SDK.
Thanks
"But the main problem is: how will I update the codebase for these 1000+ functions later?"
You don't. You use a Lambda alias. This allows you to fully decouple your Lambda versions from your clients. It works because your clients' code (or API Gateway) refers to an alias of your function. The alias is fixed and does not change.
However, an alias is like a pointer: it can point to different versions of your Lambda function. Therefore, when you publish a new Lambda version, you just point the alias at it. It's fully transparent to your clients, and their alias does not require any change.
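To make the pointer analogy concrete, here is a toy model of versions and aliases in plain Python. This only illustrates the idea; it is not the AWS API (in practice you'd use the PublishVersion and UpdateAlias operations):

```python
versions = {}   # version number -> published code snapshot
aliases = {}    # alias name -> version number (the "pointer")

def publish_version(code):
    """Each publish freezes a snapshot under the next version number."""
    v = len(versions) + 1
    versions[v] = code
    return v

def update_alias(name, version):
    """Repoint the alias; callers of the alias never change."""
    aliases[name] = version

def invoke(alias):
    # Clients always call through the alias, never a raw version number.
    return versions[aliases[alias]]()

v1 = publish_version(lambda: "v1 behaviour")
update_alias("prod", v1)
print(invoke("prod"))  # v1 behaviour

v2 = publish_version(lambda: "v2 behaviour")
update_alias("prod", v2)   # clients unchanged, now get v2
print(invoke("prod"))  # v2 behaviour
```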
I agree with @Marcin. Also, it would be worth checking out the Serverless Framework. It seems like you are still experimenting, so most likely you are deploying using bash scripts with AWS SDK/SAM commands. This is fine, but once you start getting the gist of what your architecture looks like, I think you will appreciate what Serverless can offer. You can deploy and tear down CloudFormation stacks in a matter of seconds. You can also use serverless-offline to run a local build of your AWS Lambda architecture on your own machine.
All this has saved me hours of grunt work.

Why might an AWS Lambda function run in docker-lambda, but crashes on AWS?

There is a set of Docker images available called docker-lambda that aims to replicate the AWS Lambda environment as closely as possible. It's third party, but it's somewhat endorsed by Amazon and used in their SAM Local offering. I wrote a Lambda function that invokes an ELF binary, and it works when I run it in a docker-lambda container with the same runtime and kernel as the AWS Lambda environment. When I run the same function on AWS Lambda, the method mysteriously crashes with no output. I've given the function more RAM than the process could possibly use, and the dependencies are clearly there. I know that it's not really possible to debug this without access to the binaries, but are there any common/possible reasons why this might be happening?
Additionally, are there any techniques to debug something like this? When running in Docker, I can add a --cap-add SYS_PTRACE flag to invoke a command using strace and see what might be causing an error. I'm not aware of an equivalent debugging technique on AWS Lambda.
Willing to bet that the AWS Lambda execution environment isn't playing nicely with that third party image. I've had no problems with Amazon's amazon/aws-lambda-python:3.8 image.
Only other thing I can think of is tweaking the timeout of your lambda.

AWS lambda versioning

I have a question about the lambda functions versioning capabilities.
I know how the standard way of versioning works out of the box in AWS, but I thought there was a way for the publisher to specify the version number used to tag a specific snapshot of the function. More exactly, I was thinking of including a config.json in the uploaded zip file where the version would be specified, and AWS would use this afterwards for tagging.
The reason I am asking is because I would like, for example, to keep in sync the version of the lambda function with the CI job build number that built (zipped) the lambda.
Any ideas?
Many thanks
A good option would be to store your CI job build number as an environment variable on the Lambda function.
It's not exactly a recommended way to version AWS Lambda functions, but it definitely helps in sticking to a typical 1.x.x versioning strategy and keeping it consistent across the pipeline.
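A sketch of what that looks like from inside the handler. The variable name BUILD_VERSION and the handler shape are just one possible convention; the value would be injected at deploy time by the CI pipeline, e.g. via `aws lambda update-function-configuration --environment "Variables={BUILD_VERSION=1.42.0}"`:

```python
import os

def handler(event, context):
    # BUILD_VERSION is assumed to be set by the CI pipeline on the function's
    # environment configuration at deploy time.
    return {"build": os.environ.get("BUILD_VERSION", "unknown")}

# Local demonstration:
os.environ["BUILD_VERSION"] = "1.42.0"
print(handler({}, None))  # {'build': '1.42.0'}
```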
Flipping the topic the other way around: can we go with AWS Lambda's own versions (1, 2, 3, ...) and then find a way to have our CI builds also use a single-number version? I'm not yet comfortable with this approach, and I like the flexibility of 1.x.x as a versioning scheme to indicate major.minor.patch.
Standard Lambda versioning.
This is the most detailed blog I came across on this topic.
https://www.concurrencylabs.com/blog/configure-your-lambda-function-like-a-champ-sail-smoothly/
When you deploy a Lambda function through a CLI command or the API, it's not possible to give it a custom version number; it's currently a value generated automatically by AWS.
This makes it impossible to map a version number in a configuration file to the Lambda version, so it doesn't support your use case.