Is there any way to connect two different AWS Lambda layers?
Usually, we can invoke one Lambda function from another Lambda function. Is that possible with Lambda layers as well?
Lambda layers are used for dependencies only and do not include application code that can be directly invoked. They let you create one set of dependencies and share it across Lambda functions, reducing the chance of dependency versioning issues as well as the overall amount of Lambda code storage used by your account in the region. Per the link below, AWS provides 75 GB of storage for Lambda layers and function code per region.
https://docs.aws.amazon.com/lambda/latest/dg/limits.html
You can attach more than one layer to a Lambda function; they are applied in the order you specify until all layers have been added. This can be done using the web console: there is a "Layers" button in the center of the console. Select it, then select a layer you have created and the version of the layer code.
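If you prefer to script this instead of using the console, a minimal boto3 sketch could look like the following (the layer name, zip path, runtime, and function name are placeholders):

import boto3

lambda_client = boto3.client("lambda")

# Publish a new layer version from a local zip archive.
with open("layer.zip", "rb") as f:
    layer = lambda_client.publish_layer_version(
        LayerName="my-shared-deps",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.9"],
    )

# Attach the layer to an existing function. The Layers list replaces
# whatever layers the function had before, so include all of them.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    Layers=[layer["LayerVersionArn"]],
)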
To learn how to create a Lambda layer for Python, or to see an example of Lambda layers in use, see these step-by-step video instructions: https://geektopia.tech/post.php?blogpost=Create_Lambda_Layer_Python
How is the handling of multiple Lambda functions for a single stack/application currently done?
Considering a use case with more than one function, is it better to keep them all together in the same repository or have one repository for each?
Having a single repository for all the functions would be much easier for me, coming from old/classic backend development with a single codebase for all the business logic. But moving to the AWS ecosystem means I can no longer "deploy" my entire business logic with a single command, since I need to zip each function and update its archive with the AWS CLI, and that is impossible to drive from standard merge requests or pipelines because these steps cannot easily be automated (every time it could be a different function, or multiple ones).
On the other hand, having e.g. 5 or 6 repositories, one for each Lambda, alongside the ones for the frontend and the AWS stack, would be very impractical to manage.
Bundle your different Lambda functions together as a CloudFormation stack. CloudFormation allows you to create multiple AWS services and bridge them together as you wish. There are many tools you can use to achieve this: AWS CloudFormation, AWS SAM (Serverless Application Model), or third-party tools like the Serverless Framework and Terraform. The base concept is known as Infrastructure as Code (IaC).
As for repositories, you can have a single repository per stack. (AWS SAM provides sample code with a good directory structure; you can try sam init as an example.)
Consider the AWS Serverless Application Model for your development. It allows you to script the build, package, and deploy steps with the sam CLI, based on a YAML template. SAM will figure out the diff in your code by itself (because it runs CloudFormation under the hood). It allows you not only to combine several functions into one package, but also to add API Gateways, DynamoDB tables, and much more. Another nice feature is that your functions will appear as an integrated application in the Lambda console, so you can monitor them all at the same time.
I understand that Lambda is serverless and that it creates Execution Environments (MicroVMs) on event invocations.
So, when an event invokes the function, Lambda will spin up an execution environment that has the selected programming language runtime inside it.
So far, it is clear that these Execution Environments (MicroVMs) are created on demand and terminated if found idle for long.
Now to the original question.
My understanding is that Lambda has a Runtime API. So, whenever we create a Lambda resource in AWS, it can be accessed through the Lambda Runtime API, and these API endpoints are invoked by event sources such as SQS, SNS, etc.
My question is: is there any compute that runs all the time just to host these Lambda Runtime APIs? If there is, why is there not much detail about it, and why are we not charged for it?
Please correct my understanding here.
In a very simplified explanation, Lambda should be considered as a service with two components:
Data Plane: EC2 instances where the functions are executed.
Control Plane: Service that contains all the metadata related to each Lambda deployed, including event mapping.
When an event occurs, it is processed by the control plane. The control plane validates security and checks whether a copy of the function is already instantiated and available.
If one is available, it forwards the event to that function and passes instructions for sending back the result. If no function is available, the control plane downloads the function code together with its runtime, instantiates a new function in the data plane, and forwards the event.
At all times, there will be control plane and data plane machines online. The AWS Lambda service will increase or decrease the number of each based on usage.
I have a Step Functions pipeline which links several Lambdas. The Step Function is started using an AWS API Gateway. All the aforementioned items are in the same region.
However, based on the client's IP origin, I would like to use one Lambda of the Step Function in a different region. I did some research, but it seems there is no way of invoking a Lambda in a region other than the Step Function's region.
So basically this would mean that I would have to create different API Gateway entry points, different Step Function pipelines, and different Lambdas for every region I would like to use, right? Are there any consequences for the S3 storage I use?
If Step Functions don't support cross-region Lambdas (and it appears they don't), my idea would be to use a "proxy" Lambda in the same region as the Step Function and, within the proxy Lambda, invoke the cross-region Lambda. It's not ideal: you will have to handle the pass-through, and you will pay twice for the duration of the Lambda (1x proxy, 1x actual Lambda), but it seems a lot easier than having all the elements in all regions.
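A minimal sketch of such a proxy Lambda in Python (the target region and function name are placeholders):

import json
import boto3

# Client pinned to the target region, not the region the proxy runs in.
remote_lambda = boto3.client("lambda", region_name="us-west-2")

def handler(event, context):
    # Forward the incoming event to the cross-region function and
    # hand its response straight back to the Step Function.
    response = remote_lambda.invoke(
        FunctionName="target-function",
        InvocationType="RequestResponse",
        Payload=json.dumps(event),
    )
    return json.loads(response["Payload"].read())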
I know I'm late to this, but I'm proposing my idea anyway.
Maybe use SQS? Put the event onto an SQS queue in the other region and add a Lambda trigger to it.
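Something along these lines from the Step Function's region could do it (the queue URL and region are placeholders):

import json
import boto3

# SQS client pinned to the other region; the queue there has a Lambda trigger.
sqs = boto3.client("sqs", region_name="eu-west-1")

sqs.send_message(
    QueueUrl="https://sqs.eu-west-1.amazonaws.com/123456789012/cross-region-queue",
    MessageBody=json.dumps({"action": "process"}),
)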
To reduce the cost of our instances, we have been looking at options.
AWS Lambda seems to be a good option for us.
We are still at the preliminary stage of searching for available alternatives.
My concern is that if we switch some of our applications to Lambda, we will be confined to AWS environments only, and in the future that might become a constraint in a scenario we can't predict at the moment.
So my question is: is there a way that we can still use Lambda in an environment which is not an AWS environment?
Thanks!
AWS Lambda functions basically run in containers whose lifecycle is managed by Amazon.
When you use Lambda, there are several best practices you can follow to avoid full lock-in. One of the recommended practices is to separate the business logic from the Lambda handler. When you separate it out, the handler works only as a controller that points to the code doing the actual work.
/handler.js
/lib
/create-items
/list-items
For example, if you design a web application API this way with Node.js in Lambda, you can later move the business logic to an Express.js server by moving the handler code into Express.js routes.
As you can see, moving an application from Lambda to another environment will still require additional effort; proper design can only reduce that effort.
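The same thin-handler idea sketched in Python (the module and function names are hypothetical; the layout above uses Node.js, but the pattern is identical):

# handler.py - the Lambda-specific "controller": it only adapts the event
# and delegates to the business logic, which knows nothing about Lambda.
from lib.create_items import create_item

def handler(event, context):
    item = create_item(name=event["name"], price=event["price"])
    return {"statusCode": 201, "body": item}

# lib/create_items.py - plain business logic, reusable outside Lambda
# (for example behind a Flask or Express route later).
def create_item(name, price):
    return {"name": name, "price": price}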
As far as I know,
it's an AWS Lambda function, so it is supposed to be deployed on AWS only, because AWS provides the needed environment.
From the AWS site, there are a couple of options:
https://docs.aws.amazon.com/lambda/latest/dg/deploying-lambda-apps.html
I was wondering if there are any AWS services or projects which allow us to configure a data pipeline using AWS Lambdas in code. I am looking for something like the snippet below. Assume there is a library called pipeline:
from pipeline import connect, s3, lambda, deploy
p = connect(s3('input-bucket/prefix'),
            lambda(myPythonFunc, dependencies=[list_of_dependencies]),
            s3('output-bucket/prefix'))
deploy(p)
There can be many variations of this idea, of course. This use case assumes only a single S3 bucket, for example; there could instead be a list of input S3 buckets.
Can this be done with AWS Data Pipeline? The documentation I have (quickly) read says that Lambda is used to trigger a pipeline.
I think the closest thing available is the State Machine functionality within the newly released AWS Step Functions. With these you can coordinate multiple steps that transform your data. I don't believe they support standard event sources, so you would have to create a standard Lambda function (potentially using the Serverless Application Model) to read from S3 and trigger your State Machine.
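A rough sketch of that glue function in Python (the state machine ARN is a placeholder), reacting to an S3 event and starting one execution per uploaded object:

import json
import boto3

sfn = boto3.client("stepfunctions")

# Placeholder ARN; in practice this would come from configuration
# or an environment variable.
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:my-pipeline"

def handler(event, context):
    # Each record in an S3 notification describes one object.
    for record in event["Records"]:
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({
                "bucket": record["s3"]["bucket"]["name"],
                "key": record["s3"]["object"]["key"],
            }),
        )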