amazon load balancer for lambda - amazon-web-services

I am new to AWS.
Normally, I use a load balancer like the one below, with two servers.
With L4 load balancing, there are two or more servers behind the balancer, but with ALB + Lambda I don't think that's the case.
I am curious about the ALB/Lambda relationship. Is it 1:1, unlike an L4 switch? Or do VPCs stand in for the servers?
I would also like to know the benefit of using an ALB for Lambda.

Lambda is a short-running piece of code (FaaS, Function as a Service). The function executes quickly, often in milliseconds, and then dies out. You need to change the way you think about using Lambda, because it doesn't compare to a VPS (virtual private server) or an EC2 instance. You have to take a different approach, called serverless computing.
Instead, you can have API Gateway sit in front of your Lambda functions and invoke these APIs to execute your code. Each Lambda function should do one single task and nothing more.
As a matter of fact, the longer a Lambda function runs, the more it costs in terms of billing, so keeping your functions short-running is the way to keep your bill in check.
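As an illustration of the one-task-per-function idea, here is a minimal sketch of a Lambda handler behind API Gateway's proxy integration (the handler name and the greeting task are made up for the example):

```python
import json

def handler(event, context):
    # API Gateway (proxy integration) delivers the HTTP request as a dict.
    # This function does exactly one task: return a greeting for ?name=...
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```

API Gateway expects exactly this response shape (statusCode, headers, body) from a proxy-integrated function.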
If you want to use Lambda, try this Serverless Stack tutorial: https://serverless-stack.com/.
Lambda does have occasional outages, and one way to handle this is to use Route 53 as a load balancer (for example, with failover routing between regions).
Another good reference: https://serverless.com/

You can invoke Lambda via API Gateway and also via an ALB. The difference lies in cost: API Gateway is considerably more expensive per request.
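To make the ALB/Lambda relationship concrete, here is a minimal sketch of the handler contract when a Lambda sits behind an ALB target group: the event carries requestContext.elb, and unlike an API Gateway response, the return value should also include statusDescription and isBase64Encoded (the handler name is made up for the example):

```python
import json

def alb_handler(event, context):
    # An ALB target-group event includes event["requestContext"]["elb"];
    # the ALB forwards the HTTP method, path, headers, etc. as a dict.
    path = event.get("path", "/")
    return {
        # ALB responses carry statusDescription and isBase64Encoded
        # in addition to the API Gateway-style fields.
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"path": path}),
    }
```

On the ALB side, the function is registered in a target group with TargetType "lambda"; one target group maps to one function, and the ALB's listener rules decide which requests it receives.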

Related

Is it possible to connect an AWS Lambda function without a VPC connection to AWS EFS?

I want to connect AWS EFS to my AWS Lambda function, without connecting the Lambda function to VPC. Is it possible to do this?
Simply put: no, it's not possible.
EFS file systems are always created within a customer VPC, so Lambda functions using the EFS file system must all reside in the same VPC.
As stated here (https://aws.amazon.com/blogs/compute/using-amazon-efs-for-aws-lambda-in-your-serverless-applications), the Lambda function must be placed in the same VPC where the EFS file system was created.
There might be different reasons you don't want to place your Lambda function in a VPC:
Slow initialization (creating an ENI and attaching the Lambda to it historically added significant latency)
Additional configuration needed to place it in a VPC, etc.
One mitigation is Lambda's provisioned concurrency feature (which comes at extra cost).
That way, you can keep a number of Lambda execution environments initialized and warm, ready to serve at any time.
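As a configuration sketch, enabling provisioned concurrency via boto3 looks like this (the function name and alias are hypothetical, and note that provisioned concurrency must target a published version or alias, not $LATEST):

```python
import boto3  # assumes AWS credentials and a region are configured

lambda_client = boto3.client("lambda")

# Keep 5 execution environments initialized and warm for the "live"
# alias of a hypothetical function. You are billed for the provisioned
# capacity whether or not it is invoked.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="my-efs-function",      # hypothetical name
    Qualifier="live",                    # hypothetical alias
    ProvisionedConcurrentExecutions=5,
)
```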
Cheers

AWS Lambda and ECS Tasks - what is the best way to orchestrate?

I'm building my application with an "as much serverless as possible" premise.
Long story short, two of my services cannot be implemented as Lambda functions due to GPU requirements, so I'm betting on ECS tasks with EC2 Auto Scaling groups for those.
After doing my homework on the Lambda + VPC topic, I was shocked that there's no easy and pleasant way to expose VPC services to Lambda. The official approach is to place the Lambda function inside the VPC and establish a NAT gateway/instance or VPC endpoints so it can reach the internet and AWS services. Moreover, I read that this is not recommended and should be treated as a last resort, since it slows the Lambda down and increases cold starts.
Generally, I need internet access and access to other AWS services from the Lambda, which must make requests to the ECS tasks. Those tasks are crucial contributors to my flow, and I'd like them to be easily callable from Lambda functions. I'm not sure VPC Lambdas make sense if I need to pay for a NAT gateway, which is comparatively expensive. Maybe I missed something.
Is it possible to avoid placing the Lambdas in the VPC and still be able to call the ECS services? If not, what is the best way to cut the NAT-related costs?
I'd appreciate any form of help.

EC2 to Lambda forwarding based on current usage

I am having long cold start times on our Lambda functions. We have tried "pinging" the Lambdas to keep them warm but that can get costly and seems like a poor way to keep performance up. We also have an EC2 instance running 24/7. I could theoretically "mirror" all of our Lambda functions to our EC2 instance to respond with the same data for our API call. Our Lambdas are on https://api.mysite.com and our EC2 is https://dev.mysite.com.
My question is: could we "load balance" traffic between the two? (Creating a new subdomain to do the following.) Have our dev subdomain (EC2) respond to all requests up until a certain requests-per-minute threshold is hit, then start routing traffic to our api subdomain (Lambda), since we'd have enough traffic coming in to keep the Lambdas hot. Once traffic slows down, move the traffic back to the EC2. Is this possible?
No, you cannot do that with an AWS load balancer. What you can do is set up a CloudWatch trigger that invokes a Lambda function to point the api.mysite.com DNS entry at the Lambda endpoint; similarly, you can add a trigger for low traffic that reverts the DNS entry.
If you know in advance when traffic will rise, you can use scheduled instances. Otherwise, you can also try AWS Fargate.
Hope this helps you.
CloudWatch allows you to choose the number of triggers you want to set for a Lambda, i.e., whenever the API is called, CloudWatch will trigger that many Lambdas. This is one way you could achieve it. There's no way yet to load balance between Lambdas and instances.
You can configure your API to use Step Functions, which will orchestrate your Lambdas.
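To make the Step Functions suggestion concrete, here is a minimal Amazon States Language definition that invokes a single Lambda and stops (the function ARN and state name are placeholders for illustration):

```python
import json

# Minimal ASL state machine: one Task state that invokes a Lambda.
# The ARN below is a placeholder; substitute your own function's ARN.
definition = {
    "StartAt": "ProcessRequest",
    "States": {
        "ProcessRequest": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-request",
            "End": True,
        }
    },
}

# Step Functions accepts the definition as a JSON string, e.g. via
# the CreateStateMachine API or the console.
state_machine_json = json.dumps(definition)
```

More elaborate orchestration (retries, parallel branches, choice states) is added by extending the States map.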

Connecting Cassandra from AWS Lambda

We are checking the feasibility of migrating one of our applications to Amazon Web Services (AWS). We have decided to use AWS API Gateway to expose the services and AWS Lambda (Java) for back-end data processing. The Lambda function has to fetch a large amount of data from our database.
We currently use Cassandra for data storage, set up on an EC2 instance that has no public IP.
Can anyone suggest a way to access Cassandra (EC2) from AWS Lambda using the private IP (10.0.x.x)?
Is AWS Lambda the right choice for large-scale applications?
Since your Cassandra instance uses a private IP, you will need to configure your Lambda's network settings to use a VPC. It could be the VPC you run Cassandra in, or a VPC you create for your Lambdas and VPC-peer to the Cassandra VPC. A few things to note from the documentation:
When your Lambda runs in a VPC, it doesn't have internet access by default; you will need to configure a NAT for that.
There is additional latency due to the configuration of the ENI (you only pay that penalty on cold start).
You need to make sure your Lambda has the right permissions to manage the ENI; you should use the AWSLambdaVPCAccessExecutionRole managed policy.
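As a sketch, attaching an existing function to a VPC via boto3 looks roughly like this (the function name, subnet ID, and security-group ID are placeholders):

```python
import boto3  # assumes AWS credentials and a region are configured

lambda_client = boto3.client("lambda")

# Placeholders: substitute your own function name, subnets, and
# security groups. The function's execution role needs the
# AWSLambdaVPCAccessExecutionRole managed policy so Lambda can
# create and manage the ENI in these subnets.
lambda_client.update_function_configuration(
    FunctionName="cassandra-reader",
    VpcConfig={
        "SubnetIds": ["subnet-0abc1234"],
        "SecurityGroupIds": ["sg-0abc1234"],
    },
)
```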
Your plan to use API Gateway / Lambda has at least 3 potential issues which you need to consider carefully:
Cost: API Gateway's per-request cost is higher than Lambda's per-request cost. Make sure you are familiar with the pricing.
Cold start: when AWS starts an underlying container to execute your Lambda, you pay a cold-start latency (which gets worse when using a VPC, due to the management of the ENI). If you execute your Lambda concurrently, there will be multiple underlying containers, and each of them incurs this cold start the first time. AWS tends to keep the underlying containers ready for a warm start for a few minutes (users report 5 to 40 minutes). You might try to keep your container warm by pinging your Lambda; obviously, if you have multiple containers in parallel, it gets tricky.
Cassandra session: you will probably want to avoid creating and destroying your Cassandra session each time you invoke your Lambda (it's costly). I haven't tried it yet, but there are reports of keeping the session alive in a warm container; you might want to check this SO answer.
Having said all that, currently the biggest limitations of AWS Lambda are concurrent execution and cold-start latency. For data processing, that's usually fine. For user-facing usage, the percentage of slow cold starts might affect your user experience.
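The session-reuse idea above can be sketched as follows. FakeCluster is a stand-in used purely for illustration; with the real driver you would build the cluster/session object (e.g. cassandra.cluster.Cluster in Python, or the Java driver's equivalent) the same way, at initialization scope rather than inside the handler:

```python
# Warm-container reuse: anything created at module scope survives
# between invocations of the same container, so the (expensive)
# session is built once per cold start, not once per request.

class FakeCluster:
    # Stand-in for a real Cassandra cluster object; counts how many
    # times an actual connection would have been established.
    connect_count = 0

    def connect(self):
        FakeCluster.connect_count += 1
        return self

# Module-level code runs once per cold start.
session = FakeCluster().connect()

def handler(event, context):
    # Every warm invocation reuses the module-level session.
    return {"connects": FakeCluster.connect_count}
```

Running the handler repeatedly in the same container never re-connects; only a new (cold) container would.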

AWS Lambda Provisioning Business Logic

In AWS Lambda, there is no provisioning to be done by us, but I was wondering how AWS Lambda provisions machines to serve requests. Does it create an EC2 server for each request, execute the request, and then kill the server? Or does it keep some EC2 servers always on to serve requests by executing the Lambda function? If it's the former, I would expect it to hurt Lambda's performance. Can anyone guide me on this?
Lambdas run inside a Docker-like container, on EC2 servers (using Firecracker) that are highly, highly optimized. AWS has thousands of servers running full time to serve all of the Lambda functions that are running.
A cold-start Lambda (one with no warm container available) starts up in a few seconds, depending on how big it is. An EC2 server takes 30+ seconds to start up. If Lambda had to start an EC2 server, you'd never be able to use it through API Gateway (because API Gateway has a 30-second timeout). But obviously you can.
If you want your Lambdas to start up super fast (~100 ms), use Provisioned Concurrency.
AWS Lambda is known to reuse resources. It will not create an EC2 server for each request, so that is not a performance concern.
But you should note that the disk space (/tmp) provided to your function is sometimes not cleaned up properly between invocations, as some users have reported.
You can read more on the execution life cycle of Lambda here: https://docs.aws.amazon.com/lambda/latest/dg/running-lambda-code.html