Inter-lambda communication on AWS without polling

Given two distinct Lambda functions, (A) and (B).
A client calls Lambda (A); the call blocks until another client calls Lambda (B), and then returns. [Assume this happens within 1 minute.]
Evidently, Lambda (B) could write a flag to a database, and Lambda (A) could poll on this flag until it's set, and then return. But this approach seems inelegant. Can someone suggest a better approach?

OK, Lambda is serverless, on-demand compute. If you trigger a Lambda and then poll for something else, that is not what Lambda is built for. We have SNS and SQS, services which help us avoid polling and also help with decoupling and scaling.
Maybe try something like this (a rough sketch follows the list):
1. Let the event (which is triggering Lambda A) post to SQS.
2. Let Lambda B be triggered.
3. Let Lambda B trigger Lambda A, and Lambda A can get the info/event details from SQS.
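A minimal sketch of that flow, assuming Lambda B invokes Lambda A directly with boto3 and the event details sit on a hypothetical queue named lambda-a-events (function and queue names are placeholders):
import json
import boto3

lambda_client = boto3.client("lambda")
sqs = boto3.client("sqs")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/lambda-a-events"  # placeholder

def lambda_b_handler(event, context):
    # Step 3: Lambda B triggers Lambda A once it has been called.
    lambda_client.invoke(
        FunctionName="lambda-a",          # placeholder function name
        InvocationType="Event",           # fire-and-forget; use "RequestResponse" to wait
        Payload=json.dumps({"source": "lambda-b"}),
    )

def lambda_a_handler(event, context):
    # Lambda A pulls the original event details that were posted to SQS in step 1.
    messages = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=5)
    for msg in messages.get("Messages", []):
        details = json.loads(msg["Body"])
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
        return details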

Related

Patterns for HTTP endpoint Lambda calling other Lambdas and returning value to user

I have a question about Lambda anti-patterns, and how to address my specific situation.
My current setup is this:
user/webpage -> ApiGateway -> Lambda1 -> synchronously calls Lambda2 (my other microservice) -> back to Lambda1 -> back to user
Currently my Lambda2 is behind an API Gateway as well, but I toyed with the idea of invoking it directly. Either way it's basically another microservice that I control.
I understand that, generally, Lambdas calling other Lambdas are considered an anti-pattern. All the blogs/threads/etc. online mention using Step Functions instead, or SQS, or something else.
In my situation, I don't see how I could use Step Functions, since I have to return something to the webpage/user. If I used a Step Function, it seems like I would then have to poll for the results, or maybe use websockets; basically, from my webpage I would not be able to just call my endpoint and get a result, I'd need some way to get my result asynchronously.
Similarly with a queue, or any other solution I saw online: it's basically all asynchronous.
Is there any other pattern or way of doing this?
Thanks.
When invoking a Lambda from another Lambda, everything will work fine except when the second Lambda times out or is throttled. If your business logic is built in such a way that failures are handled gracefully and has idempotent behaviour built in, then a Lambda calling another Lambda (via API Gateway or direct invocation) should work fine. Having said that, AWS has come out with Synchronous Express Workflows for AWS Step Functions. The linked blog has detailed examples of using it. The only caveat is that your entire operation must finish within 5 minutes, the maximum duration an Express Workflow can run. So if your application flow completes within that time limit, this is the recommended way of orchestrating services.
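For illustration, a hedged sketch of what the fronting Lambda's call to a Synchronous Express Workflow could look like with boto3 (the state machine ARN is a placeholder):
import json
import boto3

sfn = boto3.client("stepfunctions")

def handler(event, context):
    # Synchronously run the Express state machine and wait for its result
    # (the whole workflow must finish within the 5-minute Express limit).
    result = sfn.start_sync_execution(
        stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:my-express-flow",  # placeholder
        input=json.dumps(event),
    )
    if result["status"] != "SUCCEEDED":
        raise RuntimeError("Workflow failed: {}".format(result.get("error")))
    return json.loads(result["output"])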

Is there any way to connect multiple requests/triggers from SQS to a single-threaded Lambda function?

My app uses a Lambda function (1) to import data into a third-party database server. Sometimes (1) throws errors, and I use SQS to store the messages thrown from (1). I then use a Lambda function (2) to read all messages in SQS and re-import them by re-invoking (1). (2) is triggered whenever SQS receives a message.
Full error flow: Lambda (1) => SQS => Lambda (2) => Lambda (1).
The problem is, if the DB server is down for maintenance, this becomes an infinite loop until the DB server is active again.
My solution is to create a Lambda function (3) that acts as a flag: it checks the DB server status. It runs when SQS receives a new message and keeps running repeatedly until the DB server is active again; only then is Lambda (2) called.
And I want this Lambda (3) to be single-threaded (a singleton?), so all requests from SQS are handled in one thread.
=> With this solution, the system only needs to retry one thread if the DB server is down.
New flow: Lambda (1) => SQS => Single-threaded Lambda (3) => Lambda (2) => Lambda (1)
My questions are:
Is my solution possible?
If it's possible, how do I set up Lambda (3)?
If it's impossible, is there another way to resolve my problem?
Please help, thank you!
It is possible by using throttling and CloudWatch scheduled event triggers.
You can set up a CloudWatch scheduled event to periodically run Lambda function 3 (the one responsible for checking DB status). I am not sure what you mean by single-threaded, but I guess you mean that at most one instance of that function will run at a time. This is easy, because the CloudWatch scheduled event will run that function just once per x amount of time, which you can specify.
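As an illustration only, the scheduled trigger could be wired up with boto3 roughly like this (rule name, schedule, and function ARN are placeholders):
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

FUNCTION_3_ARN = "arn:aws:lambda:us-east-1:123456789012:function:db-health-check"  # placeholder

# Run the health-check function (3) once every 5 minutes.
rule = events.put_rule(
    Name="db-health-check-schedule",
    ScheduleExpression="rate(5 minutes)",
    State="ENABLED",
)
events.put_targets(
    Rule="db-health-check-schedule",
    Targets=[{"Id": "db-health-check", "Arn": FUNCTION_3_ARN}],
)
# Allow CloudWatch Events to invoke the function.
lambda_client.add_permission(
    FunctionName=FUNCTION_3_ARN,
    StatementId="allow-cloudwatch-schedule",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)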
Once the above function (3) detects that the DB is unhealthy, it can set a concurrency limit on your Lambda function that reads messages from SQS (2) and throttle it down to 0, so that Lambda function (2) cannot be executed at all.
When function (3) detects that the DB is healthy again, it removes this concurrency limit from function (2).
So the code of Lambda function (3) could look something like this:
import boto3

lambda_client = boto3.client("lambda")
function_2 = "sqs-consumer-function"  # name of Lambda function (2), a placeholder

if db_is_not_healthy:
    # Throttle function (2) down to zero so it cannot pick up SQS messages at all.
    lambda_client.put_function_concurrency(
        FunctionName=function_2,
        ReservedConcurrentExecutions=0,
    )
else:
    # The DB is healthy again: remove the limit so function (2) resumes normally.
    lambda_client.delete_function_concurrency(
        FunctionName=function_2,
    )
How exactly you set up your Lambda health checks, when to start them, when to stop them, and how often to ping the DB depends on your particular use case and how much you are willing to pay for it.
For example, you could start pinging the DB only after there have been errors. Once Lambda function (1) receives an error response, it can enable the health checks (Lambda (3)) by unthrottling it, and once Lambda (3) decides the DB is healthy again, it can throttle itself, so that these health checks are performed only when there are problems with the DB.
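A rough sketch of that on/off switch, assuming the health-check function (3) is deployed under the placeholder name db-health-check:
import boto3

lambda_client = boto3.client("lambda")

# In Lambda function (1): on a DB error, un-throttle the health-check function (3)
# so the periodic checks start running.
def on_db_error():
    lambda_client.delete_function_concurrency(FunctionName="db-health-check")

# In Lambda function (3): once the DB is healthy again, throttle itself back to zero
# so the health check stops consuming invocations until the next incident.
def stop_health_checks():
    lambda_client.put_function_concurrency(
        FunctionName="db-health-check",
        ReservedConcurrentExecutions=0,
    )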
This is definitely not the most elegant solution but it should work after some tweaking.

Invoke an AWS Lambda function when multiple Lambda functions are done

What is the best way to invoke an AWS Lambda function when multiple Lambda functions have successfully finished?
So for example, LambdaA should run when LambdaB1, LambdaB2, ... LambdaBn have all returned successfully. Also, the last LambdaB function started is not guaranteed to finish last...
To answer your specific question, you need to use JavaScript Promises (I'm assuming you are using NodeJS) in your Lambda function. When all of the promises are fulfilled, you can proceed.
However, I do not recommend doing it that way, as your initial Lambda function is sitting idle, and being billed, waiting for the responses from the other functions.
IMO, the best way of achieving this parallel execution is using AWS Step Functions. Here you map out the order of events, and you will want to use Parallel States to make sure all tasks are complete before proceeding.
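As a rough sketch (the Lambda ARNs are placeholders), an Amazon States Language definition with a Parallel state that runs the LambdaB tasks as branches and only then invokes LambdaA could be assembled like this:
import json

# Hedged sketch of a Step Functions definition: the Parallel state waits for every
# branch (LambdaB1..LambdaBn) to succeed before moving on to LambdaA.
definition = {
    "StartAt": "RunAllLambdaB",
    "States": {
        "RunAllLambdaB": {
            "Type": "Parallel",
            "Branches": [
                {"StartAt": "LambdaB1",
                 "States": {"LambdaB1": {"Type": "Task",
                     "Resource": "arn:aws:lambda:us-east-1:123456789012:function:LambdaB1",
                     "End": True}}},
                {"StartAt": "LambdaB2",
                 "States": {"LambdaB2": {"Type": "Task",
                     "Resource": "arn:aws:lambda:us-east-1:123456789012:function:LambdaB2",
                     "End": True}}},
            ],
            "Next": "LambdaA",
        },
        "LambdaA": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:LambdaA",
            "End": True,
        },
    },
}
print(json.dumps(definition, indent=2))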

Make Lambda function execute now, and/or in an hour

I'm trying to implement an AWS Lambda function that should send an HTTP request. If that request fails (the response is anything but status 200), I should wait another hour before retrying (longer than the Lambda stays warm). What's the best way to implement this?
What comes to mind is to persist my HTTP request in some way and be able to trigger the Lambda function again after a specified amount of time whenever there is a persisted request. But I'm not completely sure which AWS service provides that functionality. Is SQS an option that can help here?
Or, can I dynamically schedule Lambda execution for this? Note that the request to be retried should be identical to the first one.
Any other suggestions? What's the best practice for this?
(Lambda function is my option. No EC2 or such things are possible)
You can't directly trigger Lambda functions from SQS (at the time of writing, anyhow).
You could potentially handle the non-200 errors by writing the request data (with appropriate timestamp) to a DynamoDB table that's configured for TTL. You can use DynamoDB Streams to detect when DynamoDB deletes a record and that can trigger a Lambda function from the stream.
This is obviously a roundabout way to achieve what you want but it should be simple to test.
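A minimal sketch of the write side, assuming a hypothetical table named retry-requests whose TTL attribute is expires_at; the stream-triggered Lambda would then look for REMOVE records and replay the stored request:
import json
import time
import boto3

dynamodb = boto3.client("dynamodb")

def schedule_retry(request_payload):
    # Store the failed request with a TTL roughly one hour in the future.
    # DynamoDB deletes the item some time after expiry, and the deletion shows
    # up on the table's stream, which can trigger the retry Lambda.
    dynamodb.put_item(
        TableName="retry-requests",                              # hypothetical table
        Item={
            "request_id": {"S": str(time.time_ns())},
            "payload": {"S": json.dumps(request_payload)},
            "expires_at": {"N": str(int(time.time()) + 3600)},   # TTL attribute
        },
    )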
As jarmod mentioned, you cannot trigger Lambda functions directly by SQS. But a workaround (one I've used personally) would be to do the following:
If the request fails, push an item to an SQS Delay Queue (docs)
This SQS message will only become visible on the queue after the configured delay (note that SQS caps the per-message delay at 15 minutes, so for a full hour the scheduled function below would also check a timestamp stored in the message and skip items that are not due yet).
Then have a second, scheduled Lambda function which is triggered on a cron schedule with a smaller timeframe (I used a minute).
This second function would then scan the SQS queue and, if an item is on the queue, call your first Lambda function (either via SNS or with the AWS SDK) to retry it.
PS: Note that you can put data in an SQS message; since you mentioned the retried request needs to be identical, you can store your first function's input here to be reused after an hour.
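A hedged sketch of both halves (queue URL and function name are placeholders); the delay here uses the SQS maximum of 15 minutes per message:
import json
import boto3

sqs = boto3.client("sqs")
lambda_client = boto3.client("lambda")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/retry-queue"  # placeholder

def on_request_failed(original_request):
    # Park the failed request; DelaySeconds is capped at 900 (15 minutes) by SQS.
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(original_request),
        DelaySeconds=900,
    )

def scheduled_poller(event, context):
    # Runs on a CloudWatch schedule; re-invokes the original function with the
    # exact same input that was stored on the queue.
    response = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10)
    for msg in response.get("Messages", []):
        lambda_client.invoke(
            FunctionName="http-request-function",   # placeholder
            InvocationType="Event",
            Payload=msg["Body"],
        )
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])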
I suggest that you take a closer look at AWS Step Functions for this. Basically, Step Functions is a state machine that lets you execute a Lambda function, i.e. a task, in each step.
More information can be found if you log in to your AWS Console and choose "Step Functions" from the "Services" menu. By pressing the Get Started button, several example implementations of different Step Functions are presented. First, I would take a closer look at the "Choice state" example (to determine whether or not the HTTP request was successful). If it was not, then proceed with the "Wait state" example.
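As an illustrative sketch (not the exact console example; the Lambda ARN is a placeholder, and the Lambda is assumed to return a "status" field), a definition combining a Choice state on the HTTP status with a one-hour Wait state might look like this:
# Hedged sketch: retry the HTTP-calling Lambda after an hour whenever it reports
# anything other than status 200.
definition = {
    "StartAt": "SendRequest",
    "States": {
        "SendRequest": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:send-http-request",
            "Next": "CheckStatus",
        },
        "CheckStatus": {
            "Type": "Choice",
            "Choices": [{"Variable": "$.status", "NumericEquals": 200, "Next": "Done"}],
            "Default": "WaitAnHour",
        },
        "WaitAnHour": {"Type": "Wait", "Seconds": 3600, "Next": "SendRequest"},
        "Done": {"Type": "Succeed"},
    },
}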

Reliably pushing events from API Gateway to Lambda using a queue

I currently have a 3rd party application pushing messages to a Lambda function through API gateway. The Lambda function needs to serialize, log, and push the message to another ESB that I have very little control over.
I'm trying to ensure that there is some kind of recovery mechanism in the case that the Lambda function is either at max load or cannot communicate with the ESB. I've read about Kinesis being a good option for exactly this, but the ESB does not support batching for my use case.
This would cause me to run into the scenario where some messages might make it to ESB, while others don't, which would ultimately cause the batch to fail. Then, when the batch is retried, the messages would be duplicated in the ESB.
Is there a way I could utilize the functionality that Kinesis offers without the batching? Is there another AWS offering that better fits my use case? Ideally I would have one message being handled by the Lambda function that stays in the queue until it is successfully pushed into the ESB.
Any tips would be much appreciated.
Thanks,
Matt
The following might be of help to you:
1) Set up API Gateway to log to SQS, and 2) then set up a Lambda function on that SQS queue to serialize, log, and push the message to the external endpoint.
For the first part, "How to integrate API Gateway with SQS" will be of help (as already mentioned in the comments).
This article might help you more with the second part: https://dzone.com/articles/integrate-sqs-and-lambda-serverless-architecture-f
Note that you can also choose what kind of trigger you would like (based on the use case): a cron-based poll or event-based. You also have control over when you delete the message from SQS in your Lambda function (see the sketch below). You can also find very basic code in the Lambda blueprint named "sqs-poller".
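A rough sketch of the second part, assuming the ESB is reachable at a placeholder HTTP endpoint; the message is only deleted from SQS after the push succeeds, so failed deliveries stay on the queue for the next poll:
import json
import urllib.request
import boto3

sqs = boto3.client("sqs")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/inbound-events"  # placeholder
ESB_ENDPOINT = "https://esb.example.com/ingest"                                 # placeholder

def handler(event, context):
    response = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=5)
    for msg in response.get("Messages", []):
        body = json.loads(msg["Body"])
        print("forwarding message", msg["MessageId"])        # basic logging
        req = urllib.request.Request(
            ESB_ENDPOINT,
            data=json.dumps(body).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)                          # raises on HTTP errors
        # Only delete after the ESB accepted the message; otherwise it is retried later.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])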
Thanks!