I have a Lambda that is invoked by a Kinesis stream. The Lambda then calls an API endpoint and, depending on the response, attempts to retry the call to the API. If the response is 200, it won't retry. However, if it receives, say, a 500 error back from the API, it will retry.
I want to know whether or not this is an anti-pattern when calling REST endpoints from a Lambda. In my specific case, the Lambda retry should be determined by the response from the API it's calling.
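For context, the retry logic I have in mind looks roughly like this (a minimal sketch in Node.js; callApi is a placeholder for the real HTTP call to the endpoint):

    // Sketch of retry-on-5xx inside a Kinesis-triggered Lambda (Node.js).
    const MAX_ATTEMPTS = 3;
    const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

    // Placeholder for the real HTTP call; assumed to resolve with the status code.
    async function callApi(payload) {
      return 200;
    }

    async function callWithRetry(payload) {
      for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
        const status = await callApi(payload);
        if (status < 500) {
          return status;                          // 200 (or any non-5xx): don't retry
        }
        await sleep(Math.pow(2, attempt) * 100);  // 5xx: back off, then retry
      }
      throw new Error('API still failing after retries');
    }

    exports.handler = async (event) => {
      for (const record of event.Records) {
        const payload = Buffer.from(record.kinesis.data, 'base64').toString('utf8');
        await callWithRetry(payload);
      }
    };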
Related
The API Gateway URL is triggered/called by a form submit. Now, as this is a synchronous invocation, how do I handle Lambda retries and throttles?
Note: I am deploying API Gateway along with Lambda and using the generated URL as a webhook for the form submit. So basically, it would be one-way communication.
Flow:
form (payload) -> API Gateway -> Lambda -> Lambda -> SQS -> Lambda -> DynamoDB
There won't be an automatic Lambda retry when it's invoked from API Gateway, as far as I know. If you are throttled or the request fails (perhaps because of a Lambda function error), your client is responsible for deciding how to recover and whether or not to retry, likely based on the HTTP response code.
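As a rough illustration (not from the question), a client-side retry around the form submit could look like the sketch below, assuming a fetch-capable client; the endpoint URL and retry counts are illustrative:

    // Minimal client-side retry sketch: retry on throttling (429) and server errors (5xx).
    async function submitForm(payload, maxAttempts = 3) {
      for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        const res = await fetch('https://example.execute-api.us-east-1.amazonaws.com/prod/submit', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(payload),
        });
        if (res.ok) return res;                                   // 2xx: done
        if (res.status !== 429 && res.status < 500) return res;   // other 4xx: don't retry
        await new Promise((r) => setTimeout(r, attempt * 500));   // back off, then retry
      }
      throw new Error('request still failing after retries');
    }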
Also worth reading:
A Detailed Overview of AWS API Gateway
How AWS Lambda Retry really works
Synchronous invocation: If the function is invoked synchronously and is throttled, Lambda returns a 429 error and the invoking service is responsible for retries. The ThrottledReason error code explains whether you ran into a function level throttle (if specified) or an account level throttle (see note below). Each service may have its own retry policy. For example, CloudWatch Logs retries the failed batch up to five times with delays between retries. For a list of event sources and their invocation type, see Supported Event Sources.
Reference
I'm not sure my understanding of the passage above is right; if I'm wrong, please correct me.
When a Lambda is throttled, it returns a 429 error to API Gateway.
The invoking service, here API Gateway, retries the request.
However, it doesn't work as expected. Below is the API Gateway log from CloudWatch when the Lambda is throttled.
API-Gateway-Execution-Logs_3f1frvtwe4/sam-sm-test 2a38a4a9316c49e5a833517c45d31070 (bededbf0-73ae-11e8-87a2-f51933ef104f) Endpoint response body before transformations: {"Reason":"ReservedFunctionConcurrentInvocationLimitExceeded","Type":"User","message":"Rate Exceeded."}
API-Gateway-Execution-Logs_3f1frvtwe4/sam-sm-test 2a38a4a9316c49e5a833517c45d31070 (bededbf0-73ae-11e8-87a2-f51933ef104f) Endpoint response headers: {Connection=keep-alive, x-amzn-RequestId=bedfc624-73ae-11e8-bd28-6345cb3606c4, x-amzn-ErrorType=TooManyRequestsException, Content-Length=104, Date=Tue, 19 Jun 2018 10:51:39 GMT, Content-Type=application/json}
API-Gateway-Execution-Logs_3f1frvtwe4/sam-sm-test 2a38a4a9316c49e5a833517c45d31070 (bededbf0-73ae-11e8-87a2-f51933ef104f) Execution failed due to configuration error: Malformed Lambda proxy response
In practice, Lambda returns {"Reason":"ReservedFunctionConcurrentInvocationLimitExceeded","Type":"User","message":"Rate Exceeded."}, which is the wrong format for API Gateway (proxy integration), and as a result API Gateway returns a 502 error to the client calling the API.
I want the failed request to be retried. How can I handle that?
Each service may have its own retry policy.
API Gateway will not retry a failed invocation of a Lambda. If you want to handle a retry, this will have to be done in the client calling the API Gateway.
A 502 error is, as you suggested, returned by the API Gateway when it receives a malformed Lambda proxy response (see https://aws.amazon.com/premiumsupport/knowledge-center/malformed-502-api-gateway/).
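For reference, a Lambda proxy integration expects the function to return an object shaped like the sketch below. Note that when the function is throttled, the error body comes from the Lambda service itself, so it will not match this shape; the sketch only shows the format API Gateway can translate into an HTTP response without a 502:

    // Well-formed Lambda proxy integration response.
    // Anything else triggers the 502 "Malformed Lambda proxy response".
    exports.handler = async (event) => {
      return {
        statusCode: 200,
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ message: 'ok' }),   // body must be a string
      };
    };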
I currently have a Web hook that's calling AWS API Gateway -> AWS Lambda function proxy. I'd like to make the web hook more responsive and return an early reply while continuing processing in the Lambda.
I went ahead and implemented this early reply from the Lambda (Node v6.10), but it didn't appear to improve responsiveness. Is API Gateway somehow waiting for the Lambda to finish executing despite already having the response from the callback?
The other idea is to post an SNS notification from Lambda and have a second Lambda listen and continue processing but would rather avoid that complication if there's a simpler way.
API Gateway currently only supports synchronous invocation (aka InvocationType: RequestResponse) of Lambda functions, so yes, it is waiting for the full response from the Lambda.
To support your use case, you could use SNS or another intermediary AWS service like Kinesis, SQS, etc. But you could also do it with Lambda alone: have the first Lambda function trigger a second Lambda function asynchronously with InvocationType: 'Event'; this will achieve the effect you desire.
See this post for more details: https://stackoverflow.com/a/31745774/5705481
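A minimal sketch of that asynchronous hand-off with the AWS SDK for JavaScript (the function name and payload are illustrative):

    // Trigger a second Lambda asynchronously; the call returns as soon as
    // Lambda accepts the event (HTTP 202), without waiting for the result.
    const AWS = require('aws-sdk');
    const lambda = new AWS.Lambda();

    const params = {
      FunctionName: 'second-function',     // illustrative name
      InvocationType: 'Event',             // asynchronous invocation
      Payload: JSON.stringify({ foo: 'bar' }),
    };

    lambda.invoke(params).promise()
      .then(() => console.log('second function triggered'))
      .catch((err) => console.error(err));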
I have been experimenting with AWS API Gateway and AWS Lambda to try out a serverless architecture. I have been going through blogs and AWS documentation and have tried out sample GET/POST requests. But I have the following requirement with respect to tracking user events from my custom application:
Events are posted from my application to API end point
I wanted the API to respond back with a custom response (Say {'fine'})
(acknowledging that the request has been received)
After the response is sent, hand over the event payload to an AWS Lambda function
As per the documentation, I understand:
a) I can post events to API end point
b) On GET/POST trigger an AWS Lambda Function
- Respond back from AWS Lambda function to API request
I wanted to change the above and modify it to:
a) Post events to API end point
a.0) Respond back acknowledging that request is received [Say {'fine'} ]
b) Trigger AWS Lambda function to process the event payload
Please share suggestions on how to achieve this.
Another asynchronous model many customers have used:
Set up an API configured to send requests to Amazon Kinesis. This API could acknowledge the request.
Set up AWS Lambda to consume your Kinesis stream.
This setup has some advantages for high workload APIs as fetches from the Kinesis stream can be batched and don't require a 1-to-1 scaling of both your API Gateway limits and Lambda limits.
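A minimal sketch of the consuming Lambda, which receives records from the stream in batches and decodes each base64 payload (the processing itself is left as a placeholder):

    // Lambda consumer for the Kinesis stream.
    // Each invocation receives a batch of records; payloads are base64-encoded.
    exports.handler = async (event) => {
      for (const record of event.Records) {
        const payload = Buffer.from(record.kinesis.data, 'base64').toString('utf8');
        // parse/process the event payload here
        console.log('received event', payload);
      }
    };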
Update
To answer your questions about scalability:
Kinesis
Kinesis scales by adding what it calls "shards" to the stream. Each shard handles a portion of your traffic, based on a partition key. Each shard scales up to 1,000 rps or 1 MB/s (see limits). Even with the default of 25 shards, this would support up to 25,000 rps or 25 MB/s with an evenly distributed partition key.
API Gateway
API Gateway has a default account level limit of 500 rps, but this can easily be extended by requesting a limit increase. We have customers in production that are using the service at limits above your current suggested scale.
If you want a fast response from the API and don't want to wait for the data to be processed, you could:
post an event to an API Gateway endpoint
trigger an AWS Lambda Function A
asynchronously call a Lambda Function B using the AWS SDK in Lambda Function A
call context.succeed(), context.done(), or the callback function in Lambda Function A so it responds back to API Gateway
Lambda Function B can then process the data while API Gateway has already received a response (a sketch follows this list)
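A sketch of Lambda Function A following these steps, using the callback style mentioned above (the function name is illustrative):

    // Lambda Function A: hand the event to Function B asynchronously,
    // then respond to API Gateway right away.
    const AWS = require('aws-sdk');
    const lambda = new AWS.Lambda();

    exports.handler = (event, context, callback) => {
      lambda.invoke({
        FunctionName: 'function-b',        // illustrative name
        InvocationType: 'Event',           // asynchronous hand-off
        Payload: JSON.stringify(event),
      }, (err) => {
        if (err) return callback(err);
        // Function B keeps processing on its own after this response is sent.
        callback(null, { statusCode: 200, body: JSON.stringify({ status: 'fine' }) });
      });
    };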
You should first run some tests to see what type of real world response times you are getting from having your lambda function complete all the logic. If the times are above what you feel are acceptable for your use case, here is another asynchronous solution utilizing an SNS Topic to trigger a secondary Lambda function.
Client Request to API Gateway -> Calls Lambda function A
Lambda A verifies the payload and then publishes to SNS Topic X (a sketch of Lambda A follows this list)
Lambda A returns {fine} success message -> API Gateway -> client
SNS Topic X triggers Lambda function B
Lambda function B implements given logic
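A minimal sketch of Lambda function A in this flow (the topic ARN and the verification step are illustrative):

    // Lambda A: verify the payload, publish to SNS Topic X, respond to API Gateway.
    const AWS = require('aws-sdk');
    const sns = new AWS.SNS();

    exports.handler = async (event) => {
      const payload = JSON.parse(event.body || '{}');
      // ...verify the payload here...

      await sns.publish({
        TopicArn: 'arn:aws:sns:us-east-1:123456789012:topic-x',   // illustrative ARN
        Message: JSON.stringify(payload),
      }).promise();

      // Lambda B is subscribed to the topic and implements the actual logic.
      return { statusCode: 200, body: JSON.stringify({ status: 'fine' }) };
    };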
I'm using API Gateway-to-Lambda for a few micro-services, but in at least one case the service takes 20-30 seconds to complete, so in cases like this I'd like to pass back an immediate response to the client, something like:
    status: 200
    message: {
        progressId: 1234
    }
and then allow the Lambda function to continue on (periodically updating the "progressId" somewhere that is accessible to the client). The problem is that calling context.succeed(), context.fail(), or context.done() apparently stops the Lambda function from further execution, and yet it's the only way I know to flush the stdout buffer back to API Gateway.
This has led me to a second approach, which I haven't yet tried to tackle (and for simplicity's sake would love to avoid), involving API Gateway calling a "Responder" Lambda function that then asynchronously fires off the microservice and immediately responds to API Gateway.
I've tried to illustrate these two options in sketch format below. I'd love to hear how anyone's been able to solve this problem.
Currently API Gateway requires that the AWS Lambda integration is synchronous. If you desire asynchronous invocation of your Lambda function, you have 2 options:
Invoking the Lambda asynchronously, either with an AWS integration calling InvokeAsync on Lambda, or using an intermediate service such as SNS or Kinesis to trigger the Lambda function.
Your #2 diagram, using a synchronous Lambda invoke to initiate the asynchronous invoke.
As of April 2016, it is possible to create async Lambda execution through API Gateway by using an AWS Service Proxy. See http://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-lambda.html
You can send the X-Amz-Invocation-Type header; it supports async calls through the Event value:
You can optionally request asynchronous execution by specifying Event as the InvocationType
http://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html#API_Invoke_RequestSyntax
Also, if you can't send it from your micro-service, you can configure this header to be passed by default through Method Execution -> Integration Request -> HTTP Headers in your API Gateway resource.
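For example, in a non-proxy (AWS service) integration, the Swagger/OpenAPI form of that static header mapping could look like the sketch below; the region, account ID, and function name are placeholders:

    "x-amazon-apigateway-integration": {
      "type": "aws",
      "httpMethod": "POST",
      "uri": "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:123456789012:function:my-function/invocations",
      "requestParameters": {
        "integration.request.header.X-Amz-Invocation-Type": "'Event'"
      },
      "responses": { "default": { "statusCode": "200" } }
    }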
This worked for me in a micro-service -> API Gateway -> Lambda scenario, like the one mentioned in the question.