I have a Node/Express + Serverless backend API which I deploy to a Lambda function.
When I call an API, the request goes through API Gateway to Lambda; Lambda connects to S3, reads a large bin file, parses it, and generates the output as a JSON object.
The response JSON object is around 8.55 MB (I verified this using Postman while running the Node/Express code locally). The size varies with the bin file size.
When I make an API request, it fails with the following message in CloudWatch:
LAMBDA_RUNTIME Failed to post handler success response. Http response code: 413
I can't/don't want to change this pipeline: HTTP API Gateway + Lambda + S3.
What should I do to resolve the issue?
AWS Lambda functions have hard limits on the sizes of the request and response payloads. These limits cannot be increased.
The limits are:
6 MB for synchronous requests
256 KB for asynchronous requests
You can find additional information in the official documentation here:
https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html
You have a few possible solutions:
use EC2 or ECS/Fargate
use the Lambda to parse and transform the bin file into the desired JSON, then save this JSON directly to a public S3 bucket. In the Lambda response, return to the client the public URL/URI/file name of the created JSON (see the sketch after this list).
For the last solution, if you don't want to make the JSON file visible to the whole world, you might consider using AWS Amplify in your client and/or AWS Cognito in order to give only an authorized user access to the file they have just created.
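A minimal sketch of that approach with the AWS SDK for JavaScript v3, using a presigned URL as one way to avoid a public bucket; the bucket names, the key scheme, parseBin(), and the 15-minute expiry are assumptions, not part of the answer above:

```typescript
import { S3Client, GetObjectCommand, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

declare function parseBin(data: Buffer): unknown; // your existing parser

const s3 = new S3Client({});

export const handler = async () => {
  // Read and parse the bin file, as the pipeline already does
  const bin = await s3.send(new GetObjectCommand({ Bucket: "bin-bucket", Key: "input.bin" }));
  const json = JSON.stringify(parseBin(Buffer.from(await bin.Body!.transformToByteArray())));

  // Write the (possibly > 6 MB) result to S3 instead of returning it
  const key = `results/${Date.now()}.json`;
  await s3.send(new PutObjectCommand({
    Bucket: "results-bucket", Key: key, Body: json, ContentType: "application/json",
  }));

  // Return a short-lived presigned GET URL so the bucket need not be public
  const url = await getSignedUrl(
    s3,
    new GetObjectCommand({ Bucket: "results-bucket", Key: key }),
    { expiresIn: 900 },
  );
  return { statusCode: 200, body: JSON.stringify({ resultUrl: url }) };
};
```

The response body then stays tiny regardless of how large the parsed JSON grows.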
As noted in other answers, API Gateway/Lambda has limits on response sizes. From the discussion I read that latency is an additional concern.
With these two requirements, Lambda is mostly out of the question: functions need some time to start up (which can be reduced with provisioned concurrency) and only have standard network connections (whereas EC2/EKS can have enhanced networking).
With these requirements it would be better (from an AWS point of view) to move away from Lambda.
Looking further, we could also question the application itself:
Large JSON objects need to be generated on demand. Why can't these be pre-generated asynchronously and then downloaded from S3 directly? That would give you the best latency and speed, and it can be coupled with CloudFront.
Why does the JSON need to be so large? Large JSON also has to be parsed on the client side, requiring more CPU. Maybe it can be split and/or compressed? (See the sketch below.)
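As a rough illustration of the compression point: JSON compresses well, and S3 (or CloudFront in front of it) can serve objects with Content-Encoding: gzip so clients decompress them transparently. The bucket and key names here are made up:

```typescript
import { gzipSync } from "node:zlib";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

// Pre-generate the result off the request path, gzip it, and let S3 or
// CloudFront serve it; a multi-megabyte JSON body typically shrinks severalfold.
export async function publishResult(result: object): Promise<void> {
  const gz = gzipSync(Buffer.from(JSON.stringify(result)));
  await s3.send(new PutObjectCommand({
    Bucket: "results-bucket",       // placeholder
    Key: "results/latest.json",     // placeholder
    Body: gz,
    ContentType: "application/json",
    ContentEncoding: "gzip",        // browsers decompress transparently
  }));
}
```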
Related
My REST API occasionally needs to return a 413 'Payload too large' response.
As context: I use AWS with API Gateway and Lambda. Lambda has a maximum payload of 6 MB. Sometimes (less than 0.1% of requests) the payload is greater than 6 MB and my API returns a 413 status.
The way I deal with this is to provide an alternative way to request the data from the API: as a URL pointing to the data stored as a JSON file on S3. The file lives in a bucket with a lifecycle rule that automatically deletes it after a short period (see the sketch below).
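For reference, a lifecycle rule like the one described could be configured roughly like this; the bucket name and prefix are placeholders, and note that S3 expiration granularity is whole days:

```typescript
import { S3Client, PutBucketLifecycleConfigurationCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

// One-time setup: expire overflow payloads a day after creation
export async function configureExpiry(): Promise<void> {
  await s3.send(new PutBucketLifecycleConfigurationCommand({
    Bucket: "overflow-bucket",
    LifecycleConfiguration: {
      Rules: [{
        ID: "expire-overflow-results",
        Status: "Enabled",
        Filter: { Prefix: "results/" },
        Expiration: { Days: 1 },
      }],
    },
  }));
}
```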
This works OK, but has the unsatisfying characteristic that a large payload request results in the client making 3 separate calls:
Make a standard request to the API and receive the 413 response
Make a second request to the API for the data stored at an S3 URL. I use an asURL=true parameter in the GET request for this.
Make a third request to retrieve the data from the S3 bucket
An alternative I'm considering is embedding the S3 URL in the 413 response. For example, embedding it in a custom header. This would avoid the need for the second call.
I could also change the approach so that every request is returned as an S3 URL, but then 99.9% of requests would unnecessarily make 2 calls rather than just 1.
Is there a best practice here, or equally, bad practices to avoid?
I would do it the way you said: embed the S3 URL in the 413 response. The responsibility of recovering from the 413 then falls on the client, which checks for 413 in the response and calls S3. If the consumer is internal, that would be fine; it could be an inconvenience if the consumer is external.
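A sketch of that client-side recovery; the x-payload-location header name is an assumption (and for browser clients it would also have to be listed in Access-Control-Expose-Headers):

```typescript
// On 413, read the S3 URL from a custom response header and follow it.
async function fetchPayload(apiUrl: string): Promise<unknown> {
  const res = await fetch(apiUrl);
  if (res.status === 413) {
    const s3Url = res.headers.get("x-payload-location"); // assumed header name
    if (!s3Url) throw new Error("413 response without a fallback URL");
    return (await fetch(s3Url)).json(); // second call goes straight to S3
  }
  if (!res.ok) throw new Error(`request failed: ${res.status}`);
  return res.json();
}
```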
I'm working on an application whose back end is written in .NET Core Web API and whose front end is in React. I made an endpoint that returns a JSON list of almost 83 MB. When I deploy my back end to AWS Lambda and call the endpoint, it gives me an error (Error converting the response object of type Amazon.Lambda.APIGatewayEvents.APIGatewayProxyResponse from the Lambda function to JSON: Unable to expand the length of this stream beyond its capacity.: JsonSerializerException). I have already checked that the Lambda response payload limit is 6 MB (storing the data in S3 from the Lambda endpoint and then fetching it from the front end will not work for me), so is there any way I can get that much data through Lambda?
Not with a single call, sorry. As described in this link, the payload limit (for a synchronous call) is, as you say, 6 MB. Asynchronous calls have an even lower limit.
I'd suggest you modify your UI/API to narrow your results first, or trap this error and alert the user (or calling service) that the payload is too large (and hence should be aborted, narrowed, or split into multiple calls).
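If splitting into multiple calls is an option, a paged client could look roughly like this; the page/pageSize parameters are hypothetical and your API would have to implement them:

```typescript
// Fetch a large collection in pages small enough to stay under the 6 MB cap.
async function fetchAll(apiUrl: string, pageSize = 500): Promise<unknown[]> {
  const items: unknown[] = [];
  for (let page = 0; ; page++) {
    const res = await fetch(`${apiUrl}?page=${page}&pageSize=${pageSize}`);
    if (!res.ok) throw new Error(`request failed: ${res.status}`);
    const batch = (await res.json()) as unknown[];
    items.push(...batch);
    if (batch.length < pageSize) break; // a short page means we're done
  }
  return items;
}
```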
In a single call you cannot retrieve more than about 6 MB of data from AWS Lambda or through API Gateway. I'm facing the same issue; either we have to filter the data or make multiple calls.
I have a serverless backend running on Lambda. The runtime usually varies between 40 and 250 s, which is over API Gateway's maximum allowed timeout (29 s). As such, I think my only option is to resort to asynchronous processing. I get the idea behind it, but help online seems sparse, and I'd like to know if there are any best practices out there. Or what would be the simplest way to get around this timeout problem, whether with asynchronous processing or something else?
It really depends on your use case, but an asynchronous approach is probably the best fit for this scenario, given that it's usually not a good idea for the caller of your API to wait 250 seconds for a reply (which is probably the reason for the 29 s limit on API Gateway).
Asynchronous simply means that the Lambda replies immediately, saying that it received the request and will work on it, but that the result will be available only later.
Then you change the logic on the client side too, so that it checks back after some time, or polls in a loop until the requested resource is ready.
Depending on what work needs to be done, you could create an S3 bucket on the fly and reply to the client with an S3 presigned URL. Your worker then uploads its results to the S3 bucket, and the client polls that URL until the results are present.
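A sketch of that flow from the client's side, assuming the initial call returns the presigned GET URL in its body; while the object does not exist yet, S3 answers 403 or 404 depending on permissions:

```typescript
// Kick off the job, then poll the presigned URL until the worker has
// uploaded the result. The interval and retry budget are arbitrary choices.
async function requestAndPoll(apiUrl: string, intervalMs = 3000, maxTries = 100): Promise<unknown> {
  const kickoff = await fetch(apiUrl, { method: "POST" });
  const { resultUrl } = (await kickoff.json()) as { resultUrl: string };

  for (let attempt = 0; attempt < maxTries; attempt++) {
    const res = await fetch(resultUrl);
    if (res.ok) return res.json(); // the result has arrived
    if (res.status !== 403 && res.status !== 404) {
      throw new Error(`unexpected status ${res.status}`);
    }
    await new Promise((r) => setTimeout(r, intervalMs)); // not ready yet
  }
  throw new Error("timed out waiting for the result");
}
```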
We are working on setting up an API Management portal for one of our Web APIs. We use Event Hubs for logging events and transfer the event messages to Azure Blob Storage using Azure Functions.
We would like to know how to find the time API Management takes to provide the response for a request (we capture the time taken at the back-end API layer, but not at the API Management layer).
The simplest solution is to enable Azure Monitor diagnostic logs for the API Management service. You will get raw logs for each request, including:
durationMs - the interval between receiving the request line and headers from the client and writing the last chunk of the response body to the client. All writes and reads include network latency.
BackendTime - time spent waiting on the backend response
ClientTime - time spent with the client for the request and response
CacheTime - time spent fetching from the cache
You can also refer to this video.
Not the canonical way of doing this, but it still gives you an idea of how much time each request takes: use a context variable to record the start time in the inbound policy section, then compute the elapsed time in the outbound section.
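A rough, untested sketch of such a policy; the variable and header names are made up:

```xml
<policies>
  <inbound>
    <!-- record the arrival time -->
    <set-variable name="requestStart" value="@(DateTime.UtcNow)" />
  </inbound>
  <outbound>
    <!-- attach the elapsed time to the response -->
    <set-header name="X-Elapsed-Ms" exists-action="override">
      <value>@(((DateTime.UtcNow - (DateTime)context.Variables["requestStart"]).TotalMilliseconds).ToString())</value>
    </set-header>
  </outbound>
</policies>
```

As the note above says, this only approximates gateway time and is not a substitute for the diagnostic logs.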
I have used AWS API Gateway with a Lambda function as the endpoint. I have enabled the caching functionality provided by API Gateway to reduce both response time and the number of calls forwarded to my Lambda function.
The Lambda function queries another data store to return data. If the data is not found, an asynchronous call is made to update the data store and "data not found" is returned to the caller. Now API Gateway is caching even this result, which we do not want. The cache then returns "data not found" for its whole lifetime (1 hour TTL), even though the data has been updated asynchronously in the data store.
I'm aware of the request header (Cache-Control: max-age=0) that can invalidate a cache entry and get the response directly from Lambda, as mentioned in this documentation page.
But this is not useful here, because the caller does not know whether the data is present in the data store and therefore cannot selectively send such a request header.
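For completeness, sending that documented header from a client is trivial; the URL is a placeholder, and the caller may also need IAM permission to invalidate the API Gateway cache:

```typescript
// Bypass/refresh the cached entry for this resource
await fetch("https://api.example.com/items/42", {
  headers: { "Cache-Control": "max-age=0" },
});
```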
So, my two questions are:
Does API Gateway cache other HTTP responses as well, such as 404 (apart from 200)?
Is it possible to tell API Gateway not to cache specific responses?
As you observed, API Gateway caches the result of your endpoint regardless of the status code.
No, not at this time. I can bounce the idea to see if this is something we can support in the future.
Even if API Gateway conditionally did not cache the 404, the caller would need to call the endpoint again anyway, so why not return the result synchronously? This is how most cached APIs I'm aware of behave, and it would let your API work with what API Gateway offers today.