I have been trying to read the AWS Lambda@Edge documentation, but I still cannot figure out whether the following is possible.
Assume I have an object (image.jpg, 32922 bytes) and I have set up AWS static website hosting, so I can retrieve:
$ GET http://example.com/image.jpg
I would like to be able to also expose:
$ GET http://example.com/image
Where the response body would be a multipart/related payload (for example), something like this:
--myboundary
Content-Type: image/jpeg;
Content-Length: 32922
MIME-Version: 1.0
<actual binary jpeg data from 'image.jpg'>
--myboundary--
Is this something supported out of the box by the AWS Lambda@Edge API, or should I use another solution to create such a response? In particular, it seems that the response body can only be text or base64 (I would need binary in my case).
I was finally able to find complete documentation. I eventually stumbled upon:
API Gateway - POST multipart/form-data
which refers to:
Enabling binary support using the API Gateway console
The above documentation specifies the steps needed to handle binary data. Note that you need to base64-encode the response from Lambda in order to pass it through API Gateway.
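As a rough illustration (not from the original docs), a proxy-style Lambda handler could assemble the multipart/related body as raw bytes and return it base64-encoded; the bucket name, key, and boundary below are hypothetical:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async () => {
    // Fetch the raw JPEG bytes from S3 (bucket/key are placeholders)
    const obj = await s3.getObject({ Bucket: 'my-bucket', Key: 'image.jpg' }).promise();

    const boundary = 'myboundary';
    // Build the multipart body as raw bytes: part headers, binary data, closing boundary
    const head = Buffer.from(
        '--' + boundary + '\r\n' +
        'Content-Type: image/jpeg\r\n' +
        'Content-Length: ' + obj.Body.length + '\r\n' +
        'MIME-Version: 1.0\r\n\r\n');
    const tail = Buffer.from('\r\n--' + boundary + '--\r\n');
    const body = Buffer.concat([head, obj.Body, tail]);

    // API Gateway converts this back to binary when binary media types are configured
    return {
        statusCode: 200,
        headers: { 'Content-Type': 'multipart/related; boundary=' + boundary },
        body: body.toString('base64'),
        isBase64Encoded: true
    };
};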
Related
I have a node/express + serverless backend API which I deploy as a Lambda function.
When I call the API, the request goes through API Gateway to Lambda; Lambda connects to S3, reads a large bin file, parses it, and generates the output as a JSON object.
The response JSON object is around 8.55 MB (I verified using Postman, running the node/express code locally). The size can vary with the bin file size.
When I make an API request, it fails with the following message in CloudWatch:
LAMBDA_RUNTIME Failed to post handler success response. Http response code: 413
I can't/don't want to change this pipeline: HTTP API Gateway + Lambda + S3.
What should I do to resolve the issue?
AWS Lambda functions have hard limits on the sizes of the request and response payloads. These limits cannot be increased.
The limits are:
6 MB for synchronous requests
256 KB for asynchronous requests
You can find additional information in the official documentation here:
https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html
There are a couple of possible solutions:
use EC2, ECS/Fargate
use the Lambda to parse and transform the bin file into the desired JSON, then save this JSON directly to a public S3 bucket. In the Lambda response, you can return the public URL/URI/file name of the created JSON to the client (see the sketch after the next paragraph).
For the last solution, if you don't want to make the JSON file visible to the whole world, you might consider using AWS Amplify in your client and/or AWS Cognito in order to give only an authorized user access to the file that they have just created.
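A minimal sketch of that last approach (bucket and key names are made up; the parsing step is elided):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
    // ... your existing bin-file parsing would go here ...
    const result = { example: 'large parsed output' };
    const key = 'results/' + Date.now() + '.json';

    // Write the (potentially >6 MB) JSON to S3 instead of returning it directly
    await s3.putObject({
        Bucket: 'my-results-bucket',
        Key: key,
        Body: JSON.stringify(result),
        ContentType: 'application/json'
    }).promise();

    // A time-limited pre-signed URL avoids making the bucket public
    const url = s3.getSignedUrl('getObject', {
        Bucket: 'my-results-bucket',
        Key: key,
        Expires: 300 // seconds
    });

    // The small response body stays well under the Lambda payload limit
    return { statusCode: 200, body: JSON.stringify({ url: url }) };
};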
As noted in other answers, API Gateway/Lambda has limits on response sizes. From the discussion I read that latency is an additional concern.
With these two requirements, Lambda is mostly out of the question: functions need some time to start up (which can be reduced with provisioned concurrency) and only have standard network connections (whereas EC2/EKS can use enhanced networking). With these requirements it would be better (from an AWS point of view) to move away from Lambda.
Looking further we could also question the application itself:
Large JSON objects need to be generated on demand. Why can't these be pre-generated asynchronously and then downloaded from S3 directly? That would give you the best latency and speed, and it can be coupled with CloudFront.
Why does the JSON need to be so large? Large JSON payloads also need to be parsed on the client side, requiring more CPU. Maybe it can be split and/or compressed (a rough sketch follows)?
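On the compression idea, here is a rough sketch (the payload is invented) of how much a repetitive JSON payload can shrink with Node's built-in zlib. Note the 6 MB limit applies to whatever Lambda actually returns, so this only helps if the compressed payload fits:
const zlib = require('zlib');

// A deliberately repetitive payload, standing in for the real 8.55 MB object
const payload = JSON.stringify({ rows: new Array(100000).fill({ id: 1, value: 'example' }) });

const gzipped = zlib.gzipSync(Buffer.from(payload));
console.log('raw: ' + payload.length + ' bytes, gzipped: ' + gzipped.length + ' bytes');
// The client would then decompress, e.g. zlib.gunzipSync(gzipped).toString()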
I'm using AWS API Gateway integrated with Lambda.
Note: I am NOT using Lambda Proxy.
I need to return a binary response from API Gateway. I have successfully set this up as follows:
Encoded my binary data as a base64 UTF-8 string and returned ONLY that from my Lambda function: return "base64 encoded binary data"
Enabled CONVERT_TO_BINARY on the API Gateway Integration Response
Mapped the Content-Type header on the API Gateway Method Response to the binary media type of my binary content
Added the media type of my binary content to API Gateway's list of Binary Media Types
The issue is that, as well as sending the binary data (which I can do successfully with the above steps), I need to include a custom x-my-header header in the API response.
I know how to set up header mapping in API Gateway, but the header has to be calculated from database data, and therefore this value also needs to be returned from Lambda.
My understanding of Lambda integration (remember, I'm not using Lambda proxy here) is that API Gateway makes an HTTP request to trigger Lambda. Lambda then returns an HTTP response to API Gateway, adding the function's output to the body and also adding internal AWS headers to the response.
Now it is possible to map a header to the Method Response using:
integration.response.header.header-name
My question is...
Can I tell Lambda to add my custom header to a binary response when I'm using custom Lambda integration (not proxy)?
Note: if I were using Lambda proxy, I know that the return object would look as below, and then I would be able to send custom headers. But for reasons out of my control, I can't use Lambda proxy.
Lambda return object if I were using Lambda proxy:
return {
    'body': "base64 encoded binary data",
    'headers': { 'x-my-header': 'my-value' },
    'isBase64Encoded': True
}
For Lambda Integration (not proxy) I have tried modifying my lambda output...
return {
    "base64-data": "base64 encoded binary data",
    "x-my-header": "some value"
}
And setting up a mapping template in the integration response...
$input.json("$.base64-data")
And setting up a header mapping using...
integration.response.body.x-my-header
But API Gateway returns an error:
Execution failed due to configuration error: Unable to transform response
I believe this error occurs because there cannot be a mapping template when you have CONVERT_TO_BINARY enabled. From the AWS docs:
When converting a text payload to a binary blob, API Gateway assumes that the text data is a Base64-encoded string and outputs the binary data as a Base64-decoded blob. If the conversion fails, it returns a 500 response indicating an API configuration error. You do not provide a mapping template for such a conversion, although you must enable the passthrough behaviors on the API.
I realize this is an old question, but I ran into a similar header mapping issue recently, even though I'm not using binary data.
To my mind the return from Lambda could look like this
{
"base64-data": "base64 encoded binary data",
"x-my-header: "some value"
}
Based on that Lambda Response, you could apply the following mapping (which I modified based on an AWS example)
$input.json("$.base64-data")
#set($context.responseOverride.header.x-my-header = "$input.json('$.x-my-header')")
$input.json("$") references your response and $.[key] is the right way to reference your sub-keys.
You also have to preconfigure your header "x-my-header" in your method response.
In the integration response, the header mapping can just be an empty value, e.g.
""
The real value will be provided by the override ($context.responseOverride.header) which is set by the mapping template.
I have been trying to make an AWS S3 REST API call to upload a document to an S3 bucket. The document is in the form of a byte array.
PUT /Test.pdf HTTP/1.1
Host: mybucket.s3.amazonaws.com
Authorization: **********
Content-Type: application/pdf
Content-Length: 5039151
x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD
x-amz-date: 20180301T055442Z
When we perform the API call, it returns response status 411, i.e. Length Required. We have already added the Content-Length header with the byte array length as its value, but the issue persists. Please help to resolve the issue.
x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD is only used with the non-standards-based chunk upload API. This is a custom encoding that allows you to write chunks of data to the wire. This is not the same thing as the Multipart Upload API, and is not the same thing as Transfer-Encoding: chunked (which S3 doesn't support for uploads).
It's not clear why this would result in 411 Length Required, but the error suggests that S3 is not happy with the format of the upload. (One plausible cause: the streaming signature scheme also requires an x-amz-decoded-content-length header carrying the size of the raw payload; without it, S3 has no usable length for the decoded content.)
For a standard PUT upload, x-amz-content-sha256 must be set to the hex-encoded SHA-256 hash of the request body, or the string UNSIGNED-PAYLOAD. The former is recommended, because it provides an integrity check. If for any reason your data were to become corrupted on the wire in a way that TCP failed to detect, S3 would automatically reject the corrupt upload and not create the object.
See also https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html
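For a standard PUT, computing that header value is straightforward. A sketch in Node (the file name is taken from the question; the rest is illustrative):
const crypto = require('crypto');
const fs = require('fs');

// The byte array being uploaded
const body = fs.readFileSync('Test.pdf');

// Hex-encoded SHA-256 of the exact request body
const payloadHash = crypto.createHash('sha256').update(body).digest('hex');

// Then send, alongside the SigV4 Authorization header:
//   x-amz-content-sha256: <payloadHash>
//   Content-Length: <body.length>
console.log(payloadHash, body.length);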
I'm trying to return a 1px gif from an AWS API Gateway method.
Since binary data is now supported, I return an image/gif using the following 'Integration Response' mapping:
$util.base64Decode("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")
However, when I look at this in Chrome, the binary that comes back is garbled and has the wrong length compared with the original GIF bytes.
Could anyone help me understand why it is garbled and the wrong length? Or what I could do to return the correct binary? Is there some other way I could always return this 1px gif without using the base64Decode function?
Many thanks in advance, this has been causing me a lot of pain!
EDIT
This one gets stranger. It looks like the issue is not with base64Decode, but with the general handling of binary. I added a Lambda backend (previously I was using Firehose) following this blog post and this Stack Overflow question. I set images as binaryMediaType as per this documentation page.
This has let me pass the following image/bmp pixel from Lambda through the Gateway API, and it works correctly:
exports.handler = function(event, context) {
    // 1x1 image/bmp pixel, expressed as raw bytes in a binary string
    var imageHex = "\x42\x4d\x3c\x00\x00\x00\x00\x00\x00\x00\x36\x00\x00\x00\x28\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01\x00\x18\x00\x00\x00\x00\x00\x06\x00\x00\x00\x27\x00\x00\x00\x27\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00";
    context.done(null, { "body": imageHex });
};
However, the following images representing an image/png or an image/gif get garbled when passed through:
exports.handler = function(event, context) {
    // Three variants of a 1x1 GIF, as raw bytes in binary strings
    //var imageHex = "\x47\x49\x46\x38\x39\x61\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff\x21\xf9\x04\x01\x00\x00\x00\x00\x2c\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x01\x44\x00\x3b";
    //var imageHex = "\x47\x49\x46\x38\x39\x61\x01\x00\x01\x00\x80\x00\x00\xff\xff\xff\x00\x00\x00\x21\xf9\x04\x01\x00\x00\x00\x00\x2c\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02\x44\x01\x00\x3b";
    var imageHex = "\x47\x49\x46\x38\x39\x61\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00\x21\xf9\x04\x01\x00\x00\x00\x00\x2c\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02\x44\x01\x00\x3b\x0a";
    context.done(null, { "body": imageHex });
};
This seems to be the same issue as another Stack Overflow question, but I was hoping this would be fixed with API Gateway's binary support. Unfortunately image/bmp doesn't work for my use case, as it can't be transparent...
In case it helps anyone, this has been a good tool for converting between base64 and hex.
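(As a small aside, not from the original answer: Node's Buffer can do the same base64/hex conversion directly.)
const b64 = 'R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7';
// base64 -> hex, and back again
const hex = Buffer.from(b64, 'base64').toString('hex');
console.log(hex);
console.log(Buffer.from(hex, 'hex').toString('base64') === b64); // true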
To anyone else having problems with this: I was also banging my head against the wall trying to retrieve a binary image over API Gateway proxy integration from Lambda, but then I noticed that it says right there in the Binary Support section of the Lambda console:
API Gateway will look at the Content-Type and Accept HTTP headers to decide how to handle the body.
So I added Accept: image/png to the request headers and it worked. Oh the joy, and joyness!
No need to manually change content handling to CONVERT_TO_BINARY or muck about with the CLI. Of course this rules out using, for example, <img src= directly (you can't set headers there).
So, in order to get a binary file over API Gateway from lambda with proxy integration:
List all supported binary content types in the lambda console (and deploy)
The request's Accept header must include the Content-Type value returned from the Lambda function
The returned body must be base64 encoded
The result object must also have the isBase64Encoded property set to true
Code:
callback(null, {
    statusCode: 200,
    headers: { 'Content-Type': 'image/png' },
    body: buffer.toString('base64'),
    isBase64Encoded: true
});
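A quick way to test this (the URL is a placeholder) is to request the endpoint with a matching Accept header and check that the bytes come back raw:
curl -H "Accept: image/png" https://example.execute-api.us-east-1.amazonaws.com/prod/image -o pixel.png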
It looks like this was a known issue previously:
https://forums.aws.amazon.com/thread.jspa?messageID=668306
But it should be possible now that they've added support for binary data:
http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings.html
It looks like this is the bit we need: "Set the contentHandling property of the IntegrationResponse resource to CONVERT_TO_BINARY to have the response payload converted from a Base64-encoded string to its binary blob". Then we shouldn't need the base64Decode() function.
Working on a test now to see if this works.
EDIT: I was finally able to get this working. You can see the binary image here:
https://chtskiuz10.execute-api.us-east-1.amazonaws.com/prod/rest/image
I updated the method response to declare the Content-Type header, and I updated the integration response to map it to a hard-coded image/png value.
The last step was tricky: setting the contentHandling property to "CONVERT_TO_BINARY". I couldn't figure out how to do this in the AWS console. I had to use the CLI API to accomplish it:
aws apigateway update-integration-response \
--profile davemaple \
--rest-api-id chtskiuzxx \
--resource-id ki1lxx \
--http-method GET \
--status-code 200 \
--patch-operations '[{"op" : "replace", "path" : "/contentHandling", "value" : "CONVERT_TO_BINARY"}]'
I hope this helps.
Check out this answer. It helped me with exposing a PDF file for download through a GET request without any additional headers.
I'm trying to call a webservice using the WSClient API from Play Framework.
The main issue is that I want to transfer huge JSON payloads (more than 2MB) without exceeding the maximal payload size.
To do so, I would like to compress the request using gzip (with the HTTP header Content-Encoding: gzip). In the documentation, the parameter play.ws.compressionEnabled is mentioned, but it only seems to enable WSResponse compression.
I have tried to manually compress the payload (using a GZIPOutputStream) and to set the header Content-Encoding: gzip, but the server throws an io.netty.handler.codec.compression.DecompressionException: Unsupported compression method 191 in the GZIP header.
How can I correctly compress my request?
Thanks in advance.
Unfortunately I don't think you can compress the request (it is not supported by Netty, the underlying library). You can find more info in https://github.com/AsyncHttpClient/async-http-client/issues/93 and https://github.com/netty/netty/issues/2132