AWS Lambda / API Gateway Not passing through encoding

I'm trying to convert a working Lumen API service to AWS, and am stumped on getting an external REST API service to work. The service returns its data compressed, but the encoding isn't being passed back through to the app (Vue) in the browser properly. I tried adding the headers to the response, as shown below, but it still isn't working. I can see the headers on the response in the browser console, but the browser isn't decompressing the body, so the data still looks like garbage. Any clues as to how to make this work?
var req = require('request');

exports.handler = function (event, context, callback) {
    const params = {
        url: 'http://api.service',
        headers: {
            'Authorization': 'code',
            'Accept-Encoding': 'gzip,deflate',
            'Content-Type': 'application/json'
        },
        json: {
            'criteria': {
                'checkInDate': '2019-10-22',
                'checkOutDate': '2019-10-25',
                'additional': {'minimumStarRating': 0},
                'cityId': 11774
            }
        }
    };
    req.post(params, function (err, res, body) {
        if (err) {
            callback(err, null);
        } else {
            callback(null, {
                "statusCode": 200,
                "headers": {
                    "Content-Type": "application/json",
                    "Content-Encoding": "gzip"
                },
                "body": body
            });
        }
    });
};

If you are seeing all scrambled characters, chances are you have not yet let API Gateway treat your Lambda response as binary (since it is gzipped by your Lambda).
Take a look at the documentation:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings-configure-with-console.html
And this article:
Unfortunately, the API Gateway is currently oblivious of gzip. If we’re using a HTTP proxy, and the other HTTP endpoint returns a gzipped response, it’ll try to reencode it, garbling the response. We’ll have to tell the API Gateway to treat our responses as binary files — not touching it in any way.
https://techblog.commercetools.com/gzip-on-aws-lambda-and-api-gateway-5170bb02b543
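On the Lambda side, a common companion step is to return the body base64-encoded with isBase64Encoded set, so API Gateway hands the untouched gzip bytes back to the browser. A minimal sketch, assuming */* is registered as a binary media type on the API and the request library from the question (encoding: null replaces the json option so the response stays a raw Buffer):

var req = require('request');

exports.handler = function (event, context, callback) {
    req.post({
        url: 'http://api.service',                 // same endpoint as above
        headers: {
            'Authorization': 'code',
            'Accept-Encoding': 'gzip,deflate',
            'Content-Type': 'application/json'
        },
        body: JSON.stringify({ criteria: { cityId: 11774 } }),  // abbreviated criteria from above
        encoding: null                             // ask request for a raw Buffer, not a decoded string
    }, function (err, res, body) {
        if (err) {
            return callback(err, null);
        }
        callback(null, {
            statusCode: 200,
            headers: {
                'Content-Type': 'application/json',
                'Content-Encoding': 'gzip'
            },
            isBase64Encoded: true,                 // API Gateway decodes this back to raw bytes on the way out
            body: body.toString('base64')          // body is a Buffer thanks to encoding: null
        });
    });
};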

Related

Empty response on Hasura auth hook using AWS Lambda

I'm having some trouble configuring a Hasura auth hook using a Lambda. I need such a function because I'm storing my JWT token in an HTTP-only cookie, for security reasons.
I'm using a serverless function which returns a correct response (both when testing with a curl request directly and when logging from the Lambda):
{
    "statusCode": 200,
    "body": "{\"X-Hasura-User-Id\":\"74d3bfa9-0983-4f09-be02-6a36888b382e\",\"X-Hasura-Role\":\"user\"}"
}
Yet, Hasura hook doesn't seem to recognize the response:
{
    "type": "webhook-log",
    "timestamp": "2020-02-07T10:27:34.844+0000",
    "level": "info",
    "detail": {
        "response": null,
        "url": "http://serverless:3000/auth",
        "method": "GET",
        "http_error": null,
        "status_code": 200
    }
}
These two log entries are adjacent in my logs; I just reformatted them a little to ease reading.
My lambda code looks like:
import fs from "fs";
import jwt from "jsonwebtoken";
import { getCookiesFromHeader } from "./utils"; // assumed location of this helper, which parses the Cookie header

export const handler = async (event) => {
    const cookies = getCookiesFromHeader(event.headers);
    const { access_token: accessToken } = cookies;
    let decodedToken = null;
    try {
        const cert = fs.readFileSync("./src/pem/dev.pem");
        decodedToken = jwt.verify(accessToken, cert);
    } catch (err) {
        console.error(err);
        return {
            statusCode: 401,
        };
    }
    const hasuraClaims = decodedToken['https://hasura.io/jwt/claims'];
    return {
        statusCode: 200,
        body: JSON.stringify({
            "X-Hasura-User-Id": hasuraClaims['x-hasura-user-id'],
            "X-Hasura-Role": hasuraClaims['x-hasura-default-role']
        })
    };
}
Any idea what is going on? Note that I'm using serverless-offline, in case that matters. :)
In AWS Lambda, the spec requires the response body to be a stringified JSON. The actual HTTP response carries the parsed JSON object, which is what Hasura receives from the auth webhook.
When you are using serverless-offline, the response body is returned as a string (since JSON.stringify is used) without getting parsed. A simple curl against both will show you the difference.
The above code will therefore work on Lambda but not in local development using serverless-offline. You will have to use the event object to check whether isOffline is true: return JSON directly if it is, and return the stringified version if not.
Example code:
if (event.isOffline) {
    // make it work with serverless-offline
    return { "x-hasura-role": "user", .... };
} else {
    // make it work with lambda
    return { statusCode: 200, body: JSON.stringify({ "x-hasura-role": "user" }) };
}
Official example in the serverless-offline repo along with error handling.
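A sketch of the full hook with that branch folded in (assuming event.isOffline is set by serverless-offline as described above, and reusing the claim values from the question):

export const handler = async (event) => {
    const claims = {
        "X-Hasura-User-Id": "74d3bfa9-0983-4f09-be02-6a36888b382e",
        "X-Hasura-Role": "user"
    };
    if (event.isOffline) {
        // serverless-offline: hand the object back directly
        return claims;
    }
    // real Lambda behind API Gateway: proxy-integration shape, body stringified
    return { statusCode: 200, body: JSON.stringify(claims) };
};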
Related issues:
https://github.com/dherault/serverless-offline/issues/530
https://github.com/dherault/serverless-offline/issues/488

change return type in AWS Lambda function?

I am trying out AWS Lambda.
I have enabled Lambda proxy integration.
When I try to query the endpoint with a POST request, I get an internal server error.
However, if I make the same call from JavaScript, I get the response as a string.
The function is as follows:
exports.handler = async (event, context) => {
    // TODO implement
    const response = {
        statusCode: 200,
        headers: {'Control-Access-Allow-Origin': '*', 'Content-Type': 'application/json'},
        body: {
            event,
            context,
        }
    };
    return response;
};
First, your body should be a string, not an object:
const response = {
    statusCode: 200,
    // note the header name: Access-Control-Allow-Origin, not Control-Access-Allow-Origin
    headers: {'Access-Control-Allow-Origin': '*', 'Content-Type': 'application/json'},
    body: JSON.stringify({
        event,
        context,
    })
};
return response;
Then you can take a look at CloudWatch log to see what the problem is.
If you have checked Lambda proxy integration, you need this format as a response:
{
    statusCode: 200,
    body: JSON.stringify(message),
    headers: {'Content-Type': 'application/json'}
}
The returned object must have statusCode, body, and headers attributes. In my example above, I included Content-Type in the headers object, but it can be empty if you’d like. body’s value must be a string; if we were to pass in an object here without turning it into a JSON-encoded string, this would fail.
If your return object doesn’t have these attributes, when you test the Lambda-API Gateway connection an error like this will pop up:
message: "Internal server error".
If you uncheck the Lambda proxy integration, you can pass whatever you want as a response.
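Putting both answers together, a minimal working handler with proxy integration checked might look like this (a sketch, echoing the question's code with the fixes applied):

exports.handler = async (event) => {
    return {
        statusCode: 200,
        headers: {
            // corrected header name
            'Access-Control-Allow-Origin': '*',
            'Content-Type': 'application/json'
        },
        body: JSON.stringify({ event })   // body must be a string
    };
};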

CORS error with API Gateway and Lambda **only** when using Proxy Integration

I am trying to add an Item to DynamoDB upon a post request from API Gateway using Lambda.
This is what my Lambda code looks like:
var AWS = require('aws-sdk');
var dynamoDB = new AWS.DynamoDB();

exports.handler = (event, context, callback) => {
    var temp_id = "1";
    var temp_ts = Date.now().toString();
    var temp_calc = event['params']['calc'];
    var params = {
        TableName: "calc-store",
        Item: {
            Id: {
                S: temp_id
            },
            timestamp: {
                S: temp_ts
            },
            calc: {
                S: temp_calc
            }
        }
    };
    dynamoDB.putItem(params, callback);
    const response = {
        statusCode: 200,
        headers: {
            'content-type': 'application/json',
            'Access-Control-Allow-Origin': '*'
        },
        body: event['params']['calc']
    };
    callback(null, response);
};
This is how I am calling the function from my client:
axios.post(apiURL, { params: { calc: calc } })
    .then((res) => {
        console.log(res);
    })
I have enabled CORS over 30 times on my API Gateway, and I've also double checked by adding headers to the response. But no matter what I do, I keep getting a CORS error and for some reason, in my response, I can see that the "Access-Control-Allow-Origin" header is not being appended.
POST https://egezysuta5.execute-api.us-east-1.amazonaws.com/TEST 502
localhost/:1 Failed to load https://egezysuta5.execute-api.us-east-1.amazonaws.com/TEST:
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'http://localhost:3000' is therefore not allowed access. The response had HTTP status code 502.
createError.js:17 Uncaught (in promise) Error: Network Error
    at createError (createError.js:17)
    at XMLHttpRequest.handleError (xhr.js:87)
I tried not using Lambda proxy integration, and it worked then; however, I was unable to access the params I passed.
EDIT: After spending hours on this, here is what I've boiled the problem down to. My client is making a successful preflight OPTIONS request, which returns the correct CORS headers, but for some reason these are not being applied to my POST request!
EDIT 2: (This does not solve the problem.) If I change the response body to a string there is no error!! There is something wrong with event['params']['calc'].
Your problem is with the flow of the code. Basically, you're not waiting for putItem to complete before callback gets executed. Try this:
dynamoDB.putItem(params, (err, data) => {
    if (err) {
        return callback(err, null);
    }
    const response = {
        statusCode: 200,
        headers: {
            'content-type': 'application/json',
            'Access-Control-Allow-Origin': '*'
        },
        // with proxy integration, the client's {params: {calc}} arrives as a JSON string in event.body
        body: JSON.parse(event.body).params.calc
    };
    return callback(null, response);
});
There are 2 issues going on here:
1. Your code is crashing because you are probably trying to access a null property in the event object.
2. Because your code fails before you can return the full response, the proper CORS headers don’t get sent back to the browser.
Always try and catch errors in your Lambda code. Always log the error, and return the full response with a status code of 500 in the case of an error. Also, it’s important to handle async operations, like putItem, with promises. Really grasp that concept before working with JavaScript!
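For illustration, a sketch of that advice applied to the handler above (assuming the Node 8.10+ runtime for async/await, and the same table and client payload as in the question):

const AWS = require('aws-sdk');
const dynamoDB = new AWS.DynamoDB();

exports.handler = async (event) => {
    try {
        // proxy integration delivers the client's JSON as a string in event.body
        const calc = JSON.parse(event.body).params.calc;
        await dynamoDB.putItem({
            TableName: 'calc-store',
            Item: {
                Id:        { S: '1' },
                timestamp: { S: Date.now().toString() },
                calc:      { S: calc }
            }
        }).promise();                         // wait for the write before responding
        return {
            statusCode: 200,
            headers: {
                'content-type': 'application/json',
                'Access-Control-Allow-Origin': '*'
            },
            body: JSON.stringify(calc)
        };
    } catch (err) {
        console.error(err);                   // always log the error
        return {
            statusCode: 500,
            headers: { 'Access-Control-Allow-Origin': '*' },
            body: JSON.stringify({ error: 'internal error' })
        };
    }
};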

Is it theoretically possible to use `putRecord` in Kinesis via singular http POST request?

I've been experiencing some issues with AWS Kinesis: I have a stream set up, and I want to use a standard HTTP POST request to invoke a Kinesis PutRecord call on my stream. I'm doing this because the bundle size of my resultant JavaScript application matters, and I'd rather not import the aws-sdk to accomplish something that should (on paper) be possible.
Just so you know, I've looked at this other Stack Overflow question about the same thing, and it was... sort of informational.
Now, I already have a method to sigv4-sign a request using an access key, secret token, and session token. But when I finally get the result of signing the request and send it using the in-browser fetch API, the service tanks with an internal failure (or with a JSON object citing the same thing, depending on my Content-Type header, I guess) as the result.
Here's the code I'm working with:
// There is a global function "sign" that does sigv4 signing
// ...
var payload = {
    Data: { task: "Get something working in kinesis" },
    PartitionKey: "1",
    StreamName: "MyKinesisStream"
}

var credentials = {
    "accessKeyId": "<access.key>",
    "secretAccessKey": "<secret.key>",
    "sessionToken": "<session.token>",
    "expiration": 1528922673000
}

function signer({ url, method, data }) {
    // Wrapping with URL for piecemeal picking of parsed pieces
    const parsed = new URL(url);
    const [ service, region ] = parsed.host.split(".");
    const signed = sign({
        method,
        service,
        region,
        url,
        // Hardcoded
        headers: {
            Host: parsed.host,
            "Content-Type": "application/json; charset=UTF-8",
            "X-Amz-Target": "Kinesis_20131202.PutRecord"
        },
        body: JSON.stringify(data),
    }, credentials);
    return signed;
}

// Specify method, url, data body
var signed = signer({
    method: "POST",
    url: "https://kinesis.us-west-2.amazonaws.com",
    data: JSON.stringify(payload)
});

var request = fetch(signed.url, signed);
When I look at the result of request, I get this:
{
    Output: {
        __type: "com.amazon.coral.service#InternalFailure"
    },
    Version: "1.0"
}
Now I'm unsure whether Kinesis is actually failing here, or if my input is malformed. Here's what the signed request looks like:
{
    "method": "POST",
    "service": "kinesis",
    "region": "us-west-2",
    "url": "https://kinesis.us-west-2.amazonaws.com",
    "headers": {
        "Host": "kinesis.us-west-2.amazonaws.com",
        "Content-Type": "application/json; charset=UTF-8",
        "X-Amz-Target": "Kinesis_20131202.PutRecord",
        "X-Amz-Date": "20180613T203123Z",
        "X-Amz-Security-Token": "<session.token>",
        "Authorization": "AWS4-HMAC-SHA256 Credential=<access.key>/20180613/us-west-2/kinesis/aws4_request, SignedHeaders=content-type;host;x-amz-target, Signature=ba20abb21763e5c8e913527c95a0c7efba590cf5ff1df3b770d4d9b945a10481"
    },
    "body": "\"{\\\"Data\\\":{\\\"task\\\":\\\"Get something working in kinesis\\\"},\\\"PartitionKey\\\":\\\"1\\\",\\\"StreamName\\\":\\\"MyKinesisStream\\\"}\"",
    "test": {
        "canonical": "POST\n/\n\ncontent-type:application/json; charset=UTF-8\nhost:kinesis.us-west-2.amazonaws.com\nx-amz-target:Kinesis_20131202.PutRecord\n\ncontent-type;host;x-amz-target\n508d2454044bffc25250f554c7b4c8f2e0c87c2d194676c8787867662633652a",
        "sts": "AWS4-HMAC-SHA256\n20180613T203123Z\n20180613/us-west-2/kinesis/aws4_request\n46a252f4eef52991c4a0903ab63bca86ec1aba09d4275dd8f5eb6fcc8d761211",
        "auth": "AWS4-HMAC-SHA256 Credential=<access.key>/20180613/us-west-2/kinesis/aws4_request, SignedHeaders=content-type;host;x-amz-target, Signature=ba20abb21763e5c8e913527c95a0c7efba590cf5ff1df3b770d4d9b945a10481"
    }
}
(the test key is used by the library that generates the signature, so ignore that)
(Also there are probably extra slashes in the body because I pretty printed the response object using JSON.stringify).
My question: is there something I'm missing? Does Kinesis require headers a, b, and c while I'm only generating two of them? Or is this internal error an actual service failure? I'm lost because the response suggests nothing I can do on my end.
I appreciate any help!
Edit: As a secondary question, am I using the X-Amz-Target header correctly? This is how you reference calling a service function so long as you're hitting that service endpoint, no?
Update: Following Michael's comments, I've gotten somewhere, but I still haven't solved the problem. Here's what I did:
I made sure that in my payload I'm only running JSON.stringify on the Data property.
I also modified the Content-Type header to be "Content-Type" : "application/x-amz-json-1.1" and as such, I'm getting slightly more useful error messages back.
Now, my payload is still mostly the same:
var payload = {
    Data: JSON.stringify({ task: "Get something working in kinesis" }),
    PartitionKey: "1",
    StreamName: "MyKinesisStream"
}
and my signer function body looks like this:
function signer({ url, method, data }) {
    // Wrapping with URL for piecemeal picking of parsed pieces
    const parsed = new URL(url);
    const [ service, region ] = parsed.host.split(".");
    const signed = sign({
        method,
        service,
        region,
        url,
        // Hardcoded (Content-Type updated per the change described above)
        headers: {
            Host: parsed.host,
            "Content-Type": "application/x-amz-json-1.1",
            "X-Amz-Target": "Kinesis_20131202.PutRecord"
        },
        body: data,
    }, credentials);
    return signed;
}
So I'm passing in an object that is partially serialized (at least Data is) and when I send this to the service, I get a response of:
{"__type":"SerializationException"}
which is at least marginally helpful, because it tells me that my input is technically incorrect. However, I've done a few things in an attempt to correct this:
- I've run JSON.stringify on the entire payload.
- I've changed my Data key to just be a string value to see if it would go through.
- I've tried running JSON.stringify on Data and then running btoa, because I read on another post that that worked for someone.
But I'm still getting the same error. I feel like I'm so close. Can you spot anything I might be missing, or something I haven't tried? I've gotten sporadic UnknownOperationExceptions, but right now this SerializationException has me stumped.
Edit 2:
As it turns out, Kinesis will only accept a base64-encoded string for Data. This is probably a nicety that the aws-sdk provides for you, but essentially all it took was Data: btoa(JSON.stringify({ task: "data" })) in the payload to get it working.
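For reference, the payload that finally worked looked like this (a sketch of that fix):

var payload = {
    // Kinesis expects the Data blob as a base64-encoded string; btoa is available in the browser
    Data: btoa(JSON.stringify({ task: "Get something working in kinesis" })),
    PartitionKey: "1",
    StreamName: "MyKinesisStream"
}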
While I'm not certain this is the only issue, it seems like you are sending a request body that contains an incorrectly serialized (double-encoded) payload.
var obj = { foo: 'bar'};
JSON.stringify(obj) returns a string...
'{"foo": "bar"}' // the ' are not part of the string, I'm using them to illustrate that this is a thing of type string.
...and when parsed with a JSON parser, this returns an object.
{ foo: 'bar' }
However, JSON.stringify(JSON.stringify(obj)) returns a different string...
'"{\"foo\": \"bar\"}"'
...but when parsed, this returns a string.
'{"foo": "bar"}'
The service endpoint expects to parse the body and get an object, not a string... so, parsing the request body (from the service's perspective) doesn't return the correct type. The error seems to be a failure of the service to parse your request at a very low level.
In your code, body: JSON.stringify(data) should just be body: data, because earlier you already created a JSON string with data: JSON.stringify(payload).
As written, you are effectively setting body to JSON.stringify(JSON.stringify(payload)).
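Concretely, a corrected version might look like this (a sketch; the point is to stringify the payload exactly once):

// Option A: keep the stringify at the call site...
var signed = signer({
    method: "POST",
    url: "https://kinesis.us-west-2.amazonaws.com",
    data: JSON.stringify(payload)
});
// ...and inside signer(), pass it through untouched:
//     body: data,       // instead of body: JSON.stringify(data)

// Option B: pass the plain object as data and keep body: JSON.stringify(data)
// inside signer(). Doing both is what produces the double-encoded body shown above.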
Not sure if you ever figured this out, but this question pops up on Google when searching for how to do this. The one piece I think you are missing is that the Record Data field must be base64 encoded. Here's a chunk of NodeJS code that will do this (using PutRecords).
And for anyone asking, why not just use the SDK? I currently must stream data from a cluster that cannot be updated to a NodeJS version that the SDK requires due to other dependencies. Yay.
const https = require('https')
const aws4 = require('aws4')

const request = function (o) {
    https.request(o, function (res) { res.pipe(process.stdout) }).end(o.body || '')
}

const _publish_kinesis = function (logs) {
    const kin_logs = logs.map(function (l) {
        let blob = JSON.stringify(l) + '\n'
        let buff = Buffer.from(blob, 'binary');
        let base64data = buff.toString('base64');
        return {
            Data: base64data,
            PartitionKey: '0000'
        }
    })

    while (kin_logs.length > 0) {
        let data = JSON.stringify({
            Records: kin_logs.splice(0, 250),
            StreamName: 'your-streamname'
        })

        let _request = aws4.sign({
            hostname: 'kinesis.us-west-2.amazonaws.com',
            method: 'POST',
            body: data,
            path: '/?Action=PutRecords',
            headers: {
                'Content-Type': 'application/x-amz-json-1.1',
                'X-Amz-Target': 'Kinesis_20131202.PutRecords'
            },
        }, {
            secretAccessKey: "****",
            accessKeyId: "****"
            // sessionToken: "<your-session-token>"
        })

        request(_request)
    }
}

var logs = [{
    'timeStamp': new Date().toISOString(),
    'value': 'test02',
}, {
    'timeStamp': new Date().toISOString(),
    'value': 'test01',
}]

_publish_kinesis(logs)

how can I get raw body string in Lambda (API gateway Lambda proxy)

After enabling CORS on API Gateway, here is the request I send to the HTTP endpoint:
$.ajax({
    type: 'put',
    url: 'https://xxxxx.execute-api.us-east-1.amazonaws.com/dev/artist/application/julian_test',
    data: { params: { param1: "543543", param2: "fdasghdfghdf", test: "yes" } },
    success: function (msg) {
        console.log(msg);
    },
    error: function (msg) {
        console.log(msg);
    }
});
Here is the Lambda function (I'm using the Node serverless package, and there are no mistakes in the code):
module.exports.julian_test = (event, context, callback) => {
    console.log(event);
    console.log(event.body);
    var response_success = {
        statusCode: 200,
        headers: {
            "Access-Control-Allow-Origin": "*"
        },
        body: JSON.stringify({
            firstUser: {
                username: "Julian",
                email: "awesome"
            },
            secondUser: {
                username: "Victor",
                email: "hello world"
            },
            thirdUser: {
                username: "Peter",
                email: "nice"
            }
        })
    };
    callback(null, response_success);
};
The console.log(event.body) logs out:
params%5Bparam1%5D=543543&params%5Bparam2%5D=fdasghdfghdf&params%5Btest%5D=yes
which is not the format I want. I checked the OPTIONS Integration Request / body mapping template; here is the snapshot.
I tried deleting the "application/json" template, but after that I received the following response:
XMLHttpRequest cannot load https://xxxxxx.execute-api.us-east-1.amazonaws.com/dev/artist/application/julian_test. Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'null' is therefore not allowed access. The response had HTTP status code 500.
Does anyone know how to get a raw string request body in the back-end lambda? Please help!
The screenshot is for the OPTIONS method, which is used for CORS.
Your actual Lambda function call is made via the PUT method (per your $.ajax call). Please check the Integration Request body mapping templates under that method. If it is set up to pass through the body (which is the default), the body from your input request should be passed to Lambda as-is.
Please use the test feature in the API Gateway console to see how the input is transformed into the integration request. That should help you debug.
Two things here:
Firstly, under your API Gateway resource, go to Method Response and make sure you have a response for 200 OK mapped.
Secondly, under your API Gateway resource, go to Integration Request, under Body Mapping Templates select application/json and enter the following code in the template text editor:
{
    "method": "$context.httpMethod",
    "body": $input.json('$'),
    "headers": {
        #foreach($param in $input.params().header.keySet())
        "$param": "$util.escapeJavaScript($input.params().header.get($param))" #if($foreach.hasNext),#end
        #end
    },
    "queryParams": {
        #foreach($param in $input.params().querystring.keySet())
        "$param": "$util.escapeJavaScript($input.params().querystring.get($param))" #if($foreach.hasNext),#end
        #end
    },
    "pathParams": {
        #foreach($param in $input.params().path.keySet())
        "$param": "$util.escapeJavaScript($input.params().path.get($param))" #if($foreach.hasNext),#end
        #end
    }
}
When this is applied, you can access your request object as such: (Node.js)
console.log('Body:', event.body);
console.log('Headers:', event.headers);
console.log('Method:', event.method);
console.log('Params:', event.params);
console.log('Query:', event.query);
Took me a while to figure this out; props to Kenn Brodhagen for the explanation:
tutorial
The only way to get the real raw AWS Lambda request is via a Stream for the handler input.
As per documentation:
The input payload must be valid JSON but the output stream does not carry such a restriction. Any bytes are supported.
Example of streaming AWS Lambda input and output in Java:
public void handler(InputStream inputStream, OutputStream outputStream, Context context) throws IOException {
int letter;
while((letter = inputStream.read()) != -1)
{
outputStream.write(Character.toUpperCase(letter));
}
}
Tom's answer above adds everything except the raw request body. Adding this line to that mapping template allowed me to verify requests as per Slack's documentation (note the surrounding quotes, which keep the template's output valid JSON):
"rawBody": "$util.escapeJavaScript($input.body)"