Rest-API call fails in lambda function - amazon-web-services

I'm new to Lambda functions in AWS and I need some suggestions to figure out the nature of the problem.
The AWS Lambda function is written in JavaScript on the Node.js 12.x runtime.
I set up a local development environment based on SAM (sam cli/aws cli/docker/IntelliJ) on Ubuntu 18.04 and macOS Catalina, and a simple basic Lambda function works on both systems.
I can set up logs and see them via IntelliJ when Docker runs.
The function was created using the sam init command from a terminal and selecting a simple hello world template.
I added a REST API call to it.
Nothing fancy, using request 2.88.2 (I know it's deprecated and I did try other ways; they all fail anyway, so I'm sticking with request for now).
Basically what is happening is that the call to the API "seems" not to happen at all.
Logs placed before and after the API call are showing up.
Logs inside the API call callback, meant to show the errors or results, never show up.
So far, in only one case was I able to see an error message coming from the API: when I removed the URI.
As expected, the API returned an error message saying: invalid URI.
Otherwise NOTHING. Here is some code.
This function is called from the lambda handler.
function getToken() {
    const request = require('request');
    const qs = require('querystring');
    console.log("getToken function called");
    let bodyData = qs.stringify({
        username: 'test',
        password: 'xxxxx',
        grant_type: 'password'
    });
    console.log("getToken bodyData : " + bodyData);
    let options = {
        url: "https://blahblahblah/function",
        method: 'POST',
        headers: {
            'Content-Type': 'application/x-www-form-urlencoded',
            'Authorization': 'Basic xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
        },
        form: bodyData
    };
    console.log("getToken options : " + options);
    var request1 = request(options, function(err, response, body) {
        console.log("returned from API call");
        if (err) {
            console.log("Error from getToken : " + err);
            return null;
        } else {
            console.log("Answer from getToken : " + body);
            return body;
        }
    });
}
I tested the connection and the API using Postman and it is working.
All the logs inside the request callback NEVER show up.
No matter how I change the options (I tried many, many different ways).
What am I doing wrong?
Any suggestion on how to track down this problem?
Thanks
STeve

This is the expected behaviour, because the function getToken is not waiting for the HTTP request to complete. You should either convert the function to use a promise/async-await or simply pass a callback.
Promise
async function getToken() {
    ...
    return new Promise((resolve, reject) => {
        request(options, function (err, response, body) {
            console.log("returned from API call");
            if (err) {
                console.log("Error from getToken : " + err);
                resolve(null);
            } else {
                console.log("Answer from getToken : " + body);
                resolve(body);
            }
        });
    });
}
// Then in the lambda Handler:
// await getToken()
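For context, here is a minimal sketch of what the Lambda handler itself could look like once getToken returns a promise; the handler name and the response shape are illustrative, not taken from your code:
exports.lambdaHandler = async (event) => {
    // getToken() resolves with the response body, or null on error
    const token = await getToken();
    if (!token) {
        return { statusCode: 500, body: 'Could not obtain token' };
    }
    return { statusCode: 200, body: token };
};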
Callback
function getToken(callback) {
    ...
    request(options, function (err, response, body) {
        console.log("returned from API call");
        if (err) {
            console.log("Error from getToken : " + err);
            callback(null);
        } else {
            console.log("Answer from getToken : " + body);
            callback(body);
        }
    });
}
// Then in the lambda handler:
getToken((body) => {
    // handle the http request response
})

Related

AWS Lambda function invoked from another Lambda function runs stale request in queue

I have a Lambda function (A) that calls another Lambda function (B) and returns while the other one is still executing. Everything works perfectly locally (using sls offline), but when I deploy it, the following behavior occurs:
I call A for the first time. I see on Cloudwatch that A runs successfully and returns, but nothing on B, which apparently doesn't run.
I call A another time. Cloudwatch now shows that both A and B run, but the logs show that the invocation of B had request parameters corresponding to the first run of A.
Ensuing runs of A result in runs of B with request params from the previous run of A (e.g. B always lags behind by one and never runs the current invocation like it is supposed to, almost like each call of A pushes out the previous B invocation from a queue of size 1)
Some additional info:
No VPCs are involved (if that makes a difference?)
I am using sls (the Serverless Framework)
Anyone have any idea why this is happening? I know that AWS async invocations go into a queue, but I'm pretty sure they should just run ASAP if there's nothing in the queue.
Here's the code:
module.exports.A = async (event, context) => {
    context.callbackWaitsForEmptyEventLoop = false;
    try {
        ... some code ...
        await connectToDatabase();
        ... some code ...
        const newReq = req;
        newReq['newSessionId'] = savedSession._id.toString();
        const params = {
            FunctionName: config.apiName + '-' + config.apiVersion + '-B',
            InvocationType: 'Event',
            Payload: JSON.stringify(newReq)
        };
        lambda.invoke(params, (err, res) => {
            if (err) {
                console.log(err);
            } else {
                console.log(res);
            }
        });
        return {
            statusCode: 200,
            headers: {
                'Access-Control-Allow-Origin' : '*',
                'Access-Control-Allow-Credentials' : true
            },
            body: JSON.stringify(savedSession)
        }
    } catch (err) {
        console.log(err);
        return {
            statusCode: err.statusCode || 500,
            headers: { 'Content-Type': 'text/plain' },
            body: 'Could not create session.'
        }
    }
}
module.exports.B = async (event, context) => {
    context.callbackWaitsForEmptyEventLoop = false;
    try {
        ... some code ....
        // Connect to Atlas
        await connectToDatabase();
        ... some code ...
        for (let i = 0; i < applicants.length; i++) {
            ... some code ...
            const savedResume = await DownloadResume(resumeId, accessToken, fileName, oauthSrcs.GSCRIPT);
            conversionPromises.push(ConvertAndExportToS3(fileName, savedResume, sessionKey));
        }
        await Promise.all(conversionPromises);
        ... some code ...
        return {
            statusCode: 200,
            headers: {
                'Access-Control-Allow-Origin' : '*',
                'Access-Control-Allow-Credentials' : true
            },
            body: JSON.stringify('Converted all resumes successfully!')
        }
    } catch (err) {
        console.log(err);
        return {
            statusCode: err.statusCode || 500,
            headers: { 'Content-Type': 'text/plain' },
            body: 'Could not create session.'
        }
    }
}
I see you are returning from A without waiting for lambda.invoke to finish. I think you should wait.
To clarify, you are indeed invoking an asynchronous Lambda (meaning you don't have to wait for the child Lambda to finish executing), but the invoke request itself is a synchronous call that needs to complete. I'm not sure what would happen if you return from A in the middle of B's invocation.
Also, did you wait long enough to be sure that the 2nd run of A is what triggers B? It could be that a cold start on B makes it look that way if you don't wait long enough after the first call.
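One way to wait, sketched under the assumption that you are on AWS SDK v2 (where .promise() is available on SDK calls), is to await the invoke request inside A's async handler before returning:
// Wait for the invoke request to be accepted before A returns.
// InvocationType 'Event' still means B runs asynchronously; A just
// no longer returns while the request is in flight.
const invokeResult = await lambda.invoke({
    FunctionName: config.apiName + '-' + config.apiVersion + '-B',
    InvocationType: 'Event',
    Payload: JSON.stringify(newReq)
}).promise();
console.log(invokeResult);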

testing multiple http requests using mocha

I've been trying for days to create the test for this case using mocha:
app.post('/approval', function(req, response){
request.post('https://git.ecommchannel.com/api/v4/users/' + req.body.content.id + '/' + req.body.content.state + '?private_token=blabla', function (error, resp, body) {
if (resp.statusCode == 201) {
//do something
} else {
response.send("failed"), response.end();
}
});
} else {
response.send("failed"), response.end();
}
});
});
I've tried several ways: using supertest to test '/approval' and using nock to mock the post request to the git API. But it always turns out that "statusCode" is undefined. I think that's because the request to the git API in index.js is not inside a certain function(?)
So I can't implement something like this:
https://codeburst.io/testing-mocking-http-requests-with-nock-480e3f164851 or
https://scotch.io/tutorials/nodejs-tests-mocking-http-requests
const nockingGit = () => {
    nock('https://git.ecommchannel.com/api/v4/users')
        .post('/1/yes', 'private_token=blabla')
        .reply(201, { "statusCode": 201 });
};
it('approval', (done) => {
    let req = {
        content: {
            id: 1,
            state: 'yes'
        },
        _id: 1
    }
    request(_import.app)
        .post('/approval')
        .send(req)
        .expect(200)
        .expect('Content-Type', /html/)
        .end(function (err, res) {
            if (!err) {
                nockingGit();
            } else {
                done(err);
            }
        });
    done();
})
Then I tried to use supertest as a promise:
it('approve-block-using-promise', () => {
    return promise(_import.app)
        .post('/approval')
        .send(req = {
            content: {
                id: 1,
                state: 'yes'
            },
            _id: 1
        })
        .expect(200)
        .then(function(res){
            return promise(_import.app)
                .post("https://git.ecommchannel.com/api/v4/users/")
                .send('1/yes', 'private_token=blabla')
                .expect(201);
        })
})
But it gives the error: ECONNREFUSED: Connection refused. I didn't find any solution for that error. Some sources said that it needs done(), but that gives another error message: 'ensure "done()" is called' >.<
So then I found another way, using async (https://code-examples.net/en/q/141ce32):
it('should respond to only certain methods', function(done) {
    async.series([
        function(cb) {
            request(_import.app).post('/approval')
                .send(req = {
                    content: {
                        id: 1,
                        state: 'yes'
                    },
                    _id: 1
                })
                .expect(200, cb);
        },
        function(cb) {
            request(_import.app).post('/https://git.ecommchannel.com/api/v4/users/').send('1/yes', 'private_token=blabla').expect(201, cb);
        },
    ], done);
});
and it gives this error: expected 201 "Created", got 404 "Not Found". Well, if I open https://git.ecommchannel.com/api/v4/users/1/yes?private_token=blabla in the browser it does return 404. But what I expected is that I've injected a 201 response from the unit test, so whatever the actual response is, the statusCode is supposed to be 201, right?
But then, since it gives that error, does that mean the unit test really sends the request to the API?
Please help me solve this: how do I test the first code I shared?
I'm really new to unit testing.
There are a few things wrong with your posted code. I'll try to list them out, but I'm also including a full, passing example below.
First off, your call to git.ecommchannel in the controller is a POST with no body. While this isn't causing the errors you're seeing and is technically not incorrect, it is odd, so you should double-check what data you should be sending.
Next, I'm assuming this was a copy/paste issue when you created the question, but the callback for the request in your controller is not valid JS. The brackets don't match up and the send "failed" is there twice.
Your Nock setup had two issues. First, the argument to nock should only have the origin, none of the path, so /api/v4/users had to be moved into the first argument of the post method. The other issue was with the second argument passed to post, which is an optional match against the POST body. As stated above, you aren't currently sending a body, so Nock will always fail to match and replace that request. In the example below, the private_token has been moved to match against the query string of the request, as that is what was actually happening.
The calling of nockingGit was happening too late. Nock needs to register the mock before you use Supertest to call your Express app. You have it being called in the end method; by that time it's too late.
The test labeled approve-block-using-promise has an issue with the second call to the app. It's calling post via Supertest on the Express app; however, the first argument to that post method is the path of the request you're making to your own app. It has nothing to do with the call to git.ecommchannel, so in that case your Express app should have returned a 404 Not Found.
const express = require('express')
const nock = require('nock')
const request = require('request')
const supertest = require('supertest')

const app = express()
app.use(express.json())

app.post('/approval', function(req, response) {
    const url = 'https://git.ecommchannel.com/api/v4/users/' + req.body.content.id + '/' + req.body.content.state
    request.post({
        url,
        qs: {private_token: 'blabla'}
        // body: {} // no body?
    },
    function(error, resp, body) {
        if (error) {
            response.status(500).json({message: error.message})
        } else if (resp.statusCode === 201) {
            response.status(200).send("OK")
        } else {
            response.status(500).send("failed").end();
        }
    });
});

const nockingGit = () => {
    nock('https://git.ecommchannel.com')
        .post('/api/v4/users/1/yes')
        .query({private_token: 'blabla'})
        .reply(201, {"data": "hello world"});
};

it('approval', (done) => {
    const reqPayload = {
        content: {
            id: 1,
            state: 'yes'
        },
        _id: 1
    }
    nockingGit();
    supertest(app)
        .post('/approval')
        .send(reqPayload)
        .expect(200)
        .expect('Content-Type', /html/)
        .end(function(err) {
            done(err);
        })
})

Postman: how to send asynchronous request by script

I have two requests, A and B, in Postman. I want to send request A first, then send request B while request A is still waiting for its response. It's quite easy to do this manually because request A takes 15s to finish.
But is there any way I can do this automatically? I am going to test this case a lot.
I have tried to use the runner in Postman but it always waits for request A to finish before sending request B.
After that I found a document about sending asynchronous requests in Postman here.
I wrote a script that uses pm.sendRequest to send request B and put that script in the pre-request script of request A.
let confirmRequest = {
    url: url + "/confirm",
    method: "POST",
    body: {
        "requestId": (new Date()).getTime(),
        "statusCode": "0",
    }
}
setTimeout(function() {
    pm.sendRequest(confirmRequest, function (err, res) {
        console.log(err ? err : res.json());
    });
}, 1500);
The problem is that even though I've wrapped it inside a setTimeout function, request A still waits for the pre-request script to finish first. So in the end request B is sent before request A.
Is there any solution to this problem?
I tried but could not get Postman or Newman to process requests asynchronously. I found it easier to write Node.js code using async/await/promise concepts. Here is the sample code:
Sample Code that works for me:
// request and fs are needed by the code below
const request = require('request');
const fs = require('fs');

var httpRequest = "your raw request body";
var headersOpt = {
    "content-type": "application/json",
};

const promisifiedRequest = function(options) {
    return new Promise((resolve, reject) => {
        request(options, (error, response, body) => {
            if (response) {
                return resolve(response);
            }
            if (error) {
                return reject(error);
            }
        });
    });
};

var output;

async function MyRequest(httpRequest, j, fileName) {
    var options = {
        uri: "url",
        method: "POST",
        body: httpRequest,
        json: true,
        time: true,
        headers: headersOpt
    }
    try {
        console.log('request posted!');
        let response = await promisifiedRequest(options);
        console.log('response received!');
        output = output + ',' + response.elapsedTime;
        console.log(response.elapsedTime);
        console.log(output);
        //return response;
    } catch (err) {
        console.log(err);
    } finally {
        // this code is optional; it is used to save the response time for each request.
        try {
            fileName = (j+1) + '_' + fileName;
            fs.writeFile('/logs-async/scripts/output/' + fileName + '.csv', output, (err) => {
                // throws an error, you could also catch it here
                if (err) throw err;
            });
        } catch (err) {
            console.log(err);
        }
    }
}
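For the original Postman scenario (fire request B while request A is still in flight), a usage sketch could look like the following; the request bodies and file names are placeholders I've made up, not part of the code above:
// Start both requests without awaiting the first, so B is sent while A is still pending.
const runA = MyRequest(requestBodyA, 0, 'requestA');
const runB = MyRequest(requestBodyB, 1, 'requestB');
Promise.all([runA, runB]).then(() => console.log('both requests completed'));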

AWS - Sending 1000's of emails from Lambda / Node.js

I have a "main" Lambda function that gets triggered by SNS. It pulls a list of recipients from the database and it needs to send each of them a message based on a template, replacing things like first name and such.
The way I have it setup is I created another Lambda function called "email-send" which is subscribed to "email-send" topic. The "main" Lambda then loops through the recipients list and publishes messages to "email-send" with a proper payload (from, to, subject, message). This might eventually need to process 1000's of emails in a single batch.
Is this a good approach to my requirements? Perhaps Lambda/SNS is not a way to go? If so, what would you recommend.
With this setup I am running into issues where my "main" function finishes running and somehow "sns.publish" does not get triggered in my loop. I assume it's because I am not letting it finish, but I am not sure how to fix it, since it's in a loop.
Here is the snippet from my Lambda function:
exports.handler = (event, context, callback) => {
    // code is here to pull data into "data" array
    // process records
    for (var i = 0; i < data.length; i++) {
        var sns = new aws.SNS();
        sns.publish({
            Message: JSON.stringify({ from: data[i].from, to: data[i].to, subject: subject, body: body }),
            TopicArn: 'arn:aws:sns:us-west-2:XXXXXXXX:email-send'
        }, function(err, data) {
            if (err) {
                console.log(err.stack);
            } else {
                console.log('SNS pushed!');
            }
        });
    }
    context.succeed("success");
};
Thanks for any assistance.
Your code is doing this...
Begin calling sns.publish() 1000 times
Return (through context.succeed())
You didn't wait for those 1000 calls to finish!
What your code should do is...
Begin calling sns.publish() 1000 times
When all calls to sns.publish() have returned, then return. (context.succeed is old, so we should use callback() instead.)
Something like this...
// Instantiate the client only once instead of data.length times
const sns = new aws.SNS();

exports.handler = (event, context, callback) => {
    const snsCalls = [];
    for (var i = 0; i < data.length; i++) {
        snsCalls.push(sns.publish({
            Message: JSON.stringify({
                from: data[i].from,
                to: data[i].to,
                subject: subject,
                body: body
            }),
            TopicArn: 'arn:aws:sns:us-west-2:XXXXXXXX:email-send'
        }).promise());
    }
    return Promise.all(snsCalls)
        .then(() => callback(null, 'Success'))
        .catch(err => callback(err));
};
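If your runtime supports async handlers (Node.js 8.10 or later), a slightly simpler variant is possible; this is a sketch assuming the same data, subject and body variables as in the question:
// Sketch of an async-handler version; `data`, `subject` and `body` are assumed
// to be populated the same way as in the question.
exports.handler = async (event) => {
    const sns = new aws.SNS();
    const snsCalls = data.map(item => sns.publish({
        Message: JSON.stringify({ from: item.from, to: item.to, subject: subject, body: body }),
        TopicArn: 'arn:aws:sns:us-west-2:XXXXXXXX:email-send'
    }).promise());
    await Promise.all(snsCalls); // resolve only after every publish has completed
    return 'Success';
};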
I think a better approach is to use the AWS Lambda API directly.
That way, you don't need SNS.
For example:
var lambda = new AWS.Lambda({region: AWS_REGION});

function invokeWorkerLambda(task, callback) {
    var params = {
        FunctionName: WORKER_LAMBDA_NAME,
        InvocationType: 'Event',
        Payload: JSON.stringify({.....})
    };
    lambda.invoke(params, function(err, data) {
        if (err) {
            console.error(err, err.stack);
            callback(err);
        } else {
            callback(null, data);
        }
    });
}
As you can see, you don't need SNS to invoke a Lambda function.
Important: another suggestion is to create an array of invocations (functions) and later execute them as follows (see the sketch after the snippet for how that array might be built):
async.parallel(invocations, function(err) {
    if (err) {
        console.error(err, err.stack);
        callback(err);
    }
});
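As a rough sketch of how that invocations array could be built on top of the invokeWorkerLambda helper above (the recipients variable and the payload shape are illustrative, not from the original answer):
// Each element is a function taking a callback, as async.parallel expects.
var invocations = recipients.map(function(recipient) {
    return function(cb) {
        invokeWorkerLambda({ from: recipient.from, to: recipient.to }, cb);
    };
});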
Take a look at this link where I got a lot of knowledge about Lambda invocation: https://cloudonaut.io/integrate-sqs-and-lambda-serverless-architecture-for-asynchronous-workloads/

AWS Lambda function timing out

In my local mocha tests the following handler function works just fine. However, when I upload it to AWS (using the Serverless framework) it times out (unless you don't provide a uid parameter, in which case it correctly responds immediately).
What's particularly odd is that in less than 3 seconds (the timeout is set at 5 seconds), the job completes and even the "post-facto" log message is output, but somehow calling the callback is not completing the Lambda function.
Here's the CloudWatch log:
[CloudWatch log screenshot]
And here's the handler function:
export const handler = (event: IRequestInput, context: IContext, cb: IGatewayCallback) => {
    console.log('EVENT:\n', JSON.stringify(event, null, 2));
    const uid = _.get(event, 'queryStringParameters.uid', undefined);
    if (!uid) {
        cb(null, {
            statusCode: 412,
            body: 'no User ID was provided by frontend'
        });
        return;
    }
    oauth.getRequestToken()
        .then(token => {
            console.log('Token is:\n', JSON.stringify(token, null, 2));
            console.log('User ID: ', uid);
            token.uid = uid;
            return Promise.resolve(token);
        })
        .then((token) => {
            console.log('URL: ', token.url);
            cb(null, {
                statusCode: 200,
                body: token.url
            });
            console.log('post-facto');
        })
        .catch((err: PromiseError) => {
            console.log('Problem in getting promise token: ', err);
            cb(err.message);
        });
};
Add the following as the first line of your handler function:
context.callbackWaitsForEmptyEventLoop = false
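For clarity, this is where the flag would sit in the handler from the question; the rest of the body stays unchanged:
export const handler = (event: IRequestInput, context: IContext, cb: IGatewayCallback) => {
    // Return as soon as the callback fires, even if something (e.g. an open
    // connection) is still keeping the Node.js event loop busy.
    context.callbackWaitsForEmptyEventLoop = false;
    // ... rest of the handler unchanged ...
};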
I guess that you're using Lambda with the Node.js 0.10 runtime.
So you should add
context.done(null, 'Terminate Lambda');
to terminate the execution.
As the AWS Lambda documentation mentions:
"The callback is supported only in the Node.js runtime v4.3. If you are using the earlier runtime v0.10.42, you need to use the context methods (done, succeed, and fail) to properly terminate the Lambda function."
Please refer to this link for the above information.