I am trying to create an AWS Lambda function that reads rows from multiple Google Sheets documents using the Google Sheets API, merges them, and writes the result to another spreadsheet. To do so I followed all the necessary steps according to several tutorials:
Create credentials for the AWS user to have the key pair.
Create a Google Service Account, download the credentials.json file.
Share each necessary spreadsheet with the Google Service Account client_email.
When executing the program locally it works perfectly: it successfully logs in using the credentials.json file and reads and writes all the necessary documents.
However, when I upload it to AWS Lambda using the Serverless Framework and google-spreadsheet, the program fails silently at the authentication step. I've tried changing the permissions as recommended in this question, but it still fails. The credentials file is read properly and I can print it to the console.
This is the simplified code:
async function getData(spreadsheet, psychologistName) {
  await spreadsheet.useServiceAccountAuth(clientSecret);
  // It never gets to this point, it fails silently
  await spreadsheet.loadInfo();
  ... etc ...
}
async function main() {
  const promises = Object.entries(psychologistSheetIDs).map(async (psychologistSheetIdPair) => {
    const [psychologistName, googleSheetId] = psychologistSheetIdPair;
    const sheet = new GoogleSpreadsheet(googleSheetId);
    const psychologistScheduleData = await getData(sheet, psychologistName);
    return psychologistScheduleData;
  });
  // When all sheets are available, merge their data and write back in the joint view.
  Promise.all(promises).then(async (psychologistSchedules) => {
    ... merge the data ...
  });
}
module.exports.main = async (event, context, callback) => {
  const result = await main();
  return {
    statusCode: 200,
    body: JSON.stringify(result, null, 2),
  };
};
I solved it. While locally a Promise.all(promises).then(result => ...) eventually resolved and executed what was inside the then(), on AWS Lambda the handler returned before the promises were resolved.
This solved it:
const res = await Promise.all(promises);
mergeData(res);
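For completeness, here is a minimal sketch of the corrected main(), assuming the same getData, psychologistSheetIDs, and mergeData as above; the key change is that Promise.all is awaited before the handler returns:

async function main() {
  const promises = Object.entries(psychologistSheetIDs).map(async ([psychologistName, googleSheetId]) => {
    const sheet = new GoogleSpreadsheet(googleSheetId);
    return getData(sheet, psychologistName);
  });
  // Await all sheets before merging, so the Lambda does not return early.
  const res = await Promise.all(promises);
  return mergeData(res);
}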
I have a Next.js application where authentication is set up with the Auth0 Next.js SDK.
Currently the AUTH0_CLIENT_SECRET is being set as an environment variable when deploying.
I would like to use Google Cloud Secret Manager to get the AUTH0_CLIENT_SECRET during runtime and set it using the initAuth0 method.
I'm following this example: https://github.com/auth0/nextjs-auth0/blob/main/EXAMPLES.md#create-your-own-instance-of-the-sdk
But I can't figure out how to await the response from Secret Manager, since I need the secret ready when calling initAuth0({clientSecret...}), and I need that in place to set up the auth endpoints with auth0.handleAuth().
This is my attempt: /pages/api/auth/[...auth].ts
import { initAuth0 } from "@auth0/nextjs-auth0";
const asyncHandleAuth = async () => {
  const clientSecret = await getSecret("AUTH0_CLIENT_SECRET");
  const auth0 = initAuth0({
    clientSecret // The rest of the config is set with environment variables
  });
  return auth0.handleAuth();
};
export default asyncHandleAuth();
After some hair pulling I found the problem. Next.js expects the default export to be of type NextApiHandler, but I was returning Promise<NextApiHandler>.
I solved it by wrapping it in another function that takes the request and response arguments and uses them to call handleAuth before returning it.
This worked for me:
import { NextApiRequest, NextApiResponse } from "next";

const asyncHandleAuth =
  () => async (req: NextApiRequest, res: NextApiResponse) => {
    const clientSecret = await getSecret("AUTH0_CLIENT_SECRET");
    const auth0 = initAuth0({
      clientSecret, // The rest of the config is set with environment variables
    });
    return auth0.handleAuth()(req, res);
  };
export default asyncHandleAuth();
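For reference, a minimal sketch of the getSecret helper used above; the @google-cloud/secret-manager client and the GCP_PROJECT environment variable are assumptions on my part, not part of the original post:

import { SecretManagerServiceClient } from "@google-cloud/secret-manager";

const client = new SecretManagerServiceClient();

// Hypothetical helper: fetches the latest version of a named secret.
const getSecret = async (name: string): Promise<string> => {
  const [version] = await client.accessSecretVersion({
    name: `projects/${process.env.GCP_PROJECT}/secrets/${name}/versions/latest`,
  });
  return version.payload.data.toString("utf8");
};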
In the code you posted in your answer:
const clientSecret = await getSecret("AUTH0_CLIENT_SECRET");
you are already waiting until the secret is returned: your code will suspend on that line until getSecret finishes. As a consequence, the secret should be ready when calling the initAuth0 function.
Perhaps, according to your comments, the problem could be caused by your export. You are exporting the asyncHandleAuth function like this:
export default asyncHandleAuth();
But I think it should be instead:
export default asyncHandleAuth;
Your answer makes perfect sense: the actual problem is that you need to provide the appropriate arguments, the request and response representations, to your handler function to perform the actual invocation. But be aware that the proposed export default is still valid: in your code you are executing a function that returns the thing being exported. You could probably simplify it like this:
import { initAuth0 } from "@auth0/nextjs-auth0";
const asyncHandleAuth = async (req: NextApiRequest, res: NextApiResponse) => {
  const clientSecret = await getSecret("AUTH0_CLIENT_SECRET");
  const auth0 = initAuth0({
    clientSecret // The rest of the config is set with environment variables
  });
  return auth0.handleAuth()(req, res);
};
export default asyncHandleAuth;
Note that there is no need for the first arrow function.
I was working on something that would modify my promotional SMS messages. I read that this is possible via a CampaignHook in Pinpoint, but from the documentation I couldn't gather how it actually works. I followed it up to the point of adding permissions and linking the Pinpoint app ID. I have followed this link: https://github.com/Ryanjlowe/lambda-powered-pinpoint-templates
For some reason, I am not able to follow what I need to do on the Lambda (boto3) function side to make this work. Is there example code (Python) or a well-documented example for this? It would help me a lot. Thanks!
The Pinpoint developer guide describes how to set up a CampaignHook in the chapter "Customizing segments with AWS Lambda".
https://docs.aws.amazon.com/pinpoint/latest/developerguide/segments-dynamic.html
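In practice, the assignment step described in the quote below boils down to updating the campaign's settings through the UpdateCampaign API. As a rough sketch with the AWS SDK for JavaScript (the IDs and function ARN are placeholders; boto3's update_campaign takes the same request shape):

const AWS = require('aws-sdk');
const pinpoint = new AWS.Pinpoint();

// Hypothetical IDs; replace with your own.
pinpoint.updateCampaign({
  ApplicationId: 'YOUR_PINPOINT_APP_ID',
  CampaignId: 'YOUR_CAMPAIGN_ID',
  WriteCampaignRequest: {
    Hook: {
      LambdaFunctionName: 'arn:aws:lambda:us-east-1:123456789012:function:my-hook',
      Mode: 'FILTER', // Pinpoint waits for the function's return value
    },
  },
}).promise().then(console.log).catch(console.error);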
"To assign a Lambda function to a campaign, you define the campaign's CampaignHook settings by using the Campaign resource in the Amazon Pinpoint API. These settings include the Lambda function name. They also include the CampaignHook mode, which specifies whether Amazon Pinpoint receives a return value from the function."
The docs show an example Lambda function:
'use strict';

exports.handler = (event, context, callback) => {
  for (var key in event.Endpoints) {
    if (event.Endpoints.hasOwnProperty(key)) {
      var endpoint = event.Endpoints[key];
      var attr = endpoint.Attributes;
      if (!attr) {
        attr = {};
        endpoint.Attributes = attr;
      }
      attr["CreditScore"] = [Math.floor(Math.random() * 200) + 650];
    }
  }
  console.log("Received event:", JSON.stringify(event, null, 2));
  callback(null, event.Endpoints);
};
"In this example, the handler iterates through each endpoint in the event.Endpoints object, and it adds a new attribute, CreditScore, to the endpoint. The value of the CreditScore attribute is simply a random number."
My Express server has a credentials.json file containing credentials for a Google service account. These credentials are used to get a JWT from Google, and that JWT is used by my server to update Google Sheets owned by the service account.
var jwt_client = null;

// Load credentials from a local file.
fs.readFile('./private/credentials.json', (err, content) => {
  if (err) return console.log('Error loading client secret file:', err);
  // Authorize a client with credentials, then call the Google Sheets API.
  authorize(JSON.parse(content));
});

// Get a JWT client.
function authorize(credentials) {
  const {client_email, private_key} = credentials;
  jwt_client = new google.auth.JWT(client_email, null, private_key, SCOPES);
}

var sheets = google.sheets({version: 'v4', auth: jwt_client});
// At this point I can call the Google API and make authorized requests.
The issue is that I'm trying to move from Node/Express to the Serverless Framework on AWS. I'm using the same code but getting 403 Forbidden.
errors:
[ { message: 'The request is missing a valid API key.',
domain: 'global',
reason: 'forbidden' } ] }
Research has pointed me to many things, including AWS Cognito, storing credentials in environment variables, and custom authorizers in API Gateway. All of these seem viable to me, but I am new to AWS, so any advice on which direction to take would be greatly appreciated.
It is late, but this may help someone else. Here is my working code.
const {google} = require('googleapis');
const KEY = require('./keys');
const _ = require('lodash');

const sheets = google.sheets('v4');

const jwtClient = new google.auth.JWT(
  KEY.client_email,
  null,
  KEY.private_key,
  [
    'https://www.googleapis.com/auth/drive',
    'https://www.googleapis.com/auth/drive.file',
    'https://www.googleapis.com/auth/spreadsheets'
  ],
  null
);

async function getGoogleSheetData() {
  await jwtClient.authorize();
  const request = {
    // The ID of the spreadsheet to retrieve data from.
    spreadsheetId: 'put your id here',
    // The A1 notation of the values to retrieve.
    range: 'put your range here', // TODO: Update placeholder value.
    auth: jwtClient,
  };
  return await sheets.spreadsheets.values.get(request);
}
Then call it in the Lambda handler. One thing I don't like is storing key.json as a file in the project root; I will try to find a better place to keep it.
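One possible alternative, an assumption on my part rather than part of this answer, is to keep the key JSON in an environment variable (here called GOOGLE_SERVICE_ACCOUNT_KEY) instead of a keys file in the project root:

// Hypothetical: the full service-account JSON stored in an environment variable.
const KEY = JSON.parse(process.env.GOOGLE_SERVICE_ACCOUNT_KEY);

// Some deployment tools escape the newlines in the private key; undo that.
KEY.private_key = KEY.private_key.replace(/\\n/g, '\n');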
I have a Lambda function that performs a series of actions, and a React application that triggers it.
Is there a way I can send a partial response from the Lambda function after each action is complete?
const testFunction = async (event, context, callback) => {
  let partialResponse1 = await action1(event);
  // send partial response to client
  let partialResponse2 = await action2(partialResponse1);
  // send partial response to client
  let partialResponse3 = await action3(partialResponse2);
  // send partial response to client
  let response = await action4(partialResponse3);
  // send final response
};
Is this possible in Lambda functions? If so, how can we do it? Any reference docs or sample code would be a great help.
Thanks.
Note: This is a fairly simple case of showing a loader with a percentage on the client side. I don't want to overcomplicate things with SQS or Step Functions.
I am still looking for an answer for this.
From what I understand, you're using API Gateway + Lambda and are looking to show the progress of the Lambda via the UI.
Since each step must finish before the next step begins, I see no reason not to call the Lambda 4 times, or to split the Lambda into 4 separate Lambdas.
E.g.:
// Not real syntax!
try {
  res1 = await ajax.post(/process, {stage: 1, data: ... });
  out(stage 1 complete);
  res2 = await ajax.post(/process, {stage: 2, data: res1});
  out(stage 2 complete);
  res3 = await ajax.post(/process, {stage: 3, data: res2});
  out(stage 3 complete);
  res4 = await ajax.post(/process, {stage: 4, data: res3});
  out(stage 4 complete);
  out(process finished);
} catch(err) {
  out(stage {$err.stage-number} failed to complete);
}
If you still want all 4 calls to be executed during the same Lambda execution, you may do the following (this applies especially if the process is expected to be very long, since it's usually not good practice to keep a long-hanging HTTP transaction open).
You may implement it by saving the "progress" in a database, and when the process is complete saving the results to the database as well.
All you need to do then is query the status every X seconds.
// Not real syntax
Gateway-API --> lambda1 - startProcess(): returns ID {
  uuid = randomUUID();
  write to dynamoDB { status: starting }.
  send sqs-message-to-start-process(data, uuid);
  return response { uuid: uuid };
}

SQS --> lambda2 - execute(): returns void {
  try {
    let partialResponse1 = await action1(event);
    write to dynamoDB { status: action 1 complete }.
    let partialResponse2 = await action2(partialResponse1);
    write to dynamoDB { status: action 2 complete }.
    let partialResponse3 = await action3(partialResponse2);
    write to dynamoDB { status: action 3 complete }.
    let response = await action4(partialResponse3);
    write to dynamoDB { status: action 4 complete, response: response }.
  } catch(err) {
    write to dynamoDB { status: failed, error: err }.
  }
}

Gateway-API --> lambda3 - getStatus(uuid): returns status {
  return status from dynamoDB (uuid);
}
Your UI Code:
res = ajax.get(/startProcess);
uuid = res.uuid;
in interval every X (e.g. 3) seconds: {
  status = ajax.get(/getStatus?uuid=uuid);
  show(status);
  if (status.error) {
    handle(status.error) and break;
  }
  if (status.response) {
    handle(status.response) and break;
  }
}
Just remember that Lambdas cannot exceed 15 minutes of execution time. Therefore, you need to be 100% certain that whatever the process does, it never exceeds this hard limit.
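As a concrete sketch of the getStatus Lambda from the pseudocode above (the table name and key shape are assumptions):

const AWS = require('aws-sdk');
const db = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
  // Hypothetical table keyed by the process uuid.
  const { Item } = await db.get({
    TableName: 'process-status',
    Key: { uuid: event.queryStringParameters.uuid },
  }).promise();
  return { statusCode: 200, body: JSON.stringify(Item) };
};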
What you are looking for is to have the response exposed as a stream that you can write to and flush.
Unfortunately, it's not available in Node.js:
How to stream AWS Lambda response in node?
https://docs.aws.amazon.com/lambda/latest/dg/programming-model.html
But you can still do the streaming if you use Java:
https://docs.aws.amazon.com/lambda/latest/dg/java-handler-io-type-stream.html
package example;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;

public class Hello implements RequestStreamHandler {
    // RequestStreamHandler requires this exact method name.
    public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context) throws IOException {
        int letter;
        while ((letter = inputStream.read()) != -1) {
            outputStream.write(Character.toUpperCase(letter));
        }
    }
}
Aman,
You can push the partial outputs into SQS and read the SQS messages to process them. This is a simple and scalable architecture. AWS provides SQS SDKs in different languages, for example JavaScript, Java, and Python.
Reading from and writing to SQS is very easy using the SDK, and it can be implemented server-side or in your UI layer (with proper IAM permissions).
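For illustration, a minimal sketch of pushing one partial output to SQS with the JavaScript SDK; the queue URL is a placeholder:

const AWS = require('aws-sdk');
const sqs = new AWS.SQS();

// Hypothetical helper called after each action completes.
async function publishPartial(stage, data) {
  await sqs.sendMessage({
    QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/partial-responses',
    MessageBody: JSON.stringify({ stage, data }),
  }).promise();
}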
I found that AWS Step Functions may be what you need:
AWS Step Functions lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly.
Check this link for more detail:
In our example, you are a developer who has been asked to create a serverless application to automate handling of support tickets in a call center. While you could have one Lambda function call the other, you worry that managing all of those connections will become challenging as the call center application becomes more sophisticated. Plus, any change in the flow of the application will require changes in multiple places, and you could end up writing the same code over and over again.
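As a sketch, the four actions from the question could become a minimal state machine definition like the following (the Lambda ARNs are placeholders); each state's output is passed as input to the next state:

{
  "StartAt": "Action1",
  "States": {
    "Action1": { "Type": "Task", "Resource": "arn:aws:lambda:region:account:function:action1", "Next": "Action2" },
    "Action2": { "Type": "Task", "Resource": "arn:aws:lambda:region:account:function:action2", "Next": "Action3" },
    "Action3": { "Type": "Task", "Resource": "arn:aws:lambda:region:account:function:action3", "Next": "Action4" },
    "Action4": { "Type": "Task", "Resource": "arn:aws:lambda:region:account:function:action4", "End": true }
  }
}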
I'm facing some timeout problems with a "middleware" service (from now on, file-service) developed with NestJS and AWS S3.
The file-service has two main purposes:
Act as an object storage abstraction layer, allowing the backend to upload files to different storage services completely transparently to the user.
Receive signed tokens as a URL query parameter with file/object information, verify access to the resource, and stream it.
Upload works without problems.
Downloading small files causes no problems either.
But when I try to download large files (> 50 MB), after a few seconds the connection breaks down because of a timeout and, as you can figure out, the download fails.
I've been spending some days looking for solutions and reading docs. Here are some of them:
About KeepAlive
Use an instance of S3 each time
But nothing works.
Here is the code:
Storage definition class
import * as AWS from 'aws-sdk';
import { Readable } from 'stream';

export class S3Storage implements StorageInterface {
  config: any;
  private s3;

  constructor() {}

  async initialize(config: S3ConfigInterface): Promise<void> {
    this.config = config;
    // Initialize the S3 configuration before creating the client,
    // so the credentials are picked up at construction time.
    AWS.config.update({
      accessKeyId: config.accessKeyId,
      secretAccessKey: config.secretAccessKey,
      region: config.region
    });
    this.s3 = new AWS.S3();
  }

  async downloadFile(target: FileDto): Promise<Readable> {
    const params = {
      Bucket: this.config.Bucket,
      Key: target.sourcePath
    };
    return this.s3.getObject(params).createReadStream();
  }
}
Download method
private async downloadOne(target: FileDto, request, response) {
  const storage = await this.provider.getStorage(target.datasource);
  response.setHeader('Content-Type', mime.lookup(target.filename) || 'application/octet-stream');
  response.setHeader('Content-Disposition', `filename="${path.basename(target.filename)}";`);
  const stream = await storage.downloadFile(target);
  stream.pipe(response);
  // Await the download and exit.
  await new Promise((resolve, reject) => {
    stream.on('end', () => {
      resolve(`${target.filename} has been downloaded`);
    });
    stream.on('error', () => {
      reject(`${target.filename} could not be downloaded`);
    });
  });
}
If anyone has faced the same (or a similar) issue, or has any ideas (useful or not), I would appreciate any help or advice.
Thank you in advance.
I had the same issue, and here is how I solved it on my side: instead of processing the file by directly getting the stream from S3, I decided to download the content to a temp file (on the Amazon backend server for my API) and stream from that temp file instead. Afterwards, I removed the temp file in order not to fill the hard drive.
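A minimal sketch of that approach in Node.js (the temp-file naming and the pipeline-based streaming are assumptions on my part; stream/promises requires Node 15+):

const fs = require('fs');
const os = require('os');
const path = require('path');
const { pipeline } = require('stream/promises');

async function downloadViaTempFile(s3, params, response) {
  const tmpPath = path.join(os.tmpdir(), path.basename(params.Key));
  // 1. Download the S3 object to a local temp file.
  await pipeline(s3.getObject(params).createReadStream(), fs.createWriteStream(tmpPath));
  // 2. Stream the temp file to the client.
  await pipeline(fs.createReadStream(tmpPath), response);
  // 3. Clean up so the temp directory does not fill up.
  await fs.promises.unlink(tmpPath);
}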