I would like to use AWS Lambda to restart an app server in an Elastic Beanstalk environment.
After extensive googling and not finding anything, I finally found an article (which I have since lost) that described how to do what I want. I used it to set up a Lambda function and an EventBridge cron schedule to run the function. Looking at the CloudWatch logs for the EventBridge rule, I can see the function is running successfully. However, it is definitely not restarting my app server. When I use the console button to restart, the data I pulled in with another cron job gets updated and I can see the date on my main page change. This does not happen when the Lambda function runs, though it should.
Here is my function:
const AWS = require('aws-sdk');
const eb = new AWS.ElasticBeanstalk({apiVersion: '2010-12-01'});

exports.handler = async (event) => {
    const params = {
        EnvironmentName: 'my-prod-environment'
    };
    eb.restartAppServer(params, function(err, data) {
        if (err) {
            console.log(err, err.stack);
            return err;
        }
        else {
            console.log(data);
            return data;
        }
    });
};
The logs, unfortunately, tell me nothing. Despite the console.log statement, no error or data appears in my logs, so I don't know if the function is even completing. Does anyone have any idea why this won't work?
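A likely explanation, for what it's worth: the handler is declared async, but restartAppServer is invoked with a callback and never awaited, so Lambda freezes or tears down the execution environment as soon as the handler's promise resolves, before the request to Elastic Beanstalk (or either console.log) ever runs. A minimal sketch of a fix, assuming the same environment name, is to await the call's promise:

const AWS = require('aws-sdk');
const eb = new AWS.ElasticBeanstalk({ apiVersion: '2010-12-01' });

exports.handler = async (event) => {
    const params = {
        EnvironmentName: 'my-prod-environment'
    };
    // Awaiting .promise() keeps the handler alive until the API call completes,
    // so the restart actually happens and the response lands in the logs.
    const data = await eb.restartAppServer(params).promise();
    console.log(data);
    return data;
};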
I am new to AWS and cloud technology in general. So, please bear with me if the use case below is a trivial one.
Well, I have a table in Amazon DynamoDB which I am exporting to Amazon S3 using the exportTableToPointInTime API (ExportsToS3) on a scheduled basis, every day at 6 AM. It is being done using an AWS Lambda function in this way:
const AWS = require("aws-sdk");
exports.handler = async (event) => {
const dynamodb = new AWS.DynamoDB({ apiVersion: '2012-08-10' });
const tableParams = {
S3Bucket: '<s3-bucket-name>',
TableArn: '<DynamoDB-Table-ARN>',
ExportFormat: 'DYNAMODB_JSON'
};
await dynamodb.exportTableToPointInTime(tableParams).promise();
};
The CloudFormation template of the AWS Lambda function takes care of creating the Lambda roles, policies, etc., along with the scheduling using CloudWatch Events. This setup works, and the table is exported to the target Amazon S3 bucket every day at the scheduled time.
Now, the next thing I want is that after the export to Amazon S3 is complete, I should be able to invoke another Lambda function and pass the export status to that Lambda function, which does some processing with it.
The problem I am facing is that the above Lambda function finishes execution almost immediately, with the exportTableToPointInTime call returning a status of IN_PROGRESS.
I tried capturing the response of the above call like this:
const exportResponse = await dynamodb.exportTableToPointInTime(tableParams).promise();
console.log(exportResponse);
Output of this is -
{
    "ExportDescription": {
        "ExportArn": "****",
        "ExportStatus": "IN_PROGRESS",
        "StartTime": "2021-09-20T16:51:52.147000+05:30",
        "TableArn": "****",
        "TableId": "****",
        "ExportTime": "2021-09-20T16:51:52.147000+05:30",
        "ClientToken": "****",
        "S3Bucket": "****",
        "S3SseAlgorithm": "AES256",
        "ExportFormat": "DYNAMODB_JSON"
    }
}
(I have obfuscated some values in the log with ****.)
As can be seen, the exportTableToPointInTime API call does not wait for the table to be exported completely. If it did, it would return an ExportStatus of either COMPLETED or FAILED.
Is there a way I can design the above use case to achieve my requirement of invoking another Lambda function only when the export is actually complete?
As of now, I have tried a brute-force way to do it, and it works, but it definitely seems inefficient: it puts a sleep in a loop, and the Lambda function runs for the entire duration of the export, which has a cost impact.
const AWS = require("aws-sdk");

exports.handler = async (event) => {
    const dynamodb = new AWS.DynamoDB({ apiVersion: '2012-08-10' });
    const tableParams = {
        S3Bucket: '<s3-bucket-name>',
        TableArn: '<DynamoDB-Table-ARN>',
        ExportFormat: 'DYNAMODB_JSON'
    };
    const exportResponse = await dynamodb.exportTableToPointInTime(tableParams).promise();
    const exportArn = exportResponse.ExportDescription.ExportArn;
    let exportStatus = exportResponse.ExportDescription.ExportStatus;
    const sleep = (waitTimeInMs) => new Promise(resolve => setTimeout(resolve, waitTimeInMs));
    do {
        await sleep(60000); // wait 1 minute, then call the ListExports API
        const listExports = await dynamodb.listExports().promise();
        const filteredExports = listExports.ExportSummaries.filter(e => e.ExportArn === exportArn);
        const currentExport = filteredExports[0];
        exportStatus = currentExport.ExportStatus;
    } while (exportStatus === 'IN_PROGRESS');
    const lambda = new AWS.Lambda();
    const paramsForInvocation = {
        FunctionName: 'another-lambda-function',
        InvocationType: 'Event',
        Payload: JSON.stringify({ 'ExportStatus': exportStatus })
    };
    await lambda.invoke(paramsForInvocation).promise();
};
Is the above solution okay, or what can be done to improve it?
Thanks!!
One option to achieve this is to define a waiter in order to wait until a COMPLETED status is returned for the export.
As far as I can see, there are a few default waiters for DynamoDB already present, but there is not one for the export, so you'll need to write your own (you can use those already present as an example).
A good post describing how to use and write a waiter can be found here.
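For illustration, here is a minimal sketch of such a custom waiter, registered at runtime against the DescribeExport operation. This relies on the SDK's internal waiter registry, and the waiter name, delay, and attempt count are assumptions, not part of any shipped configuration:

const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB({ apiVersion: '2012-08-10' });

// Hypothetical waiter: polls describeExport until the export succeeds or fails.
// 'exportCompleted' is a made-up waiter name.
dynamodb.api.waiters['exportCompleted'] = {
    delay: 30,        // seconds between polls (assumed value)
    maxAttempts: 25,  // ~12.5 minutes, to stay inside the 15-minute Lambda cap
    operation: 'describeExport',
    acceptors: [
        { state: 'success', matcher: 'path', argument: 'ExportDescription.ExportStatus', expected: 'COMPLETED' },
        { state: 'failure', matcher: 'path', argument: 'ExportDescription.ExportStatus', expected: 'FAILED' }
    ]
};

// Usage, inside an async handler, with exportArn taken from the
// exportTableToPointInTime response:
// await dynamodb.waitFor('exportCompleted', { ExportArn: exportArn }).promise();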
This way, if the export takes less than 15 minutes, you'll be able to catch it within the Lambda limits without the need for a secondary Lambda.
If it takes longer than that, you'll need to decouple it, and you have multiple options, as suggested by @Schepo and @wahmd:
- using an S3 event on the other end
- using AWS EventBridge
- using SNS
- combinations of the above
Context: we want to export the DynamoDB table content into an S3 bucket and trigger a lambda when the export is complete.
In CloudTrail there's an ExportTableToPointInTime event that is sent when the export is started, but no event for when the export is finished.
A way to trigger a Lambda once the export is completed is by creating an S3 trigger on the export bucket (a sketch of an equivalent configuration in code follows the list below).
In particular:
- The creation event type is "complete multipart upload" (the other object-created event types do not seem to work, not sure why).
- I think the prefix can be omitted, but it's useful. It's composed of:
  - the table name, content, as the first part;
  - AWSDynamoDB as the second part, which is set automatically by the export tool.
- The suffix is the most important part. The last files created once the export is complete are manifest-summary.json and manifest-summary.md5, so we must set the suffix to one of these files.
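As a rough sketch, the same trigger could also be created with the SDK; the bucket name and function ARN below are placeholders:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Inside an async function; the bucket and function ARN are hypothetical.
await s3.putBucketNotificationConfiguration({
    Bucket: 'my-export-bucket',
    NotificationConfiguration: {
        LambdaFunctionConfigurations: [{
            LambdaFunctionArn: 'arn:aws:lambda:us-east-1:123456789012:function:on-export-complete',
            Events: ['s3:ObjectCreated:CompleteMultipartUpload'],
            Filter: {
                Key: {
                    FilterRules: [
                        { Name: 'prefix', Value: 'content/AWSDynamoDB/' },
                        { Name: 'suffix', Value: 'manifest-summary.json' }
                    ]
                }
            }
        }]
    }
}).promise();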
For an await call, you are missing the "async" keyword on the handler.
Change
exports.handler = (event) => {
to
exports.handler = async event => {
Since this is an await call, you need the 'async' keyword with it.
Let me know if this fixes your issue.
Also, I suspect you don't need .promise(), as the call might already return a promise. Anyway, please try with and without it in case it still doesn't work.
After the DynamoDB await call, you can invoke another Lambda. That would make sure your Lambda is invoked after the DynamoDB export call has completed.
To invoke the second Lambda, you can either:
- use the AWS SDK's invoke call, or
- use the PutEvents API with EventBridge.
The latter option is better, as it decouples both Lambdas, and the first Lambda does not have to wait until the second invocation is completed (reducing Lambda time and hence cost). A sketch follows below.
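For illustration, a minimal sketch of the EventBridge route; the source and detail-type names are made up:

const AWS = require('aws-sdk');
const eventbridge = new AWS.EventBridge();

// Inside an async handler, after the export call:
await eventbridge.putEvents({
    Entries: [{
        Source: 'my.dynamodb.export',       // hypothetical source name
        DetailType: 'ExportStatusChanged',  // hypothetical detail type
        Detail: JSON.stringify({ ExportStatus: 'COMPLETED' }),
        EventBusName: 'default'
    }]
}).promise();

An EventBridge rule matching that source and detail type would then invoke the second Lambda.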
I'm developing a chatbot on AWS Lex and I want to use a Lambda function to branch my intent.
In order to do so, I created a Lambda as follows:
exports.handler = async (event) => {
    console.log(event); // capture Lex params
    /*
    let { name, slots } = event.currentIntent
    if (slots.MeetingType.toLowerCase() === 'on-line') {
        return {
            dialogAction: {
                type: "ElicitSlot",
                intentName: name,
                slotToElicit: "InvitationLink",
                slots
            }
        }
    }
    return {
        dialogAction: {
            type: "Delegate",
            slots
        }
    }
    */
};
But as you can see, even when the function does nothing but log the Lex output, I'm getting this error message in Lex:
An error has occurred: The server encountered an error processing the
Lambda response
Any help would be appreciated.
Because you are trying to build a Lex chatbot using JavaScript, please refer to this use case in the AWS SDK for JavaScript Developer Guide. It will walk you through this use case:
Building an Amazon Lex chatbot
Once you get this working, you can port the logic to a Lambda function.
Amazon Lex is giving you this error message because the Lambda function has failed during execution.
Enable CloudWatch logging for your Lambda function and check the logs after Lex has called it. The logs should provide you with more specific details about what's caused the code to break/fail. From there you should have a better idea of how to resolve the issue.
Feel free to post the output from the logs if you need more assistance with debugging the issue.
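For what it's worth, one likely cause in this specific snippet: with the branching logic commented out, the handler returns undefined, which is not a response Lex can process. A minimal sketch of a well-formed Lex V1 response while debugging, passing the slots straight through:

exports.handler = async (event) => {
    console.log(event); // capture Lex params

    // Return a valid dialogAction rather than undefined so Lex
    // has something it can process.
    return {
        dialogAction: {
            type: "Delegate",
            slots: event.currentIntent.slots
        }
    };
};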
I have written a small Cloud Function in GCP which is subscribed to a Pub/Sub event. Whenever a Cloud Build is triggered, the function posts a message into the Slack channel over a webhook.
In the response, we get lots of details such as the trigger name, branch name, and variable details, but I am most interested in the build logs URL.
Currently, the build logs URL in the response looks like: logUrl: https://console.cloud.google.com/cloud-build/builds/899-08sdf-4412b-e3-bd52872?project=125205252525252
which requires GCP console access to check the logs.
Meanwhile, in the console there is an option, View Raw. Is it possible to get that direct URL in the event response, so that I can send it straight to Slack and anyone can access the logs directly without having GCP console access?
In your Cloud Build event message, you need to extract 2 values from the JSON message:
- logsBucket
- id
The raw file is stored at:
<logsBucket>/log-<id>.txt
So, you can get it easily in your function with the Cloud Storage client library (preferred solution) or with a simple HTTP GET call to the Storage API.
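For example, the two values can be pulled out of the Pub/Sub payload like this (a sketch; the message shape is the standard Cloud Build notification):

// In a Pub/Sub-triggered Cloud Function, the Cloud Build message
// arrives base64-encoded in event.data.
const build = JSON.parse(Buffer.from(event.data, 'base64').toString());
const rawLogPath = `${build.logsBucket}/log-${build.id}.txt`; // e.g. gs://<bucket>/log-<id>.txt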
If you need more guidance, let me know your dev language, I will send you a piece of code.
As @guillaume blaquiere suggested, I am just sharing the piece of code used in the Cloud Function to generate the signed URL of the Cloud Build logs:
// Assumes: const { Storage } = require('@google-cloud/storage');
//          const gcs = new Storage();
// 'build' is the decoded Cloud Build event; BUCKET_NAME is the logs bucket.
var filename = 'log-' + build.id + '.txt';
var file = gcs.bucket(BUCKET_NAME).file(filename);

const getURL = async () => {
    return new Promise((resolve, reject) => {
        file.getSignedUrl({
            action: 'read',
            expires: Date.now() + 76000000
        }, (err, url) => {
            if (err) {
                console.error(err);
                reject(err);
                return;
            }
            console.log("URL");
            resolve(url);
        });
    });
};

const signedUrl = await getURL();
If anyone is looking for the whole code, please follow this link: https://github.com/harsh4870/Cloud-build-slack-notification/blob/master/singedURL.js
I have this basic lambda that posts an image to a web server.
From the events in CloudWatch, I can successfully log anything that happens in that Lambda function:
From this log group (the Lambda function's), I clicked on Stream to AWS Lambda, chose a new Lambda function in which I expect to receive my logs, and didn't put in any filters at all so I would get all logs.
The Lambda is triggered properly, but the thing is, when I persist what I received in the event and context objects, I have all the CloudWatch log stream information but I don't see any of the logs.
What I get:
Do I need to specify a filter to see any logs at all? In the filter section, if I don't put in any filters and click on Test Filter, I get all the logs in the preview window, which seems to mean it should send the whole logs to my Lambda function. Also, it looked to me like the logs were that unreadable stream in AWSLogs, and that it was Base64, but I didn't get any results trying to convert it.
Yes, the logs are gzipped and base64-encoded, as mentioned by jarmod.
Sample code in Node.js for extracting them in Lambda would be:
var zlib = require('zlib');

exports.handler = (input, context, callback) => {
    // The payload is base64-encoded, gzipped JSON.
    var payload = Buffer.from(input.awslogs.data, 'base64');
    zlib.gunzip(payload, function(e, result) {
        if (e) {
            context.fail(e);
        } else {
            result = JSON.parse(result.toString());
            console.log(result);
        }
    });
};
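Once decoded, the payload follows the standard CloudWatch Logs subscription format, so the individual log lines live under logEvents. A small sketch of iterating them inside the gunzip callback:

// result has the shape { messageType, owner, logGroup, logStream,
// subscriptionFilters, logEvents: [{ id, timestamp, message }, ...] }
result.logEvents.forEach(function(logEvent) {
    console.log(logEvent.message);
});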
I'm attempting to integrate my Lambda function, which must run async because it takes too long, with API Gateway. I believe I must, instead of choosing the "Lambda" integration type, choose "AWS Service" and specify Lambda (e.g., this and this seem to imply that).
However, I get the message "AWS ARN for integration must contain path or action" when I attempt to set the AWS Subdomain to the ARN of my Lambda function. If I set the subdomain to just the name of my Lambda function, when attempting to deploy I get "AWS ARN for integration contains invalid path".
What is the proper AWS Subdomain for this type of integration?
Note that I could also take the advice of this post and set up a Kinesis stream, but that seems excessive for my simple use case. If that's the proper way to resolve my problem, happy to try that.
Edit: included screenshot.
Edit: Please see comment below for an incomplete resolution.
So it's pretty annoying to set up, but here are two ways:
1. Set up a regular Lambda integration and then add the InvocationType header described at http://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html. The value should be 'Event'.
This is annoying because the console won't let you add headers when you have a Lambda function as the integration type. You'll have to use the SDK or the CLI, or use Swagger, where you can add the header easily.
2. Set the whole thing up as an AWS integration in the console (this is what you're doing in the question), just so you can set the InvocationType header in the console:
- Leave the subdomain blank.
- Use "Use path override" and set it to /2015-03-31/functions/<FunctionARN>/invocations, where <FunctionARN> is the full ARN of your Lambda function.
- HTTP method is POST.
- Add a static header X-Amz-Invocation-Type with value 'Event'.
http://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html
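For the second option, a rough sketch of the same setup through the SDK; the REST API ID, resource ID, region, and function ARN are placeholders:

const AWS = require('aws-sdk');
const apigateway = new AWS.APIGateway();

// Inside an async function; all IDs below are hypothetical.
await apigateway.putIntegration({
    restApiId: 'abc123',
    resourceId: 'def456',
    httpMethod: 'POST',
    type: 'AWS',
    integrationHttpMethod: 'POST',
    uri: 'arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:123456789012:function:my-function/invocations',
    requestParameters: {
        // Static header values must be wrapped in single quotes.
        'integration.request.header.X-Amz-Invocation-Type': "'Event'"
    }
}).promise();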
The other option, which is what I did, was to still use the Lambda configuration but with two Lambdas. The first (code below) runs in under a second and returns immediately. But what it really does is fire off a second Lambda (your primary one) as an Event, which can be long-running (up to the 15-minute limit). I found this more straightforward.
const AWS = require('aws-sdk');
const Lambda = new AWS.Lambda();

/**
 * Note: Step Functions, which are called out in many answers online, do NOT actually work in this case. The
 * reason being that if you use Sequential or even Parallel steps, they both require everything to complete
 * before a response is sent. That means that this one will execute quickly but Step Functions will still wait
 * on the other one to complete, thus defeating the purpose.
 *
 * @param {Object} event The Event from Lambda
 */
exports.handler = async (event) => {
    let params = {
        FunctionName: "<YOUR FUNCTION NAME OR ARN>",
        InvocationType: "Event", // <--- This is KEY as it tells Lambda to start execution but immediately return / not wait.
        Payload: JSON.stringify(event)
    };

    // We have to wait for it to at least be submitted. Otherwise Lambda runs too fast and will return before
    // the Lambda can be submitted to the backend queue for execution.
    await new Promise((resolve, reject) => {
        Lambda.invoke(params, function(err, data) {
            if (err) {
                reject(err);
            }
            else {
                resolve('Lambda invoked: ' + data);
            }
        });
    });

    // Always return 200 no matter what.
    return {
        statusCode: 200,
        body: "Event Handled"
    };
};