I have used the AWS APIs to create a CloudWatch event from within my Lambda. I have logged the success message returned from calling putEvents, which returns:
2017-12-08T15:08:22.582Z a1d35179-dc29-11e7-ae3c-9354b3005c70 Success
So it was obviously successful, but when I try to view the event in CloudWatch there is nothing there. Where has it gone?
You need to have a rule that matches the custom event you are pushing to CloudWatch Events. Once a rule matches, you will be able to see the TriggeredRules metric showing the number of successful PutEvents calls you are making.
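For illustration, here is a minimal boto3 sketch that creates a rule matching a custom event source, attaches a target, and publishes an event; the source name, rule name, and target ARN are hypothetical placeholders, not values from the question:

import json
import boto3

events = boto3.client("events")

# Hypothetical names; substitute your own custom source, rule, and target.
events.put_rule(
    Name="my-custom-event-rule",
    EventPattern=json.dumps({"source": ["my.custom.source"]}),
    State="ENABLED",
)
events.put_targets(
    Rule="my-custom-event-rule",
    Targets=[{"Id": "target-1", "Arn": "arn:aws:lambda:us-east-1:123456789012:function:my-handler"}],
)

# Events published with a matching source will now increment TriggeredRules.
events.put_events(
    Entries=[{
        "Source": "my.custom.source",
        "DetailType": "example",
        "Detail": json.dumps({"status": "ok"}),
    }]
)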
Another reason could be that you are passing an incorrect timestamp. It's a quick fix; I ran into a bug where we used int(time.time()) in Python for the timestamp parameter, instead of:
int(time.time() * 1000)
I'm working on a solution where I have an SQS queue with a Lambda trigger. My understanding is that Lambda will receive messages in batches to be processed, and once the Lambda function succeeds, the messages in the SQS queue are automatically deleted. However, how do I only allow some of those messages to be deleted?
Let's assume this use case:
The Lambda function receives a batch with 10 messages; only 7 messages are valid and can be processed, and the other 3 messages need to be reprocessed at a later point.
My initial thought was that I could update the visibility timeout via boto3.sqs.change_visibility_timeout for each of the 3 messages to have them reprocessed after the timeout. However, since the overall Lambda execution is successful, all 10 messages are deleted from the SQS queue.
Any suggestions?
Yes, by default, the Lambda function deletes all the messages upon success. You would need to handle this in your code, but not by changing the visibility timeout of the messages.
Add a DLQ (dead-letter queue) that will handle the failed messages (messages go to the DLQ after a certain number of failed processing attempts, depending on how you set it up).
You have a few options here:
You can handle each item yourself and delete the messages that are processed successfully. For a message that is not successful, you can throw an error so it won't be deleted automatically by the Lambda function (see the sketch after this list).
If you use JavaScript, you can try Middy
If you use Python, you can use Lambda Powertools Python
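As a rough illustration of the first option, here is a minimal Python sketch that deletes only the successfully processed messages; the queue URL and the process_record helper are hypothetical placeholders:

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # hypothetical

def process_record(body):
    # Hypothetical business logic; raise an exception for invalid messages.
    ...

def lambda_handler(event, context):
    any_failed = False
    for record in event["Records"]:
        try:
            process_record(record["body"])
            # Explicitly delete messages that were handled successfully.
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=record["receiptHandle"])
        except Exception:
            any_failed = True  # leave this message in the queue
    if any_failed:
        # Raising makes the whole invocation fail, so Lambda does not delete the
        # batch itself; the messages deleted above are already gone, and only the
        # failed ones become visible again after the visibility timeout.
        raise RuntimeError("Some messages failed and will be retried")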
For AWS Lambdas with an SQS trigger, by default, when your function encounters an error processing one or more messages in a given batch, the entire batch is marked as a failure. All of the messages in the batch are made visible again in the queue. Depending on your redrive policy, you can end up repeatedly processing successful messages along with the failures.
Rather than change the visibility timeout, the simplest way to specify which messages should be retried later and which can safely be deleted from the queue is to change the function response type to ReportBatchItemFailures. This allows you to return a list of failed message ids, indicating that only those messages in the batch should be made visible again in the queue.
Here's what the reporting syntax looks like for a handler function in Node.js:
exports.handler = async (event) => {
  // Process the event
  const batchItemFailureResponse = {
    batchItemFailures: [
      {
        itemIdentifier: "idFailedMessage1"
      },
      {
        itemIdentifier: "idFailedMessage2"
      },
      {
        itemIdentifier: "idFailedMessage3"
      }
    ]
  };
  return batchItemFailureResponse;
};
There is more information to be found in the official documentation.
This response type is configured when setting a queue as an event source for the Lambda. If you're configuring from the console, navigate to the Lambda function page, select the Configuration tab, and then choose Triggers. Then choose Add Trigger and choose the SQS trigger type. In addition to providing the standard parameters, be sure to check the box under Report batch item failures after expanding Additional Settings. It should look something like this:
[Screenshot: Add trigger with batch item failure reporting enabled]
This parameter must be set when first creating the trigger.
This response type can also be defined if you use CloudFormation templates to provision your resources. See the AWS documentation for more information. Note that if you use AWS SAM event source mappings, the documentation suggests that adding FunctionResponseTypes to the YAML with ReportBatchItemFailures in the type list isn't supported. That is incorrect; the documentation is simply outdated. There is an open issue around addressing this oversight.
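If you prefer to configure the mapping programmatically rather than through the console or a template, a minimal boto3 sketch might look like the following; the queue ARN and function name are hypothetical placeholders, and the key part is FunctionResponseTypes:

import boto3

lambda_client = boto3.client("lambda")

# Hypothetical ARN and name; this creates the SQS trigger with batch item
# failure reporting enabled.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:my-queue",
    FunctionName="my-function",
    BatchSize=10,
    FunctionResponseTypes=["ReportBatchItemFailures"],
)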
Finally, in addition to reporting batch item failures, you should provision a target DLQ (dead-letter queue) and determine a reasonable maximum receive count so that action can be taken on messages that fail repeatedly.
I have written and saved a lambda function. I see:
Congratulations! Your Lambda function "lambda_name" has been
successfully created. You can now change its code and configuration.
Choose Test to input a test event when you want to test your function.
Now how do I run it? I cannot see a 'run' or 'invoke' button as I would expect.
Note
The Lambda doesn't accept any arguments (it's extremely simple; for the purposes of this question, please presume it's simply 2 * 2, so when I run it, it should not require any inputs and should return 4).
Also note
I can see a ton of different ways to run the Lambda here. I just want the simplest way (preferably a button in the browser).
Sending a test message via the Lambda console will run your Lambda function. The test message that you configure will define what is in the event parameter of your lambda handler function.
Since you are not doing anything with that message, you can send any arbitrary test message and it should work for you. You can just use the default hello world message and give it an arbitrary name.
It should then show you the results: any logs or returned objects right in the AWS Lambda console.
Further reading here
AWS Lambda functions are typically triggered by an event, such as an object being uploaded to Amazon S3 or a message being sent to an Amazon SNS topic.
This is because Lambda functions are great at doing a small task very often. Often, Lambda functions only run for a few seconds, or even less than a second! Thus, they are normally triggered in response to something else happening. It's a bit like when somebody rings your phone, which triggers you to answer the phone. You don't normally answer your phone when it isn't ringing.
However, it is also possible to directly invoke an AWS Lambda function using the Invoke() command in the AWS SDK. For convenience, you can also use the AWS Command-Line Interface (CLI) aws lambda invoke command. When directly invoking an AWS Lambda function, you can receive a return value. This is in contrast to situations where a Lambda function is triggered by an event, in which case there is nowhere to 'return' a value since it was not directly invoked.
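For example, a minimal sketch of a direct invocation with boto3 might look like this; the function name is taken from the question's console message but is effectively a placeholder, and the empty payload matches a function that takes no input:

import json
import boto3

lambda_client = boto3.client("lambda")

response = lambda_client.invoke(
    FunctionName="lambda_name",          # hypothetical/placeholder function name
    InvocationType="RequestResponse",    # synchronous, so we get the return value
    Payload=json.dumps({}),              # the function ignores its input
)

# The function's return value (e.g. 4) is in the response payload.
print(json.loads(response["Payload"].read()))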
I want to write a Lambda handler in Java to read events from CloudWatch. These events are coming from the MediaConvert API.
Steps that I covered:
Configured Eclipse using the AWS Toolkit.
Created an AWS Project with a Lambda function.
My doubt begins here:
Which event type should I select for the Lambda handler? It is showing the following options:
S3, SNS, Custom, Stream Request Handler, Kinesis Event, Cognito Event.
Note: there is no mention of an Elemental MediaConvert event type for the events written to the CloudWatch event stream.
What is the Stream Request Handler here? Is it a handler that can be configured to listen for event-stream-based events? If so, kindly help me figure this one out.
Added flow:
A) The MediaConvert service is used to change the format of submitted media.
B) The documentation states that all events are published to the CloudWatch event stream when the job status changes.
C) Here, I want to read these events about job status changes from the CloudWatch event stream.
You can write a small Lambda function that prints the incoming event to the log file:
def lambda_handler(event, context):
    print(event)
    return
Then, trigger the function via CloudWatch. The function will write the event to the log files. You can examine the logs to see what information was passed into the function.
This will show you the real content of the message. The other message types are simply for testing in case you do not have a trigger set up to create a real event.
I'm trying to implement an AWS Lambda function that should send an HTTP request. If that request fails (the response is anything but status 200), I should wait another hour before retrying (longer than the Lambda stays warm). What is the best way to implement this?
What comes to mind is to persist my HTTP request in some way and be able to trigger the Lambda function again after a specified amount of time whenever there is a persisted HTTP request. But I'm not completely sure which AWS service would provide that functionality. Is SQS an option that can help here?
Or, can I dynamically schedule Lambda execution for this? Note that the request to be retried should be identical to the first one.
Any other suggestions? What's the best practice for this?
(A Lambda function is my only option; EC2 or similar is not possible.)
You can't directly trigger Lambda functions from SQS (at the time of writing, anyhow).
You could potentially handle the non-200 errors by writing the request data (with appropriate timestamp) to a DynamoDB table that's configured for TTL. You can use DynamoDB Streams to detect when DynamoDB deletes a record and that can trigger a Lambda function from the stream.
This is obviously a roundabout way to achieve what you want but it should be simple to test.
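A minimal sketch of the write side of that idea, assuming a table named retry_requests with TTL enabled on an attribute called expire_at (both hypothetical), could look like this:

import time
import json
import boto3

dynamodb = boto3.client("dynamodb")

def schedule_retry(request_payload, delay_seconds=3600):
    # Store the failed request with a TTL roughly one hour in the future; when
    # DynamoDB expires the item, the deletion appears on the table's stream and
    # can trigger the retry Lambda. Note that TTL deletion is approximate, so
    # the retry may happen somewhat later than the TTL.
    dynamodb.put_item(
        TableName="retry_requests",  # hypothetical table with TTL on "expire_at"
        Item={
            "request_id": {"S": "some-unique-id"},
            "payload": {"S": json.dumps(request_payload)},
            "expire_at": {"N": str(int(time.time()) + delay_seconds)},
        },
    )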
As jarmod mentioned, you cannot trigger Lambda functions directly by SQS. But a workaround (one I've used personally) would be to do the following:
If the request fails, push an item to an SQS Delay Queue (docs)
This SQS message will only become visible on the queue after a certain delay (you mentioned an hour).
Then have a second, scheduled Lambda function which is triggered by a cron schedule with a shorter interval (I used a minute).
This second function would then scan the SQS queue and if an item is on the queue, call your first Lambda function (either by SNS or with the AWS SDK) to retry it.
PS: Note that you can put data in an SQS message. Since you mentioned the retried request should be identical to the first one, you can store your first function's input in the message to be reused after an hour. A rough sketch of the second, scheduled function is shown below.
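As an illustration of the second, scheduled function, a minimal boto3 sketch might look like this; the queue URL and the first function's name are hypothetical placeholders:

import boto3

sqs = boto3.client("sqs")
lambda_client = boto3.client("lambda")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/retry-queue"  # hypothetical

def lambda_handler(event, context):
    # Poll for messages that have become visible after their delay.
    response = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10)
    for message in response.get("Messages", []):
        # Re-invoke the first function asynchronously with the original input.
        lambda_client.invoke(
            FunctionName="http-request-function",  # hypothetical first function
            InvocationType="Event",
            Payload=message["Body"],
        )
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])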
I suggest that you take a closer look at AWS Step Functions for this. Basically, a Step Functions state machine allows you to execute a Lambda function, i.e. a task, in each step.
More information can be found if you log in to your AWS Console and choose "Step Functions" from the "Services" menu. By pressing the Get Started button, several example implementations of different Step Functions are presented. First, I would take a closer look at the "Choice state" example (to determine whether or not the HTTP request was successful). If it was not, proceed with the "Wait state" example.
My AWS Lambda function (in Python) is called when an object 123456 is created in S3's input_bucket; it does a transformation on the object and saves it in output_bucket.
I would like to notify my main application whether the request was successful or unsuccessful. For example, a POST to http://myapp.com/successful/123456 if the processing is successful and http://myapp.com/unsuccessful/123456 if it's not.
One solution I thought of is to create a second Lambda function that is triggered by a put event in output_bucket and have it make the successful POST request. This solves only half of the problem, because I can't trigger the unsuccessful POST request.
Maybe AWS has a more elegant solution using a parameter in Lambda or a service that deals with these types of notifications. Any advice or point in the right direction will be greatly appreciated.
A few possible solutions which I see as elegant:
Using an SNS topic: From your transformation Lambda, publish to an SNS topic with a success/failure message, and have SNS call an HTTP/HTTPS endpoint with the message payload. The advantage here is that your transformation Lambda is loosely coupled with the endpoint trigger and only connected through messaging.
Using Step Functions:
You could arrange to run a Lambda function every time a new object is uploaded to an S3 bucket. This function can then kick off a state machine execution by calling StartExecution. The advantage of using Step Functions is that you can coordinate the components of your application as a series of steps in a visual workflow. Both options are sketched in code below.
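To make both options concrete, here are two minimal boto3 sketches; the topic ARN, state machine ARN, and message contents are hypothetical placeholders, not values from the question:

import json
import boto3

sns = boto3.client("sns")
sfn = boto3.client("stepfunctions")

def notify_via_sns(object_key, success):
    # Option 1: publish a success/failure message; SNS pushes it to the
    # subscribed HTTP/HTTPS endpoint.
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:transform-status",  # hypothetical
        Message=json.dumps({
            "object": object_key,
            "status": "successful" if success else "unsuccessful",
        }),
    )

def start_workflow(object_key):
    # Option 2: kick off a Step Functions state machine that performs the
    # transformation and then branches to a success or failure POST step.
    sfn.start_execution(
        stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:transform",  # hypothetical
        input=json.dumps({"object": object_key}),
    )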
I don't think there is any elegant AWS solution unless you re-architect; for example, your Lambda could send a message with the status to SQS or some intermediary messaging service, and the intermediary then invokes the POST to your application.
If you still want to go with your approach, you might need to configure a dead-letter queue for error handling in failure cases (note that the use cases described there are not comprehensive, so make sure it covers your case), as described here.