I want to write a Lambda handler in Java to read events from CloudWatch. These events come from the MediaConvert API.
Steps that I covered:
Configured Eclipse with the AWS Toolkit.
Created an AWS Project with a Lambda function.
My doubt begins here:
Which event type should I select for the Lambda handler? It shows the following options:
S3, SNS, Custom, Stream Request Handler, Kinesis Event, Cognito Event.
Note: there is no mention of an Elemental MediaConvert event type for the events that are written to the CloudWatch event stream.
What is the Stream Request Handler here? Is it a handler that can be configured to listen for event-stream-based events? If so, kindly help me figure this one out.
Added Flow:
A) The MediaConvert service is used to change the format of submitted media.
B) The documentation states that all events are published to the CloudWatch event stream when the job status changes.
C) I want to read these job status change events from the CloudWatch event stream.
You can write a small Lambda function that prints the incoming event to the log file:
def lambda_handler(event, context):
    print(event)
    return
Then, trigger the function via CloudWatch. The function will write the event to the log files. You can examine the logs to see what information was passed into the function.
This will show you the real content of the message. The other event types are simply for testing, in case you do not have a trigger set up to create a real event.
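If you want the handler to do more than just log, a minimal Python sketch might pull the job id and status out of the event detail. This assumes the documented MediaConvert job state change structure (a detail object containing jobId and status); verify the field names against an event you have actually logged:

import json

def lambda_handler(event, context):
    # Log the raw CloudWatch/EventBridge event so the full structure is visible in the logs
    print(json.dumps(event))

    detail = event.get("detail", {})
    job_id = detail.get("jobId")
    status = detail.get("status")   # e.g. PROGRESSING, COMPLETE, ERROR
    print(f"MediaConvert job {job_id} changed status to {status}")
    return {"jobId": job_id, "status": status}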
Before I waste too much time on this, I was wondering: is it technically possible to send a custom event from a Lambda to EventBridge, then to SNS, to Chatbot, and on to Slack?
I have written all the infrastructure and I know that it works for non-custom messages. So if I have a message with a source of aws.lambda in the rule, then when I deploy the Lambda I get the eventual Slack notification.
However, if I change the source in the rule to a custom source and use that source in the Lambda's code, the SDK call reports success but no Slack message arrives. From turning on Chatbot logging I get the following message: "Event received is not supported" (see https://docs.aws.amazon.com/chatbot/latest/adminguide/related-services.html).
I am sort of hoping against hope that I am simply sending something wrong in the SDK PutEvents call, although the API call only offers a limited amount of what you can change.
I did notice that the message sent to Slack from a standard event is much bigger than the one sent as a custom event.
Realistically, it is looking like the Chatbot Slack integration is an extremely limited one, confined to standard events from a subset of services.
Can someone confirm whether this is possible, or am I right in my conclusion about the limitations of the integration?
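For reference, the custom PutEvents call from Python with boto3 typically looks something like the sketch below; the source, detail type, and payload here are made-up placeholders, not values from the question:

import json
import boto3

events = boto3.client("events")

response = events.put_events(
    Entries=[
        {
            "Source": "myapp.deployments",        # hypothetical custom source
            "DetailType": "Deployment Finished",  # hypothetical detail type
            "Detail": json.dumps({"status": "ok"}),
            "EventBusName": "default",
        }
    ]
)
# A FailedEntryCount of 0 only means EventBridge accepted the event,
# not that any rule or the Chatbot integration matched it downstream.
print(response["FailedEntryCount"])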
I'm working on a solution where I have an SQS queue with a Lambda trigger. My understanding is that Lambda will receive messages in batches to be processed, and once the Lambda function succeeds, the messages in the SQS queue are automatically deleted. However, how do I allow only some of those messages to be deleted?
Let's assume this use case:
The Lambda function receives a batch of 10 messages; only 7 are valid and can be processed, and the other 3 need to be reprocessed at a later point.
My initial thought was that I could update the visibility timeout via boto3.sqs.change_visibility_timeout for each of the 3 messages to have them reprocessed after the timeout; however, since the overall Lambda function execution is successful, all 10 messages are deleted from the SQS queue.
Any suggestions?
Yes, by default, the Lambda function deletes all the messages upon success. You would need to handle this in your code, but not by changing the visibility timeout of the messages.
Add a DLQ (dead-letter queue) that will actually handle the failed messages (messages go to the DLQ after a certain number of failed processing attempts, depending on how you set it up).
You have a few options here:
You can handle each item yourself and delete the messages that are processed successfully. For a message that is not successful, you can throw an error so it won't be deleted automatically by the Lambda function (see the sketch after this list).
If you use JavaScript you can try with Middy
If you use Python, you can use Lambda Powertools Python
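As a rough illustration of the first option (handling and deleting each item yourself), here is a minimal Python sketch; the queue URL and the process function are placeholders:

import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

def lambda_handler(event, context):
    failed_ids = []
    for record in event["Records"]:
        try:
            process(record)
            # Delete the message ourselves so it is gone even if the invocation later fails
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=record["receiptHandle"])
        except Exception:
            failed_ids.append(record["messageId"])
    if failed_ids:
        # Failing the invocation returns the remaining (undeleted) messages to the queue
        raise RuntimeError(f"Failed to process messages: {failed_ids}")

def process(record):
    # Placeholder for the real per-message work
    json.loads(record["body"])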
For AWS Lambdas with an SQS trigger, by default, when your function encounters an error processing one or more messages in a given batch, the entire batch is marked as a failure. All of the messages in the batch are made visible again in the queue. Depending on your redrive policy, you can end up repeatedly processing successful messages along with the failures.
Rather than change the visibility timeout, the simplest way to specify which messages should be retried later and which can safely be deleted from the queue is to change the function response type to ReportBatchItemFailures. This allows you to return a list of failed message ids, indicating that only those messages in the batch should be made visible again in the queue.
Here's what the reporting syntax looks like for a handler function in Node.js:
exports.handler = async (event) => {
  // Process the event
  const batchItemFailureResponse = {
    batchItemFailures: [
      {
        itemIdentifier: "idFailedMessage1"
      },
      {
        itemIdentifier: "idFailedMessage2"
      },
      {
        itemIdentifier: "idFailedMessage3"
      }
    ]
  };
  return batchItemFailureResponse;
};
There is more information to be found in the official documentation.
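Since the original question uses Python, the same partial-batch response shape in a Python handler would look roughly like this; process here is a placeholder for your own per-message logic:

def lambda_handler(event, context):
    batch_item_failures = []
    for record in event["Records"]:
        try:
            process(record)
        except Exception:
            # Only the messages reported here are returned to the queue;
            # everything else in the batch is deleted.
            batch_item_failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": batch_item_failures}

def process(record):
    # Placeholder for the real per-message work
    pass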
This response type is configured when setting a queue as an event source for the Lambda. If you're configuring from the console, navigate to the Lambda function page, select the Configuration tab, and then choose Triggers. Then choose Add Trigger and choose the SQS trigger type. In addition to providing the standard parameters, be sure to check the box under Report batch item failures after expanding Additional Settings. It should look something like this:
(Screenshot: the Add trigger dialog with the Report batch item failures box checked.)
This parameter must be set when first creating the trigger.
This response type can also be defined if you use CloudFormation templates to provision your resources. See the AWS documentation for more information. Note that if you use AWS SAM event source mappings, the documentation suggests that adding FunctionResponseTypes to the YAML with ReportBatchItemFailures in the type list isn't supported. That is incorrect; the documentation is simply outdated. There is an open issue around addressing this oversight.
Finally, in addition to reporting batch item failures, you should provision a target DLQ (dead-letter queue) and determine a reasonable maximum receive count so that action can be taken on messages that fail repeatedly.
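If you manage the queues with boto3 rather than templates, attaching the DLQ is a matter of setting a redrive policy on the source queue; the queue URL, DLQ ARN, and receive count below are placeholders:

import json
import boto3

sqs = boto3.client("sqs")

sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",  # placeholder
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:my-queue-dlq",  # placeholder
            "maxReceiveCount": "5",  # move a message to the DLQ after 5 failed receives
        })
    },
)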
I am new to Lambda functions and am using the Serverless Framework to create them, specifically using Python. I am trying to create a function that will do some heavy processing of a file. The type of work is not important, but each job might take several minutes to complete.
I'd prefer not to have an invocation request open for so long; instead I'd like the Lambda to return immediately but continue processing after it has returned. Once the job is done, it could email me the processed file, for example.
Is this possible with AWS Lambda?
AWS Lambda supports two invocation types:
RequestResponse
Event
RequestResponse will wait until the Lambda invocation is finished before returning the response. Event will invoke the Lambda function and then return a success status indicating that the function was successfully started. You can read more about this here.
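For completeness, an asynchronous invocation from Python with boto3 just passes InvocationType='Event'; the function name and payload below are placeholders:

import json
import boto3

lambda_client = boto3.client("lambda")

response = lambda_client.invoke(
    FunctionName="process-file",      # placeholder function name
    InvocationType="Event",           # fire-and-forget; do not wait for the result
    Payload=json.dumps({"key": "uploads/input.mov"}),
)
print(response["StatusCode"])  # 202 for asynchronous invocations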
In actual fact, Lambda functions are 'almost' always invoked asynchronously.
For your specific use case, you can have the Lambda be triggered when a file is uploaded to S3 (instead of invoking it programmatically). This means every time a file is uploaded to S3, the Lambda is triggered and will process the file.
That way, no connection is held open for the duration of the processing. The function can then send a notification once it has completed.
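A bare-bones S3-triggered handler in Python might look like the sketch below; process_file stands in for the real multi-minute job:

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Each record describes one object that was uploaded to the bucket
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        process_file(bucket, key)

def process_file(bucket, key):
    obj = s3.get_object(Bucket=bucket, Key=key)
    data = obj["Body"].read()
    # ... do the heavy processing on data, then send the notification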
I have used the AWS APIs to create a CloudWatch event from within my Lambda. I have logged the success message back from calling 'putEvents', and it returns:
2017-12-08T15:08:22.582Z a1d35179-dc29-11e7-ae3c-9354b3005c70 Success
So it was obviously successful, but when I try to view the event in CloudWatch there is nothing there. Where has it gone?
You need to have a rule that matches the custom event you are pushing to CloudWatch Events. Then you will be able to see the TriggeredRules metric showing the number of successful PutEvents API calls you are making.
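As an illustration, a rule matching a hypothetical custom source can be created with boto3 like this; the rule name and source are placeholders, not values from the question:

import json
import boto3

events = boto3.client("events")

# Create (or update) a rule whose pattern matches the custom events being put
events.put_rule(
    Name="my-custom-event-rule",
    EventPattern=json.dumps({"source": ["myapp.custom"]}),
    State="ENABLED",
)
# A target still has to be attached with put_targets (for example a Lambda
# function or a CloudWatch Logs group) for matched events to be delivered anywhere.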
Another reason could be that you are passing an incorrect timestamp. It's a quick fix; I ran into a bug where we used int(time.time()) in Python for the timestamp parameter, instead of:
int(time.time() * 1000)
My AWS Lambda function (in Python) is called when an object 123456 is created in S3's input_bucket; it does a transformation on the object and saves it in output_bucket.
I would like to notify my main application whether the request was successful or unsuccessful. For example, a POST to http://myapp.com/successful/123456 if the processing is successful and to http://myapp.com/unsuccessful/123456 if it is not.
One solution I thought of is to create a second Lambda function that is triggered by a put event in output_bucket and makes the successful POST request. This only solves half of the problem, because I can't trigger the unsuccessful POST request.
Maybe AWS has a more elegant solution using a parameter in Lambda or a service that deals with these types of notifications. Any advice or pointer in the right direction will be greatly appreciated.
A few possible solutions that I see as elegant:
Using an SNS topic: from your transformation Lambda, publish to an SNS topic with a success/failure message, and have SNS call an HTTP/HTTPS endpoint with the message payload (a minimal sketch follows this list). The advantage here is that your transformation Lambda is loosely coupled from the endpoint trigger and only connected through messaging.
Using Step Functions:
You could arrange to run a Lambda function every time a new object is uploaded to an S3 bucket. This function can then kick off a state machine execution by calling StartExecution. The advantage of using Step Functions is that you can coordinate the components of your application as a series of steps in a visual workflow.
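A minimal sketch of the SNS option from the transformation Lambda could look like this; the topic ARN is a placeholder, and the HTTP/HTTPS endpoint would be subscribed to that topic:

import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:transform-status"  # placeholder

def notify(object_key, succeeded):
    # Publish the outcome; SNS delivers the payload to the subscribed HTTP/HTTPS endpoint
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="transform-result",
        Message=json.dumps({"object": object_key, "successful": succeeded}),
    )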
I don't think there is any elegant AWS solution unless you re-architect: something like your Lambda sending a message with the status to SQS or some intermediary messaging service, which then invokes the POST to your application.
If you still want to go with your way of solving it, you might need to configure a dead-letter queue to handle the failure cases (note that the use cases described there are not comprehensive, so make sure it covers yours), as described here.