Requirement
I'd like to push DLQ messages to a Lambda function periodically, like a cron job.
Situation
I created a batch processing system using SQS and Lambda.
I also prepared a DLQ (Dead Letter Queue) to store the failed messages.
AWS suggests re-running the failed messages in the DLQ with a Lambda trigger.
https://docs.aws.amazon.com/lambda/latest/dg/invocation-retries.html
I'd like to re-run them periodically and automatically, like cron, but AWS only shows how to do it manually.
If I re-run manually, I just map the Lambda to the DLQ and it's all done.
After that, I can re-run the messages on demand with the Enable and Disable buttons.
Doing this automatically is more complicated, because there is no API for switching a Lambda trigger between Enabled and Disabled.
The boto3 APIs create_event_source_mapping() and delete_event_source_mapping() seem like the better way.
But delete_event_source_mapping() requires the UUID of the event source mapping, which I think means I need some datastore such as ElastiCache.
However, I'd rather not provision additional resources for this if possible.
My Solution
A CloudWatch Event calls a Lambda function.
The Lambda activates (Enables) the event source mapping using create_event_source_mapping().
The Lambda deactivates (Disables) the event source mapping using delete_event_source_mapping().
This looked good to me at first, but in the third step the Lambda needs to know the UUID returned in the second step, so I think this case still needs a datastore holding the UUID.
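Here is roughly what I have in mind for the second and third steps, as a minimal boto3 sketch; the function name and queue ARN are placeholders, and the delete step is where the UUID problem appears:

```python
import boto3

lambda_client = boto3.client("lambda")

# Step 2: enable processing by creating the event source mapping.
# The response contains the UUID that delete_event_source_mapping() needs later.
def enable_dlq_trigger(function_name, dlq_arn):
    response = lambda_client.create_event_source_mapping(
        EventSourceArn=dlq_arn,      # the DLQ's ARN (placeholder)
        FunctionName=function_name,  # the re-run Lambda (placeholder)
        BatchSize=10,
        Enabled=True,
    )
    return response["UUID"]

# Step 3: disable processing by deleting the mapping.
# This call requires the UUID from step 2, which is why I think I need a datastore.
def disable_dlq_trigger(mapping_uuid):
    lambda_client.delete_event_source_mapping(UUID=mapping_uuid)
```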
Do you have any solution that doesn't require a datastore?
Thanks.
Related
I'm new to CloudWatch Events and to Fargate. I want to trigger a Fargate task (Python) to run whenever a file is uploaded to a specific S3 bucket. I can get the task to run whenever I upload a file, and can see the name in the event log; however, I can't figure out a simple way to read the event data in Fargate. I've been researching this for the past couple of days and haven't found a solution other than reading the event log, or using a Lambda to invoke the task and put the event data in a message queue.
Is there a simple way to obtain the event data in Fargate with boto3? It's likely that I'm not looking in the right places or asking the right question.
Thanks
One of the easiest options is to configure two targets for the same S3 upload event:
Push the same event to SQS.
Launch the Fargate task at the same time.
Read the message from SQS once the Fargate task is up (no Lambda in between). The same task definition will work for the normal use case; just make sure you exit the process after reading the message from SQS.
So whenever the Fargate task comes up, it reads its messages from SQS (a minimal sketch of that read step follows).
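A rough boto3 sketch of the read step inside the task, assuming the queue URL is passed in through the task definition's environment (the variable name is a placeholder):

```python
import os
import sys
import boto3

# Placeholder: the queue URL would come from the task definition's environment.
QUEUE_URL = os.environ["EVENT_QUEUE_URL"]

sqs = boto3.client("sqs")

def read_upload_event():
    # Long-poll once for the S3 event that was pushed alongside the task launch.
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=1,
        WaitTimeSeconds=20,
    )
    messages = response.get("Messages", [])
    if not messages:
        return None
    message = messages[0]
    # Delete the message so a later task run does not see it again.
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
    return message["Body"]

if __name__ == "__main__":
    body = read_upload_event()
    if body is None:
        sys.exit(0)  # nothing to do; exit after reading, as noted above
    # ... process the S3 event JSON in `body`, then exit ...
```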
To do this you would need to use an input transformer.
Each time an event rule is triggered, a JSON object is made available for use in the transformation.
Since the event itself is not accessible within the container (unlike with Lambda functions), the idea is that you forward the key information as environment variables and work with those in your container.
At this time it does not look like every service supports this in the console, so you have the following options (a boto3 sketch follows after the list):
CloudFormation
Terraform
CLI
You can view a tutorial for this exact scenario from this link.
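As an illustration only, here is roughly what the rule target could look like in boto3, assuming a CloudTrail-based S3 PutObject rule; the rule name, cluster, task definition, subnet, container name, and JSONPath expressions are all placeholders:

```python
import boto3

events = boto3.client("events")

# Attach a Fargate task as the target of an existing rule (all names/ARNs are placeholders).
events.put_targets(
    Rule="s3-upload-rule",
    Targets=[
        {
            "Id": "run-fargate-task",
            "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/my-cluster",
            "RoleArn": "arn:aws:iam::123456789012:role/ecsEventsRole",
            "EcsParameters": {
                "TaskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/my-task",
                "TaskCount": 1,
                "LaunchType": "FARGATE",
                "NetworkConfiguration": {
                    "awsvpcConfiguration": {
                        "Subnets": ["subnet-0123456789abcdef0"],
                        "AssignPublicIp": "ENABLED",
                    }
                },
            },
            # Pull the bucket and key out of the event, then hand them to the
            # container as environment variables via containerOverrides.
            "InputTransformer": {
                "InputPathsMap": {
                    "bucket": "$.detail.requestParameters.bucketName",
                    "key": "$.detail.requestParameters.key",
                },
                "InputTemplate": (
                    '{"containerOverrides": [{"name": "my-container", '
                    '"environment": [{"name": "S3_BUCKET", "value": <bucket>}, '
                    '{"name": "S3_KEY", "value": <key>}]}]}'
                ),
            },
        }
    ],
)
```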
I have a service that uses a JSON file on an S3 bucket for its configuration.
I would like to be able to modify this file, but I'm going to run into a concurrency issue, as multiple administrators will be able to write to this file at the same time.
I'm going to use an SNS Topic to trigger a lambda that will write the config changes.
For the moment, I'm going to check the queue every minute and then handle the messages, so that I am sure I don't have multiple instances of the Lambda running at the same time and writing to the same file.
Is there any way to have an SNS topic trigger a Lambda function for each message, and then wait for that message to be handled before moving on to the next one?
Cheers,
Julien
You can achieve this by setting the max concurrent executions of your Lambda function to 1. See the documentation for more details about managing concurrency for Lambdas.
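If you'd rather set this programmatically than in the console, a one-call boto3 sketch (the function name is a placeholder):

```python
import boto3

lambda_client = boto3.client("lambda")

# Reserve a concurrency of 1 so at most one instance of the function runs
# at a time; throttled asynchronous invocations (e.g. from SNS) are retried.
lambda_client.put_function_concurrency(
    FunctionName="write-config-to-s3",  # placeholder name
    ReservedConcurrentExecutions=1,
)
```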
I am using AWS Lambda to check a service's health status and then send out an email. If the service is down, I want it to send an email only once.
This Lambda function runs every 20 minutes or so, and I would like to prevent it from sending out multiple emails at each interval while things are broken. Is there a way to store environment variables or something in the AWS ecosystem so that the function knows the state between runs (that way it knows it has already sent an email and doesn't send another)?
I have looked into creating an alarm and sending out notifications, but the email sent by the alarm won't do; I would like a custom email to be sent out, so I am using AWS SES through Lambda. There is a CloudWatch alarm that turns on when there is an error, but I can't seem to fetch the state of the alarm through the aws-sdk (it's apparently not there).
I have written the function in NodeJS.
Any suggestions?
I've implemented something like this a little differently. I too do not care for getting an email for each error, since the errors I receive from my AWS Lambdas do not require immediate attention. I prefer to get them once an hour.
So I write all the errors I receive to an SQS queue. The AWS Lambdas that throw the errors are configured (via environment variables) to send certain errors to certain SQS queues. CloudWatch rules, scheduled however I like, then execute an AWS Lambda, passing in the rule definition that names the SQS queue to pull from. The Lambda called by the CloudWatch rule reads from that SQS queue and emails the results.
For your case you could modify that process to read all the errors from SQS, then filter that data down to the results you want to send. I use SQS because the "errors" I get don't need to be persisted.
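A rough boto3 sketch of the Lambda that the CloudWatch rule invokes; the event key, queue URL, and email addresses are placeholders, and the rule is assumed to pass the queue URL as constant input:

```python
import boto3

sqs = boto3.client("sqs")
ses = boto3.client("ses")

def handler(event, context):
    # Assumption: the CloudWatch rule passes a constant JSON input naming the queue to drain.
    queue_url = event["queue_url"]
    errors = []
    while True:
        response = sqs.receive_message(
            QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=1
        )
        messages = response.get("Messages", [])
        if not messages:
            break
        for message in messages:
            errors.append(message["Body"])
            sqs.delete_message(
                QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"]
            )

    if errors:
        # Email the collected errors once per rule execution (addresses are placeholders).
        ses.send_email(
            Source="alerts@example.com",
            Destination={"ToAddresses": ["ops@example.com"]},
            Message={
                "Subject": {"Data": "Errors since the last run"},
                "Body": {"Text": {"Data": "\n".join(errors)}},
            },
        )
```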
I could see two quick ways to store something like a "last_email_sent" value. The first would be in DynamoDB. This is part of the AWS "serverless" environment that doesn't require you to do much more than interact with it. You didn't indicate your development environment but there are multiple development environments that are supported.
The second would be with the SSM Parameter Store. You can store any number of parameters there too.
There are likely other ways to do this too. Both of these are a bit of overkill but they would work to store what you need.
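For example, a quick boto3 sketch of the Parameter Store option (the parameter name is arbitrary); the question mentions NodeJS, and the equivalent getParameter/putParameter calls exist in the aws-sdk as well:

```python
import boto3

ssm = boto3.client("ssm")
PARAM_NAME = "/health-check/last_email_sent"  # arbitrary placeholder name

def already_notified():
    # The parameter's existence is the "email already sent" flag.
    try:
        ssm.get_parameter(Name=PARAM_NAME)
        return True
    except ssm.exceptions.ParameterNotFound:
        return False

def mark_notified(timestamp):
    # Overwrite=True lets a later outage update the stored value.
    ssm.put_parameter(Name=PARAM_NAME, Value=timestamp, Type="String", Overwrite=True)

def clear_notified():
    # Call this once the service is healthy again.
    try:
        ssm.delete_parameter(Name=PARAM_NAME)
    except ssm.exceptions.ParameterNotFound:
        pass
```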
Alright, I found a better way that is simpler and doesn't involve other resources. The NodeJS SDK is limited as it is. When the service is down, create an alarm through the SDK, and the next time the Lambda gets triggered, check whether the alarm exists before sending an email. That way, if you also want to do some notification through the alarm, that is possible too.
I think in my question I said this was not possible (last part), which I will retract.
Here is the link for the sdk reference: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudWatch.html
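Roughly, the idea looks like this; the sketch below is in boto3 for brevity (the NodeJS SDK has the equivalent putMetricAlarm, describeAlarms, and deleteAlarms calls), and the alarm name and metric are placeholders used purely as a flag:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")
ALARM_NAME = "service-down-notified"  # placeholder; the alarm's existence is the flag

def already_notified():
    response = cloudwatch.describe_alarms(AlarmNames=[ALARM_NAME])
    return len(response["MetricAlarms"]) > 0

def mark_notified():
    # Create a minimal alarm whose existence records "email already sent".
    cloudwatch.put_metric_alarm(
        AlarmName=ALARM_NAME,
        Namespace="HealthCheck",   # placeholder namespace/metric
        MetricName="ServiceDown",
        Statistic="Maximum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=1,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
    )

def clear_notified():
    # Remove the flag once the service is healthy again.
    cloudwatch.delete_alarms(AlarmNames=[ALARM_NAME])
```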
My use case is as follows: I need to be able to schedule SQS messages in such a way that scheduled messages can be added to a queue on a specific date/time, and also on a recurring basis as needed.
At the implementation level, what I'm basically looking to do is have some function I can call where I pass in the SQS queue, the message, and the schedule I want it to run on, without having to build the actual scheduler logic.
I haven't seen anything in AWS itself that seems to allow for that, and I didn't get the impression Lambda functions would do exactly what I need, unless I'm missing something.
Is there any other third-party cloud service for scheduled processes I should look into, or am I better off just running a scheduling machine in AWS with some REST API that can add cron jobs/Windows scheduled tasks to it to handle the scheduling of SQS messages?
I could see two slightly different ways of accomplishing this, both based on CloudWatch scheduled events. The first would be to have CloudWatch fire off a Lambda. The Lambda would either have the needed parameters or would get them from somewhere else - for example, a DynamoDB table. Otherwise, the rule target allows you to specify an SQS queue - skipping the Lambda. But I'm not sure if that would have the configuration ability you'd want.
Either way, check out CloudWatch -> Events -> Create Rule in the AWS console to see your choices.
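As a rough boto3 sketch of the second option, a scheduled rule with the queue as its target; the rule name, schedule, queue ARN, and message body are placeholders, and the queue policy has to allow events.amazonaws.com to send to it:

```python
import boto3

events = boto3.client("events")

# Placeholder rule: fires every day at 09:00 UTC.
events.put_rule(
    Name="daily-sqs-message",
    ScheduleExpression="cron(0 9 * * ? *)",
    State="ENABLED",
)

# The SQS queue is the target; the constant Input becomes the message body.
events.put_targets(
    Rule="daily-sqs-message",
    Targets=[
        {
            "Id": "target-queue",
            "Arn": "arn:aws:sqs:us-east-1:123456789012:my-queue",  # placeholder
            "Input": '{"job": "nightly-report"}',                  # placeholder payload
        }
    ],
)
```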
I am working with PHP.
I have a program that writes messages to Amazon SQS.
Can anybody tell me how I can use the Lambda service to get data from SQS and push it into MySQL? The Lambda should be triggered whenever a new record is added to the queue.
Can somebody share the steps or code that will help me get through this task?
There isn't any official way to link SQS and Lambda at the moment. Have you looked into using an SNS topic instead of an SQS queue?
Agree with Mark B.
Ways to get events over to Lambda:
Use SNS: http://docs.aws.amazon.com/sns/latest/dg/sns-lambda.html
Use SNS->SQS and have the Lambda launched by the SNS notification; just use it to load whatever is in the SQS queue.
Use Kinesis.
Alternatively, have a Lambda run by a cron job to read SQS. It depends on the latency you need; if you require messages to be processed immediately then this is not the solution, because you would be running the Lambda all the time.
Important note for using SQS: you are charged when you query even if no messages are waiting, so do not do fast polls, even in your Lambdas. It's easy to run up a huge bill doing nothing. That's also a good reason to set up CloudWatch on the account to monitor usage and charges. A minimal long-polling sketch for the cron option follows.
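Assuming the cron-driven Lambda approach from the last option above, a minimal boto3 sketch that uses long polling (WaitTimeSeconds) so an empty check costs a single request; the queue URL and the processing step are placeholders:

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

def handler(event, context):
    # Long polling: one request waits up to 20 seconds for messages,
    # instead of many fast, empty (but still billed) polls.
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,
    )
    for message in response.get("Messages", []):
        process(message["Body"])  # e.g. insert into MySQL (not shown here)
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])

def process(body):
    # Placeholder for the actual handling of the message.
    print(body)
```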