SQS deduplication ID and item updates

I have an SQS queue that triggers a lambda each time a message arrives in the queue.
The message contains information about a product, let's call it A. When the Lambda is executed, it inserts the data for product A into RDS.
However, another message will arrive about 30 seconds later containing other information about product A, which will be inserted into RDS again.
Is there any way to add some latency before SQS triggers the Lambda?
Also, can the newest message received for product A be processed and the older ones discarded? I want to use SQS message deduplication so that each message received for the product is treated as unique, but I am not sure it's a good fit for this use case.
The other solution was to replace SQS with a "custom queue": replace SQS with an RDS Aurora instance, then have the Lambda run on a cron schedule against that instance and pick the products with an expired TTL to insert into the DB. But I find this a bit overkill; is there any other way to do this?
Thanks

Based on the comments, the partial solution to the problem is to set up an event source mapping between Lambda and SQS.
Ideally, the producer would be modified. However, since the producer can't be modified, a caching solution (e.g. ElastiCache) could be used to store the "incomplete" SQS messages and filter out duplicates before writing them to RDS.
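A minimal sketch of that buffering idea in Java, assuming an SQS-triggered Lambda, a Redis-backed ElastiCache node reachable via a REDIS_HOST environment variable, and hypothetical extractProductId/writeToRds helpers. The newest message for a product overwrites the older one, and a second, scheduled Lambda flushes whatever has settled:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;
import java.util.Map;
import redis.clients.jedis.Jedis;

public class ProductBufferHandler implements RequestHandler<SQSEvent, Void> {
    @Override
    public Void handleRequest(SQSEvent event, Context context) {
        try (Jedis jedis = new Jedis(System.getenv("REDIS_HOST"), 6379)) {
            for (SQSEvent.SQSMessage msg : event.getRecords()) {
                // Later messages for the same product overwrite earlier ones,
                // so only the latest version is eventually written to RDS.
                jedis.hset("pending", extractProductId(msg.getBody()), msg.getBody());
            }
        }
        return null;
    }

    // Hypothetical: pull the product identifier out of the message body.
    private String extractProductId(String body) {
        return body.split(":")[0];
    }
}

// Scheduled (e.g. every minute via CloudWatch Events): drain the buffer into RDS.
class FlushHandler implements RequestHandler<Object, Void> {
    @Override
    public Void handleRequest(Object input, Context context) {
        try (Jedis jedis = new Jedis(System.getenv("REDIS_HOST"), 6379)) {
            for (Map.Entry<String, String> e : jedis.hgetAll("pending").entrySet()) {
                writeToRds(e.getKey(), e.getValue()); // hypothetical JDBC insert/upsert
                jedis.hdel("pending", e.getKey());
            }
        }
        return null;
    }

    private void writeToRds(String productId, String body) { /* hypothetical */ }
}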

Related

Draining an SQS queue using Lambda after a certain number of days

So, I am putting some entries in an SQS queue which is set as an event source for the Lambda, and this flow is working fine. As soon as an entry arrives in the SQS queue, the Lambda processes it. So far so good.
But I have a situation where I want to let the entries stay in SQS for 3-4 days and then let a Lambda process them.
So basically, if I see that I have 100 entries in my SQS queue and it's been 4 days, I want to let the Lambda drain them and run some logic. Is this possible? Kindly guide me.
I think disabling the Lambda is not the way to fulfil the requirement, as you would miss other messages too.
SQS is a messaging service, and when it is integrated with Lambda you can only configure retries and process the message; keeping the message in SQS is not under the user's control, because Lambda deletes it by design:
Lambda polls the queue and invokes your function synchronously with an event that contains queue messages. Lambda reads messages in batches and invokes your function once for each batch. When your function successfully processes a batch, Lambda deletes its messages from the queue.
https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
One solution that can address your query:
But I have a situation where I want to let the entries stay in SQS for 3-4 days and then let a Lambda process them.
You need to decide which SQS messages should not be processed at the moment, push those messages to DynamoDB, and then process them after 4 or 5 days based on a DynamoDB TTL that was set during insertion. You can follow the steps below (a code sketch follows the list):
Add a property such as is_dynamodb to the SQS message to identify messages that should not be processed at the moment
Push such messages to DynamoDB
Set a TTL during insertion
In the Lambda function consuming the DynamoDB stream, check that the event is a removal, not an insertion
Process messages only if the event is REMOVE
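A minimal sketch of the parking step in Java, assuming a deferred_messages table whose TTL is enabled on a hypothetical expiresAt attribute and an is_dynamodb message attribute set by the producer. Note that DynamoDB TTL deletion is not immediate (AWS documents it as typically within 48 hours of expiry), so this gives coarse, not exact, delays:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

public class DeferHandler implements RequestHandler<SQSEvent, Void> {
    private static final AmazonDynamoDB dynamo = AmazonDynamoDBClientBuilder.defaultClient();

    @Override
    public Void handleRequest(SQSEvent event, Context context) {
        for (SQSEvent.SQSMessage msg : event.getRecords()) {
            // Messages flagged is_dynamodb are parked instead of processed now.
            if (msg.getMessageAttributes().containsKey("is_dynamodb")) {
                Map<String, AttributeValue> item = new HashMap<>();
                item.put("messageId", new AttributeValue(msg.getMessageId()));
                item.put("body", new AttributeValue(msg.getBody()));
                // Expire ~4 days from now; the table's TTL attribute is "expiresAt".
                item.put("expiresAt", new AttributeValue().withN(
                        Long.toString(Instant.now().plusSeconds(4L * 24 * 3600).getEpochSecond())));
                dynamo.putItem("deferred_messages", item);
            } else {
                process(msg.getBody()); // hypothetical immediate path
            }
        }
        return null;
    }

    private void process(String body) { /* hypothetical */ }
}

A second Lambda subscribed to the table's stream then filters for records whose getEventName() is "REMOVE" and runs the deferred logic on the old image of the deleted item.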

Run a Lambda on every DynamoDB entry on a schedule?

Is there a way to run a Lambda on every DynamoDB table record?
I have a Dynamo table with name, last name, and email, and a Lambda that takes name, last name, and email as parameters. I am trying to configure the environment so that, every day, the Lambda runs automatically for every value it finds in Dynamo; I can't process all the records in one Lambda invocation, as it won't scale (it will time out once more users are added).
I currently have a CloudWatch rule set up that triggers the Lambda on a schedule, but I had to manually add the parameters to the trigger from Dynamo - it's not automatic and not dynamic/not connected to Dynamo.
--
Another option would be to run a Lambda every time a DynamoDB record is updated... I could update all the records weekly, and then upon updating them the Lambda would be triggered, but I don't know if that's possible either.
Some more insight on either one of these approaches would be appreciated!
Is there a way to run a Lambda on every DynamoDB table record?
For your specific case where all you want to do is process each row of a DynamoDB table in a scalable fashion, I'd try going with a Lambda -> SQS -> Lambdas fanout like this:
Set up a CloudWatch Events Rule that triggers on a schedule. Have this trigger a dispatch Lambda function.
The dispatch Lambda function's job is to read all of the entries in your DynamoDB table and write messages to a jobs SQS queue, one per DynamoDB item.
Create a worker Lambda function that does whatever you want it to do with any given item from your DynamoDB table.
Connect the worker Lambda to the jobs SQS queue so that an instance of it will dispatch whenever something is put on the queue.
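A rough sketch of the dispatch function in Java, assuming a users table, a jobs queue URL supplied via a hypothetical JOBS_QUEUE_URL environment variable, and that each item carries an email attribute worth forwarding:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ScanRequest;
import com.amazonaws.services.dynamodbv2.model.ScanResult;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import java.util.Map;

public class DispatchHandler implements RequestHandler<Object, Void> {
    private static final AmazonDynamoDB dynamo = AmazonDynamoDBClientBuilder.defaultClient();
    private static final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

    @Override
    public Void handleRequest(Object input, Context context) {
        String queueUrl = System.getenv("JOBS_QUEUE_URL");
        Map<String, AttributeValue> startKey = null;
        do {
            // Scan page by page; LastEvaluatedKey is null once the table is exhausted.
            ScanResult page = dynamo.scan(new ScanRequest("users").withExclusiveStartKey(startKey));
            for (Map<String, AttributeValue> item : page.getItems()) {
                // One SQS message per DynamoDB item; the worker Lambda consumes these.
                sqs.sendMessage(queueUrl, item.get("email").getS());
            }
            startKey = page.getLastEvaluatedKey();
        } while (startKey != null);
        return null;
    }
}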
Since the limiting factor is Lambda timeouts, you can run multiple Lambdas using Step Functions. Perform a paginated scan of the table; each Lambda processes one page, then returns the LastEvaluatedKey, which is passed to the next invocation to fetch the next page.
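A sketch of one such Step Functions step in Java, assuming a users table with a hypothetical string hash key id, and a Choice state that re-invokes the function while the output still contains lastKey:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ScanRequest;
import com.amazonaws.services.dynamodbv2.model.ScanResult;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class PagedScanHandler implements RequestHandler<Map<String, String>, Map<String, String>> {
    private static final AmazonDynamoDB dynamo = AmazonDynamoDBClientBuilder.defaultClient();

    @Override
    public Map<String, String> handleRequest(Map<String, String> input, Context context) {
        ScanRequest req = new ScanRequest("users").withLimit(100);
        if (input != null && input.containsKey("lastKey")) {
            // Resume the scan where the previous state-machine step stopped.
            req.setExclusiveStartKey(
                    Collections.singletonMap("id", new AttributeValue(input.get("lastKey"))));
        }
        ScanResult page = dynamo.scan(req);
        for (Map<String, AttributeValue> item : page.getItems()) {
            process(item); // hypothetical per-item work
        }
        Map<String, String> output = new HashMap<>();
        if (page.getLastEvaluatedKey() != null) {
            output.put("lastKey", page.getLastEvaluatedKey().get("id").getS());
        }
        return output; // an empty map means the table has been fully scanned
    }

    private void process(Map<String, AttributeValue> item) { /* hypothetical */ }
}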
I think your best option is, just as you pointed out, to run a Lambda every time a DynamoDB record is updated. This is possible thanks to DynamoDB Streams.
Streams are an ordered record of changes that happen to a table. They can invoke a Lambda, so it's automatic (however, beware that a change appears only once in the stream, so set up a DLQ in case your Lambda fails). This approach scales well and is also pretty evolvable. If need be, you can push the events from the stream to SQS or Kinesis, fan out, etc., depending on the requirements.
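A minimal stream-handler sketch in Java, assuming the table's stream is enabled with the NEW_IMAGE view and process is a hypothetical stand-in for the real work; the eventName filter is how you react to updates only:

import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent;
import java.util.Map;

public class UpdateStreamHandler implements RequestHandler<DynamodbEvent, Void> {
    @Override
    public Void handleRequest(DynamodbEvent event, Context context) {
        for (DynamodbEvent.DynamodbStreamRecord record : event.getRecords()) {
            // INSERT, MODIFY and REMOVE all flow through the stream;
            // keep only the updates for the weekly-touch pattern described above.
            if ("MODIFY".equals(record.getEventName())) {
                Map<String, AttributeValue> newImage = record.getDynamodb().getNewImage();
                process(newImage); // hypothetical per-record work
            }
        }
        return null;
    }

    private void process(Map<String, AttributeValue> item) { /* hypothetical */ }
}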

AWS Lambda to read from SQS queue

I have an AWS Lambda function to read from an SQS queue. The Lambda logic is basically to read one message off SQS, process it, and delete it. The code to read the message is something like:
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

ReceiveMessageRequest messageRequest =
        new ReceiveMessageRequest(queueUrl).withWaitTimeSeconds(5).withMaxNumberOfMessages(1);
Now my question is: what is the best way to trigger this Lambda, and how does it scale? For instance, if there are, let's say, 1000 messages in the queue, will there be 1000 Lambdas running together, since in my case one Lambda can read only one message off the queue?
Any pointers on best practices around this kind of design would be appreciated.
Right now your best option is probably to set up a CloudWatch Events rule that calls the Lambda function on the interval that you need.
Here is a sample app from AWS to do just that:
https://github.com/awslabs/aws-serverless-sqs-event-source
I do believe that AWS will eventually support SQS as an event type for AWS Lambda, which should make this even easier, but for now your best choice is probably a version of the code linked above.
We can now use SQS messages to trigger AWS Lambda functions. Moreover, it is no longer necessary to run a message polling service or create an SQS-to-SNS mapping.
Further details:
https://aws.amazon.com/blogs/aws/aws-lambda-adds-amazon-simple-queue-service-to-supported-event-sources/
https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
AWS added native support in June 2018: https://aws.amazon.com/blogs/aws/aws-lambda-adds-amazon-simple-queue-service-to-supported-event-sources/
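With the native event source mapping, the handler reduces to something like the following sketch (Java, with logging standing in for real processing); Lambda does the polling and batching for you and deletes the batch on success:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;

public class SqsHandler implements RequestHandler<SQSEvent, Void> {
    @Override
    public Void handleRequest(SQSEvent event, Context context) {
        // Lambda hands over batches of up to 10 messages (configurable on the
        // event source mapping); returning without error deletes the whole batch.
        for (SQSEvent.SQSMessage msg : event.getRecords()) {
            context.getLogger().log("Processing " + msg.getMessageId() + ": " + msg.getBody());
        }
        return null;
    }
}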
There are probably a few ways to do this, but I found this guide to be fairly helpful when I tried to implement the same sort of functionality you are describing in Node.js. One downside to this strategy is that you can only poll the queue every 60s.
The basic workflow would look something like this:
Set up a CloudWatch Alarm that gets triggered when the queue has a certain number of messages.
The CloudWatch alarm then posts to SNS
The SNS message triggers a Lambda scale() function
The scale() function updates a configuration record in a DynamoDB table that sets the number of worker processes needed
You then have a main CloudWatch Schedule that invokes a worker() function every 60s
The worker() function reads configuration from DynamoDB to determine how many concurrent processes are needed, based on the queue size.
Worker() then invokes the appropriate number of process() functions
Process() function consumes messages from SQS, performs your main application logic, and then removes the item from the queue.
You can find an example of what the scaling functions would look like in Node.js here
I have used this solution in a production environment for almost a year without any issues, even with thousands of messages in the queue. If you cut out the scaling portion, it is only going to do one message at a time.
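A sketch of the worker() step in Java, assuming a hypothetical scaling_config DynamoDB table written by scale() and a function named process; the asynchronous "Event" invocation type is what lets the process() copies run in parallel:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.InvocationType;
import com.amazonaws.services.lambda.model.InvokeRequest;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.util.Collections;
import java.util.Map;

public class WorkerHandler implements RequestHandler<Object, Void> {
    private static final AmazonDynamoDB dynamo = AmazonDynamoDBClientBuilder.defaultClient();
    private static final AWSLambda lambda = AWSLambdaClientBuilder.defaultClient();

    @Override
    public Void handleRequest(Object input, Context context) {
        // Read the consumer count that scale() wrote based on the queue depth.
        Map<String, AttributeValue> cfg = dynamo.getItem("scaling_config",
                Collections.singletonMap("id", new AttributeValue("config"))).getItem();
        int processes = Integer.parseInt(cfg.get("processes").getN());
        for (int i = 0; i < processes; i++) {
            // "Event" = asynchronous invocation, so the workers run concurrently.
            lambda.invoke(new InvokeRequest()
                    .withFunctionName("process")
                    .withInvocationType(InvocationType.Event));
        }
        return null;
    }
}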

Consume SQS messages using AWS lambda function

I have 2 FIFO SQS queues which receive JSON messages that are to be indexed into Elasticsearch. One queue constantly receives delta changes made to the database. The second queue is used for database re-indexing, i.e. the entire 50TB of data is to be indexed every couple of months (when everything is added to the queue). I have a Lambda function that consumes the messages from the queues and places them into the appropriate index (either the active index or the one being rebuilt).
How should I trigger the Lambda function to best process the backlog of messages in SQS so it processes both queues as quickly as possible?
A constraint I have is that the queue items need to be processed in order. If the Lambda function could run indefinitely without the 5-minute limit, I could keep one function running that constantly processes messages.
Instead of pushing your messages directly into SQS, you could publish the messages to an SNS topic with 2 subscribers registered:
Subscriber: SQS
Subscriber: Lambda Function
This has the benefit that your Lambda is invoked at the same time as the message is stored in SQS.
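A minimal producer sketch in Java, assuming the topic ARN is supplied via a hypothetical TOPIC_ARN environment variable; SNS then delivers a copy to each subscriber:

import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;

public class Producer {
    private static final AmazonSNS sns = AmazonSNSClientBuilder.defaultClient();

    public void publish(String message) {
        // SNS fans the message out to every subscriber of the topic,
        // here an SQS queue (durable copy) and a Lambda (immediate processing).
        sns.publish(System.getenv("TOPIC_ARN"), message);
    }
}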
The standard way to do this is to use CloudWatch Events rules that run periodically. This lets you pull data from the queue on a regular schedule.
Because you have to poll SQS this may not lead to the fastest processing of messages. Also, be careful if you constantly have messages to process - Lambda will end up being far more expensive than a small EC2 instance to handle the messages.
Not sure I fully understand your problem, but here are my 2 cents:
If you have a constant, real-time stream of data, consider using Kinesis Streams with 1 shard in order to preserve FIFO ordering. You may consume the data in batches of n items using Lambda. It's up to you to decide the batch size n and the memory size of the Lambda.
With this solution you pay a low constant price for Kinesis Streams and a variable price for Lambdas.
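A sketch of such a batch consumer in Java; the batch size is configured on the event source mapping, and index is a hypothetical stand-in for the Elasticsearch write:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.KinesisEvent;
import java.nio.charset.StandardCharsets;

public class KinesisBatchHandler implements RequestHandler<KinesisEvent, Void> {
    @Override
    public Void handleRequest(KinesisEvent event, Context context) {
        // With a single shard, records arrive strictly in order; the batch size
        // is set on the event source mapping and trades latency against cost.
        for (KinesisEvent.KinesisEventRecord record : event.getRecords()) {
            String json = StandardCharsets.UTF_8.decode(record.getKinesis().getData()).toString();
            index(json); // hypothetical: write the document to Elasticsearch
        }
        return null;
    }

    private void index(String document) { /* hypothetical indexing call */ }
}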
If you really are in love with SQS and real time does not matter, you may consume items with Lambdas, EC2, or Batch: either you trigger many Lambdas with CloudWatch Events, or you keep an EC2 instance alive, or you trigger an AWS Batch job on a regular basis.
There is an economic equation to explore; each solution is the best for one use case and the worst for another, so make your choice ;)
I prefer SQS + Lambdas when there are few items to consume and SQS + Batch when there are a lot of items to consume.
You could probably also consider using SNS + SQS + Lambdas like @maikay says in his answer, but I wouldn't choose that solution.
Hope it helps. Feel free to ask for clarifications. Good luck!

AWS SQS standard queue or FIFO queue when messages cannot be duplicated?

We plan to use the AWS SQS service to queue events created from a web service and then use several workers to process those events. One event can only be processed one time. According to the AWS SQS documentation, an AWS SQS standard queue can "occasionally" produce duplicated messages, but with unlimited throughput. An AWS SQS FIFO queue will not produce duplicated messages, but has a throughput limitation of 300 API calls per second (with batchSize=10, equivalent to 3,000 messages per second). Our current peak-hour traffic is only 80 messages per second, so both are fine in terms of throughput requirements. But when I started to use an AWS SQS FIFO queue, I found that I need to do extra work, like providing the extra parameters "MessageGroupId" and "MessageDeduplicationId", or enabling the "ContentBasedDeduplication" setting. So, I am not sure which one is a better solution. We just need the messages not to be duplicated. We don't need the messages to be FIFO.
Solution #1:
Use an AWS SQS FIFO queue. For each message, generate a UUID for the "MessageGroupId" and "MessageDeduplicationId" parameters.
Solution #2:
Use an AWS SQS FIFO queue with "ContentBasedDeduplication" enabled. For each message, generate a UUID for the "MessageGroupId" parameter.
Solution #3:
Use an AWS SQS standard queue with AWS ElastiCache (either Redis or Memcached). For each message, the "MessageId" field will be saved in the cache server and checked for duplication later on; existence means the message has already been processed. (By the way, how long should the "MessageId" stay in the cache server? The AWS SQS documentation does not mention how far back a message could be duplicated.)
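For reference, Solution #1 looks roughly like this in Java (a sketch, not a recommendation); note the caveat in the comments: with a fresh UUID as "MessageDeduplicationId", SQS has nothing to match duplicates against, which is why Solution #2's content-based setting, or an id derived from a stable business key, is usually what you want:

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.SendMessageRequest;
import java.util.UUID;

public class FifoProducer {
    private static final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

    public void send(String queueUrl, String body) {
        sqs.sendMessage(new SendMessageRequest(queueUrl, body)
                // A random group id removes ordering constraints between messages.
                .withMessageGroupId(UUID.randomUUID().toString())
                // Caveat: a random dedup id means SQS never detects a duplicate;
                // derive it from the message content or a business key instead,
                // or enable ContentBasedDeduplication and omit it (Solution #2).
                .withMessageDeduplicationId(UUID.randomUUID().toString()));
    }
}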
You are making your system complicated with SQS.
We have moved to Kinesis Streams; it works flawlessly. Here are the benefits we have seen:
Order of Events
Trigger an Event when data appears in stream
Deliver in Batches
Leave the responsibility to handle errors to the receiver
Go back in time in case of issues
Buggier Implementation of the process
Higher performance than SQS
Hope it helps.
My first question would be: why is it even so important that you don't get duplicate messages? An ideal solution would be to use a standard queue and design your workers to be idempotent. For example, if the messages contain something like a task ID, store the completed task's result in a database and ignore messages whose task ID already exists in the DB.
Don't use receipt handles for application-side deduplication, because they change every time a message is received. In other words, SQS doesn't guarantee the same receipt handle for duplicate messages.
If you insist on deduplication, then you have to use a FIFO queue.
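A sketch of that idempotent-worker idea in Java, using a DynamoDB conditional write as the "already done" check (the completed_tasks table and taskId attribute are hypothetical; any store with an atomic put-if-absent works):

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ConditionalCheckFailedException;
import com.amazonaws.services.dynamodbv2.model.PutItemRequest;
import java.util.Collections;

public class IdempotentWorker {
    private static final AmazonDynamoDB dynamo = AmazonDynamoDBClientBuilder.defaultClient();

    public void handle(String taskId, String payload) {
        try {
            // Atomic put-if-absent: fails if this task ID was already claimed.
            dynamo.putItem(new PutItemRequest()
                    .withTableName("completed_tasks")
                    .withItem(Collections.singletonMap("taskId", new AttributeValue(taskId)))
                    .withConditionExpression("attribute_not_exists(taskId)"));
        } catch (ConditionalCheckFailedException alreadyProcessed) {
            return; // duplicate delivery: skip it
        }
        process(payload); // hypothetical business logic
        // Note: claiming before processing means a crash here loses the task;
        // a production version would record completion afterwards or use a transaction.
    }

    private void process(String payload) { /* hypothetical */ }
}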