Will DynamoDB stream records expire if Lambda can't keep up?

We have configured DynamoDB Streams to trigger a Lambda function. More than 10 million unique records will be inserted into the DynamoDB table within 30 minutes, and Lambda will process these records when triggered through the stream.
As per the DynamoDB Streams documentation, stream records expire after 24 hours.
Question:
Does this mean that the Lambda function (multiple concurrent executions) must finish processing all 10 million records within 24 hours?
If some stream events remain unprocessed after 24 hours, will they be lost?

As long as you don't throttle the Lambda, it won't fail to keep up.
What happens is that the stream is batched according to your settings - if your event source mapping is set to a batch size of 5, it will bundle five events and push them to Lambda.
Even if that happens hundreds of times a minute, Lambda will (again, assuming you aren't deliberately limiting Lambda executions) spin up additional concurrent executions to handle the load.
This is standard AWS philosophy. Pretty much every serverless resource (and even some that aren't, like EC2 with Elastic Beanstalk) is designed to scale horizontally, seamlessly and effortlessly, to handle burst traffic.
Your Lambda executions will likely be done within a couple of minutes of the last event being sent. The '24-hour timeout' really only bites when you deliberately hold processing back (e.g. using CloudWatch Events to 'hold' the Dynamo stream until certain times of day, such as letting everything process during off hours and turning it off again during business hours the next day).
To give you a similar example: I ran 10,000 executions through SQS into a Lambda, and it completed all 10,000 in about 15 minutes. Lambda concurrency is designed to handle this kind of burst flow.
Your DynamoDB read/write capacity is going to be hammered, however, so make sure it is set to on-demand rather than fixed provisioned capacity.
UPDATE
As @Maurice pointed out in the comments, there is a limit on the number of concurrent batches a DynamoDB stream will send at any one moment. Running the numbers shows you will fall far short even with a short Lambda execution time - and the longer the Lambda runs, the less likely you are to finish.
This means that, unless you truly need everything processed as quickly as possible, you should divide up the input.
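To make that concrete, here is a back-of-envelope model of stream throughput. The shard count, batch size, parallelization factor and Lambda duration below are assumed numbers for illustration, not measurements:

```python
# Rough capacity model for a DynamoDB stream -> Lambda pipeline.
# Each shard feeds up to `parallelization_factor` concurrent batches,
# and each batch of `batch_size` records takes one Lambda invocation.

def max_records_per_day(shards, parallelization_factor, batch_size, lambda_seconds):
    batches_per_worker_per_day = 86_400 / lambda_seconds
    workers = shards * parallelization_factor
    return int(workers * batches_per_worker_per_day * batch_size)

# Pessimistic but plausible: 2 shards, no extra parallelization,
# 100 records per batch, 30 s per invocation.
capacity = max_records_per_day(2, 1, 100, 30)
print(capacity)  # 576000 records/day - far short of 10 million
```

With those assumptions the stream can only drain about 576k records a day, which is exactly the "fall far short" scenario; bigger batches and faster Lambdas shift the numbers, but the per-shard concurrency limit is the cap.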
You can add an AWS SQS queue somewhere in the process - most likely you will need one, because even with the largest batch size and a very quick processor you won't get through all of the records in time.
SQS retains messages for up to 14 days, which may be enough for what you want. If you control the incoming messages, you can insert them into an SQS queue with a delay attached so that only a manageable number of inserts happen at once - whatever can be processed in a single day, or a little less. The flow would be:
Lambda to collate your inserts into an SQS queue -> SQS with a delay/smaller batch size -> Lambda to insert smaller batches into Dynamo -> Dynamo stream -> processing Lambda
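A minimal sketch of the collating step, assuming a standard SQS queue. Note that SendMessageBatch accepts at most 10 entries and per-message DelaySeconds caps at 900 (15 minutes), so this only smooths short bursts; spreading a backlog over a whole day needs a scheduled consumer or a state machine instead:

```python
# Split records into SendMessageBatch-sized chunks (max 10 entries) and
# stagger them with DelaySeconds. SQS caps per-message delay at 900 s,
# so delays are clamped to that limit.

MAX_BATCH = 10   # SendMessageBatch entry limit
MAX_DELAY = 900  # SQS DelaySeconds limit (15 minutes)

def staggered_batches(records, seconds_between_batches):
    batches = []
    for i in range(0, len(records), MAX_BATCH):
        delay = min((i // MAX_BATCH) * seconds_between_batches, MAX_DELAY)
        batches.append({"DelaySeconds": delay,
                        "Entries": records[i:i + MAX_BATCH]})
    return batches

batches = staggered_batches(list(range(35)), seconds_between_batches=60)
print(len(batches))                           # 4
print([b["DelaySeconds"] for b in batches])   # [0, 60, 120, 180]
```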
The other option is to do something similar but use a state machine with wait states and maps. State machines have a one-year run-time limit, so you have plenty of time there.
The final option is, instead of streaming the data straight into Lambda, to run Lambdas that query smaller sections of the table at a time and process them.

Related

Why does Lambda never reach (or get close to) the reserved concurrency? I want SQS to trigger more Lambda functions to process messages concurrently

I've set up a Lambda trigger with an SQS queue. The Lambda's reserved concurrency is set to 1000. However, there are millions of messages waiting in the queue that need to be processed, and only around 50 Lambdas are invoked at the same time. Ideally, I want SQS to trigger 1000 (or close to 1000) Lambda functions concurrently. Am I missing some configuration in SQS or Lambda? Thank you for any suggestion.
As stated in AWS Lambda developer guide:
...Lambda increases the number of processes that are reading batches by up to 60 more instances per minute. The maximum number of batches that an event source mapping can process simultaneously is 1,000.
So the behavior that you encountered (only invokes around 50 Lambdas at the same time) is actually expected.
If you are not doing so already, I would suggest batch processing in your Lambda (so you can process 10 messages per invocation). If that is still not enough, you can create more queues and Lambdas to divide the load (assuming order is not relevant in your case), or move away from this model and poll the queue directly from EC2/ECS (which can increase your costs considerably, however).
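The docs quote above can be turned into a rough scale-up model. The 5-poller start, +60/minute ramp and 1,000 cap come from the quoted developer guide; treat the output as an approximation, not a guarantee:

```python
# Model of SQS -> Lambda scale-up: start with 5 concurrent batch pollers,
# add up to 60 more per minute, cap at 1,000.

def pollers_after(minutes, start=5, per_minute=60, cap=1000):
    return min(start + per_minute * minutes, cap)

print(pollers_after(0))    # 5   - the ~50 concurrent Lambdas seen early on
print(pollers_after(1))    # 65
print(pollers_after(10))   # 605
print(pollers_after(20))   # 1000 (the cap is reached after ~17 minutes)
```

So seeing only around 50 concurrent executions in the first minute is expected; the ramp to 1,000 takes roughly a quarter of an hour of sustained backlog.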

Perform AWS Lambda (with multiple data) only after a fixed amount of data is gathered

I'd like to execute a lambda function with multiple data, only after a fixed amount of data is gathered. The fixed amount would be, for example, to consider only a specific amount of messages, or messages that are sent in a specific temporal range.
I thought of solving this problem with an SQS queue, writing the messages to it and polling to check the queue's status. But I don't like this solution, because I'd like to trigger the lambda as soon as the criterion is met (for example: elapsed time since the first message was sent, or a fixed number of messages).
The ideal would be to send all the messages gathered, for example, after 1 minute after the first message arrives.
To be clear:
First message arrives in the queue
A timer starts (e.g. 1 min)
The timer ends and triggers the lambda with all the messages gathered so far
Moreover, I'd like to handle different queues in parallel, based on different ids
Is there an elegant way to do so?
I already have a system in place that works with sequential lambdas, handling the whole process per single message.
Unfortunately, it's not an easy task to do on AWS Lambda (we have a similar use case).
SQS or Kinesis data stream as a trigger can be helpful, but have several limitations:
SQS will be polled by AWS Lambda at a very high frequency. You will have to add a concurrency limit to your lambda to make it get triggered with more than a single item, and the maximum batch size is just 10.
The base rate for Kinesis trigger is one per second for each shard, and cannot be changed.
Aggregating records across invocations is not a good idea, because you never know whether the next invocation will start in a different container, in which case the aggregated records would be lost.
Kinesis Firehose can be helpful, as you can configure max batch size and max time range for sending a new batch. You can configure it to write to an S3 bucket and configure a lambda to be triggered by new created files.
Make sure that if you use a Kinesis data stream as the source of a Kinesis Firehose, the data from each shard of the data stream is separately batched in the Firehose (this is not documented by AWS).
You can do this in a few ways. I'd do it like this:
Have the queue be an event source for a lambda function
That lambda function can either trigger a state machine or do nothing. It triggers the state machine only if one isn't already running (meaning we're within that 1-minute window).
The state machine has the following steps:
1 minute wait
Do its processing
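The two steps above could be sketched in Amazon States Language roughly like this (the state names and the Lambda ARN are placeholders, not from the original answer):

```json
{
  "Comment": "Wait 1 minute after the first message, then process the batch",
  "StartAt": "WaitOneMinute",
  "States": {
    "WaitOneMinute": { "Type": "Wait", "Seconds": 60, "Next": "ProcessBatch" },
    "ProcessBatch": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:process-batch",
      "End": true
    }
  }
}
```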

AWS SQS with a single worker?

I'm struggling to set up a queue in an AWS environment where the tasks are consumed by a single Lambda / worker.
AWS Lambda scales automatically, but I don't want that. The trouble is that the function makes several complex changes to a database, and there can be race conditions. Unfortunately this is out of my control.
It is therefore easier to ensure there is only one worker than to solve the complex SQL issues. So what I want is: whenever there are messages in the queue, a single worker receives them and completes the tasks sequentially. Order does not matter.
Set the concurrency limit on the Lambda function to 1.
As you've noticed, the 'built-in' SQS integration starts with a minimum of five workers and scales up.
I have two suggestions for you, however:
If you only have one shard, then Kinesis (with a batch size of one item) will ensure sequential, ordered execution. This is because Kinesis is parallel per shard (and one shard can take 1,000 records/second, so it's probably fine to have only one!) and the built-in Lambda trigger takes a customisable batch size (which can be 1) and waits for it to complete before taking the next batch.
If you need to use SQS, then the "old" way of integrating (prior to the SQS trigger) will give you "most likely one" worker and sequential execution. Here you trigger your Lambda on a scheduled CloudWatch Event, which lets a single Lambda check the queue every X (configured by you). The challenge is that if X is shorter than the time it takes to process a message, a second Lambda will run in parallel. (There are patterns to handle this, such as setting X to the timeout of your Lambda and having it run for 5 minutes, working through the queue one message at a time.)
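The scheduled single-worker loop can be sketched like this; `receive` and `handle` are hypothetical stand-ins for your SQS receive/process/delete calls, and the time budget mirrors "run for 5 minutes going through the queue one message at a time":

```python
# One scheduled invocation drains the queue one message at a time until
# it is empty or the time budget is nearly used up. Because there is only
# ever one invocation in flight, messages are processed sequentially.

import time

def drain(receive, handle, budget_seconds):
    deadline = time.monotonic() + budget_seconds
    processed = 0
    while time.monotonic() < deadline:
        msg = receive()   # e.g. sqs.receive_message(MaxNumberOfMessages=1)
        if msg is None:
            break         # queue is empty
        handle(msg)       # process the message, then delete it from the queue
        processed += 1
    return processed

# Demo with an in-memory "queue" instead of real SQS:
queue = list(range(5))
count = drain(lambda: queue.pop(0) if queue else None, lambda m: None, 1.0)
print(count)   # 5 - all messages handled sequentially by one worker
```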

AWS Lambda is seemingly not highly available when invoked from SNS

I am invoking a data processing lambda in bulk fashion by submitting ~5k sns requests in an asynchronous fashion. This causes all the requests to hit sns in a very short time. What I am noticing is that my lambda seems to have exactly 5k errors, and then seems to "wake up" and handle the load.
Am I doing something largely out of the ordinary use case here?
Is there any way to combat this?
I suspect it's a combination of concurrency and the way Lambda connects to SNS.
Lambda is only so good at automatically scaling up to deal with spikes in load.
Full details are here: https://docs.aws.amazon.com/lambda/latest/dg/scaling.html, but the key points to note are:
There's an account-wide concurrency limit, which you can ask to be raised. By default it's much less than 5k, so that will limit how concurrent your lambda could ever become.
There's a hard scaling limit (+1000 instances/minute), which means even if you've managed to convince AWS to let you have a concurrency limit of 30k, you'll have to be under sustained load for 30 minutes before you'll have that many lambdas going at once.
SNS is a non-stream-based asynchronous invocation (https://docs.aws.amazon.com/lambda/latest/dg/invoking-lambda-function.html#supported-event-source-sns), so what you see is a lot of errors as SNS attempts to invoke 5k Lambdas, but only the first X (say 1k) get through; the rest keep retrying. The backlog then clears concurrently at your initial burst (typically 1k, depending on your region), plus 1k a minute, until you reach maximum capacity.
Note that SNS only retries three times, at intervals (AWS is a bit sketchy about the intervals, but they are probably based on the retry delay the service returns, so should be approximately intelligent). I suggest you set up a DLQ to make sure you aren't dropping messages because of the time it takes for the queue to clear.
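A toy model of that ramp-up, using an assumed initial burst of 1,000 and +1,000 instances/minute (both vary by region and account, and this ignores executions finishing and freeing capacity, so it is an upper-bound sketch rather than a prediction):

```python
# How long until concurrency capacity covers a burst of N simultaneous
# SNS invocations, given an initial burst plus a per-minute ramp.

def minutes_to_absorb(backlog, initial_burst=1000, ramp_per_minute=1000):
    capacity, minutes = initial_burst, 0
    while capacity < backlog:
        minutes += 1
        capacity += ramp_per_minute
    return minutes

print(minutes_to_absorb(5000))   # 4 - a few minutes of ramp-up, during
                                 # which SNS retries the rejected invokes
```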
While your pattern is not a bad one, it seems like you're very exposed to the concurrency issues that surround lambda.
An alternative is to use a stream based event-source (like Kinesis), which processes in batches at a set concurrency (e.g. 500 records per lambda, concurrent by shard count, rather than 1:1 with SNS), and waits for each batch to finish before processing the next.

Why should I use Amazon Kinesis and not SNS-SQS?

I have a use case where there will be a stream of data coming in that I cannot consume at the same pace, so I need a buffer. This can be solved with an SNS-SQS queue, but I've learned that Kinesis serves the same purpose. So what is the difference? Why should I prefer (or not prefer) Kinesis?
Keep in mind this answer was correct for Jun 2015
After studying the issue for a while, having the same question in mind, I found that SQS (with SNS) is preferred for most use cases unless the order of the messages is important to you (SQS doesn't guarantee FIFO on messages).
There are two main advantages to Kinesis:
You can read the same message from several applications.
You can re-read messages in case you need to.
Both advantages can be achieved by using SNS as a fan out to SQS. That means that the producer of the message sends only one message to SNS, Then the SNS fans-out the message to multiple SQSs, one for each consumer application. In this way you can have as many consumers as you want without thinking about sharding capacity.
Moreover, we added one more SQS subscribed to the SNS that holds messages for 14 days. Normally no one reads from this queue, but if a bug makes us want to rewind the data, we can easily read all the messages from it and re-send them to the SNS. Kinesis, by contrast, only provides 7 days of retention.
In conclusion, SNS+SQSs is much easier and provides most capabilities. IMO you need a really strong case to choose Kinesis over it.
On the surface they are vaguely similar, but your use case will determine which tool is appropriate. IMO, if you can get by with SQS then you should - if it will do what you want, it will be simpler and cheaper, but here is a better explanation from the AWS FAQ which gives examples of appropriate use-cases for both tools to help you decide:
Semantics of these technologies are different because they were designed to support different scenarios:
SNS/SQS: the items in the stream are not related to each other
Kinesis: the items in the stream are related to each other
Let's understand the difference by example.
Suppose we have a stream of orders, for each order we need to reserve some stock and schedule a delivery. Once this is complete, we can safely remove the item from the stream and start processing the next order. We are fully done with the previous order before we start the next one.
Again, we have the same stream of orders, but now our goal is to group orders by destination. Once we have, say, 10 orders for the same place, we want to deliver them together (delivery optimization). Now the story is different: when we get a new item from the stream, we cannot finish processing it immediately; rather, we "wait" for more items to arrive in order to meet our goal. Moreover, if the processor crashes, we must "restore" the state (so no order is lost).
Once processing of one item cannot be separated from processing another one, we must have Kinesis semantics in order to handle all the cases safely.
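The "related records stay together" property works because Kinesis routes each record by a hash of its partition key, so every record with the same key lands on the same shard. A simplified illustration (Kinesis actually uses an MD5 hash over a 128-bit key space divided among shards; the modulo here just shows the idea):

```python
# Stable key -> shard routing: the same partition key always maps to the
# same shard, so one consumer sees every order for a given destination.

import hashlib

def shard_for(partition_key, num_shards):
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % num_shards

orders = ["berlin", "paris", "berlin", "tokyo", "berlin"]
shards = [shard_for(city, 4) for city in orders]
# Every "berlin" order maps to the same shard:
assert shards[0] == shards[2] == shards[4]
print(shards)
```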
Kinesis supports multiple consumers, meaning the same data records can be processed, at the same time or at different times within 24 hours, by different consumers. Similar behaviour can be achieved with SQS by writing to multiple queues, with each consumer reading from its own queue; however, writing to multiple queues adds a small latency (a few milliseconds) to the system.
Second, Kinesis provides routing capability: using a partition key you can selectively route data records to different shards, each of which can be processed by a particular EC2 instance, enabling micro-batch calculations (counting and aggregation).
Working with any AWS service is easy, but SQS is the easiest. With Kinesis you need to provision enough shards ahead of time, dynamically increase the shard count to handle spike load, and decrease it again to save cost - that management is a pain in Kinesis. Nothing of the sort is required with SQS; SQS is effectively infinitely scalable.
Excerpt from AWS Documentation:
We recommend Amazon Kinesis Streams for use cases with requirements that are similar to the following:
Routing related records to the same record processor (as in streaming MapReduce). For example, counting and aggregation are simpler when all records for a given key are routed to the same record processor.
Ordering of records. For example, you want to transfer log data from the application host to the processing/archival host while maintaining the order of log statements.
Ability for multiple applications to consume the same stream concurrently. For example, you have one application that updates a real-time dashboard and another that archives data to Amazon Redshift. You want both applications to consume data from the same stream concurrently and independently.
Ability to consume records in the same order a few hours later. For example, you have a billing application and an audit application that runs a few hours behind the billing application. Because Amazon Kinesis Streams stores data for up to 7 days, you can run the audit application up to 7 days behind the billing application.
We recommend Amazon SQS for use cases with requirements that are similar to the following:
Messaging semantics (such as message-level ack/fail) and visibility timeout. For example, you have a queue of work items and want to track the successful completion of each item independently. Amazon SQS tracks the ack/fail, so the application does not have to maintain a persistent checkpoint/cursor. Amazon SQS will delete acked messages and redeliver failed messages after a configured visibility timeout.
Individual message delay. For example, you have a job queue and need to schedule individual jobs with a delay. With Amazon SQS, you can configure individual messages to have a delay of up to 15 minutes.
Dynamically increasing concurrency/throughput at read time. For example, you have a work queue and want to add more readers until the backlog is cleared. With Amazon Kinesis Streams, you can scale up to a sufficient number of shards (note, however, that you'll need to provision enough shards ahead of time).
Leveraging Amazon SQS’s ability to scale transparently. For example, you buffer requests and the load changes as a result of occasional load spikes or the natural growth of your business. Because each buffered request can be processed independently, Amazon SQS can scale transparently to handle the load without any provisioning instructions from you.
The biggest advantage for me is the fact that Kinesis is a replayable queue, and SQS is not. So you can have multiple consumers of the same messages of Kinesis (or the same consumer at different times) where with SQS, once a message has been ack'd, it's gone from that queue.
SQS is better for worker queues because of that.
Another thing: Kinesis can trigger a Lambda, while SQS cannot. So with SQS you either have to provide an EC2 instance to process SQS messages (and deal with it if it fails), or you have to have a scheduled Lambda (which doesn't scale up or down - you get just one per minute).
Edit: This answer is no longer correct. SQS can directly trigger Lambda as of June 2018
https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
The pricing models are different, so depending on your use case one or the other may be cheaper. Using the simplest case (not including SNS):
SQS charges per message (each 64 KB counts as one request).
Kinesis charges per shard per hour (1 shard can handle up to 1000 messages or 1 MB/second) and also for the amount of data you put in (every 25 KB).
Plugging in the current prices and not taking into account the free tier, if you send 1 GB of messages per day at the maximum message size, Kinesis will cost much more than SQS ($10.82/month for Kinesis vs. $0.20/month for SQS). But if you send 1 TB per day, Kinesis is somewhat cheaper ($158/month vs. $201/month for SQS).
Details: SQS charges $0.40 per million requests (64 KB each), so $0.00655 per GB. At 1 GB per day, this is just under $0.20 per month; at 1 TB per day, it comes to a little over $201 per month.
Kinesis charges $0.014 per million requests (25 KB each), so $0.00059 per GB. At 1 GB per day, this is less than $0.02 per month; at 1 TB per day, it is about $18 per month. However, Kinesis also charges $0.015 per shard-hour. You need at least 1 shard per 1 MB per second. At 1 GB per day, 1 shard will be plenty, so that will add another $0.36 per day, for a total cost of $10.82 per month. At 1 TB per day, you will need at least 13 shards, which adds another $4.68 per day, for a total cost of $158 per month.
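The arithmetic in that comparison, as a small model you can re-run with current prices (the dollar figures below are the historical prices quoted above, not today's):

```python
# Monthly cost model: SQS charges per 64 KB request; Kinesis charges per
# 25 KB PUT unit plus $0.015 per shard-hour, with 1 MB/s capacity per shard.

import math

GB_IN_KB = 1024 ** 2  # KB per GB

def sqs_monthly(gb_per_day, price_per_m_req=0.40, req_kb=64):
    requests_per_day = gb_per_day * GB_IN_KB / req_kb
    return requests_per_day * price_per_m_req / 1e6 * 30

def kinesis_monthly(gb_per_day, put_price_per_m=0.014, put_kb=25,
                    shard_hour=0.015):
    puts_per_day = gb_per_day * GB_IN_KB / put_kb
    data_cost = puts_per_day * put_price_per_m / 1e6 * 30
    shards = max(1, math.ceil(gb_per_day * 1024 / 86_400))  # MB/s needed
    return data_cost + shards * shard_hour * 24 * 30

print(round(sqs_monthly(1), 2))        # 0.2   (~$0.20/month at 1 GB/day)
print(round(kinesis_monthly(1), 2))    # 10.82 (shard-hours dominate)
print(round(sqs_monthly(1024)))        # 201   (1 TB/day)
print(round(kinesis_monthly(1024)))    # 158   (13 shards + data cost)
```

This reproduces the figures in the answer: at low volume the per-shard-hour floor makes Kinesis far more expensive, while at 1 TB/day the per-request pricing makes SQS the pricier option.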
Kinesis solves the "map" part of a typical map-reduce scenario for streaming data, while SQS does not ensure this. If you have streaming data that needs to be aggregated by key, Kinesis makes sure that all the data for that key goes to a specific shard, and that shard can be consumed on a single host, making key-based aggregation easier than with SQS.
Kinesis Use Cases
Log and Event Data Collection
Real-time Analytics
Mobile Data Capture
“Internet of Things” Data Feed
SQS Use Cases
Application integration
Decoupling microservices
Allocate tasks to multiple worker nodes
Decouple live user requests from intensive background work
Batch messages for future processing
I'll add one more thing nobody else has mentioned -- SQS is several orders of magnitude more expensive.
In very simple terms, and keeping costs out of the picture, the real intention of SNS-SQS is to make services loosely coupled, and that is the primary reason to use SQS: where the order of the messages is not so important and where you want more control over the messages. If you want a job-queue pattern, SQS is again a much better fit. Kinesis shouldn't be used in such cases, because it is difficult to remove messages from Kinesis - Kinesis replays the whole batch on error. You can also use SQS as a dead-letter queue for more control. With Kinesis all of this is possible, but unheard of unless you really have a grievance with SQS.
If you want nice partitioning, though, SQS won't be useful.