Pattern to guarantee processing of files in an SQS queue using Lambda - amazon-web-services

I have a very simple task to perform on many files as they are uploaded to an S3 bucket. The task can easily be performed by a simple Lambda function, and the resulting files written back to S3.
Connecting this Lambda function to an event triggered by S3 events is trivial, but I have some added complications:
The files must be processed: if the Lambda function fails, the operation needs to be attempted again until it succeeds, within a reasonable number of retries.
I need to know when all files have finished being processed
It seems to me that putting a message queue (SQS) into the system is a sensible option. That way, any failed messages will be retried after the visibility timeout. I will also be able to query the queue length to see whether there are still operations in flight.
I can connect S3 events to an SQS queue. The problem is that I cannot directly trigger Lambda invocations from an SQS queue.
File --> S3 --> SQS --??--> Lambda ----> S3
                               \
                                `-> If successful, delete message from SQS
I could use Kinesis to take messages from the queue and trigger lambda functions, but this seems a bit overkill? Likewise, I could have dedicated instances polling the queue and operating on it, but for such a simple function, I don't really want to have to run a cluster of instances.
Is there a good design pattern for this?

For S3 to Lambda directly, I think you're right that you need to worry about the Lambda function failing or crashing, and the processing for that particular S3 object never taking place.
That being said, I would suggest doing S3 to Lambda directly coupled with a mechanism that allows you to detect and re-request processing of items that failed to process (i.e. a nanny). You could periodically schedule a job through Lambda (http://docs.aws.amazon.com/lambda/latest/dg/with-scheduled-events.html) that looks for old items (older than a certain threshold) that were not processed. Most processing is going to be done on the direct (happy) path.
One way this could be done would be using SQS with delayed delivery and with lambda scheduled function.
So you would have
File --> S3 --> Lambda --> S3 (happy path)
          \
           `-> SQS (delayed) --> Lambda (scheduled job) --> S3 (unhappy path)
In the scheduled job you could check if processing happened and just ack the message. The scheduled job can trigger the same Lambda job used for the normal processing.
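A minimal sketch of that scheduled "nanny" function, assuming the delayed queue's URL, the output bucket/prefix, and the processing function's name (all hypothetical here) are known to it, and that each queued message body is the S3 object key:

```python
import json
import boto3
from botocore.exceptions import ClientError

sqs = boto3.client("sqs")
s3 = boto3.client("s3")
lam = boto3.client("lambda")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/delayed-files"  # hypothetical
OUTPUT_BUCKET = "my-output-bucket"                                            # hypothetical
PROCESSOR_FN = "my-file-processor"                                            # hypothetical

def handler(event, context):
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                               WaitTimeSeconds=5)
    for msg in resp.get("Messages", []):
        key = msg["Body"]  # assumption: the message body is the S3 object key
        try:
            # If the output object exists, the happy path already ran: ack the message.
            s3.head_object(Bucket=OUTPUT_BUCKET, Key=f"processed/{key}")
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
        except ClientError:
            # Not processed yet: re-invoke the same processing Lambda asynchronously.
            lam.invoke(FunctionName=PROCESSOR_FN, InvocationType="Event",
                       Payload=json.dumps({"key": key}).encode())
```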
Also, don't forget to configure a dead letter queue for the SQS queue to catch things that fail processing over multiple retries.
Keep in mind that you could potentially bypass SQS completely if you have a way of figuring out what was processed and what was not by inspecting the S3 bucket.
Also keep in mind that all of the above assumes it is okay to process something that was already processed (i.e. edge cases involving timeouts on the Lambda function, delays on the queue, and SQS delivery).

Related

AWS lambda - best practice when reading from long list/s3

I have a scheduled error-handling Lambda, and I would like to use serverless technology here as opposed to a Spring Boot service or similar.
The Lambda will read from an S3 bucket and process accordingly. The problem is that at times the S3 bucket may have a high volume of data to be processed, and long-running operations aren't suited to Lambdas.
One solution I can think of is to have the Lambda read and process one item from the bucket and, on success, trigger another instance of the same Lambda unless the bucket is empty/fully processed. The thing I don't like is that this is synchronous and quite slow. I also need to be conscious of running too many Lambdas at the same time, as we are hitting a REST endpoint as part of the error flow and don't want to overload it with too many requests.
I am thinking it would be nice to have maybe three instances of the Lambda running at the same time until the bucket is empty, but I'm not really sure. I am wondering if anyone has any nice patterns that could be used here or suggestions on best practices?
Thanks
Create an S3 bucket for processing your files.
Enable an S3 -> Lambda trigger: on every new file in the bucket, the Lambda will be invoked to process that file, and every file is processed separately. https://docs.aws.amazon.com/AmazonS3/latest/user-guide/enable-event-notifications.html
Once a file is processed, you can either delete it or move it somewhere else.
Regarding concurrency, please have a look at provisioned concurrency: https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html
Update:
As you still plan to use a scheduler Lambda and S3:
The first Lambda reads/lists only the filenames and puts messages into SQS, one per file to process.
A second Lambda consumes the SQS messages and processes the referenced files.
Note: I would recommend using SQS from the start if the files/messages are not too big; it has built-in recovery mechanics (DLQ, delays, visibility timeouts, etc.) that give you more than plain S3 storage does. If the files are big, the alternative is to create a message containing only a file reference and still use SQS.
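A rough sketch of those two functions, assuming hypothetical bucket and queue names and a placeholder for the actual processing logic:

```python
import json
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

BUCKET = "my-error-bucket"                                             # hypothetical
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/files"   # hypothetical

def process(data):
    # Placeholder for your actual processing logic.
    pass

def scheduler_handler(event, context):
    """Scheduled Lambda: list only the object keys and enqueue one message per file."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            sqs.send_message(QueueUrl=QUEUE_URL,
                             MessageBody=json.dumps({"key": obj["Key"]}))

def worker_handler(event, context):
    """Second Lambda, triggered by SQS: fetch and process the referenced file."""
    for record in event["Records"]:
        key = json.loads(record["body"])["key"]
        body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
        process(body)
        s3.delete_object(Bucket=BUCKET, Key=key)
```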
I'd separate the lambda that is called by the scheduler from the lambda that is doing the actual processing. When the scheduler calls the first lambda, it can look at the contents of the bucket, then spawn the worker lambdas to process the objects. This way you have control over how many objects you want per worker.
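If you go the direct-invocation route instead, the scheduler Lambda could fan out roughly like this (the worker function name, bucket, and chunk size are hypothetical):

```python
import json
import boto3

s3 = boto3.client("s3")
lam = boto3.client("lambda")

BUCKET = "my-error-bucket"      # hypothetical
WORKER_FN = "my-worker"         # hypothetical
KEYS_PER_WORKER = 25            # tune this to control how much each worker handles

def handler(event, context):
    keys = [obj["Key"]
            for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET)
            for obj in page.get("Contents", [])]
    # Spawn one asynchronous worker invocation per chunk of keys.
    for i in range(0, len(keys), KEYS_PER_WORKER):
        lam.invoke(FunctionName=WORKER_FN,
                   InvocationType="Event",
                   Payload=json.dumps({"keys": keys[i:i + KEYS_PER_WORKER]}).encode())
```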
Given your requirements, I would recommend:
Configure an Amazon S3 Event so that a message is pushed to an Amazon SQS queue when the objects are created in the S3 bucket
Schedule an AWS Lambda function at regular intervals that will:
Check that the external service is working
Invoke a Lambda function to process one message from the queue, and keep looping
The hard part would be throttling the second Lambda function so that it doesn't try to send all the requests at once (which might impact that external service).
You could probably do this by using a Step Function to trigger Lambda and then, if it was successful, trigger another Lambda function. This could even be done in parallel, such as allowing up to three parallel Lambda executions. The benefit of using Step Functions is that there is no cost for "waiting" for each Lambda to finish executing.
So, the Step Function flow would be something like:
Invoke a "check external service" Lambda function
If it fails, then quit the flow
Invoke the "processing" Lambda function
Get one message
Process the message
If successful, remove the message from the queue
Return success/fail
If it was successful, keep looping until the queue is empty
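As a sketch (not a tested definition), the state machine could be expressed in Amazon States Language roughly like this, where the function ARNs and the $.serviceUp / $.queueEmpty output fields are assumptions about what your Lambdas return:

```python
import json

# Rough Amazon States Language definition for the flow above (placeholder ARNs).
state_machine = {
    "StartAt": "CheckExternalService",
    "States": {
        "CheckExternalService": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:check-service",
            "Next": "IsServiceUp",
        },
        "IsServiceUp": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.serviceUp", "BooleanEquals": True,
                 "Next": "ProcessOneMessage"}
            ],
            "Default": "Done",
        },
        "ProcessOneMessage": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-message",
            "Next": "MoreMessages",
        },
        "MoreMessages": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.queueEmpty", "BooleanEquals": False,
                 "Next": "ProcessOneMessage"}
            ],
            "Default": "Done",
        },
        "Done": {"Type": "Succeed"},
    },
}

print(json.dumps(state_machine, indent=2))
```

A Map state with MaxConcurrency set to 3 is one way to get the limited parallelism mentioned above.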

How to prevent AWS SQS from deleting a message when Lambda function triggered fails to process that message?

I have deployed an AWS Lambda function that triggers when an SQS queue receives a message. The function makes a request to a REST API, and if the response is not OK, the SQS message needs to be processed again.
That's why I need to resend the message to the queue, though I would prefer to delete the SQS messages programmatically myself; however, I can't find how to configure SQS to allow this. I have tried message retention, but it seems the trigger event causes the message to be deleted anyway.
Other possible options could be backing up the message in S3 or persisting it in DynamoDB, but I wonder if there's a better option.
Any insights on this question would be very helpful.
From AWS Lambda Retry Behavior - AWS Lambda:
If you configure an Amazon SQS queue as an event source, AWS Lambda will poll a batch of records in the queue and invoke your Lambda function. If the invocation fails or times out, every message in the batch will be returned to the queue, and each will be available for processing once the Visibility Timeout period expires. (Visibility timeouts are a period of time during which Amazon Simple Queue Service prevents other consumers from receiving and processing the message).
Once an invocation successfully processes a batch, each message in that batch will be removed from the queue. When a message is not successfully processed, it is either discarded or if you have configured an Amazon SQS Dead Letter Queue, the failure information will be directed there for you to analyze.
So, it seems (from reading this) that a simple option would be set a high visibility timeout on the queue and then raise an error if the function cannot process the message. This message will remain invisible for the configured timeout period, then would reappear on the queue for processing. If it exceeds the permitted number of retries, it would be deleted or moved to a Dead Letter Queue (if configured).
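A minimal sketch of a handler that does this, assuming a hypothetical REST endpoint and that the `requests` library is packaged with the function:

```python
import json
import requests  # assumed to be bundled in the deployment package

API_URL = "https://example.com/endpoint"   # hypothetical REST API

def handler(event, context):
    for record in event["Records"]:
        resp = requests.post(API_URL, json=json.loads(record["body"]))
        if not resp.ok:
            # Raising an exception means Lambda does not delete the batch; every
            # message in it returns to the queue once the visibility timeout expires.
            raise RuntimeError(f"API returned {resp.status_code}; message will be retried")
```

Note that with a batch size greater than 1, a single bad message sends the whole batch back, so either use a batch size of 1 or the partial-failure handling described in the next answer.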
There is a lambda-powertools library, created and maintained by AWSLabs, and one of its features is batch processing.
The batch processing utility handles partial failures when processing batches from Amazon SQS, Amazon Kinesis Data Streams, and Amazon DynamoDB Streams.
Check out the documentation here. This is the python version, but there are versions for other environments.
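A sketch of the SQS batch-processing pattern, roughly as documented for the Python version at the time of writing (the record-handling logic itself is a placeholder):

```python
from aws_lambda_powertools.utilities.batch import (BatchProcessor, EventType,
                                                   process_partial_response)
from aws_lambda_powertools.utilities.data_classes.sqs_event import SQSRecord

processor = BatchProcessor(event_type=EventType.SQS)

def do_work(payload):
    # Placeholder for your actual processing logic.
    pass

def record_handler(record: SQSRecord):
    # Process one message; raising here marks only this record as failed.
    do_work(record.json_body)

def handler(event, context):
    # Reports partial failures so only the failed records return to the queue.
    return process_partial_response(event=event, record_handler=record_handler,
                                    processor=processor, context=context)
```

For partial failures to take effect, the SQS event source mapping also needs ReportBatchItemFailures enabled.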
So after some research I found the following:
Frankly, there were workaround options for selectively filtering the successfully processed messages out of a batch before AWS implemented this natively.
Kindly refer to approaches 1-3 demonstrated here.
For AWS's own implementation, use approach No. 4.

How to determine how many times my Lambda executed over a certain period of time

I have one Lambda function that is executed on an S3 put-item trigger.
Now, for any object uploaded to S3, the Lambda is triggered.
Let's say someone uploaded 5 files to S3; the Lambda will then execute once for each of those 5 files.
Is there any way for the Lambda to trigger only one time for all those 5 files?
And after those 5 triggers/Lambda executions complete, can I trace how many minutes the Lambda was not executing because no files were uploaded?
Any help would be really appreciated.
If you have the bucket notification configured for object creation (s3:ObjectCreated) and you haven't specified a filter, or the filter matches the uploaded objects, your Lambda will get triggered once per uploaded object.
To see the number of invocations, you can look at the Invocations metric for your Lambda function in CloudWatch metrics.
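For example, with boto3 you could pull that metric for the last hour (the function name here is hypothetical):

```python
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Invocations",
    Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],  # hypothetical name
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,                # 5-minute buckets
    Statistics=["Sum"],
)
total = sum(point["Sum"] for point in resp["Datapoints"])
print(f"Invocations in the last hour: {int(total)}")
```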
You may want to add a queue that will handle the requests to process new files on S3.
A relevant one could be a Kinesis data stream / SQS. If batching is important to you, Kinesis will probably be better.
The requests can be sent by a Lambda triggered by S3, as you described, but it would only send the request to the queue, and another Lambda would then process it. A simpler way would be to send the request from the same code that puts the object into S3 (if possible).
This way you can have statistics of how many requests were sent, processed, waiting, etc.

Handling data from Dead Letter Queue

I have the following pipeline in place to move events:
Service -> SNS -> AWS Lambda -> Dynamo Db.
So, basically, the service publishes data to an SNS topic to which the AWS Lambda function is subscribed. This AWS Lambda function then pushes the data to DynamoDB. Now, I am adding a DLQ to the AWS Lambda function to store messages that failed to process.
Error messages can be due to an error in the publisher application or the consumer application. For example, the publisher changed the format of the published data, and say I do not support it yet in the AWS Lambda function, so it throws an error.
I wanted to know: after such messages have been pushed to the DLQ, what do we normally do?
Do we try to push the data again after changing the AWS Lambda function? Is this step done manually, or do we create a job that periodically pushes the data from the DLQ back to the Lambda function?
Or do we normally just put an alarm on the DLQ and then handle it manually?
Sometimes the issue can be a one-off DynamoDB connection failure that would succeed the next time we push, so handling it only manually would be a problem.
In addition to Lambda DLQs, you should consider adding SNS DLQs:
https://aws.amazon.com/blogs/compute/designing-durable-serverless-apps-with-dlqs-for-amazon-sns-amazon-sqs-aws-lambda/
I can comment on this for SQS -> DLQ.
There is no need to move the message, because doing so comes with many other challenges: duplicate messages, recovery scenarios, lost messages, de-duplication checks, and so on.
Here is the solution we implemented.
Usually, we use the DLQ for transient errors, not for permanent errors, so we took the approach below:
1. Read the message from the DLQ like a regular queue.
   Benefits:
   - Avoids duplicate message processing.
   - Better control over the DLQ; for example, I added a check to process the DLQ only when the regular queue has been completely processed.
   - Ability to scale the process up based on the number of messages in the DLQ.
2. Follow the same code that the regular queue consumer follows. This is more reliable in case the job is aborted or the process gets terminated while processing (e.g. the instance is killed or the process is terminated).
   Benefits:
   - Code reusability.
   - Error handling.
   - Recovery and message replay.
3. Extend the message visibility so that no other thread processes it.
   Benefit:
   - Avoids the same record being processed by multiple threads.
4. Delete the message only when there is either a permanent error or success.
   Benefit:
   - Keep processing until the transient error stops occurring.
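A sketch of that DLQ consumer, with the queue URL, error classes, and processing function all hypothetical stand-ins:

```python
import boto3

sqs = boto3.client("sqs")
DLQ_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-dlq"  # hypothetical

class PermanentError(Exception):
    pass

class TransientError(Exception):
    pass

def handle(body):
    # Placeholder: reuse the same code the regular queue consumer runs.
    pass

def reprocess_dlq():
    while True:
        resp = sqs.receive_message(QueueUrl=DLQ_URL, MaxNumberOfMessages=10,
                                   WaitTimeSeconds=10)
        messages = resp.get("Messages", [])
        if not messages:
            break  # DLQ drained
        for msg in messages:
            # Extend visibility so no other worker picks this message up meanwhile.
            sqs.change_message_visibility(QueueUrl=DLQ_URL,
                                          ReceiptHandle=msg["ReceiptHandle"],
                                          VisibilityTimeout=300)
            try:
                handle(msg["Body"])
            except TransientError:
                continue  # leave it; the message becomes visible again and is retried
            except PermanentError:
                sqs.delete_message(QueueUrl=DLQ_URL, ReceiptHandle=msg["ReceiptHandle"])
            else:
                sqs.delete_message(QueueUrl=DLQ_URL, ReceiptHandle=msg["ReceiptHandle"])
```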
AWS Lambda Dead Letter Queues directs events that cannot be processed to the Amazon SNS topic or Amazon SQS queue that you’ve configured for the Lambda function.
So handling the error with the given payload, using a service subscribed to the SNS topic or reading messages from SQS, is up to the developer to decide. Addressing the questions listed:
You can use another Lambda function subscribed to an SNS topic to process the message.
Yes, it is more common to set up an alarm and handle it manually.
By default, a failed Lambda function invoked asynchronously is retried twice, and then the event is discarded unless a DLQ is set up. So if it is a DynamoDB connection problem, it would probably be resolved on the second invocation.

Can I limit concurrent invocations of an AWS Lambda?

I have a Lambda function that’s triggered by a PUT to an S3 bucket.
I want to limit this Lambda function so that it’s only running one instance at a time – I don’t want two instances running concurrently.
I’ve had a look through the Lambda configuration and docs, but I can’t see anything obvious. I could write my own locking system, but it would be nice if this were already a solved problem.
How can I limit the number of concurrent invocations of a Lambda?
AWS Lambda now supports concurrency limits on individual functions:
https://aws.amazon.com/about-aws/whats-new/2017/11/set-concurrency-limits-on-individual-aws-lambda-functions/
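With boto3 this is a one-liner against a (hypothetical) function name; the console and the CLI's put-function-concurrency command do the same thing:

```python
import boto3

lam = boto3.client("lambda")

# Reserve a concurrency of 1 so at most one instance of the function runs at a time.
lam.put_function_concurrency(
    FunctionName="my-s3-processor",     # hypothetical function name
    ReservedConcurrentExecutions=1,
)
```

Keep in mind that S3 invokes Lambda asynchronously, so invocations throttled by the limit are retried by Lambda for a while; a DLQ is still a good safety net for events that exhaust their retries.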
I would suggest you use Kinesis Streams (or, alternatively, DynamoDB + DynamoDB Streams, which essentially have the same behavior).
You can see a Kinesis Stream as a queue. The good part is that you can use a Kinesis Stream as a trigger for your Lambda function, so anything that gets inserted into this queue will automatically be passed over to your function, in order. You will therefore be able to process those S3 events one by one, one Lambda execution after the other (one instance at a time).
In order to do that, you'll need to create a Lambda function with the simple purpose of getting S3 events and putting them into a Kinesis Stream. Then you'll configure that Kinesis Stream as your Lambda trigger.
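A minimal sketch of that forwarding function, with a hypothetical stream name and a fixed partition key so everything lands on one shard:

```python
import json
import boto3

kinesis = boto3.client("kinesis")
STREAM_NAME = "s3-object-events"   # hypothetical stream name

def handler(event, context):
    # Forward each S3 event record into the Kinesis stream.
    for record in event["Records"]:
        kinesis.put_record(
            StreamName=STREAM_NAME,
            Data=json.dumps(record).encode(),
            # A single partition key keeps all records on one shard, so the
            # downstream Lambda sees them in order, one batch at a time.
            PartitionKey="single-shard",
        )
```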
When you configure the Kinesis Stream as your Lambda trigger, I suggest you use the following configuration:
Batch size: 1
This means that your Lambda will be called with only one event from Kinesis. You can select a higher number and you'll get a list of events of that size (for example, if you want to process the last 10 events in one Lambda execution instead of 10 consecutive Lambda executions).
Starting position: Trim horizon
This means it'll behave as a queue (FIFO)
A bit more info on AWS May Webinar Series - Streaming Data Processing with Amazon Kinesis and AWS Lambda.
I hope this helps anyone with a similar problem.
P.S. Bear in mind that Kinesis Streams have their own pricing. Using DynamoDB + DynamoDB Streams might be cheaper (or even free due to the non-expiring Free Tier of DynamoDB).
No, this is one of the things I'd really like to see Lambda support, but currently it does not. One of the problems is that if there were a lot of S3 PUT operations happening AWS would have to queue up all the Lambda invocations somehow, and there is currently no support for that.
If you built a locking mechanism into your Lambda function, what would you do with the requests you don't process due to a lock? Would you just throw those S3 notifications away?
The solution most people recommend is to have S3 send the notifications to an SQS queue, and then have your Lambda function scheduled to run periodically, like once a minute, and check if there is an item in the queue that needs to be processed.
Alternatively, have S3 send the notifications to SQS and just have a t2.nano EC2 instance with a single-threaded service polling the queue.
I know this is an old thread, but I ran across it trying to figure out how to make sure my time sequenced SQS messages were processed in order coming out of a FIFO queue and not getting processed simultaneously/out-of-order via multiple Lambda threads running.
Per the documentation:
For FIFO queues, Lambda sends messages to your function in the order that it receives them. When you send a message to a FIFO queue, you specify a message group ID. Amazon SQS ensures that messages in the same group are delivered to Lambda in order. Lambda sorts the messages into groups and sends only one batch at a time for a group. If your function returns an error, the function attempts all retries on the affected messages before Lambda receives additional messages from the same group. Your function can scale in concurrency to the number of active message groups.
Link: https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
So essentially, as long as you use a FIFO queue and submit your messages that need to stay in sequence with the same MessageGroupID, SQS/Lambda automatically handles the sequencing without any additional settings necessary.
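For example, the producer side just needs to send to a FIFO queue with a consistent group ID (the queue URL and group name here are hypothetical):

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/events.fifo"  # hypothetical

def send_ordered(payload, dedup_id):
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(payload),
        # Messages sharing a group ID are handed to Lambda strictly in order,
        # one batch per group at a time.
        MessageGroupId="device-42",       # hypothetical group
        # Required for FIFO queues unless content-based deduplication is enabled.
        MessageDeduplicationId=dedup_id,
    )
```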
Have the S3 "Put events" cause a message to be placed on the queue (instead of involving a lambda function). The message should contain a reference to the S3 object. Then SCHEDULE a lambda to "SHORT POLL the entire queue".
PS: S3 events can not trigger a Kinesis Stream... only SQS, SMS, Lambda (see http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html#supported-notification-destinations). Kinesis Stream are expensive and used for real-time event handling.