EventBridge and Lambda with different inputs - amazon-web-services

I need to run a Lambda function with a thousand different inputs. I did it with an event bus and another Lambda that sends the events, reading the inputs from a DynamoDB table.
Is that the best way to do this? The Lambda that sends the events to the event bus takes too much time, and because of the EventBridge PutEvents limit I have to loop in boto3, sending 10 entries at a time.
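For reference, the batching loop being described might look something like this boto3 sketch; the bus name, source, and detail type below are invented, and PutEvents accepts at most 10 entries per call:

import json
import boto3

events = boto3.client("events")

def send_inputs(inputs):
    # Send the DynamoDB-sourced inputs in chunks of 10 (the PutEvents limit).
    for i in range(0, len(inputs), 10):
        events.put_events(
            Entries=[
                {
                    "Source": "my.app",           # hypothetical source
                    "DetailType": "job-input",    # hypothetical detail type
                    "Detail": json.dumps(item),
                    "EventBusName": "my-bus",     # hypothetical bus name
                }
                for item in inputs[i : i + 10]
            ]
        )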

You could trigger a Step Functions state machine from EventBridge and perform a parallel scan on DynamoDB to produce the input for the Lambda.
Alternatively, you can update every DynamoDB item with a Lambda or Step Functions and use DynamoDB Streams to trigger your Lambda.
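A hedged sketch of that parallel scan in boto3; the table name and segment count are assumptions, and each segment would be scanned by a separate worker (e.g. a parallel state in Step Functions):

import boto3

dynamodb = boto3.client("dynamodb")
TOTAL_SEGMENTS = 4  # arbitrary degree of parallelism

def scan_segment(segment):
    # Scan one segment of the table; run one call per segment in parallel.
    items, start_key = [], None
    while True:
        kwargs = {
            "TableName": "inputs",  # hypothetical table name
            "Segment": segment,
            "TotalSegments": TOTAL_SEGMENTS,
        }
        if start_key:
            kwargs["ExclusiveStartKey"] = start_key
        page = dynamodb.scan(**kwargs)
        items.extend(page["Items"])
        start_key = page.get("LastEvaluatedKey")
        if not start_key:
            return items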

Related

Is there an idiomatic way to aggregate events before processing them in AWS Lambda?

I have an AWS Lambda function which processes events from S3. I'd like to aggregate the events and let the Lambda process them as a batch.
Ideally, I'd like to be able to specify a batch size and a timeout (say a single event arrives and then nothing for 5 seconds; I'd then like to send a 1-event batch).
Is there an idiomatic way to do it using Lambda or other AWS services?
There are a few things you can do:
1. Make upstream do the aggregation:
Make aggregation the publisher's responsibility, and get the publisher to give you one event per group of objects to process. This works well if the publisher is already working in batches.
2. Insert your own aggregation step:
Trigger on each event.
Store the event somewhere.
If enough events have been stored, empty the store and pass all the contents to the processing step.
This works well if your processing step is much more expensive per event than just handling the event. Often, this can take the form of {aggregating lambda} -> {processing batch job}, since Lambda isn't great for very expensive processing.
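A rough sketch of such an aggregation step, assuming a DynamoDB table as the store; the table name, batch size, and processing step are all invented, and this simple version ignores concurrent writers:

import json
import boto3

dynamodb = boto3.client("dynamodb")
TABLE = "event-buffer"  # hypothetical buffer table
BATCH_SIZE = 20         # arbitrary flush threshold

def handler(event, context):
    # Append the incoming event to the buffered list and read back its size.
    resp = dynamodb.update_item(
        TableName=TABLE,
        Key={"pk": {"S": "buffer"}},
        UpdateExpression="SET events = list_append(if_not_exists(events, :empty), :e)",
        ExpressionAttributeValues={
            ":e": {"L": [{"S": json.dumps(event)}]},
            ":empty": {"L": []},
        },
        ReturnValues="ALL_NEW",
    )
    buffered = resp["Attributes"]["events"]["L"]
    if len(buffered) >= BATCH_SIZE:
        # Empty the store and pass the whole batch to the processing step.
        dynamodb.update_item(
            TableName=TABLE,
            Key={"pk": {"S": "buffer"}},
            UpdateExpression="REMOVE events",
        )
        process_batch(buffered)  # hypothetical processing step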
3. Do aggregation on a time basis:
Send your events to an SQS queue.
Trigger on a timer (e.g. CloudWatch Events).
When triggered, empty the queue and process everything in it. If it's too much to process in a single invocation, immediately trigger an additional lambda.
This works well if processing is fairly cheap and you want to minimize your number of Lambda invocations. The trigger schedule (how long you wait between invocations) is determined by weighing how long you're willing to wait to process an event against how many invocations you're willing to pay for. Two things to watch out for: if you get no events at all, you will still be invoking your Lambda; and if events arrive faster than they can be processed, your queue will grow and your processing will fall further and further behind.
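A minimal sketch of that timer-triggered drain in boto3; the queue URL and processing step are placeholders:

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/events"  # hypothetical

def handler(event, context):
    # Keep receiving until the queue is empty (10 is the per-call maximum).
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10)
        messages = resp.get("Messages", [])
        if not messages:
            break
        process(messages)  # hypothetical processing step
        # Delete only after successful processing.
        sqs.delete_message_batch(
            QueueUrl=QUEUE_URL,
            Entries=[
                {"Id": m["MessageId"], "ReceiptHandle": m["ReceiptHandle"]}
                for m in messages
            ],
        )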
I think you can achieve the batch operation by setting an SQS queue as the destination for the S3 notifications. Let's say you want to specify a batch size of 20: all your S3 events go to SQS, and you create a CloudWatch rule to trigger a Lambda when your SQS queue has 20 items. Your Lambda then polls SQS for the batch of 20 items and processes them.
You can also set an SQS trigger directly, but it has a maximum batch size of 10.

How should I architect AWS Lambda to support parallel processing in a batch model?

I have an AWS Lambda function that does some statistics on over 1k stock tickers after market close. One option I have is:
Set up a cron job on an EC2 instance that submits 1k HTTP requests asynchronously (e.g. http://xxxxx.lambdafunction.xxxx?ticker= to trigger the AWS Lambda function), or submits 1k requests to SNS and lets Lambda pick them up.
I think it would run fine, but I'd much appreciate it if there is a serverless/PaaS approach to triggering the tasks.
Off the top of my head, here are a couple of ways to achieve what you need:
Option 1: [Cost-Effective]
Post all the tickers to an AWS SQS FIFO queue.
Define a trigger on this queue to invoke the Lambda function.
Result: Since you are posting all the events to a FIFO queue, which maintains order, all the events will be polled sequentially. Moreover, the SQS-to-Lambda trigger will scale automatically based on the number of messages in the queue.
Option 2: [Costly and can easily scale for real-time processing]
Same as above, but instead of posting to a FIFO queue, post to a Kinesis stream.
Enable the Kinesis stream to trigger the Lambda function.
Result: Kinesis preserves the order of events arriving in the stream, and Lambda functions will be invoked based on the number of shards in the stream. This implementation scales significantly. If you have any future use case for real-time processing of tickers, this could be a great solution.
Option 3: [Cost Effective, alternate to Option:1]
Collect all the ticker events (1k or whatever) and put them into a file.
Upload this file to an AWS S3 bucket.
Enable S3 event notification to trigger a proxy Lambda function.
This proxy Lambda function reads the S3 file and, based on the total number of events in the file, spawns n parallel actor Lambda functions.
Each actor Lambda function processes its share of the events.
Result: Easy to implement, cost-effective, and provides easy scaling based on your custom algorithm for distributing the load in the proxy Lambda function.
Option 4: [All-serverless]
Write a Lambda function that gets the list of tickers from some web server.
Define an AWS CloudWatch rule that generates events on a cron/frequency schedule.
Add a trigger to this CloudWatch rule to invoke the proxy Lambda function.
The proxy Lambda function uses any combination of the options above [1, 2, or 3] to trigger the actor Lambda functions that process the records.
Result: Everything can be configured via the AWS console and is easy to use. Alternatively, you can write an AWS CloudFormation template to generate all the required resources in a single go.
Having said that, I will leave it up to you to choose the right solution based on your business/cost requirements.
You can use the Lambda fan-out pattern. You can follow these steps to process 1k or more tickers using a serverless approach (a boto3 sketch follows this answer):
1. Store all the stock tickers in an S3 file.
2. Create a master Lambda which reads the S3 file and splits the stocks into groups of 10.
3. Create a child Lambda which makes the async call to the external HTTP service and fetches the details.
4. In the master Lambda, loop through these groups and invoke 100 child Lambdas, passing in each group and returning the results to the master Lambda.
5. Collect all the information returned from the child Lambdas and continue with your processing there.
Now you can trigger this master Lambda at market close every day using a CloudWatch time-based rule scheduler.
This is a completely serverless approach.
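A hedged sketch of what that master Lambda might look like in boto3; the bucket, key, and child function name are invented for illustration:

import json
import boto3
from concurrent.futures import ThreadPoolExecutor

s3 = boto3.resource("s3")
lam = boto3.client("lambda")

def invoke_child(group):
    # Synchronous invoke so the master can collect each child's result.
    resp = lam.invoke(
        FunctionName="child-ticker-processor",  # hypothetical child Lambda
        InvocationType="RequestResponse",
        Payload=json.dumps({"tickers": group}),
    )
    return json.load(resp["Payload"])

def master_handler(event, context):
    # Steps 1-2: read the ticker file from S3 and split it into groups of 10.
    body = s3.Object("my-bucket", "tickers.txt").get()["Body"].read().decode()
    tickers = body.split()
    groups = [tickers[i : i + 10] for i in range(0, len(tickers), 10)]
    # Steps 4-5: invoke the children in parallel and gather their results.
    with ThreadPoolExecutor(max_workers=100) as pool:
        results = list(pool.map(invoke_child, groups))
    return results  # continue with your processing here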

AWS Lambda to read from SQS queue

I have an AWS Lambda function to read from an SQS queue. The Lambda logic is basically to read one message off SQS, process it, and delete it. The code to read the message is something like:
ReceiveMessageRequest messageRequest =
    new ReceiveMessageRequest(queueUrl)
        .withWaitTimeSeconds(5)
        .withMaxNumberOfMessages(1);
Now my question is: what is the best way to trigger this Lambda, and how does it scale? For instance, if there are, say, 1000 messages in the queue, will there be 1000 Lambdas running together, since in my case one Lambda can read only one message off the queue?
Any pointers on best practices around this kind of design would be appreciated.
Right now your best option is probably to set up an AWS CloudWatch Events rule that calls the Lambda function on the interval you need.
Here is a sample app from AWS to do just that:
https://github.com/awslabs/aws-serverless-sqs-event-source
I do believe that AWS will eventually support SQS as an event type for AWS Lambda, which should make this even easier, but for now your best choice is probably a version of the code linked above.
We can now use SQS messages to trigger AWS Lambda functions. Moreover, you are no longer required to run a message-polling service or create an SQS-to-SNS mapping.
Further details:
https://aws.amazon.com/blogs/aws/aws-lambda-adds-amazon-simple-queue-service-to-supported-event-sources/
https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
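With the native trigger, Lambda hands your function a batch of messages in the event's Records list and deletes them automatically if the handler succeeds. A minimal handler sketch (the processing step is hypothetical):

def handler(event, context):
    for record in event["Records"]:
        process(record["body"])  # hypothetical per-message logic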
AWS added native support in June 2018: https://aws.amazon.com/blogs/aws/aws-lambda-adds-amazon-simple-queue-service-to-supported-event-sources/
There are probably a few ways to do this, but I found this guide to be fairly helpful when I tried to implement the same sort of functionality you are describing in Node.js. One downside to this strategy is that you can only poll the queue every 60s.
The basic workflow would look something like this:
Set up a CloudWatch Alarm that gets triggered when the queue has a certain number of messages.
The Cloudwatch alarm then posts to SNS
The SNS message triggers a Lambda scale() function
The scale() function updates a configuration record in a DynamoDB table that sets the number of worker processes needed
You then have a main CloudWatch Schedule that invokes a worker() function every 60s
The worker() function reads configuration from DynamoDB to determine how many concurrent processes are needed, based on the queue size.
The worker() function then invokes the appropriate number of process() functions.
Each process() function consumes messages from SQS, performs your main application logic, and then removes the item from the queue.
You can find an example of what the scaling functions would look like in Node.js here
I have used this solution in a production environment for almost a year without any issues, even with thousands of messages in the queue. If you cut out the scaling portion, it will only process one message at a time.
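For illustration, the scale() step described above might look roughly like this in boto3; the queue URL, table name, and scaling factor are all assumptions:

import math
import boto3

sqs = boto3.client("sqs")
dynamodb = boto3.client("dynamodb")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # hypothetical
MESSAGES_PER_WORKER = 100  # arbitrary scaling factor

def scale(event, context):
    # Read the queue depth and store the desired worker count for worker().
    attrs = sqs.get_queue_attributes(
        QueueUrl=QUEUE_URL,
        AttributeNames=["ApproximateNumberOfMessages"],
    )
    depth = int(attrs["Attributes"]["ApproximateNumberOfMessages"])
    workers = max(1, math.ceil(depth / MESSAGES_PER_WORKER))
    dynamodb.put_item(
        TableName="worker-config",  # hypothetical configuration table
        Item={"pk": {"S": "config"}, "workers": {"N": str(workers)}},
    )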

Using AWS Lambda Functions to Consume AWS SQS Queues

I'm using an AWS Lambda function that is triggered from an SNS event trigger to consume from an SQS queue. When the Lambda function executes, it pulls 10 messages from the queue, processes them, pulls another 10, and so on and so forth - up to a certain time limit that's coded into the Lambda function (less than the max of 5 minutes, obviously).
It's my understanding that a Lambda function triggered by an SNS event is one-to-one, is that correct? In other words, one SNS event won't trigger multiple Lambda functions (up to the maximum concurrent execution limit). There's no scaling based on load.
Are there any other potential solutions, leveraging Lambda, that would let me consume from SQS as frequently/fast as possible? I had considered trying to auto-scale my Lambda functions by leveraging CloudWatch alarms (and SNS event triggers) based on SQS queue size, but it seems like those alarms can fire, at most, every 5 minutes. I've also considered developing a master Lambda function that can automatically execute (many) slave Lambdas based on querying the queue size.
I understand that the more optimal design may be to leverage Kinesis instead of SNS. I may consider incorporating Kinesis in the future, but let's just pretend that Kinesis is not an option at this time.
There is no single best way to do this. One approach (which you've already touched on) is to use CloudWatch to schedule a Lambda function to run every minute (that's the minimum schedule interval for Lambda). That Lambda function then looks for new SQS messages and invokes other Lambda functions to handle them. Here is a very good article for that use case: https://cloudonaut.io/integrate-sqs-and-lambda-serverless-architecture-for-asynchronous-workloads/
Personally, I do not recommend triggering your Lambda from SNS for this use case, because SNS doesn't fully guarantee delivery; AWS recommends having notifications delivered to an SQS queue as well, which does not solve your problem. From the FAQs:
[...] If it is critical that all published messages be successfully processed, developers should have notifications delivered to an SQS queue (in addition to notifications over other transports).
Source: https://aws.amazon.com/sns/faqs/
For this kind of processing, if you push messages to a Kinesis stream instead of SQS, you can process the messages flexibly (in batches of whatever size you need).
Note: If you use SQS, after triggering a Lambda function through SNS (or using a scheduled Lambda), it can invoke inner Lambda functions to check the queue, spawning multiple concurrent inner Lambdas. The problem, however, is that it's not practical to process SQS items in batches this way.

Can I limit concurrent invocations of an AWS Lambda?

I have a Lambda function that’s triggered by a PUT to an S3 bucket.
I want to limit this Lambda function so that it’s only running one instance at a time – I don’t want two instances running concurrently.
I’ve had a look through the Lambda configuration and docs, but I can’t see anything obvious. I could write my own locking system, but it would be nice if this was already a solved problem.
How can I limit the number of concurrent invocations of a Lambda?
AWS Lambda now supports concurrency limits on individual functions:
https://aws.amazon.com/about-aws/whats-new/2017/11/set-concurrency-limits-on-individual-aws-lambda-functions/
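The limit can also be set programmatically, e.g. with boto3 (the function name is a placeholder); reserving a concurrency of 1 caps the function at a single concurrent instance:

import boto3

boto3.client("lambda").put_function_concurrency(
    FunctionName="my-function",      # hypothetical function name
    ReservedConcurrentExecutions=1,  # hard cap on concurrent executions
)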
I would suggest you use Kinesis Streams (or, alternatively, DynamoDB + DynamoDB Streams, which essentially has the same behavior).
You can think of a Kinesis stream as a queue. The good part is that you can use a Kinesis stream as a trigger for your Lambda function, so anything that gets inserted into this queue will automatically be passed over to your function, in order. This lets you process those S3 events one by one, one Lambda execution after the other (one instance at a time).
In order to do that, you'll need to create a Lambda function with the simple purpose of getting S3 events and putting them into a Kinesis stream. Then you'll configure that Kinesis stream as your Lambda trigger.
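A minimal sketch of that forwarder Lambda in boto3; the stream name is an assumption. Using a constant partition key sends everything to one shard, which is what preserves strict ordering here:

import json
import boto3

kinesis = boto3.client("kinesis")

def handler(event, context):
    # Forward each S3 notification record into the stream.
    for record in event["Records"]:
        kinesis.put_record(
            StreamName="s3-events",  # hypothetical stream name
            Data=json.dumps(record).encode(),
            PartitionKey="s3",  # constant key = single shard = strict order
        )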
When you configure the Kinesis Stream as your Lambda Trigger I suggest you to use the following configuration:
Batch size: 1
This means that your Lambda will be called with only one event from Kinesis. You can select a higher number and you'll get a list of events of that size (for example, if you want to process the last 10 events in one Lambda execution instead of 10 consecutive Lambda executions).
Starting position: Trim horizon
This means it'll behave as a queue (FIFO)
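Expressed in boto3, that trigger configuration might look like the following; the ARN and function name are placeholders:

import boto3

boto3.client("lambda").create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-east-1:123456789012:stream/s3-events",
    FunctionName="my-processor",      # hypothetical consumer function
    BatchSize=1,                      # one event per invocation
    StartingPosition="TRIM_HORIZON",  # read from the oldest record, queue-like
)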
A bit more info on AWS May Webinar Series - Streaming Data Processing with Amazon Kinesis and AWS Lambda.
I hope this helps anyone with a similar problem.
P.S. Bear in mind that Kinesis Streams have their own pricing. Using DynamoDB + DynamoDB Streams might be cheaper (or even free due to the non-expiring Free Tier of DynamoDB).
No, this is one of the things I'd really like to see Lambda support, but currently it does not. One of the problems is that if there were a lot of S3 PUT operations happening AWS would have to queue up all the Lambda invocations somehow, and there is currently no support for that.
If you built a locking mechanism into your Lambda function, what would you do with the requests you don't process due to a lock? Would you just throw those S3 notifications away?
The solution most people recommend is to have S3 send the notifications to an SQS queue, and then have your Lambda function scheduled to run periodically, like once a minute, and check if there is an item in the queue that needs to be processed.
Alternatively, have S3 send the notifications to SQS and just have a t2.nano EC2 instance with a single-threaded service polling the queue.
I know this is an old thread, but I ran across it while trying to figure out how to make sure my time-sequenced SQS messages were processed in order coming out of a FIFO queue, rather than simultaneously/out of order by multiple Lambda threads.
Per the documentation:
For FIFO queues, Lambda sends messages to your function in the order that it receives them. When you send a message to a FIFO queue, you specify a message group ID. Amazon SQS ensures that messages in the same group are delivered to Lambda in order. Lambda sorts the messages into groups and sends only one batch at a time for a group. If your function returns an error, the function attempts all retries on the affected messages before Lambda receives additional messages from the same group.
Your function can scale in concurrency to the number of active message groups.
Link: https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
So essentially, as long as you use a FIFO queue and submit the messages that need to stay in sequence with the same MessageGroupId, SQS/Lambda automatically handles the sequencing without any additional settings.
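For example, sending with boto3 (the queue URL is a placeholder); all messages that share a MessageGroupId are delivered in order:

import boto3

sqs = boto3.client("sqs")
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/events.fifo",
    MessageBody="tick-1",
    MessageGroupId="ordered-stream",  # messages in this group stay in sequence
    MessageDeduplicationId="tick-1",  # required unless content-based dedup is enabled
)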
Have the S3 "Put" events cause a message to be placed on the queue (instead of invoking a Lambda function). The message should contain a reference to the S3 object. Then schedule a Lambda to short-poll the entire queue.
P.S.: S3 events cannot trigger a Kinesis stream; only SQS, SNS, and Lambda are supported destinations (see http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html#supported-notification-destinations). Kinesis streams are expensive and are meant for real-time event handling.