I am new to AWS SQS. As of now, I understand that SQS has a queue which stores request messages (parameters), and the attached Lambda will fetch a number of messages based on the batch size we set on the Lambda trigger.
So if the SQS queue has 10,000 messages and the Lambda batch size is set to 100, then on each poll Lambda fetches 100 messages from the queue and processes them all, and once all requests are processed it pulls the next 100 messages, and so on?
As of now, I understand that Lambda will wait for the previous batch to finish before polling again.
I hope I am correct; if not, please correct me.
Now my requirement is that Lambda should not wait for the previous batch to finish; instead, it should pull the next 100 messages and process them in parallel. For example, Lambda should create separate instances (something like that), and each instance pulls its own 100 messages and processes them in parallel.
In the situation you describe, the AWS Lambda service will automatically run multiple AWS Lambda functions based upon the concurrency settings of your function.
See: Lambda function scaling - AWS Lambda
The default is to permit up to 1000 concurrent executions of an AWS Lambda function.
Therefore, you do not need to change anything. It will automatically create multiple instances of the Lambda function in parallel and pass (up to) 100 messages to each execution.
For a really good series of articles to understand how AWS Lambda operates, see: Operating Lambda: Performance optimization – Part 1 | AWS Compute Blog
My requirement is like this.
Read from an SQS queue every 2 hours, take all the messages available, and then process them.
Processing includes creating a file with details from the SQS messages and sending it to an SFTP server.
I implemented an AWS Lambda function to achieve point 1. The Lambda has an SQS trigger, with the batch size set to 50 and the batch window set to 2 hours. My assumption was that the Lambda would be triggered every 2 hours, that 50 messages would be delivered to the function in one go, and that I would create a file for every 50 records.
But I observed that my Lambda function is triggered with a varying number of messages (sometimes 50, sometimes 20, sometimes 5, etc.) even though I have configured the batch size as 50.
After reading some documentation, I gathered (I am not sure) that Lambda spawns 5 long-polling connections to read from SQS, and that this is what causes the function to be triggered with varying numbers of messages.
My questions are:
Is my assumption about the 5 parallel connections correct? If yes, is there a way I can control it? I want this to happen in a single thread/connection.
If 1 is not possible, what other alternatives do I have? I do not want one file created for every few records; I want one file generated every two hours with all the messages in SQS.
A "SQS Trigger" for Lambda is implemented with the so-called Event Source Mapping integration, which polls, batches and deletes messages from the queue on your behalf. It's designed for continuous polling, although you can disable it. You can set a maximum batch size of up to 10,000 records a function receives (BatchSize) and a maximum of 300s long polling time (MaximumBatchingWindowInSeconds). That doesn't meet your once-every-two-hours requirement.
Two alternatives:
Remove the Event Source Mapping. Instead, trigger the Lambda every two hours on a schedule with an EventBridge rule. Your Lambda is responsible for the SQS ReceiveMessage and DeleteMessageBatch operations. This approach ensures your Lambda will be invoked only once per cron event.
Keep the Event Source Mapping. Process messages as they arrive, accumulating the partial results in S3. Once every two hours, run a second, EventBridge-triggered Lambda, which bundles the partial results from S3 and sends them to the SFTP server. You don't control the number of Lambda invocations.
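Alternative 1 can be sketched with boto3 roughly as follows. The queue URL and the file/SFTP steps are placeholders, and this assumes a standard queue whose two-hourly backlog fits within Lambda's 15-minute timeout:

```python
# Sketch of alternative 1: an EventBridge-scheduled Lambda drains the queue itself.
# The queue URL and the file/SFTP steps below are hypothetical placeholders.

def chunks(items, size=10):
    """Split a list into groups of up to `size` (DeleteMessageBatch accepts max 10)."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def handler(event, context):
    import boto3  # boto3 ships with the Lambda runtime
    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

    messages = []
    while True:
        resp = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,  # hard SQS maximum per ReceiveMessage call
            WaitTimeSeconds=1,
        )
        batch = resp.get("Messages", [])
        if not batch:
            break  # queue drained (approximately, for a standard queue)
        messages.extend(batch)

    # ... build the file from [m["Body"] for m in messages] and upload it via SFTP ...

    # Delete in groups of 10, the DeleteMessageBatch limit.
    for group in chunks(messages, 10):
        sqs.delete_message_batch(
            QueueUrl=queue_url,
            Entries=[{"Id": str(i), "ReceiptHandle": m["ReceiptHandle"]}
                     for i, m in enumerate(group)],
        )
```

Deleting only after the file is safely uploaded means a failed run leaves the messages in the queue for the next scheduled invocation.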
Note on scaling:
<Edit (mid-Jan 2023): AWS Lambda now supports SQS Maximum Concurrency>
AWS Lambda now supports setting Maximum Concurrency on the Amazon SQS event source, a more direct and less fiddly way to control concurrency than reserved concurrency. The Maximum Concurrency setting limits the number of concurrent instances of the function that an Amazon SQS event source can invoke. The valid range is 2-1000 concurrent instances.
The create and update Event Source Mapping APIs now have a ScalingConfig option for SQS:
aws lambda update-event-source-mapping \
--uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
--scaling-config '{"MaximumConcurrency":2}' # valid range is 2-1000
</Edit>
With the SQS Event Source Mapping integration you can tweak the batch settings, but ultimately the Lambda service is in charge of Lambda scaling. As the AWS Blog Understanding how AWS Lambda scales with Amazon SQS standard queues says:
Lambda consumes messages in batches, starting at five concurrent batches with five functions at a time. If there are more messages in the queue, Lambda adds up to 60 functions per minute, up to 1,000 functions, to consume those messages.
You could theoretically restrict the number of concurrent Lambda executions with reserved concurrency, but you would risk dropped messages due to throttling errors.
You could try to set the ReservedConcurrency of the function to 1. That may help. See the docs for reference.
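For reference, reserved concurrency can be set with a single CLI call (the function name here is a placeholder):

```shell
aws lambda put-function-concurrency \
    --function-name my-function \
    --reserved-concurrent-executions 1
```

As noted above, restricting concurrency this far risks throttling errors; repeatedly throttled messages can end up in a dead-letter queue if the queue has a redrive policy.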
A simple solution would be to create a CloudWatch Events trigger (similar to a cron job) that invokes your Lambda function every two hours. In the Lambda function, you call ReceiveMessage on the queue until you have retrieved all the messages, process them, and afterwards delete them from the queue. The drawback is that there may be too many messages to process within Lambda's 15-minute timeout, so that's something you'd have to manage.
I have a Lambda function that accepts a parameter (a category_id), pulls some data from an API, and updates the database based on the response.
I have to execute the same Lambda function for multiple IDs, at an interval of 1 minute, on a daily basis.
For example, run the Lambda for category 1 at 12:00 AM, then for category 2 at 12:01 AM, and so on for 500+ categories.
What could be the best possible solution to achieve this?
This is what I am currently thinking:
Write Lambda using AWS SAM
Add Lambda Layer for Shared Dependencies
Attach Lambda with AWS Cloudwatch Events to run it on schedule
Add Environment Variable for category_id in lambda
Update the SAM template to reuse the same Lambda function again and again; the only changes will be the cron expression schedule and the value of the environment variable category_id
Problems in the Above Solution:
The number of Lambda functions in the account will increase.
Each Lambda will be attached to a CloudWatch Event, so their number will also increase.
There is a quota limit of max 300 CloudWatch Events per account (though we can ask support to increase that limit).
It'll require the use of nested stacks because of the SAM template size limit, as well as the limit on resources per template, which is 200 max.
I'll be able to create only 50 Lambda functions per nested stack, which means the number of nested stacks will also increase, because 1 Lambda = 4 resources (Lambda + Role + Rule + Event).
Other solutions (not sure if they can be used):
Use of Step Functions
Trigger only the first Lambda function with a cron schedule, and have the current Lambda invoke the Lambda for the next category (only one CloudWatch Event will be required, to invoke the function for the first category, but the time difference will vary, i.e. the next Lambda will not execute exactly one minute later).
Use only one Lambda and one CloudWatch schedule event; the Lambda function will have a list of all category IDs and will invoke itself recursively, using one category ID at a time and removing the used category ID from the list (the only problem is that the Lambda will not execute exactly one minute later for the next category_id in the list).
Looking forward to hearing about the best solution.
I would suggest using a standard Worker pattern:
Create an Amazon SQS queue
Configure the AWS Lambda function so that it is triggered to run whenever a message is sent to the SQS queue
Trigger a separate process at midnight (eg another Lambda function) that sends the 500 messages to the SQS queue, each with a different category ID
This will cause the Lambda function to execute once for each message. If you only want one instance of the Lambda function to be running at any time (with no parallel executions), set the function's concurrency limit to 1. When one invocation completes, Lambda will automatically grab another message from the queue and start executing. There will be practically no "wasted time" between executions of the function.
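The midnight "seeder" step can be sketched like this (the queue URL is a placeholder, and this assumes a standard queue; send_message_batch accepts at most 10 entries per call):

```python
# Sketch of the midnight seeder Lambda: push one SQS message per category ID.
# The queue URL below is a hypothetical placeholder.

def make_batches(category_ids, size=10):
    """Build SendMessageBatch entry lists, max 10 entries each (the SQS limit)."""
    entries = [{"Id": str(cid), "MessageBody": str(cid)} for cid in category_ids]
    return [entries[i:i + size] for i in range(0, len(entries), size)]

def handler(event, context):
    import boto3  # boto3 ships with the Lambda runtime
    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/category-jobs"  # placeholder
    for batch in make_batches(range(1, 501)):  # 500 categories -> 50 batch calls
        sqs.send_message_batch(QueueUrl=queue_url, Entries=batch)
```

With the worker function's concurrency limit set to 1, the 500 messages are then processed one at a time.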
Given that you are doing a large amount of processing, an Amazon EC2 instance might be more appropriate.
If the bandwidth requirements are low (e.g. if it is just making API calls), then a t3a.micro ($0.0094 per hour) or even a t3a.nano instance ($0.0047 per hour) can be quite cost-effective.
A script running on the instance could process a category, then sleep for 30 seconds, in a big loop. Running 500 categories at one minute each would take about 8 hours. That's under 10¢ per day!
The instance can then stop or self-terminate when the work is complete. See: Auto-Stop EC2 instances when they finish a task - DEV Community
I have an AWS Lambda function that does some statistics on over 1k stock tickers after market close. One option I have is the following:
Set up a cron job on an EC2 instance that submits 1k HTTP requests asynchronously (e.g. http://xxxxx.lambdafunction.xxxx?ticker= ) to trigger the AWS Lambda function (or submits 1k requests to SNS and lets Lambda pick them up).
I think it should run fine, but I would much appreciate it if there is a serverless/PaaS approach to triggering the tasks.
Off the top of my head, here are a couple of ways to achieve what you need:
Option 1: [Cost-Effective]
Post all the tickers to an AWS SQS FIFO queue.
Define a trigger on this queue to invoke the Lambda function.
Result: Since you are posting all the events to a FIFO queue, which maintains their order, the events will be polled sequentially. Moreover, the SQS-to-Lambda trigger will scale automatically based on the number of messages in the queue.
Option 2: [Costly and can easily scale for real-time processing]
Same as above, but instead of posting to a FIFO queue, post to a Kinesis stream.
Enable the Kinesis stream to trigger the Lambda function.
Result: Kinesis will preserve the order of events arriving in the stream, and the Lambda function will be invoked with a concurrency based on the number of shards in the stream. This implementation scales significantly. If you have any future use case for real-time processing of tickers, this could be a great solution.
Option 3: [Cost Effective, alternate to Option:1]
Collect all the ticker events (1k or whatever) and put them into a file.
Upload this file to an AWS S3 bucket.
Enable an S3 event notification to trigger a proxy Lambda function.
This proxy Lambda function reads the S3 file and, based on the total number of events in the file, spawns n parallel actor Lambda functions.
Each actor Lambda function processes its share of the events.
Result: Easy to implement, cost-effective, and provides easy scaling based on your custom algorithm for distributing the load in the proxy Lambda function.
Option 4: [All-serverless]
Write a Lambda function that gets the list of tickers from some web server.
Define an AWS CloudWatch rule for generating events based on a cron schedule/frequency.
Add a trigger to this CloudWatch rule to invoke the proxy Lambda function.
The proxy Lambda function will use any combination of the above options [1, 2, or 3] to trigger the actor Lambda function for processing the records.
Result: Everything can be configured via the AWS console and is easy to use. Alternatively, you can also write an AWS CloudFormation template to generate all the required resources in a single go.
Having said that, now I will leave this up to you to choose the right solution based on your business/cost requirements.
You can use the Lambda fan-out pattern.
You can follow these steps to process 1k or more tickers using a serverless approach:
1. Store all the stock tickers in an S3 file.
2. Create a master Lambda which reads the S3 file and splits the stocks into groups of 10.
3. Create a child Lambda which makes the async call to the external HTTP service and fetches the details.
4. In the master Lambda, loop through these groups and invoke 100 child Lambdas, passing in one group each; each child returns its results to the master Lambda.
5. Collect all the information returned from the child Lambdas and continue with your processing there.
You can then trigger this master Lambda at market close every day using a CloudWatch time-based rule scheduler.
This is a completely serverless approach.
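The master/child flow above might look roughly like this in Python (the child function name is a placeholder; synchronous RequestResponse invocations are used so the master can collect the results):

```python
# Sketch of the master Lambda fan-out: invoke one child Lambda per group of
# 10 tickers and collect the responses. "child-ticker-fn" is a placeholder name.
import json
from concurrent.futures import ThreadPoolExecutor

def group_tickers(tickers, size=10):
    """Split the ticker list into groups of up to `size`."""
    return [tickers[i:i + size] for i in range(0, len(tickers), size)]

def invoke_child(group):
    import boto3  # boto3 ships with the Lambda runtime
    lam = boto3.client("lambda")
    resp = lam.invoke(
        FunctionName="child-ticker-fn",    # placeholder child function name
        InvocationType="RequestResponse",  # synchronous, so results come back
        Payload=json.dumps({"tickers": group}),
    )
    return json.loads(resp["Payload"].read())

def handler(event, context):
    tickers = event["tickers"]             # e.g. loaded from the S3 file
    groups = group_tickers(tickers, 10)    # 1k tickers -> 100 groups
    with ThreadPoolExecutor(max_workers=100) as pool:
        results = list(pool.map(invoke_child, groups))
    # ... continue processing the collected results here ...
    return results
```

Note the thread pool: the master waits on 100 synchronous invocations at once, so its own timeout only needs to cover the slowest child.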
I am trying to come up with a way to have pieces of data processed at specific time intervals by invoking aws lambda every N hours.
For example, parse a page at specific url every 6 hours and store result in s3 bucket.
Have many (~100k) urls each processed that way.
Of course, you can have a VM that hosts some scheduler that would trigger lambdas, as described in this answer, but that breaks the "serverless" approach.
So, is there a way to do this using aws services only?
Things I tried that does not work:
SQS can delay messages, but only for a maximum of 15 minutes (I need hours), and there is no built-in integration between SQS and Lambda, so you need some polling agent (a Lambda?) that polls the queue all the time and sends new messages to a worker Lambda, which again defeats the point of executing only at the scheduled time;
CloudWatch Alarms can send messages to SNS, which triggers a Lambda. You can implement periodic Lambda calls like that by using a future metric timestamp; however, an alarm message cannot have custom data (think the url from the example above) attached to it, so that does not work either;
I could create Lambda CloudWatch scheduled triggers programmatically, but they also cannot pass any data to the Lambda.
The only way I could think of is to have a DynamoDB table of "url" records, each with the timestamp of its last "processing", and a periodic Lambda that queries the table and sends the "old" records as jobs to another "worker" Lambda (directly or via SNS).
That would work; however, you still need a "polling" Lambda, which could become a bottleneck as the number of items to process grows.
Any other ideas?
100k jobs every 6 hours doesn't sound like a great use case for serverless, IMO. Personally, I would set up a CloudWatch event with a relevant cron expression that triggers a Lambda to start an EC2 instance, have the instance process all the URLs (stored in DynamoDB), and script the EC2 instance to shut down after processing the last URL.
But that's not what you asked.
You could set up a CloudWatch event with a relevant cron expression that spawns a Lambda (the orchestrator), which reads the URLs from DynamoDB (or even an S3 file) and then invokes a second Lambda (the worker) for each URL to actually parse the pages.
Using this pattern you will start hitting concurrency issues at 1000 lambdas (1 orchestrator & 999 workers), less if you have other lambdas running in the same region. You can ask AWS to increase this limit, but I don't know under what scenarios they will do this, or how high they will increase the limit.
From here you have three choices:
1. Split out the payload to each worker Lambda so each instance receives multiple URLs to process.
2. Add another column to your list of URLs and group the URLs with this column (e.g. the first 500 are marked with a 1, the second 500 with a 2, etc.). Then your orchestrator Lambda could take URLs off the list in batches. This would require you to run the CloudWatch event at a greater frequency and to manage state, so that the orchestrator Lambda knows which batch is next when invoked (I've done this at a smaller scale, just storing a variable in an S3 file).
3. Use some combination of options 1 and 2.
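Choice 2's state handling can be sketched like this (the bucket and key names are placeholders; the counter object simply records which batch the orchestrator should take next):

```python
# Sketch of choice 2: the orchestrator keeps a batch counter in S3 so each
# scheduled run knows which slice of URLs to hand out next.
# The bucket/key names and load_urls() are hypothetical placeholders.

def select_batch(urls, counter, batch_size=500):
    """Return the slice for this run and the counter value for the next run."""
    start = counter * batch_size
    if start >= len(urls):  # all batches done; wrap around to the first one
        start, counter = 0, 0
    return urls[start:start + batch_size], counter + 1

def handler(event, context):
    import boto3  # boto3 ships with the Lambda runtime
    s3 = boto3.client("s3")
    bucket, key = "my-state-bucket", "next-batch.txt"  # placeholders

    counter = int(s3.get_object(Bucket=bucket, Key=key)["Body"].read())
    urls = load_urls()  # hypothetical helper that reads the URL list
    batch, next_counter = select_batch(urls, counter)
    s3.put_object(Bucket=bucket, Key=key, Body=str(next_counter))

    for url in batch:
        pass  # invoke the worker Lambda for each url here
```

Storing the counter before fanning out means a failed run re-processes at most one batch rather than losing its place.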
It looks like this fits a batch-processing scenario, with the AWS Lambda function as a job. It's serverless but obviously adds a dependency on another AWS service.
At the same time, it comes with a dashboard, processing status, retries, and all the other perks of a job-scheduling service.
I have a Lambda function which spawns up a number of worker Lambda functions and each worker function posts to an SQS queue if there is any error.
There is a UI which long-polls the SQS queue for any errors. My problem is: how do I know when the processing is complete?
Since the first Lambda function (which spawns the worker Lambda functions) runs asynchronously, that is, it splits the data across the worker Lambda functions and then returns/finishes, I need a way to figure out when the processing is complete.
The reason why I'm only posting the errors to the SQS queue, and not the successes, is that if I have 10,000 objects to process and there are 9,000 successes, I would have to make quite a few ReceiveMessage calls (the SQS API call to retrieve items from the queue) on the UI/client side: around 900 calls if I specify the maximum number of messages to receive as 10 per call (you cannot retrieve more than 10 messages from the queue per call).
How can I overcome this design issue?
I'm using API Gateway, AWS Lambda, and DynamoDB (feel free to suggest any other Amazon/AWS service that could make it easier to get the job done).