I have a Lambda function that logs into a server on a specific interval, defined by a CloudWatch Event rule. There are multiple servers that need to be logged into on different intervals, each defined by its own CloudWatch Event rule. However, I only want one Lambda invocation hitting a specific server at a time. Can each CloudWatch Event rule be limited to just one Lambda invocation at a time, or would I have to create a duplicate Lambda function for each CloudWatch Event rule and set the reserved concurrency to 1 that way? I was hoping to avoid that, as it just adds duplicate Lambda functions. I'd like to keep it simple, if possible.
If you know the IDs of these instances, you can pass them as arguments to your CloudWatch Event rules in the form of a constant (JSON text) input:
Your single function would get the ID of the instance from the event object and perform operations on that one specific instance.
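A minimal sketch of that shared handler, assuming each CloudWatch Event rule is configured with a constant JSON input such as {"instance_id": "i-0abc123"} (the key name here is an assumption; use whatever your rules define):

```python
def lambda_handler(event, context):
    # Each rule's constant input carries the ID of the server this
    # invocation should work on.
    instance_id = event.get("instance_id")
    if instance_id is None:
        raise ValueError("event is missing the 'instance_id' constant input")
    # ... log into / operate on that one specific server here ...
    return {"instance_id": instance_id}
```

One function, many rules: each rule differs only in its schedule expression and its constant input.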
I have an AWS Lambda function. When it receives only one trigger, it always succeeds. But when it receives more than one trigger, it sometimes throws an error. The first trigger always succeeds.
Can I configure an AWS Lambda function to receive only one trigger?
Can one AWS Lambda function handle multiple triggers at once?
Yes, Lambda functions can handle multiple triggers at once.
when it receives more than one trigger, it sometimes throws an error
This is most probably related to your implementation. Are you doing something different based on the inputs? Does the code behave differently depending on the time?
Can I configure an AWS Lambda function to receive only one trigger?
You can limit the concurrency of the Lambda function. If you set it to 1, only one instance of the function can run at any given time.
See: Set Concurrency Limits on Individual AWS Lambda Functions
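For illustration, reserved concurrency can be set with the AWS SDK; the parameters below are a sketch (the function name is an assumption), and the actual call is shown in a comment since it requires AWS credentials:

```python
# Cap the function at a single concurrent execution.
concurrency_params = {
    "FunctionName": "my-function",        # hypothetical function name
    "ReservedConcurrentExecutions": 1,    # only one invocation may run at once
}
# Applied with boto3:
# boto3.client("lambda").put_function_concurrency(**concurrency_params)
```

Note that reserved concurrency applies to the function as a whole, across all of its triggers.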
Here's what I know, or think I know.
In AWS Lambda, the first time you call a function is commonly called a "cold start" -- this is akin to starting up your program for the first time.
If you make a second function invocation relatively soon after your first, this cold start won't happen again. This is colloquially known as a "warm start".
If a function is idle for long enough, the execution environment goes away, and the next request will need to cold start again.
It's also possible to have a single AWS Lambda function with multiple triggers. Here's an example of a single function that's handling both API Gateway requests and SQS messages.
My question: Will AWS Lambda reuse (warm start) an execution environment when different event triggers come in? Or will each event trigger have its own cold start? Or is this behavior not guaranteed by Lambda?
Yes, different triggers will reuse the same containers, since the execution environment is the same regardless of the trigger; the only difference is the event that is passed to your Lambda.
You can verify this by executing your Lambda with two types of triggers (i.e. API Gateway and simply the Test function on the Lambda Console) and looking at the CloudWatch logs. Each Lambda container creates its own Log Stream inside of your Lambda's Log Group. You should see both event logs going to the same Log Stream which means the 2nd event is successfully using the warm container created by the first event.
My AWS Lambda function is getting invoked from multiple places, such as API Gateway, AWS SNS, and CloudWatch Events. Is there any way to figure out who invoked the Lambda function? My Lambda function's logic depends on the invoker.
Another way to achieve this is to have three different Lambda functions, but I don't want to go that way if I can find the invoker information within a single Lambda function.
I would look at the event object, as the three services produce events with different structures.
For example, for CloudWatch Events I would check whether there is a source field in the event. For SNS I would check for Records, and for API Gateway for httpMethod.
But you can check for any other attribute that is unique to a given service. If you are not sure, just print example events for the three services from your function to the logs and check which attribute is best suited to look for.
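A sketch of that check, using the attributes mentioned above (the shapes are typical for these services, but always verify against events logged from your own function):

```python
def detect_invoker(event):
    """Best-effort guess at which service invoked the function,
    based on attributes unique to each event shape."""
    # SNS delivers a Records list whose entries carry EventSource "aws:sns".
    if event.get("Records") and event["Records"][0].get("EventSource") == "aws:sns":
        return "sns"
    # API Gateway (REST) proxy events carry the HTTP method at top level.
    if "httpMethod" in event:
        return "api-gateway"
    # CloudWatch Events / EventBridge events carry a "source" field.
    if event.get("source") == "aws.events":
        return "cloudwatch-event"
    return "unknown"
```

Inside the handler you would branch on the returned value to run the invoker-specific logic.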
I'm not sure if I understand AWS Lambda - SQS triggers correctly. Can I possibly configure it in such a way that one SQS queue can trigger different lambda functions depending on the message body or a message attribute?
My use case: I have three different Lambda functions (processCricket, processFootball, processTennis), each of which performs a unique function. I have a single queue (processGame) which receives messages. Each message on the queue has an attribute "type" which is either "Cricket", "Football" or "Tennis". Can I invoke a different Lambda function depending on the "type" on the message?
Option 1: Configure SQS to trigger a different lambda function depending on the type (Not sure if I can do this)
Option 2: Configure one lambda function which can check type and then call the other lambda functions depending on its type
Option 3: Create separate queues for each lambda. Control which lambda processes the message by adding the message to the appropriate queue.
Option 1: Configure SQS to trigger a different lambda function depending on the type
You can't know the type until the message is consumed by the Lambda, so this one is not possible.
Option 2: Configure one lambda function which can check type and then call the other lambda functions depending on its type
Yes, this is the "possible" version of the first option, but it may cost more depending on your usage. When you consume the queue in batch mode, you have to check each message and invoke multiple Lambdas accordingly.
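Option 2 could look something like the sketch below. The worker function names come from the question; the assumption that "type" lives in the JSON message body is mine, and the actual async invoke (which needs boto3 at runtime) is left as a comment:

```python
import json

# Map message type to the worker function that should handle it.
ROUTES = {
    "Cricket": "processCricket",
    "Football": "processFootball",
    "Tennis": "processTennis",
}

def route(record):
    """Return the target function name for one SQS record."""
    body = json.loads(record["body"])
    return ROUTES[body["type"]]

def lambda_handler(event, context):
    targets = []
    for record in event["Records"]:       # SQS batches arrive as Records
        function_name = route(record)
        # Fan out asynchronously to the worker:
        # boto3.client("lambda").invoke(FunctionName=function_name,
        #                               InvocationType="Event",
        #                               Payload=record["body"])
        targets.append(function_name)
    return targets
```

Note this dispatcher is billed for every message in addition to the workers, which is the extra cost the answer refers to.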
Option 3: Create separate queues for each lambda. Control which lambda processes the message by adding the message to the appropriate queue.
In my opinion, this could be the best option. You can configure a different DLQ for each queue and set different batch sizes depending on your business rules, with no need for an extra Lambda that adds complexity.
You should not configure multiple Lambda functions as triggers for a single SQS queue. This is because the message in SQS will be delivered to any one consumer and while this message is being processed by that consumer, it would not be visible to others. Thus, you wouldn't be able to decide which "type" of message goes to which function, so Option 1 is invalid.
Both Option 2 and 3 should work fine. I would select Option 2 if you do not expect that many messages to be delivered to your queue, thus not having to worry about Lambda scaling. Also note, multiple messages can be delivered in a single batch to the Lambda trigger, so you would have to implement your logic accordingly.
If you're expecting a large number of messages, then Option 3 would be better suited.
Your best option here would be to not send the messages directly to the queue at all. You can use either SNS or EventBridge as the destination for the message. Then you should have one queue for each type of message. You can then subscribe each queue to the source (SNS or EventBridge) and only receive the messages that make sense for that queue. With EventBridge you can do a fair amount of filtering on the entire payload. For SNS you'd need to add the type to the attributes so it can be used for filtering.
You should probably post to an SNS topic and have multiple lambdas subscribe to the topic and process the event as they wish.
Multiple Lambdas subscribing to SNS with different subscription filters looks like the best option here.
Benefits:
Minimal infra to manage.
Subscribers (Lambdas) do not have to worry about filtering after receiving a message; each Lambda gets only the messages that match the condition in its subscription filter.
SNS takes care of the routing based on the subscription filters.
The only additional piece you have to take care of is making the type field available as a message attribute so it can be used in the subscription filters.
I was asking the same question and found the article below while considering EventBridge vs SQS.
In short, Option 1 is now possible, i.e. you can trigger a specific function based on message content. This feature (Lambda event filtering for SQS event sources) was released in November 2021.
https://aws.amazon.com/about-aws/whats-new/2021/11/aws-lambda-event-filtering-amazon-sqs-dynamodb-kinesis-sources/
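With that feature, a filter pattern is attached to each Lambda's SQS event source mapping. Below is a hypothetical filter so that only "Cricket" messages reach processCricket, assuming the type lives in the JSON message body (for SQS sources, body fields are matched under the "body" key of the pattern):

```python
import json

# FilterCriteria as expected by the event source mapping: a list of
# Filters, each with a JSON-encoded Pattern string.
filter_criteria = {
    "Filters": [
        {"Pattern": json.dumps({"body": {"type": ["Cricket"]}})}
    ]
}
# Attached when wiring the queue to the function, e.g. with boto3:
# boto3.client("lambda").create_event_source_mapping(
#     EventSourceArn=queue_arn, FunctionName="processCricket",
#     FilterCriteria=filter_criteria)
```

Messages that match no mapping's filter are simply dropped from the queue, so make sure the filters cover every type you send.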
For more around price comparison see here: https://dev.to/aws-builders/should-we-consider-migrate-to-amazon-eventbridge-from-amazon-sns-sqs--4dgi
I have a Scheduled Lambda function (via CloudWatch event rule) which is triggered every minute.
This Lambda picks up a request from an SQS queue, processes the parameters, and triggers an AWS Step Functions workflow.
Now, ONLY 1 Lambda function instance is running every minute. How can I trigger multiple (e.g. 10) concurrent Lambda functions like this?
One way I can think of is to create 10 CloudWatch Event rules that each run every minute, but I am not sure that is the right way of doing it. Also, with that approach, 10 Lambdas would be invoked even if there are no entries in my SQS queue.
You can use AWS Step Functions. The event triggers the first function, which starts a workflow that calls multiple functions in parallel.
Some useful links:
https://www.youtube.com/watch?v=c797gM0f_Pc
https://medium.com/soluto-nashville/simplifying-workflows-with-aws-step-functions-57d5fad41e59
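A minimal Amazon States Language sketch of the "call multiple functions in parallel" idea, written as a Python dict (the worker ARNs are placeholders, not real resources):

```python
import json

# One Parallel state fanning out to two Lambda task branches at once.
definition = {
    "StartAt": "FanOut",
    "States": {
        "FanOut": {
            "Type": "Parallel",
            "End": True,
            "Branches": [
                {"StartAt": "WorkerA",
                 "States": {"WorkerA": {
                     "Type": "Task",
                     "Resource": "arn:aws:lambda:region:account:function:workerA",
                     "End": True}}},
                {"StartAt": "WorkerB",
                 "States": {"WorkerB": {
                     "Type": "Task",
                     "Resource": "arn:aws:lambda:region:account:function:workerB",
                     "End": True}}},
            ],
        }
    },
}
# The JSON form is what you upload as the state machine definition.
state_machine_json = json.dumps(definition)
```

The triggered function would then start this state machine (e.g. via the Step Functions StartExecution API) instead of doing the work itself.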
Since your Lambda function is fetching data from SQS, you can create an event source mapping between Lambda and SQS. Whenever messages are published to SQS, your Lambda function will be invoked concurrently depending on the number of messages in the queue, so you do not need to invoke the Lambda from a CloudWatch Event rule at all.
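The mapping itself is a small piece of configuration; the parameters below are a sketch with a hypothetical queue ARN and function name, and the actual API call (which needs AWS credentials) is left as a comment:

```python
# Wire the queue directly to the Lambda. With this in place, the
# per-minute CloudWatch schedule and the polling code are unnecessary.
mapping_params = {
    "EventSourceArn": "arn:aws:sqs:us-east-1:123456789012:processGame",
    "FunctionName": "startStepFunction",   # hypothetical function name
    "BatchSize": 10,                       # up to 10 messages per invocation
}
# boto3.client("lambda").create_event_source_mapping(**mapping_params)
```

Lambda then scales the number of concurrent invocations with the queue depth, and nothing runs when the queue is empty.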