What the example below does is this: a cron rule runs every minute and triggers a Lambda function written in Go. If the Lambda returns an error, a message is put into a DLQ straight away. What I am struggling to work out is the retry logic: a message should go to the DLQ only after the third Lambda attempt, which is what I am trying to accomplish. If you can see what I am missing in the AWS commands below, please let me know.
What I have tried so far: I created an additional normal queue on top of the DLQ and linked that queue to the Lambda with --dead-letter-config instead. Then I linked the DLQ to the rule target with DeadLetterConfig plus a RetryPolicy. I am not sure if this is how the whole thing is designed to work; I suspect more components are required, and I am not even sure this approach is correct!
Lambda (main.go)
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

func main() {
	lambda.Start(handle)
}

// handle succeeds only when the event detail is exactly {"ok": "yes"};
// any other payload returns an error so the retry/DLQ behaviour can be observed.
func handle(_ context.Context, event events.CloudWatchEvent) error {
	detail, err := event.Detail.MarshalJSON()
	if err != nil {
		return err
	}
	if string(detail) == `{"ok": "yes"}` {
		return nil
	}
	return fmt.Errorf("not ok")
}
AWS steps
GOOS=linux CGO_ENABLED=0 go build -ldflags "-s -w" -o main main.go
zip main.zip main
# Create rule
aws --profile localstack --endpoint-url http://localhost:4566 events put-rule \
--name test-rule \
--schedule-expression 'cron(* * * * ? *)'
# Create DLQ
aws --profile localstack --endpoint-url http://localhost:4566 sqs create-queue \
--queue-name test-dead-letter-queue \
--attributes '{}'
# Create lambda with DLQ
aws --profile localstack --endpoint-url http://localhost:4566 lambda create-function \
--function-name test-lambda \
--handler main \
--runtime go1.x \
--role test-role \
--dead-letter-config '{"TargetArn":"arn:aws:sqs:eu-west-1:000000000000:test-dead-letter-queue"}' \
--zip-file fileb://main.zip
# Create lambda rule (purposely causes lambda error!)
aws --profile localstack --endpoint-url http://localhost:4566 events put-targets \
--rule test-rule \
--targets '[{"Id":"1","Arn":"arn:aws:lambda:eu-west-1:000000000000:function:test-lambda","Input":"{\"ok\":\"no\"}"}]'
I am not seeing what the AWS docs say should happen.
Error handling for a given event source depends on how Lambda is invoked. Amazon CloudWatch Events is configured to invoke a Lambda function asynchronously.
Asynchronous invocation – Asynchronous events are queued before being used to invoke the Lambda function. If AWS Lambda is unable to fully process the event, it will automatically retry the invocation twice, with delays between retries.
You have an asynchronous schedule rule -> lambda setup. Note that async retries are made with a delay (see below). Perhaps you are not noticing the retries because the backoff time is >= your per-minute schedule?
Lambda error handling and automatic retry logic are different for async and sync invocation patterns:
Asynchronous (e.g. event-triggered) Lambda Invocations
Two async lambda retries are made with a 1 minute and 2 minute backoff, respectively. Async lambdas will be retried in the case of both invocation and function errors. After the retries are exhausted, the event will be sent to the lambda's DLQ or the lambda's failure destination*, if configured.
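If you want to confirm or tune that behaviour, the async retry count (0-2) and maximum event age can be set explicitly on the function. A minimal sketch using the AWS SDK for Go v1; the function name and LocalStack endpoint/region come from the question, the SDK choice is an assumption (the same thing is available via aws lambda put-function-event-invoke-config):
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lambda"
)

func main() {
	// Assumed LocalStack endpoint/region, matching the CLI commands in the question.
	sess := session.Must(session.NewSession(&aws.Config{
		Region:   aws.String("eu-west-1"),
		Endpoint: aws.String("http://localhost:4566"),
	}))
	client := lambda.New(sess)

	// 2 retries (the async default and also the maximum) = 3 attempts total
	// before the event is handed to the DLQ / failure destination.
	out, err := client.PutFunctionEventInvokeConfig(&lambda.PutFunctionEventInvokeConfigInput{
		FunctionName:         aws.String("test-lambda"),
		MaximumRetryAttempts: aws.Int64(2),
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}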
Synchronous (e.g. SQS-triggered) Lambda Invocations
An alternative is to use SQS as an event source (schedule rule -> Queue -> Lambda). In this synchronous scenario, retry logic and the DLQ are configured on the Queue itself, not on the Lambda (a sketch of this setup follows the footnote below).
* Destinations are a newer alternative to async lambda DLQs: "Destinations and DLQs can be used together and at the same time although Destinations should be considered a more preferred solution."
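A rough sketch of that SQS-as-event-source alternative with the AWS SDK for Go v1. The DLQ ARN, region, and LocalStack endpoint mirror the question; the source queue name test-queue and the batch size are assumptions. maxReceiveCount 3 gives "three failed attempts, then DLQ":
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lambda"
	"github.com/aws/aws-sdk-go/service/sqs"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{
		Region:   aws.String("eu-west-1"),
		Endpoint: aws.String("http://localhost:4566"),
	}))

	// Source queue: after 3 failed receives a message is moved to the existing DLQ.
	sqsClient := sqs.New(sess)
	_, err := sqsClient.CreateQueue(&sqs.CreateQueueInput{
		QueueName: aws.String("test-queue"),
		Attributes: map[string]*string{
			"RedrivePolicy": aws.String(`{"deadLetterTargetArn":"arn:aws:sqs:eu-west-1:000000000000:test-dead-letter-queue","maxReceiveCount":"3"}`),
		},
	})
	if err != nil {
		panic(err)
	}

	// Map the source queue to the Lambda; SQS then invokes it synchronously.
	lambdaClient := lambda.New(sess)
	_, err = lambdaClient.CreateEventSourceMapping(&lambda.CreateEventSourceMappingInput{
		EventSourceArn: aws.String("arn:aws:sqs:eu-west-1:000000000000:test-queue"),
		FunctionName:   aws.String("test-lambda"),
		BatchSize:      aws.Int64(1),
	})
	if err != nil {
		panic(err)
	}
}
The schedule rule's target would then be the queue rather than the Lambda, and the handler would receive an events.SQSEvent instead of a CloudWatchEvent.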
Related
I have the following set up (diagram omitted): SQS#1 triggers a Lambda whose destination is SQS#2, which in turn triggers another Lambda.
I have a simple return in my Lambda function right now:
async function handler(event) {
return 'success';
}
If I invoke the lambda from the aws-cli using:
aws lambda invoke --function-name tt-yt-automation-production-1b2b-tiktok-download --invocation-type Event --cli-binary-format raw-in-base64-out /dev/null
It will return the message to the destination SQS (SQS#2), which will trigger another Lambda.
However, if SQS#1 has a message and invokes the Lambda, it does not send a message to SQS#2 or invoke the next Lambda in line.
My suspicion is that an SQS invocation doesn't count as an asynchronous invocation and won't send a message to SQS#2.
Is there a configuration I'm missing?
My suspicion is that an SQS invocation doesn't count as an asynchronous invocation and won't send a message to SQS#2.
You are correct. SQS invocation is synchronous, so the Lambda's async destination is never used. You have to "manually" (using the AWS SDK) submit messages from your Lambda to SQS#2.
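A minimal sketch of that, assuming a Go handler (the question's handler is Node.js, but the SDK call is the same idea) and a placeholder queue URL for SQS#2:
package main

import (
	"context"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sqs"
)

// Placeholder URL for SQS#2; replace with the real queue URL.
const nextQueueURL = "https://sqs.us-east-1.amazonaws.com/123456789012/sqs-2"

var sqsClient = sqs.New(session.Must(session.NewSession()))

// handler is invoked synchronously by SQS#1 and forwards each message to SQS#2 itself,
// since Destinations are not evaluated for synchronous invocations.
func handler(ctx context.Context, event events.SQSEvent) error {
	for _, record := range event.Records {
		_, err := sqsClient.SendMessageWithContext(ctx, &sqs.SendMessageInput{
			QueueUrl:    aws.String(nextQueueURL),
			MessageBody: aws.String(record.Body),
		})
		if err != nil {
			return err // returning an error leaves the message on SQS#1 for retry
		}
	}
	return nil
}

func main() {
	lambda.Start(handler)
}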
I have created an asynchronous Lambda function that runs fine when I test it in the AWS console. It takes 6-7 minutes to complete. But when I call the same function from my local AWS CLI, it shows the output below.
Read timeout on endpoint URL: "https://lambda.us-east-1.amazonaws.com/2015-03-31/functions/mandrill/invocations"
Any idea what is going wrong and how I can resolve it? The command I am using to invoke this function from the CLI is below:
aws lambda invoke --invocation-type RequestResponse --function-name mandrill --region us-east-1 --payload "{ \"domain\": \"faisal999.wombang.com\" }" --cli-binary-format raw-in-base64-out response.json
To invoke a function asynchronously, set InvocationType to Event.
aws lambda invoke --invocation-type Event --function-name mandrill --region us-east-1 --payload "{ \"domain\": \"faisal999.wombang.com\" }" --cli-binary-format raw-in-base64-out response.json
Additionally, consider the following:
For asynchronous invocation, Lambda adds events to a queue before sending them to your function. If your function does not have enough capacity to keep up with the queue, events may be lost. Occasionally, your function may receive the same event multiple times, even if no error occurs. To retain events that were not processed, configure your function with a dead-letter queue.
In your command you have --invocation-type RequestResponse
From the AWS docs:
RequestResponse (default) - Invoke the function synchronously. Keep the connection open until the function returns a response or times out. The API response includes the function response and additional data.
You may want to try it with --invocation-type Event.
Event - Invoke the function asynchronously. Send events that fail multiple times to the function's dead-letter queue (if it's configured). The API response only includes a status code.
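If you are invoking from code rather than the CLI, the equivalent is to pass InvocationType "Event" to Invoke. A minimal sketch with the AWS SDK for Go v1, reusing the function name and payload from the question; the region and credentials setup are assumptions:
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lambda"
)

func main() {
	client := lambda.New(session.Must(session.NewSession(&aws.Config{
		Region: aws.String("us-east-1"),
	})))

	// InvocationType "Event" queues the invocation and returns immediately
	// with a 202, instead of holding the connection open like RequestResponse.
	out, err := client.Invoke(&lambda.InvokeInput{
		FunctionName:   aws.String("mandrill"),
		InvocationType: aws.String("Event"),
		Payload:        []byte(`{"domain": "faisal999.wombang.com"}`),
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(*out.StatusCode) // expect 202 for async invocations
}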
I have a hello-world test Lambda configured with:
trigger: API Gateway
destination: Amazon SQS, with one queue for success and another for failure.
import json

def lambda_handler(event, context):
    print("Received event: " + json.dumps(event))
    return {
        "statusCode": 200,
        "body": 'success'
    }
When I invoke the Lambda via the CLI, the message gets enqueued to the success queue as expected:
aws lambda invoke --function-name event-destinations --invocation-type Event --payload '{}' response.json
However, when I invoke the Lambda via the API Gateway, no messages are enqueued to either destination queue. I have Lambda Proxy Integration enabled. CloudWatch metrics confirm that the invocation is successful (the Invocations count goes up, the Errors count does not). The following returns a 200 and the expected response body from my Lambda code:
curl 'https://REDACTED.execute-api.us-east-1.amazonaws.com/api_trigger' \
--header 'Content-Type: application/json' \
--data-raw '{}'
Similarly, no messages are enqueued to either destination queue when I use the Test button in the Lambda console. edit: this is expected behavior per https://www.trek10.com/blog/lambda-destinations-what-we-learned-the-hard-way
Why would the destination behavior differ between these 3 invocations? I have set retry attempts to 0 for this test.
It seems there is a set of valid {trigger, destination} pairs, and {API Gateway, SQS} is not one of them. Being able to invoke the lambda from a given trigger is not sufficient to get the event passed along to the destination. The AWS console doesn't enforce these pairings or raise warnings.
I referenced the chart from: https://www.trek10.com/blog/lambda-destinations-what-we-learned-the-hard-way/
I added an S3 trigger to my lambda, and the S3 events are published to the destination SQS queues without issue.
Lambda Destinations are only triggered for asynchronous invocations. In Lambda non-proxy (custom) integration, the backend Lambda function is invoked synchronously by default.
You can configure the Lambda function for a Lambda non-proxy integration to be invoked asynchronously by specifying 'Event' as the Lambda invocation type. This is done as follows:
In Integration Request, add an X-Amz-Invocation-Type header with a static value of 'Event'.
Quoting from here.
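For reference, the same header mapping can be applied programmatically. A rough sketch with the AWS SDK for Go v1; the REST API ID, resource ID, account ID, and function ARN are placeholders, and it assumes a non-proxy (type AWS) integration as described above:
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/apigateway"
)

func main() {
	client := apigateway.New(session.Must(session.NewSession(&aws.Config{
		Region: aws.String("us-east-1"),
	})))

	// Placeholder IDs/ARN; the static X-Amz-Invocation-Type header makes
	// API Gateway invoke the backend Lambda asynchronously.
	_, err := client.PutIntegration(&apigateway.PutIntegrationInput{
		RestApiId:             aws.String("abc123"),
		ResourceId:            aws.String("def456"),
		HttpMethod:            aws.String("POST"),
		Type:                  aws.String("AWS"),
		IntegrationHttpMethod: aws.String("POST"),
		Uri: aws.String("arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/" +
			"arn:aws:lambda:us-east-1:123456789012:function:event-destinations/invocations"),
		RequestParameters: map[string]*string{
			// Static values are wrapped in single quotes in integration request mappings.
			"integration.request.header.X-Amz-Invocation-Type": aws.String("'Event'"),
		},
	})
	if err != nil {
		panic(err)
	}
}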
I have a DLQ configured to store messages when the Lambda function fails.
(Screenshots omitted: Lambda console snippet, configuration in Lambda, DLQ configuration, and code snippet.)
But the message count in the DLQ is always 0; it never increases.
Where am I going wrong?
Lambda failure messages are only put in the DLQ if the Lambda was invoked asynchronously.
You can invoke your Lambda asynchronously by specifying --invocation-type Event with the AWS CLI, e.g.:
$ aws lambda invoke --function-name my-function --invocation-type Event --payload '{ "key": "value" }' response.json
{
"StatusCode": 202
}
For more information you can read the documentation here
I have created a Lambda function that is triggered via a subscription to a CloudWatch Logs pattern, and the function in turn passes the logs to a webhook (see https://gist.github.com/tomfa/f4e090cbaff0189eba17c0fc301c63db).
Now, I need this Lambda function to EXECUTE only if the function is called "x" times in "y" minutes.
Is it possible to disable/enable a Lambda through SNS? Another idea is to:
1. Create a CloudWatch Events rule on state change
2. Subscribe this to an SNS topic which
enables the Lambda if the state goes from OK to ALARM
disables the Lambda if the state goes back to OK
You can use CloudWatch Events to send a message to an Amazon SNS topic on a schedule. Make sure you are in the correct region, as CloudWatch Events is not available in every region.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html
How to configure CloudWatch:
AWS Lambda Scheduled Tasks
run scheduled task in AWS without cron
Use CloudWatch to get metrics about Lambda invocations and errors; from these you can determine successful calls, errors, and threshold counts. Then you can use the AWS SDK:
https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/get-metric-data.html
exports.handler = function(event, context, callback) {
  apiCall().then(resp => callback(null, resp)).catch(err => callback(err));
};
You could create a custom CloudWatch metric based on a search filter over your CloudWatch Logs.
Examples of this can be found in the Amazon CloudWatch Logs User Guide
Count Log Events
aws logs put-metric-filter \
--log-group-name MyApp/access.log \
--filter-name EventCount \
--filter-pattern "" \
--metric-transformations \
metricName=MyAppEventCount,metricNamespace=MyNamespace,metricValue=1,defaultValue=0
Count Occurrences
aws logs put-metric-filter \
--log-group-name MyApp/message.log \
--filter-name MyAppErrorCount \
--filter-pattern 'Error' \
--metric-transformations \
metricName=ErrorCount,metricNamespace=MyNamespace,metricValue=1,defaultValue=0
Then you can create a CloudWatch alarm that fires when x of these events are logged within a y-minute window. The CloudWatch alarm can send a message to an SNS topic that triggers your Lambda function.
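A rough sketch of such an alarm with the AWS SDK for Go v1, using the MyAppEventCount metric from the filter above; the threshold (x = 10 events per 5-minute period) and the SNS topic ARN are assumptions:
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatch"
)

func main() {
	client := cloudwatch.New(session.Must(session.NewSession(&aws.Config{
		Region: aws.String("us-east-1"),
	})))

	// Fire when the metric filter counts >= 10 events within a single 5-minute period;
	// the alarm action publishes to a (placeholder) SNS topic that triggers the Lambda.
	_, err := client.PutMetricAlarm(&cloudwatch.PutMetricAlarmInput{
		AlarmName:          aws.String("MyAppEventCountAlarm"),
		Namespace:          aws.String("MyNamespace"),
		MetricName:         aws.String("MyAppEventCount"),
		Statistic:          aws.String("Sum"),
		Period:             aws.Int64(300),
		EvaluationPeriods:  aws.Int64(1),
		Threshold:          aws.Float64(10),
		ComparisonOperator: aws.String("GreaterThanOrEqualToThreshold"),
		AlarmActions:       []*string{aws.String("arn:aws:sns:us-east-1:123456789012:my-topic")},
	})
	if err != nil {
		panic(err)
	}
}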