How to invoke asynchronous Lambda function using aws cli - amazon-web-services

I have created an asynchronous Lambda function that runs fine when I test it in the AWS console. It takes 6-7 minutes to complete. But when I invoke the same function from my local AWS CLI, I get the output below.
Read timeout on endpoint URL: "https://lambda.us-east-1.amazonaws.com/2015-03-31/functions/mandrill/invocations"
Any idea what's going wrong and how I can resolve it? The command I am using to invoke the function from the CLI is below:
aws lambda invoke --invocation-type RequestResponse --function-name mandrill --region us-east-1 --payload "{ \"domain\": \"faisal999.wombang.com\" }" --cli-binary-format raw-in-base64-out response.json

To invoke a function asynchronously, set InvocationType to Event.
aws lambda invoke --invocation-type Event --function-name mandrill --region us-east-1 --payload "{ \"domain\": \"faisal999.wombang.com\" }" --cli-binary-format raw-in-base64-out response.json
Additionally, consider the following:
For asynchronous invocation, Lambda adds events to a queue before sending them to your function. If your function does not have enough capacity to keep up with the queue, events may be lost. Occasionally, your function may receive the same event multiple times, even if no error occurs. To retain events that were not processed, configure your function with a dead-letter queue.
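Because duplicate deliveries are possible, an asynchronously invoked handler should be idempotent. Here is a minimal, illustrative sketch (the in-memory `processed` set and the event's `id` field are assumptions for the example; a real function would persist seen IDs in something like DynamoDB, since each Lambda container has its own memory):

```python
# Illustrative only: dedupe async deliveries by a unique event id.
processed = set()

def handle_event(event):
    event_id = event["id"]  # assumes the event carries a unique id
    if event_id in processed:
        return "skipped duplicate"
    processed.add(event_id)
    return "processed"
```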

In your command you have --invocation-type RequestResponse
From the AWS docs:
RequestResponse (default) - Invoke the function synchronously. Keep the connection open until the function returns a response or times out. The API response includes the function response and additional data.
You may want to try it with --invocation-type Event.
Event - Invoke the function asynchronously. Send events that fail multiple times to the function's dead-letter queue (if it's configured). The API response only includes a status code.

Related

Lambda does not send message to SQS destination if invoked by an SQS source

I have the following set up:
I have a simple return in my Lambda function right now:
async function handler(event) {
  return 'success';
}
If I invoke the lambda from the aws-cli using:
aws lambda invoke --function-name tt-yt-automation-production-1b2b-tiktok-download --invocation-type Event --cli-binary-format raw-in-base64-out /dev/null
It will return the message to the destination SQS (SQS#2), which will trigger another Lambda.
However, if SQS#1 has a message and invokes the Lambda, it does not send a message to SQS#2 or invoke the next Lambda in line.
My suspicion is that an SQS invocation doesn't count as an asynchronous invocation and won't send a message to SQS#2.
Is there a configuration I'm missing?
My suspicion is that an SQS invocation doesn't count as an asynchronous invocation and won't send a message to SQS#2.
You are correct. SQS invocation is synchronous. You have to submit messages from your Lambda to SQS#2 "manually", using the AWS SDK.
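A sketch of that manual forwarding (the helper names are ours, and `sqs_client` would be a boto3 SQS client created with `boto3.client("sqs")` inside the handler):

```python
import json

def build_message(payload):
    # SQS message bodies are strings, so JSON-encode the payload.
    return json.dumps(payload)

def forward_to_queue(sqs_client, queue_url, payload):
    # Explicitly enqueue the result to SQS#2 from inside the handler,
    # since a synchronous (SQS-triggered) invocation never fires the
    # async destination.
    return sqs_client.send_message(
        QueueUrl=queue_url,
        MessageBody=build_message(payload),
    )
```

Inside the handler you would call something like `forward_to_queue(boto3.client("sqs"), queue_url, result)`, where `queue_url` is the URL of SQS#2.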

Why does synchronous lambda invoke multiple times?

I have a lambda function my-func which should only be run once at a time. Because of that, I set the Reserved Concurrency to 1. I am trying to invoke it with the command:
aws lambda invoke --function-name my-func --invocation-type RequestResponse --cli-binary-format raw-in-base64-out --payload '{\"recreate\":true}' response.json
However, it results in this error after a few seconds:
An error occurred (TooManyRequestsException) when calling the Invoke operation (reached max retries: 2): Rate Exceeded.
It appears that it tries to invoke the function multiple times even though the original invocation never ran into an error. If I increase the Reserved Concurrency to a value like 5, then the single lambda invoke command results in multiple invocations even though the first invocation continues to execute without any problem.
Another thing that is throwing me off is that it works correctly from the AWS console GUI. I created a test event in the AWS lambda function on the console interface. It invokes my-func with the same payload I am using in the aws-cli command:
{
  "recreate": true
}
Invoking the function using this test event works flawlessly. It seems to just run the function once and doesn't cause a TooManyRequestsException. Does this mean something is wrong with my aws-cli command?
I've figured out the problem. Even though my function's timeout was set to 600 seconds (10 minutes), the AWS CLI has its own socket timeout, which defaults to 60 seconds. Each time it reached this timeout, it must have triggered a retry. I fixed it by adding --cli-read-timeout 600 to my command like so:
aws --cli-read-timeout 600 lambda invoke --function-name my-func --invocation-type RequestResponse --cli-binary-format raw-in-base64-out --payload '{\"recreate\":true}' response.json
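The arithmetic behind the error can be sketched as follows (this is an illustration of the retry behavior described above, not the CLI's actual implementation):

```python
def cli_invoke_attempts(function_secs, read_timeout_secs, max_retries=2):
    # If the function finishes before the socket read timeout, the CLI
    # makes a single call; otherwise each attempt times out and the CLI
    # retries, for 1 + max_retries attempts in total.
    if function_secs <= read_timeout_secs:
        return 1
    return 1 + max_retries

# A ~6 minute function against the default 60 s read timeout triggers
# all three attempts, which collide with a reserved concurrency of 1.
```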

AWS Lambda retry using SQS DLQ

What the example below does is: a cron job runs every minute to trigger a Lambda function written in Go. If the Lambda returns an error, a message is put into a DLQ straight away. However, what I am struggling to work out is the retry logic. A message should go to the DLQ only after the third failed Lambda attempt, which is what I am trying to accomplish. If you see that I am missing something in the AWS commands below, please let me know.
What I have tried so far: I created an additional normal queue on top of the DLQ and linked that queue to the Lambda instead with --dead-letter-config. Then I linked the DLQ to the target with DeadLetterConfig and a RetryPolicy. I am not sure if this is how the whole thing is designed to work, but I think there may be more components required. Not even sure if this is correct either!
Lambda (main.go)
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

func main() {
	lambda.Start(handle)
}

func handle(_ context.Context, event events.CloudWatchEvent) error {
	detail, err := event.Detail.MarshalJSON()
	if err != nil {
		return err
	}
	if string(detail) == `{"ok": "yes"}` {
		return nil
	}
	return fmt.Errorf("not ok")
}
AWS steps
GOOS=linux CGO_ENABLED=0 go build -ldflags "-s -w" -o main main.go
zip main.zip main
# Create rule
aws --profile localstack --endpoint-url http://localhost:4566 events put-rule \
--name test-rule \
--schedule-expression 'cron(* * * * *)'
# Create DLQ
aws --profile localstack --endpoint-url http://localhost:4566 sqs create-queue \
--queue-name test-dead-letter-queue \
--attributes '{}'
# Create lambda with DLQ
aws --profile localstack --endpoint-url http://localhost:4566 lambda create-function \
--function-name test-lambda \
--handler main \
--runtime go1.x \
--role test-role \
--dead-letter-config '{"TargetArn":"arn:aws:sqs:eu-west-1:000000000000:test-dead-letter-queue"}' \
--zip-file fileb://main.zip
# Create lambda rule (purposely causes lambda error!)
aws --profile localstack --endpoint-url http://localhost:4566 events put-targets \
--rule test-rule \
--targets '[{"Id":"1","Arn":"arn:aws:lambda:eu-west-1:000000000000:function:test-lambda","Input":"{\"ok\":\"no\"}"}]'
I am not seeing what the AWS docs describe actually happening.
Error handling for a given event source depends on how Lambda is invoked. Amazon CloudWatch Events is configured to invoke a Lambda function asynchronously.
Asynchronous invocation – Asynchronous events are queued before being used to invoke the Lambda function. If AWS Lambda is unable to fully process the event, it will automatically retry the invocation twice, with delays between retries.
You have an asynchronous schedule rule -> lambda setup. Note that async retries are made with a delay (see below). Perhaps you are not noticing the retries because the backoff time is greater than or equal to your per-minute schedule?
Lambda error handling and automatic retry logic is different for async and sync invocation patterns:
Asynchronous (e.g. event-triggered) Lambda Invocations
Two async lambda retries are made with a 1 minute and 2 minute backoff, respectively. Async lambdas will be retried in the case of both invocation and function errors. After the retries are exhausted, the event will be sent to the lambda's DLQ or the lambda's failure destination*, if configured.
Synchronous (e.g. SQS-triggered) Lambda Invocations
An alternative is to use SQS as an event source (schedule rule -> Queue -> Lambda). In this synchronous scenario, retry logic and the DLQ is configured on the Queue itself, not on the Lambda.
* Destinations are a newer alternative to async lambda DLQs: "Destinations and DLQs can be used together and at the same time although Destinations should be considered a more preferred solution."
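To see why a per-minute schedule can mask the retries, here is a rough sketch of the async attempt timeline (the exact delays are approximate and may include jitter; the helper is illustrative):

```python
def async_attempt_times(backoffs_min=(0, 1, 2)):
    # Minutes (from the first delivery) at which each attempt lands:
    # the initial attempt, then retries after ~1 min and ~2 min more.
    times, t = [], 0
    for delay in backoffs_min:
        t += delay
        times.append(t)
    return times

# Attempts land at minutes 0, 1 and 3 -- interleaved with a cron rule
# that fires every minute, retries are easy to mistake for new events.
```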

Destination only works when Lambda is invoked through AWS CLI

I have a hello-world test Lambda configured with:
trigger: API Gateway
destination: Amazon SQS. one queue for success, and another for failure.
import json

def lambda_handler(event, context):
    print("Received event: " + json.dumps(event))
    return {
        "statusCode": 200,
        "body": 'success'
    }
When I invoke the Lambda via the CLI, the message gets enqueued to the success queue as expected:
aws lambda invoke --function-name event-destinations --invocation-type Event --payload '{}' response.json
However, when I invoke the Lambda via API Gateway, no messages are enqueued to either destination queue. I have Lambda Proxy Integration enabled. CloudWatch metrics confirm that the invocation is successful (the Invocations count goes up, the Errors count does not). The following returns a 200 and the expected response body from my Lambda code:
curl 'https://REDACTED.execute-api.us-east-1.amazonaws.com/api_trigger' \
--header 'Content-Type: application/json' \
--data-raw '{}'
Similarly, no messages are enqueued to either destination queue when I use the Test button in the Lambda console. edit: this is expected behavior per https://www.trek10.com/blog/lambda-destinations-what-we-learned-the-hard-way
Why would the destination behavior differ between these 3 invocations? I have set retry attempts to 0 for this test.
It seems there is a set of valid {trigger, destination} pairs, and {API Gateway, SQS} is not one of them. Being able to invoke the Lambda from a given trigger is not sufficient to get the event passed along to the destination. The AWS console doesn't enforce these pairings or raise warnings.
I referenced the chart from: https://www.trek10.com/blog/lambda-destinations-what-we-learned-the-hard-way/
I added an S3 trigger to my lambda, and the S3 events are published to the destination SQS queues without issue.
Lambda Destinations are only triggered for asynchronous invocations. In Lambda non-proxy (custom) integration, the backend Lambda function is invoked synchronously by default.
You can configure the Lambda function for a Lambda non-proxy integration to be invoked asynchronously by specifying 'Event' as the Lambda invocation type. This is done as follows:
In Integration Request, add an X-Amz-Invocation-Type header with a static value of 'Event'.
Quoting from here.
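If you script that integration change rather than doing it in the console, the request parameter mapping looks roughly like this (the helper name is ours; note that static header values must be wrapped in single quotes in API Gateway):

```python
def invocation_type_header_params():
    # Integration request mapping that makes API Gateway add the
    # X-Amz-Invocation-Type header with the static value 'Event',
    # switching the backend Lambda call to an async invocation.
    return {
        "integration.request.header.X-Amz-Invocation-Type": "'Event'",
    }
```

This dict could be passed as `requestParameters` to `boto3.client("apigateway").put_integration(...)`; the surrounding REST API, resource, and method identifiers are omitted here.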

Messages are not getting stored in DLQ when Lambda function gets failed

I have a DLQ configured to store messages when the Lambda function fails.
[Screenshots of the Lambda console, its configuration, the DLQ configuration, and the code snippet were attached here.]
But the message count in the DLQ is always 0; it never increases. Where am I going wrong?
Lambda failure messages are only put in the DLQ if the Lambda was invoked asynchronously.
You can invoke your Lambda asynchronously by specifying --invocation-type Event with the AWS CLI, i.e.
$ aws lambda invoke --function-name my-function --invocation-type Event --payload '{ "key": "value" }' response.json
{
"StatusCode": 202
}
For more information you can read the documentation here
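The `StatusCode` in that output distinguishes the invocation types. As a quick reference (the codes come from the Lambda Invoke API; the helper itself is just for illustration):

```python
def invoke_status_meaning(status_code):
    # 200: synchronous (RequestResponse) call completed
    # 202: asynchronous (Event) invocation was accepted for processing
    # 204: DryRun validation succeeded
    return {
        200: "RequestResponse completed",
        202: "Event accepted",
        204: "DryRun succeeded",
    }.get(status_code, "unexpected status")
```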