I'm using AWS DynamoDB Streams to trigger an AWS Lambda function. When the parent Lambda function succeeds, I want a child Lambda function to be invoked via the async invocation destination feature provided by Lambda.
Even though I've configured the async invocation destination with the target child Lambda function, the child function is not triggered when the parent Lambda function succeeds; the child Lambda function's associated CloudWatch log group is empty.
My parent Lambda has the following policies attached: AWSLambdaInvocation-DynamoDB (provides read access to DynamoDB Streams), AWSLambdaFullAccess, and AWSLambdaBasicExecutionRole.
Question:
Why don't onSuccess and onFailure destinations work when the parent Lambda is invoked via DynamoDB Streams?
AWS support helped me resolve this issue.
If you configure an async destination for a Lambda function, it only takes effect when the function is invoked asynchronously. In this specific case, with a DynamoDB event source, DynamoDB does not invoke the function asynchronously; rather, the Lambda service reads from the stream. There is a separate option to configure a destination for stream event source mappings, but that supports only an on-failure destination.
Async destinations only work with asynchronous event sources such as SNS or S3.
The page that shows how different event sources work with Lambda:
So, if you look under the heading "Services that invoke Lambda functions asynchronously" - those are the service integrations that invoke Lambda asynchronously and would work with async destinations. Either those services, or invoking your function via the CLI asynchronously.
For example, if you invoke your function from the CLI and pass the flag --invocation-type Event, that invokes your function asynchronously.
If you pass --invocation-type RequestResponse, the invocation is synchronous.
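The same flag maps to the InvocationType parameter of the SDK's invoke call. A minimal Python sketch of building those arguments (the function name and payload are illustrative, not from the question):

```python
import json

def invoke_kwargs(function_name, payload, asynchronous=True):
    """Build the keyword arguments for lambda_client.invoke().

    InvocationType="Event" requests an asynchronous invocation
    (fire-and-forget); "RequestResponse" waits for the return value.
    """
    return {
        "FunctionName": function_name,
        "InvocationType": "Event" if asynchronous else "RequestResponse",
        "Payload": json.dumps(payload),
    }

# With boto3 (not imported here), usage would look like:
#   client = boto3.client("lambda")
#   client.invoke(**invoke_kwargs("my-function", {"key": "value"}))
```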
Related
Background:
On object creation in an S3 bucket, an event notification is sent to asynchronously invoke a Lambda function.
A DLQ has been added to the Lambda function to explore error-handling and retry mechanisms for errors during the Lambda execution.
Breaking this down in simpler terms (abstracting away the complexity):
Creation: S3 object created
Notification: S3 event notification sent to Lambda
Invocation: an asynchronous invocation of the Lambda function is attempted
Execution: Lambda is invoked successfully and is currently being executed
Question:
Does the DLQ added to the Lambda account for errors occurring during the invocation, i.e., pre-execution of the Lambda? I know the DLQ accounts for errors occurring during the execution of the Lambda.
Is it possible to trigger a specific lambda function from the AWS Kinesis stream?
I have multiple Lambda functions (CREATE/UPDATE/DELETE) subscribed to the Kinesis stream, and I want to trigger only a specific Lambda function based on the data/event type.
Is it possible? If not what is the better architecture/way to handle this problem?
Sadly, you can't do this. There are generally two ways to overcome this:
Connect your stream to one lambda function. The function will receive all records from the stream and dispatch them to other functions. The dispatch can be direct, or through dedicated SQS queues for example.
                     /-----> SQS 1 ---> CREATE lambda
Kinesis ---> Lambda ------> SQS 2 ---> UPDATE lambda
                     \-----> SQS 3 ---> DELETE lambda
Alternatively, use one function, but this time the UPDATE, CREATE, and DELETE code all lives in that single function. In the Lambda handler, you would use basic if-then-else conditions to invoke different code paths for the UPDATE, CREATE, and DELETE functionality.
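A sketch of the second option, assuming each Kinesis record carries a JSON body with an eventType field (the field name and handler bodies are illustrative):

```python
import base64
import json

def handle_create(body):
    return "created " + body["id"]

def handle_update(body):
    return "updated " + body["id"]

def handle_delete(body):
    return "deleted " + body["id"]

# Map the event type found in the record body to the code path to run
DISPATCH = {
    "CREATE": handle_create,
    "UPDATE": handle_update,
    "DELETE": handle_delete,
}

def handler(event, context):
    """Single Lambda handler that routes each Kinesis record by event type."""
    results = []
    for record in event["Records"]:
        # Kinesis delivers record data base64-encoded
        body = json.loads(base64.b64decode(record["kinesis"]["data"]))
        results.append(DISPATCH[body["eventType"]](body))
    return results
```

The same dispatch table could instead send each record to a per-type SQS queue, which is the first option in the answer.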
I need to asynchronously invoke Lambda functions from my EC2 instance. At a high level, several services come to mind (most likely all of them support the desired functionality): AWS Step Functions (state machines), Amazon MQ (ActiveMQ), SQS, and SNS. I'm aware of the pros and cons of each at a high level, but I'm not sure which one to go for. Please let me know your feedback.
PS: We expect invocations in the thousands per second at peak for very short periods. Concurrency for Lambda functions is not an issue, as we can ask Amazon to increase the limit along with the burst.
If you want to invoke asynchronously, then you cannot use SQS: with an event source mapping, SQS invokes the Lambda function synchronously.
Out of the options you listed above, you can use SNS to invoke a Lambda function asynchronously.
A better option would be to write a small piece of code using whichever AWS SDK you are comfortable with, and invoke the Lambda function asynchronously from that code.
Example in Python using boto3, invoking asynchronously.
Pass Event as the InvocationType to invoke the Lambda function asynchronously, and RequestResponse to invoke it synchronously:
import json

import boto3

# payload3 must be a JSON-serialized string (contents are illustrative)
payload3 = json.dumps({"key": "value"})

client = boto3.client('lambda')
response = client.invoke(
    FunctionName="loadSpotsAroundPoint",
    InvocationType='Event',  # 'RequestResponse' for synchronous
    Payload=payload3,
)
Background:
I'm developing a custom AWS github-webhook via Terraform. I'm using AWS API Gateway to trigger an AWS Lambda function that validates the GitHub webhook's sha256 signature from the request header. If the lambda function successfully validates the request, I want a child lambda function to be invoked via the async invocation destination feature provided by Lambda.
Problem:
Even though I've configured the async invocation with the target child Lambda function, the child function is not triggered when the parent Lambda function is successful. This is reflected in the fact that the child Lambda function's associated CloudWatch log group is empty.
Relevant Code:
Here's the Terraform configuration for the Lambda function destination:
resource "aws_lambda_function_event_invoke_config" "lambda" {
  function_name = module.github_webhook.function_name

  destination_config {
    on_success {
      destination = module.lambda.function_arn
    }
  }
}
If more code from the module is needed, feel free to ask in the comments. The entire source code for this module is here: https://github.com/marshall7m/terraform-aws-codebuild/tree/master/modules/dynamic-github-source
Attempts:
Made sure both parent/child Lambda functions have permission to create logs within their respective Cloudwatch log group (attached arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole policy to both)
Made sure the parent Lambda function has the correct permission to invoke the child function: "lambda:InvokeFunction", "lambda:InvokeAsync"
Set up the async invocation destination for both success and failure of the parent Lambda runs (child function still not triggered)
Added the API integration request parameter `{'X-Amz-Invocation-Type': 'Event'}` as mentioned in: https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-integration-async.html
For every attempt to fix this, I made sure to redeliver the request from the source (github webhook page) and not via the AWS Lambda console.
From your description, it seems to me that you are invoking the parent function synchronously. Lambda destinations are only for asynchronous invocations:
You can also configure Lambda to send an invocation record to another service. Lambda supports the following destinations for asynchronous invocation
So you have to execute your parent function asynchronously for your child function to be invoked.
Adding the API integration request parameter `{'X-Amz-Invocation-Type': 'Event'}` as mentioned in https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-integration-async.html did the trick. I initially concluded that this solution didn't work because a new CloudWatch log stream wasn't created when I redelivered the GitHub payload. As it turns out, when I took a closer look at the previous CloudWatch log stream, I found that CloudWatch appends logs for retriggered invocations of the Lambda function to the previously associated log stream.
I have created an SQS queue and an event source mapping that triggers a Lambda function on receiving a message. All of this works fine, and I am doing it through the AWS Java SDK.
Now I want to return a value from the Lambda function. How will I be able to access it, given that I am calling the Lambda function only through SQS? Any help is appreciated.
Below is my handler method structure:
public String handleRequest(SQSEvent event, Context context) {
    .....
    ....
    return "something";
}
This is not possible, because the sender that puts a message onto Amazon SQS completes once the message is sent. This allows the queue to be used to decouple services. In fact, a message could sit in an SQS queue for up to 14 days.
While the SQS queue will trigger the AWS Lambda function very quickly, there is still no concept of a "return" value from the Lambda function if it is triggered from an SQS message.
If you wish to trigger Lambda and wait for a response, you can directly invoke the Lambda function and await a response. This would not involve the use of SQS.