I have a use case where a client sends a request with the following payload:
payload = {
    identifier: arn:aws:lambda:{region}:{account}:function:{function-name}:{version},
    data: ""
}
I want to invoke Lambdas based on the identifier. There are some considerations:
The data in the payload can vary from a few KB to a few MB.
The number of Lambdas that need to be invoked can grow over time.
I was thinking of having an API Gateway that receives the request from the client and is connected to a parent Lambda A, which then invokes child Lambdas B and C based on the identifier provided by the client. However, I am not sure whether the child Lambdas can handle a few MB of data. Also, is chaining Lambdas this way a good approach?
I was also looking into how to invoke Lambdas asynchronously, but I couldn't find anything about invoking them by version. Any suggestions in this regard would be helpful, thanks.
You can invoke one Lambda from another Lambda based on its version.
However, the problem with asynchronous invocation is that the payload size limit is less than what you need. According to the documentation, the size limit of invocation payload is 256 KB for asynchronous calls and 6 MB for synchronous calls.
I don’t know which language you prefer, but invoking a Lambda from another Lambda based on version with Python’s boto3 can look like this:
import json
from boto3 import client as boto3_client

client = boto3_client('lambda')

def lambda_handler(event, context):
    # ...
    client.invoke(
        # your lambda identifier with version
        FunctionName=your_function_name,
        # to invoke a function asynchronously, set InvocationType to 'Event'
        InvocationType='Event',
        Payload=json.dumps(your_payload)
    )
    # ...
As stated by the documentation, the 'FunctionName' parameter is the name of the Lambda function, version, or alias. […] You can append a version number or alias to any of the formats. The length constraint applies only to the full ARN. If you specify only the function name, it is limited to 64 characters in length.
The invoke function accepts the following name formats for the 'FunctionName' argument:
Function name – my-function (name-only), my-function:v1 (with alias).
Function ARN – arn:aws:lambda:us-west-2:123456789012:function:my-function.
Partial ARN – 123456789012:function:my-function.
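For example, appending a version to the function name or ARN works directly with invoke. A minimal sketch, where the function name, region, account id, and version number are all placeholders (you can also pass the version separately through the Qualifier parameter):

import json
import boto3

client = boto3.client('lambda')

# Name with a version number appended (':2' is just an example qualifier)
client.invoke(
    FunctionName='my-function:2',
    InvocationType='Event',
    Payload=json.dumps({'data': 'example'})
)

# The same works with a full ARN plus version, e.g.
# FunctionName='arn:aws:lambda:us-west-2:123456789012:function:my-function:2'
# or by passing Qualifier='2' alongside an unqualified name.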
I have a handful of Python Lambda functions with tracing enabled. They start like this:
import json
import boto3
from aws_xray_sdk.core import patch
from aws_xray_sdk.core import xray_recorder

patch(['boto3'])

def lambda_handler(event, context):
    ...
With this, tracing works for every Lambda function itself, and I can see the subservice calls to DynamoDB or Kinesis made through boto3.
But how can I connect various Lambda functions together in one trace? I'm thinking of generating a unique string in the first function and writing it into the message stored in Kinesis. Another function would then pick up the string from the Kinesis message and trace it again.
How can I do this in a way that lets me see the whole connected trace in X-Ray?
If the upstream service that invokes your Lambda functions has tracing enabled, your functions will automatically send traces. From your question, I'm not sure how your functions are invoked. If one function directly invokes another function, you'll have a single trace for them.
For your approach of invoking Lambdas with Kinesis messages, I'm not sure it would achieve what you want, for several reasons. Firstly, Kinesis is not integrated with X-Ray, which means it will not propagate the trace header to the downstream Lambda. Secondly, the segment and the trace header for a Lambda function are not directly accessible from your function's code, since they are generated by the Lambda runtime upon invocation and are thus immutable. Explicitly overriding the trace ID in a Lambda function may result in undesired behavior in your service graph.
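To illustrate the first point: if both functions have active tracing and the call goes through a patched boto3 client, a direct synchronous invocation should appear in one trace. A minimal sketch, assuming a downstream function named downstream-function (the name, payload, and subsegment name are made up):

import json

import boto3
from aws_xray_sdk.core import patch, xray_recorder

patch(['boto3'])  # boto3 calls, including lambda.invoke, are recorded as subsegments

lambda_client = boto3.client('lambda')

def lambda_handler(event, context):
    # Optional custom subsegment to group this work in the trace
    with xray_recorder.in_subsegment('call-downstream'):
        response = lambda_client.invoke(
            FunctionName='downstream-function',   # hypothetical name
            InvocationType='RequestResponse',     # synchronous call
            Payload=json.dumps({'from': 'upstream'})
        )
    return json.load(response['Payload'])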
Thanks.
I have a Lambda that requires messages to be sent to another Lambda to perform some action. In my particular case it is passing a message to a Lambda in order for it to perform HTTP requests and refresh cache entries.
Currently I am relying on the AWS SDK to send an SQS message. The mechanics of this are working fine. The concern I have is that the SQS send method call takes around 50 ms on average to complete. Since I'm in a Lambda, I cannot perform this in the background and expect it to complete before the Lambda returns and is frozen.
This is further compounded if I need to make multiple SQS send calls, which is particularly bad as the Lambda is responsible for responding to low-latency HTTP requests.
Are there any alternatives in AWS for communicating between Lambdas that do not require a synchronous API call and that exhibit more of a fire-and-forget, asynchronous behavior?
Though there are several approaches to triggering one Lambda from another, in my experience one of the fastest is to invoke the target Lambda directly by its ARN.
Did you try invoking one Lambda from the other using the AWS SDKs?
(E.g. in Python using boto3, I achieved it like this.)
See below; the parameter InvocationType='Event' invokes the target Lambda asynchronously.
The code below takes two parameters: name, which can be either your target Lambda function's name or its ARN, and params, a JSON-serializable object with the input you want to pass. Try it out!
import json

import boto3
from botocore.exceptions import ClientError

def invoke_lambda(name, params):
    lambda_client = boto3.client('lambda')
    params_bytes = json.dumps(params).encode()
    try:
        response = lambda_client.invoke(FunctionName=name,
                                        InvocationType='Event',
                                        # execution logs are only returned for synchronous (RequestResponse) calls
                                        LogType='Tail',
                                        Payload=params_bytes)
    except ClientError as e:
        print(e)
        return None
    return response
Hope it helps!
For more, refer to Lambda's Invoke in the boto3 docs.
Alternatively, you can use Lambda's Async Invoke as well.
It's difficult to give exact answers without knowing what language you are writing the Lambda function in. To at least make "warm" function invocations faster, I would make sure you are creating the SQS client outside of the Lambda event handler so it can reuse the connection. The AWS SDK should use an HTTP connection pool, so it doesn't have to re-establish a connection and go through the SSL handshake and all that every time you make an SQS request, as long as you reuse the SQS client.
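For example, in Python the change is just where the client is constructed; the queue URL and message body here are placeholders:

import json
import boto3

# Created once per container, outside the handler, so warm invocations
# reuse the same connection instead of re-establishing it each time.
sqs = boto3.client('sqs')

QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/refresh-cache'  # placeholder

def lambda_handler(event, context):
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({'action': 'refresh', 'key': event.get('key')})
    )
    return {'statusCode': 200}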
If that's still not fast enough, I would have the Lambda function handling the HTTP request pass off the "background" work to another Lambda function, via an asynchronous call. Then the first Lambda function can return an HTTP response, while the second Lambda function continues to do work.
You might also try Lambda Destinations, depending on your use case. With this you don't need to put things in a queue manually.
https://aws.amazon.com/blogs/compute/introducing-aws-lambda-destinations/
But it limits your flexibility. From my point of view, chaining Lambdas directly is an antipattern; if you need that, go for Step Functions.
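If you go the Step Functions route, the HTTP-facing Lambda only has to start an execution and return. A rough sketch, where the state machine ARN is a placeholder:

import json
import boto3

sfn = boto3.client('stepfunctions')

# Placeholder ARN of the state machine that does the actual work
STATE_MACHINE_ARN = 'arn:aws:states:us-east-1:123456789012:stateMachine:refresh-cache'

def lambda_handler(event, context):
    # start_execution returns as soon as the execution is queued,
    # so the HTTP response is not blocked by the downstream work.
    sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        input=json.dumps({'request': event})
    )
    return {'statusCode': 202, 'body': 'accepted'}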
I'm working on an application whose back end is written in .NET Web API Core and whose front end is in React. I made an endpoint that returns a JSON list, and the size of the list is almost 83 MB. When I deploy my back end to AWS Lambda and call the endpoint, it gives me an error (Error converting the response object of type Amazon.Lambda.APIGatewayEvents.APIGatewayProxyResponse from the Lambda function to JSON: Unable to expand the length of this stream beyond its capacity.: JsonSerializerException). I have already checked that the Lambda response payload limit is 6 MB (storing the data in S3 from the Lambda endpoint and then fetching it in the front end will not work for me), so is there any way I can get that much data through Lambda?
Not with a single call, sorry, you cannot. As described in this link, the payload limit (for a synchronous call) is, as you say, 6 MB. Asynchronous calls have even lower limits.
I'd suggest you modify your UI/API to narrow your results first, or trap this error and alert the user (or calling service) that the payload is too large (and hence should be aborted, narrowed, or split into multiple calls).
In a single call we cannot retrieve more than about 6 MB of data from AWS Lambda or through the AWS API. I'm also facing the same issue; either we have to filter the data or make multiple calls.
I'm trying to build a basic AWS Lambda API and function setup to do the following:
Part 1: The client calls a function through the API, which kicks off a background function (about 1 minute) to process data and immediately returns a quick message to the client in the browser.
Part 2: When the background function is complete, it returns a 302 redirect to the client with a generated link.
I'm stuck on Part 2. How can I get from the background function, through the API, back to the client?
I'm using Python and boto3 for my Lambda scripts.
This is AWS Lambda so your client doesn't have a persistent connection to the server-side code.
Here is an idea of one way to build this:
your client makes an API request that triggers a Lambda function
on invocation, your Lambda function generates a new, unique id (a UUID), writes that to DynamoDB so that this UUID can later be associated with the result of the background processing
the Lambda kicks off the background processing, passing the UUID to it
the Lambda returns the generated UUID to the client
the background processing happens asynchronously, ultimately writing any results to the DynamoDB item associated with the UUID that triggered it
the client polls another API periodically, say every 10s, sending in the UUID it was given
the polled Lambda takes the presented UUID, does a lookup in DynamoDB, and returns a 302 redirect to the result URL, or an indication that the results aren't ready yet (e.g. HTTP 404); see the sketch after this list
some process that you create removes the item from DynamoDB later (or not)
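A rough sketch of the polling Lambda described in the last two steps, assuming a hypothetical background-jobs table keyed by uuid and a result_url attribute written by the background job when it finishes:

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('background-jobs')  # hypothetical table name

def lambda_handler(event, context):
    # Assumes the UUID arrives as a path parameter on the polled API
    job_id = event['pathParameters']['uuid']
    item = table.get_item(Key={'uuid': job_id}).get('Item')

    if item and 'result_url' in item:
        # Background work is done: redirect the client to the generated link
        return {'statusCode': 302, 'headers': {'Location': item['result_url']}}

    # Not ready yet: tell the client to keep polling
    return {'statusCode': 404}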
Is there a way to allow receiving email only from specific domains in Amazon SES? For example, I only want to honour emails coming from the domain abc.com and reject mail coming from any other domain.
Yep!
You can invoke a Lambda function when an email is received; this article explains the process in more detail.
http://docs.aws.amazon.com/ses/latest/DeveloperGuide/receiving-email-action-lambda.html
From that document
Writing Your Lambda Function
To process your email, your Lambda function can be invoked asynchronously (that is, using the Event invocation type). The event object passed to your Lambda function will contain metadata pertaining to the inbound email event. You can also use the metadata to access the message content from your Amazon S3 bucket.
If you want to actually control the mail flow, your Lambda function must be invoked synchronously (that is, using the RequestResponse invocation type) and your Lambda function must call the callback method with two arguments: the first argument is null, and the second argument is a disposition property that is set to either STOP_RULE, STOP_RULE_SET, or CONTINUE. If the second argument is null or does not have a valid disposition property, the mail flow continues and further actions and rules are processed, which is the same as with CONTINUE. For example, you can stop the receipt rule set by writing the following line at the end of your Lambda function code:
callback( null, { "disposition" : "STOP_RULE_SET" });
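Based on that, a rough Python sketch of a receipt-rule function that only honours mail from abc.com might look like the following. The event fields used come from the SES receipt event, and returning the dict from the handler plays the role of the Node-style callback shown above; verify the disposition handling against your own rule set:

ALLOWED_DOMAINS = {'abc.com'}  # domains you want to honour

def lambda_handler(event, context):
    # The SES receipt event carries the envelope sender in mail.source
    sender = event['Records'][0]['ses']['mail']['source']
    domain = sender.split('@')[-1].lower()

    if domain in ALLOWED_DOMAINS:
        # Let the rest of the receipt rule set run (same as CONTINUE)
        return {'disposition': 'CONTINUE'}

    # Stop processing mail from any other domain
    return {'disposition': 'STOP_RULE_SET'}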