Trigger another Lambda function from Lambda@Edge - amazon-web-services

I would like to offload some functionality from my Lambda@Edge function to speed up response time.
This would mean triggering another Lambda function inside my Lambda@Edge function.
Lambda@Edge distributes the application across all regions, so when a request is made, the application executes in the region closest to the requester.
My current solution is to create an SNS topic with the same name in every region, have an SQS queue in us-east-1 subscribe to all of these SNS topics, and have the Lambda function listen to that SQS queue.
However, creating an SNS topic in every region is quite a hassle to maintain.
Any other suggestions on how I can trigger another Lambda function inside my Lambda@Edge function?
Thanks!

Within a Lambda function you can simply call another Lambda function. I don't know which language you are using, but here is an example in Python with the boto3 library, using a sample payload of information you may want to pass on to the Lambda being invoked (I used region and detail-type as example fields to pass along):
import json
import boto3

payload = {'region': '<the region>', 'detail-type': 'some other detail you care about'}
lambda_client = boto3.client('lambda')  # optionally pass region_name here
lambda_client.invoke(FunctionName='<ARN of the function you want to invoke>', InvocationType='Event', Payload=json.dumps(payload))
Similar options are available in other languages. More details for this call in Python are at https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.invoke

Related

How to get the content of an SQS message using Lambda in AWS?

Hi everyone.
I'm interested in getting the content of an SQS message using Lambda. Let me explain my infrastructure: I have an EC2 instance running a script like the one below, which sends a message containing the instance ID to SQS.
#!/bin/bash
INSTANCE_ID=$(curl http://*.*.*.*/latest/meta-data/instance-id)
REGION=$(curl http://*.*.*.*/latest/meta-data/placement/availability-zone | sed '$s/.$//')
QUEUE_URL=$(...)
aws sqs send-message --queue-url "${QUEUE_URL}" --message-body "${INSTANCE_ID}" --region "${REGION}"
The idea is: when SQS receives the message, I would like to trigger a Lambda function to modify that instance. But for that, I need the instance ID. I have searched a lot and, unfortunately, I couldn't understand very well how to get the instance ID from the SQS message mentioned above using AWS Lambda.
I've been trying to solve this problem, but as I don't understand Lambda very well, I searched for many solutions and tested them. Unfortunately, I had no success, so I'm interested in learning more about this service.
If someone could help me with that, I'd be very grateful.
The AWS Lambda function can be configured with the Amazon SQS queue as a 'trigger'.
When a message is sent to the SQS queue, the Lambda function will be invoked. The message body will be available to the Lambda function in the event it receives.
The code would look something like:
def lambda_handler(event, context):
    for record in event['Records']:
        body = record['body']
        print("From SQS: " + body)
It is possible that multiple messages are passed to the Lambda function, so it first loops through each Record, then extracts the passed-in information in the body parameter.
The print() will show the contents of the message body in CloudWatch Logs. Check to make sure it contains what you expect. Then, add code that uses that value.
There is no need for your Lambda function to specifically call SQS -- this is handled automatically by the AWS Lambda service, which will then delete the message from the queue after your Lambda function successfully completes.
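For example, once the handler has the instance ID from the message body, it can act on the instance with boto3. A rough sketch, where tagging is just a stand-in for whatever modification you actually need:
import boto3

ec2 = boto3.client('ec2')

def lambda_handler(event, context):
    for record in event['Records']:
        instance_id = record['body'].strip()   # the EC2 script sends the bare instance ID as the message body
        print("From SQS: " + instance_id)
        # Placeholder action: tag the instance; replace with the modification you need
        ec2.create_tags(Resources=[instance_id], Tags=[{'Key': 'ProcessedFromSQS', 'Value': 'true'}])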
John's answer is all you need.
Since you are new to AWS, I'll frame the answer in a way that will also help you debug in the future.
You first need to make sure the SQS queue is configured as a trigger for your Lambda function.
In your Lambda function, regardless of the language, the handler receives an event parameter. This parameter contains the information about the event that triggered the Lambda.
The event parameter includes details such as the event source and the event data; within that data you will find your instance ID.
Just try logging the event, then drill into Records, and you will find your instance ID.
Then you can do whatever you need with the instance ID.
Tip: the Lambda console has a test-event feature where you can generate a sample event from different AWS services (SQS is among them), so you can visualise what the event looks like before testing with a real event from SQS.
Docs for reference - https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-concepts.html#gettingstarted-concepts-event
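If it helps to visualise that structure, here is a minimal local test that feeds an SQS-shaped event into the lambda_handler shown above; the messageId, queue ARN, and instance ID are made-up placeholders, and a real event carries more fields:
sample_event = {
    'Records': [
        {
            'messageId': '059f36b4-87a3-44ab-83d2-661975830a7d',               # placeholder
            'eventSource': 'aws:sqs',
            'eventSourceARN': 'arn:aws:sqs:us-east-1:123456789012:my-queue',   # placeholder
            'body': 'i-0123456789abcdef0'                                      # the instance ID sent by the EC2 script
        }
    ]
}

lambda_handler(sample_event, None)   # prints: From SQS: i-0123456789abcdef0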

Better way to save AWS Lambda response to S3

I have a Lambda handler returning a response and I want to save the response as a JSON file to S3.
I went through some pages describing how to save information from AWS Lambda to S3 using boto3 directly from the Lambda function. However, I'd like the Lambda function to concentrate on calculating and producing a response, and let another Lambda or module handle writing that output to S3.
Is there any way offered by AWS? I guess Step Functions is one way to go, but I'd like to know if there is a better option.
If you don't want to do it from the same Lambda with boto3 you have quite a few options. As you mentioned yourself, one option could be another Lambda, for which you'll need a trigger, which in turn could be one of the following:
Calling the second lambda from the first lambda directly with boto3
Sending the response to SNS which will trigger the second lambda which will then parse and save it to S3.
Sending the response as a message to SQS which will be then processed by the second lambda.
Sending it as an event to AWS EventBridge which will trigger the second lambda similar to the above.
As suggested by yourself and other answers/comments - Step Functions.
In general, it depends on your use case. I'm not quite sure what your payload is, but the best option could actually still be to push it to S3 directly from your first Lambda either way.
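To make the first option concrete: if the first Lambda invokes a writer Lambda directly with boto3 and passes its response as the payload, the writer can stay very small. A minimal sketch, assuming the bucket name and key prefix are placeholders you would replace:
import json
import boto3

s3 = boto3.client('s3')
BUCKET = 'my-output-bucket'   # placeholder bucket name

def lambda_handler(event, context):
    # 'event' is assumed to be the response payload handed over by the first Lambda
    key = 'responses/' + context.aws_request_id + '.json'
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(event), ContentType='application/json')
    return {'bucket': BUCKET, 'key': key}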

Asynchronously invoke lambda functions

I need to asynchronously invoke Lambda functions from my EC2 instance. At a high level, many services come to mind (most likely all of them support my desired functionality):
AWS State Machine (not sure), Step Functions, ActiveMQ, SQS, SNS. I am aware of the pros and cons of each at a high level, but I'm not sure which one I should go for :|. Please let me know your feedback.
PS: We expect invocations in the thousands per second at peak for very short periods. Concurrency for Lambda functions is not an issue, as we can ask Amazon to increase the limit along with the burst.
If you want to invoke asynchronously then you cannot use SQS, since SQS invokes the Lambda function synchronously through an event source mapping.
Out of the options you listed above, you can use SNS to invoke a Lambda function asynchronously.
A better option would be to write a small piece of code with whichever AWS SDK you are comfortable with, and call the Lambda function asynchronously from that code.
Example in Python using boto3, invoking asynchronously:
Pass 'Event' as the InvocationType to invoke the Lambda function asynchronously, or 'RequestResponse' to invoke it synchronously.
import json
import boto3

client = boto3.client('lambda')
payload3 = json.dumps({'example': 'data'})   # JSON-serialized payload for the target function
response = client.invoke(
    FunctionName="loadSpotsAroundPoint",
    InvocationType='Event',                  # asynchronous; use 'RequestResponse' for synchronous
    Payload=payload3
)

Using AWS API in order to invoke Lambda functions Asynchronously

I have been researching the AWS documentation on how to invoke Lambda functions, and I've come across different ways to do that. Mainly, Lambda invocation is done by calling the Invoke() function, which can be used to invoke Lambda functions synchronously or asynchronously.
Currently I am invoking my Lambda functions via HTTP request (as a REST API), but an HTTP request times out after 30 seconds, while asynchronous calls, as far as I know, time out after 15 minutes.
What are the advantages, besides the time limit I have already mentioned, of asynchronous Lambda invocation compared to invoking Lambda with an HTTP request? Also, what are the best (recommended) ways to invoke Lambdas in production? In the AWS docs (SDK for Go - https://docs.aws.amazon.com/sdk-for-go/api/service/lambda/#InvokeAsyncInput) I see that InvokeAsyncInput and InvokeAsyncOutput have been deprecated, so I am wondering what an async implementation would actually look like.
Lambda really is about event-driven computing. This means Lambda always gets triggered in response to an event. This event can originate from a wide range of AWS services as well as the AWS CLI and SDK.
All of these events invoke the Lambda function and pass some kind of information in the form of an event and a context object. What this event looks like depends on the service that triggered Lambda. You can find more information about the context in this documentation.
There is no real "best" way to invoke Lambda - this mostly depends on your use case - if you're building a webservice, let API Gateway invoke Lambda for you. If you want to process new files on S3 - let S3 trigger Lambda. If you're just testing the Lambda function you can invoke it via the CLI. If you have custom software that needs to trigger a Lambda function you can use the SDK. If you want to run Lambda on a schedule, configure CloudWatch events...
Please provide more information about your use case if you require a more detailed evaluation of the available options - right now this is very broad.
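As for the deprecated InvokeAsync API mentioned in the question: it has been superseded by the plain Invoke call with InvocationType='Event', which queues the event and returns immediately. A minimal boto3 sketch (the function name and payload are placeholders; the Go SDK's Invoke works the same way):
import json
import boto3

client = boto3.client('lambda')
response = client.invoke(
    FunctionName='my-function',               # placeholder
    InvocationType='Event',                   # asynchronous: the event is queued and the call returns immediately
    Payload=json.dumps({'key': 'value'})      # placeholder payload
)
print(response['StatusCode'])                 # 202 means the event was accepted for asynchronous processing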

Trigger RDS lambda on CloudFront access

I'm serving static JS files from my S3 bucket over CloudFront, and I want to monitor whoever accesses them. I don't want to do it with CloudWatch and the like; I want to log it on my own.
For every request to CloudFront I'd like to trigger a Lambda function that inserts data about the request into my MySQL RDS instance.
However, CloudFront limits Viewer Request / Viewer Response triggers too much, such as a 1-second timeout (which is too little to connect to MySQL), no VPC configuration for the Lambda (therefore I can't even access the RDS subnet), and so on.
What is the optimal way to achieve this? Should I set up an API Gateway, and how would I send a request to it?
The typical method to process static content (or any content) accessed from CloudFront is to enable logging and then process the log files.
To enable CloudFront edge events, which can include processing and changing an event, look into Lambda@Edge.
I would enable logging first and monitor the traffic for a while. When bad actors hit your web site (CloudFront distribution) they will generate massive traffic, which could result in some sizable bills with Lambda@Edge. I would also recommend looking into AWS WAF to help mitigate denial-of-service attacks, which may reduce the amount of Lambda processing.
This seems like a suboptimal strategy, since CloudFront suspends request/response processing while the trigger code is running -- the Lambda code in a Lambda@Edge trigger has to finish executing before processing of the request or response continues, hence the short timeouts.
CloudFront provides logs that are dropped multiple times per hour (depending on the traffic load) into a bucket you select, which you can capture from an S3 event notification, parse, and insert into your database.
However...
If you really need real-time capture, your best bet might be to create a second Lambda function, inside your VPC, that accepts the data structures provided to the Lambda@Edge trigger.
Then, inside the code for the viewer request or viewer response trigger, all you need to do is use the built-in AWS SDK to invoke your second Lambda function asynchronously, passing the event to it.
That way, the logging task is handed off, you don't wait for a response, and the CloudFront processing can continue.
I would suggest that if you really want to take this route, this will be the best alternative. One Lambda function can easily invoke a second one, even if the second function is not in the same account, region, or VPC, because the invocation is done by communicating with the Lambda service's endpoint API.
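A rough sketch of that hand-off as a Python viewer-request trigger (Lambda@Edge examples are more often written in Node.js, but the idea is identical; the target function name and region are placeholders):
import json
import boto3

# Client for the region where the second (logging) function lives -- placeholder region
lambda_client = boto3.client('lambda', region_name='us-east-1')

def handler(event, context):
    request = event['Records'][0]['cf']['request']
    # Hand the CloudFront event off asynchronously and return without waiting
    lambda_client.invoke(
        FunctionName='cloudfront-access-logger',   # placeholder name of the second function
        InvocationType='Event',
        Payload=json.dumps(event)
    )
    return request   # let CloudFront continue processing the request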
But, there's still room for some optimization, because you have to take another aspect of Lambda@Edge into account, and it's indirectly related to this:
no VPC configuration to the lambda
There's an important reason for this. Your Lambda@Edge trigger code runs in the region closest to the edge location that is handling traffic for each specific viewer. Your Lambda@Edge function is provisioned in us-east-1, but it's then replicated to all the regions, ready to run when CloudFront needs it.
So, when you call that 2nd Lambda function mentioned above, you'll actually be reaching out to the Lambda API in the 2nd function's region -- from whichever region is handling the Lambda@Edge trigger for this particular request.
This means the delay will be greater, the further apart the two regions are.
Thus your truly optimal solution (for performance purposes) is slightly more complex: instead of the L@E function invoking the 2nd Lambda function asynchronously by making a request to the Lambda API, you can create one SNS topic in each region and subscribe the 2nd Lambda function to each of them. (SNS can invoke Lambda functions across regional boundaries.) Then, your Lambda@Edge trigger code simply publishes a message to the SNS topic in its own region, which will immediately return a response and asynchronously invoke the remote Lambda function (the 2nd function, which is in your VPC in one specific region). Within your Lambda@Edge code, the environment variable process.env.AWS_REGION gives you the region where you are currently running, so you can use it to send the message to the correct SNS topic with minimal latency. (When testing, this is always us-east-1.)
Yes, it's a bit convoluted, but it seems like the way to accomplish what you are trying to do without imposing substantial latency on request processing -- Lambda@Edge hands off the information as quickly as possible to another service that will assume responsibility for actually generating the log message in the database.
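A sketch of that per-region publish from the trigger code, in Python with boto3 (the account ID and topic name in the ARN are placeholders; in a Python runtime the current region is read from os.environ['AWS_REGION'] rather than process.env):
import json
import os
import boto3

def handler(event, context):
    request = event['Records'][0]['cf']['request']
    region = os.environ['AWS_REGION']   # region currently executing this Lambda@Edge replica
    sns = boto3.client('sns', region_name=region)
    topic_arn = 'arn:aws:sns:' + region + ':123456789012:cf-access-log'   # placeholder account ID and topic name
    sns.publish(TopicArn=topic_arn, Message=json.dumps(event))
    return request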
Lambda and relational databases pose a serious challenge around concurrency, connections and connection pooling. See this Lambda databases guide for more information.
I recommend using Lambda@Edge to talk to a service built for higher concurrency as the first step of recording access. For example, you could have your Lambda@Edge function write access records to SQS, and then have a background worker read from SQS into RDS.
Here's an example of Lambda@Edge interacting with STS to read some config. It could easily be refactored to write to SQS.
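A possible shape for that Lambda@Edge-to-SQS hand-off, sketched in Python; the queue URL and region are placeholders, and only a few request fields are captured:
import json
import boto3

sqs = boto3.client('sqs', region_name='us-east-1')   # region of the queue -- placeholder
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/cf-access-log'   # placeholder

def handler(event, context):
    request = event['Records'][0]['cf']['request']
    record = {'uri': request['uri'], 'method': request['method'], 'clientIp': request['clientIp']}
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(record))
    return request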