My AWS Lambda function (in Python) is invoked when an object (e.g. 123456) is created in S3's input_bucket; it transforms the object and saves the result in output_bucket.
I would like to notify my main application whether the request was successful or unsuccessful: for example, POST http://myapp.com/successful/123456 if the processing succeeds and POST http://myapp.com/unsuccessful/123456 if it fails.
One solution I thought of is to create a second Lambda function that is triggered by a put event in output_bucket and makes the successful POST request. But this solves only half of the problem, because I can't trigger the unsuccessful POST request.
Maybe AWS has a more elegant solution, using a parameter in Lambda or a service that deals with these types of notifications. Any advice or pointer in the right direction would be greatly appreciated.
A few possible solutions that I see as elegant:
Using an SNS topic: From your transformation Lambda, publish to an SNS topic with a success/failure message, and have SNS call an HTTP/HTTPS endpoint with the message payload. The advantage here is that your transformation Lambda is loosely coupled with the endpoint trigger and connected only through messaging.
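A minimal sketch of the publish step, assuming a hypothetical topic ARN and message shape (your endpoint would be an HTTP/HTTPS subscription on the topic):

```python
import json
import boto3

sns = boto3.client("sns")

# Hypothetical topic; SNS fans the message out to every subscription
# registered on it, including HTTP/HTTPS endpoints.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:transform-status"

def notify(object_key, succeeded):
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="transform-status",
        Message=json.dumps({
            "objectKey": object_key,
            "status": "successful" if succeeded else "unsuccessful",
        }),
    )
```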
Using AWS Step Functions:
You could arrange to run a Lambda function every time a new object is uploaded to an S3 bucket. This function can then kick off a state machine execution by calling StartExecution. The advantage of using Step Functions is that you can coordinate the components of your application as a series of steps in a visual workflow.
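For example, the S3-triggered function could look roughly like this (the state machine ARN is an assumption):

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

def handler(event, context):
    # Pull the new object's location out of the S3 "object created" event
    # and hand it to the state machine as the execution input.
    record = event["Records"][0]
    sfn.start_execution(
        stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:transform",  # assumed ARN
        input=json.dumps({
            "bucket": record["s3"]["bucket"]["name"],
            "key": record["s3"]["object"]["key"],
        }),
    )
```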
I don't think there is an elegant AWS solution unless you re-architect: something like your Lambda sending a message with the STATUS to SQS or some intermediary messaging service, and the intermediary then invoking the POST to your application.
If you still want to go with your way of solving it, you might need to configure a dead-letter queue to handle the failure cases (note that the use cases described there are not comprehensive, so make sure it covers yours), as described here.
Related
I have a question about Lambda anti-patterns, and how to address my specific situation.
My current setup is this:
user/webpage -> API Gateway -> Lambda1 -> synchronously calls Lambda2 (my other microservice) -> back to Lambda1 -> back to user
Currently my Lambda2 is behind an API Gateway as well, but I've toyed with the idea of invoking it directly. Either way, it's basically another microservice that I control.
I understand that, generally, Lambdas calling other Lambdas are considered an anti-pattern. All the blogs/threads/etc. online mention using Step Functions instead, or SQS, or something else.
In my situation, I don't see how I could use Step Functions, since I have to return something to the webpage/user. If I used a Step Function, it seems I would have to poll for the results, or maybe use WebSockets; basically, in my webpage I would not be able to just call my endpoint and get a result. I'd need some way to get my result asynchronously.
Similarly with a queue, or any other solution I saw online: it's basically all asynchronous.
Is there any other pattern or way of doing this?
Thanks.
While invoking a Lambda from another Lambda, everything will work fine except when the second Lambda times out or is throttled. If your business logic handles failures gracefully and has idempotent behaviour built in, then a Lambda calling another Lambda (via API Gateway or direct invocation) should work fine. Having said that, AWS has come out with Synchronous Express Workflows for AWS Step Functions; the linked blog has detailed examples of using it. The only caveat is that your entire operation must finish within 5 minutes, the maximum duration an Express Workflow can run. If your application flow completes within that limit, this is the recommended way of orchestrating services.
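A minimal sketch of starting a Synchronous Express Workflow from the first Lambda (the state machine ARN is an assumption, and the workflow is assumed to wrap what Lambda2 does today):

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

def handler(event, context):
    # Blocks until the Express Workflow finishes (max 5 minutes), so the
    # result can be returned synchronously to API Gateway.
    result = sfn.start_sync_execution(
        stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:my-express-flow",  # assumed
        input=json.dumps(event),
    )
    if result["status"] != "SUCCEEDED":
        raise RuntimeError(f"Workflow failed: {result.get('error')}")
    return json.loads(result["output"])
```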
I am using AppSync with Aurora/RDS.
In some cases, after a query/mutation is sent to the DB, I would like to send an email and a push notification; but this should be detached from the query/mutation, that is, it does not matter whether it fails or works.
At the moment I see all these options:
Can you tell me which one I should use?
1. Create a query that calls a Lambda function that sends the push/email, and call it from the client once the actual query/mutation is done. I don't like this because the logic is in the client rather than the server. It seems easy to implement, and I guess it is easy to ignore the result of the second operation from the client's point of view.
2. A variation of the previous one: pack both operations in a single network request. With GraphQL, that is easy, but I don't want the client to wait for the second operation. (Is it possible to create Lambda functions that return immediately, like a trigger of other functions? See the sketch after this list.)
3. Attach my queries/mutations to Lambda functions instead of RDS directly. Then those Lambda functions call other Lambda functions for the notifications. This seems more difficult to program, but more microservices-architecture friendly. Probably this is the best one; not sure.
4. Use SQL triggers and call Lambda functions from those triggers. I don't know if this is even possible. Researching...
5. Use pipeline resolvers: the first one is the query/mutation, the second one is the Lambda function that sends the push/email. I would say this is a bad option because I don't want the client to wait for the second operation or to manage the logic when the second resolver fails.
6. Amazon RDS events: it appears to be possible to attach Lambda functions to specific AWS RDS events (https://docs.aws.amazon.com/lambda/latest/dg/services-rds.html). It seems to be about creating DBs, restoring, and that kind of thing; I don't see anything like creating or updating a row, so I discard this unless I am wrong.
7. Invoke a Lambda function from an Aurora MySQL stored procedure: CALL mysql.lambda_async(lambda_function_ARN, lambda_function_input) (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Lambda.html). "For example, you might want to send a notification using Amazon Simple Notification Service (Amazon SNS) whenever a row is inserted into a specific table in your database." That is exactly what I am looking for. I like this idea, but I don't know if it is possible with Aurora Serverless. Researching... It seems it is not possible when using serverless: https://www.reddit.com/r/aws/comments/a9szid/aurora_serverless_call_lambda/
8. Use Step Functions: no idea how to use them.
9. Somehow attach this Lambda notification function to GraphQL/AppSync instead of the database, but I guess that is not a good idea, because I need to read the database to get the push notification token and the email of the user who is going to receive the notifications.
Which method do you recommend? I am using the Amplify CLI.
Thanks a lot.
Currently, AWS AppSync can only send notifications when the app is active. We are looking into implementing the non-active case.
If you want to send notifications when the app is not active, you can use push notifications on iOS (silent push/interactive push) or push notifications on Android.
If you want to send emails, voice/text messages, or notifications to the phone when the app is not active, you can integrate with Amazon Pinpoint.
My Lex bot has four intents. Suppose a user asks a question at the very beginning of the conversation and this question does not match any of the four intents, so no intent will be established. When this happens, I want to call Lambda to run an "intent suggestion model" (built using topic modeling) to suggest to the user what the intent of the question might be. Lambda will also have to store such queries in a database (S3 or an RDB) so that, if such queries are repetitive, the intent can eventually be added to the bot, and for other analytical purposes.
What you need is a fallback intent, but Lex does not support fallback intents as of now.
You can still achieve this if you use a bridge between your chat client and Lex.
Set up an API Gateway and a Lambda function between your chat client and Lex.
Your chat client will send a request to API Gateway, and API Gateway will forward it to the Lambda function, which will call Lex and get a response from it. Lex will have one more Lambda function as a webhook.
In the Lambda function you use to call Lex, check whether an intent was matched or an error message came back; if it's an error message, trigger an action such as your intent suggestion model.
You need to use the boto3 library to call Lex, via its post_text() method, along the lines of the sketch below.
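A minimal sketch (bot name, alias, user id, and the suggest_intent() helper are all assumptions):

```python
import boto3

lex = boto3.client("lex-runtime")

def route(user_message):
    response = lex.post_text(
        botName="MyBot",        # hypothetical bot
        botAlias="prod",
        userId="session-123",   # any id identifying the conversation
        inputText=user_message,
    )
    # When Lex cannot match an intent, intentName is typically absent and
    # the bot falls back to its clarification prompt; treat that as
    # "unmatched" and run the intent suggestion model instead.
    if response.get("intentName") is None:
        return suggest_intent(user_message)  # hypothetical model call
    return response["message"]
```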
Hope it helps.
I'm trying to implement an AWS Lambda function that should send an HTTP request. If that request fails (the response is anything but status 200), I should wait another hour before retrying (longer than the Lambda stays warm). What's the best way to implement this?
What comes to mind is to persist my HTTP request in some way and to be able to trigger the Lambda function again after a specified amount of time whenever there is a persisted HTTP request. But I'm not completely sure which AWS service would provide that functionality. Is SQS an option that can help here?
Or can I dynamically schedule a Lambda execution for this? Note that the retried request should be identical to the first one.
Any other suggestions? What's the best practice for this?
(A Lambda function is my only option; no EC2 or the like is possible.)
You can't directly trigger Lambda functions from SQS (at the time of writing, anyhow).
You could potentially handle the non-200 errors by writing the request data (with an appropriate timestamp) to a DynamoDB table that's configured for TTL. You can use DynamoDB Streams to detect when DynamoDB deletes a record, and that deletion can trigger a Lambda function from the stream.
This is obviously a roundabout way to achieve what you want, but it should be simple to test; a sketch of the TTL write follows.
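Something like this for the write side (table and attribute names are assumptions; the table would have TTL enabled on expires_at and a stream feeding the retry Lambda):

```python
import json
import time
import boto3

# Hypothetical table with TTL enabled on "expires_at". When the item
# expires, DynamoDB deletes it, the deletion shows up as a REMOVE record
# on the table's stream, and that record triggers the retry Lambda.
table = boto3.resource("dynamodb").Table("pending-retries")

def persist_for_retry(request_id, request_payload):
    table.put_item(Item={
        "request_id": request_id,
        "payload": json.dumps(request_payload),
        # TTL deletion is approximate and can lag behind the expiry time,
        # so treat the hour as a lower bound, not an exact schedule.
        "expires_at": int(time.time()) + 3600,
    })
```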
As jarmod mentioned, you cannot trigger Lambda functions directly from SQS. But a workaround (one I've used personally) would be the following:
If the request fails, push an item to an SQS delay queue (docs).
The SQS message will only become visible on the queue after a certain delay. (Note that SQS caps the per-message delay at 15 minutes, so for the full hour you mentioned you would store a retry timestamp in the message and have the poller check it.)
Then have a second, scheduled Lambda function that is triggered by a cron value with a smaller timeframe (I used a minute).
This second function would then scan the SQS queue and, if an item is on the queue, call your first Lambda function (either via SNS or with the AWS SDK) to retry it.
PS: Note that you can put data in an SQS message; since you mentioned you need the Lambda invocations to be identical, you can store your first function's input there to be reused after an hour. A sketch of the enqueue step:
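(The queue URL is an assumption; retry_after is the hypothetical timestamp the poller would check, given the 15-minute delay cap.)

```python
import json
import time
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/retry-queue"  # hypothetical

def enqueue_retry(original_event):
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        # Keep the first function's input so the retry is identical, plus
        # a timestamp the scheduled poller can check before retrying.
        MessageBody=json.dumps({
            "input": original_event,
            "retry_after": int(time.time()) + 3600,
        }),
        DelaySeconds=900,  # 15 minutes, the SQS per-message maximum
    )
```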
I suggest that you take a closer look at AWS Step Functions for this. Basically, Step Functions is a state machine that allows you to execute a Lambda function, i.e. a task, in each step.
More information can be found if you log in to your AWS Console and choose "Step Functions" from the "Services" menu. By pressing the Get Started button, several example implementations of different Step Functions are presented. First, I would take a closer look at the "Choice state" example (to determine whether or not the HTTP request was successful); if it was not, proceed with the "Wait state" example. A rough combination of the two is sketched below.
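(The Lambda ARN, role ARN, and the statusCode field the task is assumed to return are all placeholders; a Standard Workflow can wait for an hour without issue.)

```python
import json
import boto3

# The Choice state checks the HTTP status reported by the Lambda; anything
# but 200 goes to the Wait state, which pauses an hour and loops back to
# the task. The Lambda is assumed to echo the original request in its
# output so each retry sees the same data.
definition = {
    "StartAt": "SendRequest",
    "States": {
        "SendRequest": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:sendHttpRequest",
            "Next": "CheckStatus",
        },
        "CheckStatus": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.statusCode", "NumericEquals": 200, "Next": "Done"}
            ],
            "Default": "WaitAnHour",
        },
        "WaitAnHour": {"Type": "Wait", "Seconds": 3600, "Next": "SendRequest"},
        "Done": {"Type": "Succeed"},
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="http-retry",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsRole",  # hypothetical
)
```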
I currently have a third-party application pushing messages to a Lambda function through API Gateway. The Lambda function needs to serialize, log, and push the message to another ESB that I have very little control over.
I'm trying to ensure that there is some kind of recovery mechanism in the case that the Lambda function is either at max load or cannot communicate with the ESB. I've read about Kinesis being a good option for exactly this, but the ESB does not support batching for my use case.
This would cause me to run into the scenario where some messages might make it to ESB, while others don't, which would ultimately cause the batch to fail. Then, when the batch is retried, the messages would be duplicated in the ESB.
Is there a way I could utilize the functionality that Kinesis offers without the batching? Is there another AWS offering that better fits my use case? Ideally I would have one message being handled by the Lambda function that stays in the queue until it is successfully pushed into the ESB.
Any tips would be much appreciated.
Thanks,
Matt
The following might be of help to you:
1) Set up API Gateway to log to SQS, and 2) then set up a Lambda function on that SQS queue to serialize, log, and push the message to the external endpoint.
For the first part, "How to integrate API Gateway with SQS" will be of help (as already mentioned in the comments).
This article might help you more with the second part: https://dzone.com/articles/integrate-sqs-and-lambda-serverless-architecture-f
Note that you can also choose what kind of trigger you would like (based on the use case), a cron-based poll or event-based, and you also have control over when you delete from SQS in your Lambda function. (You can also find very basic code in the Lambda blueprint named "sqs-poller".) A rough sketch of the poller pattern:
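(The queue URL is an assumption and push_to_esb() is a placeholder for your serialize/log/push logic.)

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/esb-inbox"  # hypothetical

def push_to_esb(payload):
    # Placeholder: serialize, log, and push the message to the ESB here.
    raise NotImplementedError

def handler(event, context):
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=5
    )
    for msg in resp.get("Messages", []):
        push_to_esb(json.loads(msg["Body"]))
        # Delete only after a successful push, so a failed message stays
        # on the queue and becomes visible again for a later attempt.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```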
Thanks!