I followed a tutorial on creating and invoking Step Functions.
The GET request to my API returns the following output:
{
    "executionArn": "arn:aws:states:ap-northeast-1:123456789012:execution:HelloWorld:MyExecution",
    "startDate": 1.486772644911E9
}
But instead of the above response, I want my Step Function's output, which is produced by the end state, as below:
{
    "name": "Hello World"
}
How can I achieve this?
Update: You can now use Express Step Functions for synchronous requests.
AWS Step Functions are asynchronous and do not immediately return their results. API Gateway methods are synchronous and have a maximum timeout of 29 seconds.
To get the function output from a Step Function, you have to add a second method in API Gateway which will call the Step Function with the DescribeExecution action. The API Gateway client will have to call this periodically (poll) until the returned status is no longer "RUNNING".
Here's the DescribeExecution documentation: https://docs.aws.amazon.com/step-functions/latest/apireference/API_DescribeExecution.html
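A minimal polling loop with boto3 might look like this (a sketch; the execution ARN comes from the StartExecution response):

    import time
    import boto3

    sfn = boto3.client("stepfunctions")

    def wait_for_result(execution_arn, poll_interval=2):
        # Poll DescribeExecution until the execution leaves RUNNING.
        while True:
            resp = sfn.describe_execution(executionArn=execution_arn)
            if resp["status"] != "RUNNING":
                # On success, resp["output"] holds the state machine output.
                return resp
            time.sleep(poll_interval)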
Use Express Step Functions instead. This type of state machine can be invoked synchronously. Go to your API Gateway, and in the Integration Request section make sure you have the StartSyncExecution action:
After that, go a bit lower on the same page to the Mapping Templates section:
and include the following template for the application/json Content-Type:
#set($input = $input.json('$'))
{
"input": "$util.escapeJavaScript($input)",
"stateMachineArn": "arn:aws:states:us-east-1:your_aws_account_id:stateMachine:your_step_machine_name"
}
After that, go back to the Method Execution, open the Integration Response, and go to its Mapping Templates section:
And use the following template to get a custom response from your Lambda:
#set ($parsedPayload = $util.parseJson($input.json('$.output')))
$parsedPayload
My test Step Function looks like this:
And my Lambda Function code is:
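For example, a minimal Python handler that just returns the greeting (a sketch; any handler whose result becomes the state output will do):

    def lambda_handler(event, context):
        # Whatever is returned here becomes the state's output and ends up
        # in the "output" field of the StartSyncExecution response.
        return {"name": "Hello World"}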
Deploy your API Gateway stage.
Now, if you go to Postman and send a POST request with any JSON body, you get a response like this:
New Synchronous Express Workflows for AWS Step Functions is the answer:
https://aws.amazon.com/blogs/compute/new-synchronous-express-workflows-for-aws-step-functions/
Amazon API Gateway now supports integration with Step Functions StartSyncExecution for HTTP APIs:
https://aws.amazon.com/about-aws/whats-new/2020/12/amazon-api-gateway-supports-integration-with-step-functions-startsyncexecution-http-apis/
First of all, Step Functions execute asynchronously, and API Gateway is only capable of invoking a Step Function (starting an execution).
If you are waiting for the results of a Step Functions execution from a web application, you can use AWS IoT WebSockets for this. The steps are as follows.
Set up an AWS IoT topic with WebSockets.
Configure the API Gateway and Step Functions invocation.
From the web frontend, subscribe to the IoT topic as a WebSocket listener.
At the last step (and in error steps) of the Step Functions workflow, use the AWS SDK to publish to the IoT topic, which will broadcast the results to the web app running in the browser using WebSockets, as sketched below.
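A minimal publisher from that final state's Lambda could look like this with boto3 (a sketch; the topic name is hypothetical):

    import json
    import boto3

    iot = boto3.client("iot-data")

    def lambda_handler(event, context):
        # Broadcast the workflow result to every browser subscribed
        # to the topic over WebSockets.
        iot.publish(
            topic="workflow/results",   # hypothetical topic name
            qos=0,
            payload=json.dumps(event),
        )
        return event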
For more details on WebSockets with AWS IoT, refer to the Medium article "Receiving AWS IoT messages in your browser using websockets".
Expanding on what @MikeD at AWS says: if you're certain that the Step Function won't exceed the 29-second API Gateway timeout, you could create a Lambda that starts the Step Function execution and then blocks while it polls for the result. Once it has the result, it can return it.
It is a better idea, though, to have the first call return immediately with the execution id, and then pass that id into a second call that retrieves the result once it's finished.
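In code, that two-call pattern could be sketched like this (both handlers and the state machine ARN are hypothetical):

    import json
    import boto3

    sfn = boto3.client("stepfunctions")
    STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:MyMachine"  # hypothetical

    def start_handler(event, context):
        # First call: kick off the execution and return its id immediately.
        resp = sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps(event),
        )
        return {"executionArn": resp["executionArn"]}

    def result_handler(event, context):
        # Second call: look the result up by execution id once it's finished.
        resp = sfn.describe_execution(executionArn=event["executionArn"])
        if resp["status"] == "RUNNING":
            return {"status": "RUNNING"}
        return {"status": resp["status"], "output": resp.get("output")}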
Related
I'm trying to build a basic AWS Lambda API and function setup to do the following:
Part 1: The client calls the function through the API, which kicks off a background function (about one minute) to process data and immediately returns a quick message to the client in the browser.
Part 2: When the background function is complete, it returns a 302 redirect to the client with a generated link.
I'm stuck on Part 2. How can I get from the background function, back through the API, to the client?
I'm using Python boto3 for my Lambda scripts.
This is AWS Lambda, so your client doesn't have a persistent connection to the server-side code.
Here is an idea of one way to build this (see the sketch after the list):
your client makes an API request that triggers a Lambda function
on invocation, your Lambda function generates a new, unique id (a UUID), writes that to DynamoDB so that this UUID can later be associated with the result of the background processing
the Lambda kicks off the background processing, passing the UUID to it
the Lambda returns the generated UUID to the client
the background processing happens asynchronously, ultimately writing any results to the DynamoDB item associated with the UUID that triggered it
the client polls another API periodically, say every 10s, sending in the UUID it was given
the polled Lambda takes the presented UUID, looks it up in DynamoDB, and returns either a 302 redirect to the result URL or an indication that the results aren't ready yet (e.g. HTTP 404)
some process that you create removes the item from DynamoDB later (or not)
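A sketch of the two Lambdas (the table name, worker function name, and response shapes are assumptions):

    import json
    import uuid
    import boto3

    table = boto3.resource("dynamodb").Table("jobs")   # hypothetical table
    lam = boto3.client("lambda")

    def start_handler(event, context):
        # Generate a job id, record it, then kick off async processing.
        job_id = str(uuid.uuid4())
        table.put_item(Item={"id": job_id, "status": "PENDING"})
        lam.invoke(
            FunctionName="background-processor",       # hypothetical worker
            InvocationType="Event",                    # asynchronous invoke
            Payload=json.dumps({"id": job_id}),
        )
        return {"statusCode": 200, "body": json.dumps({"id": job_id})}

    def poll_handler(event, context):
        # 302 to the result URL when ready; 404 until then.
        key = {"id": event["pathParameters"]["id"]}
        item = table.get_item(Key=key).get("Item")
        if item and item["status"] == "DONE":
            return {"statusCode": 302, "headers": {"Location": item["url"]}}
        return {"statusCode": 404}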
My Lex bot has four intents. Suppose a user asks a question at the very beginning of the conversation and this question does not map to any of the four intents, so no intent is established. When this happens, I want to call Lambda to run an "intent suggestion model" (built using topic modeling) to suggest to the user what the intent of the question might be. Lambda will also have to store such queries in a database (S3 or an RDB) so that, if such queries are repetitive, the intent can eventually be added to the bot and used for other analytical solutions.
What you need is a fallback intent, but Lex does not support fallback intents as of now.
You can still achieve this if you use a bridge between your chat client and Lex.
Set up an API Gateway and a Lambda function between your chat client and Lex.
Your chat client will send a request to API Gateway, which will forward it to the Lambda function; that function calls Lex and gets a response from it. Lex will have one more Lambda function as a webhook.
In the Lambda function used to call Lex, we can check whether an intent was matched or we got an error message; if it's an error message, trigger an action such as the intent suggestion model.
You need to use the boto3 library to call Lex, using its post_text() method.
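The bridging Lambda could look something like this (a sketch; the bot name, alias, and suggestion helper are assumptions):

    import boto3

    lex = boto3.client("lex-runtime")

    def lambda_handler(event, context):
        resp = lex.post_text(
            botName="MyBot",            # hypothetical bot name
            botAlias="prod",            # hypothetical alias
            userId=event["userId"],
            inputText=event["text"],
        )
        if resp.get("intentName") is None:
            # No intent matched: store the query and run the
            # intent suggestion model instead.
            return suggest_intent(event["text"])   # hypothetical helper
        return resp["message"]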
Hope it helps.
I am implementing an API using an AWS Lambda function and API Gateway. This Lambda function will, in turn, call another 3rd-party API, transform that data into a specific format, and return it.
The 3rd-party API that I am using to fetch records has pagination and throttling enabled, but I don't want to implement pagination in the API I am building with Lambda and API Gateway. Instead, I want my API to fetch all the pages one by one, transform them into the specific format, and return everything at once. The client of this API should not have to call it with different pagination parameters.
Now, since a Lambda function has a maximum 15-minute limit and the 3rd-party API also has a max-requests-per-minute limit, what is the best way to implement this?
This is how I am doing it right now: in my Lambda function I push a specific number of requests into promises, and when the max number is reached I stop pushing more, execute the pending promises, and set a one-minute timeout. Meanwhile the pending promises execute and I build the response from them, but I don't send it back yet because there are still requests to be made. When the timeout completes, I again push a specific number of requests into promises and repeat the process.
Once all the pages are completed I return the data.
Now the problem is that this may take more than 15 minutes, and the Lambda function would terminate.
Is there a better approach for this, even one using other Amazon services?
I currently have a 3rd-party application pushing messages to a Lambda function through API Gateway. The Lambda function needs to serialize, log, and push the message to another ESB that I have very little control over.
I'm trying to ensure that there is some kind of recovery mechanism in case the Lambda function is either at max load or cannot communicate with the ESB. I've read that Kinesis is a good option for exactly this, but the ESB does not support batching for my use case.
This would cause me to run into the scenario where some messages might make it to ESB, while others don't, which would ultimately cause the batch to fail. Then, when the batch is retried, the messages would be duplicated in the ESB.
Is there a way I could utilize the functionality that Kinesis offers without the batching? Is there another AWS offering that better fits my use case? Ideally I would have one message being handled by the Lambda function that stays in the queue until it is successfully pushed into the ESB.
Any tips would be much appreciated.
Thanks,
Matt
The following might be of help to you:
1) Set up API Gateway to log to SQS, and 2) then set up a Lambda function on that SQS queue to serialize, log, and push the message to the external endpoint.
For the first part, How to integrate API Gateway with SQS will be of help (as already mentioned in the comments).
This article might help you more with the second part: https://dzone.com/articles/integrate-sqs-and-lambda-serverless-architecture-f
Note that you can also choose what kind of trigger you would like, based on the use case: a cron-based poll or an event-based trigger. You also have control over when you delete the message from SQS in your Lambda function. (You can also find very basic code in the Lambda blueprint named "sqs-poller".)
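For the second part, the SQS-triggered function could be sketched like this (push_to_esb and the queue URL are hypothetical stand-ins):

    import json
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/inbound"  # hypothetical

    def lambda_handler(event, context):
        # With an SQS event source mapping, messages arrive in event["Records"].
        for record in event["Records"]:
            message = json.loads(record["body"])
            push_to_esb(message)                   # hypothetical ESB call
            # Delete only after a successful push, so failed messages
            # become visible again and are retried.
            sqs.delete_message(
                QueueUrl=QUEUE_URL,
                ReceiptHandle=record["receiptHandle"],
            )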
Thanks!
My AWS Lambda function (in Python) is called when an object 123456 is created in S3's input_bucket; it does a transformation on the object and saves it in output_bucket.
I would like to notify my main application whether the request was successful or unsuccessful. For example, POST http://myapp.com/successful/123456 if the processing is successful, and http://myapp.com/unsuccessful/123456 if it's not.
One solution I thought of is to create a second Lambda function that is triggered by a put event in output_bucket and performs the successful POST request. This solves half of the problem, but I can't trigger the unsuccessful POST request.
Maybe AWS has a more elegant solution using a parameter in Lambda or a service that deals with these types of notifications. Any advice or pointer in the right direction will be greatly appreciated.
A few possible solutions which I see as elegant:
Using an SNS topic: from your transformation Lambda, publish to an SNS topic with a success/failure message, and have SNS call an HTTP/HTTPS endpoint with the message payload (see the sketch below). The advantage here is that your transformation Lambda is loosely coupled with the endpoint trigger and only connected through messaging.
Using Step Functions:
You could arrange to run a Lambda function every time a new object is uploaded to an S3 bucket. This function can then kick off a state machine execution by calling StartExecution. The advantage of using Step Functions is that you can coordinate the components of your application as a series of steps in a visual workflow.
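For the SNS option, the end of the transformation Lambda could look like this (the topic ARN is hypothetical; the HTTP/HTTPS endpoint is configured as a subscription on the topic):

    import json
    import boto3

    sns = boto3.client("sns")
    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:transform-status"  # hypothetical

    def notify(object_key, success):
        # SNS fans this out to the subscribed HTTP/HTTPS endpoint.
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="transformation-result",
            Message=json.dumps({"object": object_key, "successful": success}),
        )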
I don't think there is an elegant AWS solution unless you re-architect: something like your Lambda sending a message with a STATUS to SQS or some other intermediary messaging service, which then invokes the POST to your application.
If you still want to go with your approach, you might need to configure a dead-letter queue for error handling in failure cases, as described here (note that the use cases described there are not comprehensive, so make sure it covers yours).