I am using Newman to send a list of requests: A, B, C, D.
The problem is that request C takes more than 10 seconds to respond, and I don't want to wait for it to finish before sending request D.
I've tried using setTimeout in both the pre-request and test scripts of request C, but it's no use.
Is there any way to accomplish this?
Related
I have a request that takes more than 3 minutes to process. I want the request to be sent, get an immediate 200 answer, and then receive the result once the work is finished.
The workflow you've described is called asynchronous task execution.
The main idea is to remove time- or resource-consuming parts of the work from the code that handles HTTP requests and delegate them to some kind of worker. The worker might be a different thread or process, or even a separate service that runs on a different server.
This makes your application more responsive, as the user gets the HTTP response much quicker. This approach also lets you display UI-friendly things such as progress bars and status marks for the task, create retry policies for when a task fails, etc.
Example workflow:
the user makes an HTTP request that initiates the task
the server creates the task, adds it to the queue, and immediately returns an HTTP response containing a task_id
the front-end code starts AJAX polling for the task's results, passing the task_id
the server handles the polling HTTP requests and looks up the status for this task_id. It returns the info (either the results or "still waiting") in the HTTP response
the front-end displays a spinner if the server returns "still waiting", or the results if they are ready
The most popular way to do this in Django is to use the Celery distributed task queue.
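To make the workflow concrete, here is a minimal sketch with Django and Celery. It assumes Celery is already configured for the project; the task and view names (process_data, start_task, task_status) are hypothetical:

    # tasks.py -- a hypothetical time-consuming task
    from celery import shared_task

    @shared_task
    def process_data(payload):
        # the slow work happens here, outside the request/response cycle
        ...
        return {"answer": 42}

    # views.py
    from celery.result import AsyncResult
    from django.http import JsonResponse

    def start_task(request):
        # enqueue the task and return its id immediately
        task = process_data.delay(request.POST.dict())
        return JsonResponse({"task_id": task.id})

    def task_status(request, task_id):
        # the front-end polls this endpoint, passing the task_id
        result = AsyncResult(task_id)
        if result.ready():
            return JsonResponse({"status": "done", "result": result.result})
        return JsonResponse({"status": "still waiting"})

The polling endpoint stays cheap, so the web workers are never tied up by the slow task.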
Suppose a request comes in: you will have to verify it, then send the response and use some mechanism to complete the work in the background. You have to be sure that the request can actually be completed. You can use pipelining, where you put every task into a pipeline; Django-Celery is an option, but don't use it unless it's required. Look for the easiest way to resolve the issue.
I have an async (epoll-based) HTTP server, written in a mix of C and C++, that serves as a message broker and runs on Linux/macOS. Here is the scenario that I am currently testing manually with curl in multiple shell windows and want to automate.
Request 1: Long poll asking for a message. There are none, so this request waits until a message arrives.
Request 2: Puts in a message that resolves request 1.
I'm unsure of the best way to orchestrate this; any recommendations would be massively appreciated. My current thought is to use threads for the requests, have the responses written to files, and then run a sleep/wake/check-file-for-data loop with some timeout... but I'm hoping that better tooling/approaches exist :)
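One lighter-weight option than threads-plus-files is a small driver script that runs the long poll in a worker thread and keeps its response in a future, while the main thread sends the resolving message. Here is a sketch using only the Python standard library; the /poll and /publish endpoints and the port are hypothetical placeholders for your broker's API:

    # Sketch: orchestrate the long poll and the resolving message in one script.
    # BASE, /poll and /publish are hypothetical; substitute your broker's API.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    BASE = "http://localhost:8080"

    def long_poll():
        # Request 1: blocks until the broker has a message for us
        with urllib.request.urlopen(BASE + "/poll", timeout=30) as resp:
            return resp.read()

    def publish(message):
        # Request 2: should resolve the pending long poll
        req = urllib.request.Request(BASE + "/publish", data=message, method="POST")
        urllib.request.urlopen(req, timeout=5)

    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(long_poll)     # start the long poll first
        time.sleep(0.5)                      # give it time to connect
        assert not pending.done()            # it should still be waiting
        publish(b"hello")
        body = pending.result(timeout=10)    # resolves once the message lands
        assert body == b"hello"              # assumes the broker echoes the body

The future gives you the timeout and the "did it resolve yet" check for free, so there is no file watching or manual sleep/wake loop beyond the initial grace period.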
I'm building an Alexa skill that sends a request to my web server; the web server then does some processing and uploads a file to Amazon S3.
While the web server is processing, I make the skill fetch the file from Amazon S3 every 10 seconds until it gets it, and the response is based on the file's content.
But unfortunately, the web server's processing takes more than 1 minute, which means the skill must wait more than 1 minute for the file before it can respond.
For now, I use a progressive response with async/await in my code, and the skill does keep waiting for the file on S3.
But I found that a second request is sent to Lambda automatically after 50 seconds. That means that for the same skill, I get two Lambda executions running at the same time.
The result is: after the first response made by the progressive response, 50 seconds later I hear another response, also made by the progressive response, which belongs to the second request.
And nothing happens after that, right up to the end.
I know it is bad to make the skill wait this long, but I still want to figure out a workable approach in case the skill really does need to wait this long.
There are some points I want to figure out.
Is there any way to prevent the second request from being sent to Lambda?
Is there another way I can try to accomplish the goal?
Thanks
Eventually, I found that the second invocation of the Lambda is not from Alexa; it is from AWS Lambda itself. Refer to the following article:
https://cloudonaut.io/your-lambda-function-might-execute-twice-deal-with-it/
So you have to deal with this kind of situation in your Lambda code. One thing you can use is that both invocations carry the same request id, so you can tell whether this is the first execution by checking your storage for the request id you stored during the first execution.
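A sketch of that check in a Python handler, using DynamoDB as the storage (the processed_requests table name is hypothetical); context.aws_request_id is the id that stays the same across the duplicate invocation:

    # Sketch of request-id deduplication, assuming a DynamoDB table named
    # "processed_requests" (hypothetical) with "request_id" as its key.
    import boto3

    table = boto3.resource("dynamodb").Table("processed_requests")

    def handler(event, context):
        request_id = context.aws_request_id  # identical for the retried invocation

        # A conditional put fails if the id was already stored, i.e. this
        # is the second execution of the same request.
        try:
            table.put_item(
                Item={"request_id": request_id},
                ConditionExpression="attribute_not_exists(request_id)",
            )
        except table.meta.client.exceptions.ConditionalCheckFailedException:
            return  # duplicate invocation: do nothing

        # ... first execution: do the real work here ...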
Besides, I also found that once the Alexa skill waits for more than 1 minute, it crashes and reports the error by speaking it (tested on an Amazon Echo). And there is nothing different in the AWS Lambda log compared to a normal execution; the log looks fine, but the actual execution result is not.
Hope this can help someone else who is struggling with this problem.
I've been working on a Dialogflow chatbot that calls a webhook which can often take more than the 5s limit to process and answer the user's request. So, following this post, my webhook sends a response containing a followup event if the processing is taking too long, and it can then answer the follow-up request sent by the intent that the event triggers.
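(For reference, the long-processing branch of such a webhook can look roughly like this Python/Flask sketch; WAIT_EVENT and check_if_done are hypothetical names. Dialogflow ES triggers the named event when it sees followupEventInput in the webhook response.)

    # Rough sketch of the followup-event branch (Python/Flask).
    # WAIT_EVENT and check_if_done() are hypothetical names.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def check_if_done(payload):
        # Hypothetical helper: return the answer if the background work is
        # finished, or None to hand off to the followup event.
        ...

    @app.route("/webhook", methods=["POST"])
    def webhook():
        answer = check_if_done(request.get_json())
        if answer is None:
            # Still processing: trigger the followup event so the intent
            # attached to it calls the webhook again.
            return jsonify({
                "followupEventInput": {
                    "name": "WAIT_EVENT",
                    "languageCode": "en-US",
                }
            })
        return jsonify({"fulfillmentText": answer})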
Now, while this approach is working great, I have two questions:
Is there any way to send a message ("Please wait, I'm processing your request") to the user on every followup event?
Since I'm using the Dialogflow-Messenger integration, is there any way to display the three-dots "typing" animation while the webhook is processing the request?
Thanks!
When developing a chatbot, you should keep in mind that you are trying to duplicate how two humans interact. You are building a conversation, and in a conversation we should not keep the other person waiting. All your requests should complete within 4-5 seconds (to avoid a timeout by the platform) for a better UX.
So there is no way to show either a "Please wait" message or the animated three dots!
Write good backend code to fetch the response faster, or tweak and cache your responses. Dialogflow is currently designed for one-to-one conversation and cannot provide multiple delayed responses. If you need it to work that way, you will have to develop your own NLP engine.
I have a requirement to count Jetty transactions and measure the time it took to process a request and get back the response, using JMX for our monitoring system.
I am using Jetty 8.1.7 and I can't seem to find a proper way to do this. I basically need to identify when a request is sent (due to Jetty's async approach, this is triggered from thread A) and when the response is complete (as onResponseComplete is done on another thread).
I usually use a ThreadLocal for this kind of state in other areas where I need similar functionality, but obviously that won't work here.
Any ideas how to overcome this?
To use Jetty's async requests you basically have to subclass ContentExchange and override its methods. So you can add an extra field to it containing a timestamp of when the request was sent, and use it later in your onResponseComplete() method to measure the processing time. If you need to know when the request was actually sent to the server, rather than when it was created, you can override the onRequestCommitted() and onRequestComplete() methods.
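The Jetty API itself is Java, but the pattern is simply "keep the state on the exchange object instead of in a ThreadLocal". A language-neutral sketch of that idea in Python (the callback names mirror Jetty's ContentExchange; this is not the Jetty API):

    # Pattern sketch only -- not the Jetty API. The timestamp travels with
    # the exchange object, so it works even when the completion callback
    # runs on a different thread.
    import time

    class TimedExchange:
        def __init__(self):
            self.sent_at = None

        def on_request_committed(self):
            # fires when the request has actually been sent to the server
            self.sent_at = time.monotonic()

        def on_response_complete(self):
            # may run on another thread; no ThreadLocal needed
            elapsed = time.monotonic() - self.sent_at
            print(f"request/response took {elapsed:.3f}s")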