Dialogflow : display "processing" message on intents triggered by followup events - google-cloud-platform

I've been working on a Dialogflow chatbot that calls a webhook which can often take more than Dialogflow's 5-second timeout to process and answer the user's request. So, following this post, my webhook sends a response containing a followup event if the processing is too long, and the intent triggered by that event sends a new request that the webhook can then answer.
Now, while this approach is working great, I have two questions:
Is there any way to send a message ("Please wait, I'm processing your request") to the user on every followup event?
Since I'm using the Dialogflow-Messenger integration, is there any way to display the three-dot "typing" animation while the webhook is processing the request?
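For reference, the followup-event part of my webhook response looks roughly like this (Flask and the event name are just for illustration; followupEventInput is the standard Dialogflow ES webhook field):

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def still_processing(payload):
        # Hypothetical: checks whether the long-running job for this
        # conversation has produced an answer yet.
        return True

    @app.route("/webhook", methods=["POST"])
    def webhook():
        if still_processing(request.get_json()):
            # No answer yet: trigger the followup intent via its event
            # instead of answering directly. "KEEP_WAITING" is a made-up name.
            return jsonify({
                "followupEventInput": {
                    "name": "KEEP_WAITING",
                    "languageCode": "en-US",
                }
            })
        return jsonify({"fulfillmentText": "Here is your answer."})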
Thanks!

When developing a chatbot, you should keep in mind that you are trying to reproduce how two humans interact. You are designing a conversation, and in a conversation you should not keep the other person waiting. All your requests should complete within 4-5 seconds (to avoid a timeout by the platform) for a good UX.
So there is no way to show either a "Please wait" message or the animated three-dot typing indicator!
Write good backend code that fetches the response faster, or tweak and cache your responses. Dialogflow is currently designed for one-to-one conversation and cannot provide multiple delayed responses. If you need it that way, you will have to develop your own NLP engine.

Related

How to configure Dialogflow CX agent to receive multiple messages before replying to user

I've built a custom integration for Dialogflow CX which allows the user to send multiple messages to the Agent. However, the Agent doesn't understand what the user is trying to say when it receives more than one message.
How can I configure my Agent to wait a predetermined amount of time (allowing the user to send as many messages as they want) before trying to reply, so that the Agent can make sense of all the text that was sent?
Unfortunately, that's not how Dialogflow CX agents work. You can't wait a predetermined amount of time before answering: the agent provides a fulfillment (if one is specified) for each prompt from the user.
The only way to achieve what you're asking is to develop an integration on top of the API they provide. You could build an integration that waits a set interval, joins all the text received in that window, and sends it to the API in a single detectIntent request. This way the user can input multiple texts and a response is only returned when needed, but you'd have to handle the buffering logic yourself (which probably means basic-to-intermediate knowledge of a programming language). A rough sketch of the idea is below.
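A minimal sketch using the official Python client (google-cloud-dialogflow-cx). The buffering window itself (collecting messages until the user goes quiet) would live in your integration; all IDs here are placeholders:

    from google.cloud import dialogflowcx_v3 as cx

    def detect_intent_joined(messages, project, location, agent, session_id):
        # Join everything the user sent during the buffering window and
        # submit it as a single detectIntent request.
        client = cx.SessionsClient()  # regional agents also need an api_endpoint
        session = client.session_path(project, location, agent, session_id)
        request = cx.DetectIntentRequest(
            session=session,
            query_input=cx.QueryInput(
                text=cx.TextInput(text=" ".join(messages)),
                language_code="en",
            ),
        )
        response = client.detect_intent(request=request)
        return response.query_result.response_messages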

Integration test two http requests that depend upon each other in C/C++

I have an async (epoll-based) HTTP server, written in a mix of C and C++, that serves as a message broker and runs on Linux/macOS. This is the scenario that I am manually testing with curl in multiple shell windows and that I want to automate.
Request 1: Long poll asking for a message. There are none, so this request waits until a message arrives.
Request 2: Puts in a message that resolves request 1.
I'm unsure of the best way to orchestrate this. Any recommendations would be massively appreciated. My current thought is to use threads for the requests and have the responses written to files, then a sleep/wake/check-the-file loop with some timeout... but I'm hoping that better tooling/approaches exist :) Roughly what I have in mind is sketched below.
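Something like this, though in Python rather than shell, and with in-memory results instead of files (the port and paths are made up):

    import threading
    import urllib.request

    BASE = "http://localhost:8080"  # made-up port and paths

    def long_poll(result):
        # Request 1: blocks until the broker has a message for us.
        with urllib.request.urlopen(BASE + "/poll", timeout=30) as resp:
            result["body"] = resp.read()

    def test_long_poll_resolved_by_publish():
        result = {}
        poller = threading.Thread(target=long_poll, args=(result,))
        poller.start()
        # Request 2: publish the message that should resolve request 1.
        # (May need a short wait or a server-side readiness check first,
        # so the publish doesn't race ahead of the long poll.)
        req = urllib.request.Request(BASE + "/publish", data=b"hello", method="POST")
        urllib.request.urlopen(req, timeout=5).read()
        poller.join(timeout=30)
        assert not poller.is_alive(), "long poll never resolved"
        assert result["body"] == b"hello"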

How to update progress bar while making a Django Rest api request?

My Django REST app accepts a request to scrape multiple pages for prices and compare them (which takes time, ~5 seconds), then returns a list of the prices from each page as a JSON object.
I want to update the user on the current operation; for example, if I scrape 3 pages I want to update the interface like this:
Searching 1/3
Searching 2/3
Searching 3/3
How can I do this?
I am using Angular 2 for my front end but this shouldn't make a big difference as it's a backend issue.
This isn't the only way, but this is how I do this in Django.
Things you'll need
Asynchronous worker processes
These allow you to do work outside the context of the request-response cycle. The most common options are django-rq and Celery. I'd recommend django-rq for its simplicity, especially if all you're implementing is a progress indicator.
Caching layer (optional)
While you could use the database for persistence here, a temporary key-value cache makes more sense, as the progress information is ephemeral. A Memcached backend is built into Django, but I'd recommend switching to Redis: it's more fully featured, very fast, and, since it sits behind Django's caching abstraction, it doesn't add complexity. (It's also a requirement for the django-rq worker processes above.) A minimal settings sketch follows.
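A minimal settings sketch, assuming Django 4.0+ (which ships a Redis cache backend; on older versions use the django-redis package) and a Redis instance on localhost:

    # settings.py
    CACHES = {
        "default": {
            "BACKEND": "django.core.cache.backends.redis.RedisCache",
            "LOCATION": "redis://127.0.0.1:6379/0",
        }
    }

    # django-rq workers use the same Redis instance.
    RQ_QUEUES = {
        "default": {"HOST": "127.0.0.1", "PORT": 6379, "DB": 0},
    }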
Implementation
Overview
Basically, we're going to send a request to the server to start the async worker, then poll a separate progress-indicator endpoint that reports the worker's current progress until it has finished (or failed).
Server side
Refactor the function whose progress you'd like to track into an async task function (using the @job decorator in the case of django-rq).
The initial POST endpoint should first generate a random unique ID to identify the request (possibly with uuid). Then, pass the POST data along with this unique ID to the async function (in django-rq this would look something like function_name.delay(payload, unique_id)). Since this is an async call, the interpreter does not wait for the task to finish and moves on immediately. Return an HttpResponse with a JSON payload that includes the unique ID.
Back in the async function, we need to record the progress using the cache. At the very top of the function, add a cache.set(unique_id, 0) to show that there is zero progress so far. Then, using your own math, move this value closer to 1 as the operation approaches 100% completion. If the operation fails for some reason, you can set it to -1.
Create a new endpoint to be polled by the browser to check the progress. This looks for a unique_id query parameter and uses it to look up the progress with cache.get(unique_id). Return a JSON object with the progress amount. A consolidated sketch of these server-side steps is below.
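A consolidated sketch of the above (tasks.py / views.py in one place). fetch_price() and the payload shape are made up; the rest is plain django-rq and Django:

    import json
    import uuid

    from django.core.cache import cache
    from django.http import JsonResponse
    from django_rq import job

    def fetch_price(page):
        # Hypothetical stand-in for the real scraping/comparison work.
        ...

    @job  # runs in a django-rq worker, outside the request-response cycle
    def scrape_prices(pages, unique_id):
        cache.set(unique_id, 0)                       # zero progress so far
        try:
            for i, page in enumerate(pages, start=1):
                fetch_price(page)
                cache.set(unique_id, i / len(pages))  # fraction complete, 0..1
        except Exception:
            cache.set(unique_id, -1)                  # signal failure
            raise

    def start_scrape(request):
        unique_id = uuid.uuid4().hex                  # identifies this request
        pages = json.loads(request.body)["pages"]
        scrape_prices.delay(pages, unique_id)         # returns immediately
        return JsonResponse({"unique_id": unique_id})

    def progress(request):
        return JsonResponse({"progress": cache.get(request.GET["unique_id"])})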
Client side
After sending the POST request for the action and receiving a response, that response should include the unique_id. Immediately start polling the progress endpoint at a regular interval, setting the unique_id as a query parameter. The interval could be something like 1 second using setInterval(), with logic to prevent sending a new request if there is still a pending request.
When the progress received equals 1 (or -1 for failures), you know the process is finished and you can stop polling. (A sketch of the polling protocol is below.)
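Browser code would use setInterval() and an XHR/fetch call as described; this sketch just shows the same polling protocol in Python (e.g. as a test client), since the endpoint names here match the server sketch above:

    import json
    import time
    import urllib.request

    def poll_progress(base_url, unique_id, interval=1.0):
        while True:
            url = f"{base_url}/progress?unique_id={unique_id}"
            with urllib.request.urlopen(url) as resp:
                progress = json.load(resp)["progress"]
            if progress in (1, -1):   # finished, or failed
                return progress
            time.sleep(interval)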
That's it! It's a bit of work just to get progress indicators, but once you've built it, the pattern is easy to reuse in other projects.
Another way to do this, which I have not explored, is via WebSockets / Django Channels. That way, polling is not required: the server simply pushes the messages to the client directly.

Automate Suspended orchestrations to be resumed automatically

We have a BizTalk application which sends XML files to external applications by using a web-service.
BizTalk calls the web-services method by passing XML file and destination application URL as parameters.
If the external applications are not able to receive the XML, or if no response from the web-service makes it back to BizTalk, the message gets suspended in BizTalk.
Presently, in this situation, we manually go to the BizTalk admin console and resume each suspended message.
Our clients want this process to be fully automated: they want a dashboard that shows a list of message details and a button that, when clicked, resumes all the suspended messages.
If you are doing this within an orchestration and catching the connection error, just add a delay shape configured to 5 hours. Or set a retry interval to 300 minutes and multiple retries on the send port if that makes sense. You can do this using the rule engine as well.
Why not implement an asynchronous pattern?
You make it so that the orchestration sends the file out via a send shape while initializing a certain correlation set.
You then add a listen shape with two branches:
- a receive shape (following the initialized correlation set)
- a delay shape set to 5 hours
When you receive the message, your orchestration can handle it gracefully.
When you don't, the delay shape will kick in and you handle accordingly.
The benefit of this solution compared to 40Alpha's is that your orchestration will only 'wake up' from a dehydrated state when the timeout kicks in OR when the response is received. In 40Alpha's example, the orchestration would wake up many times, consuming extra resources.
You may want to look at a product like BizTalk360. It has that sort of monitoring and command capability built in. I'm not sure it works with BizTalk 2006 R2, though; either way, you should be thinking about moving off that platform, as it is going out of Microsoft support.

Sustain an http connection while django processes a big request (20mins+)

I've got a Django site that produces a CSV download. The content of the CSV is dictated by user-defined parameters. It's possible that users will set parameters that require significant thinking time on the server. I need a way of sustaining the HTTP connection so the browser doesn't throw up an error message. I heard that it's possible to send intermittent HTTP headers to do this. Can anyone point me in the right direction to set this up on a Django site?
(Unfortunately I'm stuck with the possibility of slow reports; improving my SQL won't mitigate this.)
Don't do it online. Trigger an offline task, use a bit of JavaScript to repeatedly call a view that checks whether the task has finished, and redirect to the finished file when it's ready. A rough sketch of such a view is below.
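A rough sketch of the poll-then-redirect idea. django-rq and the build_csv task are assumptions, not from the question; any task queue with job-status lookup works the same way:

    # views.py
    import django_rq
    from django.http import JsonResponse

    from .tasks import build_csv  # hypothetical @job that writes the CSV

    def start_report(request):
        job = build_csv.delay(dict(request.GET.items()))  # kick off offline task
        return JsonResponse({"job_id": job.id})

    def report_status(request):
        job = django_rq.get_queue().fetch_job(request.GET["job_id"])
        if job is None or job.is_failed:
            return JsonResponse({"status": "failed"}, status=500)
        if job.is_finished:
            # job.result is assumed to hold the URL of the generated file;
            # the JavaScript poller redirects the browser to it.
            return JsonResponse({"status": "done", "url": job.result})
        return JsonResponse({"status": "pending"})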
Instead of blocking the user and their browser for 20 minutes (which is not a good idea), do the time-consuming task in the background. When the task finishes and generates the result, simply notify the user so that they just need to download the ready result.