ROS voice recognition - C++

Short summary of what I tried to do and what it's actually doing.
In my project I have two simple nodes: one for listening and publishing, and another for speaking and subscribing. I named them listeningNode and speakingNode, respectively.
The task I wanted to achieve was pretty simple: the user says "remember me", which is recognized as a keyword and published to the voiceCommandCallback method in the speaking node so the robot can say "Okay, please say your name". Then, back at the listening node, on top of publishing that keyphrase, it also calls the method recognize_from_mic_with_dict(), which, as you can guess, runs using a dictionary of names.
This recognize_from_mic_with_dict() method listens for a name and attempts to publish it to namesCallback() in the listening node, and this is where I check whether what got published was an actual name or just gibberish, in which case I kindly ask the user to repeat his/her name and subscribe once again to recognize_from_mic_with_dict() so it can listen once more.
This sounds more complicated than it probably has to be, but it's the only way I could think of to achieve this "mode switching". The problem is that if it hears gibberish it will say "I'm sorry, I did not quite hear that. Please repeat that!", and I would like to know if there is a way to make the listening node ignore anything that the robot says, because at the moment it attempts to recognize names from the robot's own sentence.

You can cast this as a feedback-control (acoustic echo cancellation) problem.
Given that:
u is the user's speech signal,
r is the output signal of the robot's speaker (already known),
h is the feedback-path impulse response,
s = h * r is the feedback signal, where * denotes convolution,
y = s + u = h * r + u is the input signal of the microphone,
the problem is to estimate ĥ, and therefore ŝ = ĥ * r.
An estimate of the user's speech signal is then given by û = y - ŝ = y - ĥ * r.
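In practice ĥ is estimated adaptively; a normalized LMS (NLMS) filter is the textbook way to do this kind of acoustic echo cancellation. Below is a minimal, illustrative C++ sketch of that idea; it is not tied to ROS or any audio library, and the class name, tap count and step size mu are all invented for the example.

#include <cstddef>
#include <vector>

// Adaptive echo canceller: estimates the feedback path h_hat from the
// speaker reference r and subtracts the predicted echo from the mic signal y.
class EchoCanceller {
public:
    explicit EchoCanceller(std::size_t taps, double mu = 0.1)
        : h_hat_(taps, 0.0), r_hist_(taps, 0.0), mu_(mu) {}

    // r: sample currently played on the speaker, y: sample from the microphone.
    // Returns u_hat, the echo-cancelled estimate of the user's speech.
    double process(double r, double y) {
        // shift the reference history and store the newest speaker sample
        for (std::size_t i = r_hist_.size() - 1; i > 0; --i)
            r_hist_[i] = r_hist_[i - 1];
        r_hist_[0] = r;

        // s_hat = h_hat * r  (convolution over the stored history)
        double s_hat = 0.0;
        double energy = 1e-9;  // avoids division by zero in the update step
        for (std::size_t i = 0; i < h_hat_.size(); ++i) {
            s_hat += h_hat_[i] * r_hist_[i];
            energy += r_hist_[i] * r_hist_[i];
        }

        const double u_hat = y - s_hat;  // residual = estimated user speech

        // NLMS update: nudge h_hat so the residual echo shrinks over time
        for (std::size_t i = 0; i < h_hat_.size(); ++i)
            h_hat_[i] += (mu_ / energy) * u_hat * r_hist_[i];

        return u_hat;
    }

private:
    std::vector<double> h_hat_;   // estimate of the feedback path h
    std::vector<double> r_hist_;  // recent speaker samples
    double mu_;                   // adaptation step size
};

Feeding û instead of y into the recognizer is what would let the listening node ignore the robot's own speech.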


Monadic bind with Cmd

I have a function
convertMsg : Msg1 -> List Msg2
where Msg1 and Msg2 are certain message types. And I would like to turn this into a function:
convertCmd : Cmd Msg1 -> Cmd Msg2
which would, for every message in the batch, replace it with some number of messages, possibly none or more than one.
As a Haskell programmer at heart I immediately reach for a monadic bind ((>>=) in Haskell and andThen in the Elm parlance), a function with the type:
bind : (a -> Cmd b) -> Cmd a -> Cmd b
I can easily change my convertMsg to be the following:
convertMsg : Msg1 -> Cmd Msg2
At which point it would be just perfect for the bind.
But looking in Platform.Cmd, there isn't such a function I can find. There's a map which is similar, but convertMsg can't really be convertMsg : Msg1 -> Msg2 since it doesn't always give back exactly one message.
Is there a way to achieve this? Is there some limitation to the Cmd type that would prevent this sort of thing?
What you're trying to do to messages, how you might, and whether it's a good plan
I promise I'll try to answer what I think you're trying to do, but first I think there's a more important thing...
You're perhaps assuming that Cmd is analogous to IO from Haskell, but Cmd is asynchronous and isn't designed to chain actions. Your update is what glues consequences to outputs:
update : Msg -> Model -> (Model,Cmd Msg)
At the end of your update, you can issue a Cmd Msg to ask elm to do something externally, usually passing it a constructor with which it can wrap its output. This output comes back to your update function.
What you don't do is chain Cmds together as you would in a monad. There's no bind for Cmd, or to put it another way, the only bind for Cmd is your update function!
Now I suppose that if you wanted to catch a MyComplexMsg : Msg and turn it into [SimpleMsg1,SimpleMsg2], you could pattern match for it in your update function, leave the model unchanged and issue a new Cmd Msg, but what command would you be running the second time?
You could certainly take a pure Msg -> List Msg function and use Cmd.map to apply it, or apply it manually in the pattern match at the beginning of update, like
update msg model =
    case msg of
        MyComplexMsg ->
            myHandler [ SimpleMsg1, SimpleMsg2 ]

        ...
or even go full state monad style with
update msg model0 =
    case msg of
        MyComplexMsg ->
            let
                ( model1, cmd1 ) = update SimpleMsg1 model0
                ( model2, cmd2 ) = update SimpleMsg2 model1
            in
            ( model2, Cmd.batch [ cmd1, cmd2 ] )
to try to emulate monadic bind, but I don't know why you might ever want this, and a lot of advice in the elm literature and community is that if you're calling update from update you're probably doing it wrong. Make a separate single-purpose helper function for that stuff instead of re-running your entire program logic twice!
Let go of your need to have a monad
I suspect that what's going wrong is that you're not letting go of a monadic control flow mentality. update is where it's at. update is where you make things happen. User input and asynchronous messages are your drivers, not sequencing. Cmd is just for communicating externally. You don't plumb the results back in, the elm architecture does that for you. Just handle the result of your Cmd (which will arrive as a message) as a branch in your update and it'll all progress nicely, and if the user presses some button of their own choice without you making it happen, so be it. You can handle that too.
I worry that you're trying to write a monad transformer stack in elm, which is a bit like trying to write an object oriented programming library in haskell. Haskell doesn't do object oriented programming, and the sooner folks drop the OO thinking and let go of their need to bundle data and functions together, the sooner they're writing good haskell code. Elm doesn't do typeclasses, it does model/view/update, and does it extraordinarily well. Let go of your need to find and use a monad to control the flow of your program, and instead respond to what messages you're given. Make a Msg for something you want to happen, provide a way to trigger it appropriately in your view and then handle it in your update.
When should one message become three messages?
If one of your messages is really three messages, why isn't it three messages already? If it's just that, in response to that particular message, you need to do three things to your model and issue five commands, why not have one message and get update to do those three things to your model using pure code and issue the five commands in a batch?
If you need to log the successful login, then get the user's photo, then query the database for their recent activity, then display it all, then I disagree about the immediacy: those steps are all asynchronous. You can issue commands to do each of those things in a batch, and when the responses come back you will need to deal with each separately: update your model with the image when it arrives, and with the list of recent activity when it arrives. Once your model is in a state where the picture and the recent activity are both there you can change the view, but why not show each as soon as it's there?
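For example, here's a minimal sketch of that shape. Every name in it is invented for illustration; assume the usual import Http, and that fetchPhoto and fetchRecentActivity are HTTP commands you've defined elsewhere that take a result-wrapping constructor.

type alias User =
    { id : Int, name : String }

type alias Activity =
    { text : String }

type alias Model =
    { user : Maybe User
    , photo : Maybe String
    , activity : List Activity
    }

type Msg
    = UserLoggedIn User
    | GotPhoto (Result Http.Error String)
    | GotActivity (Result Http.Error (List Activity))

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        UserLoggedIn user ->
            -- one message: several pure model changes, several commands in one batch
            ( { model | user = Just user, photo = Nothing, activity = [] }
            , Cmd.batch
                [ fetchPhoto user GotPhoto
                , fetchRecentActivity user GotActivity
                ]
            )

        GotPhoto (Ok url) ->
            -- show the photo as soon as it arrives
            ( { model | photo = Just url }, Cmd.none )

        GotActivity (Ok items) ->
            -- show the recent activity as soon as it arrives
            ( { model | activity = items }, Cmd.none )

        _ ->
            -- error cases elided for brevity
            ( model, Cmd.none )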
Using monads sometimes trains us to think sequentially when we're doing effects programming even when we needn't, but now, finally, I'll address what to do when there is a compelling need to sequence commands.
Genuinely necessary sequential commands
Perhaps there really is something sequential that you need. Maybe you have to query some data store for something before you send some request elsewhere. You still don't use a bind, you just use your update:
update msg model =
    case msg of
        StartsMultiStageProcess userID ->
            ( { model | multiStageProcessStatus = RequestedData }
            , getDataPart1 userID PartOneReceived
            )

        PartOneReceived userData ->
            ( { model | multiStageProcessStatus = Fired }
            , fireRockets userData.nemesis.location RocketResult
            )

        RocketResult r ->
            if model.multiStageProcessStatus == Fired then
                case r of
                    Ok UtterlyDestroyed ->
                        ....

                    Ok DamagedBeyondUse ->
                        ....

                    Err disappointment ->
                        ....

        ...
A handy point is that if your model doesn't have the required data, it automatically won't show the missing data in the view (sum types for the win), and you can store whatever state your multistage process is in, in the model.
You might prefer to put all of those messages into a new type, so they end up indented one level further in the handler and won't ever get mixed up with other messages, like
MSPmsg msg ->
    case msg of
        Started userId ->
            ...

        GotPartOne userData ->
but much more likely use a helper function like MSPmsg msg -> updateMultiStageProcess.
Concluding advice
Maybe there's some great use case for delving into messages and commands and editing them that you haven't made explicit, but Cmd is opaque and all you can do is issue them and handle the resulting messages, so I'm sceptical but definitely interested.
Also in giving you update to write, it's almost like they've given you the app-specific bind to write (but it's not a functor and you absolutely do look at the data), so they've given you the keys to the Tesla. It takes a bit of getting used to but you're really going to like what happens at the traffic lights. Don't attempt to dismantle the door hinges until you've learned to drive it.
Edit: Your specific use case: inter-page communication
It turns out in chat that you're trying to get messages from one page to be usable in other pages or the overall update - sometimes one page needs to tell the app to change page and tell the new page to start an animation. I might have skipped all the advice above if I'd known that initially, but I think it's good advice for anyone coming from Haskell and I'm leaving it in!
Multiple messages
I still think it's important to accept that sometimes a single message needs to cover multiple actions, and you sort that out in your update function rather than try to create multiple messages in response to a single user action.
Lots of elm folks give the advice that rather than describing something to do, like AddProduct, your messages should describe something that happened in the past, partly because that's how messages come to you in your update (so your mental model of what the elm runtime is doing stays accurate), and partly because you're then less likely to want to make two messages and do weird message translations when you ought to make one message.
Do multiple things in your ClickedViewOffers branch of update rather than trying to make both a SwitchToOffersPage and an AnimatePickOfTheDay message.
I'd like to point out that your idea to convert your messages and filter them somehow within the messages type is doing it in the wrong place. Filtering so that your Home page doesn't get all the messages for your Login page is something you have to do in update anyway - don't try to filter them while you're making or passing the messages. update is where it's at for deciding what to do in response to user input. Messages are for describing user input.
OK, but how do you get messages to cross the barriers between pages?!
There are a few ways to achieve this, and it might be worth looking into different ways of making a Single Page Application (SPA) in Elm. I found this article by Rogério Chaves on Medium quite enlightening on the topic of various ways of organising messages from child page to parent app. He's done the TodoMVC app all the different ways in this repo. A Stack Overflow post is better if it inlines ideas, so here we go:
Common Msg type across all pages
This can work by having a separate module for your message types which all your modules import. Messages look like ProductsMsg (UserCreatedNewProduct productRecord), as they might well do anyway, but because all the message types are global you can call another page's methods.
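A minimal sketch of that layout (the module name, constructors and the ProductRecord alias are all invented for illustration):

module Messages exposing (LoginMsg(..), Msg(..), ProductRecord, ProductsMsg(..))

type alias ProductRecord =
    { name : String }

type Msg
    = ProductsMsg ProductsMsg
    | LoginMsg LoginMsg

type ProductsMsg
    = UserCreatedNewProduct ProductRecord

type LoginMsg
    = UserSubmittedCredentials String String

Every page imports Messages, so a Products view can produce ProductsMsg (UserCreatedNewProduct record) and the top-level update can hand it to whichever page update it likes.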
Individual pages also return an OutMsg from their update function
Use better names than these (eg Login.Msg rather than LoginMsg), but...
loginPageUpdate : LoginMsg -> LoginModel -> ( LoginModel, Cmd LoginMsg, OutMsgFromLogin )

update : GlobalMsg -> GlobalModel -> ( GlobalModel, Cmd GlobalMsg )
update msg model =
    case msg of
        LoginMsg loginMsg ->
            let
                ( newLoginModel, cmd, outMsgFromLogin ) =
                    loginPageUpdate loginMsg model.loginModel
            in
            ...
(You'd need a NoOp : OutMsgFromLogin constructor, or use Maybe OutMsgFromLogin there. I'm not a fan of NoOp. It's terribly tempting to use it for unimplemented features, and it's the king of all divorced-from-user-intentions messages: it doesn't explain why you ought to do nothing, or how you came to write something that generates a purposeless message. I think it's a code smell and a sign that there's a better way of writing the thing.)
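For the Maybe OutMsgFromLogin variant, the wiring back into the parent update might look roughly like this (a sketch: LoginSucceeded, LoginFailed, currentUser, page and HomePage are invented names):

LoginMsg loginMsg ->
    let
        ( newLoginModel, cmd, maybeOut ) =
            loginPageUpdate loginMsg model.loginModel

        withLogin =
            { model | loginModel = newLoginModel }

        afterOut =
            case maybeOut of
                Just (LoginSucceeded user) ->
                    { withLogin | currentUser = Just user, page = HomePage }

                Just LoginFailed ->
                    withLogin

                Nothing ->
                    withLogin
    in
    ( afterOut, Cmd.map LoginMsg cmd )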
Have a record of messages that you later use to translate your page's Msg values into global messages.
(Again, use better domain-specific names, I'm trying to convey usage in my names.)
type alias LoginMessagesRecord globalMsg =
    { internalLoginMsgTag : LoginMsg -> globalMsg
    , loginSucceeded : User -> globalMsg
    , loginFailed : globalMsg
    , newUserSuccessfullyRegistered : User -> globalMsg
    }
and in your main, you would specify these:
loginMessages : LoginMessagesRecord GlobalMsg
loginMessages =
    { internalLoginMsgTag = LocalLoginMsg
    , loginSucceeded = LoginSucceeded
    , loginFailed = LoginFailed
    , newUserSuccessfullyRegistered = NewUserSuccessfullyRegistered
    }
You can either parameterise functions in your Login code with those so they all consume a LoginMessagesRecord and produce a msg, or you can use a genuinely local message type and write a translation helper in your Login module:
type HereOrThere here there = Here here | There there
type LocalLoginMessage = EditedUserName String | EditedPassword String | ....
type MessageForElsewhere = LoggedIn User | DidNotLogIn | MadeNewAccount User
type alias LoginMsg = HereOrThere LocalLoginMessage MessageForElsewhere
loginMsgTranslator : LoginMessagesRecord msg -> LoginMsg -> msg
loginMsgTranslator
    { internalLoginMsgTag
    , loginSucceeded
    , loginFailed
    , newUserSuccessfullyRegistered
    }
    loginMsg =
    case loginMsg of
        Here msg ->
            internalLoginMsgTag msg

        There msg ->
            case msg of
                LoggedIn user ->
                    loginSucceeded user

                DidNotLogIn ->
                    loginFailed

                MadeNewAccount user ->
                    newUserSuccessfullyRegistered user
and then you can use Html.map (loginMsgTranslator loginMessages) loginView in your global view, or Element.map (loginMsgTranslator loginMessages) loginView if you're using the utterly brilliant html&css-free way to write elm apps, elm-ui.
Summary / takeaway
Have a single message describing a user intention and use update to handle all the consequences.
Don't edit the messages; respond appropriately in the update.
The user is in control. The runtime is in control. You're not in control. Don't generate messages yourself, just respond to them. If you're generating messages rather than the user or the runtime, you're using elm in a weird way that'll be hard.
Your program logic largely resides in update. It doesn't reside in message. Don't try to make things happen in message, just describe what the user did or what the system did in the message.
Use case statements and descriptive tags in message types to help choose which update helper function to run. It can often help to use union types to describe how local a message is. Sometimes you use a local updating function, sometimes a global one.
You might want to also read this reddit thread about scaling elm apps that Rogério Chaves references.

Understanding the point of supply blocks (on-demand supplies)

I'm having trouble getting my head around the purpose of supply {…} blocks/the on-demand supplies that they create.
Live supplies (that is, the types that come from a Supplier and get new values whenever that Supplier emits a value) make sense to me – they're a version of asynchronous streams that I can use to broadcast a message from one or more senders to one or more receivers. It's easy to see use cases for responding to a live stream of messages: I might want to take an action every time I get a UI event from a GUI interface, or every time a chat application broadcasts that it has received a new message.
But on-demand supplies don't make a similar amount of sense. The docs say that
An on-demand broadcast is like Netflix: everyone who starts streaming a movie (taps a supply), always starts it from the beginning (gets all the values), regardless of how many people are watching it right now.
Ok, fair enough. But why/when would I want those semantics?
The examples also leave me scratching my head a bit. The Concurrency page currently provides three examples of a supply block, but two of them just emit the values from a for loop. The third is a bit more detailed:
my $bread-supplier = Supplier.new;
my $vegetable-supplier = Supplier.new;
my $supply = supply {
    whenever $bread-supplier.Supply {
        emit("We've got bread: " ~ $_);
    };
    whenever $vegetable-supplier.Supply {
        emit("We've got a vegetable: " ~ $_);
    };
}
$supply.tap( -> $v { say "$v" });
$vegetable-supplier.emit("Radish"); # OUTPUT: «We've got a vegetable: Radish␤»
$bread-supplier.emit("Thick sliced"); # OUTPUT: «We've got bread: Thick sliced␤»
$vegetable-supplier.emit("Lettuce"); # OUTPUT: «We've got a vegetable: Lettuce␤»
There, the supply block is doing something. Specifically, it's reacting to the input of two different (live) Suppliers and then merging them into a single Supply. That does seem fairly useful.
… except that if I want to transform the output of two Suppliers and merge their output into a single combined stream, I can just use
my $supply = Supply.merge:
    $bread-supplier.Supply.map({ "We've got bread: $_" }),
    $vegetable-supplier.Supply.map({ "We've got a vegetable: $_" });
And, indeed, if I replace the supply block in that example with the map/merge above, I get exactly the same output. Further, neither the supply block version nor the map/merge version produce any output if the tap is moved below the calls to .emit, which shows that the "on-demand" aspect of supply blocks doesn't really come into play here.
At a more general level, I don't believe the Raku (or Cro) docs provide any examples of a supply block that isn't either in some way transforming the output of a live Supply or emitting values based on a for loop or Supply.interval. None of those seem like especially compelling use cases, other than as a different way to transform Supplys.
Given all of the above, I'm tempted to mostly write off the supply block as a construct that isn't all that useful, other than as a possible alternate syntax for certain Supply combinators. However, I have it on fairly good authority that
while Supplier is often reached for, many times one would be better off writing a supply block that emits the values.
Given that, I'm willing to hazard a pretty confident guess that I'm missing something about supply blocks. I'd appreciate any insight into what that might be.
Given you mentioned Supply.merge, let's start with that. Imagine it wasn't in the Raku standard library, and we had to implement it. What would we have to take care of in order to reach a correct implementation? At least:
Produce a Supply result that, when tapped, will...
Tap (that is, subscribe to) all of the input supplies.
When one of the input supplies emits a value, emit it to our tapper...
...but make sure we follow the serial supply rule, which is that we only emit one message at a time; it's possible that two of our input supplies will emit values at the same time from different threads, so this isn't an automatic property.
When all of our supplies have sent their done event, send the done event also.
If any of the input supplies we tapped sends a quit event, relay it, and also close the taps of all of the other input supplies.
Make very sure we don't have any odd races that will lead to breaking the supply grammar emit* [done|quit].
When a tap on the resulting Supply we produce is closed, be sure to close the tap on all (still active) input supplies we tapped.
Good luck!
So how does the standard library do it? Like this:
method merge(*@s) {
    @s.unshift(self) if self.DEFINITE;  # add if instance method
    # [I elided optimizations for when there are 0 or 1 things to merge]
    supply {
        for @s {
            whenever $_ -> \value { emit(value) }
        }
    }
}
The point of supply blocks is to greatly ease correctly implementing reusable operations over one or more Supplys. The key risks it aims to remove are:
Not correctly handling concurrently arriving messages in the case that we have tapped more than one Supply, potentially leading us to corrupt state (since many supply combinators we might wish to write will have state too; merge is so simple as not to). A supply block promises us that we'll only be processing one message at a time, removing that danger.
Losing track of subscriptions, and thus leaking resources, which will become a problem in any longer-running program.
The second is easy to overlook, especially when working in a garbage-collected language like Raku. Indeed, if I start iterating some Seq and then stop doing so before reaching the end of it, the iterator becomes unreachable and the GC eats it in a while. If I'm iterating over lines of a file and there's an implicit file handle there, I risk the file not being closed in a very timely way and might run out of handles if I'm unlucky, but at least there's some path to it getting closed and the resources released.
Not so with reactive programming: the references point from producer to consumer, so if a consumer "stops caring" but hasn't closed the tap, then the producer will retain its reference to the consumer (thus causing a memory leak) and keep sending it messages (thus doing throwaway work). This can eventually bring down an application. The Cro chat example that was linked illustrates this:
my $chat = Supplier.new;
get -> 'chat' {
    web-socket -> $incoming {
        supply {
            whenever $incoming -> $message {
                $chat.emit(await $message.body-text);
            }
            whenever $chat -> $text {
                emit $text;
            }
        }
    }
}
What happens when a WebSocket client disconnects? The tap on the Supply we returned using the supply block is closed, causing an implicit close of the taps of the incoming WebSocket messages and also of $chat. Without this, the subscriber list of the $chat Supplier would grow without bound, and in turn keep alive an object graph of some size for each previous connection too.
Thus, even in this case where a live Supply is very directly involved, we'll often have subscriptions to it that come and go over time. On-demand supplies are primarily about resource acquisition and release; sometimes, that resource will be a subscription to a live Supply.
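To make the resource angle concrete, here's a small hypothetical sketch (the file name and the one-second interval are made up): each tap acquires its own file handle, and the CLOSE phaser releases it when that particular tap goes away.

my $ticker = supply {
    my $log = open 'ticks.log', :a;    # acquired when the supply gets tapped
    CLOSE { $log.close }               # released when that tap is closed
    whenever Supply.interval(1) -> $n {
        $log.say("tick $n");
        emit $n;
    }
}

my $tap = $ticker.tap(-> $n { say "got $n" });
sleep 3.5;
$tap.close;    # closes the interval subscription and, via CLOSE, the file handle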
A fair question is whether we could have written the chat example above without a supply block. And yes, we can; this probably works:
my $chat = Supplier.new;
get -> 'chat' {
    web-socket -> $incoming {
        my $emit-and-discard = $incoming.map(-> $message {
            $chat.emit(await $message.body-text);
            Supply.from-list()
        }).flat;
        Supply.merge($chat, $emit-and-discard)
    }
}
Note that it takes some effort in Supply-space to map into nothing. I personally find that less readable - and it didn't even avoid a supply block, it's just hidden inside the implementation of merge. Trickier still are cases where the number of supplies that are tapped changes over time, such as recursive file watching, where new directories to watch may appear. I don't really know how I'd express that in terms of the combinators that appear in the standard library.
I spent some time teaching reactive programming (not with Raku, but with .Net). Things were easy with one asynchronous stream, but got more difficult when we started getting to cases with multiple of them. Some things fit naturally into combinators like "merge" or "zip" or "combine latest". Others can be bashed into those kinds of shapes with enough creativity - but it often felt contorted to me rather than expressive. And what happens when the problem can't be expressed in the combinators? In Raku terms, one creates output Suppliers, taps input supplies, writes logic that emits things from the inputs into the outputs, and so forth. Subscription management, error propagation, completion propagation, and concurrency control have to be taken care of each time - and it's oh so easy to mess it up.
Of course, the existence of supply blocks doesn't stop one from taking the fragile path in Raku too. This is what I meant when I said:
while Supplier is often reached for, many times one would be better off writing a supply block that emits the values
I wasn't thinking here about the publish/subscribe case, where we really do want to broadcast values and are at the entrypoint to a reactive chain. I was thinking about the cases where we tap one or more Supply, take the values, do something, and then emit things into another Supplier. Here is an example where I migrated such code towards a supply block; here is another example that came a little later on in the same codebase. Hopefully these examples clear up what I had in mind.

Responding from Dialogflow with a DTMF response (play a number)

I am trying to do something that I don't think is a common scenario using Google's Dialogflow API. I am writing an IVR that services inmates in prison.
Dialogflow appears to assume that the mic is always on when receiving an incoming call. But when a call comes from a person housed in prison, they are using one of the prison call systems, which 'speaks' a pre-recorded message asking the receiver to press an 'accept', 'reject', or 'block' digit before the mic on the caller's phone is enabled and speech from the caller can occur.
I have set up the parameters for the 'Default Welcome Intent' with a few examples of these pre-recorded messages that are consistent with what the prison phone system will play for the receiver. It looks something like this:
"Hello, you have received a free call from Bob Jones, an inmate at Massachusetts Department of Corrections. You will not be charged for this free call. To accept this free call, press 1. To reject this free call, press 2. To permanently block this number from any future calls, press 3."
What I want the Default Welcome Intent to do is listen to this message, capture the accept digit to press and then 'press' it so that the caller's mic is enabled and then a true dialogue can be presented (main menu for the IVR, response capture etc).
I think that I would deliver this DTMF tone back through a 'custom payload', but the scenario of playing a tone doesn't seem to be an expected/available response.
The payload defines the result to be delivered as a JSON string, and it doesn't much like what I'm defining.
{ "dtmf": {$param.accept-digit}} (syntax error message when this json is defined as the payload)
Does anyone familiar with Dialogflow know how I might do this?
I'm not sure if this is possible with Dialogflow, but you can write a simple app for that using Dasha.
Sample DSL (DashaScript) code:
start node root {
    do {
        #connectSafe($phone); // accept incoming call
    }
    transitions {
        accept: goto accept on #messageHasIntent(["press_one_to_accept"]); // listen to the message and use conversational AI to understand that it says "To accept this free call, press 1"
    }
}
node accept {
    do {
        #sendDTMF("1"); // make selection by sending DTMF code
    }
}
Then you can design the rest of your conversation flow also using Dasha.
If you need any help, feel free to join our dev community or drop me a line at vlad@dasha.ai.

(OMNeT++) Where do packets go?

I'm trying to do a project described also here: PacketQueue is 0
I've modified the UdpBasicApp.cc to suit my needs, and in the handleMessage() function I added a piece of code to retrieve the length of the ethernet queue of a router. However the value returned is always 0.
My omnetpp.ini configuration regarding the queue of routers is this:
**.router*.eth[*].mac.queue.typename = "DropTailQueue"
**.router*.eth[*].mac.queue.packetCapacity = 51
The code added in the UdpBasicApp.cc file is this:
cModule *mod = getModuleByPath("router3.eth[*].mac.queue.");
queueing::PacketQueue *queue = check_and_cast<queueing::PacketQueue*>(mod);
int c = queue->getNumPackets();
So my question is this: is this the right way to create a queue in a router linked to other nodes with an ethernet link?
My doubt is that maybe packets don't pass through the specified interface, i.e. I've set the ini parameters for the wrong queue.
You are not creating that queue. It was already instantiated by the OMNeT++ kernel. You are just getting a reference to an already existing module with the getModuleByPath() call.
The router3.eth[*].mac.queue. module path in that call is rather suspicious. It is hard-coded in every instance of your application to get the queue length from router3, even if the app is installed in router1; i.e. you are trying to look at the queue length in a completely different node. Then, the eth[*] part is wrong: as a router obviously contains more than one ethernet interface (otherwise it would not be a router), you must not use patterns in a module path; an exact index such as eth[0] must be given, so you have to decide which ethernet interface you are actually interested in and specify that index. Finally, the trailing . is also invalid, so I believe your code never executes; otherwise the check_and_cast part would have raised an error already.
If you wanted to reach the first ethernet interface from a UDP app in the same node, you would use a relative path, something like this: ^.eth[0].mac.queue
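Putting those points together, the lookup in UdpBasicApp.cc would look roughly like this (assuming eth[0] really is the interface your traffic goes through):

// relative path from the app's own node, exact interface index, no trailing dot
cModule *mod = getModuleByPath("^.eth[0].mac.queue");
auto *queue = check_and_cast<queueing::PacketQueue *>(mod);
EV_INFO << "queue length: " << queue->getNumPackets() << "\n";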
Finally, if you are unsure whether your model works correctly, why not start the model with Qtenv and check whether the given module receives any packets? That is, drill down in the model until the given queue is opened as a simple module (i.e. you see the empty inside of the queue module) and then use run / 'fast run until next event in this module'. If the simulation does not stop, then that module indeed did not receive any packets and your configuration is wrong.

QiChat language syntax _* doesn't work, how to fix?

I want Pepper robot to understand any human input in the chat.
I know that the correct QiChat syntax is '*' and that it requires Internet access (the robot is connected via Wi-Fi).
This is my topic file, where I tell the robot my name; it should say it back and assign a qiChat variable to my name.
u:(My name {is} _*)
Nice to see you, $1 $name=$1
This is how I define the chat.
conversationalContents = Arrays.asList(
new NavigationControlConversationalContent(), new GestureControlConversationalContent(), new VolumeControlConversationalContent(),
new DateTimeConversationalContent(), new GreetingsConversationalContent(), new FarewellConversationalContent(),
new RepeatConversationalContent()
);
topic = TopicBuilder.with(qiContext).withResource(R.raw.talks).build(); // build topic
chatbot = QiChatbotBuilder.with(qiContext).withTopic(topic).build(); // build chatbot
chat = ConversationalContentChatBuilder.with(qiContext).withChatbot(chatbot).withConversationalContents(conversationalContents).build(); // build chat
chat.async().run();
And I do have this in the manifest
<uses-permission android:name="android.permission.INTERNET" />
When I tell the robot my name, the action bar (where the robot writes what it understands) shows "My name <...>". So it doesn't understand, and thus won't answer or assign the $name variable, which it should.
You'll probably want to contact Softbank Customer care and give them your robot serial number, because this feature requires a special licence that they need to activate (if your contract allows that of course!).
Jonas