Follow-up questions in AWS Lex

I'm trying to create a chatbot using Amazon Lex to display results from a database. The designed conversational flow is to show 10 results at first, and then provide an option for the user to "See more results?", which would be a Yes/No question. This would give an additional 10 results from the database.
I have searched through the documentation and online forums for a way to add this follow-up Yes/No question, but have been unsuccessful.
I'm relatively new to Lex and am unable to model this conversational flow.
Can someone explain this/direct me to the right documentation?
Any help/links are highly appreciated.

You can create your own Yes/No custom slot type in the Lex Console.
I've built one as an example:
I named the slot type affirmation.
Then I restricted the values to Yes and No, each with a list of synonyms that resolves to one of those two values.
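(If you'd rather script the slot type than click through the console, a rough boto3 sketch of the same definition might look like the following; the synonym lists are illustrative, not my full lists.)

```python
import boto3

# Rough sketch: defining the 'affirmation' slot type through the Lex Model
# Building Service instead of the console. Synonym lists are illustrative only.
lex_models = boto3.client("lex-models")

lex_models.put_slot_type(
    name="affirmation",
    description="Yes/No answers with natural-language synonyms",
    valueSelectionStrategy="TOP_RESOLUTION",
    enumerationValues=[
        {"value": "Yes", "synonyms": ["yeah", "yep", "sure", "ok", "definitely"]},
        {"value": "No", "synonyms": ["nope", "nah", "negative", "not right now"]},
    ],
)
```

TOP_RESOLUTION is what makes Lex hand your Lambda the resolved "Yes"/"No" value rather than the raw synonym the user typed.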
This allows a user to respond naturally in many different ways and the bot will respond appropriately. All you have to do is build your Lambda handling of any slot that uses this slot type to look for either "Yes" or "No".
You can also easily monitor this slot to log any input that was not on your synonym list, in order to expand your list and improve your bot's recognition of affirmations and negations.
I even built a parser for this slot in Lambda to correctly recognize emojis (thumbs up/down, smiley face, sad face, etc.) as positive or negative answers to this type of question in my bot.
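That parser is nothing fancy; here is a stripped-down sketch of the idea in Python (the synonym and emoji sets are placeholders, not my actual lists):

```python
# Stripped-down sketch: normalize raw user input (including emojis) to "Yes"/"No".
# The synonym and emoji sets are placeholders; grow them from your logs.
POSITIVE = {"yes", "yeah", "yep", "sure", "ok", "okay", "👍", "🙂", "😀"}
NEGATIVE = {"no", "nope", "nah", "negative", "👎", "🙁", "😞"}

def parse_affirmation(raw_input):
    """Return 'Yes', 'No', or None if the input can't be classified."""
    text = (raw_input or "").strip().lower()
    if text in POSITIVE:
        return "Yes"
    if text in NEGATIVE:
        return "No"
    # Fall back to scanning for single-character entries (the emojis)
    # embedded anywhere in the message.
    if any(ch in POSITIVE for ch in text):
        return "Yes"
    if any(ch in NEGATIVE for ch in text):
        return "No"
    return None  # log these misses to expand your synonym list
```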
It might be surprising that Lex doesn't have this built-in like Alexa, but it's not hard to build and you can customize it easily which you cannot do with built-in slot types.
Anyway, after making this SlotType, you can create multiple slots that use it in one intent.
Let's say you create a slot called 'moreResults' and another called 'resultsFeedback'. Both would be set to use this 'affirmation' slot type to detect Yes/No responses.
Then when you ElicitSlot either of these slots in the conversation, you can phrase the question specifically for each slot, and in your Lambda you can check on the next response whether the slot was filled with 'Yes' or 'No'.
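As a rough illustration of the Lambda side (Lex V1 event/response shapes; the intent name, messages and result paging are made up), eliciting 'moreResults' and reading it back could look something like this:

```python
def lambda_handler(event, context):
    intent = event["currentIntent"]
    slots = intent["slots"]

    if slots.get("moreResults") is None:
        # Ask the Yes/No question by eliciting the 'moreResults' slot.
        return {
            "sessionAttributes": event.get("sessionAttributes", {}),
            "dialogAction": {
                "type": "ElicitSlot",
                "intentName": intent["name"],
                "slots": slots,
                "slotToElicit": "moreResults",
                "message": {"contentType": "PlainText",
                            "content": "See more results?"},
            },
        }

    # On the next turn the slot is filled (with the resolved "Yes"/"No"
    # if the slot type uses TOP_RESOLUTION).
    if slots["moreResults"] == "Yes":
        content = "Here are 10 more results..."  # fetch the next page here
    else:
        content = "Okay, let me know if you need anything else."

    return {
        "sessionAttributes": event.get("sessionAttributes", {}),
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": content},
        },
    }
```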

Related

AWS Lex built-in slots such as DayOfWeek or Food. How do they work? [duplicate]

In AWS Lex, I am using the default AMAZON.Country as a slot type.
However, when I interact with the test bot, I can enter any value (for instance "I don't know") and the JSON when I inspect the response says that the value for 'country' is "I don't know".
The purpose of a slot type is that it limits the answers to existing country names, not some random sentences. Any idea why I don't get the expected behaviour?
"The purpose of a slot type, is that it limits the answers...."
That is not actually true and is a common misconception when starting to develop with Lex.
Experience has taught us that the main purpose of slot types is simply to improve input recognition and fill the slot with what is most expected or desired, but it does not limit the values that can fill the slot.
This is why we Lex developers also write parsing and validation code in Lambda to double check the slot values or the inputTranscript ourselves.
It might seem like Lex should do a better job of this for you, (I think we all start out assuming that) but once you start doing your own parsing/validating, you realize how much more control you actually have to make your bot smarter and more natural.
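To make that concrete, a minimal validation sketch in a Python Lambda (Lex V1 event shape, with an obviously trimmed country list) might look like:

```python
VALID_COUNTRIES = {"canada", "france", "japan"}  # trimmed for the example

def validate_country(event):
    """Return the validated country, or None if the slot value isn't a real country."""
    slot_value = event["currentIntent"]["slots"].get("country")
    if slot_value and slot_value.strip().lower() in VALID_COUNTRIES:
        return slot_value
    # Otherwise re-elicit the slot with a clarifying message, optionally
    # inspecting event["inputTranscript"] yourself for a fuzzy match.
    return None
```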
Documentation
Amazon Lex Built-In Slot Types refers Lex developers to Alexa docs.
Amazon Lex supports built-in slot types from the Alexa Skills Kit.
...see Slot Type Reference in the Alexa Skills Kit documentation
There is a warning message in the Slot Type Reference in the Alexa Skills Kit:
Important: A built-in slot type is not the equivalent of an enumeration. Values outside the list are still returned if recognized by the spoken language understanding system. Although input to the slot type is weighted towards the values in the list, it is not constrained to just the items on the list. Your code still needs to include validation and error checking when using slot values.

How to design an EventTrigger framework for a game with no existing engine?

I'm learning game design, and to start, Super Mario Bros. is the game I'm going to build with very little engine. I want to design a trigger so that when Mario hits a question-mark block, a mushroom, gold coins, a white flower, etc. will appear. So when the block is hit, the trigger fires and calls its callback function to spawn different things, which keeps the code decoupled.
So my question is: how can I design a solid Event Trigger framework like this? I only understand it in general terms.
Should I put the trigger in the object that I want to trigger?
And should I make a trigger manager?
And what should these triggers do?
As the question is general, I will give you a naive idea of how I would personally start thinking about it.
I suppose your game already has an event manager to which you can register to receive notifications of particular events occurring (perhaps using the observer design pattern).
I would then have a TriggerManager registered as a receiver for any event you might want to trigger specific actions for. For each type of observed event, the TriggerManager would contain two things:
a filter
a handler
The filter would be used to decide if the event fits the wanted condition (e.g. an entity moved within a given range).
If the event passes the filter, the handler is executed in reaction to that event: an action/command/event sent to another entity or system.
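As a bare-bones sketch of that idea in Python (the EventManager.subscribe call, the event fields and the spawn_powerup helper are placeholders for whatever your code base actually has):

```python
class Trigger:
    def __init__(self, event_type, condition, action):
        self.event_type = event_type
        self.condition = condition   # filter: event -> bool
        self.action = action         # handler: event -> None

class TriggerManager:
    def __init__(self, event_manager):
        self._triggers = []
        self._event_manager = event_manager

    def add(self, trigger):
        self._triggers.append(trigger)
        # Register with the event manager so we get notified (observer pattern).
        self._event_manager.subscribe(trigger.event_type, self._on_event)

    def _on_event(self, event):
        for trigger in self._triggers:
            if trigger.event_type == event.type and trigger.condition(event):
                trigger.action(event)

# Usage idea: spawn a power-up when Mario hits a question-mark block.
# triggers.add(Trigger(
#     "BLOCK_HIT",
#     condition=lambda e: e.block.kind == "question_mark",
#     action=lambda e: spawn_powerup(e.block.position),
# ))
```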
Also, at some point in the past I read that post on Reddit (see comments), which I found quite inspiring too and which describes a different approach.
You may also want to ask your question to the stackexchange game dev community on gamedev.stackexchange.com
Hope this gives you some ideas.

Recursive Alexa Response with DynamoDB

So I am basically trying to tell an interactive story with Alexa. However, I am not sure how to edit an intent response while it is being said by Alexa. This should happen in a way that keeps it updating while Alexa tells the story.
In my scenario Alexa is just a daemon fetching strings from DynamoDB, and while new story lines are being generated she is supposed to read and then tell them as soon as they're processed. It seems that Alexa needs complete strings as a return value for its response.
Here an example:
User: Alexa, tell me a story.
Alexa: (checks DynamoDB table for a new sentence, if found) says sentence
Other Device: updates story
Alexa: (checks DynamoDB table for a new sentence, if found) says sentence
...
This would keep on going until the other device puts an end-signifier into the DynamoDB table, making Alexa respond with a final "This is how the story ends" output.
Has anyone experience with such a model or an idea of how to solve it? If possible, I do not want a user to interact more than once for a story.
I am thinking of a solution where I would 'fake' user intents by simply producing JSON strings and pushing them through the speechlet, requesting the new story sentences hidden from the user... Anyhow, I am not sure whether this is even possible, not to mention the messy solution this would be.. :D
Thanks in advance! :)
The Alexa skills programming model was definitely not designed for this. As you can tell, there is no way to know, from a skill, when a text to speech utterance has finished, in order to be able to determine when to send the next.
Alexa also puts a restriction on how long a skill may take to respond, which I believe is somewhere in the 5-8 seconds range. So that is also a complication.
The only way I can think of accomplishing what you want to accomplish would be to use the GameEngine interface and use input handlers to call back into your skill after you send each TTS. The only catch is that you have to time the responses and hope no extra delays happen.
Today, the GameEngine interface requires you to declare support for Echo Buttons, but that is just for metadata purposes. You can certainly use GameEngine-based skills without buttons.
Have a look at the GameEngine interface docs and the timed-out recognizer to handle the timeout from an input handler.
When you start your story, you'll start an input handler. Send your TTS and set the input handler's timeout to however long you expect the TTS to take Alexa to say. Your skill will receive an event when the timeout expires and you can continue with the story.
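A rough sketch of the response that does this (a Python dict mirroring the directive JSON; the event name "storyChunkDone" and the way you estimate the TTS length are assumptions on my part):

```python
# Sketch of a skill response that speaks the next story chunk and starts an
# input handler whose only purpose is to call the skill back on timeout.
def build_story_response(next_sentence, estimated_tts_ms):
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": next_sentence},
            "shouldEndSession": False,
            "directives": [
                {
                    "type": "GameEngine.StartInputHandler",
                    "timeout": estimated_tts_ms,  # roughly how long the TTS takes
                    "recognizers": {},            # no button recognizers needed
                    "events": {
                        "storyChunkDone": {
                            "meets": ["timed out"],  # built-in timed out recognizer
                            "reports": "history",
                            "shouldEndInputHandler": True,
                        }
                    },
                }
            ],
        },
    }
```

When the timeout fires, your skill receives a GameEngine.InputHandlerEvent request, at which point you can fetch the next sentence from DynamoDB and send another response like this one.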
You’ll probably want to experiment with setting the playBehavior to ENQUEUE.

How can I use AWS Lex to ask multiple yes/no questions?

I am trying to create a custom slot type to hold the user's response for yes/no values, but it looks like Lex does not recognize Yes, No or Sure as input on my custom slot type. Are there limits on what values we can use with slot types?
I was hoping to use Lex as a way to solve basic helpdesk problems before forwarding a user on to a human. My questions are things like "Have you turned it off and on?", to which I'm expecting a "yes/no" response.
It seems like Lex is unable to understand these answers.
I found a hacky solution.
Within your Lambda function, keep responding with ConfirmIntent and check intentRequest.currentIntent.confirmationStatus for Confirmed and Denied. State can be managed through a slot parameter or an output session attribute (e.g. incrementing an integer).
This feels like it's breaking the intended flow process of Lex, but it gets the job done.
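Roughly, in a Python Lambda (Lex V1 shapes; the question list and the 'step' session attribute are made up for the example):

```python
QUESTIONS = [
    "Have you turned it off and on?",
    "Is the network cable plugged in?",
]  # illustrative questions only

def lambda_handler(event, context):
    intent = event["currentIntent"]
    session = event.get("sessionAttributes") or {}
    step = int(session.get("step", 0))

    status = intent["confirmationStatus"]  # "None", "Confirmed", or "Denied"
    if status in ("Confirmed", "Denied"):
        # Record the yes/no answer to the previous question and move on.
        session[f"answer_{step}"] = status
        step += 1
        session["step"] = str(step)

    if step < len(QUESTIONS):
        # Ask the next yes/no question by confirming the intent again.
        return {
            "sessionAttributes": session,
            "dialogAction": {
                "type": "ConfirmIntent",
                "intentName": intent["name"],
                "slots": intent["slots"],
                "message": {"contentType": "PlainText",
                            "content": QUESTIONS[step]},
            },
        }

    return {
        "sessionAttributes": session,
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText",
                        "content": "Thanks, forwarding you to a human now."},
        },
    }
```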
Please post an answer if you think there is a better way
You should be able to use the built-in AMAZON.YesIntent and AMAZON.NoIntent for these.

How to accept free-form text as input to the Alexa Skills Kit?

I'm required to create an Alexa skill to open a ticket in our ticketing tool.
Looking at the examples for the Alexa Skills Kit, I couldn't find a way of accepting free-form text as input. The other option is to create a custom slot with all probable inputs as custom slot values.
But in my case, all I have to do is capture the full content of the user's input to log it somewhere in the ticket, and it's very unlikely I can anticipate the probable utterances beforehand.
Correction to my comment... I, and others, may be misunderstanding the deprecation of AMAZON.LITERAL. I found that custom slots still pass through literal content that did not match the predefined entries. If you have a custom slot with the entries "Bob" and "John" and I say "Samuel", my skill is still sent "Samuel", which seems identical to the previous AMAZON.LITERAL behavior. (AMAZON.LITERAL required you to provide example utterances, just as custom slots require you to provide example values, so it seems only a difference in definition, not function.)
As you think about what users are likely to ask, consider using a built-in or custom slot type to capture user input that is more predictable, and the AMAZON.SearchQuery slot type to capture less-predictable input that makes up the search query.
You can read more here
To get the value in your application, you will do something like this:
event.request.intent.slots.IntentName.value
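For what it's worth, in a Python handler that lookup would be something like the following, assuming you named the AMAZON.SearchQuery slot 'query' (note that slots are keyed by slot name, not intent name):

```python
def get_free_form_text(event):
    """Pull the raw text out of an AMAZON.SearchQuery slot named 'query'."""
    slots = event["request"]["intent"]["slots"]
    return slots.get("query", {}).get("value")
```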
Update: This is no longer possible as of October 2018.
AMAZON.LITERAL is deprecated as of October 22, 2018. Older skills built with AMAZON.LITERAL do continue to work, but you must migrate away from AMAZON.LITERAL when you update those older skills, and for all new skills.
You can use the AMAZON.LITERAL slot type to capture freeform text. Amazon recommends providing sample phrases, but according to this thread, you may be able to get away with not providing them.