Capturing missed utterances - amazon-web-services

Does anybody know if it is at all possible to capture missed utterances? I don't see the missed ones being logged into CloudWatch. I know that you can view them in the Lex Console after 24 hours, but I'm trying to capture them with data attached to them.
As of right now the console only gives you what the missed utterance is, how many times it was said, and when it was last said. I want the rest of the data attached to these missed utterances; for example, which customer said it.
Does anybody know if this is possible with AWS or the SDK(.NET) currently with a lambda or something like that?

Missed slot inputs can be caught and logged in your Lambda.
I suggest using sessionAttributes to keep track of something like last_elicit; then you can determine whether that slot was filled and log the missed input from inputTranscript any way you want.
I often force the slot to be filled by whatever is in inputTranscript and then handle it myself, because I have found that Lex sometimes ignores legitimate slot inputs.
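The pattern above can be sketched as a minimal Lex (V1-style) code hook in Python. The last_elicit session-attribute name is our own convention (your dialog code would set it each time it elicits a slot), and the event shape shown is an assumption based on the standard V1 Lambda event:

```python
import json

def lambda_handler(event, context):
    # Sketch of a Lex code hook that logs a missed slot input.
    session = event.get("sessionAttributes") or {}
    slots = event["currentIntent"]["slots"]
    last_elicit = session.get("last_elicit")

    # If we asked for a slot last turn and Lex still has not filled it,
    # the user's raw words in inputTranscript were a missed slot input.
    if last_elicit and slots.get(last_elicit) is None:
        # print() lands in CloudWatch Logs when run in Lambda.
        print(json.dumps({
            "missedSlot": last_elicit,
            "inputTranscript": event.get("inputTranscript"),
            "userId": event.get("userId"),
        }))
        # Optionally force-fill the slot from the transcript and
        # validate it yourself, since Lex sometimes rejects valid input.
        slots[last_elicit] = event.get("inputTranscript")

    return slots
```

From here you would continue your normal dialog logic with the corrected slots.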
Missed Intent inputs are handled by Lex and responded to automatically.
The only control you have in Lex over missed Intent inputs is customizing the responses. In your Lex Console, under the "Editor" tab, open the "Error Handling" menu at the bottom left, where the "Clarification Prompts" are configured.
Lex picks one of these "Clarification Prompts" and returns it without passing anything to your Lambda Function.
That is why you cannot log any information about missed intent utterances with a basic Lex setup. So here's a more complicated setup using two Lambda Functions:
A "Pre-Lex Lambda" acts as a proxy between your users and your Lex bot. This means you do not use Lex's built-in Channel settings, and you have to build your own integrations between your Channels and the "Pre-Lex Lambda".
Then you also need to use either PostContent or PostText to pass your user's input to your Lex bot.
After setting this up you will finally be able to catch the Lex response for one of the Clarification Prompts, and then log your own details about the missed intent input.
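A sketch of such a "Pre-Lex Lambda" in Python, using boto3's lex-runtime PostText call. The clarification-prompt strings, bot name/alias, and event shape are all assumptions you would replace with your own configuration; boto3 is imported inside the handler so the classification helper can be tested without AWS credentials:

```python
CLARIFICATION_PROMPTS = {
    "Sorry, can you please repeat that?",   # assumption: copy these from
    "Sorry, I didn't get that.",            # your bot's Error Handling menu
}

def is_missed_intent(lex_response):
    # A clarification response carries no intentName; comparing the message
    # against the configured prompts is an extra safety check.
    return (lex_response.get("intentName") is None
            or lex_response.get("message") in CLARIFICATION_PROMPTS)

def proxy_handler(event, context):
    # event = {"userId": ..., "text": ...} is our own shape; build whatever
    # your channel integration actually delivers.
    import boto3  # deferred so is_missed_intent stays testable offline
    lex = boto3.client("lex-runtime")
    response = lex.post_text(
        botName="YourBot",    # hypothetical bot name and alias
        botAlias="prod",
        userId=event["userId"],
        inputText=event["text"],
    )
    if is_missed_intent(response):
        # Log whatever details you want about the missed utterance.
        print({"missedUtterance": event["text"], "userId": event["userId"]})
    return response
```

The same idea works with PostContent if your channel delivers audio instead of text.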
Helpful References:
AWS SDK for .NET
SDK .NET API Docs for Lex
SDK .NET API Docs for Lambda
Example setting up a Lambda using SDK(.NET)

Positive/Negative Feedback from Amazon Lex/CloudFormation

Please forgive the completely noob question.
Background: I am not a developer - at best a hobby programmer who knows enough to be dangerous/useful to my superiors. The AWS/Cloud expert at my company just left, gave me a 30-minute whirlwind tour of AWS, and said I'm now the expert...
AWS CloudFormation allows me to provide (basically) a "user" utterance that signifies positive/negative feedback from the user of the bot: WebAppConfNegativeFeedback and WebAppConfPositiveFeedback.
How do I process those utterances to provide useful information to improve the bot's responses?
It's stateless, so I'm not sure how to grab the context of the question and the feedback in order to notify our company that some question got a bad answer (a good answer is not so important).
Any help you can provide, at least to point to me how to interpret this information is more than welcome. I hate feeling like a fish out of water...
Hi there and sorry to hear about your predicament.
AWS CloudFormation is a toolset that allows a developer to script the creation of resources; CloudFormation itself does not process your users' requests.
As you've alluded to, AWS Lex is the service that is used to interact with users.
Here's a link to the Getting Started guide which I hope will help you get a better understanding of just how Lex works so that the rest of this answer makes more sense.
Essentially Lex uses a combination of intents with slots to complete a task. An intent uses utterances as an entry point to understanding what action a user wants to take while slots are used to collect the detail surrounding that action.
As an example, we could have an utterance "set my alarm clock" that activates an intent called SetAlarm. We then need to ask the user for the time they'd like the alarm to be set for. This value is stored in a slot of type date.
We then harness the power of AWS Lambda functions to 'fulfill' the intent. In this case, we will use the given information to set the alarm at the user's specified time.
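A minimal sketch of such a fulfillment Lambda in Python, assuming a Lex V1-style event and a slot named "time" for the hypothetical SetAlarm intent:

```python
def lambda_handler(event, context):
    # Fulfillment hook for the hypothetical SetAlarm intent; the slot
    # name "time" is an assumption matching the example above.
    alarm_time = event["currentIntent"]["slots"].get("time")
    # ...here you would call whatever actually sets the alarm...
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": "Alarm set for {}.".format(alarm_time),
            },
        }
    }
```

The returned dialogAction tells Lex the conversation is done and what to say back to the user.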
With regards to your scenario, I am assuming that you have two fields called WebAppConfNegativeFeedback and WebAppConfPositiveFeedback somewhere in your CloudFormation script, containing negative and positive utterances respectively. Again assuming, these fields are used either to build a Lex bot, or their values are used in a supporting Lambda function to categorise the utterance as positive or negative.
If it is the case that you have a Lambda function, you should be able to use that function to fire off another process should it be determined that the user's interaction was negative. That process could be an email to a support team, etc. The Lambda function has the conversation state passed in as an argument, which you can interrogate to get the context of the conversation.
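As a sketch of that idea, a Python Lambda could pull the context out of the Lex event and notify a support topic on negative feedback. The intent name "NegativeFeedback", the "last_question" session attribute, and the SNS topic ARN are all assumptions; boto3 is imported inside the branch so the helper can be tested offline:

```python
def build_alert(event):
    # Pull conversational context out of the Lex event (V1 shape assumed);
    # "last_question" is our own session attribute that your bot would set
    # whenever it answers a question.
    session = event.get("sessionAttributes") or {}
    return {
        "intent": event["currentIntent"]["name"],
        "user": event.get("userId"),
        "lastQuestion": session.get("last_question"),
        "rawInput": event.get("inputTranscript"),
    }

def lambda_handler(event, context):
    # "NegativeFeedback" is a hypothetical intent name; map it to whatever
    # your WebAppConfNegativeFeedback utterances actually trigger.
    if event["currentIntent"]["name"] == "NegativeFeedback":
        import boto3  # deferred so build_alert stays testable offline
        boto3.client("sns").publish(
            TopicArn="arn:aws:sns:region:account:support-alerts",  # hypothetical
            Message=str(build_alert(event)),
        )
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText",
                        "content": "Thanks for the feedback."},
        }
    }
```

An SNS topic subscribed by email is an easy way to turn that publish call into the "notify our company" step.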
Please provide more insight if you can so that a more specific answer can be provided.

Can someone demystify EMR/EC2/Lambda/SNS/SQS in AWS services for me?

I have a reason for this question. So I would appreciate positive responses.
AWS is a black box to me.
In my mind all services offered by AWS are plug-and-play: AWS has a user interface where you register a bunch of info, you click a button, and AWS takes care of it.
But clearly that's not the case because there are AWS services based interview questions.
So I am not sure if there is a coding element to it. Can someone demystify AWS and all its services on a higher level to me ?
Like AWS EMR/EC2/Lambda/SNS/SQS - just a one-word response is fine. Right now, to me, it's a button-clicking service requiring zero coding or skill. But clearly I am wrong.
I'm going to assume that your question is something along the lines of why they're called "web services", rather than what each service does. If the latter, then you've gotten a lot of links in the comments. I'd also Google for Amazon's Cloud Practitioner videos.
The reason why AWS is called a web service is because that's exactly how they work: when you want to do something, you make a GET/POST/PUT/DELETE to the endpoint for a service. For example, to send a message to an SQS queue in the us-east-1 region, you make a POST request to https://sqs.us-east-1.amazonaws.com.
Each of these requests is signed, using an access key and secret key. If you don't properly sign a request, it will be rejected.
While you can construct these GET and POST requests yourself, AWS provides a bunch of language-specific SDKs that do it for you. For example, if you want to send a message to an SQS queue using Java, you would write something like this:
AmazonSQS client = AmazonSQSClientBuilder.defaultClient();
// this is available from the console, or you could make another call to retrieve it if you know the queue name
String myQueueUrl = "...";
client.sendMessage(myQueueUrl, "an example message");

Amazon Lex and Amazon Connect Voice Recognition Fails

I have developed a bot in Lex and created 3 intents for it. The first intent creates an incident in a ServiceNow platform. I tested it in the Lex console and it was working perfectly fine. Then I created an Amazon Connect instance, created a contact flow, and added these intents to the contact flow. When I tested it with the softphone, I faced the issues below.
The prompt plays what things it can do. When I asked it to create an incident, it didn't understand my voice and kept asking "could you please repeat that". After four or five tries it understood my intent and prompted me with the questions to create an incident.
My intent has slots which collect the incident description, the urgency, and my employee ID. When I say the incident description, the values are not captured properly, and in the backend where a ticket gets created I don't see the description I said.
If I say my employee ID, it doesn't recognize it properly and takes some random number.
- Make sure your Lambda function is added to your Amazon Connect instance as well as to Lex.
- If you are using the ServiceNow trial version, make sure you wake the instance and keep it live.
- Make sure you have a 'create an incident' utterance in the intent.
- To capture free text, use the 'AMAZON.Organization' built-in slot.
- Use AMAZON.FOUR_DIGIT_NUMBER or AMAZON.Number for capturing your employee ID.
Note: it depends on the way you say it. It's not easy to capture numbers through Connect; numbers like five and six should be properly pronounced. Add similar utterances like 'create incident' and 'can I create an incident' so that the intent is invoked easily.
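For the employee-ID problem specifically, a small normalization step in your Lambda can rescue transcripts where digits come through as words. This helper is a sketch of our own, not part of any AWS API:

```python
WORD_DIGITS = {
    "zero": "0", "one": "1", "two": "2", "three": "3", "four": "4",
    "five": "5", "six": "6", "seven": "7", "eight": "8", "nine": "9",
}

def normalize_employee_id(transcript):
    # Voice-to-text sometimes delivers "five six seven eight" instead of
    # "5678"; map number words to digits and drop everything else.
    parts = []
    for token in transcript.lower().replace("-", " ").split():
        if token.isdigit():
            parts.append(token)
        elif token in WORD_DIGITS:
            parts.append(WORD_DIGITS[token])
    return "".join(parts)
```

You would run this over inputTranscript in your validation hook before accepting or re-eliciting the slot.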

How do I use Amazon Lex to take speech input from a customer and pass that information to a correct path using amazon connect?

So I am trying to streamline a customer support service by removing the need to listen to a set of options, and instead let the customer explain their issue, or say what they need support for. The list of possible service offerings exceeds 400 separate options. And I need to use Lex, Connect, and Lambda to solve this. (I do not want the customer to input a number corresponding to an option or service. I want them to explain their issue and be routed to a correct agent that can help them with the specific issue)
I went from a 100% manual input option to a speech-to-text option using Amazon Lex: Connect would say the list of options and the customer could say the option they needed help with instead of hitting a number on the phone. I converted all 12 or so options to speech-to-text. I read through the documentation and it's not very helpful with my specific issue. (I also am not an expert in AWS and only started learning a few weeks ago.)
I would like to streamline this further by using Lex, connect, and lambda. But if I could avoid any one of those services, I would like to.
For what you want to do, you will need all three because they each handle a different part of what you want to achieve.
Connect is the channel the user can call; it converts the user's voice input to text and passes it to your Lex bot, and it converts the Lex response back into voice output for the user.
The Lex bot handles intent recognition and slot value recognition and passes that information to your Lambda. (Very simple bots that only have single responses for each intent do not need Lambda.)
The Lambda function is where you can verify, parse, correct, and build the logic for the smart interaction you want to create. Any time you want to build a response based on the variations of user input more than just the recognition of intents, then you will need Lambda to do that.
Note that Lex is great for parsing user input in Lambda because it delivers the inputTranscript along with its interpretation of the intent and slot values. However, Connect has to put the voice input through voice-to-text before delivering it to Lex, so a poorly converted voice input can cause Lex to misinterpret it. You will need to do a lot of testing of inputs and improve your validation code in Lambda to correct common mistakes.
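One common validation tactic is a correction table for phrases that callers' speech frequently gets transcribed as. The example entries below are made up; you would build the real table from your own call logs:

```python
COMMON_FIXES = {
    "sale force": "salesforce",   # made-up examples of misheard terms
    "out look": "outlook",
}

def correct_transcript(transcript):
    # Normalize the raw inputTranscript before your intent/slot logic
    # looks at it, patching frequent voice-to-text mistakes.
    text = transcript.lower()
    for wrong, right in COMMON_FIXES.items():
        text = text.replace(wrong, right)
    return text
```

Running this early in your Lambda keeps the rest of your routing logic working against clean, predictable text.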
(If you have a more specific problem/question, you should ask a new question and give the details of what you have tried, some code or examples and a clear question. You'll get better answers that way too.)

Is there a way to get more than one intent (and their related scores) as a result when using AWS LEX?

When calling the AWS Lex runtime API (PostText), or even when testing the bot in the AWS Console, we receive only one intent as the output that matches our input.
However, for the feature that we are building, we would like to see not only the intent that best fits our input, but also some other intents that could be related with a lower score.
Ideally, we would also want to get the score for each one of the intents.
We know that this is something that is possible with Azure LUIS, but we are trying to develop this by using AWS.
Is there any way to do this?
Thanks a lot