Response card in Amazon Lex does not show up - amazon-web-services

I tried to create a response card using the console, but it doesn't show up. Previously there was an option to attach a card to a slot prompt, and now it isn't there.
I'm building a chatbot with Amazon Lex, and I want a response card in Facebook Messenger. I have been doing this without a Lambda function, and there used to be an option to display a card on the prompt (slot). However, when I tried to enable a response card yesterday, the prompt no longer offered the option.
According to the Amazon Lex documentation the card should work, but in my case the option to enable a card from the prompt doesn't even appear.

Just enable the message inside Response, then enter any message. After that you can enable response cards.
Maybe this will solve your problem.
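If you later move to a Lambda code hook, the card can also be attached in the Lambda response rather than the console. A minimal sketch of the Lex V1 response shape, assuming a fulfillment response; the helper name, card title, and buttons are made-up placeholders:

```python
def close_with_card(message, card_title, buttons):
    """Build a Lex V1 'Close' response carrying a generic response card.

    `buttons` is a list of (label, value) pairs; text clients such as
    Facebook Messenger render them as tappable buttons. All card
    contents here are placeholders.
    """
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": message},
            "responseCard": {
                "version": 1,
                "contentType": "application/vnd.amazonaws.card.generic",
                "genericAttachments": [
                    {
                        "title": card_title,
                        "buttons": [
                            {"text": label, "value": value}
                            for label, value in buttons
                        ],
                    }
                ],
            },
        }
    }

# Returned from the Lambda handler, e.g.:
response = close_with_card(
    "Pick a flower", "Flowers", [("Roses", "roses"), ("Lilies", "lilies")]
)
```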

Related

Amazon Connect error using Lex as customer input

I am trying to create a demo call centre using Amazon Connect.
Part of my contact flow makes use of the "Get customer input" block, because I want to use an Amazon Lex bot I created to divert callers to a specific working queue. For example, if the user says "sales" they should be directed to the sales queue.
I have tested the Lex bot within the Lex console and it works as intended.
However, when testing the Lex integration within Amazon Connect, the block always follows the "Error" path after the user says something on the phone.
Here is the CloudWatch log showing the Error result of the module.
{
  "Results": "Error",
  "ContactFlowName": "Inbound Flow",
  "ContactFlowModuleType": "GetUserInput",
  "Timestamp": "2022-02-12T18:06:10.940Z"
}
(Screenshots attached: the contact flow, the settings for the "Get customer input" block, and a test of the Lex bot in the Lex dashboard.)
Any help would be greatly appreciated.
It turns out the solution was: if you're using Lex V2, make sure you set the proper language attribute as well. The easiest way is the "Set voice" block in your contact flow; at the very bottom of the block you can enable "set language attribute".

AWS Lex fulfillment with AWS Lambda

I have a problem playing an audio message from an AWS Lex code hook. Is there an option to return an audio file instead of a text response in the content? If anyone has ideas, please share them.
Amazon Lex does not talk on its own. If you want speaking functionality, look at Amazon Polly, a service that turns text into lifelike speech.
Amazon Lex uses Polly to deliver audio responses.
You'll find the output voice setting under the general settings tab of your Lex bot in the Amazon Lex Console.
Programmatically, you need to invoke the PostContent method instead of PostText. The PostContent method accepts an audio stream and in turn returns an audio stream.
This page from the Developer Guide describes the main points to consider when sending and receiving voice streams to and from the Lex runtime API.
Amazon Lex Developer Guide | PostContent
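To make the PostContent round trip concrete, here is a sketch using boto3 against the Lex V1 runtime. The bot name, alias, and user ID are placeholders, and the media types assume 16 kHz 16-bit mono PCM in and MP3 out; adjust them to your audio pipeline:

```python
def build_post_content_request(bot_name, bot_alias, user_id, audio_bytes):
    """Assemble the arguments for a Lex V1 runtime PostContent call."""
    return {
        "botName": bot_name,
        "botAlias": bot_alias,
        "userId": user_id,
        "contentType": "audio/l16; rate=16000; channels=1",  # raw PCM from the caller
        "accept": "audio/mpeg",  # ask Lex to return synthesized speech as MP3
        "inputStream": audio_bytes,
    }


def ask_lex_with_audio(audio_bytes, bot_name, bot_alias, user_id):
    """Round-trip an audio clip through Lex: speech in, speech and transcript out."""
    import boto3  # AWS SDK; bundled in Lambda runtimes

    lex = boto3.client("lex-runtime")
    resp = lex.post_content(
        **build_post_content_request(bot_name, bot_alias, user_id, audio_bytes)
    )
    # resp["audioStream"] is the spoken reply; resp["inputTranscript"] is what Lex heard
    return resp["audioStream"].read(), resp.get("inputTranscript")
```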

Get User Input From Lambda in AWS Connect

I was wondering if anybody has experimented with this issue I'm having and could give me any input on the subject.
As it stands, I'm trying to see if there is a way to grab a user's input through AWS Connect. I understand there is already a "Get User Input" block in the GUI available for me to use; unfortunately, it does not offer the fine-grained control I am looking for over requests and responses from Lex.
Right now I am able to Post Content to Lex and get responses just fine, as well as output speech using Amazon Polly via my Lambda. This works great for things that do not require a user to have to give feedback for a question.
For example if a client asks
"What time is my appointment?"
and we give back
"Your appointment is for X at X time, would you like an email with this confirmation?"
I want to be able to capture what the user says back within that same lambda.
So the interaction would go like so:
User asks a question.
Lambda POST's it to Lex and gets a response
Amazon Polly says the response - i.e: 'Would you like an email to confirm?'
Lambda then picks up if the user says yes or no - POST's info to Lex
Gets response and outputs voice through Polly.
If anybody has any information on this please let me know, thank you!
Why make it so complicated to implement an IVR system with Amazon Connect? I have built a complete automated IVR system for one of my biggest US banking clients. Use the procedure below to achieve what you want.
Build a complete interactive Lex bot (so that you can avoid Amazon Polly and the Lex PostContent API). It is advisable to build each bot with only one intent in it.
In Connect, using the "Get User Input" block, map the Lex bot you created earlier to the question to be asked: "What time is my appointment?". Once this question has been played, control passes completely to Lex; after you have fulfilled your intent on the Lex side, you can return to Connect in the same way.
Refer to the AWS contact center examples for a clearer idea.
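For completeness, if you did want to drive Lex directly from a Lambda as the question describes, the ask-then-confirm loop could be sketched with boto3 as below. The bot name, voice, and the yes/no check are placeholder assumptions, not a definitive implementation:

```python
def sounds_like_yes(transcript):
    """Crude confirmation check on the caller's transcribed reply (placeholder logic)."""
    return transcript.strip().lower() in {"yes", "yeah", "yep", "sure", "please"}


def speak(text, voice="Joanna"):
    """Turn a Lex text reply into speech with Polly; returns MP3 bytes."""
    import boto3  # bundled in Lambda runtimes

    polly = boto3.client("polly")
    resp = polly.synthesize_speech(Text=text, OutputFormat="mp3", VoiceId=voice)
    return resp["AudioStream"].read()


def converse(user_text, bot_name, bot_alias, user_id):
    """One turn of the dialogue: post the caller's words to Lex, get the reply."""
    import boto3

    lex = boto3.client("lex-runtime")
    resp = lex.post_text(
        botName=bot_name, botAlias=bot_alias, userId=user_id, inputText=user_text
    )
    # dialogState tells you whether Lex expects a follow-up (e.g. "ConfirmIntent")
    return resp["message"], resp["dialogState"]
```

Keeping `userId` stable across turns is what lets Lex treat the yes/no reply as part of the same session as the original question.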

AWS Lex storage of audio

I’ve created a Lex bot that is integrated with an Amazon Connect workflow. The bot is invoked when the user calls the phone number specified in the Connect instance, and the bot itself invokes a Lambda function for initialisation/validation and fulfilment. The bot asks several questions that require the caller to provide simple responses. It all works OK; so far so good.
I would like to add a final question that asks the caller for their comments. This could be any spoken text, including non-English words. I would like to capture this Comment slot value as an audio stream or file, perhaps for storage in S3, with the goal of emailing a call centre administrator and providing the audio file as an MP3 or WAV attachment. Is there any way of doing this in Lex?
I’ve seen mention of ‘User utterance storage’ here: https://aws.amazon.com/blogs/contact-center/amazon-connect-with-amazon-lex-press-or-say-input/, but there’s no such setting visible in my Lex console.
I’m aware that Connect can be configured to store a recording in S3, but I need to be able to access the recording for the current phone call from within the Lambda function in order to attach it to an email. Any advice on how to achieve this, or suggestions for a workaround, would be much appreciated.
Thanks
Amazon Connect call recording can only record conversations once an agent accepts the call. Currently Connect cannot record voice in the Contact Flows. So in regards to getting the raw audio from Connect, that is not possible.
However, it looks like you can get it from Lex if you develop an external application (it could be a Lambda) that gets the utterances: https://docs.aws.amazon.com/lex/latest/dg/API_GetUtterancesView.html
I also do not see the option to enable or disable user utterance storage in Lex, but this makes me think that by default, all are recorded: https://docs.aws.amazon.com/lex/latest/dg/API_DeleteUtterances.html
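A sketch of pulling recorded utterances through that API with boto3 follows; the bot name is a placeholder, and this only collects the transcribed strings (the full response also carries counts and first/last-heard timestamps):

```python
def flatten_utterances(view):
    """Collect utterance strings from a GetUtterancesView-shaped response."""
    return [
        u["utteranceString"]
        for per_version in view.get("utterances", [])
        for u in per_version.get("utterances", [])
    ]


def fetch_utterances(bot_name, bot_version="$LATEST", status="Missed"):
    """List missed (or detected) utterances for one version of a Lex V1 bot."""
    import boto3  # bundled in Lambda runtimes

    models = boto3.client("lex-models")
    view = models.get_utterances_view(
        botName=bot_name,
        botVersions=[bot_version],
        statusType=status,  # "Missed" or "Detected"
    )
    return flatten_utterances(view)
```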

Is there a way to find out what Amazon Lex hears?

I have been doing a bit of experimentation with Amazon Lex but I can't get voice to work in the console at all.
I'm using the Flower bot demo with the associated Python Lambda function connected and working with text on Chrome browser running on a Mac (10.13.1).
I am able to log any text entered into the test bot on the console from the Lambda function along with the rest of the event.
By going to the monitoring tab of the bot in the console I can see utterances from previous days (there seems to be a one-day delay before utterances appear, whether missed or detected; no idea why…).
I made a bunch of attempts to use voice yesterday that appear in the utterance table as a single blank entry with a count of 13 now that it is the next day. I'm not sure if this means that audio isn't getting to Lex or if Lex can't understand me.
I'm a native English speaker with a generic American accent (very few people can identify where I'm from more specifically than the U.S.) and Siri has no trouble understanding me.
My suspicion is that something is either blocking or garbling the audio before it gets to Lex but I don't know how to find what Lex is hearing to check that.
Are there troubleshooting tools I haven't found yet? Is there a way to get a live feed of what is being fed to a bot under test? (All I see for the test bot is the inspect response section, nothing for inspecting the request.)
Regarding the one day delay in appearance of utterances, according to AWS documentation:
Utterance statistics are generated once a day, generally in the evening. You can see the utterance that was not recognized, how many times it was heard, and the last date and time that the utterance was heard. It can take up to 24 hours for missed utterances to appear in the console.
In addition to #sid8491's answer, you can get the message that Lex parsed from your speech in the response it returns. This is in the field data.inputTranscript when using the Node SDK.
CoffeeScript example:
AWS = require 'aws-sdk'
lexruntime = new AWS.LexRuntime
  accessKeyId: awsLexAccessKey
  secretAccessKey: awsLexSecretAccessKey
  region: awsLexRegion
  endpoint: "https://runtime.lex.us-east-1.amazonaws.com"
params =
  botAlias: awsLexAlias
  botName: awsLexBot
  contentType: 'audio/x-l16; sample-rate=16000; channels=1'
  inputStream: speechData
  accept: 'audio/mpeg'
lexruntime.postContent params, (err, data) ->
  if err?
    log.error err
  else
    log.debug "Lex heard: #{data.inputTranscript}"
Go to the Monitoring tab of your bot in the Amazon Lex console and click "Utterances"; there you will find lists of "Missed" and "Detected" utterances. From the missed utterances table, you can add any of them to an intent.