I am trying to create a voice bot with AWS Lex.
One of the intent responses is "Your incident INC11111111 is closed" (text).
This response comes from a Lambda function. Please check the code below.
let response = (event, data) => {
    let lambda_response = {
        "sessionAttributes": {
            "incidentNo": event.currentIntent.slots.INCIDENT_NO,
        },
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": "Hi " + data["User ID"].split('.')[0] + ", Your Incident Number " + "INC" + event.currentIntent.slots.INCIDENT_NO + " is ," + data["Status"]
            },
        }
    };
    return lambda_response;
};
Example incident number: INC11111111
But the voice output is "Your incident INC 1 crore 11 lakhs 11 thousand one hundred eleven is closed".
What I am expecting is "Your incident INC ONE ONE ONE ONE ONE ONE ONE ONE is closed".
Thank you in advance.
You need to utilise SSML (Speech Synthesis Markup Language).
Using SSML tags, you can customize and control aspects of speech such as pronunciation, volume, and speech rate.
There are a variety of directives that you can use in SSML to pronounce things differently. In your case, the say-as directive can be useful.
As per the question edit, try these changes:
"message": {
"contentType": "SSML",
"content": "<speak> Hi " + data["User ID"].split('.')[0]+", Your Incident Number <say-as interpret-as="characters">" + "INC"+event.currentIntent.slots.INCIDENT_NO+ "</say-as> is ," + data["Status"] +"</speak>"
},
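For illustration (the greeting name and status below are made up), the string handed to Amazon Polly would then look roughly like this, which is why the digits inside say-as are read out one character at a time:
<speak> Hi john, Your Incident Number <say-as interpret-as="characters">INC11111111</say-as> is , Closed</speak>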
Related reading: Announcing Responses Capability in Amazon Lex and SSML Support in Text Response
I am trying to get weather updates. The Python code works well on its own, but I am unable to embed it into Amazon Lex; it shows a "received error response" message.
from botocore.vendored import requests
# using openweathermap api
api_address = 'http://api.openweathermap.org/data/2.5/weather?appid=__api_key_here__&q='
city = input("Enter city >> ")
url = api_address + city
json_data = requests.get(url).json()
formatted_data = json_data['weather'][0]['main']
desc_data = json_data['weather'][0]['description']
print(formatted_data)
print(desc_data)
# print(json_data)
First, make sure the API call is working correctly in your Python code.
Depending on the next state, you need to set the type to ElicitSlot or ElicitIntent.
If you are using Lambda as the backend for Lex, you need to send the response in the format below.
You can refer to this link for the Lambda response formats:
Lambda response formats
{
    "dialogAction": {
        "type": "Close",
        "fulfillmentState": "Fulfilled",
        "message": {
            "contentType": "PlainText",
            "content": "Thanks, your pizza has been ordered."
        },
        "responseCard": {
            "version": integer-value,
            "contentType": "application/vnd.amazonaws.card.generic",
            "genericAttachments": [
                {
                    "title": "card-title",
                    "subTitle": "card-sub-title",
                    "imageUrl": "URL of the image to be shown",
                    "attachmentLinkUrl": "URL of the attachment to be associated with the card",
                    "buttons": [
                        {
                            "text": "button-text",
                            "value": "Value sent to server on button click"
                        }
                    ]
                }
            ]
        }
    }
}
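Putting the two together, a minimal sketch of a fulfillment Lambda for this use case might look like the following. This assumes a Lex V1 event with a slot named City (replace it with whatever your intent defines), uses the standard library urllib instead of the vendored requests module, and keeps the API key placeholder from the question:

import json
import urllib.request

API_ADDRESS = 'http://api.openweathermap.org/data/2.5/weather?appid=__api_key_here__&q='

def lambda_handler(event, context):
    # "City" is an assumed slot name for illustration.
    city = event['currentIntent']['slots']['City']
    with urllib.request.urlopen(API_ADDRESS + city) as resp:
        json_data = json.loads(resp.read())
    desc_data = json_data['weather'][0]['description']
    # Return the Lex "Close" response format shown above instead of printing.
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": "The weather in {} is {}.".format(city, desc_data)
            }
        }
    }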
I'm running an AWS step function with parallel execution branches.
Each branch succeeds individually, however the overall function fails with the following error:
States.DataLimitExceeded - The state/task returned a result with a size exceeding the maximum number of characters service limit.
I then found an article from AWS that describes this issue and suggests a work around:
https://docs.aws.amazon.com/step-functions/latest/dg/connect-lambda.html
That article says:
The Lambda invoke API includes logs in the response by default. Multiple Lambda invocations in a workflow can trigger States.DataLimitExceeded errors. To avoid this, include "LogType" = "None" as a parameter when you invoke your Lambda functions.
My question is where exactly do I put it? I've tried putting it various places in the state machine definition, however I get the following error:
The field 'LogType' is not supported by Step Functions
That error seems contrary to the support article, so perhaps I'm doing it wrong!
Any advice is appreciated, thanks in advance!
Cheers
UPDATE 1:
To be clear, this is a parallel function, with 26 parallel branches. Each branch has a small output as per the example below. The biggest item in this data is the LogResult, which (when base64 decoded) is just the billing info. I think this info multiplied by 26 has led to the error, so I just want to turn this LogResult off!!!
{
    "ExecutedVersion": "$LATEST",
    "LogResult": "U1RBUlQgUmVxdWVzdElkOiBlODJjZTRkOS0zMjk2LTRlNDctYjcyZC1iYmEwMzI1YmM3MGUgVmVyc2lvbjogJExBVEVTVApFTkQgUmVxdWVzdElkOiBlODJjZTRkOS0zMjk2LTRlNDctYjcyZC1iYmEwMzI1YmM3MGUKUkVQT1JUIFJlcXVlc3RJZDogZTgyY2U0ZDktMzI5Ni00ZTQ3LWI3MmQtYmJhMDMyNWJjNzBlCUR1cmF0aW9uOiA3NzI5Ljc2IG1zCUJpbGxlZCBEdXJhdGlvbjogNzgwMCBtcwlNZW1vcnkgU2l6ZTogMTAyNCBNQglNYXggTWVtb3J5IFVzZWQ6IDEwNCBNQglJbml0IER1cmF0aW9uOiAxMTY0Ljc3IG1zCQo=",
    "Payload": {
        "statusCode": 200,
        "body": {
            "signs": 63,
            "nil": ""
        }
    },
    "SdkHttpMetadata": {
        "HttpHeaders": {
            "Connection": "keep-alive",
            "Content-Length": "53",
            "Content-Type": "application/json",
            "Date": "Thu, 21 Nov 2019 04:00:42 GMT",
            "X-Amz-Executed-Version": "$LATEST",
            "X-Amz-Log-Result": "U1RBUlQgUmVxdWVzdElkOiBlODJjZTRkOS0zMjk2LTRlNDctYjcyZC1iYmEwMzI1YmM3MGUgVmVyc2lvbjogJExBVEVTVApFTkQgUmVxdWVzdElkOiBlODJjZTRkOS0zMjk2LTRlNDctYjcyZC1iYmEwMzI1YmM3MGUKUkVQT1JUIFJlcXVlc3RJZDogZTgyY2U0ZDktMzI5Ni00ZTQ3LWI3MmQtYmJhMDMyNWJjNzBlCUR1cmF0aW9uOiA3NzI5Ljc2IG1zCUJpbGxlZCBEdXJhdGlvbjogNzgwMCBtcwlNZW1vcnkgU2l6ZTogMTAyNCBNQglNYXggTWVtb3J5IFVzZWQ6IDEwNCBNQglJbml0IER1cmF0aW9uOiAxMTY0Ljc3IG1zCQo=",
            "x-amzn-Remapped-Content-Length": "0",
            "x-amzn-RequestId": "e82ce4d9-3296-4e47-b72d-bba0325bc70e",
            "X-Amzn-Trace-Id": "root=1-5dd60be1-47c4669ce54d5208b92b52a4;sampled=0"
        },
        "HttpStatusCode": 200
    },
    "SdkResponseMetadata": {
        "RequestId": "e82ce4d9-3296-4e47-b72d-bba0325bc70e"
    },
    "StatusCode": 200
}
I ran into exactly the same problem as you recently. You haven't said what your lambdas are doing or returning; however, I found that AWS documents the limits that tasks have within executions: https://docs.aws.amazon.com/step-functions/latest/dg/limits.html#service-limits-task-executions.
What I found was that my particular lambda had an extremely long response with 10s of thousands of characters. Amending that so that the response from the lambda was more reasonable got past the error in the step function.
I had the same problem a week ago.
The way I solved it is as follows:
You can define which portion of the result is transmitted to the next step.
For that you have to use
"OutputPath": "$.part2",
In your JSON input you have:
"part1": {
"portion1": {
"procedure": "Delete_X"
},
"portion2":{
"procedure": "Load_X"
}
},
"part2": {
"portion1": {
"procedure": "Delete_Y"
},
"portion2":{
"procedure": "Load_Y"
}
}
Once part1 is processed, make sure that part1 (and the ResultPath related to it) is not sent in the output; only part2, which is needed for the following steps, is passed on.
With this: "OutputPath": "$.part2",
Let me know if that helps.
I got stuck on the same issue. Step function imposes a limit of 32,768 characters on the data that can be passed around between two states.
https://docs.aws.amazon.com/step-functions/latest/dg/limits.html
Maybe you need to think about and break down your problem in a different way? That's what I did, because removing the log response gives you some headroom, but your solution will not scale past a certain limit.
I handle large data in my Step Functions by storing the result in an S3 bucket, and then having my State Machine return the path to the result-file (and a brief summary of the data or a status like PASS/FAIL).
The same could be done using a DB if that's more comfortable.
This way you won't have to modify your results' current format; you can just pass the reference around instead of a huge amount of data, and the results are persisted for as long as you'd like.
The start of the Lambdas looks something like this to figure out if the input is from a file or plain data:
bucket_name = util.env('BUCKET_NAME')

if 'result_path' in input_data.keys():
    # Results are in a file that is referenced.
    try:
        result_path = input_data['result_path']
        result_data = util.get_file_content(result_path, bucket_name)
    except Exception as e:
        report.append(f'Failed to parse JSON from {result_path}: {e}')
else:
    # Results are just raw data, not a reference.
    result_data = input_data
Then at the end of the Lambda they will upload their results and return directions to that file:
import json
import boto3

def upload_results_to_s3(bucket_name, filename, result_data_to_upload):
    try:
        s3 = boto3.resource('s3')
        results_prefix = 'Path/In/S3/'
        results_suffix = '_Results.json'
        result_file_path = results_prefix + filename + results_suffix
        s3.Object(bucket_name, result_file_path).put(
            Body=(bytes(json.dumps(result_data_to_upload, indent=2).encode('UTF-8')))
        )
        return result_file_path
    except Exception as e:
        # Mirror the error handling pattern used in the snippet above.
        report.append(f'Failed to upload results to S3: {e}')

result_path = upload_results_to_s3(bucket_name, filename, result_data_to_upload)
result_obj = {
    "result_path": result_path,
    "bucket_name": bucket_name
}
return result_obj
Then the next Lambda will have the first code snippet in it, in order to get the input from the file.
The Step Function Nodes look like this, where the Result will be result_obj in the python code above:
"YOUR STATE":
{
"Comment": "Call Lambda that puts results in file",
"Type": "Task",
"Resource": "arn:aws:lambda:YOUR LAMBDA ARN",
"InputPath": "$.next_function_input",
"ResultPath": "$.next_function_input",
"Next": "YOUR-NEXT-STATE"
}
Something you can do is add "emptyOutputPath": "" to your JSON:
"emptyOutputPath": "",
"part1": {
    "portion1": { "procedure": "Delete_X" },
    "portion2": { "procedure": "Load_X" }
},
"part2": {
    "portion1": { "procedure": "Delete_Y" },
    "portion2": { "procedure": "Load_Y" }
}
That will allow you to do "OutputPath": "$.emptyOutputPath", which is empty and will clear the ResultPath.
Hope that helps
Just following up on this issue to close the loop.
I basically gave up on using parallel Lambdas in favour of SQS message queues instead.
As all Postman collections are basically .json files, it's quite difficult to review code written in Postman via code review tools.
At the moment I continue to review such .json files on GitHub; however, it's not very convenient.
For instance, here's some JS code sent for review in .json format:
"listen": "test",
"script": {
"id": "e8f64f04-8ca5-4b5a-8333-1f6bde73cf0c",
"exec": [
"var body = xml2Json(responseBody);",
"// const resp = pm.response.json();",
"let resp = body.projects;",
"var b = \"New_Poject_e2e\"+pm.environment.get(\"random_number\");",
" console.log(b);",
"for (var i = 0; i < resp.project.length; i++)",
"{",
" if (resp.project[i].name === b)",
" {",
" console.log(resp.project[i].id[\"_\"]);",
" pm.environment.set(\"projID\", resp.project[i].id[\"_\"]);",
" }",
"}",
""
],
"type": "text/javascript"
}
Is there an effective way to carry out code review for Pre-request scripts, Requests, and Tests for Postman Collections?
Have you tried creating a fork of the Collection?
https://learning.getpostman.com/docs/postman/collections/version_control/#forking-a-collection
You can make changes in the forked Collection and then merge them. Before merging you are taken to the web view, which might make these diffs easier to review.
I have created a simple chatbot with the following flow.
Bot: do you want to buy a book?
Human: yes
Bot: What kind of book are you interested in? (response card)
- drama
- crime
- action
Human: drama (on click or by typing)
Bot: Here is the list of available drama movies in store (response card)
- Django
- First Man
- True Story
The last part is the problem; I can't figure out how to achieve it.
Can someone please help me with what I need to do to get what I want? A similar demo or tutorial will be appreciated.
Here you need to add a response card using your Lambda code, because the values (the available movies) are dynamic.
Here is example code for adding a response card:
"dialogAction": {
"type": "Close",
"fulfillmentState": "Fulfilled or Failed",
"message": {
"contentType": "PlainText or SSML",
"content": "Message to convey to the user. For example, Thanks, your pizza has been ordered."
},
"responseCard": {
"version": "1",
"contentType": "application/vnd.amazonaws.card.generic",
"genericAttachments": [
{
"title":"card-title",
"subTitle":"card-sub-title",
"imageUrl":"URL of the image to be shown",
"attachmentLinkUrl":"URL of the attachment to be associated with the card",
"buttons":[
{
"text":"button-text",
"value":"Value sent to server on button click"
}
]
}
]
}
}
This is an example of adding a response card in a fulfillment message; you can add it in elicit_slot as well. Play around with it and let us know if you have any confusion.
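Since the movie list is dynamic, the card can be built in the fulfillment code. A minimal sketch in Python (the movie list is hard-coded and the card wording is made up; in practice the titles would come from your store's data):

def build_movie_response(movies):
    # Build one generic attachment with a button per available movie.
    buttons = [{"text": title, "value": title} for title in movies]
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": "Here is the list of available drama movies in store:"
            },
            "responseCard": {
                "version": 1,
                "contentType": "application/vnd.amazonaws.card.generic",
                "genericAttachments": [
                    {
                        "title": "Drama movies",
                        "subTitle": "Pick one",
                        "buttons": buttons
                    }
                ]
            }
        }
    }

# Example: build_movie_response(["Django", "First Man", "True Story"])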
Hope it helps.
I'm working on my final project for my Bachelor degree in software development. The project requires students to pick a topic related to software development and write a paper based on their findings.
Problem Definition
Here's what I need help with:
My topic is on developing skills for Amazon Alexa using Python 3. The current focus is creating custom skills.
My custom skill will calculate the volume of an object.
For the purposes of this question an object is a box, cube, cylinder, or sphere. I am having trouble getting the volume of a box. I need help getting the values from the user to my Python 3 backend.
I want the dialogue to go something like this:
Alexa: "Welcome to Volume Calculator. Would you like to calculate the volume of an object?"
User: "Yes"
Alexa: "Which object would you like me to calculate the volume of?"
User: "A Box"
Alexa: "What is the length of the box?"
User: "5"
Alexa: "What is the width of the box?"
User: "5"
Alexa: "What is the height of the box?"
User: "5"
Alexa "The volume of the box is one-hundred and twenty-five cubic meters."
The current response from Alexa is "There was a problem with the requested skill's response."
Python 3 Backend
#ask.intent("BoxLengthIntent", convert={"length": int})
def box_length():
box_length_prompt = "What is the length of the box?"
return question(box_length_prompt)
#ask.intent("BoxWidthIntent", convert={"width": int})
def box_width():
box_length_prompt = "What is the width of the box?"
return question(box_length_prompt)
#ask.intent("BoxHeightIntent", convert={"height": int})
def box_height():
box_height_prompt = "What is the height of the box?"
return question(box_height_prompt)
#ask.intent("BoxVolumeIntent", convert={"length": int, "width": int,
"height": int})
def calculate_box_volume():
length = box_length()
# session.attributes["length"] = length
width = box_width()
# session.attributes["width"] = width
height = box_height()
# session.attributes["height"] = height
# Question does not define mul. Program crashes here.
volume_of_box = length * width * height
msg = "The volume of the box is {} cubic meters"\
.format(volume_of_box)
return statement(msg).simple_card(title="VolumeCalculator", content=msg)
Intent Schema
{
    "intents": [
        {
            "intent": "YesIntent"
        },
        {
            "intent": "NoIntent"
        },
        {
            "intent": "CubeIntent",
            "slots": [
                {
                    "name": "length",
                    "type": "AMAZON.NUMBER"
                }
            ]
        },
        {
            "intent": "CubeVolumeIntent",
            "slots": [
                {
                    "name": "length",
                    "type": "AMAZON.NUMBER"
                }
            ]
        },
        {
            "intent": "BoxVolumeIntent",
            "slots": [
                {
                    "name": "length",
                    "type": "AMAZON.NUMBER"
                },
                {
                    "name": "width",
                    "type": "AMAZON.NUMBER"
                },
                {
                    "name": "height",
                    "type": "AMAZON.NUMBER"
                }
            ]
        }
    ]
}
Sample Utterances
BoxVolumeIntent box
BoxVolumeIntent give me the volume of a box
BoxVolumeIntent give me the volume of a box with length {length} height {height} and width {width}
BoxVolumeIntent tell me the volume of a box
BoxVolumeIntent what is the volume of a box
I found the solution to my problem.
Steps
First I needed to create a dialog model using the Amazon Skill Builder. There are plenty of tutorials on how to use it on the Skill Builder page.
I then needed to figure out how Flask-Ask handles multi-turn dialogs. The solution is the following block of code, which I found while browsing through the issues of the repository:
dialog_state = get_dialog_state()
if dialog_state != "COMPLETED":
    return delegate(speech=None)
Here the variable dialog_state is set to the return value of the get_dialog_state() function. If we enter the if statement, we return a delegate response, which tells the Alexa Voice Service that another prompt is required to fulfill the intent.
The get_dialog_state() function looks like this:
def get_dialog_state():
    return session['dialogState']
Place the block of code in step 2 anywhere you intend to have a multi-turn dialog. After all of the prompts have been satisfied, handle the data however you need, and include any error checking that is required.
In addition to the steps outlined in @bradleyGamiMarques' answer, I found I needed to include the delegate class along with the other Flask-Ask classes I'm using.
from flask_ask import Ask, statement, question, session, context, delegate
A hangup I experienced here was that my IDE (PyCharm) flagged delegate as an "unresolved reference." I ignored that after manually confirming its existence, and everything works.
One more change, in addition to importing delegate from flask-ask:
Instead of
if dialog_state != "COMPLETED":
    return delegate(speech=None)
Use
if dialog_state != "COMPLETED":
    return delegate()
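Putting the pieces from these answers together, a minimal sketch of a multi-turn handler for the box volume use case could look like this. It assumes the BoxVolumeIntent dialog model created in the Skill Builder collects the length, width, and height slots, and it reuses the get_dialog_state() helper from the accepted answer:

from flask import Flask
from flask_ask import Ask, statement, session, delegate

app = Flask(__name__)
ask = Ask(app, "/")

def get_dialog_state():
    # Dialog state exposed by Flask-Ask, as in the accepted answer above.
    return session['dialogState']

@ask.intent("BoxVolumeIntent", convert={"length": int, "width": int, "height": int})
def calculate_box_volume(length, width, height):
    # While the dialog model is still collecting slots, hand control back to Alexa.
    if get_dialog_state() != "COMPLETED":
        return delegate()
    volume_of_box = length * width * height
    msg = "The volume of the box is {} cubic meters".format(volume_of_box)
    return statement(msg).simple_card(title="VolumeCalculator", content=msg)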