Dialogflow ES dynamic entity or training phrase - google-cloud-platform

I'm using a Cisco CVP IVR to integrate with Dialogflow ES and Google Cloud Speech-to-Text. My IVR collects the caller's account number and retrieves the associated caller's name from a database. Now I want the caller to say their name, and I want to check whether it matches the one from the database.
Since there are millions of crazy names/spellings, using an entity type of @sys.person or @sys.any isn't very useful.
So, I want to pass the name to Google almost like a 'dynamic grammar' (perhaps along with a few decoy names), and have Google try to match what the caller says against one of these entries.
Is it possible to pass a Google Dialogflow ES agent a list of names (I guess in a payload) and have that be used to build (or expand) an entity dynamically during the call?
I haven't used API.AI, but is it possible to use fulfillment to create something like this dynamically at runtime based on what's passed in from my IVR? This sample is based on creating Dialogflow training phrases through the Dialogflow REST API:
"trainingPhrases": [
{
"name":"",
"type": "EXAMPLE",
"parts": [
{
"text": "<<use a name1 passed from the IVR during the call>>"
},
]
}
]
I've tried modifying fulfillment, and I can at least access what the caller said and the payload that was passed in (the actual person's name). But I don't know how to limit what the agent is listening for to a small list of names passed in.
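For what it's worth, Dialogflow ES has a Session Entity Types API that seems designed for exactly this: pushing a per-call list of entity values that overrides a developer entity for a single session. Below is a minimal, untested sketch using the official Python client; it assumes a developer entity named caller-name already exists in the agent, and the project/session IDs and names are placeholders.

from google.cloud import dialogflow_v2 as dialogflow

def push_caller_names(project_id, session_id, names):
    """Override the 'caller-name' entity for this session with a small name list."""
    client = dialogflow.SessionEntityTypesClient()
    session = f"projects/{project_id}/agent/sessions/{session_id}"

    entity_type = dialogflow.SessionEntityType(
        # The display name after /entityTypes/ must match an existing developer entity.
        name=f"{session}/entityTypes/caller-name",
        entity_override_mode=(
            dialogflow.SessionEntityType.EntityOverrideMode.ENTITY_OVERRIDE_MODE_OVERRIDE
        ),
        entities=[dialogflow.EntityType.Entity(value=n, synonyms=[n]) for n in names],
    )
    return client.create_session_entity_type(
        parent=session, session_entity_type=entity_type
    )

# e.g. the database name plus a few decoys, pushed when the call starts:
# push_caller_names("my-gcp-project", "cvp-call-1234",
#                   ["John Smith", "Jon Smythe", "Joan Smit"])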

Related

How to pass query parameters in API gateway with S3 json file backend

I am new to AWS and I have followed this tutorial: https://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-s3.html. I am now able to read, from the TEST console, my object stored on S3, which is the following .json file:
[
  {
    "important": "yes",
    "name": "john",
    "type": "male"
  },
  {
    "important": "yes",
    "name": "sarah",
    "type": "female"
  },
  {
    "important": "no",
    "name": "maxim",
    "type": "male"
  }
]
Now, what I am trying to achieve is to pass query parameters. I have added type in the Method Request and added a URL Query String Parameter named type with the method.request.querystring.type mapping in the Integration Request.
When I test, typing type=male is not taken into account: I still get all 3 elements instead of the 2 male ones.
Any reason you think this is happening?
For information, the resource tree is the following (and I am using the AWS Service integration type to create the GET method, as explained in the AWS tutorial):
/
  /{folder}
    /{item}
      GET
In case anyone is interested in the answer, I have been able to solve my problem.
A full, detailed solution would require a tutorial, but here are the main steps. The difficulty lies in the many moving parts, so it is important to test each of them independently to make progress (quite basic, you will tell me).
Make sure your SQL query against your S3 file is correct. For this you can go into your S3 bucket, click on your file, and select "Query with S3 Select" from the Actions menu.
Make sure that your Lambda function works, so check that you build and pass the correct SQL query from the test event (see the sketch after these steps).
Set up the API query strings in the Method Request panel and set up the Mapping Template in the Integration Request panel (for me it looked like "TypeL1": "$input.params('typeL1')") using the application/json content type.
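For reference, a minimal sketch of what the Lambda side might look like. The bucket name, key, and event shape are assumptions based on the mapping template above, and the S3 Select path for a top-level JSON array may need adjusting for your document.

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # The mapping template delivers the query string, e.g. {"type": "$input.params('type')"}
    wanted = event.get("type", "")

    # In real code, validate 'wanted' rather than interpolating it into the SQL.
    resp = s3.select_object_content(
        Bucket="my-bucket",          # hypothetical bucket and key
        Key="people.json",
        ExpressionType="SQL",
        Expression=f"SELECT * FROM S3Object[*] s WHERE s.type = '{wanted}'",
        InputSerialization={"JSON": {"Type": "DOCUMENT"}},
        OutputSerialization={"JSON": {}},
    )

    # The result arrives as an event stream; concatenate the Records payloads.
    chunks = [
        ev["Records"]["Payload"].decode("utf-8")
        for ev in resp["Payload"]
        if "Records" in ev
    ]
    return "".join(chunks)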
Good luck !

Is it possible query if a single RingCentral user has Automatic Call Recording (ACR) enabled?

Is there an API we can query to check whether the current extension has been added to the auto-recording list?
We have a web phone using the stop/start recording Call Control API, but it doesn't work when auto recording is enabled, so in our app we need to disable the recording button. We need a way to identify whether a user has ACR enabled without retrieving all the users.
We can get the extension list with the following account-wide API, but it doesn't take any query parameters to filter the results. If we check this way, we have to load all extensions, which takes too long.
https://developers.ringcentral.com/api-reference/Rule-Management/listCallRecordingExtensions
GET /restapi/v1.0/account/{accountId}/call-recording/extensions
Information on whether an individual user has Automatic Call Recording (ACR) enabled is available in the extension endpoint and is separately controlled for inbound and outbound calls.
GET /restapi/v1.0/account/{accountId}/extension/{extensionId}
The extension response includes a serviceFeatures property, which is an array of features. Filter this array for features matching the following:
featureName IN (AutomaticInboundCallRecording, AutomaticOutboundCallRecording)
enabled = true
Here's an example response showing only the serviceFeatures property value for those two features:
{
  "serviceFeatures": [
    {
      "featureName": "AutomaticInboundCallRecording",
      "enabled": true
    },
    {
      "featureName": "AutomaticOutboundCallRecording",
      "enabled": false,
      "reason": "ExtensionLimitation"
    }
  ]
}
See more here:
https://developers.ringcentral.com/api-reference/User-Settings/readExtension
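Putting that together, a minimal sketch of the check using plain requests (the access token is a placeholder from your OAuth flow; the ~ shortcuts stand for the current account and extension):

import requests

# Placeholder token -- obtain a real access token via your OAuth flow.
headers = {"Authorization": "Bearer <access_token>"}

resp = requests.get(
    "https://platform.ringcentral.com/restapi/v1.0/account/~/extension/~",
    headers=headers,
    timeout=10,
)
resp.raise_for_status()

ACR = {"AutomaticInboundCallRecording", "AutomaticOutboundCallRecording"}
acr_enabled = any(
    f.get("featureName") in ACR and f.get("enabled")
    for f in resp.json().get("serviceFeatures", [])
)
print("disable recording button:", acr_enabled)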

Is there a way to create Quicksight analysis purely through code (boto3)?

What I currently have in my QuickSight account is a data source (Redshift), some datasets (some Redshift views), and an analysis (graphs and charts that use the datasets). I can view all of these on the AWS QuickSight console. But when I use boto3 to create a data source and datasets, nothing shows up on the console. They do, however, show up when I use the list_data_sources and list_data_sets calls.
After this, I need to create through code all the graphs that I created manually. I currently can't find an option to do this through code. There is a create_template API call which is supposed to create a template from an existing QuickSight analysis, but it requires the ARN of the analysis, which I can't find.
Any suggestions on what to do?
Note: this only answers why the data sets/sources do not appear in the console. As for the other question, I assume mjgpy3 was of some help.
Summary
Add the permissions at the bottom of this post to your data set and data source in order for them to appear in the console. Make sure to fill in the principal arn with your details.
Details
In order for data sets and data sources to appear in the console when created via the API, you must ensure that the correct permissions have been added to them. Without adding the correct permissions, it is true that the CLI lists them whereas the console does not.
If you have created data sets/sources via the console, you can use the CLI (aws quicksight describe-data-set-permissions and aws quicksight describe-data-source-permissions) to view what permissions AWS gives them so that your account can interact with them.
I've tested this and these are what AWS assigns them as of 25/03/2020.
Data Set permissions:
"permissions": [
{
"Principal": "arn:aws:quicksight:<region>:<aws_account_id>:user/default/{IAM user name}",
"Actions": [
"quicksight:UpdateDataSetPermissions",
"quicksight:DescribeDataSet",
"quicksight:DescribeDataSetPermissions",
"quicksight:PassDataSet",
"quicksight:DescribeIngestion",
"quicksight:ListIngestions",
"quicksight:UpdateDataSet",
"quicksight:DeleteDataSet",
"quicksight:CreateIngestion",
"quicksight:CancelIngestion"
]
}
]
Data Source permissions:
"permissions": [
{
"Principal": "arn:aws:quicksight:<region>:<aws_account_id>:user/default/{IAM user name}",
"Actions": [
"quicksight:UpdateDataSourcePermissions",
"quicksight:DescribeDataSource",
"quicksight:DescribeDataSourcePermissions",
"quicksight:PassDataSource",
"quicksight:UpdateDataSource",
"quicksight:DeleteDataSource"
]
}
]
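As a rough illustration, passing those permissions when creating the data source via boto3 might look like this (the account ID, ARNs, names, and Redshift endpoint are all placeholders; credentials are omitted):

import boto3

quicksight = boto3.client("quicksight")

# Hypothetical principal -- substitute your region, account ID, and user name.
principal = "arn:aws:quicksight:us-east-1:123456789012:user/default/my-user"

quicksight.create_data_source(
    AwsAccountId="123456789012",
    DataSourceId="my-redshift-source",
    Name="My Redshift source",
    Type="REDSHIFT",
    DataSourceParameters={
        "RedshiftParameters": {
            "Host": "cluster.example.us-east-1.redshift.amazonaws.com",
            "Port": 5439,
            "Database": "dev",
        }
    },
    # Without these permissions the source is API-visible but hidden in the console.
    Permissions=[{
        "Principal": principal,
        "Actions": [
            "quicksight:UpdateDataSourcePermissions",
            "quicksight:DescribeDataSource",
            "quicksight:DescribeDataSourcePermissions",
            "quicksight:PassDataSource",
            "quicksight:UpdateDataSource",
            "quicksight:DeleteDataSource",
        ],
    }],
)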
It sounds like your smaller question is regarding the ARN of the analysis.
The format of analysis ARNs is
arn:aws:quicksight:$AWS_REGION:$AWS_ACCOUNT_ID:analysis/$ANALYSIS_ID
Where
$AWS_REGION is replaced with the region in which the analysis lives
$AWS_ACCOUNT_ID is replaced with your AWS account ID
$ANALYSIS_ID is replaced with the analysis ID
If you're looking for the $ANALYSIS_ID, it's the GUID-looking string at the end of the analysis URL in QuickSight.
So, if you were on an analysis at the URL
https://quicksight.aws.amazon.com/sn/analyses/018ef6393-2c71-4842-9798-1aa2f0902804
the analysis ID would be 018ef6393-2c71-4842-9798-1aa2f0902804 (this is a fake ID I injected for this example).
Your larger question seems to be whether you can use the create_template API to duplicate your analysis. The answer at this moment (12/16/19) is, unfortunately, no.
You can use the create_dashboard API to publish a Dashboard from a template made with create_template but you can't create an Analysis from a template.
I'm answering this bit just to clarify since you may actually be okay with creating a dashboard (basically the published version of an analysis) rather than another analysis.
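For completeness, publishing a dashboard from a template with boto3 looks roughly like this (account ID, region, and all the IDs/placeholder names are made up for the sketch):

import boto3

qs = boto3.client("quicksight")
account_id = "123456789012"  # hypothetical

# Publish a dashboard from a template (analyses can't be created this way).
qs.create_dashboard(
    AwsAccountId=account_id,
    DashboardId="my-dashboard",
    Name="My dashboard",
    SourceEntity={
        "SourceTemplate": {
            "Arn": f"arn:aws:quicksight:us-east-1:{account_id}:template/my-template",
            "DataSetReferences": [{
                "DataSetPlaceholder": "my-placeholder",
                "DataSetArn": f"arn:aws:quicksight:us-east-1:{account_id}:dataset/my-dataset-id",
            }],
        }
    },
)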
There are multiple ways you can find the associated analysis ID. Use any of the following (the first and third are sketched in code below).
A dashboard URL has the dashboard ID included. Use this ID to execute the describe-dashboard API call and you will see the analysis ARN in the source entity.
Click on the "Save as" option on the dashboard and it will take you to the associated analysis. [One might not see this option if the dashboard was created from a template.]
A dashboard ID can also be found by using the list_dashboards API call. Print all the dashboard IDs and names, then match the ID with the given dashboard name. Look at the whole list, because a dashboard ID is unique but the dashboard name is not; one can have multiple dashboards with the same name.
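A quick boto3 sketch of those two API-based options (the account ID and dashboard ID are placeholders):

import boto3

qs = boto3.client("quicksight")
account_id = "123456789012"  # hypothetical

# List dashboards and match on name (names are not unique, IDs are).
paginator = qs.get_paginator("list_dashboards")
for page in paginator.paginate(AwsAccountId=account_id):
    for summary in page["DashboardSummaryList"]:
        print(summary["DashboardId"], summary["Name"])

# Given a dashboard ID, the source analysis ARN appears in describe_dashboard.
desc = qs.describe_dashboard(AwsAccountId=account_id, DashboardId="my-dashboard-id")
print(desc["Dashboard"]["Version"]["SourceEntityArn"])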
Yes, you can create a Lambda and trigger it with a cron job:
import boto3

quicksight = boto3.client('quicksight')
# Placeholders -- fill in your account, dataset, and a unique ingestion ID (all strings).
response = quicksight.create_ingestion(AwsAccountId='XXXXXXX',
                                       DataSetId='YYYY', IngestionId='ZZZZ')
https://aws.amazon.com/blogs/big-data/automate-dataset-monitoring-in-amazon-quicksight/
https://aws.amazon.com/blogs/big-data/event-driven-refresh-of-spice-datasets-in-amazon-quicksight/
I've been playing with this as well and ran into the same issue. Make sure that your permissions are set up properly for the data source and the data set by referencing the quicksight user as follows:
arn:aws:quicksight:{region}:xxxxxxxxxx:user/default/{user}
I would include all the QuickSight permissions found in the docs to start with and shave down from there. If nothing else, create the data source/set from the console, and then use the describe-* CLI calls to see what permissions they use.
It's kind of wonky.

Google Batch Geocoding API

I am using the Google Geocoding API for both forward and reverse geocoding in my projects. However, this API only works for a single address or a single pair of coordinates per request, not for batch processing. I know I can copy and paste up to 250 lines of addresses into the Batch Geo website to process, but that would be too limited and inefficient for me. Initially, I called the REST API via PHP cURL in my code, but for batch data I have no idea how to proceed, and I did not find any batch-processing API documentation defining the request URL, parameters, response, or status codes. Is there any resource I can find concerning a batch geocoding API? By the way, I found the same issue with the Time Zone API.
Google Maps APIs support one query per request. To batch-geocode multiple addresses, you need to send one request per address. Implementation is entirely up to you.
As miguev mentioned, Google does not allow batch geocoding. However, HERE does have a batch geocoder that you can use to geocode multiple addresses in one call.
I think there is some misunderstanding when you use the words "batch geocoding".
Sending one request at a time is the way to implement a batch geocoding service. You expect the vendor to handle the batch, whereas in your case the vendor lets you implement it locally.
Because your volume is so small, simply create a loop that runs over all of your rows, one at a time, sends them to the API, and gets the result back.
There are plenty of APIs for batch geocoding out there, and you should be OK using any of them.
In your case it would look like this:
start loop until end of collection
  $ curl -X GET 'https://csv2geo.com/api/geocode?id={email}&key={key}&searchtext={search_text}'
  get the response back
  {
    "hum_address": "781 Tremont Street,Boston,MA,2118",
    "address_components": {
      "number": "781",
      "street": "Tremont St",
      "city": "Boston",
      "state": "MA",
      "country": "USA",
      "zip": "02118"
    },
    "full_address": "781 Tremont St, Boston, MA 02118, United States",
    "lat": 42.33957,
    "lon": -71.08034,
    "relevance": 1
  }
  and store it in an array collection
end loop
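Since the question is about Google's endpoint specifically, the same loop in Python might look like this (the API key and addresses are placeholders; the response shape follows the public Geocoding API docs):

import requests

API_KEY = "<your_api_key>"  # placeholder
addresses = [
    "781 Tremont St, Boston, MA 02118",
    "1600 Amphitheatre Pkwy, Mountain View, CA",
]

results = []
for address in addresses:
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"address": address, "key": API_KEY},
        timeout=10,
    )
    data = resp.json()
    if data["status"] == "OK":
        location = data["results"][0]["geometry"]["location"]
        results.append((address, location["lat"], location["lng"]))
    else:
        results.append((address, None, None))  # e.g. ZERO_RESULTS, OVER_QUERY_LIMIT

print(results)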

How to add custom metered usage items in IBM Marketplace (AppDirect)

I am trying to do a full integration of a solution into IBM Marketplace (the one using AppDirect). There are many metering items available (Users, MBs, ...) but I can use none of them. Let's say, for example, we use "Places". I have checked the option "Allow custom metered usage", but that won't allow me to add this "Places" metering item to my pricing option. How can I achieve this?
Note: IBM has discontinued its Marketplace. This question is probably of no use anymore, but I decided not to delete it, as we never know if they will enable it back. Also, before the discontinuation announcement, I managed to get a reply from IBM stating that they don't allow custom unit types, and I was invited to use the generic "Item".
If you are billing a custom usage unit, the request looks like:
{
  "account": {
    "accountIdentifier": "{UUID}"
  },
  "items": [{
    "quantity": 5,
    "customUnit": "Places",
    "price": 2.99,
    "description": "some cool places"
  }]
}
Custom units use a different field name (customUnit) than the predefined units. I'm not sure which error you were getting back when attempting to bill usage, but if you were getting back a dump of expected unit values, that would explain it.
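For illustration, submitting that payload could look like this. The marketplace host is a placeholder and the plain bearer header is a simplification; AppDirect usage calls are normally OAuth-signed.

import requests

# Hypothetical marketplace host; AppDirect-style usage endpoint.
url = "https://marketplace.example.com/api/integration/v1/billing/usage"

payload = {
    "account": {"accountIdentifier": "<UUID>"},  # placeholder account UUID
    "items": [{
        "quantity": 5,
        "customUnit": "Places",  # custom units use "customUnit", not the predefined unit field
        "price": 2.99,
        "description": "some cool places",
    }],
}

resp = requests.post(url, json=payload, headers={"Authorization": "Bearer <token>"})
print(resp.status_code, resp.text)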