I am a beginner in AWS Sumerian, and I want to start a scene without clicking the start button, as shown in the image.
Please help me; thanks in advance.
You need to set an initial state in your state diagram, which may be the same as the one that runs when you press the start button. However, if your start button runs certain scripts, such as speech, you may need to begin with an AWS SDK Ready action and then transition.
Since I don't know your project, here is a sample tutorial from Amazon that you can walk through to see how it works: the Sumerian Speech tutorial, which will speak when run using an initial state.
And here are more: Sumerian Tutorials
Does anyone have a similar problem controlling Android apps (from the Google codelab fitness app) with their voice, using Google Assistant?
In my app, I tried to activate a widget using Google Assistant (the App Actions test tool), but Google Assistant doesn't give any response. I then ran the example code Google provides, but it has the same problem: the widget isn't invoked when I use the App Actions test tool to test the app.
Sample code Google provided:
https://github.com/actions-on-google/appactions-fitness-kotlin.git
The shortcut looks like this:
[screenshot of the shortcut]
Update:
After the app records a new running record, the widget won't respond when I try to trigger it again; it only worked the first time. Does anyone have a similar problem or know how to solve it?
The app updates the running record.
The widget cannot update/display the new running record.
The widget cannot be triggered to get the new running record.
I'm assuming you're talking about the codelab titled Integrate Android Widgets with Google Assistant.
I've been able to get this codelab to work - mostly.
When testing using the App Actions test tool:
Make sure you have uploaded the app to a project in the Google Play Console. You'll only need to do this once, even if you change your code.
Make sure the accounts you're using for Android Studio, the Play Console, and Google Assistant on your device are all the same.
Make sure you've compiled the app and deployed it to your device. The easiest way to do this is just to run it from Android Studio the first time.
When you create the preview, set the name and Locale.
Even though these are listed as "optional", they make it easier to debug.
The App name should be something you know will be recognized by voice and not used by any other app. I tend to use things like "Splat" and "Bogus".
Set the locale to en-US
Click "Create Preview"
Once the preview is created, you can test the widget running by
Setting the Intent to GET_EXERCISE_OBSERVATION
Changing the value in the "name" property to something like "run". ("Climbing" will also work, but tell you that you don't have that activity logged.)
Making sure the target device is the one you're testing on.
Clicking Run App Action
And you should see the widget appear in the Assistant on the device.
You don't need to use the test tool, however. Once you have created the preview with the Test Tool, you can just test it right on the device:
Starting the Assistant by long-pressing on the home button.
Saying or typing a request such as "How far have I run with Bogus" ("Bogus" being the name I've chosen this time).
If you have issues with the emulator, there are some other troubleshooting steps which may be useful depending on the error message you get.
Now, I did say "mostly". I have seen issues where it will stop responding at all, either via the Test Tool or directly via the Assistant. A few things that I've done that often (but not always) help:
Delete and then create the preview again.
Remember that the Preview is only available for a few hours.
Force stop the application.
Force stop the Google Assistant.
Even if you don't see a message, check out the Troubleshooting steps for "Sorry, I couldn't find that".
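On the update in your question (the widget keeps showing the old running record): a home-screen widget only redraws when something asks AppWidgetManager to update it, so after the app writes a new record you generally have to trigger your provider yourself. Here is a minimal Kotlin sketch, assuming your provider class is called StatsWidget (substitute whatever the codelab names it) and that its onUpdate() reads the latest record:

```kotlin
import android.appwidget.AppWidgetManager
import android.content.ComponentName
import android.content.Context
import android.content.Intent

// Call this right after the app saves a new running record so the widget
// provider (StatsWidget is a placeholder for your real provider class)
// receives another onUpdate() call and redraws with the fresh data.
fun refreshStatsWidget(context: Context) {
    val manager = AppWidgetManager.getInstance(context)
    val ids = manager.getAppWidgetIds(ComponentName(context, StatsWidget::class.java))
    if (ids.isEmpty()) return // no widget instance placed on the home screen

    val intent = Intent(AppWidgetManager.ACTION_APPWIDGET_UPDATE).apply {
        component = ComponentName(context, StatsWidget::class.java)
        putExtra(AppWidgetManager.EXTRA_APPWIDGET_IDS, ids)
    }
    context.sendBroadcast(intent)
}
```

Whether that fixes the Assistant-triggered case I can't say, but it rules out the most common cause of a widget that only ever shows the first record it loaded.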
Hello Stack Overflow friends:
Context:
My goal is to use an Amazon Lex bot to communicate via an SMS text channel using an Amazon Pinpoint phone number associated with my account. Users will send utterances via their native text client (i.e., the Messages application on their iPhone), and the bot will reply to them in the same channel.
I also want to include a 'middleware' layer: a Lambda function that extracts certain user utterances and/or the user's phone number and stores them in a DynamoDB table.
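Here is roughly what I have in mind for that Lambda, sketched in Kotlin for the JVM runtime. The table name, the response shape, and the event field names ("sessionId", "inputTranscript", "sessionState") are my assumptions about the Lex V2 format and would need to be checked against the documentation:

```kotlin
import com.amazonaws.services.lambda.runtime.Context
import com.amazonaws.services.lambda.runtime.RequestHandler
import software.amazon.awssdk.services.dynamodb.DynamoDbClient
import software.amazon.awssdk.services.dynamodb.model.AttributeValue
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest

// Sketch of a Lex code-hook Lambda that stores the session id (for the SMS
// channel this should relate to the sender's phone number) and the raw
// utterance in a DynamoDB table, then hands dialog management back to Lex.
class LexLoggingHandler : RequestHandler<Map<String, Any>, Map<String, Any>> {

    private val dynamo = DynamoDbClient.create()

    override fun handleRequest(event: Map<String, Any>, context: Context): Map<String, Any> {
        // Field names assume the Lex V2 event format; verify against the docs.
        val sessionId = event["sessionId"]?.toString() ?: "unknown"
        val utterance = event["inputTranscript"]?.toString() ?: ""

        dynamo.putItem(
            PutItemRequest.builder()
                .tableName("LexUtterances") // hypothetical table name
                .item(
                    mapOf(
                        "sessionId" to AttributeValue.builder().s(sessionId).build(),
                        "utterance" to AttributeValue.builder().s(utterance).build(),
                        "receivedAt" to AttributeValue.builder()
                            .s(System.currentTimeMillis().toString()).build()
                    )
                )
                .build()
        )

        // Minimal "keep going" response: delegate the next step back to Lex.
        // The exact response shape Lex V2 expects should be checked in the docs.
        @Suppress("UNCHECKED_CAST")
        val sessionState = (event["sessionState"] as? Map<String, Any>).orEmpty().toMutableMap()
        sessionState["dialogAction"] = mapOf("type" to "Delegate")
        return mapOf("sessionState" to sessionState)
    }
}
```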
Problem(s):
I found this tutorial and I am blocked [Blockers listed below].
There seems to be a disconnect between what I'm seeing in my AWS console and this tutorial (and the documentation on AWS), as well as many video tutorials I'm watching on YouTube - or maybe I'm doing something wrong?
Version 2? I observed that my AWS Lex console URL includes a "v2" ("https://console.aws.amazon.com/lexv2/home?region=us-east-1#bots"), but I don't see that "v2" in the various instructors' videos I've watched. That leads me to wonder whether V2 is a new version of Lex whose documentation hasn't been released yet. Here is a link to a video by one of the authors of the tutorial linked above; as you can see from the screenshot in his video, the URL isn't /lexv2/, it's just /lex/.
Screenshot from instructional video:
Screenshot from my AWS console:
Blockers / Questions:
1. The tutorial says (in Step 1): "Request a long code for your country." When I do that, the SMS capability is grayed out, indicating (to me, anyway) that the goal of this tutorial is not possible using a long code?
Question: As a workaround, I selected a toll-free number that has SMS capabilities. Is that permissible?
2. In Step 2, the tutorial says, "Use the default IAM role" - there is no default, so I selected one.
Question: Is that a good path forward?
3. Also in Step 2, the tutorial says, "When the bot finishes building, choose Publish. For Create an alias, enter Latest. Choose Publish." I see no "Publish" button, which is highly confusing, since in many, many tutorials I've watched on YouTube the instructors have that button visible.
Here is my screenshot of what I see [no "Publish" button]:
Here is the Amazon documentation tutorial with a "Publish" button.
And here is another tutorial I found online with a "Publish" button.
Question: Did I miss a step? (I did build and test the bot, and the controls to do that were at the bottom of the UI, not at the top as in all the tutorials I've found.) Or is it possibly V2 of Lex that has changed?
Assuming I can get past these blockers: in Step 3 of the tutorial, it says, "Under Execution role, choose View the LexPinpointIntegrationDemoLambda role."
Question: Not to be dense, but I'm unclear on how and where to do that. Can I get some direction on the exact steps, please?
Yes, the problem is that the tutorial, which I also followed, is based on Version 1 of the service and the console. In the lower left corner there is a button that says "Switch to V1 console".
After this, you will get the same interface as in the tutorial and you can continue with it.
AWS does not have the Publish button in Lex V2. You have to follow these steps to publish your bot in Lex V2:
1. Build the bot
2. Create the bot version
3. Create an alias
4. Associate that version with the alias
So once you create the bot version, it is considered published.
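If you ever want to script those steps instead of clicking through the console, the same operations exist in the Lex Models V2 API (BuildBotLocale, CreateBotVersion, CreateBotAlias). A rough Kotlin sketch using the AWS SDK for Java v2; the bot ID, locale, and alias name are placeholders, and the request fields are worth double-checking against the current SDK:

```kotlin
import software.amazon.awssdk.services.lexmodelsv2.LexModelsV2Client
import software.amazon.awssdk.services.lexmodelsv2.model.BotVersionLocaleDetails
import software.amazon.awssdk.services.lexmodelsv2.model.BuildBotLocaleRequest
import software.amazon.awssdk.services.lexmodelsv2.model.CreateBotAliasRequest
import software.amazon.awssdk.services.lexmodelsv2.model.CreateBotVersionRequest

fun main() {
    val lex = LexModelsV2Client.create()

    // Placeholders: substitute your own bot ID and locale.
    val botId = "REPLACE_WITH_BOT_ID"
    val localeId = "en_US"

    // 1. Build the DRAFT version of the bot for the locale.
    lex.buildBotLocale(
        BuildBotLocaleRequest.builder()
            .botId(botId)
            .botVersion("DRAFT")
            .localeId(localeId)
            .build()
    )

    // 2. Create a numbered version from the DRAFT locale.
    //    (These calls complete asynchronously; a real script would poll
    //    DescribeBotLocale / DescribeBotVersion before moving on.)
    val version = lex.createBotVersion(
        CreateBotVersionRequest.builder()
            .botId(botId)
            .botVersionLocaleSpecification(
                mapOf(
                    localeId to BotVersionLocaleDetails.builder()
                        .sourceBotVersion("DRAFT")
                        .build()
                )
            )
            .build()
    ).botVersion()

    // 3 + 4. Create an alias and point it at that version - the V2
    // equivalent of the old "Publish" button.
    lex.createBotAlias(
        CreateBotAliasRequest.builder()
            .botId(botId)
            .botAliasName("Latest")
            .botVersion(version)
            .build()
    )
}
```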
Hello everyone!
I can't see the Save button in my AWS Lambda function; I can only see the Test button.
I don't know the reason for this.
I searched on Google, but couldn't find an answer.
The Save button was there until yesterday, but today I can't see it.
I hope someone can help me.
Thanks.
The user interface changed overnight. Now you have to use the Deploy button:
You are right! In the new deployment UI, the "Save" button is integrated within each module, so you don't need to globally save all changes at once; instead, you can save only the particular setting. Likewise, you can deploy scripts directly from the Function code section.
I don't like any of the listed voice commands in Google Glass, and the review team didn't approve my own. I want my Glassware to be on the official store, so my question is:
Can I NOT use voice activation for an app and just start it using the touchpad?
Absolutely, it is possible to open an app without voice commands.
Instead of saying "Hello Glass" when the screen turns on, just tap on the touch pad, and it'll bring up a list of applications in chronological order of usage. You can scroll through them until you find the name of yours, and then tap to open it.
Is it possible to have different behaviors when a glassware is launched via "OK Glass" voice command vs touch menu selection?
Specifically we are trying to prompt voice recognition if the glassware is launched with "OK Glass" voice command, otherwise go direct to the glassware if it is launched from the touch menu.
Or, is there a way for an app to know in which way it was launched?
We are trying to emulate what Google Play Music Glassware does.
The GDK does not yet provide a way to do this. If you would like to see this feature added, please file an enhancement request in our issue tracker!
There is no published standard way. Perhaps you could explore the call stack on entry (e.g., look at what a stack trace would produce for the different launch states)?
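For example, a quick way to experiment with that (shown in Kotlin; the same applies in Java) is to dump the launching intent's action and a stack trace in onCreate and compare what you get for the two launch paths. Nothing here is a documented or guaranteed signal:

```kotlin
import android.app.Activity
import android.os.Bundle
import android.util.Log

class MainActivity : Activity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // Log the intent action that launched us; compare the value after an
        // "OK Glass" voice launch vs. a touch-menu launch.
        Log.d("LaunchProbe", "Launched with action=${intent?.action}")

        // Dump the current call stack, as suggested above, to see whether the
        // two launch paths differ in any consistent way.
        Log.d("LaunchProbe", Log.getStackTraceString(Throwable("launch probe")))
    }
}
```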