Does anyone have a similar problem controlling an Android app (from the Google codelab fitness app) by voice, using Google Assistant?
In my app, I tried to trigger the widget through Google Assistant (with the App Actions test tool), but the Assistant doesn't respond at all. I then ran the sample code Google provides, and it has the same problem: the widget isn't invoked when I test the app with the App Actions test tool.
Sample code Google provided:
https://github.com/actions-on-google/appactions-fitness-kotlin.git
The shortcut looks like this:
[screenshot of the shortcut definition]
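For readers who can't see the screenshot, a widget capability in `shortcuts.xml` for this codelab looks roughly like the sketch below, based on the App Actions widget documentation. The target class and parameter key shown here are illustrative and may differ from your project:

```xml
<!-- Sketch of a widget-fulfilled capability; class/key names are illustrative. -->
<capability android:name="actions.intent.GET_EXERCISE_OBSERVATION">
    <app-widget
        android:identifier="GET_EXERCISE_OBSERVATION"
        android:targetClass="com.devrel.android.fitactions.widget.StatsWidgetProvider">
        <parameter
            android:name="exerciseObservation.aboutExercise.name"
            android:key="activityType" />
        <extra
            android:key="requiresUnlockedDevice"
            android:value="true" />
    </app-widget>
</capability>
```

If the `android:targetClass` doesn't point at your actual `AppWidgetProvider`, the Assistant silently fails to invoke the widget, which matches the symptom described above.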
Update:
After logging a new running record, the app no longer responds when I try to trigger the widget; it only worked the first time. Does anyone have a similar issue or know how to solve it?
The app updates the running record.
The widget cannot update/display the new running record.
The widget cannot be triggered to fetch the new running record.
I'm assuming you're talking about the codelab titled Integrate Android Widgets with Google Assistant.
I've been able to get this codelab to work - mostly.
When testing using the App Actions test tool:
Make sure you have uploaded the app to a project in the Google Play Console. You'll only need to do this once, even if you change your code.
Make sure the account you're using for Android Studio, Play Console, and Google Assistant on your device are all the same.
Make sure you've compiled the app and deployed it to your device. The easiest way to do this is just to run it from Android Studio the first time.
When you create the preview, set the name and Locale.
Even though these are listed as "optional", they make it easier to debug.
The App name should be something you know will be recognized by voice and not used by any other app. I tend to use things like "Splat" and "Bogus".
Set the locale to en-US
Click "Create Preview"
Once the preview is created, you can test the widget running by
Setting the Intent to GET_EXERCISE_OBSERVATION
Changing the value in the "name" property to something like "run". ("Climbing" will also work, but it will tell you that you don't have that activity logged.)
Making sure the target device is the one you're testing on.
Clicking Run App Action
And you should see the widget appear in the Assistant on the device.
You don't need to use the test tool, however. Once you have created the preview with the Test Tool, you can just test it right on the device:
Start the Assistant by long-pressing on the home button.
Saying or typing a request such as "How far have I run with Bogus" ("Bogus" being the name I've chosen this time).
If you have issues with the emulator, there are some other troubleshooting steps which may be useful depending on the error message you get.
Now, I did say "mostly". I have seen issues where it stops responding entirely, either via the Test Tool or directly via the Assistant. A few things I've done that often (but not always) help:
Delete and then create the preview again.
Remember that the Preview is only available for a few hours.
Force stop the application.
Force stop the Google Assistant.
Even if you don't see a message, check out the troubleshooting steps for "Sorry, I couldn't find that".
I'm working on a conversation flow for smart speakers with the Google Actions Builder console, and I'm using Google Cloud Functions for webhooks.
However, since this morning I cannot change, simulate, or release my Google Actions project.
I cannot save any changes in the 'Develop' menu of the Actions console
(error message: "An error occurred saving the scene")
→ [screenshot showing the error in Develop]
I cannot test any version in the 'Simulator' menu of the Actions console
(error message: "We're sorry, but something went wrong. Please try again")
→ [screenshot showing the error when saving changes in Develop]
I can't release an alpha or beta version in the 'Deploy' menu of the Actions console
(error message: "Error submitting Assistant app")
→ [screenshot showing the error on release]
No logs related to these problems appear in Google Cloud Functions.
And I actually haven't changed anything since the most recent release. How can I deal with this?
UPDATE: another solution is to press Ctrl+F5, or to clear your browser data and cache. This works pretty well, especially if you have multiple tabs open.
It is happening to more people. Maybe this can help:
See this link or this one.
The Google console doesn't offer much information about those errors, only about model errors. When that happens, and if your model is too big, I suggest reducing it a little (or trying a well-known version that works) and deploying/running the simulator again. That way you can find out whether it's your fault or Google is updating their system.
It is working again for me, after a few hours of throwing errors.
I've read in the Pepper/Aldebaran documentation, as well as in posts here, that it is possible to launch Pepper behaviours directly from the tablet. My problem is that the projects and behaviours I create with Choregraphe do not appear on the tablet. I can launch the installed package/project without any problem using the trigger conditions, but I have no idea how to launch them from the tablet. Thus my question is where I can find them on the tablet. I ticked the box "may start on user request" in the properties dialog.
This was also new to me, but I figured something out.
In Choregraphe, you need to start the application "j-tablet-browser".
A prompt will appear on the tablet: "Allow USB Debugging on Tablet".
Select "Allow".
Another prompt will appear on the tablet: "Select a Home app".
Select "Launcher 3".
Now you can open an Android app called RALViewer.
This app shows you the icons of your installed Choregraphe projects.
After restarting Pepper, the tablet should show the normal Android screen and ask you again to select a Home app. You can also make "Launcher 3" the default starting app.
It may also be possible to set RALViewer as the default starting app, so you don't need to navigate in Android after startup.
I don't know if that's what was meant in the tutorials, but it offers a way to start your app from the tablet.
Thanks for the explanation! I had already come across RALViewer, but it did not show the applications I uploaded. In the meantime I have found that newly uploaded apps do not appear immediately, but only after a reboot or some other condition that I could not determine yet. Anyhow, this seems to answer the question.
I am looking to dive into a project using AppSync. So far I have been able to find plenty of articles online giving all the steps for getting a sample project running, but none of them seem to touch on how one deals with it in local development or in a CI/CD environment. It's probably my "old school" idea of how development usually works, but I was expecting some way to simulate enough of the environment locally to do development and run unit tests, yet I can't seem to find any way to do just that. And when I get to the UI portion, I have no idea how to run against a local dev instance of the backend.
Do people not develop in this way anymore, opting to instead stand up a "development stack"? I just want to make sure I am not painting myself into a corner in the future.
Short answer is no. Here are your options:
AppSync Emulator for the Serverless Framework. It's a nice emulator, but still limited, and in my opinion it differs quite a bit from the real API.
We ended up writing separate unit tests for the VTL templates and comparing the resulting query to an appropriate fixture. You could deploy a full-featured VTL parser on Java, but there are simpler solutions: a Python library, AirSpeed; for JS you could use the one from the AppSync Emulator.
Here is a way to test your AppSync resolvers directly in the AWS console.
In the AppSync console, in the Schema tab, select a resolver and you will land on an "Edit resolver" page.
Click the "Select Test Context" button to simulate a context received by your resolver.
Then click "Run test".
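The test context is just a JSON document mimicking what AppSync passes to your resolver. A minimal hand-written example (the field values here are invented) might look like:

```json
{
  "arguments": {
    "id": "123"
  },
  "source": {},
  "identity": {
    "username": "testuser"
  },
  "result": {
    "id": "123",
    "name": "Example item"
  }
}
```

The `arguments` block feeds `$ctx.args` in the request template, and `result` feeds `$ctx.result` in the response template, so you can exercise both mappings from the console.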
I don't like any of the listed voice commands for Google Glass, and the review team didn't approve my own. I want my Glassware to be on the official store, so my question is:
Can I NOT use voice activation for an app, and just start it using the touchpad?
Absolutely, it is possible to open an app without voice commands.
Instead of saying "OK Glass" when the screen turns on, just tap on the touchpad, and it'll bring up a list of applications in chronological order of usage. You can scroll through them until you find the name of yours, then tap to open it.
Is it possible to have different behaviors when a glassware is launched via "OK Glass" voice command vs touch menu selection?
Specifically we are trying to prompt voice recognition if the glassware is launched with "OK Glass" voice command, otherwise go direct to the glassware if it is launched from the touch menu.
Or, is there a way for an app to know in which way it was launched?
We are trying to emulate what Google Play Music Glassware does.
The GDK does not yet provide a way to do this. If you would like to see this feature added, please file an enhancement request in our issue tracker!
There is no published standard way. Perhaps you could explore the call stack on entry (e.g. look at what a stack trace would produce for the different launch paths).