How can I create a simple time-based notification for Google Glass? - google-glass

I'm basically working on an alarm app, where at certain times (e.g. 7pm, 10:30am), Glass should alert the user. I think this will work by creating a live card, but I don't know how to create one at a certain time and I can't find anything online about dealing with this. Any ideas?

You can look at the Timer GDK sample project on the Google Glass GitHub repository.
To give it more alarm functionality, you can use the java.util.Calendar class to compute when the alarm should fire.
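For the scheduling part, here is a minimal sketch using java.util.Calendar to compute the next occurrence of a wall-clock time (e.g. 7pm). On Glass you would then hand the resulting timestamp to Android's AlarmManager with a PendingIntent, and publish your live card from the receiver when it fires; that wiring is noted in comments and should be checked against the Android docs.

```java
import java.util.Calendar;

public class AlarmTime {
    // Returns the epoch millis of the next occurrence of hour:minute
    // in the device's time zone. If that time has already passed today,
    // it rolls over to tomorrow.
    static long nextTrigger(long nowMillis, int hour, int minute) {
        Calendar cal = Calendar.getInstance();
        cal.setTimeInMillis(nowMillis);
        cal.set(Calendar.HOUR_OF_DAY, hour);
        cal.set(Calendar.MINUTE, minute);
        cal.set(Calendar.SECOND, 0);
        cal.set(Calendar.MILLISECOND, 0);
        if (cal.getTimeInMillis() <= nowMillis) {
            cal.add(Calendar.DAY_OF_MONTH, 1);
        }
        return cal.getTimeInMillis();
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long trigger = nextTrigger(now, 19, 0); // next 7pm
        System.out.println("ms until next alarm: " + (trigger - now));
        // On Glass/Android you would then schedule it, roughly:
        //   alarmManager.set(AlarmManager.RTC_WAKEUP, trigger, pendingIntent);
        // and publish your LiveCard from the BroadcastReceiver or Service
        // that the PendingIntent fires.
    }
}
```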

Related

How to create and distribute SteamVR dashboard overlays

To be totally clear: this question is about SteamVR dashboard overlays specifically, not regular overlays.
I have been playing around with the C++ SteamVR SDK and working on some overlay application prototypes lately. Something I have not managed to do so far is get a dashboard overlay to show up. The error I get when I call CreateDashboardOverlay is VROverlayError_PermissionDenied. I'm guessing that this is because I need to be authenticated with a SteamVR developer account, which I don't currently have. Can anyone verify that? There doesn't seem to be any public documentation on this beyond what's in openvr.h and the OpenVR GitHub docs page, which is somewhat sparse.
I'm also guessing that any dashboard overlay application would need to be distributed through the official Steam store, but again I can't find anything official on that. I suspect that Valve would require this since otherwise any old malware that happens to be running on the system could easily create an official-looking dashboard overlay.
Note again that I am referring specifically to dashboard overlays. I can get regular overlays to show up just fine.
There are a few repos on GitHub with implementations of SteamVR overlays (https://github.com/Hotrian/OpenVRDesktopDisplayPortal, for example), but I have yet to find one that actually creates a dashboard overlay.
Any info or links to documentation I'm somehow missing would be greatly appreciated. I'm starting to think I might be missing something obvious.
Thanks
Edit for clarity:
My questions are: Am I getting the permission denied error when calling CreateDashboardOverlay because I need to satisfy some kind of authentication requirement, such as having a Steam developer account? And do SteamVR dashboard overlay apps need to be distributed via an official channel?
On further review it appears I was misinterpreting my own debug output and reading a bit too much into it because the conclusions sort of made sense in my mind.
The CreateDashboardOverlay call was working fine. Later on in my code I was calling ShowOverlay, which is not allowed for dashboard overlays (they are shown by opening them from the SteamVR dashboard UI).
My dashboard overlay is working fine after all.
To summarize, the answer to both of my questions is no. No Steam developer status is needed to create a dashboard overlay and SteamVR dashboard overlay apps do not need to be distributed through any kind of official channel.

Amazon Echo - Push a message to the device

I have integrated my amazon echo device with the amazon portal associated to my account. I was able to create my own custom question with the Alexa Skills Kit and process with an AWS Lambda function to generate a response.
My question is: is it possible to programmatically "push" a message to the Echo device? For example, I would like it to speak without having to ask it a question, i.e. to do something at a specific moment.
If it is possible, could you please share any sample code to achieve this?
It is not currently possible, but it is an oft-requested feature on the AWS forums.
http://forums.developer.amazon.com/forums/thread.jspa;jsessionid=EC0D457A400B594DD0F0561EEB43A8FA?messageID=17713&#17713
I've not done this myself, but it seems the Alexa Voice Service could do the trick. It allows processing of voice from any type of audio capture and sends it to the Alexa service. It seems possible you could record the proper phrase into a sound file and send that to AVS, thus triggering the Alexa service.
I know it's capable, but Amazon hasn't offered it as a feature yet. If you go to the Echo web site http://alexa.amazon.com/spa/index.html#cards, then Settings, Connected Home, and select Discover devices, the Echo will perform a command triggered from the web site and will speak when it completes. I didn't have to say a word.
From what I read in an article about notifications here, the plan was that you would still have to ask Alexa to tell you your notifications: they would build up throughout the day, and your device would light up and chime to let you know you had something to listen to, rather than allowing Alexa to speak up any time she feels like it.
That seems somewhat pointless if you don't get the notification at the moment it was sent. For example, a smart home connected device could trigger an alert to let you know you left your door open too long, or the app could automatically read you the weather when you get up and turn on the coffee pot in the morning. If you have to ask for the notifications, you may as well just do the asking and requesting from the beginning.
Maybe I'm wrong, but that's how it reads to me.
Surely what we want, with opt-in permission per skill, is to let Alexa say whatever she likes, whenever she likes, from the skills we have set up to send such commands, without us having to notice we have notifications and then ask what they are.
Just for people who stumble across this question in 2021:
The solution is to use Amazon Proactive Events.
Your skill has to request the notification permission and subscribe to a specific event; then you can generate an access token and POST events to the Amazon API.
It took me some hours to figure this out because Amazon offers several things that all sound quite similar, but some are deprecated (ASK CLI v1) and others are only for Alexa device manufacturers.
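To make the "POST events" step concrete, here is a sketch that builds the JSON body for a Proactive Events request. The event name AMAZON.MessageAlert.Activated and the field names follow the Proactive Events documentation as I recall it, so verify them against the current docs; you would POST this body with a bearer token obtained via an LWA client-credentials grant (scope alexa::proactive_events) to the Proactive Events endpoint for your stage.

```java
import java.time.Instant;
import java.util.UUID;

public class ProactiveEventBody {
    // Builds a Proactive Events request body for a "message alert" event.
    // Field and event names are taken from the public docs; double-check
    // them, since the schema differs per event type.
    static String messageAlertEvent(String creatorName) {
        String now = Instant.now().toString();
        String expiry = Instant.now().plusSeconds(3600).toString();
        return "{"
            + "\"timestamp\":\"" + now + "\","
            + "\"referenceId\":\"" + UUID.randomUUID() + "\","
            + "\"expiryTime\":\"" + expiry + "\","
            + "\"event\":{"
            +   "\"name\":\"AMAZON.MessageAlert.Activated\","
            +   "\"payload\":{"
            +     "\"state\":{\"status\":\"UNREAD\",\"freshness\":\"NEW\"},"
            +     "\"messageGroup\":{\"creator\":{\"name\":\"" + creatorName + "\"},\"count\":1}"
            +   "}"
            + "},"
            + "\"relevantAudience\":{\"type\":\"Multicast\",\"payload\":{}}"
            + "}";
    }
}
```

In a real skill you would send this with any HTTP client; the development-stage endpoint path (something like /v1/proactiveEvents/stages/development on the Alexa API host for your region) should be taken from the documentation rather than this sketch.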
Woohoo! It sounds like Amazon may enable push notifications this fall.
Digital Trends Article
Amazon is expected to establish guidelines for developers and manufacturers so that Alexa remains classy and doesn’t become an interrupting nag.

Can we start making a real production app using the sneak peek GDK?

At this moment we have a sneak peek GDK, and there are rumors that the final GDK will come by this summer along with a publicly available Google Glass device.
We plan to build our Google Glass app on the GDK, and at this moment we can only use the sneak peek GDK. So we basically plan to keep building as new GDK releases appear, so that this summer we can publish our GDK apps as soon as Google starts accepting them.
How safe is it to start building against the existing GDK? Can anyone confirm it will not change drastically, so we don't end up in an ever-changing loop?
I see that Glass guys are watching this tag so I hope someone of them can give us a direction.
[Disclaimer: I am another Glass Explorer and not a Google employee ... however I have experience in several large corporations involved in software.]
I would expect to have to make minor, and perhaps major, adjustments in any Glassware application development that we do. In fact, as we find anomalies or other inconsistencies, I would hope that our feedback and requests would actually help shape the initial non-beta GDK release. If we get into a "continually updating" cycle as the GDK evolves, so be it.
Just my opinion and expectation. We will focus on modularizing and hiding important elements so changes to match a new GDK can be contained.
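The "modularizing and hiding important elements" idea can be sketched concretely: put your own small interface between the app and the GDK, so a changed GDK API is absorbed in one adapter class rather than everywhere. The names below (GlassCardPublisher, AlarmFeature) are hypothetical, and the production implementation would wrap the GDK's LiveCard APIs.

```java
// Our own seam: the rest of the app depends only on this interface,
// never on GDK classes directly.
interface GlassCardPublisher {
    void publish(String cardId, String text);
    void unpublish(String cardId);
}

// A recording fake, usable in unit tests and as a stand-in until the
// final GDK lands. The real implementation would delegate to the GDK
// (e.g. com.google.android.glass.timeline.LiveCard).
class RecordingPublisher implements GlassCardPublisher {
    final java.util.List<String> log = new java.util.ArrayList<>();
    public void publish(String cardId, String text) { log.add("publish:" + cardId + ":" + text); }
    public void unpublish(String cardId) { log.add("unpublish:" + cardId); }
}

// Application logic stays GDK-agnostic: if the GDK changes, only the
// adapter implementing GlassCardPublisher needs to be rewritten.
class AlarmFeature {
    private final GlassCardPublisher publisher;
    AlarmFeature(GlassCardPublisher publisher) { this.publisher = publisher; }
    void fire() { publisher.publish("alarm", "Wake up!"); }
}
```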

Can I Develop Google Glass Apps Using the Mirror API Without the Hardware?

I went through the Google docs for the Mirror API a few months ago and it mentioned that only developers with the actual Glass hardware would be able to test apps through the online sandbox. Google took the Mirror API out of its "testing" phase last week. Does this mean you no longer need an actual pair of Glass to test out apps or do the same restrictions apply?
That is partly correct. Anyone can add the Mirror API to a project using the API Console or Cloud Console. See also Issue #2 which was opened on this issue and recently closed by a project member.
"Testing" the apps, however, is still a bit tricky. Aside from reading the timeline back using the same Client ID/Secret, you have no way to know that the card has been sent, and no way to know what the card looks like.
Using Glass is still the only real way to fully understand and appreciate the UX.

Accessing microphone from a service on Glass

I would like a service to access the microphone (and do some signal processing on it, a bit like what Google Music does to recognize songs).
Is there a public API for that? I can't seem to find it :/
Have you tried the AudioRecord class in Android? That should do everything you need. You might also find the waveform sample on the Google Glass GitHub page to be a useful example.
Keep in mind that recording audio from a service (as in a background service) might be dangerous since other applications could need the microphone for voice recognition and so forth. Using an immersion for this might be a better approach.
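To illustrate the signal-processing side, here is a small pure-Java sketch of the kind of per-buffer computation you might run on the PCM data AudioRecord hands you: a normalized RMS level, which is a common first step before anything fancier like fingerprinting. The AudioRecord parameters in the comments (MIC source, 16 kHz, mono, 16-bit PCM) are typical Android values, not something Glass-specific; check them against the Android docs.

```java
public class AudioEnergy {
    // Root-mean-square level of one 16-bit PCM buffer, normalized to [0, 1].
    //
    // On Android/Glass you would fill such buffers roughly like this
    // (sketch only; verify parameters against the AudioRecord docs):
    //   int size = AudioRecord.getMinBufferSize(16000,
    //       AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    //   AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC,
    //       16000, AudioFormat.CHANNEL_IN_MONO,
    //       AudioFormat.ENCODING_PCM_16BIT, size);
    //   rec.startRecording();
    //   int read = rec.read(buf, 0, buf.length);
    //   double level = rms(buf, read);
    static double rms(short[] pcm, int count) {
        long sumSquares = 0;
        for (int i = 0; i < count; i++) {
            sumSquares += (long) pcm[i] * pcm[i];
        }
        // 32768 is the full-scale magnitude of a 16-bit sample.
        return Math.sqrt((double) sumSquares / count) / 32768.0;
    }
}
```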