OpenGL in live cards? - google-glass

I've been playing with the Glass GDK and Glass 'native' (Java) development in general. I've got an OpenGL app that works well enough on Glass (using standard Android conventions), and I'm looking to port it to the GDK to take advantage of voice triggers and whatnot.
While I can certainly use it easily enough as an Immersion (I think, anyway), what I'd like to do is use it as a Live Card. I'm just not sure how possible or practical this is. The documentation seems to imply that this should be possible with a high-frequency rendering card, but before I dive in I was hoping someone with more experience could weigh in.
(Forgive me if this is obvious -- I'm new to Android, having spent the last few years in iOS/Obj-C land.)

XE16 supports OpenGL in live cards. Use the class GlRenderer: https://developers.google.com/glass/develop/gdk/reference/com/google/android/glass/timeline/GlRenderer

I would look at your app and determine whether you want more user input or not, and whether you want it to live in a specific part of your Timeline or just have it launched when the user wants it.
Specifically, since Live Cards live in the Timeline, they will not be able to capture the swipe-backward or swipe-forward gestures, since those navigate the Timeline. See the "When to use Live Cards" section of: https://developers.google.com/glass/develop/gdk/ui/index
If you use an Immersion, however, you will be able to use those swipe-backward and swipe-forward gestures as well as these others: https://developers.google.com/glass/develop/gdk/input/touch This will give you complete control over the UI and touchpad, with the exception that swipe down should exit your Immersion.
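For reference, capturing those gestures in an Immersion is done with the GDK touchpad GestureDetector described in that link. A rough sketch of the pattern (the Activity name is just a placeholder, and what you do in each case is up to you):

    import android.app.Activity;
    import android.os.Bundle;
    import android.view.MotionEvent;

    import com.google.android.glass.touchpad.Gesture;
    import com.google.android.glass.touchpad.GestureDetector;

    public class MyImmersionActivity extends Activity {

        private GestureDetector mGestureDetector;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // setContentView(...) with whatever View you render into.

            mGestureDetector = new GestureDetector(this);
            mGestureDetector.setBaseListener(new GestureDetector.BaseListener() {
                @Override
                public boolean onGesture(Gesture gesture) {
                    switch (gesture) {
                        case TAP:
                            // tap on the touchpad
                            return true;
                        case SWIPE_LEFT:
                        case SWIPE_RIGHT:
                            // forward/backward swipes are yours to handle in an Immersion
                            return true;
                        default:
                            return false; // let swipe down etc. behave normally
                    }
                }
            });
        }

        // On Glass, touchpad input arrives as generic motion events.
        @Override
        public boolean onGenericMotionEvent(MotionEvent event) {
            return mGestureDetector.onMotionEvent(event);
        }
    }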
The downside is that once the user exits your Immersion, they will need to start it again, likely with a Voice Trigger, whereas a Live Card can live on as part of your Timeline.
You should be able to do your rendering either in a Surface, which a LiveCard can use, or in whatever View you choose to put in your Activity, which is what an Immersion is. GLSurfaceView, for example, may be what you need, and it internally uses a Surface: http://developer.android.com/guide/topics/graphics/opengl.html Note that you'll want to avoid RemoteViews, but I think you already figured that out.
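For the Immersion route specifically, the stock Android pattern from that link works as-is on Glass: make a GLSurfaceView the content view of your Activity and hand it a Renderer. A minimal sketch (the class name is mine):

    import android.app.Activity;
    import android.opengl.GLES20;
    import android.opengl.GLSurfaceView;
    import android.os.Bundle;

    import javax.microedition.khronos.egl.EGLConfig;
    import javax.microedition.khronos.opengles.GL10;

    public class GlImmersionActivity extends Activity {

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            GLSurfaceView view = new GLSurfaceView(this);
            view.setEGLContextClientVersion(2); // request an OpenGL ES 2.0 context
            view.setRenderer(new GLSurfaceView.Renderer() {
                @Override
                public void onSurfaceCreated(GL10 gl, EGLConfig config) {
                    GLES20.glClearColor(0f, 0f, 0f, 1f);
                }

                @Override
                public void onSurfaceChanged(GL10 gl, int width, int height) {
                    GLES20.glViewport(0, 0, width, height);
                }

                @Override
                public void onDrawFrame(GL10 gl) {
                    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
                    // ... draw your scene here ...
                }
            });
            setContentView(view);
        }
    }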


Make the Windows 10 Narrator speak some text

I'm making a GUI application in C++.
Is there a simple way to make the Windows 10 Narrator speak some text?
Given, of course, that it is currently active.
A trick that sometimes works is to select some text in a text field and then briefly focus it. But:
It doesn't work all the time.
The focus is moved out and back. Even if it's only for a short time, it's invasive and may disturb whatever the user is currently doing.
The text field must be present somewhere on screen, which is not always desired.
If possible, I would like a solution without these three issues.
Other screen readers, in particular Jaws and NVDA, provide an API to do this.
I'm even the author of a library, UniversalSpeech, which allows making the currently running screen reader, if any, speak text, abstracting away the need to detect yourself which one is running.
Given that Narrator has greatly improved over the last 3 or 4 releases of Windows 10, it would probably be interesting to support it, not only in my own program and for my particular use case, but for everybody using my library.
However, I can't find any documentation or anything telling me whether Narrator has an API similar to those of Jaws or NVDA.
In fact, if there is currently no such API for Narrator, it would probably be worth suggesting that Microsoft add one.
Note that this question is different from questions such as this one,
where the answer suggests using the speech API directly. Using a screen reader's API rather than the speech API directly has great benefits:
Screen reader users are used to specific voice settings (voice, rate, pitch, language and regional accents, etc.). The default settings in the control panel may not be similar at all. This implies that
either the user must configure speech settings in the control panel, which is global for all applications; not very good;
and/or the application must manage its own speech settings, which is rather strange in a business application that isn't at all devoted to speech; it would be odd to find speech settings in a financial app, for example.
Using both a screen reader and an independent speech engine simultaneously means that both can speak at the same time, which is of course extremely annoying. In practice it happens quite often, as I have already tested.
So, is there a simple way to make Narrator speak some text?
My program is in C++ and the library is in C, so in theory I have access to the whole Win32 API, through LoadLibrary/GetProcAddress if needed.
Please don't give any C# or Visual Studio-dependent solution.
Thank you.
After quite a while, I am answering my own question!
In fact, the simplest way to make Narrator speak something is probably to:
Define some label as being a live region.
When something has to be spoken by Narrator, change the text of the label and then send an update notification.
Turning a label into a live region and sending a notification whenever the text changes is explained here:
https://learn.microsoft.com/en-us/accessibility-tools-docs/items/win32/text_livesetting
The live setting can be set to 0=off, 1=polite, or 2=assertive. The meaning of polite and assertive is the same as in WAI-ARIA.
Note, though, that at present (April 2021) Narrator always interrupts speech as soon as the text of the label is replaced, even in polite mode.
What changes in assertive mode is that the text also takes priority over interruptions due to normal keyboard navigation, i.e. you may not hear where you are when pressing Tab, arrow keys, etc.
For that reason, I don't recommend it at all.
Note also that the live setting only works on static text controls (the win32 STATIC window class).
It is totally ignored when applied to text fields and text areas (the win32 EDIT window class).
The label with the live setting still works when placed off-screen or even hidden.
As far as I know, Microsoft Narrator doesn't expose an API for developers; you can suggest a feature for it using the Feedback Hub app on Windows 10.
I want to achieve the same behavior as a live region, i.e. have some text read by the screen reader as it appears at the bottom of a multiline rich text field, but in a native GUI app. For info, I'm using wxWidgets.
You can use UI Automation events to subscribe to property changes on the rich text field so that you get notified when the text is updated. And since you are using a third-party control, you also need to implement providers for any third-party controls that do not include one. Below are links about UI Automation providers and UI Automation events for your reference:
UI Automation Events Overview
UI Automation Providers Overview

Google Glass GDK view error, sad cloud TableView

I have a simple Glass application with a Live Card that displays fine when debugging on Google Glass; however, if I add a TableLayout (with or without rows) and debug on Glass, I get the sad cloud. The card uses no dependent resources like network connectivity.
As documented here, TableLayout is not one of the types of views/widgets that are supported by RemoteViews. (This is a restriction in the Android framework and not Glass-specific.) Whenever RemoteViews fail to inflate properly, it is manifested as this "sad cloud" image.
You may want to consider redesigning your layout to use nested LinearLayouts instead in order to get around this restriction.
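To make that concrete, here is a rough sketch of a Service publishing a live card whose layout sticks to RemoteViews-friendly views (nested LinearLayouts, TextViews, ImageViews); the class, layout, and menu Activity names are hypothetical:

    import android.app.PendingIntent;
    import android.app.Service;
    import android.content.Intent;
    import android.os.IBinder;
    import android.widget.RemoteViews;

    import com.google.android.glass.timeline.LiveCard;

    public class CardService extends Service {

        private static final String CARD_TAG = "my_card";
        private LiveCard mLiveCard;

        @Override
        public int onStartCommand(Intent intent, int flags, int startId) {
            if (mLiveCard == null) {
                mLiveCard = new LiveCard(this, CARD_TAG);

                // R.layout.my_card (hypothetical) should contain only nested
                // LinearLayouts, TextViews, etc. -- no TableLayout.
                mLiveCard.setViews(new RemoteViews(getPackageName(), R.layout.my_card));

                // A live card needs an action before it can be published.
                Intent menuIntent = new Intent(this, CardMenuActivity.class); // hypothetical
                mLiveCard.setAction(PendingIntent.getActivity(this, 0, menuIntent, 0));

                mLiveCard.publish(LiveCard.PublishMode.REVEAL);
            }
            return START_STICKY;
        }

        @Override
        public void onDestroy() {
            if (mLiveCard != null && mLiveCard.isPublished()) {
                mLiveCard.unpublish();
                mLiveCard = null;
            }
            super.onDestroy();
        }

        @Override
        public IBinder onBind(Intent intent) {
            return null;
        }
    }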

Use a Live Card as a Shortcut to my application

I have an application that my users would frequently need to access quickly and then dismiss.
I want to allow the user to pin my app as a live card, but with only the overhead of a lower-frequency card. I want to create a bookmark/shortcut that lets the user launch and use my activity and then quit normally.
When I use high-frequency cards, they make my application hang and freeze. Is there a better way to accomplish this?
Also, is this advisable from a UX/UI design standpoint?
Without more context on what your application does it's hard to know, but I'm hesitant to say this is a good way to go about it.
If it's just as a pure shortcut, it's probably a bad idea. Glass already has the "ok glass" menu paradigm which is supported by both touch and voice input, while a card would only be supported by touch. Further, a bunch of live cards as shortcuts probably doesn't scale well if the user has a number of applications which do that.
On the other hand, if your application shows information on the live card that the user would want to see on a regular basis (think Google now cards, etc) then this might be a good idea depending on how it's executed.
Again, it's hard to know without more context. Glass is a different paradigm than phones or desktops so be careful when importing concepts from those other interaction paradigms.
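If you do decide a live card fits your case (for example, it shows glanceable information), the usual low-overhead pattern is a low-frequency card whose tap action opens a small menu Activity from which the user can launch your main Activity or stop the card. A sketch of that menu Activity (all class and resource names are hypothetical):

    import android.app.Activity;
    import android.content.Intent;
    import android.view.Menu;
    import android.view.MenuItem;

    public class ShortcutMenuActivity extends Activity {

        @Override
        public boolean onCreateOptionsMenu(Menu menu) {
            getMenuInflater().inflate(R.menu.shortcut_card, menu); // hypothetical menu resource
            return true;
        }

        @Override
        public boolean onOptionsItemSelected(MenuItem item) {
            switch (item.getItemId()) {
                case R.id.action_open:   // "Open" -> launch the real Activity
                    startActivity(new Intent(this, MainImmersionActivity.class)); // hypothetical
                    return true;
                case R.id.action_stop:   // "Stop" -> stop the Service that owns the card
                    stopService(new Intent(this, ShortcutCardService.class));     // hypothetical
                    return true;
                default:
                    return super.onOptionsItemSelected(item);
            }
        }

        @Override
        public void onAttachedToWindow() {
            super.onAttachedToWindow();
            openOptionsMenu(); // show the menu as soon as the card is tapped
        }

        @Override
        public void onOptionsMenuClosed(Menu menu) {
            super.onOptionsMenuClosed(menu);
            finish(); // nothing to show behind the menu
        }
    }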

Detecting swipes to a specific Timeline Card at Mirror API Glassware

Is it possible to trigger an event at my server-side application (notification callback) when the user swipes to a specific timeline card of my Glassware?
I'm asking because I want to know whether the timeline card has been seen, if that is possible.
Detail: We can define menu items on timeline cards in Mirror API Glassware, and selections are passed to our notification callback servlet through the Mirror API. Thus we are able to handle those menu item selections on the server side and do some stuff. I'm looking for a similar pattern for detecting swipes to my timeline cards, if that is possible.
Thanks in advance.
This is not something you can do with the Mirror API. If you feel it is vitally necessary, you can request it at the Glass issue tracker, providing as many details and use cases as possible.
But I would really question why you want to do this and whether it is truly a "Glass-like" behavior that your users would be expecting. (And even whether the results are actually what you're expecting.) One of the core design principles is that Glass is a different user experience than non-Glass software. Users are not likely to swipe back to your card very often - they are more likely to see it and handle it shortly after it arrives, and less likely later. You can't assume that they will see your card at all if they are in the middle of other activities. If they swipe past it, they may be on their way to another card, in either direction, and you don't know whether they're on your card or just passing through on their way to another. Glass also tends to expect its wearers to make conscious decisions and actions, and these are reported to your Glassware; there are far fewer cases where passive actions are reported.
If it is important to your user that your card be seen, you may want to consider ways to repeatedly update the card, and its timestamp, either with or without a new alert, and have the user explicitly acknowledge seeing it. You should also be conscious of when this may be inappropriate or unexpected and allow the user to tailor it.
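To illustrate the "update the card again and ask for an explicit acknowledgement" idea, here is a rough sketch using the Mirror API Java client; the Mirror service object and the timeline item ID are assumed to come from your existing Glassware, and the menu item ID and text are arbitrary:

    import java.io.IOException;
    import java.util.Collections;

    import com.google.api.services.mirror.Mirror;
    import com.google.api.services.mirror.model.MenuItem;
    import com.google.api.services.mirror.model.MenuValue;
    import com.google.api.services.mirror.model.NotificationConfig;
    import com.google.api.services.mirror.model.TimelineItem;

    public class CardNudger {

        // Patching the item bumps its "updated" timestamp (and, if you haven't set
        // displayTime, where it sorts in the timeline); the DEFAULT notification
        // level plays an alert; the CUSTOM menu item comes back to your notification
        // callback when the user selects it, which is your explicit "seen" signal.
        public static void nudge(Mirror mirror, String itemId) throws IOException {
            TimelineItem patch = new TimelineItem()
                    .setText("Still pending -- tap to acknowledge")
                    .setNotification(new NotificationConfig().setLevel("DEFAULT"))
                    .setMenuItems(Collections.singletonList(
                            new MenuItem()
                                    .setAction("CUSTOM")
                                    .setId("acknowledge") // echoed back in the callback payload
                                    .setValues(Collections.singletonList(
                                            new MenuValue().setDisplayName("Got it")))));

            mirror.timeline().patch(itemId, patch).execute();
        }
    }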

Where can I find API documentation for Windows Mobile phone application skin?

I have to customize the look of Windows Mobile (5/6) dialer application. From bits and pieces of information and the actual custom skin implementations in the wild I know that it is actually possible to change a great deal. I am looking for ways to change the look and feel of the following screens:
Actual dialer (buttons, number display, etc.)
Incoming call notification
Outgoing call screen
In-call screen
At least in the HTC Fuze device there is a custom skin that can be enabled or disabled, and it is actually a dll.
Can anyone point me to a section in MSDN, any kind of sample code, or at least mention the keyword I should be looking for?
Edit: There seem to be a number of "skins" for download. How do they do it?
There is currently no API for the default phone dialer, and you can't replace it. The only people who can are the OEMs that make the devices.
I believe you can add a context menu extender, but I can't find the sample; that's about it.
As the article linked in the other post goes into, there are enough APIs in WM that you can write your own dialer and kind of replace it in most cases. Although you can detect incoming calls, you may be out of luck displaying a GUI in all situations (e.g. when the phone is locked).
There is an article series by Jeff Cogswell which could be very interesting for you.