Use a Live Card as a Shortcut to my application - google-glass

I have an application that my users would frequently need to access quickly and then dismiss.
I want to let the user pin my app as a live card, but with only the overhead of a low-frequency card. Essentially, I want a bookmark/shortcut that lets the user launch and use my activity and then quit normally.
When I use high-frequency cards they make my application hang and freeze. Is there a better way to accomplish this?
Also, is this advisable from a UX/UI design standpoint?

Without more context on what your application does it's hard to know, but I'm hesitant to say this is a good way to go about it.
If it's purely a shortcut, it's probably a bad idea. Glass already has the "ok glass" menu paradigm, which supports both touch and voice input, while a card would only support touch. Further, a bunch of live cards used as shortcuts probably doesn't scale well if the user has several applications that do the same thing.
On the other hand, if your application shows information on the live card that the user would want to see on a regular basis (think Google Now cards, etc.), then this might be a good idea, depending on how it's executed.
Again, it's hard to know without more context. Glass is a different paradigm than phones or desktops, so be careful when importing concepts from those other interaction paradigms.

Related

Make Windows 10 Narrator speak some text

I'm making a GUI application in C++.
Is there a simple way to make the Windows 10 Narrator speak some text?
Given, of course, that it is currently active.
A trick that sometimes works is to select some text in a text field and then focus it briefly. But:
It doesn't work all the time.
The focus is moved out and back. Even if it's only for a short time, it's invasive and may disturb whatever the user is currently doing.
The text field must be present somewhere on screen, which is not always desired.
If possible, I would like a solution without these three issues.
Other screen readers, in particular JAWS and NVDA, provide an API to do this.
I'm even the author of a library, UniversalSpeech, which lets you make the currently running screen reader, if any, speak text, abstracting away the need to detect yourself which one is running.
Given that Narrator has improved greatly over the last three or four releases of Windows 10, it would probably be worthwhile to support it, not only in my own program and for my particular use case, but for everybody in my library.
However, I can't find any documentation or anything telling me whether Narrator has an API similar to those of JAWS or NVDA.
In fact, if there is currently no such API for Narrator, it would probably be worth suggesting that Microsoft add one.
Note that this question is different from questions such as this one,
where the answer suggests using the speech API directly. Using a screen reader's API rather than the speech API directly has real benefits:
Screen reader users are used to specific voice settings (voice, rate, pitch, language and regional accents, etc.). The default settings in the control panel may not be similar at all. It implies that
either the user must configure speech settings in the control panel, which is global to all applications; not very good,
or the application must manage its own speech settings, in a business application that isn't at all devoted to speech; it would be rather strange to find speech settings in a financial app, for example.
Using both a screen reader and an independent speech engine simultaneously means that both can speak at the same time, which is of course extremely annoying. In practice it happens quite often; I have already tested this.
So, is there a simple way to make Narrator speak some text?
My program is in C++, the library is in C, so in theory I have access to the whole Win32 API, through LoadLibrary/GetProcAddress if needed.
Please don't give any C# or VisualStudio-dependent solution.
Thank you.
After quite a while, I'm answering my own question!
In fact, the simplest way to make Narrator speak something is probably to:
Define some label as being a live region.
When something has to be spoken by Narrator, change the text in the label and then send an update notification.
Turning a label into a live region and sending a notification whenever the text changes is explained here (a minimal Win32 sketch follows below):
https://learn.microsoft.com/en-us/accessibility-tools-docs/items/win32/text_livesetting
LiveSetting can be set to 0=off, 1=polite and 2=assertive. The meanings of polite and assertive are the same as in WAI-ARIA.
Though at present (April 2021), Narrator always interrupts speech as soon as the text of the label is replaced, even in polite mode.
What changes in assertive mode is that the text even takes priority over interruptions due to normal keyboard navigation, i.e. you may not hear where you are when pressing Tab, arrow keys, etc.
For that reason, I don't recommend it at all.
Note also that LiveSetting only works on static text controls (the Win32 STATIC window class).
It is completely ignored when applied to text fields and text areas (the Win32 EDIT window class).
The label with LiveSetting still works when placed off-screen or even hidden.
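Here is a minimal Win32/C++ sketch of this approach. It assumes a STATIC control whose handle is hLabel (a name made up for the example); IAccPropServices marks it as a polite live region, and NotifyWinEvent announces each text change with EVENT_OBJECT_LIVEREGIONCHANGED (available from the Windows 8 SDK onward). Treat it as an illustration rather than production code.

```cpp
// Sketch: mark a STATIC label as a live region, then speak by updating its text.
#include <initguid.h>      // instantiate the property GUIDs
#include <windows.h>
#include <oleacc.h>        // IAccPropServices, CLSID_AccPropServices
#include <uiautomation.h>  // LiveSetting_Property_GUID

static IAccPropServices* g_accProps = nullptr;

// Call once after CoInitialize(Ex), with the handle of the STATIC control.
bool MakeLabelLiveRegion(HWND hLabel)
{
    if (FAILED(CoCreateInstance(CLSID_AccPropServices, nullptr, CLSCTX_INPROC_SERVER,
                                IID_PPV_ARGS(&g_accProps))))
        return false;

    VARIANT var;
    var.vt = VT_I4;
    var.lVal = 1;  // 0 = off, 1 = polite, 2 = assertive
    return SUCCEEDED(g_accProps->SetHwndProp(hLabel, OBJID_CLIENT, CHILDID_SELF,
                                             LiveSetting_Property_GUID, var));
}

// Call whenever something should be spoken by the screen reader.
void Speak(HWND hLabel, const wchar_t* text)
{
    SetWindowTextW(hLabel, text);
    // Tell assistive technologies that the live region changed.
    NotifyWinEvent(EVENT_OBJECT_LIVEREGIONCHANGED, hLabel, OBJID_CLIENT, CHILDID_SELF);
}
```

In practice you would call MakeLabelLiveRegion once at startup and Speak whenever a message needs to be announced.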
As far as I know, Microsoft Narrator doesn't expose an API for developers; you can suggest a feature for it using the Feedback Hub app on Windows 10.
I want to achieve the same behavior as in a live region, i.e. have some text read by the screen reader as it appears at the bottom of a multiline rich text field, but in a native GUI app. For info, I'm using wxWidgets.
You can use UI Automation events to subscribe to property changes on the rich text field so that you get notified when the text is updated. And since you are using a third-party control, you also need to implement providers for any third-party controls that do not include one. Below are links about UI Automation providers and UI Automation events for your reference (a minimal client-side sketch follows the links):
UI Automation Events Overview
UI Automation Providers Overview
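A rough client-side sketch in C++ of subscribing to such a property change is shown below. It assumes the control already exposes a UIA Value pattern (as native EDIT/RICHEDIT controls do; a fully custom wxWidgets control would first need a provider as described in the links), and hwndRichText is a made-up name for the control's window handle. CoInitializeEx must have been called on the thread.

```cpp
// Sketch: watch a control's Value property via UI Automation (client side).
#include <windows.h>
#include <uiautomation.h>
#include <cstdio>

// Minimal event handler called whenever the watched property changes.
class TextChangeHandler : public IUIAutomationPropertyChangedEventHandler
{
    LONG m_ref = 1;
public:
    ULONG STDMETHODCALLTYPE AddRef() override { return InterlockedIncrement(&m_ref); }
    ULONG STDMETHODCALLTYPE Release() override
    {
        ULONG r = InterlockedDecrement(&m_ref);
        if (r == 0) delete this;
        return r;
    }
    HRESULT STDMETHODCALLTYPE QueryInterface(REFIID riid, void** ppv) override
    {
        if (riid == IID_IUnknown || riid == IID_IUIAutomationPropertyChangedEventHandler)
        {
            *ppv = static_cast<IUIAutomationPropertyChangedEventHandler*>(this);
            AddRef();
            return S_OK;
        }
        *ppv = nullptr;
        return E_NOINTERFACE;
    }
    HRESULT STDMETHODCALLTYPE HandlePropertyChangedEvent(
        IUIAutomationElement* /*sender*/, PROPERTYID propertyId, VARIANT newValue) override
    {
        if (propertyId == UIA_ValueValuePropertyId && newValue.vt == VT_BSTR)
            wprintf(L"Text is now: %ls\n", newValue.bstrVal);  // react to the update here
        return S_OK;
    }
};

// hwndRichText: window handle of the control to watch (hypothetical).
bool WatchTextChanges(HWND hwndRichText)
{
    IUIAutomation* uia = nullptr;
    if (FAILED(CoCreateInstance(CLSID_CUIAutomation, nullptr, CLSCTX_INPROC_SERVER,
                                IID_IUIAutomation, reinterpret_cast<void**>(&uia))))
        return false;

    IUIAutomationElement* element = nullptr;
    if (FAILED(uia->ElementFromHandle(hwndRichText, &element)))
    {
        uia->Release();
        return false;
    }

    PROPERTYID props[] = { UIA_ValueValuePropertyId };
    HRESULT hr = uia->AddPropertyChangedEventHandlerNativeArray(
        element, TreeScope_Element, nullptr, new TextChangeHandler(), props, ARRAYSIZE(props));

    element->Release();
    // In a real program, keep 'uia' alive for the lifetime of the subscription.
    return SUCCEEDED(hr);
}
```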

Communicating with a Blackstar ID Core Guitar Amplifier through its USB Port using C++

I currently own a Blackstar ID Core 10w amp. It has a lot of built-in guitar effects such as reverb, delay and modulation, all of which have various depths and levels. By connecting a USB cable from the amp to my computer, I'm able to use Blackstar's Insider software, which allows me to save these effects settings and switch to any of them with just a double click. However, the need for a double click makes it impossible to play your guitar and change effects during a song (which is what a pedal does).
However, I wanted to know if it's possible to use C++ to do something more ambitious than the manufacturer allows: I want to create software that plays a backing track (voice + drums but no guitar) of a song and lets the user set where during that song the effects should change, and to what. This way, one would be able to play a song from start to end without needing to worry about changing effects.
This would also be a school project, so it can't really be a "mouse manager" or anything of that sort. It would need to be something more robust.
FYI, as far as I know, Blackstar does not give us any API we could work with. So I'd like to know if this project is even possible and, if so, where I should start.
Thank you!
This existing project is likely to help provide clues for you to reverse engineer what Insider does and rewrite that in C++.
https://github.com/jonathanunderwood/outsider
I feel your pain regarding Blackstar's awful Insider software.
To answer your question of whether this project is even possible: of course it is; the Insider software is obviously able to control the amp via USB. You just have to figure out what its protocol is.
You can use a USB sniffer like this one to see what commands Insider sends to the amp when you perform actions. With enough experimentation you should be able to reverse-engineer the protocol.
It's probably easier than you think. As evidence I offer the Insider software itself, which is not very sophisticated. The settings are probably modeled more or less as a struct.
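As a starting point, a sketch like the following (using libusb-1.0; not Blackstar-specific, and the vendor/product IDs are something you would have to discover yourself, for instance from the sniffer captures or the outsider project above) lists the attached USB devices so you can locate the amp before replaying captured commands:

```cpp
// Sketch: enumerate USB devices with libusb-1.0 to find the amp's VID/PID.
// Build (Linux/MinGW): g++ list_usb.cpp $(pkg-config --cflags --libs libusb-1.0)
#include <cstdio>
#include <libusb-1.0/libusb.h>

int main()
{
    libusb_context* ctx = nullptr;
    if (libusb_init(&ctx) != 0)
        return 1;

    libusb_device** devices = nullptr;
    ssize_t count = libusb_get_device_list(ctx, &devices);
    for (ssize_t i = 0; i < count; ++i)
    {
        libusb_device_descriptor desc;
        if (libusb_get_device_descriptor(devices[i], &desc) == 0)
            std::printf("Bus %u Device %u: VID=%04x PID=%04x\n",
                        (unsigned)libusb_get_bus_number(devices[i]),
                        (unsigned)libusb_get_device_address(devices[i]),
                        desc.idVendor, desc.idProduct);
    }

    libusb_free_device_list(devices, 1);
    libusb_exit(ctx);
    return 0;
}
```

Once you know the device, libusb_open_device_with_vid_pid, libusb_claim_interface and the transfer functions let you replay the messages you captured with the sniffer; whether the amp expects HID, bulk or control transfers is exactly what the reverse engineering has to establish.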

Detecting swipes to a specific timeline card in Mirror API Glassware

Is it possible to trigger an event in my server-side application (notification callback) when the user swipes to a specific timeline card of my Glassware?
I ask because I want to know whether the timeline card has been seen, if that is possible.
Detail: we can define menu items for timeline cards in Mirror API Glassware, and selections are passed to our notification callback servlet through the Mirror API. Thus we are able to handle those menu item selections on the server side and do some work. I'm looking for a similar pattern for detecting swipes to my timeline cards, if that is possible.
Thanks in advance.
This is not something you can do with the Mirror API. If you feel it is vitally necessary, you can request it at the Glass issue tracker, providing as many details and use cases as possible.
But I would really question why you want to do this and if it is truly a "Glass-like" behavior that your users would be expecting. (And even if the results are actually what you're expecting.) One of the core design principles is that Glass is a different user experience than non-Glass software. Users are not likely to swipe back to your card very often - they are more likely to see it and handle it shortly after it arrives, and then less likely later. You can't assume that they will see your card at all if they are in the middle of other activities. If they swipe past it, they may be on their way to another card, in either direction, and you don't know if they're on your card on their way to another or not. Glass also tends to expect their wearers to make conscious decisions and actions on their part, and these are reported to your Glassware; there are far fewer cases where passive actions are reported.
If it is important to your user that your card be seen, you may want to consider ways to repeatedly update the card, and the timestamp on the card, either with or without a new alert, and have the user explicitly acknowledge seeing it. You should also be conscious of when this may be inappropriate or unexpected and allow the user to tailor it.

OpenGL in live cards?

I've been playing with the Glass GDK and Glass 'native' (Java) development in general. I've got an OpenGL app that works well enough on Glass (using standard Android conventions) and I'm looking to port it to the GDK to take advantage of voice triggers and whatnot.
While I can certainly use it easily enough as an Immersion (I think, anyway), what I'd like to do is use it as a Live Card. I'm just not sure how possible or practical this is. The documentation seems to imply that with a high-frequency rendering card this should be possible, but before I dive in I was hoping someone with more experience could weigh in.
(Forgive me if this is obvious -- I'm new to Android, having spent the last few years in iOS/Obj-C land.)
XE16 supports OpenGL in live cards. Use the class GlRenderer: https://developers.google.com/glass/develop/gdk/reference/com/google/android/glass/timeline/GlRenderer
I would look at your app and determine if you want to have more user input or not and whether you want it to live in a specific part of your Timeline or just have it be launched when the user wants it.
Specifically, since Live Cards live in the Timeline, they will not be able to capture the swipe backward or swipe forwards gestures since those navigate the Timeline. See the "When to use Live Cards" section of: https://developers.google.com/glass/develop/gdk/ui/index
If you use an Immersion, however, you will be able to use those swipe backward and forward gestures as well as these others: https://developers.google.com/glass/develop/gdk/input/touch This will give you complete control over the UI and touchpad, with the exception that swipe down should exit your Immersion.
The downside is that once the user exits your Immersion, they will need to start it again likely with a Voice Trigger, whereas a Live Card can live on in part of your Timeline.
You should be able to do your rendering either in a Surface, which a LiveCard can use, or in whatever View you choose to put in your Activity, which is what an Immersion is. GLSurfaceView, for example, may be what you need, and it internally uses a Surface: http://developer.android.com/guide/topics/graphics/opengl.html Note that you'll want to avoid RemoteViews, but I think you already figured that out.

C++ Usage Statistics

I have made an application in C++ and would like to know how to go about implementing a usage statistics system so that I may gather some data regarding how users use the program.
E.g. IP address, number of hours spent in the application, and OS used.
In theory I know I can code this myself if I must, but I was wondering if there is a framework available to make this easier to do. Unfortunately I was unable to find anything on Google.
Though there is no such framework that I know of, you can reduce the work required to retrieve all this information by using the approaches and techniques I have tried to describe below. Please, anybody, feel free to correct me.
Let's summarise which groups of information we need to complete the task:
User Environment Information. I suggest you look at Microsoft's WMI infrastructure, in particular the WMI classes: Desktop, File System, Networking, etc. Using these classes in your application can help you retrieve almost every kind of system information. But if that doesn't satisfy you, see #2.
Application and System Performance. Under these terms I mean overall system performance, processor count, processes running in the OS, etc. To retrieve this data you can use the NtQuerySystemInformation function (a sketch of calling it follows this list). With its help, you get access to detailed SystemProcessInformation and SystemProcessorPerformanceInformation (per-processor) data, and much more.
User Related Information. It's hard to find a framework for such things, so I suggest you simply start writing code with your requirements in mind:
counting how many times each button was pressed, each text field was changed, etc.
measuring the delay between consecutive actions in some kind of predefined sequence (for example, if you have a settings GUI form and you expect the user to fill in all required text fields quickly, time-delay measurements can tell you whether the user acted as expected or stalled at TextBox2 for five minutes).
anything else that could be of interest to you.
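For the performance item, a rough sketch of resolving NtQuerySystemInformation at run time and reading per-processor times might look like this (the struct layout follows the documented NtQuerySystemInformation reference; treat the whole thing as illustrative):

```cpp
// Sketch: query per-processor idle/kernel/user times via NtQuerySystemInformation.
#include <windows.h>
#include <winternl.h>   // SYSTEM_INFORMATION_CLASS, NTSTATUS
#include <cstdio>
#include <vector>

// Layout taken from the documented NtQuerySystemInformation reference.
struct ProcessorPerfInfo {
    LARGE_INTEGER IdleTime;
    LARGE_INTEGER KernelTime;
    LARGE_INTEGER UserTime;
    LARGE_INTEGER Reserved1[2];
    ULONG Reserved2;
};

using NtQuerySystemInformationFn =
    NTSTATUS (WINAPI*)(SYSTEM_INFORMATION_CLASS, PVOID, ULONG, PULONG);

int main()
{
    // The function lives in ntdll.dll, so resolve it at run time.
    auto query = reinterpret_cast<NtQuerySystemInformationFn>(
        GetProcAddress(GetModuleHandleW(L"ntdll.dll"), "NtQuerySystemInformation"));
    if (!query)
        return 1;

    SYSTEM_INFO si;
    GetSystemInfo(&si);
    std::vector<ProcessorPerfInfo> info(si.dwNumberOfProcessors);

    ULONG returned = 0;
    NTSTATUS status = query(SystemProcessorPerformanceInformation,
                            info.data(),
                            static_cast<ULONG>(info.size() * sizeof(info[0])),
                            &returned);
    if (status != 0)  // STATUS_SUCCESS == 0
        return 1;

    for (DWORD i = 0; i < si.dwNumberOfProcessors; ++i)
        std::printf("CPU %lu: kernel=%lld user=%lld idle=%lld (100-ns units)\n",
                    static_cast<unsigned long>(i),
                    info[i].KernelTime.QuadPart,
                    info[i].UserTime.QuadPart,
                    info[i].IdleTime.QuadPart);
    return 0;
}
```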
So, how could you implement the last item (User Related Information)? As for me, I'd do something like the following (some of it may seem hard to implement or pointless):
- creating a kind of base Counter class and deriving some controls (buttons, edits, etc.) from it;
- using Windows hooks for the mouse or keyboard and reading the child window handle (to recognise which control was hit, for example);
- using a Callback class that does all the "dirty" work (counting, measuring, performing additional actions).
You could store all this information in a text file, an SQLite database, or wherever you prefer. A minimal sketch of the hook-and-counter idea, including dumping the counts to a text file, follows below.
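Here is that sketch. It assumes a single-threaded Win32 GUI; the control is identified simply by its window class and caption, and the output path "usage_stats.txt" is just an example:

```cpp
// Sketch: count mouse clicks per control with a thread-local WH_MOUSE hook,
// then dump the counts to a text file when tracking stops.
#include <windows.h>
#include <fstream>
#include <map>
#include <string>

static std::map<std::string, unsigned long> g_clickCounts;
static HHOOK g_mouseHook = nullptr;

static LRESULT CALLBACK MouseHookProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION && wParam == WM_LBUTTONDOWN)
    {
        auto* info = reinterpret_cast<MOUSEHOOKSTRUCT*>(lParam);
        char cls[128] = {};
        GetClassNameA(info->hwnd, cls, sizeof(cls));       // window class of the control
        char title[128] = {};
        GetWindowTextA(info->hwnd, title, sizeof(title));  // e.g. the button caption
        ++g_clickCounts[std::string(cls) + " / " + title];
    }
    return CallNextHookEx(g_mouseHook, code, wParam, lParam);
}

// Install a hook limited to the current GUI thread (no DLL injection needed).
void StartUsageTracking()
{
    g_mouseHook = SetWindowsHookExA(WH_MOUSE, MouseHookProc, nullptr, GetCurrentThreadId());
}

// Remove the hook and write one "control <tab> count" line per entry.
void StopUsageTrackingAndSave(const char* path)  // e.g. "usage_stats.txt"
{
    if (g_mouseHook)
        UnhookWindowsHookEx(g_mouseHook);
    std::ofstream out(path);
    for (const auto& entry : g_clickCounts)
        out << entry.first << "\t" << entry.second << "\n";
}
```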
I would recommend taking a look at DeskMetrics. This StackOverflow post summarizes the issue.
Building your own framework could take you months of development (apart from maintenance). With something like Trackerbird Software Analytics you can integrate a DLL with your app and start tracking within 30 minutes, and you get all the nice real-time visualizations.
Disclaimer: I am affiliated with the company.