Google Glass bone conduction transducer as input device? - google-glass

Does Google Glass's bone conduction transducer also work as an input device, and if so, how can one access its readings?
EDIT: Let me clarify why I ask.
According to Catwig's teardown, "it appears to double as a tactile switch". It's hard to tell from the pictures, so I was wondering how sensitive the switch is, and whether it could be used to detect vibrations in the skull. If so, it could be used to improve voice command accuracy by identifying which sounds originate from the wearer.

tl;dr: The speaker on Glass is not a button. Do not press it.
The speaker may appear to double as a tactile switch, but it does not. Depressing it produces a clicking sound, but it generates no signal and is not designed to be pressed.

Modifying Gcode mid-print in response to sensor feedback for concept printer

I'm developing a concept printer that burns images onto wood using a magnifying glass on an xy plotter. One of my anticipated challenges is an inconsistent print quality as a result of changing lighting conditions (e.g., atmosphere, clouds).
My plan is to modify my Gcode on the fly (yes, while printing) based on feedback from photosensors in order to maintain a consistent burn. Modifying my feed rate to accommodate changes in lighting conditions seems like the simplest approach.
What I can't find is how to modify Gcode AFTER a print has begun.
Ideas?
CNC machines use ladder logic to change machine parameters while running a program; they never change the code they are running. You need your program to be static and reproducible, and you need to be able to version-control your code. Editing the code live will almost certainly lead to disaster.
You don't need to change the code to change the feed rate. There is usually a dial to control the feed rate override; this can be locked out in software and driven from your sensors instead (see the sketch below).
Your question lacks the information needed for a more specific answer. We would need to know what machine and control software you are using.
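For illustration only, here is a minimal host-side sketch of that approach. It assumes a Marlin-style controller that accepts the M220 feed-rate-override command ("M220 S<percent>" scales all feed rates); the photosensor read and the serial transport are stubbed out, since your question doesn't say what hardware or control software you have:

    // Map a photosensor reading to a feed-rate override percentage.
    // Assumes Marlin-style "M220 S<percent>"; sensor and serial I/O are stubs.
    public class BurnFeedController {
        static final double TARGET_LIGHT = 512.0; // desired light level (arbitrary ADC units)

        public static void main(String[] args) throws InterruptedException {
            while (true) {
                double light = readPhotosensor();   // stub: replace with your ADC read
                // Brighter light burns faster, so move faster; clamp to a safe band.
                int percent = (int) Math.max(50, Math.min(200, 100.0 * light / TARGET_LIGHT));
                sendGcode("M220 S" + percent);      // runtime override; the Gcode file is untouched
                Thread.sleep(500);                  // re-adjust twice per second
            }
        }

        static double readPhotosensor() { return 512.0; }                 // placeholder
        static void sendGcode(String line) { System.out.println(line); }  // placeholder: write to the printer's serial port
    }

The key point is that the Gcode itself never changes; only a runtime override does, so the program stays static and reproducible.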

Use a Live Card as a Shortcut to my application

I have an application that my users would frequently need to access quickly and dismiss.
I want to let the user pin my app as a live card, but with only the overhead of a low-frequency card. I want to create a bookmark/shortcut that lets the user launch and use my activity and then quit normally.
When I use high-frequency cards, they make my application hang and freeze. Is there a better way to accomplish this?
Also, is this advisable from a UX/UI design standpoint?
Without more context on what your application does it's hard to know, but I'm hesitant to say this is a good way to go about it.
If it's just a pure shortcut, it's probably a bad idea. Glass already has the "ok glass" menu paradigm, which is supported by both touch and voice input, while a card would only be supported by touch. Further, a bunch of live cards as shortcuts probably doesn't scale well if the user has a number of applications that do that.
On the other hand, if your application shows information on the live card that the user would want to see on a regular basis (think Google Now cards, etc.), then this might be a good idea depending on how it's executed.
Again, it's hard to know without more context. Glass is a different paradigm than phones or desktops, so be careful when importing concepts from those platforms.

Detecting swipes to a specific Timeline Card at Mirror API Glassware

Is it possible to trigger an event in my server-side application (notification callback) when the user swipes to a specific timeline card of my Glassware?
I ask because I want to know whether the timeline card has been seen, if that is possible.
Detail: with Mirror API Glassware we can define menu items on timeline cards, and selections are passed to our notification callback servlet through the Mirror API, so we are able to handle those menu item selections server-side. I'm looking for a similar pattern for detecting swipes to my timeline cards, if such a thing exists.
Thanks in advance.
This is not something you can do with the Mirror API. If you feel it is vitally necessary, you can request it at the Glass issue tracker, providing as many details and use cases as possible.
But I would really question why you want to do this and whether it is truly a "Glass-like" behavior that your users would expect (and even whether the results would be what you're expecting). One of the core design principles is that Glass is a different user experience than non-Glass software. Users are not likely to swipe back to your card very often; they are more likely to see it and handle it shortly after it arrives, and less likely later. You can't assume they will see your card at all if they are in the middle of other activities. If they swipe past it, they may simply be on their way to another card, in either direction, and you can't tell whether a visit to your card is deliberate. Glass also tends to expect its wearers to make conscious decisions and actions, and those are reported to your Glassware; there are far fewer cases where passive actions are reported.
If it is important that your user sees the card, you may want to consider repeatedly updating the card (and its timestamp), either with or without a new alert, and having the user explicitly acknowledge seeing it. You should also be conscious of when this may be inappropriate or unexpected, and allow the user to tailor the behavior.
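For the explicit acknowledgement, one option is a CUSTOM menu item whose selection comes back to your notification callback. A minimal sketch using the Mirror API Java client library, assuming you already have an authorized Mirror service object and a subscription to timeline notifications (the id "ack" and the card text are placeholders):

    import com.google.api.services.mirror.Mirror;
    import com.google.api.services.mirror.model.MenuItem;
    import com.google.api.services.mirror.model.MenuValue;
    import com.google.api.services.mirror.model.TimelineItem;
    import java.io.IOException;
    import java.util.Collections;

    public class AckCardSketch {
        // Insert a card with a CUSTOM "Seen it" menu item. When the user selects
        // it, the Mirror API notifies your callback with this item's id ("ack").
        public static void insertAckCard(Mirror mirror) throws IOException {
            TimelineItem item = new TimelineItem()
                    .setText("Important update -- please acknowledge")  // placeholder text
                    .setMenuItems(Collections.singletonList(
                            new MenuItem()
                                    .setAction("CUSTOM")
                                    .setId("ack")  // echoed back in the notification payload
                                    .setValues(Collections.singletonList(
                                            new MenuValue().setDisplayName("Seen it")))));
            mirror.timeline().insert(item).execute();
        }
    }

This gives you a reliable, user-initiated signal that the card was seen, rather than trying to infer it from swipes.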

OpenGL in live cards?

I've been playing with the Glass GDK and Glass "native" (Java) development in general. I've got an OpenGL app that works well enough on Glass (using standard Android conventions), and I'm looking to port it to the GDK to take advantage of voice triggers and whatnot.
While I can certainly use it easily enough as an Immersion (I think, anyway), what I'd like to do is use it as a Live Card. I'm just not sure how possible or practical this is. The documentation seems to imply that this should be possible with a high-frequency rendering card, but before I dive in I was hoping someone with more experience could weigh in.
(Forgive me if this is obvious -- I'm new to Android, having spent the last few years in iOS/Obj-C land.)
XE16 supports OpenGL in live cards. Use the class GlRenderer: https://developers.google.com/glass/develop/gdk/reference/com/google/android/glass/timeline/GlRenderer
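A minimal sketch of the pattern, based on the GlRenderer reference linked above. I'm writing the method signatures from memory of the XE16 GDK, so verify them against the reference; the class names and the card tag are placeholders:

    import android.opengl.GLES20;
    import com.google.android.glass.timeline.GlRenderer;
    import javax.microedition.khronos.egl.EGLConfig;

    // The GDK invokes these callbacks on its own GL thread.
    class SceneRenderer implements GlRenderer {
        @Override public void onSurfaceCreated(EGLConfig config) {
            GLES20.glClearColor(0f, 0f, 0f, 1f);        // one-time GL setup
        }
        @Override public void onSurfaceChanged(int width, int height) {
            GLES20.glViewport(0, 0, width, height);     // track the card's dimensions
        }
        @Override public void onDrawFrame() {
            GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT); // draw your scene here
        }
    }

    // In your live card service, roughly:
    //   LiveCard card = new LiveCard(this, "gl_card");     // "gl_card" is an arbitrary tag
    //   card.setRenderer(new SceneRenderer());             // high-frequency GL rendering
    //   card.setAction(pendingIntentForYourMenuActivity);  // required before publishing
    //   card.publish(LiveCard.PublishMode.REVEAL);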
I would look at your app and decide whether you want more user input, and whether you want it to live in a specific part of your Timeline or just launch when the user wants it.
Specifically, since Live Cards live in the Timeline, they will not be able to capture the swipe backward or swipe forwards gestures since those navigate the Timeline. See the "When to use Live Cards" section of: https://developers.google.com/glass/develop/gdk/ui/index
If you use an Immersion, however, you will be able to use the swipe-backward and swipe-forward gestures as well as these others: https://developers.google.com/glass/develop/gdk/input/touch This gives you complete control over the UI and touchpad, with the exception that swipe-down should exit your Immersion.
The downside is that once the user exits your Immersion, they will need to start it again likely with a Voice Trigger, whereas a Live Card can live on in part of your Timeline.
You should be able to do your rendering either in a Surface, which a LiveCard can use, or in whatever View you choose to put in your Activity, which is what an Immersion is. GLSurfaceView, for example, may be what you need, and it internally uses a Surface: http://developer.android.com/guide/topics/graphics/opengl.html Note that you'll want to avoid RemoteViews, but I think you already figured that out.

Video Game Bots? [closed]

Something I've always wondered about, especially since it inspired me to start programming as a kid, is how video game bots work. I'm sure there are a lot of different methods, but what about automation for MMORPGs? Or even FPS-style bots?
I'm talking about player-made automation bots.
To 'bot' a game, you need to be able to do two things programmatically: detect what's going on in the game, and provide input to the game.
Detecting what's going on in the game tends to be the harder of the two. A few methods for doing this are:
Screen-scraping: This technique captures the image on the screen and parses it, looking for things like enemies, player status, power-ups, game messages, time clocks, etc. This tends to be a particularly difficult method. OCR techniques can be used to process text, but if the text is written on top of the game world (instead of on a UI element with a solid background), the ever-changing backdrop can make it difficult to get accurate and consistent results. Finding non-text objects on the screen can be even more difficult, especially in 3D worlds, because of the many different positions and orientations a single object may appear in. (A toy sketch follows after this list.)
Audio cues: In some games, actions and events are accompanied by unique sound effects. It is possible to detect these events by monitoring the audio output of the game and matching it against a recording of the associated sound effect. Some games allow the player to provide their own sound effects for events, which allows the use of sound effects that are designed to be easy to listen for and filter out.
Memory monitoring: If the internal workings of the game are well understood, then you can monitor the state of a game by inspecting the game's memory space. Some cheat tools for console systems (such as the Game Genie) use this method. By detecting what memory the game updates, it is possible to detect what the game is doing. Some games randomize the memory locations they use each time they are launched in an attempt to foil this technique.
Packet analysis: With appropriate drivers, you can intercept the game's data packets as they are sent to or retrieved from your network card (for games played online). Analysis of these packets can reveal what your game client is communicating to the server, which usually revolves around player/enemy actions.
Game scripting: Some games have a built-in scripting interface. If available, this is usually the easiest method because it is something the game software is designed to do (the previous methods would all typically count as "hacks"). Some scripts must be run in-game (through a console or through an add-on system), and some can be run by external programs that communicate with the game via a published API.
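As a toy illustration of screen-scraping, here is a sketch using Java's built-in java.awt.Robot to grab the screen and scan for an exact pixel color. The target color is an assumption for the example; a real bot would match multi-pixel patterns, tolerate color variation, and contend with games that render full-screen in ways Robot cannot capture:

    import java.awt.AWTException;
    import java.awt.Dimension;
    import java.awt.Rectangle;
    import java.awt.Robot;
    import java.awt.Toolkit;
    import java.awt.image.BufferedImage;

    public class PixelScan {
        public static void main(String[] args) throws AWTException {
            Robot robot = new Robot();
            Dimension screen = Toolkit.getDefaultToolkit().getScreenSize();
            BufferedImage shot = robot.createScreenCapture(
                    new Rectangle(0, 0, screen.width, screen.height));

            int target = 0xFFFF0000;  // opaque pure red, e.g. an enemy name plate (assumed)
            for (int y = 0; y < shot.getHeight(); y += 2) {      // stride of 2: cheap speed-up
                for (int x = 0; x < shot.getWidth(); x += 2) {
                    if (shot.getRGB(x, y) == target) {
                        System.out.println("match at " + x + "," + y);
                        return;
                    }
                }
            }
            System.out.println("no match");
        }
    }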
Generating input events back into the game is typically the easier task. Some methods include:
Memory "Poking" Similar to the memory monitoring section above, memory poking is the act of writing data directly into the game's memory space. This is the method used by the Game Genie for applying its cheat codes. Given the complexity of modern games, this is a very difficult task and can potentially crash the entire game.
Input Emulation "Fake" keyboard or mouse signals can be generated in lieu of direct human interaction. This can be done in software using tools such as AutoIt. Hardware hacks can also be used, such as devices that connect to the computer's USB or PS/2 port and appear to the system to be a keyboard, but instead generate fake keypress events based on signals received from the computer (for instance, over a serial port). These methods can be harder for games to detect.
Game scripting: As mentioned above, some games provide built-in methods for controlling them programmatically, and taking advantage of those tools is usually the easiest (but perhaps not the most powerful) technique.
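In software, input emulation can be as simple as Java's built-in java.awt.Robot (the coordinates and key below are placeholders; note that some games detect or ignore synthetic events, which is exactly why the hardware approaches above exist):

    import java.awt.AWTException;
    import java.awt.Robot;
    import java.awt.event.InputEvent;
    import java.awt.event.KeyEvent;

    public class FakeInput {
        public static void main(String[] args) throws AWTException {
            Robot robot = new Robot();
            robot.setAutoDelay(50);     // ms between events; less obviously robotic

            robot.mouseMove(400, 300);  // placeholder target coordinates
            robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);
            robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);

            robot.keyPress(KeyEvent.VK_1);   // e.g. activate hotbar slot 1
            robot.keyRelease(KeyEvent.VK_1);
        }
    }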
Note that running a bot in a game is usually a violation of the game's Terms of Use and can get you suspended, banned, or worse. In some jurisdictions this may carry criminal penalties. This is another point in favor of using a game's built-in scripting capabilities: if it's designed to be part of the game software, the publisher is most likely not going to prohibit you from using it.
I once wrote a simple MMORPG bot myself, using AutoHotkey.
AutoHotkey provides lots of methods to simulate user input, so one of them will work; programming that yourself in C++ is tedious (or you can look into AutoHotkey's source).
It can also search the screen directly for pixel patterns, even in game screens (DirectX).
So what I did was search the screen for the name of an enemy (stored as a picture in the game's font), and the script clicks a few pixels below it to attack. It also tracks the health bar and pots (drinks a potion) if it gets too low.
Very trivial. But I know of a WoW bot that is also made with AutoHotkey, and I have seen lots of other people with the same idea (mine was not for WoW, but probably illegal, too).
More advanced techniques do not capture the screen but directly read the game's memory. You have to do a lot of reverse engineering to make this work. And it stops working when the game is updated.
How does an individual person go about their day-to-day life?
This is sort of the problem that AIs in games solve.
What do you want your entity to do? Code your entity to do that. If you want your monster to chase the player's avatar, the monster just needs to face the avatar and then move toward it. When the monster gets within a suitable distance, it can choose to bite the player's avatar, and this choice can be as simple as a check like AmICloseEnough(monster, player), or more complex, or even random.
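A minimal sketch of that chase-and-bite logic, using hypothetical 2D positions (every name here is illustrative, not from any particular engine):

    // Chase the player; bite when close enough. Positions are {x, y} pairs.
    public class ChaseAi {
        static final double BITE_RANGE = 1.5;  // world units, arbitrary
        static final double MOVE_SPEED = 0.1;  // units per update, arbitrary

        static void update(double[] monster, double[] player) {
            double dx = player[0] - monster[0];
            double dy = player[1] - monster[1];
            double dist = Math.hypot(dx, dy);
            if (dist <= BITE_RANGE) {
                bite();                                // close enough: attack
            } else {
                monster[0] += MOVE_SPEED * dx / dist;  // face the player and
                monster[1] += MOVE_SPEED * dy / dist;  // step toward them
            }
        }

        static void bite() { /* apply damage, play a sound, etc. */ }
    }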
Bots in an FPS are tricky to get right because it's easy to make them perfect but not so easy to make them fun. For example, they always know exactly where the player is (gPlayer.GetPosition()), so it's easy to shoot the player in the head every time. It takes a bit of "art" to make a bot move like a human would.
For FPS-style bots, you could take a look at the Unreal Development Kit; as I understand it, it includes a lot of the actual game source code.
http://udn.epicgames.com/Three/DevelopmentKitHome.html
bta gave a very good reply. I just wanted to add that the different methods are susceptible to different means of detection by the game company. Hacking into the game client via memory monitoring or packet analysis is generally more easily detectable. I generally don't recommend it, since you can get caught very easily.
Screen-scraping combined with input emulation is generally the safest way to bot a game without getting caught. Many people (myself included) have done it for years with no problems.
In addition, between detecting what's going on in the game and providing input, some games require extensive calculation before you can decide what input to send. For example, in one game I had to calculate the number of ships to send when attacking an enemy, based on how many ships I had, their types, and who and what kind of enemy it was. The calculation is generally the "easy" part, since you can do it in almost any programming language.
It's called AI (artificial intelligence), and it really isn't that hard to replicate: a set of rules and commands in the programming language of your game will do the trick. For example, an FPS bot could work by getting the coordinates of the player's body, aiming the enemy bot's gun at that coordinate, and shooting when within a certain range.