How to Simulate Pinch on BlackBerry 10 Simulator? - c++

I am developing a project using the Native SDK for BlackBerry 10, and I am using the BlackBerry 10 Dev Alpha Simulator for testing. I can't seem to simulate a pinch event, and some searching revealed that this is not implemented in the simulator yet.
So basically, I need a way to programmatically create a pinch and run it when some other event is triggered. What is the easiest way to do this?
Edit:
I am not looking for language-agnostic solutions; I need an architectural implementation. How would one go about using gesture_pinch_t to create a pinch event (even with hardcoded parameters)?

I'm more involved with the WebWorks and AIR team at RIM, but off the top of my head a language-agnostic solution would be something like the following:
You have some handler for the pinch event, which is able to process the data passed by the event (gesture_pinch_t)
Instead of using the pinch event to trigger the callback, you can simulate a pinch with some other obtainable event (perhaps a double tap, or a test toggle button that, once turned on, makes every touch event the start of a simulated pinch).
You then make the centroid property your starting coordinate, and as you drag with your finger (or, in the simulator, with your cursor), you calculate the distance property by subtracting the origin coordinate you made your centroid from the current coordinate.
Again, I haven't delved into the NDK specifically, but this is the approach I would take with JavaScript or ActionScript, and it is quite doable. I wish I could write a code snippet, but hopefully this helps take you in the right direction.
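As a rough C++ illustration of that approach: note that FakePinch and its centroid/distance fields merely mirror the description above; they are illustrative stand-ins, not the actual NDK gesture_pinch_t layout or API.

    #include <cmath>
    #include <cstdio>

    // Illustrative stand-in for gesture_pinch_t; field names follow the
    // centroid/distance description above, not the actual NDK header.
    struct FakePinch {
        float centroidX, centroidY;  // where the simulated pinch is anchored
        float distance;              // simulated finger separation
    };

    class PinchSimulator {
    public:
        // Call on the trigger event (double tap / test toggle): anchor the pinch.
        void begin(float x, float y) {
            pinch_ = {x, y, 0.0f};
            active_ = true;
        }

        // Call on each subsequent drag event while the simulation is active.
        void move(float x, float y) {
            if (!active_) return;
            float dx = x - pinch_.centroidX;
            float dy = y - pinch_.centroidY;
            pinch_.distance = std::sqrt(dx * dx + dy * dy);
            onPinch(pinch_);  // feed the same handler a real pinch would reach
        }

        void end() { active_ = false; }

    private:
        void onPinch(const FakePinch& p) {
            // Stand-in for your real pinch handler.
            std::printf("pinch: centroid=(%.0f, %.0f) distance=%.1f\n",
                        p.centroidX, p.centroidY, p.distance);
        }
        FakePinch pinch_{};
        bool active_ = false;
    };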
Cheers!

Just to let you know: multi-touch gestures are now supported in the simulator. Right-click and drag to add a touch point, do it again to add more touch points, then left-click to execute them all at the same time.
Example of a pinch gesture: (screenshot omitted)

Related

Are VCL ListView, ListBox, and DBGrid fully touch-aware?

Now that we have tablets with Windows 10, I have decided to return to Delphi XE7 and the VCL to develop for these multitouch devices.
I have found that ListView, ListBox, and DBGrid do not seem to have standard pan-and-scroll behavior (just PanUp & PanDown, ScrollUp and ScrollDown). DBGrid does not support touch panning at all. ListBox does not seem to handle inertial panning like TListView, and ListView reacts erratically, sometimes "losing" pans: the scrollbar moves but the item list does not.
Has anyone tested these controls on Windows 8.1 or Windows 10 using a multitouch tablet? Just load the components with, let me say, 100 items and try to get a simple, smooth vertical scroll/pan using your fingers.
Altogether it is rather frustrating, and I cannot focus on developing my application, which is my actual task.
The question is: which is the right component, or the right way, to get panning (at least vertical panning/scrolling) working smoothly and reliably on touch screens? I thought these components would react to standard actions (like PanUp or PanDown) without needing to implement the Gesture Manager and handle each touch on the screen one by one. I would be glad to receive your feedback. Thank you.
Conclusion: Many thanks to all who have helped with their comments. My own conclusion is that Delphi is not ready to be used as a RAD tool for touch screens. The touch implementation is poor and needs too much work for very standard use. It should not be necessary to reinvent the wheel for such common, standard controls. There are now more mobile device users than desktop users; perhaps Embarcadero should decide to pay attention to this matter and deliver well-finished tools that match the OS's native touch look and feel.
Let me add that the same setup in FireMonkey using TGrid works fine.

Mouse events on a cairo context

I'm developing an application with C++ and GTK3, but I'm stuck. I've created a visual application with Glade which has three columns, one of which (the middle one) is a DrawingArea. In that DrawingArea I want to draw circles at chosen points after pressing a button, and then handle different mouse events on those circles (drag and drop, double click, right click, and so on). I've managed the first part (drawing a circle after pressing a button) by following the official documentation, but the problem is that I don't know how to handle the mouse events. I've thought about it and have some possible solutions (I don't know whether they are the best ones, or whether better ones exist):
I think the best way would be to connect a signal to the cairomm context, but I didn't see any way to do that. Maybe the way would be to create a cairo surface or something like that.
Every time I click to create a circle, I could create a GTK widget on which I can handle mouse events. The problem here is that the widget would need to have a circular shape and be drawable. Is it possible to create a circular DrawingArea? That could be the best option. I saw how to create custom widgets here.
Use goocanvasmm. The problem here is that goocanvasmm has little documentation (I'm sorry, I cannot post more than two links because of my reputation), and I don't think this is the best solution; I would prefer to use cairomm.
This application was originally written in C with GTK2, and the circles were drawn using GnomeCanvas, which made it easy to attach signals to each circle; now I'm porting it to C++ and GTK3 to modernize it.
I'm very new to GTK (and to graphical interfaces in general), but I have looked for solutions for hours and I still don't know the best way to continue my work.
Thank you for your help :)
It's best to use a canvas library for this such as GooCanvas. Doing it with cairo alone would require you to listen to mouse events on the whole drawing area, and keep track of where the circles were in order to decide which circle the mouse event belongs to - exactly the problem which the canvas library has already solved for you.
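For illustration, here is a minimal gtkmm 3 sketch of that manual hit-testing approach; the CircleArea class and its fields are my own invention, not part of any library:

    #include <gtkmm.h>
    #include <vector>
    #include <cmath>

    struct Circle { double x, y, r; };

    // DrawingArea that tracks its circles and hit-tests mouse clicks itself.
    class CircleArea : public Gtk::DrawingArea {
    public:
        CircleArea() { add_events(Gdk::BUTTON_PRESS_MASK); }

        void add_circle(double x, double y, double r) {
            circles_.push_back({x, y, r});
            queue_draw();
        }

    protected:
        bool on_draw(const Cairo::RefPtr<Cairo::Context>& cr) override {
            for (const auto& c : circles_) {
                cr->arc(c.x, c.y, c.r, 0.0, 2.0 * M_PI);
                cr->fill();
            }
            return true;
        }

        bool on_button_press_event(GdkEventButton* ev) override {
            // Walk the circle list to decide which circle the click belongs to.
            for (const auto& c : circles_) {
                double dx = ev->x - c.x, dy = ev->y - c.y;
                if (dx * dx + dy * dy <= c.r * c.r) {
                    // Hit: dispatch drag / double-click / context-menu logic here.
                    return true;  // event handled
                }
            }
            return false;  // let unhandled clicks propagate
        }

    private:
        std::vector<Circle> circles_;
    };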
If you are having trouble with the goocanvasmm documentation, a look at the documentation for GooCanvas' C API, combined with knowledge of how the C API translates into C++, will usually suffice; although the goocanvasmm documentation seems fairly extensive to me.

How to detect start and end of a gesture in kinect?

I am working on one-shot learning of gestures. Most of the gestures involve moving the left and right hands, and the hand joints are easily detectable using the skeletal tracking library of the Kinect SDK. The problem I am facing is how to detect when a gesture starts and when it ends, so that I can feed the hand-joint trajectory coordinates to the algorithm that finally classifies the gesture.
There is no way to detect the beginning of an unknown gesture within a learning engine. There must be some discrete action that tells the system a gesture is about to start for it to learn. Without this discrete action the system cannot know which motion is the beginning of a gesture, versus a motion in between, versus a motion moving toward the beginning, versus an arbitrary motion the engine should ignore.
There are a few discrete actions that might work, depending on your situation:
a keyboard or mouse action
a known gesture to signify a new gesture is to begin/end
use voice recognition to notify the engine that you are starting/ending
some action with a short countdown timer for the user to get to "position 1" of the gesture and begin when prompted.
have a single origin for all gestures - holding your hand there for a short period signifies the beginning of a learning action (see the sketch below this list).
Without some form of discrete action, the system simply cannot know what you want. It will always be guessing, and you will always run into situations where it guesses wrong.
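As a concrete illustration of the last option above (a single origin plus a hold timer), here is a hedged C++ sketch. Point3 and the per-frame update call are hypothetical stand-ins for whatever hand-joint coordinates your skeletal tracking loop already provides:

    #include <cmath>

    // Hypothetical joint coordinate; substitute the Kinect SDK's own type.
    struct Point3 { float x, y, z; };

    // Signals "gesture start" once the hand holds still near a known origin
    // for a fixed number of consecutive frames.
    class GestureStartDetector {
    public:
        GestureStartDetector(Point3 origin, float radius, int holdFrames)
            : origin_(origin), radius_(radius), holdFrames_(holdFrames) {}

        // Call once per skeleton frame with the tracked hand position.
        // Returns true on the frame the "start" condition is first met.
        bool update(const Point3& hand) {
            if (distance(hand, origin_) <= radius_) {
                if (++framesInside_ == holdFrames_)
                    return true;   // held at origin long enough: gesture begins
            } else {
                framesInside_ = 0; // hand left the origin region: reset
            }
            return false;
        }

    private:
        static float distance(const Point3& a, const Point3& b) {
            float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
            return std::sqrt(dx * dx + dy * dy + dz * dz);
        }
        Point3 origin_;
        float radius_;
        int holdFrames_;
        int framesInside_ = 0;
    };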
For executing on a known gesture, your method depends on how you store the data and the complexity of the gesture. Here are two gesture libraries that you can review to see how they work:
http://kinecttoolbox.codeplex.com/
https://github.com/EvilClosetMonkey/Fizbin.Kinect.Gestures
They may also help give ideas of how you want to start/end gestures, based on how the gesture data is stored for each situation.

Qt - Catch events normally handled by the Window Manager

I'm not sure quite how to phrase the question concisely, so if there is a similar question, please point me in the right direction and close this one.
I am currently building a CAD app. The user interacts with the 3D viewports primarily through the mouse and the three keyboard modifiers (Alt, Shift, Ctrl). Shift and Ctrl modify the currently selected tool's options, and Alt operates the camera - much like any other 3D CAD app.
However, I'm currently developing on a GNOME desktop, and its window manager (AFAIK) catches any Alt+right-button mouse drag events and interprets them as a window-drag command - even when I'm not holding the title bar, and regardless of the currently highlighted widget.
This is a disaster for me because camera keyboard controls are quite standardized in my target industry. So does anyone know of a way to override this behaviour, preferably from within Qt, and preferably scoped to this one scenario in one particular widget class?
Thank you,
Cam
If you use the Qt::X11BypassWindowManagerHint on the window, then the window manager can't steal your Alt+drag events. However, this means you lose the native window frame (including decoration, moving, and resizing), so it is likely you don't want to do this.
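A minimal Qt sketch of setting that flag (the bare QWidget and its size are placeholders for your viewport window):

    #include <QApplication>
    #include <QWidget>

    int main(int argc, char* argv[]) {
        QApplication app(argc, argv);

        QWidget viewport;  // placeholder for your 3D viewport window
        // Bypass the window manager so it cannot intercept Alt+drag,
        // at the cost of losing the native frame, move, and resize.
        viewport.setWindowFlags(viewport.windowFlags()
                                | Qt::X11BypassWindowManagerHint);
        viewport.resize(800, 600);
        viewport.show();

        return app.exec();
    }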
Another way: if your users are only on one or two varieties of Linux, add something to the installer which asks the user whether they want to adjust the GNOME (or whatever) key settings and, if so, changes them via gconftool-2 (or equivalent).

How do I get the window that currently has the cursor on top of it with X11?

How can I retrieve the top window of which the cursor is on top of in the X11 server?
The window doesn't have to be "active" (selected, open, whatever); it just has to have the cursor floating on top of it.
Thanks in advance.
You can use XQueryPointer() to get the mouse position, then get a window list using XQueryTree(). XQueryTree() returns the window list in proper z-order, so you can just loop through the windows until you find one whose bounding box is under the pointer; XGetWindowAttributes() will give you everything you need to compute the bounding box. I'm not sure what you would do with shaped windows, though.
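A rough Xlib sketch of that loop (error handling is omitted, and shaped windows, as noted, are not handled):

    #include <X11/Xlib.h>
    #include <cstdio>

    int main() {
        Display* dpy = XOpenDisplay(nullptr);
        if (!dpy) return 1;
        Window root = DefaultRootWindow(dpy);

        // 1. Pointer position, in root-window coordinates.
        Window retRoot, retChild;
        int rootX, rootY, winX, winY;
        unsigned int mask;
        XQueryPointer(dpy, root, &retRoot, &retChild,
                      &rootX, &rootY, &winX, &winY, &mask);

        // 2. Top-level windows; XQueryTree returns bottom-to-top stacking
        //    order, so walk the list backwards to test the topmost first.
        Window parent, *children = nullptr;
        unsigned int nChildren = 0;
        XQueryTree(dpy, root, &retRoot, &parent, &children, &nChildren);

        Window hit = None;
        for (int i = (int)nChildren - 1; i >= 0; --i) {
            XWindowAttributes attr;
            if (!XGetWindowAttributes(dpy, children[i], &attr)) continue;
            if (attr.map_state != IsViewable) continue;
            // 3. Bounding-box test against the pointer position.
            if (rootX >= attr.x && rootX < attr.x + attr.width &&
                rootY >= attr.y && rootY < attr.y + attr.height) {
                hit = children[i];
                break;
            }
        }
        if (children) XFree(children);

        if (hit != None)
            std::printf("window under pointer: 0x%lx\n", hit);
        XCloseDisplay(dpy);
        return 0;
    }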
I haven't worked with X11 for a few years, so this might be a rather clunky approach, but it should work. I also don't have my O'Reilly X11 books anymore; you'll want to get your hands on book one of that series if you're going to work with low-level X11 stuff. I think the whole series is available for free online these days.
I haven't programmed X11 for over a decade, so forgive me if I get this wrong.
I believe you can register for mouse movement events on your windows. If you handle such an event by storing the window handle in some variable, and then consume the event so it doesn't percolate down the tree, then at the time you want to identify the window you can just query that variable.
However, this will only work when the mouse is over a window for which you have registered a suitable event handler, so you won't know about windows belonging to other applications - unless there is a way to register for events on other applications' windows, which may be possible.
The advantage over the other answer is that you don't have to traverse the whole tree. The disadvantage is that you need to handle a great many mouse movement events, and it may not find other applications' windows.
I believe there are also mouse enter and mouse leave events, which would reduce the amount of processing required.
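A small Xlib sketch of that enter/leave variant (the event-loop wiring around it is assumed):

    #include <X11/Xlib.h>

    // Register interest in enter/leave crossings on one of your windows.
    void watchEnterLeave(Display* dpy, Window win) {
        XSelectInput(dpy, win, EnterWindowMask | LeaveWindowMask);
    }

    // Last window the pointer entered; None once it has left them all.
    Window windowUnderPointer = None;

    // Call from your event loop for each event received.
    void handleEvent(const XEvent& ev) {
        if (ev.type == EnterNotify)
            windowUnderPointer = ev.xcrossing.window;
        else if (ev.type == LeaveNotify &&
                 ev.xcrossing.window == windowUnderPointer)
            windowUnderPointer = None;
    }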