Are touch events and click events the same? - MFC

I am working on a desktop project using Visual C++/MFC. There are lots of buttons, but the problem is that it must work on a touch-screen monitor where no mouse or keyboard is available.
So, will ON_BN_CLICKED work as a touch event on a touch-screen monitor, or do I have to handle it another way?

If by "touch" event you mean a screen tap, then yes, they can be treated as the same.
Windows 7 provides built-in support for applications with no explicit touch or ink handling to receive input via the On-Screen Keyboard and Writing Pad.
Windows will primarily treat touch in much the same way as a mouse, with screen taps equating to mouse clicks. So ON_BN_CLICKED will work for screen taps.
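For example, an ordinary handler needs no touch-specific code. In this minimal sketch (the CMyDialog class and IDC_MY_BUTTON control ID are hypothetical), the handler fires for both mouse clicks and screen taps:

class CMyDialog : public CDialogEx
{
public:
    afx_msg void OnBnClickedMyButton();
    DECLARE_MESSAGE_MAP()
};

BEGIN_MESSAGE_MAP(CMyDialog, CDialogEx)
    ON_BN_CLICKED(IDC_MY_BUTTON, &CMyDialog::OnBnClickedMyButton)
END_MESSAGE_MAP()

void CMyDialog::OnBnClickedMyButton()
{
    // Reached whether the button was clicked with the mouse
    // or tapped on a touch screen.
    AfxMessageBox(_T("Button activated"));
}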
That being said, you can provide explicit touch support in one of two ways:
Gestures: Windows-provided mapping of distinctive touch sequences into gestures like zoom and pan. MFC further translates these gestures into a simplified set of CWnd virtual methods that can be overridden as required:
OnGestureZoom(CPoint ptCenter, long lDelta)
OnGesturePan(CPoint ptFrom, CPoint ptTo)
OnGestureRotate(CPoint ptCenter, double dblAngle)
OnGestureTwoFingerTap(CPoint ptCenter)
OnGesturePressAndTap(CPoint ptPress, long lDelta)
Touch Messages: registering to receive the low-level touch messages, which may come from multiple touch points simultaneously, and responding to these touch events in the message handler (see the sketch below):
virtual BOOL OnTouchInput(CPoint pt, int nInputNumber, int nInputsCount, PTOUCHINPUT pInput);
Source: Check this article for details.
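As a concrete illustration of the second approach, a minimal sketch (the CTouchView class name is hypothetical): the window opts in to raw touch in OnCreate(), after which MFC routes each touch point to OnTouchInput(). Be aware that registering for raw touch disables the gesture callbacks above for that window, so pick one model per window:

class CTouchView : public CView
{
protected:
    // Called once per touch point; pt is in client coordinates.
    virtual BOOL OnTouchInput(CPoint pt, int nInputNumber, int nInputsCount,
                              PTOUCHINPUT pInput)
    {
        if (pInput->dwFlags & TOUCHEVENTF_DOWN)
            TRACE(_T("touch point %d of %d down at (%d, %d)\n"),
                  nInputNumber + 1, nInputsCount, pt.x, pt.y);
        return TRUE; // input handled
    }

    afx_msg int OnCreate(LPCREATESTRUCT lpCreateStruct)
    {
        if (CView::OnCreate(lpCreateStruct) == -1)
            return -1;
        RegisterTouchWindow(); // opt in to raw WM_TOUCH messages
        return 0;
    }

    virtual void OnDraw(CDC* /*pDC*/) {}

    DECLARE_MESSAGE_MAP()
};

BEGIN_MESSAGE_MAP(CTouchView, CView)
    ON_WM_CREATE()
END_MESSAGE_MAP()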

Related

Simulate mouse click without moving the cursor

I wrote an application that detects all active windows and puts them into a list.
Is there a way to simulate a mouse click on a spot on the screen, relative to the window's location, without actually moving the cursor?
I don't have access to the handle of the button that is supposed to be clicked, only to the handle of the window.
Is there a way to simulate a mouse click on a spot on the screen, relative to the window's location, without actually moving the cursor?
To answer your specific question - NO. Mouse clicks can only be directed where the mouse cursor actually resides at the time of the click. The correct way to simulate mouse input is to use SendInput() (or mouse_event() on older systems). But those functions inject simulated events into the same input queue that the actual mouse driver posts to, so they will have a physical effect on the mouse cursor - i.e. move it around the screen, etc.
How do I simulate input without SendInput?
SendInput operates at the bottom level of the input stack. It is just a backdoor into the same input mechanism that the keyboard and mouse drivers use to tell the window manager that the user has generated input. The SendInput function doesn't know what will happen to the input. That is handled by much higher levels of the window manager, like the components which hit-test mouse input to see which window the message should initially be delivered to.
When something gets added to a queue, it takes time for it to come out the front of the queue
When you call SendInput, you're putting input packets into the system hardware input queue. (Note: Not the official term. That's just what I'm calling it today.) This is the same input queue that the hardware device driver stack uses when physical devices report events.
The message goes into the hardware input queue, where the Raw Input Thread picks it up. The Raw Input Thread runs at high priority, so it's probably going to pick it up really quickly, but on a multi-core machine, your code can keep running while the second core runs the Raw Input Thread. And the Raw Input Thread has some stuff it needs to do once it dequeues the event. If there are low-level input hooks, it has to call each of those hooks to see if any of them want to reject the input. (And those hooks can take who-knows-how-long to decide.) Only after all the low-level hooks sign off on the input is the Raw Input Thread allowed to modify the input state and cause GetAsyncKeyState to report that the key is down.
The only real way to do what you are asking for is to find the HWND of the UI control that is located at the desired screen coordinates. Then you can either:
send WM_LBUTTONDOWN and WM_LBUTTONUP messages directly to it. Or, in the case of a standard Win32 button control, send a single BM_CLICK message instead.
use the AccessibleObjectFromWindow() function of the Microsoft Active Accessibility API to access the control's IAccessible interface, and then call its accDoDefaultAction() method, which for a button will click it.
That being said, ...
I don't have access to the handle of the button that is supposed to be clicked.
You can access anything that has an HWND. Have a look at WindowFromPoint(), for instance. You can use it to find the HWND of the button that occupies the desired screen coordinates (with caveats, of course: WindowFromPoint, ChildWindowFromPoint, RealChildWindowFromPoint, when will it all end?).
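Putting those pieces together, a minimal sketch (the function name and parameters are hypothetical): hit-test the screen point, then either send BM_CLICK for a standard button or post a down/up pair. The real cursor never moves:

#include <windows.h>

void ClickAtScreenPoint(POINT pt, bool useBmClick)
{
    HWND hTarget = WindowFromPoint(pt); // see the caveats above
    if (hTarget == NULL)
        return;

    if (useBmClick) {
        // Standard Win32 buttons honor BM_CLICK (a full press + release).
        SendMessage(hTarget, BM_CLICK, 0, 0);
    } else {
        // Otherwise fake the click as a down/up pair in client coordinates.
        POINT ptClient = pt;
        ScreenToClient(hTarget, &ptClient);
        LPARAM lp = MAKELPARAM(ptClient.x, ptClient.y);
        PostMessage(hTarget, WM_LBUTTONDOWN, MK_LBUTTON, lp);
        PostMessage(hTarget, WM_LBUTTONUP, 0, lp);
    }
}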

Qt mouse events equal to touch screen events?

I am developing a tiny app that will run on a BeagleBoard with a 7-inch touch screen, but I don't have the hardware yet, so I am developing it as a standard desktop app. Are mouse events equal to touch screen events? I am working with QTableView and I've disabled mouse-drag multiple selection via:
void CTableView::mouseMoveEvent(QMouseEvent* event)
{
    // Forward mouse moves only while the view is not drag-selecting,
    // which suppresses multi-cell drag selection.
    if (this->state() != DragSelectingState)
        QTableView::mouseMoveEvent(event);
}
Will this code also work on a touch screen if the user tries to select multiple cells with their fingers?
Normally this strongly depends on your touch driver. Mostly the touch events will be interpreted as left mouse clicks. And depending on your touch driver, you sometimes have to ensure that it is calibrated correctly (for instance, if the driver needs to know the origin of a touch event to report the right coordinates).
For multitouch device handling I strongly recommend using the QML stuff for your user interface with a MultiPointTouchArea: http://qt-project.org/doc/qt-5/qml-qtquick-multipointtoucharea.html#details . You can easily connect the QML stuff to your C++ logic as well.
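If you would rather stay with widgets than move to QML, the sketch below (Qt 5 API; the CTouchTableView name is hypothetical) shows how to receive raw touch points in the view itself. Without the attribute set in the constructor, Qt synthesizes mouse events from touches, which is why the mouseMoveEvent() override above still runs with most touch drivers:

#include <QTableView>
#include <QTouchEvent>
#include <QDebug>

class CTouchTableView : public QTableView
{
public:
    explicit CTouchTableView(QWidget* parent = nullptr) : QTableView(parent)
    {
        // Without this, Qt turns touches into synthesized mouse events.
        viewport()->setAttribute(Qt::WA_AcceptTouchEvents);
    }

protected:
    bool viewportEvent(QEvent* e) override
    {
        switch (e->type()) {
        case QEvent::TouchBegin:
        case QEvent::TouchUpdate:
        case QEvent::TouchEnd: {
            auto* te = static_cast<QTouchEvent*>(e);
            // One TouchPoint per finger currently on the screen.
            for (const QTouchEvent::TouchPoint& tp : te->touchPoints())
                qDebug() << "finger" << tp.id() << "at" << tp.pos();
            return true; // consumed: no mouse synthesis for these
        }
        default:
            return QTableView::viewportEvent(e);
        }
    }
};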

Exclusive mouse/keyboard with the Winapi

DirectInput had an option to have exclusive mouse/keyboard access. I'm now moving away from using DirectInput and was wondering how I could achieve the same behavior by just using the winapi?
Edit: I guess I could just use SetCursorPos() to the middle of the window and hide the cursor via ShowCursor()
In the case of the mouse, use the Windows raw input API.
Use the flag RIDEV_CAPTUREMOUSE in your RAWINPUTDEVICE structure for the call to RegisterRawInputDevices. This will prevent mouse clicks from activating other windows. In combination with that, use the ShowCursor function to hide the mouse cursor. Those two things will reproduce the DirectInput exclusive mouse behavior. In its later revisions, DirectInput (for the keyboard and mouse) is just a wrapper around the Raw Input API.
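A minimal sketch of that registration, assuming hwnd is your application window. Note that the documentation requires RIDEV_NOLEGACY to accompany RIDEV_CAPTUREMOUSE, so the legacy mouse messages stop and mouse data arrives via WM_INPUT instead:

#include <windows.h>

bool CaptureMouseExclusively(HWND hwnd)
{
    RAWINPUTDEVICE rid = {};
    rid.usUsagePage = 0x01; // HID_USAGE_PAGE_GENERIC
    rid.usUsage     = 0x02; // HID_USAGE_GENERIC_MOUSE
    // RIDEV_CAPTUREMOUSE keeps clicks from activating other windows;
    // it must be combined with RIDEV_NOLEGACY.
    rid.dwFlags     = RIDEV_CAPTUREMOUSE | RIDEV_NOLEGACY;
    rid.hwndTarget  = hwnd;
    if (!RegisterRawInputDevices(&rid, 1, sizeof(rid)))
        return false;
    ShowCursor(FALSE); // hide the cursor, as DirectInput exclusive mode did
    return true;
}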
I don't believe there is any equivalent control over the keyboard (and I don't think there was in DirectInput either). However, this is generally not a problem, since the user won't be able to move the input focus to another app unless they specifically want to, with Alt+Tab or Ctrl+Alt+Del.
Have you looked at SetCapture()?
It would help if your question were clearer. A lack of mouse input (i.e. WM_MOUSEMOVE messages) is generally something an app is robust to; after all, a perfectly stationary mouse won't generate any such messages. So I'm guessing that you're doing something a little unusual.
There is also a mechanism for tracking the mouse leaving your app's window(s) - see here. It involves setting up a TRACKMOUSEEVENT structure and calling TrackMouseEvent(), which is a little painful, but it does all seem to work in my experience (sketched below). I'm wondering if in fact it is this mechanism which is pausing your app?
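A minimal sketch of that setup, assuming hwnd is the window to watch; the request is one-shot and must be re-armed after each WM_MOUSELEAVE:

#include <windows.h>

void WatchForMouseLeave(HWND hwnd)
{
    TRACKMOUSEEVENT tme = { sizeof(TRACKMOUSEEVENT) };
    tme.dwFlags   = TME_LEAVE; // ask for a WM_MOUSELEAVE notification
    tme.hwndTrack = hwnd;
    TrackMouseEvent(&tme);     // fires once, then must be called again
}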
Can't help much more than that on the info provided I'm afraid.
Use ClipCursor() to confine the mouse within a specific rectangle of the screen, such as the rectangle of your window.
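For example, a sketch; ClipCursor() works in screen coordinates, so a window's client rectangle must be mapped first:

#include <windows.h>

void ConfineCursorToWindow(HWND hwnd)
{
    RECT rc;
    GetClientRect(hwnd, &rc);
    // A RECT is laid out as two POINTs; map both corners to screen space.
    MapWindowPoints(hwnd, NULL, reinterpret_cast<POINT*>(&rc), 2);
    ClipCursor(&rc); // pass NULL later to release the confinement
}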

Qt - Catch events normally handled by the Window Manager

I'm not sure quite how to phrase the question concisely, so if there is a similar question, please point me in the right direction and close this one.
I am currently building a CAD app; the user interacts within the 3D viewports primarily through the mouse and the three keyboard modifiers (Alt, Shift, Ctrl). Shift and Ctrl modify the currently selected tool's options, and Alt operates the camera - much like any other 3D CAD app.
However I'm currently developing on a GNOME desktop, and its window manager (AFAIK) catches any Alt+right-button mouse-drag events and interprets them as a window-drag command - even when not holding the title bar, and regardless of the currently highlighted widget.
This is a disaster for me because camera keyboard controls are quite standardised in my target industry. So does anyone know of a way to override this behaviour, preferably from within Qt, and preferably scoped to this one scenario in one particular widget class?
Thank you,
Cam
If you use the Qt::X11BypassWindowManagerHint flag on the window, then the window manager can't steal your key presses. However, this means you lose the native window frame (including decoration, moving, and resizing), so you likely don't want to do this.
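A minimal sketch of applying the hint; note that setWindowFlags() hides the window, so it has to be shown again:

#include <QWidget>

void bypassWindowManager(QWidget* window)
{
    window->setWindowFlags(window->windowFlags()
                           | Qt::X11BypassWindowManagerHint);
    window->show(); // setWindowFlags() hides the window; re-show it
}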
Another way: if your users are only on one or two varieties of Linux, add something to the installer which asks the user whether they want to adjust the GNOME (or whatever) key settings, and if so, changes them via gconftool-2 (or the equivalent).

Windows 7, cannot receive multitouch events on two different controls

I have Windows 7 on my machine and a multi-touch capable monitor which supports up to 2 simultaneous touches.
I have created an MFC dialog application with two sliders and am trying to move them simultaneously with two fingers, but I can only move one slider. If I touch the dialog box with two fingers, it receives two touches, but two different sliders don't receive simultaneous touches.
In MS Paint I can draw using two fingers.
I also tried to search for a multitouch application involving more than one control, but could not find any, and I am starting to wonder if it's possible at all on Windows 7.
Thanks.
You need not only your OS to support multi-touch, but your controls too. Have you done the Hands On Labs for MFC and multitouch? http://channel9.msdn.com/learn/courses/Windows7/Multitouch has several native and MFC examples.
If you don't have a real need in your app for two sliders moving at once, but were just trying it out, try something a little different, like zooming by pinching, panning by dragging two fingers, rotating, etc. If you want multiple independent touches (i.e. not interpreted as a pinch zoom), the source code for games is your best example.
If using WPF is feasible, the "Surface Toolkit for Windows Touch" provides a full suite of touch-optimized controls that can be used simultaneously.
You could perhaps host the WPF controls inside your MFC UI, but be aware that all of the WPF controls would need to be in a single HWND - Windows 7 has an OS limitation that multitouch can only be done with one HWND at a time.