Windows 7: cannot receive multitouch events on two different controls - MFC

I have Windows 7 on my machine and a multi-touch capable monitor which supports up to two simultaneous touches.
I have created an MFC dialog application with two sliders and am trying to move them simultaneously with two fingers, but I can only move one slider. If I touch the dialog box itself with two fingers it receives both touches, but two different sliders never receive simultaneous touches.
On MS Paint I can draw using two fingers.
I also tried to search for a multitouch application involving more than one control but could not find any, and I am starting to wonder if it's possible at all on Windows 7.
Thanks.

You need not only your OS to support multi-touch, but your controls too. Have you done the Hands on Labs for MFC and Multitouch? http://channel9.msdn.com/learn/courses/Windows7/Multitouch has several Native and MFC examples.
If you don't have a real need in your app for two sliders moving at once, but were just trying it out, try something a little different, like zooming by pinching, panning by dragging two fingers, rotating, etc. If you want multiple independent touches (i.e. not interpreted as a pinch zoom), the source code for games is your best example.
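If you do want to experiment with raw touch data in an MFC dialog, one possible approach (a minimal sketch, not taken from the labs above) is to register the dialog for WM_TOUCH and route the individual touch points to the sliders yourself. CMyDialog and OnTouchMessage below are hypothetical names; targeting Windows 7 (WINVER/_WIN32_WINNT >= 0x0601) is assumed.

```cpp
// Minimal sketch: register an MFC dialog for raw touch input (Windows 7+)
// and inspect each touch point in WM_TOUCH. CMyDialog is a hypothetical
// CDialogEx-derived class; routing the points to child sliders is up to you.
#include <vector>

BEGIN_MESSAGE_MAP(CMyDialog, CDialogEx)
    ON_MESSAGE(WM_TOUCH, &CMyDialog::OnTouchMessage)
END_MESSAGE_MAP()

BOOL CMyDialog::OnInitDialog()
{
    CDialogEx::OnInitDialog();
    ::RegisterTouchWindow(GetSafeHwnd(), 0);   // ask for WM_TOUCH instead of gestures
    return TRUE;
}

LRESULT CMyDialog::OnTouchMessage(WPARAM wParam, LPARAM lParam)
{
    UINT cInputs = LOWORD(wParam);
    std::vector<TOUCHINPUT> inputs(cInputs);
    if (::GetTouchInputInfo(reinterpret_cast<HTOUCHINPUT>(lParam),
                            cInputs, inputs.data(), sizeof(TOUCHINPUT)))
    {
        for (const TOUCHINPUT& ti : inputs)
        {
            // Coordinates arrive in hundredths of a pixel, in screen space.
            CPoint pt(ti.x / 100, ti.y / 100);
            ScreenToClient(&pt);
            // Hit-test pt against each slider here and drive it manually.
        }
        ::CloseTouchInputHandle(reinterpret_cast<HTOUCHINPUT>(lParam));
    }
    return 0;
}
```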

If using WPF is feasible, the "Surface Toolkit for Windows Touch" provides a full suite of touch-optimized controls that can be used simultaneously.
You could perhaps host the WPF controls inside your MFC UI, but be aware that all of the WPF controls would need to live in a single HWND - Windows 7 has an OS limitation that multitouch can only be delivered to one HWND at a time.

Related

Are VCL ListViews, ListBoxes and DBGrids fully touch-aware?

Now that we have tablets with Windows 10, I have decided to use Delphi XE7 and the VCL again to develop for these multitouch devices.
I have found that ListView, ListBox and DBGrid do not seem to have standard behavior for pan and scroll (just PanUp & PanDown, ScrollUp & ScrollDown). DBGrid does not support touch panning at all. ListBox doesn't seem to handle inertial panning the way TListView does, and ListView reacts erratically, sometimes "losing" pans: the scrollbar moves but the item list does not.
Has anyone tested these controls on Windows 8.1 or Windows 10 using a multitouch tablet? Just load the components with, say, 100 items and try to get simple, smooth vertical scrolling/panning with your fingers.
Altogether it is rather frustrating, and I cannot focus on developing the application, which is my actual task.
The question is: which is the right component, or the right way, to get panning (at least vertical panning/scrolling) working smoothly and without problems on touch screens? I thought these components should react to standard actions (like PanUp or PanDown) without the need to implement the Gesture Manager and handle every touch on the screen one by one. I would appreciate your feedback. Thank you.
Conclusion: Many thanks to all who have helped with their comments. My own conclusion is that Delphi is not ready to be used as a RAD tool for touch screens. The touch implementation is poor and needs too much work for very standard use cases. It should not be necessary to reinvent the wheel for such common, standard controls. These days there are more mobile device users than desktop users; perhaps Embarcadero should decide to pay attention to this matter and provide well-finished tools that match the OS's touch look and feel.
Let me add that the same scenario in FireMonkey using TGrid works fine.

Windows events for 3D mouse

I am developing a Windows application that must respond to events from a 3D mouse, such as the 3DConnexion devices, for example.
We have added support for 3DConnexion devices using their DLLs and libraries in C++, and it works properly. Now we would like to extend this support to any such device.
What I want to know is whether there is standard Windows support for these devices (via, for example, responding to Windows messages, like normal mouse events) or whether I should provide specific support for each type of mouse, loading the corresponding DLLs.
The latter would be rather unpleasant, because a separate library would have to be used for each device. I notice that Google Earth, Photoshop and other applications respond to 3DConnexion events, and I guess they respond to other 3D mice as well. In that case, I doubt Google uses such a cumbersome technique, which makes me suspect that there must be a general mechanism, just as there is for the "normal" mouse.
Any idea?
Thank you.
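For what it's worth, 3DConnexion devices enumerate as HID multi-axis controllers, so one general mechanism is the Win32 Raw Input API: you register for that HID usage and receive WM_INPUT messages instead of loading a vendor DLL. Below is a minimal sketch under that assumption; parsing the device-specific report layout (axes, buttons) is still left to you.

```cpp
// Minimal sketch: receive 3D-mouse data via the Win32 Raw Input API.
// Assumption: the device reports as a generic-desktop multi-axis controller
// (usage page 0x01, usage 0x08), which 3DConnexion devices do.
#include <windows.h>
#include <vector>

bool RegisterFor3DMouseInput(HWND hwnd)
{
    RAWINPUTDEVICE rid = {};
    rid.usUsagePage = 0x01;            // generic desktop controls
    rid.usUsage     = 0x08;            // multi-axis controller
    rid.dwFlags     = RIDEV_INPUTSINK; // deliver input even when not focused
    rid.hwndTarget  = hwnd;
    return ::RegisterRawInputDevices(&rid, 1, sizeof(rid)) == TRUE;
}

// In the window procedure, WM_INPUT then delivers RAWINPUT blocks:
LRESULT OnRawInput(LPARAM lParam)
{
    UINT size = 0;
    ::GetRawInputData(reinterpret_cast<HRAWINPUT>(lParam), RID_INPUT,
                      nullptr, &size, sizeof(RAWINPUTHEADER));
    std::vector<BYTE> buffer(size);
    if (::GetRawInputData(reinterpret_cast<HRAWINPUT>(lParam), RID_INPUT,
                          buffer.data(), &size, sizeof(RAWINPUTHEADER)) == size)
    {
        const RAWINPUT* raw = reinterpret_cast<const RAWINPUT*>(buffer.data());
        if (raw->header.dwType == RIM_TYPEHID)
        {
            // raw->data.hid.bRawData holds the device report; its layout
            // (translation/rotation axes, buttons) is device specific.
        }
    }
    return 0;
}
```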

Can you force a console app to look like a GUI app?

I've been weighing the pros and cons of making a GUI app, and I've decided a console app is much more powerful for my calculator, especially since it does different things like FOIL, quadratic equations, etc. So my question is: can I make the console look like a GUI-based app?
The answer to your question depends on exactly what you mean by "console." If you're talking about Windows console windows, then the answer is "maybe." Some Windows installations can emulate VGA/EGA graphics within a console window, making them able to play old games for DOS.
Your mission would be to implement every GUI widget you need, such as clickable buttons, text-entry fields, etc. in terms of simple graphics primitives for drawing lines and rectangles. Then you have to write code that figures out where the mouse is and draws the mouse pointer in the right spot. You'd also have to write code to make the cursor blink, to make the arrow keys move the cursor, and to make it possible to select characters in a text entry box and copy, cut, and paste them.
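For the Windows console case, the "figure out where the mouse is" part at least doesn't have to be done from scratch: the console input API can report mouse events directly. Here is a minimal sketch (error handling omitted; quick-edit mode has to be off or the console swallows the clicks):

```cpp
// Minimal sketch: read mouse clicks in a Windows console window.
#include <windows.h>
#include <cstdio>

int main()
{
    HANDLE hIn = ::GetStdHandle(STD_INPUT_HANDLE);
    // ENABLE_EXTENDED_FLAGS without ENABLE_QUICK_EDIT_MODE turns quick-edit off.
    ::SetConsoleMode(hIn, ENABLE_EXTENDED_FLAGS | ENABLE_MOUSE_INPUT);

    for (;;)
    {
        INPUT_RECORD rec;
        DWORD read = 0;
        if (!::ReadConsoleInput(hIn, &rec, 1, &read) || read == 0)
            break;

        if (rec.EventType == MOUSE_EVENT &&
            (rec.Event.MouseEvent.dwButtonState & FROM_LEFT_1ST_BUTTON_PRESSED))
        {
            COORD pos = rec.Event.MouseEvent.dwMousePosition; // character-cell coordinates
            std::printf("click at column %d, row %d\n", pos.X, pos.Y);
        }
    }
    return 0;
}
```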
When you got done, you'd have a program that works on some people's computers, but not on others. On some Windows installations, the console windows can't do graphics or go fullscreen. Your app wouldn't work at all on those systems, although you could write a fullscreen Windows app using a 2D game library such as SDL or Allegro instead of writing a console app, which would bring you back to the previous paragraph.
As you might have guessed by now, rolling your own GUI would be a whole lot more work than writing a Windows GUI program in which the buttons, text fields, etc are already implemented for you, the cursor already blinks, the mouse already clicks, etc.
Also, the code that does the actual calculations should be totally separate from the code that gets the input from the user and puts the answers on the screen, so that code shouldn't factor into whether you want to write a GUI or a console app. They shouldn't even be in the same .cpp file as the I/O routines.
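To make that separation concrete, here is a hypothetical sketch: the quadratic solver knows nothing about where its input came from or where the result goes, so the same function works behind a console loop or a GUI button handler.

```cpp
// calc.h -- pure calculation code, no I/O (hypothetical example)
#include <cmath>
#include <optional>
#include <utility>

// Returns the real roots of ax^2 + bx + c = 0, or nothing if there are none.
inline std::optional<std::pair<double, double>>
solveQuadratic(double a, double b, double c)
{
    const double d = b * b - 4.0 * a * c;
    if (a == 0.0 || d < 0.0)
        return std::nullopt;
    const double s = std::sqrt(d);
    return std::make_pair((-b + s) / (2.0 * a), (-b - s) / (2.0 * a));
}
```

A console front end would read a, b and c with std::cin and print the pair; a GUI front end would call exactly the same function from a button handler.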
Now, some programmers use the term "console" to refer to xterm windows on Linux. These are not the same thing at all, and cannot draw graphics (and "console" is the wrong name for them to boot). But sometimes you see menus and stuff within them, "drawn" with colored text. Usually, these are drawn and managed using the external dialog shell command.

Qt - Catch events normally handled by the Window Manager

I'm not sure quite how to phrase the question concisely, so if there is a similar question, please point me in the right direction and close this one.
I am currently building a CAD app, the user interacts within the 3D viewports primarily through the mouse and the three keyboard modifiers (alt, shift, ctrl). Shift and control modify the currently selected tool options, and alt operates the camera - much like any other 3D CAD app.
However, I'm currently developing on a GNOME desktop, and its window manager (AFAIK) catches any Alt + right-button mouse drag events and interprets them as a window-drag command - even when not holding the title bar, and regardless of the currently highlighted widget.
This is a disaster for me because camera keyboard controls are quite standardised in my target industry. So does anyone know of a way to override this behaviour, preferably from within Qt, and preferably limited to this one scenario in one particular widget class?
Thank you,
Cam
If you use the Qt::X11BypassWindowManagerHint flag on the window, the window manager can't steal your keypresses. However, this means you lose the native window frame (including decoration, moving, and resizing), so it is likely you don't want to do this.
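If you do go that route, a minimal sketch looks like this (viewportWindow is assumed to be your top-level widget; note that changing window flags hides the widget, so it has to be shown again):

```cpp
// Minimal sketch: ask X11 to leave this top-level window alone.
#include <QWidget>

void bypassWindowManager(QWidget* viewportWindow)
{
    viewportWindow->setWindowFlags(viewportWindow->windowFlags()
                                   | Qt::X11BypassWindowManagerHint);
    viewportWindow->show();   // setWindowFlags() hides the window, so re-show it
}
```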
Another way: if your users are only on one or two varieties of Linux, add something to the installer which asks the user whether they want to adjust the GNOME (or whatever) key settings, and if so, changes them via gconftool-2 (or the equivalent).

Simulate fullscreen

I've seen an application that simulates a fullscreen application by removing the title bar and the window borders. I've done some research and found GetWindowLongPtr() for that.
Now my question: How can I find and identify the application and get the appropriate window handle? How can I distinguish multiple instances of the application (running from different locations on disk)?
Just to make "simulate" more precise: if you make an application go fullscreen and you click on a different monitor, it minimizes itself. If the application runs in a window and you click on a different monitor, the window is not affected. If you remove the borders of the window and position it on the left or right monitor, you can still work on the other monitor without minimizing the application, yet it still looks like the application is running fullscreen on one of the monitors.
As an example: you can run Eve (www.eveonline.com) in fullscreen or windowed mode. In fullscreen mode you cannot click on a second monitor without Eve minimizing itself. In windowed mode you can. There are tools like evemover that allow you to set up your window on one monitor so it looks like fullscreen but is actually in windowed mode. That's what I want to achieve. Evemover actually provides some of its source code, which is how I know that removing the border and setting the position is done with the Win32 API, using SetWindowLongPtr and SetWindowPos.
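For reference, the border-stripping and repositioning that evemover-style tools perform boils down to something like the following sketch (it assumes you already have the target HWND; finding it is addressed in the answer below):

```cpp
// Minimal sketch: strip the frame from an existing window and stretch it
// over the monitor it currently occupies.
#include <windows.h>

void MakeBorderlessFullscreen(HWND hwnd)
{
    LONG_PTR style = ::GetWindowLongPtr(hwnd, GWL_STYLE);
    style &= ~(WS_CAPTION | WS_THICKFRAME | WS_MINIMIZEBOX | WS_MAXIMIZEBOX | WS_SYSMENU);
    ::SetWindowLongPtr(hwnd, GWL_STYLE, style);

    HMONITOR mon = ::MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST);
    MONITORINFO mi = { sizeof(mi) };
    ::GetMonitorInfo(mon, &mi);

    ::SetWindowPos(hwnd, nullptr,
                   mi.rcMonitor.left, mi.rcMonitor.top,
                   mi.rcMonitor.right - mi.rcMonitor.left,
                   mi.rcMonitor.bottom - mi.rcMonitor.top,
                   SWP_NOZORDER | SWP_FRAMECHANGED | SWP_SHOWWINDOW);
}
```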
Many applications use divergent and confusing interpretations of the phrase "fullscreen".
A fullscreen application simply occupies the full screen area.
DirectX applications can request a fullscreen exclusive mode. The advantage of this mode to DirectX applications is that, with exclusive access to the (full) screen, they are allowed to change the resolution, bit depth, etc., as well as gain access to vertical-sync-synchronized hardware buffering, where the screen surface is 'flipped' between display intervals so that 'tearing' does not occur.
Anyway, the Windows desktop understands 'fullscreen windows' - windows that occupy the full area of a monitor and have no non-client elements. When windows like this are created, things like desktop gadgets and taskbars automatically hide themselves. Modern games have come to call this mode 'fullscreen windowed'.
Back to your question: 'FindWindow' is the API used to discover other applications' windows. Getting the path to the application that created the window is harder: GetWindowThreadProcessId gets you the process ID of the owning process, and OpenProcess gets you a handle that you can pass to QueryFullProcessImageName (implemented on Vista and above) to obtain the full path to the process.
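A minimal sketch of that chain (the window title is a placeholder; error handling trimmed):

```cpp
// Minimal sketch: find a window by title and recover the full path of the
// process that owns it (Vista+ for QueryFullProcessImageName).
#include <windows.h>
#include <string>

std::wstring GetOwningExePath(const wchar_t* windowTitle)
{
    HWND hwnd = ::FindWindowW(nullptr, windowTitle);   // window title is a placeholder
    if (!hwnd) return L"";

    DWORD pid = 0;
    ::GetWindowThreadProcessId(hwnd, &pid);

    HANDLE proc = ::OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, pid);
    if (!proc) return L"";

    wchar_t path[MAX_PATH];
    DWORD size = MAX_PATH;
    std::wstring result;
    if (::QueryFullProcessImageNameW(proc, 0, path, &size))
        result.assign(path, size);
    ::CloseHandle(proc);
    return result;   // compare this path to tell instances apart
}
```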
I think you are referring to applications like window aggregators, which 'plug in' to the system and act from outside the application.
Look at the code for the freeware app PuttyCM (for aggregating Putty (SSH) shell windows as tabs). IIRC, it ensures that the Window pointer passed to the application has the flags already set.
For applications running from different places, you will probably need some way of identifying them - registry entries, an install log, etc.