I am developing a Windows application that must respond to events from 3D mice, such as the 3DConnexion devices.
We have added support for 3DConnexion mice using their DLLs and libraries in C++, and it works properly. Now we would like to extend this support to any such device.
What I want to know is whether there is standard Windows support for these devices (for example, by responding to Windows messages, as with normal mouse events), or whether I have to add specific support for each type of mouse, loading the corresponding DLLs.
The latter would be rather unpleasant, because a separate library would have to be used for each device. I note that Google Earth, Photoshop and other applications respond to 3DConnexion events, and I guess they respond to other 3D mice as well. I doubt Google uses that cumbersome technique, which makes me suspect that there must be a general mechanism, just as there is for the "normal" mouse.
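To make it concrete, the kind of general mechanism I am hoping for would be something along these lines. This is only a sketch, under the assumption that such devices are exposed to the Raw Input API as HID "multi-axis controllers" (usage page 0x01, usage 0x08); I do not know whether that is actually the case:

    // Sketch: ask Windows to deliver WM_INPUT messages for multi-axis HID devices.
    #include <windows.h>

    bool RegisterFor3DMouseInput(HWND hwnd)
    {
        RAWINPUTDEVICE rid = {};
        rid.usUsagePage = 0x01;            // generic desktop controls
        rid.usUsage     = 0x08;            // multi-axis controller (assumed usage)
        rid.dwFlags     = RIDEV_INPUTSINK; // receive input even when not in the foreground
        rid.hwndTarget  = hwnd;
        return RegisterRawInputDevices(&rid, 1, sizeof(rid)) == TRUE;
    }

    // The window procedure would then receive WM_INPUT and read the HID
    // report data with GetRawInputData().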
Any idea?
Thank you.
For image processing I want to get all the pixel information from a given process.
Concretely, it's for testing an image-hashing algorithm that identifies Hearthstone cards, so I need to get a screenshot of the given process.
How can I solve it in windows?
My idea so far:
Get the process name.
Get the process ID.
Get the window handle.
I have no idea how to go further from this point (a rough sketch of those first steps is below).
I hope it's understandable what I want to achieve.
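For reference, the first steps might look roughly like this. It is only a sketch; the "first visible top-level window" heuristic is an assumption on my part:

    // Sketch: find the process ID by executable name, then find its main window.
    #include <windows.h>
    #include <tlhelp32.h>
    #include <string>

    DWORD FindProcessId(const std::wstring& exeName)
    {
        DWORD pid = 0;
        HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
        if (snap == INVALID_HANDLE_VALUE) return 0;
        PROCESSENTRY32W pe = { sizeof(pe) };
        if (Process32FirstW(snap, &pe)) {
            do {
                if (exeName == pe.szExeFile) { pid = pe.th32ProcessID; break; }
            } while (Process32NextW(snap, &pe));
        }
        CloseHandle(snap);
        return pid;
    }

    struct FindWindowData { DWORD pid; HWND hwnd; };

    BOOL CALLBACK EnumProc(HWND hwnd, LPARAM lParam)
    {
        FindWindowData* data = reinterpret_cast<FindWindowData*>(lParam);
        DWORD windowPid = 0;
        GetWindowThreadProcessId(hwnd, &windowPid);
        if (windowPid == data->pid && IsWindowVisible(hwnd)) {
            data->hwnd = hwnd;   // take the first visible top-level window of the process
            return FALSE;        // stop enumerating
        }
        return TRUE;
    }

    HWND FindMainWindow(DWORD pid)
    {
        FindWindowData data = { pid, nullptr };
        EnumWindows(EnumProc, reinterpret_cast<LPARAM>(&data));
        return data.hwnd;
    }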
Unfortunately, there is no general method for getting the pixels of a particular window that I would be aware of. Depending on how the target application draws itself, this task can be very simple or very complicated. If we were talking about an application that uses good old GDI, then you could just get yourself an HDC to the window via GetWindowDC() and BitBlt/StretchBlt the content over into a bitmap of your own.
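For the GDI case, a minimal sketch might look like this (hwndTarget is whatever window handle you ended up with; error handling omitted):

    // Sketch: copy the client area of a GDI window into a bitmap of our own.
    #include <windows.h>

    HBITMAP CaptureWindowGdi(HWND hwndTarget)
    {
        RECT rc;
        GetClientRect(hwndTarget, &rc);
        int width  = rc.right - rc.left;
        int height = rc.bottom - rc.top;

        HDC hdcWindow = GetDC(hwndTarget);            // or GetWindowDC() for the whole window
        HDC hdcMem    = CreateCompatibleDC(hdcWindow);
        HBITMAP hbmp  = CreateCompatibleBitmap(hdcWindow, width, height);
        HGDIOBJ old   = SelectObject(hdcMem, hbmp);

        BitBlt(hdcMem, 0, 0, width, height, hdcWindow, 0, 0, SRCCOPY);

        SelectObject(hdcMem, old);
        DeleteDC(hdcMem);
        ReleaseDC(hwndTarget, hdcWindow);
        return hbmp;   // caller owns the bitmap (DeleteObject when done)
    }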
Unfortunately, the target application in your case appears to be a game. Games typically use 3D graphics APIs like Direct3D or OpenGL for drawing. Assuming that you cannot simply modify the target application to just send the desired data over to you out of its own free will, the only way to specifically record output from such applications that I'm aware of is to hook into the graphics API and capture the data from underneath the API. This can be done. However, implementing such a system is quite involved. There might be existing libraries to aid with writing such applications, but I don't know any that I could recommend here.
If you don't have to capture the game content in real-time, you could just use a screen recording application to, e.g., record a video and then use that video as input for your algorithm. There are also graphics debugging tools like NSight Graphics or RenderDoc that you could use.
Be aware that games, particularly online games, these days often have cheat protection systems that are likely to get very angry at you if you attempt to hook into the game…
Apart from all that, one alternative approach might be to use DXGI Output Duplication to just capture the entire desktop. While you won't be able to target one specific application (as far as I know), this would potentially have several advantages: First of all, it's only moderately complex to set up compared to a fully-fledged API-hook-based approach. Second, it should work regardless of what API the target application uses and even if the application is in fullscreen mode. Third, since you will have the data delivered straight from the operating system, you shouldn't have any issues with cheat protection. You can use MonitorFromWindow() to get the monitor your target window appears on and then enumerate all outputs of all DXGI adapters to find the one that corresponds to that HMONITOR…
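To give you an idea of the setup, here is a rough sketch of finding the DXGI output that corresponds to the window's monitor and starting duplication on it (error handling and the actual AcquireNextFrame() loop omitted; this assumes you have already created a D3D11 device):

    // Sketch: locate the DXGI output for the window's monitor and duplicate it.
    #include <windows.h>
    #include <d3d11.h>
    #include <dxgi1_2.h>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    ComPtr<IDXGIOutputDuplication> DuplicateOutputForWindow(HWND hwnd, ID3D11Device* device)
    {
        HMONITOR monitor = MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST);

        ComPtr<IDXGIFactory1> factory;
        CreateDXGIFactory1(IID_PPV_ARGS(&factory));

        ComPtr<IDXGIAdapter1> adapter;
        for (UINT a = 0; factory->EnumAdapters1(a, &adapter) != DXGI_ERROR_NOT_FOUND; ++a) {
            ComPtr<IDXGIOutput> output;
            for (UINT o = 0; adapter->EnumOutputs(o, &output) != DXGI_ERROR_NOT_FOUND; ++o) {
                DXGI_OUTPUT_DESC desc;
                output->GetDesc(&desc);
                if (desc.Monitor == monitor) {
                    ComPtr<IDXGIOutput1> output1;
                    output.As(&output1);
                    ComPtr<IDXGIOutputDuplication> duplication;
                    output1->DuplicateOutput(device, &duplication);
                    return duplication;   // call AcquireNextFrame() on this in a loop
                }
            }
        }
        return nullptr;
    }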
Since we have tablets with Windows 10, I have decided to use Delphi XE7 and the VCL again to develop for these multitouch devices.
I have found that ListView, ListBox and DBGrid do not seem to have standard pan and scroll behavior (just PanUp & PanDown, ScrollUp & ScrollDown). DBGrid does not support touch panning at all. ListBox does not seem to handle inertial panning the way TListView does... and ListView reacts erratically, sometimes "losing" pans, moving the scrollbar but not the item list.
Has anyone tested these controls on Windows 8.1 or Windows 10 with a multitouch tablet? Just load the components with, say, 100 items and try to get simple, smooth vertical scrolling/panning with your fingers.
All together it is rather frustrating, and I cannot focus on developing the application, which is my actual task.
The question is: which is the right component, or the right way, to get panning (at least vertical panning/scrolling) working smoothly and without problems on touch screens? I thought these components would react to standard actions (like PanUp or PanDown) without the need to implement the Gesture Manager and handle every touch on the screen one by one. I would like to receive your kind feedback. Thank you.
Conclusion: Many thanks to all who have helped with their comments. My own conclusion is that Delphi is not ready to be used as a RAD tool for touch screens. The touch implementation is poor and needs too much work for very standard use. It should not be necessary to reinvent the wheel for such common, standard controls. These days there are more mobile-device users than desktop users; perhaps Embarcadero should decide to pay attention to this matter and deliver well-finished tools that match the OS touch look and feel.
Let me add that the same thing in FireMonkey using a TGrid works fine.
Let me first specify my development essentials. I am writing a Windows DLL. The programming language I focus on is C/C++. Asm blocks are possible as well when required for the task. Maybe even a driver, but I do not have any experience with those at all.
The DLL is injected into a host process. That is always a DirectX environment, either DX9, DX10 or DX11, and it may run in fullscreen or windowed mode.
The method should support Windows XP up to Windows 7 and is compiled for x86 only.
The goal is to come up with a function that takes a screenshot of a given process window. The screenshot is never taken from the host process itself; it is always another process! The window may contain DirectX or GDI32 content. Other content I am not thinking of at the moment may also be possible (Windows Forms comes to mind; I am not sure how that is rendered internally). The windows may be minimized.
That screenshot needs to be accessible/convertible to a DirectX texture such as a Texture2D, depending on the DirectX environment I am working in. Saving the screenshot as a PNG/BMP is enough though, as I know how to create such a texture from memory.
I have already tried the old-style BitBlt way, but that did not work on minimized applications. The minimized applications are drawn when I send WM_PAINT messages to the target window. That is not a solution for me, as I also need to handle DirectX applications, which do not react to such messages.
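To make the question concrete, this is roughly the kind of old-style capture I mean. It is an illustration rather than my exact code; PrintWindow is just one variant of the approach (it asks the window to render itself via WM_PRINT), and it still does not help with the DirectX windows:

    // Illustration: ask a window to render itself into a memory DC, then the
    // bitmap bits could be saved as BMP/PNG or uploaded as a texture.
    #include <windows.h>

    HBITMAP CaptureWithPrintWindow(HWND hwndTarget)
    {
        RECT rc;
        GetWindowRect(hwndTarget, &rc);
        int width  = rc.right - rc.left;
        int height = rc.bottom - rc.top;

        HDC hdcScreen = GetDC(nullptr);
        HDC hdcMem    = CreateCompatibleDC(hdcScreen);
        HBITMAP hbmp  = CreateCompatibleBitmap(hdcScreen, width, height);
        HGDIOBJ old   = SelectObject(hdcMem, hbmp);

        // Sends WM_PRINT / WM_PRINTCLIENT to the target window, which some
        // windows honor even when they are not currently visible.
        PrintWindow(hwndTarget, hdcMem, 0);

        SelectObject(hdcMem, old);
        DeleteDC(hdcMem);
        ReleaseDC(nullptr, hdcScreen);
        return hbmp;
    }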
Maybe I need to hook each single DirectX window to accomplish my task and access the backbuffer directly; I do hope for better methods, though.
Because I take a lot of screenshots from multiple windows, I would like to implement a fast method that does not hog the CPU. Copying from video RAM may be a bad way to go with such performance needs.
I hope for some ideas, maybe code samples, as I am not familiar with all the possibilities I could go for. I have looked at the Windows thumbnail API, but from what I could read it does not support XP.
Thanks in advance,
Frank
I have Windows 7 on my machine and a multi-touch capable monitor which supports up to 2 simultaneous touches.
I have created an MFC dialog application with two sliders and am trying to move them simultaneously with two fingers, but I can only move one slider. If I touch the dialog box with two fingers it receives two touches, but two different sliders do not receive simultaneous touches.
On MS Paint I can draw using two fingers.
I also tried to search for a multitouch application involving more than one control but could not find any, and I am starting to wonder if it's possible at all on Windows 7.
Thanks.
You need not only your OS to support multi-touch, but your controls too. Have you done the Hands On Labs for MFC and multitouch? http://channel9.msdn.com/learn/courses/Windows7/Multitouch has several native and MFC examples.
If you don't have a real need in your app for two sliders moving at once, but were just trying it out, try something a little different, like zooming by pinching, panning by dragging two fingers, rotating, etc. If you want multiple independent touches (i.e., not interpreted as a pinch zoom), the source code for games is your best example.
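As an illustration, registering a window for raw WM_TOUCH input (rather than gestures) looks roughly like this in MFC. This is a sketch; it assumes Visual Studio 2010 or later, where CWnd has the touch helpers, and CMyDialog is a placeholder class name:

    // Sketch: handle individual touch points yourself instead of gestures.
    BOOL CMyDialog::OnInitDialog()
    {
        CDialogEx::OnInitDialog();
        RegisterTouchWindow();   // CWnd helper; wraps ::RegisterTouchWindow
        return TRUE;
    }

    // Called once per touch point whenever a WM_TOUCH message arrives.
    BOOL CMyDialog::OnTouchInput(CPoint pt, int nInputNumber, int nInputsCount,
                                 PTOUCHINPUT pInput)
    {
        if (pInput->dwFlags & TOUCHEVENTF_DOWN)
        {
            // Route the point to whichever control (e.g. a slider) lies under it.
        }
        return TRUE;   // TRUE = handled
    }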
If using WPF is feasible, the "Surface Toolkit for Windows Touch" provides a full suite of touch-optimized controls that can be used simultaneously.
You could perhaps host the WPF controls inside your MFC UI, but be aware that all of the WPF controls would need to be in a single HWND: Windows 7 has an OS limitation that multitouch can only be done with one HWND at a time.
I've been challenged with a C++ 3D application project that will use 3 displays, each one rendering from a different camera.
Recently I learned about Ogre3D, but it's not clear whether it supports outputting different cameras to different displays/GPUs.
Does anyone have experience with a similar setup, using Ogre or another engine?
At least on most systems (e.g., Windows, MacOS) the windowing system creates a virtual desktop, with different monitors mapped to different parts of the desktop. If you want to, you can (for example) create one big window that will cover all three displays. If you set that window up to use OpenGL, almost anything that uses OpenGL (almost certainly including Ogre3D) will work just fine, though in some cases producing that much output resolution can tax the graphics card to the point that it's a bit slower than usual.
If you want to deal with a separate window on each display, things might be a bit more complex. OpenGL itself doesn't (even attempt to) define how to handle display in multiple windows -- that's up to a platform-specific set of functions. On Windows, for example, you have a rendering context for each window, and have to use wglMakeCurrent to pick which rendering context you draw to at any given time.
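In code, the per-window loop on Windows comes down to something like this (a sketch; the device contexts and GL contexts are assumed to have been created already with GetDC/wglCreateContext):

    // Sketch: render each window with its own HDC/HGLRC pair.
    #include <windows.h>

    struct GLWindow {
        HDC   hdc;
        HGLRC hglrc;
    };

    void RenderAll(GLWindow* windows, int count)
    {
        for (int i = 0; i < count; ++i) {
            wglMakeCurrent(windows[i].hdc, windows[i].hglrc);  // select this window's context
            // ... issue the OpenGL calls for this window's camera/view here ...
            SwapBuffers(windows[i].hdc);
        }
        wglMakeCurrent(nullptr, nullptr);   // release the last context
    }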
If memory serves, the Windows port of Ogre3D supports multiple rendering contexts, so this shouldn't be a problem either. I'd expect it can work with multiple windows on other systems as well, but I haven't used it on any other systems, so I can't say with any certainty.
My immediate guess, however, is that the triple monitor support will be almost inconsequential in your overall development effort. Of course, it does mean that you (can tell your boss) need a triple monitor setup for development and testing, which certainly isn't a bad thing! :-)
Edit: OpenGL itself doesn't specify anything about full-screen windows vs. normal windows. If memory serves, at least on Windows, to get a full-screen application you use ChangeDisplaySettings with CDS_FULLSCREEN. After that, it treats essentially the entire virtual desktop as a single window. I don't recall having done that with multiple monitors though, so I can't say much with any great certainty.
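A minimal sketch of that call (the mode values are just placeholders):

    // Sketch: switch the primary display into a specific full-screen mode.
    #include <windows.h>

    bool EnterFullscreen(int width, int height, int bitsPerPixel)
    {
        DEVMODE dm = {};
        dm.dmSize       = sizeof(dm);
        dm.dmPelsWidth  = width;
        dm.dmPelsHeight = height;
        dm.dmBitsPerPel = bitsPerPixel;
        dm.dmFields     = DM_PELSWIDTH | DM_PELSHEIGHT | DM_BITSPERPEL;
        return ChangeDisplaySettings(&dm, CDS_FULLSCREEN) == DISP_CHANGE_SUCCESSFUL;
    }

    // ChangeDisplaySettings(nullptr, 0) restores the original display mode.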
There are several things to be said about multihead support in the case of OGRE3D. In my experience, a working solution is to use the source version of Ogre 1.6.1 and apply this patch.
Using this patch, users have managed to render an Ogre application on a 6 monitors configuration.
Personally, I've successfully applied this patch and used it with the StereoManager plugin to hook Ogre applications up to a 3D projector. I only used the Direct3D9 backend. The StereoManager plugin comes with a modified demo (Fresnel_Demo), which can help you set up your first multihead application.
I should also add that the multihead patch is now part of the Ogre core, as of version 1.7. Ogre 1.7 was recently released as an RC1, so this might be the quickest and easiest way to get it working.
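To give an idea of the setup with a multihead-capable Ogre build, the window creation looks roughly like this. It is a sketch only: root and sceneMgr are assumed to exist, and the "monitorIndex" parameter name is what I used with the Direct3D9 render system; it may vary with the backend and Ogre version:

    // Sketch: one render window per monitor, each with its own camera.
    Ogre::NameValuePairList params;
    for (int i = 0; i < 3; ++i)
    {
        params["monitorIndex"] = Ogre::StringConverter::toString(i);
        Ogre::RenderWindow* window =
            root->createRenderWindow("Window" + Ogre::StringConverter::toString(i),
                                     1920, 1080, /*fullScreen=*/true, &params);

        Ogre::Camera* camera =
            sceneMgr->createCamera("Camera" + Ogre::StringConverter::toString(i));
        Ogre::Viewport* vp = window->addViewport(camera);
        camera->setAspectRatio(Ogre::Real(vp->getActualWidth()) /
                               Ogre::Real(vp->getActualHeight()));
    }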