SetCursor while dragging files into a window - c++

I'm using the Windows API to create an application with a single window; everything inside this window is drawn using Direct2D.
Now I want to allow files to be dropped onto specific parts of my window's client area, so I'm handling the WM_DROPFILES message. No problem here: when files are dropped in those specific areas, I can process them correctly and everything works properly. BTW, my window has DragAcceptFiles(hWnd, TRUE) set, so it always accepts dropped files.
I want the mouse cursor to change depending on the area of the window the mouse is in: in the areas where I don't handle the drop, I want the "not allowed" cursor, and in the areas where I do handle drops, I want the regular drop cursor.
The first thing I noticed is that no message is generated while files are being dragged over the window, so I added a low-level mouse hook (WH_MOUSE_LL via SetWindowsHookEx). In the hook procedure I only look at WM_MOUSEMOVE, so I can change the cursor according to the area the mouse is in.
The problem is that SetCursor does nothing: when my window is configured to accept dragged files, the cursor is always the drag/drop cursor, no matter how many times I call SetCursor.
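Roughly, the setup I'm describing looks like this (simplified sketch; the hit test against my drop areas is just a placeholder):

    // Simplified sketch of the low-level mouse hook described above; the
    // hit test against my drop areas is only a placeholder here.
    #include <windows.h>

    static HHOOK g_mouseHook = nullptr;

    static LRESULT CALLBACK LowLevelMouseProc(int nCode, WPARAM wParam, LPARAM lParam)
    {
        if (nCode == HC_ACTION && wParam == WM_MOUSEMOVE)
        {
            const MSLLHOOKSTRUCT *info = reinterpret_cast<const MSLLHOOKSTRUCT*>(lParam);
            POINT pt = info->pt;       // screen coordinates of the mouse
            bool overDropArea = false; // placeholder: hit-test pt against the handled areas

            // This call is what has no visible effect while files are being dragged.
            SetCursor(LoadCursor(nullptr, overDropArea ? IDC_ARROW : IDC_NO));
        }
        return CallNextHookEx(g_mouseHook, nCode, wParam, lParam);
    }

    // Installed once after creating the window:
    //   g_mouseHook = SetWindowsHookEx(WH_MOUSE_LL, LowLevelMouseProc,
    //                                  GetModuleHandle(nullptr), 0);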
It seems impossible to change the cursor this way, but is there any other way of doing what I'm trying to achieve?

You need to write a class in your code that implements the IDropTarget interface, then create an instance of that class and pass it to RegisterDragDrop() to associate it with your window. Do not use DragAcceptFiles() anymore.
Whenever the user drags anything (not just files) over your window, your IDropTarget::DragEnter(), IDropTarget::DragOver(), and IDropTarget::DragLeave() methods will be called accordingly, giving you the current coordinates of the drag and information about the data being dragged (so you can filter out any data you don't want to accept). If you choose to accept the data, and the user actually drops the data onto your window, your IDropTarget::Drop() method will be called.
As the drop target, it is not your responsibility to change the cursor. It is the responsibility of the drop source instead to handle that as needed. In your IDropTarget::DragEnter() and IDropTarget::DragOver() implementations, all you need to do is set the pdwEffect output parameter to an appropriate DROPEFFECT value. That value gets passed back to the drop source, which then displays visual feedback to the user (like changing the cursor) in its IDropSource::GiveFeedback() implementation.
It is possible for your IDropTarget to be invoked without user interaction (i.e., programmatically from other apps, and not just for drag&drop operations). That is why the drop source, not the drop target, decides whether or not to display UI updates to the user: only the drop source knows why it is invoking your IDropTarget in the first place. The drop target does not know (or care) why it is being invoked, only that it is being given some data and asked whether it will accept or reject that data, nothing more.
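To illustrate, a minimal (untested) sketch of such an implementation might look like the following; IsOverDropArea() is a placeholder for your own client-area hit test, and error handling and reference-counting details are kept to a minimum:

    #include <windows.h>
    #include <ole2.h>

    // Untested sketch: an IDropTarget that accepts CF_HDROP data only over
    // certain areas of the window. IsOverDropArea() is a hypothetical,
    // application-specific hit test.
    class FileDropTarget : public IDropTarget
    {
        LONG m_refCount = 1;
        HWND m_hwnd;
        bool m_dataIsUsable = false;

    public:
        explicit FileDropTarget(HWND hwnd) : m_hwnd(hwnd) {}

        // IUnknown
        STDMETHODIMP QueryInterface(REFIID riid, void **ppv) override
        {
            if (riid == IID_IUnknown || riid == IID_IDropTarget) {
                *ppv = static_cast<IDropTarget*>(this);
                AddRef();
                return S_OK;
            }
            *ppv = nullptr;
            return E_NOINTERFACE;
        }
        STDMETHODIMP_(ULONG) AddRef() override { return InterlockedIncrement(&m_refCount); }
        STDMETHODIMP_(ULONG) Release() override
        {
            ULONG n = InterlockedDecrement(&m_refCount);
            if (n == 0) delete this;
            return n;
        }

        // IDropTarget
        STDMETHODIMP DragEnter(IDataObject *pDataObj, DWORD, POINTL pt, DWORD *pdwEffect) override
        {
            // Only accept data that contains a file list (CF_HDROP).
            FORMATETC fmt = { CF_HDROP, nullptr, DVASPECT_CONTENT, -1, TYMED_HGLOBAL };
            m_dataIsUsable = (pDataObj->QueryGetData(&fmt) == S_OK);
            *pdwEffect = EffectForPoint(pt);
            return S_OK;
        }
        STDMETHODIMP DragOver(DWORD, POINTL pt, DWORD *pdwEffect) override
        {
            // The effect reported here is passed back to the drop source,
            // which uses it in GiveFeedback() to pick the cursor.
            *pdwEffect = EffectForPoint(pt);
            return S_OK;
        }
        STDMETHODIMP DragLeave() override { return S_OK; }
        STDMETHODIMP Drop(IDataObject *pDataObj, DWORD, POINTL pt, DWORD *pdwEffect) override
        {
            *pdwEffect = EffectForPoint(pt);
            if (*pdwEffect != DROPEFFECT_NONE) {
                // Retrieve the HDROP from pDataObj here and process the files.
            }
            return S_OK;
        }

    private:
        DWORD EffectForPoint(POINTL pt)
        {
            POINT client = { pt.x, pt.y };
            ScreenToClient(m_hwnd, &client);
            return (m_dataIsUsable && IsOverDropArea(client)) ? DROPEFFECT_COPY
                                                              : DROPEFFECT_NONE;
        }
        bool IsOverDropArea(POINT pt)
        {
            // Placeholder hit test: accept drops only in the left half of the
            // client area. Replace with your real drop regions.
            RECT rc;
            GetClientRect(m_hwnd, &rc);
            return pt.x < (rc.right - rc.left) / 2;
        }
    };

You would call OleInitialize(nullptr) at startup (drag and drop requires OLE, not just COM), then RegisterDragDrop(hWnd, pDropTarget) after creating the window and RevokeDragDrop(hWnd) before it is destroyed.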
Refer to MSDN for more details:
OLE and Data Transfer
Transferring Shell Objects with Drag-and-Drop and the Clipboard

Related

Using two mice to perform completely different actions in Windows

I'm currently trying to develop an application to use two mice to perform completely different actions in Windows. However, after having spent a couple of days on it, I'm starting to wonder if what I want to do is even possible using the Windows APIs. As I'm far from being an expert in Windows APIs, I would like to get your opinions to know whether I'm going in the right direction or whether I should try to do it completely differently (maybe by developing a driver?).
Here's what I want to do: imagine two mice are plugged into my computer. I would like to use the first one as a regular mouse, while the second one would be used to perform completely different actions. For instance, clicking the second mouse's left button would open a new tab in Firefox (sending a CTRL+T command to the Firefox app), and clicking its right button would send a CTRL+C. Then, moving the second mouse upwards would zoom in, and moving it downwards would zoom the Firefox page out (so the mouse cursor on screen would remain fixed while doing that!). The idea is also to recognize which application is currently in use (which one has mouse/keyboard focus) and perform different actions depending on it. So, for instance, the second mouse's left click would generate a CTRL+T in Firefox, a CTRL+B in Word and a CTRL+S in Notepad (in fact, the idea is to parameterize those actions at will). All of that while the first mouse must continue to act just as a regular mouse.
So, it's important to understand that my application will run in the background and will never, per se, interact directly with the user (no GUI, as it doesn't require the user to input anything). Its purpose is just to modify the mouse input coming from the second mouse and send other inputs (messages) to the application currently being used.
So far, I'm using raw input. I'm able to differentiate which mouse is being used, and I'm able to send (application-specific) messages to other applications when an action is performed on the second mouse. I'm even able to lock the cursor on screen when the second mouse is moved (so that only the corresponding message is sent to the application of interest!). However, I'm unable to block the button messages sent by the second mouse to the app with the mouse focus. Hence, when clicking the second mouse's right button in Notepad, for instance, my specific command ("aaa" for the moment, as I'm just testing with letters for the sake of simplicity) is sent (and displayed in the Notepad window), BUT the Notepad context menu opens as well… (so the app also receives a WM_RBUTTONDOWN message).
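For illustration, the raw-input plumbing I'm describing amounts to roughly the following (simplified; g_secondMouse stands in for the device handle I identified beforehand via GetRawInputDeviceList()):

    // Rough sketch of the raw-input setup described above: register for mouse
    // input and, when handling WM_INPUT, distinguish events by device handle.
    #include <windows.h>

    static HANDLE g_secondMouse = nullptr; // handle of the "repurposed" mouse

    void RegisterForRawMouseInput(HWND hwnd)
    {
        RAWINPUTDEVICE rid = {};
        rid.usUsagePage = 0x01;            // HID_USAGE_PAGE_GENERIC
        rid.usUsage     = 0x02;            // HID_USAGE_GENERIC_MOUSE
        rid.dwFlags     = RIDEV_INPUTSINK; // receive input even when not focused
        rid.hwndTarget  = hwnd;
        RegisterRawInputDevices(&rid, 1, sizeof(rid));
    }

    // Called from the window procedure on WM_INPUT.
    void OnRawInput(LPARAM lParam)
    {
        RAWINPUT raw = {};
        UINT size = sizeof(raw);
        GetRawInputData(reinterpret_cast<HRAWINPUT>(lParam), RID_INPUT,
                        &raw, &size, sizeof(RAWINPUTHEADER));

        if (raw.header.dwType == RIM_TYPEMOUSE && raw.header.hDevice == g_secondMouse)
        {
            // Input came from the second mouse: translate it into the custom
            // action here. Note that the WM_xBUTTONDOWN the system generates
            // for this click is still delivered to the focused window, which
            // is exactly the problem described above.
        }
    }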
My question, then, is: how can I prevent the mouse button messages (WM_RBUTTONDOWN, and so on) from reaching other applications when the second mouse is used? Is it even possible? The problem is that (in my understanding) those messages have higher priority than the WM_INPUT messages, so by the time I read the WM_INPUT message in my application and detect that the button was pressed on the second mouse, it's already too late and the WM_xBUTTONDOWN has already been sent!
I know that by using mouse hooks I could block those messages, but then there is no way to differentiate the origin of the message (and of course, detecting which mouse is used is the main point of my application).
I've also tried using DirectInput8, but it no longer supports using several mice (the documentation specifically says to use raw input for this).
So, I guess by now you've gathered that I'm quite lost and have no idea whether what I want to do is even achievable. Any help would be more than welcome.
Looking forward to reading your replies.
I was about to suggest hooks, but then I read that you have looked into that already. I guess the last resort for your problem would be to write your own driver.
After Windows installs the second mouse in its usual way, you can go to Device Manager and change the driver of the mouse you want to "repurpose" to your own driver.
That said, developing a driver is probably not something one does as a side task in a project.

Display and use the same MFC CList control in multiple dialogs

I am coding a test application for a Windows CE device. This is the first time I am programming for a handheld device. I use MFC VC++ in Visual Studio 2008. I have found that there are many restrictions in the controls and in what I can do with them when running the program on the handheld versus when I run a similar program on a desktop computer.
Now, the device I am currently deploying my test program to does not have a touchscreen and has few extra keys other than the number-pad 0-9 keys. So I have to make do with a simple GUI that uses key-down events to call specific functions like add, edit, delete, etc. It also forces me to use separate dialogs for each of these functions so as to avoid unnecessary mouse cursor usage.
This leads me to my current problem: the 'ADD' dialog of my test app adds some user data to a CListCtrl that is on the 'MAIN' dialog. The 'EDIT/DELETE' dialog is meant to allow the user to select the desired data from its own CListCtrl and press the "ENTER" key, which then deletes the selected data from the 'MAIN' dialog's CListCtrl. Thus, both the MAIN dialog and the 'EDIT/DELETE' dialog have a CListCtrl with the exact same data. So, instead of having to use two separate list controls and loops to copy the data back and forth between them, is there a way in which I could use the exact same CListCtrl (one and only one instance of the CListCtrl exists), but display it on two separate dialogs? This would remove all the copying code, as well as halve the amount of data in memory.
I tried passing a pointer to the MAIN dialog's CListCtrl to the 'EDIT/DELETE' dialog in the hope that I could redraw the control there, but in vain. I can call RedrawWindow and RedrawItems, but they seem to have no effect in the 'EDIT/DELETE' dialog (I think it is because the control itself is not present on that dialog). Any other suggestions?
You could temporarily change the parent of the CListCtrl to the EDIT/DELETE dialog using CWnd::SetParent, and set its position with CWnd::SetWindowPos to wherever you want it. When the dialog gets closed, set the parent back to the MAIN dialog.
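A rough, untested sketch of that idea, assuming the EDIT/DELETE dialog is given a pointer to the MAIN dialog's list control before it is shown (the member names are placeholders and the usual dialog plumbing is omitted):

    #include <afxwin.h>
    #include <afxcmn.h>

    // Untested sketch: the EDIT/DELETE dialog "borrows" the MAIN dialog's
    // CListCtrl while it is open and hands it back when it closes.
    class CEditDeleteDlg : public CDialog
    {
    public:
        CListCtrl* m_pSharedList; // points at the MAIN dialog's list control
        CWnd*      m_pMainDlg;    // the MAIN dialog, to restore the parent
        // ... the usual IDD, constructor, DECLARE_MESSAGE_MAP() with an
        // ON_WM_DESTROY() entry, etc. are assumed.

        virtual BOOL OnInitDialog();
        afx_msg void OnDestroy();
    };

    BOOL CEditDeleteDlg::OnInitDialog()
    {
        CDialog::OnInitDialog();

        // Re-parent the shared list control into this dialog and place it
        // where this dialog wants it (the coordinates are placeholders).
        m_pSharedList->SetParent(this);
        m_pSharedList->SetWindowPos(NULL, 10, 10, 200, 150,
                                    SWP_NOZORDER | SWP_SHOWWINDOW);
        return TRUE;
    }

    void CEditDeleteDlg::OnDestroy()
    {
        // Hand the control back to the MAIN dialog before this one goes away.
        m_pSharedList->SetParent(m_pMainDlg);
        CDialog::OnDestroy();
    }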

Can I use global system hooks to capture which file was clicked on?

I am new to Windows programming; I have mostly done Java (Java SE, Java ME, Android, Java EE), so be detailed and gentle.
I want to capture the name/path of the file that was clicked on in Windows, e.g. when clicking a file on the desktop.
Further research led me to http://www.codeproject.com/Articles/6362/Global-System-Hooks-in-NET, a nice little C#/C++ app that uses global system hooks to capture mouse events such as coordinates, clicks, etc.
So what is the right API or Global System Hook that captures events on file icons?
There is no single API that provides that level of detail.
The WH_MOUSE and WH_MOUSE_LL hooks of SetWindowsHookEx(), or the WM_INPUT message delivered via RegisterRawInputDevices(), can tell you when the mouse is being interacted with, and the GetCursorPos() function can tell you where the mouse cursor is located onscreen at the time of a click, but they cannot tell you what is being clicked on. You have to figure that out manually.
For instance, the Desktop is implemented as a ListView control, so you can use the WindowFromPoint() and GetDesktopWindow() functions to check whether the mouse is located at coordinates corresponding to the desktop window itself rather than an application window, and if so, use the LVM_HITTEST and LVM_GETITEM messages to determine which icon on the desktop is being clicked and extract its display text. Then use the SHGetDesktopFolder() function and the IShellFolder interface, or the SHParseDisplayName() function, to parse that text and see if it returns a PIDL that represents a path/file, and if so, use SHGetPathFromIDList() to get the actual path/file name.
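As a very rough illustration of just the first step (the LVM_HITTEST/LVM_GETITEM part needs cross-process memory via VirtualAllocEx()/ReadProcessMemory() and is omitted here; the window class names involved also vary between Windows versions):

    // Rough sketch: decide whether a screen point is over the desktop's icon
    // view. The desktop list view is hosted under Progman (or a WorkerW
    // window in some configurations), so treat the class-name check purely
    // as an illustration of the WindowFromPoint() approach.
    #include <windows.h>

    bool IsPointOverDesktop(POINT pt)
    {
        HWND hit = WindowFromPoint(pt);
        if (hit == NULL)
            return false;

        // Compare the top-level ancestor's class name against the windows
        // that traditionally host the desktop icons.
        HWND top = GetAncestor(hit, GA_ROOT);
        wchar_t cls[64] = { 0 };
        GetClassNameW(top, cls, 64);
        return lstrcmpiW(cls, L"Progman") == 0 || lstrcmpiW(cls, L"WorkerW") == 0;
    }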
If you want to do the same thing with the Windows Explorer app, it gets a bit more complicated. Use WindowFromPoint(), GetWindowThreadProcessId(), OpenProcess(), and EnumProcessModules() to determine whether the mouse is over the Windows Explorer app. However, its UI changes from one Windows version to the next, but the gist is that you have to manually locate the focused control via AttachThreadInput() and GetActiveWindow(), check whether it is a TreeView/ListView control, and if so, use control-specific messages to get information about the item/icon underneath the mouse cursor coordinates, then use IShellFolder again to figure out what the text of that item/icon actually represents.
Shell programming is a very complex area and not for the faint of heart. So you need to ask yourself: why do you need this information in the first place?

How can the drop target override the cursor shape in a drag and drop originating from the outside?

I have an MFC window which acts as a drop target. Depending on where the user drops certain types of data, I'd like to change the cursor shape to indicate what action will occur, only the actions are not move/copy/link, but more complex actions for which I have custom cursors.
Here's an example, if it helps. Imagine I have a window with 2 squares where the user can drop a file: in the first square, the file is e-mailed, in the second, the file is stored on Dropbox. I have one e-mail cursor and one dropbox cursor, and I'd like the cursor to change accordingly when the user hovers over the squares.
In MFC, you can create a COleDropSource object and override its GiveFeedback() method to do exactly this. However, this only works if you can pass that object to COleDataSource::DoDragDrop(), i.e. if you start the drag operation yourself. If the drag originates inside my application, this method works and I get the desired cursor. If the drag originates from Windows Explorer, I have no chance of providing my own COleDropSource object, and so I can't override the cursor shape.
Setting the cursor directly in OnDragOver() does not work, since Windows uses the result value of that method to change the cursor, so I only see the desired cursor for a fraction of a second before Windows changes it back to one of the standard shapes.
Is there any other way of solving this?
(This question is similar to this one, only I'm using MFC so the proposed solution there does not work.)
I'm afraid the source application is responsible for the user feedback. You can provide hints to the source application via IDropTarget, but it's the source's responsibility to act on that feedback.
It makes sense, really: the source application is the one that actually knows what the data is and what can be done with it (think of dragging a file out of a zip archive, etc.).

How to move the cursor to the last opened window (possibly popup) in c++

I need to move the mouse to the last opened window. This last window will be a popup created by whatever website.
I guess all I need is to get the position of the last opened window and use SetCursorPos, right?
I'm not really familiar with the windows API and any help is welcome - Thanks!
Edit:
To answer the questions: we are writing a program that gathers malware data. Unfortunately, some malware only starts working after the mouse moves over a popup it opened. It's a research application.
I haven't tested this but I believe you could try the following:
Enumerate running processes and order by PID.
The highest-numbered PID should be the "newest" process.
For the newest process, enumerate its windows (use EnumWindows together with GetWindowThreadProcessId).
At this point I guess you'd have to pick which window you think is the "main" window; for example, if the malware opens two windows, I don't know how you're going to choose which one to give focus to.
Get the screen position of the HWND you picked.
Use SetCursorPos to move the mouse to the position of the window.
I haven't listed every API you'll need for these tasks, as they're generally quite easy to find on here :) A rough sketch of the window-enumeration and cursor-moving steps is below.
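For example, those last steps might look roughly like this once you have settled on a target process ID (untested; the "main window" here is simply the first visible, unowned top-level window found):

    // Hedged sketch: find a visible top-level window belonging to a given
    // process and move the mouse cursor just inside its top-left corner.
    #include <windows.h>

    struct FindWindowData
    {
        DWORD pid;
        HWND  found;
    };

    static BOOL CALLBACK FindTopLevelWindowProc(HWND hwnd, LPARAM lParam)
    {
        FindWindowData *data = reinterpret_cast<FindWindowData*>(lParam);

        DWORD windowPid = 0;
        GetWindowThreadProcessId(hwnd, &windowPid);

        if (windowPid == data->pid && IsWindowVisible(hwnd) &&
            GetWindow(hwnd, GW_OWNER) == NULL)
        {
            data->found = hwnd;
            return FALSE;   // stop enumerating
        }
        return TRUE;        // keep looking
    }

    bool MoveCursorToProcessWindow(DWORD pid)
    {
        FindWindowData data = { pid, NULL };
        EnumWindows(FindTopLevelWindowProc, reinterpret_cast<LPARAM>(&data));
        if (data.found == NULL)
            return false;

        RECT rc;
        GetWindowRect(data.found, &rc);
        SetCursorPos(rc.left + 10, rc.top + 10);  // just inside the window
        return true;
    }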
One way to track recently opened windows is to use SetWinEventHook to listen to the EVENT_OBJECT_CREATE and EVENT_OBJECT_SHOW events. In the callback, filter:
keep only events with a non-null HWND where idObject == OBJID_WINDOW, so you get window events rather than events for other objects (such as items within a list box)
to restrict it to top-level windows, also check that GetAncestor(hwnd, GA_PARENT) is GetDesktopWindow()
and check that the window is indeed currently visible (the WS_VISIBLE style is set in GetWindowLong(hwnd, GWL_STYLE)).
You can also filter by GetWindowThreadProcessId(), or via the thread/process you pass into SetWinEventHook, if you only care about HWNDs from a specific app.
The reason for checking both of these events is that some windows are created hidden and then shown, others are created fully visible, while others are created once, then shown/hidden many times over their lifetime.
You can then cache this 'last known created hwnd' in a global and check it as needed, using GetWindowRect() to get its location, and SetCursorPos() to move the mouse to that location.
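A minimal, untested sketch of that hook (note that an out-of-context hook needs a message loop running on the installing thread):

    #include <windows.h>

    // Untested sketch: remember the most recently created/shown top-level
    // visible window, then move the cursor to it on demand.
    static HWND g_lastTopLevelWindow = NULL;

    static void CALLBACK WinEventProc(HWINEVENTHOOK, DWORD event, HWND hwnd,
                                      LONG idObject, LONG idChild, DWORD, DWORD)
    {
        // The CREATE..SHOW range also delivers other events, so filter by event.
        if (event != EVENT_OBJECT_CREATE && event != EVENT_OBJECT_SHOW)
            return;

        // Real window events only, not sub-object events (list items, carets, ...).
        if (hwnd == NULL || idObject != OBJID_WINDOW || idChild != CHILDID_SELF)
            return;

        // Top-level windows only.
        if (GetAncestor(hwnd, GA_PARENT) != GetDesktopWindow())
            return;

        // Only windows that are actually visible right now.
        if ((GetWindowLong(hwnd, GWL_STYLE) & WS_VISIBLE) == 0)
            return;

        g_lastTopLevelWindow = hwnd;
    }

    HWINEVENTHOOK InstallLastWindowHook()
    {
        // Listen to both events: some windows are created hidden and shown
        // later, others are created already visible.
        return SetWinEventHook(EVENT_OBJECT_CREATE, EVENT_OBJECT_SHOW,
                               NULL, WinEventProc,
                               0, 0,                // all processes and threads
                               WINEVENT_OUTOFCONTEXT);
    }

    void MoveCursorToLastWindow()
    {
        if (g_lastTopLevelWindow != NULL && IsWindow(g_lastTopLevelWindow))
        {
            RECT rc;
            GetWindowRect(g_lastTopLevelWindow, &rc);
            SetCursorPos((rc.left + rc.right) / 2, (rc.top + rc.bottom) / 2);
        }
    }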
--
If the most recent popup is an active window which takes focus - as is the case with dialogs, but not usually the case with 'pop-under' windows - you can use GetGUIThreadInfo(NULL, ...) to determine the currently active HWND, which might be the one you are looking for, returned in the GUITHREADINFO.hwndActive member of the struct you pass it.
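For example (a small untested snippet):

    #include <windows.h>

    // Returns the HWND that is active in the foreground GUI thread, or NULL.
    HWND GetSystemActiveWindow()
    {
        GUITHREADINFO gti = { sizeof(GUITHREADINFO) };
        if (GetGUIThreadInfo(0, &gti))   // 0 / NULL = the foreground thread
            return gti.hwndActive;
        return NULL;
    }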