Detect when a Window has stopped moving? - c++

Does anyone know how to detect if a Win32(c++) window has stopped moving?
WM_MOVE detects when the window is moving, but how does one detect when it has stopped moving?

The Windows message you want to handle is WM_EXITSIZEMOVE.
WM_EXITSIZEMOVE message (Windows) # MSDN
Depending on what you wish to accomplish, there's also the possibility that you might be better served by reacting to WM_NCLBUTTONUP, which is sent when the mouse button is released in the non-client areas of a window, such as the title bar of any window with a caption, border chrome, etc.
WM_NCLBUTTONUP message (Windows) # MSDN
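For reference, a minimal sketch of handling this in a window procedure; what you do with the final position is up to you, so that part is left as a comment:

    #include <windows.h>

    // Minimal sketch of a window procedure that reacts when the user
    // finishes dragging or resizing the window.
    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_ENTERSIZEMOVE:
            // The modal move/size loop has started.
            return 0;

        case WM_EXITSIZEMOVE:
        {
            // The modal move/size loop has ended: the window has stopped moving.
            RECT rc;
            GetWindowRect(hwnd, &rc);
            // ... react to the final position in rc here ...
            return 0;
        }
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }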

Related

CEF borderless window handling renderer's WM_NCHITTEST messages

I created a CEF browser within a borderless window (WS_POPUP style).
The CEF renderer overlaps my window's client area.
The problem is that I'd like to allow the user to resize the window but I can't
handle the WM_NCHITTEST message.
I found many topics on the Internet but there was no actual solution.
Of course, I could create a 1px border with WM_NCCALCSIZE but I don't want to, as I would need to change the color of the border depending on the browser's content.
Is there a way to subclass the renderer's window (used internally by CEF)?
Do I really need to handle the WM_NCHITTEST message? Is there another way of doing that?
Any help would be greatly appreciated.

Reliably identifying any window's client area

I am working on a program that will replicate, and then extend the functionality of Aero Snap.
Aero Snap restores a maximized window if the user "grabs" its title bar, and I am having difficulties identifying this action.
Given a cursor position in screen coordinates, how would I check if the position is within the window's title bar? I am not really at home in the Win32 API, and could not find a way that works reliably for complicated scenarios such as:
Note the tabs that Chrome inserts into the title bar. Office does something similar with its Quick Access Toolbar.
Title-bar hits come in via the "non-client" messages, i.e. messages for the area of a window that is not the client (inner) area.
WM_NCLBUTTONDOWN is probably the message you want to trap.
You probably also want to install a mouse hook: if it's the non-client message you're interested in, handle it your way; otherwise pass it on down the message chain.
Edit: if Chrome is using DwmExtendFrameIntoClientArea to draw the tabs, then you will need to use WM_NCHITTEST.
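A simple way to test whether a given screen coordinate is over a window's title bar is to ask the window itself with WM_NCHITTEST, which also respects custom hit-testing such as Chrome's. A minimal sketch, assuming the HWND and the point come from your own code (negative multi-monitor coordinates would need extra care with MAKELPARAM):

    #include <windows.h>

    // Returns true if the screen point lies on the window's caption (title bar).
    bool IsPointOnTitleBar(HWND hwnd, POINT ptScreen)
    {
        LRESULT hit = SendMessage(hwnd, WM_NCHITTEST, 0,
                                  MAKELPARAM(ptScreen.x, ptScreen.y));
        return hit == HTCAPTION;
    }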

Window clicked - what happens then?

I am working on a limited remote control of another PC over a network. First the controlled window is chosen, and the client may then control that window and all of its child windows. I am having a problem with the mouse, though: I can move it using SetCursorPos, but when I try to send the WM_LBUTTONDOWN and WM_LBUTTONUP messages, nothing happens. I believe the window needs to be in the foreground first, but I am uncertain whether SetForegroundWindow does exactly what normally happens after a click, before the WM_ message is posted. Do you know how I can send a mouse click directly to the window? (If it's not a child window of a particular HWND, it's not allowed to be clicked.)
It may be better (and possibly easier) to use SendInput. I believe that is the recommended way to mimic a user using the mouse, instead of trying to mess with window messages directly.
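A minimal sketch of such a click with SendInput; it normalizes to the primary monitor via SM_CXSCREEN/SM_CYSCREEN and skips error handling:

    #include <windows.h>

    // Move the cursor to a screen position and send a left click there.
    void ClickAt(int x, int y)
    {
        // SendInput expects absolute coordinates normalized to 0..65535.
        int screenW = GetSystemMetrics(SM_CXSCREEN);
        int screenH = GetSystemMetrics(SM_CYSCREEN);

        INPUT inputs[3] = {};

        inputs[0].type = INPUT_MOUSE;
        inputs[0].mi.dx = MulDiv(x, 65535, screenW);
        inputs[0].mi.dy = MulDiv(y, 65535, screenH);
        inputs[0].mi.dwFlags = MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE;

        inputs[1].type = INPUT_MOUSE;
        inputs[1].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;

        inputs[2].type = INPUT_MOUSE;
        inputs[2].mi.dwFlags = MOUSEEVENTF_LEFTUP;

        SendInput(3, inputs, sizeof(INPUT));
    }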

Constraining window position to desktop working area

I want to allow a user to drag my Win32 window around only inside the working area of the desktop. In other words, they shouldn't be able to have any part of the window extend outside the monitor(s) nor should the window overlap the taskbar.
I'd like to do it in a way that does not cause any stuttering. Handling WM_MOVE messages and calling MoveWindow() to reposition the window if it goes outside the working area works, but I don't like the flickering caused by MoveWindow().
I also tried handling WM_MOVING, which avoids the need to call MoveWindow() by altering the destination rectangle before the move actually happens. That resolves the flickering problem, but another issue I ran into is that the cursor sometimes gets away from the window during a drag, so the user can end up dragging the window around while the cursor is not even inside it.
How do I constrain my window without running into these issues?
Windows are, ultimately, positioned via the SetWindowPos API.
SetWindowPos starts by validating its parameters: it sends the window being sized or moved a WM_WINDOWPOSCHANGING message, and then a WM_WINDOWPOSCHANGED message notifying the window proc of the changed size and/or position.
DefWindowProc's handling of these messages is, in turn, to send WM_GETMINMAXINFO and then WM_SIZE or WM_MOVE.
Anyway, handle WM_WINDOWPOSCHANGING to filter both user- and code-initiated attempts to position the window out of bounds.
Keep in mind that users with multi-monitor setups may have a desktop that extends into negative x- and y-coordinates, or that is not rectangular. Also, some users use alternative window managers such as LiteStep, which implement virtual desktops by moving them off-screen; if you try to fight this, your application will break for these users.
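With those caveats in mind, a minimal sketch of clamping the proposed position inside your window procedure's switch; using the work area of the nearest monitor is one reasonable policy, not the only one:

    case WM_WINDOWPOSCHANGING:
    {
        WINDOWPOS* wp = reinterpret_cast<WINDOWPOS*>(lParam);
        if (!(wp->flags & SWP_NOMOVE) && !IsIconic(hwnd))
        {
            // If this change does not resize the window, cx/cy are not
            // guaranteed to be meaningful, so fall back to the current size.
            RECT rc;
            GetWindowRect(hwnd, &rc);
            int w = (wp->flags & SWP_NOSIZE) ? rc.right - rc.left : wp->cx;
            int h = (wp->flags & SWP_NOSIZE) ? rc.bottom - rc.top : wp->cy;

            // Work area (desktop minus taskbar) of the nearest monitor.
            HMONITOR mon = MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST);
            MONITORINFO mi = { sizeof(mi) };
            GetMonitorInfo(mon, &mi);
            const RECT& wa = mi.rcWork;

            // Clamp the proposed top-left corner so the window stays inside.
            if (wp->x + w > wa.right)   wp->x = wa.right - w;
            if (wp->y + h > wa.bottom)  wp->y = wa.bottom - h;
            if (wp->x < wa.left)        wp->x = wa.left;
            if (wp->y < wa.top)         wp->y = wa.top;
        }
        break;  // still let DefWindowProc process this message
    }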
You can do this by handling the WM_MOVING message and changing the RECT pointed to by the lParam.
lParam: Pointer to a RECT structure with the current position of the window, in screen coordinates. To change the position of the drag rectangle, an application must change the members of this structure.
You may also want to handle WM_ENTERSIZEMOVE to know when the window is beginning to move, and WM_EXITSIZEMOVE to know when it has stopped.
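A minimal sketch of the WM_MOVING variant, again inside the window procedure's switch and using the same work-area lookup as above:

    case WM_MOVING:
    {
        RECT* drag = reinterpret_cast<RECT*>(lParam);

        HMONITOR mon = MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST);
        MONITORINFO mi = { sizeof(mi) };
        GetMonitorInfo(mon, &mi);
        const RECT& wa = mi.rcWork;

        // Shift the proposed drag rectangle back into the work area,
        // keeping its size unchanged.
        if (drag->right > wa.right)   OffsetRect(drag, wa.right - drag->right, 0);
        if (drag->bottom > wa.bottom) OffsetRect(drag, 0, wa.bottom - drag->bottom);
        if (drag->left < wa.left)     OffsetRect(drag, wa.left - drag->left, 0);
        if (drag->top < wa.top)       OffsetRect(drag, 0, wa.top - drag->top);

        return TRUE;  // tell the system the rectangle was modified
    }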
WM_GETMINMAXINFO is what you seem to be looking for.

How to select and highlight a window in another application?

I would like to send some keystrokes from a C++ program into another window.
For that reason I would like to have the user select the target window, similar to how it is done in the Spy++ utility that comes with Visual Studio (drag a crosshair cursor over the target window and have it highlighted by a frame).
How is this dragging and selecting done in Windows? I am completely lost as to where I might start to look for a mechanism to implement this feature.
Here's how it's usually done:
Capture the mouse using SetCapture. This will cause all mouse messages to be routed toward your app's window.
Handle the WM_MOUSEMOVE message. In your handler code, grab the window underneath the mouse using WindowFromPoint. That will get you the HWND of the window the mouse is currently over.
Now that you've got the window, you need a device context (HDC). You can get one using GetWindowDC for the specified window.
Now you can draw into the DC using typical GDI functions.
There are some things you have to look out for - cleanly erasing the selection rectangle and so forth, but that's one way to do it.
You could also draw into a screen DC to do this, but in any case you'll need the window handle in order to get the window rect.
If you Google around Spy++ source code you'll see a few examples of this technique.
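A minimal sketch of the highlight step, assuming the HWND came from WindowFromPoint during WM_MOUSEMOVE; releasing the capture and erasing the frame on button-up are omitted (calling this a second time on the same window erases the frame, because R2_NOT inverts the pixels back):

    #include <windows.h>

    // Draw a Spy++-style inverted frame around another application's window.
    void HighlightWindow(HWND hwndTarget)
    {
        RECT rc;
        GetWindowRect(hwndTarget, &rc);

        HDC hdc = GetWindowDC(hwndTarget);
        if (!hdc) return;

        SetROP2(hdc, R2_NOT);
        HPEN pen = CreatePen(PS_INSIDEFRAME, 3, RGB(0, 0, 0));
        HGDIOBJ oldPen   = SelectObject(hdc, pen);
        HGDIOBJ oldBrush = SelectObject(hdc, GetStockObject(NULL_BRUSH));

        // The DC from GetWindowDC has its origin at the window's top-left
        // corner, so draw relative to (0, 0).
        Rectangle(hdc, 0, 0, rc.right - rc.left, rc.bottom - rc.top);

        SelectObject(hdc, oldBrush);
        SelectObject(hdc, oldPen);
        DeleteObject(pen);
        ReleaseDC(hwndTarget, hdc);
    }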
The previous answers are wrong.
Spy++'s source code has been available on Google Groups for years (see mainly the Win32 API newsgroup, news://194.177.96.26/comp.os.ms-windows.programmer.win32).