I have a LayeredWindow GUI that contains several child windows, all of which have the WS_EX_TRANSPARENT style.
The style is used so that their backgrounds can be removed.
When I move the mouse over the GUI, only the LayeredWindow receives the WM_MOUSEMOVE message.
I tried calling ChildWindowFromPointEx with the XY position taken from the WM_MOUSEMOVE lParam
to detect the control being hovered, but the API didn't recognize any of the controls belonging to the child GUIs.
The docs say:
The search is restricted to immediate child windows. Grandchildren and deeper descendants are not searched.
The other option I tried was EnumChildWindows, comparing each control's rect to the XY position of the message; this method uses around 1% of CPU just from moving the mouse.
I wonder if there's any 'better' option?
According to the documentation on Layered Windows:
Hit testing of a layered window is based on the shape and transparency of the window. This means that the areas of the window that are color-keyed or whose alpha value is zero will let the mouse messages through. However, if the layered window has the WS_EX_TRANSPARENT extended window style, the shape of the layered window will be ignored and the mouse events will be passed to other windows underneath the layered window.
You could try using the GetCursorPos function to get the position of the mouse cursor in screen coordinates.
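Since ChildWindowFromPointEx only searches immediate children, one option (a rough sketch, not tested against your layout) is to descend one level at a time from the top-level window; hwndLayered is assumed to be your layered window's handle. Note that CWP_SKIPTRANSPARENT is deliberately not passed, because your children carry WS_EX_TRANSPARENT and would be skipped:

    // Resolve the deepest descendant under a screen point by descending level
    // by level, because ChildWindowFromPointEx stops at immediate children.
    HWND DeepChildFromScreenPoint(HWND hwndLayered, POINT ptScreen)
    {
        HWND hParent = hwndLayered;
        for (;;)
        {
            POINT ptClient = ptScreen;
            ScreenToClient(hParent, &ptClient);   // convert to this parent's client coordinates
            HWND hChild = ChildWindowFromPointEx(hParent, ptClient, CWP_SKIPINVISIBLE);
            if (hChild == NULL || hChild == hParent)
                return hParent;                   // no deeper child under the point
            hParent = hChild;                     // descend one level and repeat
        }
    }

    // Usage inside the layered window's WM_MOUSEMOVE handler:
    // POINT pt; GetCursorPos(&pt);
    // HWND hHovered = DeepChildFromScreenPoint(hwndLayered, pt);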
Related
I have a window which is set to AlwaysOnTop using the WS_EX_TOPMOST flag. Now, it is possible that some other application also has a window with WS_EX_TOPMOST set, overriding the topmost status of my window.
How should I check whether my window is indeed the topmost window and nothing is being painted over it (the "nothing is being painted over my window" part is the important one)? If something is painting over my window, I want to hide my window and show it again when I can make it the topmost window (but that's probably the second step).
Call GetWindow passing your topmost window's handle and the GW_HWNDFIRST flag. The window returned will be the topmost window that is highest in the Z-order. You can then use the GW_HWNDNEXT flag to walk through the topmost windows in order of decreasing Z-order until you find yours. If any of the windows overlap your window, then your window is underneath.
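A minimal sketch of that walk, assuming hMyTopmost is your topmost window; the overlap test is a simple rectangle intersection:

    // Walk the topmost windows from the top of the Z-order down to ours,
    // checking whether any visible window above it overlaps its rectangle.
    bool IsObscured(HWND hMyTopmost)
    {
        RECT myRect;
        GetWindowRect(hMyTopmost, &myRect);

        for (HWND hwnd = GetWindow(hMyTopmost, GW_HWNDFIRST);
             hwnd != NULL && hwnd != hMyTopmost;
             hwnd = GetWindow(hwnd, GW_HWNDNEXT))
        {
            if (!IsWindowVisible(hwnd))
                continue;                          // hidden windows can't paint over us

            RECT rect, intersection;
            GetWindowRect(hwnd, &rect);
            if (IntersectRect(&intersection, &rect, &myRect))
                return true;                       // a higher window overlaps ours
        }
        return false;                              // nothing above us overlaps
    }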
The old standard way was to call WindowFromPoint for a point on your supposedly visible window and compare the returned handle against your own window handle. There is a better way using the clipping system. I discuss this here.
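For reference, the WindowFromPoint check is only a few lines (IsPointMine is a hypothetical helper, and the hit may land on a child control, hence the GetAncestor call):

    // Pick a point that should be on your window and see whether the
    // system agrees the window there is yours.
    bool IsPointMine(HWND hMyWindow, int clientX, int clientY)
    {
        POINT pt = { clientX, clientY };
        ClientToScreen(hMyWindow, &pt);            // WindowFromPoint wants screen coordinates
        HWND hHit = WindowFromPoint(pt);
        return GetAncestor(hHit, GA_ROOT) == hMyWindow;
    }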
I have a program which was not written by me. I don't have its source, and the developer of that program is developing it independently. He gives me the HWND and HINSTANCE handles of that program.
I have created a child window ON his window using the Win32 API.
The first thing I need is to make this child window transparent in some areas and opaque in others (like a heads-up display (HUD) for a game), so that the user can see things in both windows.
The second thing I need is to direct all the input to the parent window. My child window needs no input.
I know that WS_EX_TRANSPARENT only makes the child draw last, as in the painter's algorithm.
I can't use WS_EX_LAYERED because it's a child window.
P.S.
I have looked everywhere but didn't find a solution, though there were similar questions around the internet.
Actually this is a HUD-like thing for that game. I can't draw directly on the parent window because of the complexity with multiple threads and many other reasons.
-- EDIT ---------------------------
I am still working on it. I am trying different ways with what you all suggested. Is there a way to combine DirectX with the SetWindowRgn() function, or DirectX with the BitBlt() function? I think that will do the trick. Currently I am testing all the stuff both as a child window and as a layered window.
You can use WS_EX_LAYERED for child windows from Windows 8 and up.
To support earlier versions of Windows, just create a top-level layered window as a popup (with no chrome) and ensure it's positioned over the game window at the appropriate location. Most users don't move the windows they are working with all the time, so, while you will need to monitor for the parent window moving and reposition the HUD, this should not be a show stopper.
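A minimal sketch of that popup approach, assuming hGame is the game's top-level window, hInstance is your module handle, and a window class "HudClass" has already been registered (all three names are placeholders):

    // Create a chrome-less layered popup sized and positioned over the game window.
    RECT rc;
    GetWindowRect(hGame, &rc);

    HWND hHud = CreateWindowExW(
        WS_EX_LAYERED | WS_EX_TOPMOST | WS_EX_TOOLWINDOW,
        L"HudClass", L"", WS_POPUP,
        rc.left, rc.top, rc.right - rc.left, rc.bottom - rc.top,
        NULL, NULL, hInstance, NULL);

    // Simple color-key transparency; per-pixel alpha would go through
    // UpdateLayeredWindow with a premultiplied 32-bit bitmap instead.
    SetLayeredWindowAttributes(hHud, RGB(255, 0, 255), 0, LWA_COLORKEY);
    ShowWindow(hHud, SW_SHOWNOACTIVATE);

    // When the game window moves or resizes, glue the HUD back over it
    // without activating it:
    GetWindowRect(hGame, &rc);
    SetWindowPos(hHud, HWND_TOPMOST, rc.left, rc.top,
                 rc.right - rc.left, rc.bottom - rc.top, SWP_NOACTIVATE);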
Not taking focus (in the case of being a child window) or activation (in the case of being a popup) is more interesting, but still quite doable: the operating system does not actually automatically assign either focus or activation to a clicked window; the window's WindowProc takes focus, or activation, by calling SetFocus, or some variant of SetActiveWindow or SetForegroundWindow, on itself. The important thing here is: if you consume all mouse and non-client mouse messages without passing them on to DefWindowProc, your HUD will never steal activation or keyboard focus from the game window as a result of a click.
As a popup window, or a window on another thread, you might have to manually handle any mouse messages that your window proc does get, and post them to the game window. Otherwise, responding to WM_NCHITTEST with HTTRANSPARENT (a similar effect to that which WS_EX_TRANSPARENT achieves) can get the system to keep on passing the mouse message down the stack until it finds a target.
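A minimal sketch of that hit-test handling, with a hypothetical HudWndProc; as noted above, HTTRANSPARENT only helps between windows on the same thread, so a popup on another thread may still need to forward mouse messages manually:

    LRESULT CALLBACK HudWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_NCHITTEST:
            return HTTRANSPARENT;     // keep looking for a window underneath
        case WM_MOUSEACTIVATE:
            return MA_NOACTIVATE;     // never take activation on a click
        default:
            return DefWindowProc(hwnd, msg, wParam, lParam);
        }
    }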
OK friends, finally I did some crazy things to make it happen, but it's not very efficient compared to using DirectX directly for drawing.
What I did:
Used (WS_EX_TRANSPARENT | WS_EX_LAYERED | WS_EX_TOOLWINDOW) and () on CreateWindowEx
After creating the window, removed (WS_EX_DLGMODALFRAME | WS_EX_CLIENTEDGE | WS_EX_STATICEDGE) from window styles, and also removed (WS_EX_DLGMODALFRAME | WS_EX_CLIENTEDGE | WS_EX_STATICEDGE | WS_EX_APPWINDOW) from extended window styles.
This gives me a window with no borders, and it's also not shown in the taskbar. The hit test is also passed to whatever is behind my window.
Subclassed the window procedure of the other window and handled:
WM_CLOSE, WM_DESTROY - to send WM_CLOSE or WM_DESTROY respectively to my window
WM_SIZE, WM_MOVE - to resize and move my window according to the other window
WM_LBUTTONUP, WM_RBUTTONUP, WM_MBUTTONUP - to bring my window to the top while still keeping focus on the other window, so that my window doesn't get hidden behind the other window
Made the DirectX device have two passes:
In the first pass it draws all the elements in black on top of a white background and copies the backbuffer data to another surface (so it gives a binary black & white image).
In the second pass it draws the things normally.
Another thread is created to keep updating the window's transparency by reading that black & white surface and using the SetWindowRgn() function.
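A rough sketch of how such a region could be built from the black & white pass; the mask buffer and its format are assumptions, not the actual code used here:

    // Build an HRGN from a 32-bit mask (width x height, row-major): black pixels
    // become the visible region, white pixels are cut away. Pass the result to
    // SetWindowRgn(hWnd, hRegion, TRUE); the system then owns the region.
    HRGN BuildRegionFromMask(const DWORD* mask, int width, int height)
    {
        HRGN hRegion = CreateRectRgn(0, 0, 0, 0);          // start with an empty region
        for (int y = 0; y < height; ++y)
        {
            int runStart = -1;
            for (int x = 0; x <= width; ++x)
            {
                bool opaque = (x < width) && ((mask[y * width + x] & 0x00FFFFFF) == 0);
                if (opaque && runStart < 0)
                {
                    runStart = x;                          // a run of black pixels begins
                }
                else if (!opaque && runStart >= 0)
                {
                    HRGN hRun = CreateRectRgn(runStart, y, x, y + 1);
                    CombineRgn(hRegion, hRegion, hRun, RGN_OR);
                    DeleteObject(hRun);
                    runStart = -1;                         // the run ended at x
                }
            }
        }
        return hRegion;
    }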
This is working perfectly; the only thing is that it's not very good at making things transparent.
The other issue is applying alpha blending to the drawn objects.
But you can easily set the overall alpha (transparency) using the SetLayeredWindowAttributes() function.
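For reference, that whole-window alpha is a one-liner once the window has WS_EX_LAYERED (hHud is assumed to be the HUD window):

    // 255 = fully opaque, 0 = invisible; this sets roughly 75% opacity.
    LONG_PTR exStyle = GetWindowLongPtr(hHud, GWL_EXSTYLE);
    SetWindowLongPtr(hHud, GWL_EXSTYLE, exStyle | WS_EX_LAYERED);   // must be layered first
    SetLayeredWindowAttributes(hHud, 0, 192, LWA_ALPHA);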
Thanks for all the help you guys gave; everything you told me was used and it guided me, as you can see. :)
The sad thing is we decided not to use this method because of efficiency problems :(
But I learned a lot of things, and it was an awesome experience. And that's all that matters to me :)
Thank You :)
You can make a hole in the parent window using SetWindowRgn.
Also, just because it is not your window doesn't mean you can't make it a layered window.
http://msdn.microsoft.com/en-us/library/ms997507.aspx
Finally, you can take control of another window by using subclassing - essentially you substitute your WndProc in place of theirs, handle the messages you wish to handle, then pass the remainder to their original WndProc.
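A minimal sketch of classic subclassing, assuming hTarget is the window to take over; note that SetWindowLongPtr with GWLP_WNDPROC only works on windows belonging to your own process, so for another program's window this code would have to run from inside that process (e.g. an injected DLL):

    static WNDPROC g_prevProc = NULL;

    LRESULT CALLBACK HookProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_MOVE:
        case WM_SIZE:
            // react to the host window moving or resizing here
            break;
        }
        // everything else goes to the original window procedure
        return CallWindowProc(g_prevProc, hwnd, msg, wParam, lParam);
    }

    void Subclass(HWND hTarget)
    {
        g_prevProc = (WNDPROC)SetWindowLongPtr(hTarget, GWLP_WNDPROC, (LONG_PTR)HookProc);
    }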
I have a transparent window (created with WS_EX_LAYERED) and I'd like to receive mouse events of the zero-alpha regions.
As far as I know, I could:
1) Use a mouse hook
2) Paint the background with an almost completely transparent color (one that has an alpha value of 1)
However, the first solution is time-consuming and the second one will slow down my rendering, as my window is stretched over almost the whole desktop and most of its pixels are completely transparent at the moment.
Is there another way receiving those mouse events?
According to MSDN:
Hit testing of a layered window is based on the shape and transparency of the window. This means that the areas of the window that are color-keyed or whose alpha value is zero will let the mouse messages through. However, if the layered window has the WS_EX_TRANSPARENT extended window style, the shape of the layered window will be ignored and the mouse events will be passed to other windows underneath the layered window.
However, in a new thread you could continuously get the coordinates of the mouse with GetCursorPos, and if the position is inside one of your icons (regardless of whether it's over a zero-alpha pixel inside the icon) you handle it. Not much better than the hook, though.
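A rough sketch of that polling idea; the icon rectangles and the way you react to a hit are placeholders:

    // Hypothetical icon rectangles in screen coordinates; fill with your own data.
    static RECT g_iconRects[4];

    static bool IsOverAnyIcon(POINT pt)
    {
        for (const RECT& rc : g_iconRects)
            if (PtInRect(&rc, pt))
                return true;
        return false;
    }

    DWORD WINAPI MouseWatcher(LPVOID)
    {
        for (;;)
        {
            POINT pt;
            GetCursorPos(&pt);               // screen coordinates
            if (IsOverAnyIcon(pt))
            {
                // handle the hover/click yourself, e.g. post a custom
                // message back to the UI thread
            }
            Sleep(15);                       // avoid burning a core while polling
        }
        return 0;
    }

    // Started with: CreateThread(NULL, 0, MouseWatcher, NULL, 0, NULL);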
I want to allow a user to drag my Win32 window around only inside the working area of the desktop. In other words, they shouldn't be able to have any part of the window extend outside the monitor(s) nor should the window overlap the taskbar.
I'd like to do it in a way that does not cause any stuttering. Handling WM_MOVE messages and calling MoveWindow() to reposition the window if it goes off-screen works, but I don't like the flickering effect that's caused by MoveWindow().
I also tried handling WM_MOVING, which avoids the need to call MoveWindow() by altering the destination rectangle before the move actually happens. This resolves the flickering problem, but another issue I run into is that the cursor sometimes gets away from the window during a drag, allowing the user to drag the window around while the cursor is not even inside the window.
How do I constrain my window without running into these issues?
Windows are, ultimately, positioned via the SetWindowPos API.
SetWindowPos starts by validating its parameters by sending the window being sized or moved a WM_WINDOWPOSCHANGING message, and then a WM_WINDOWPOSCHANGED message notifying the window proc of the changed size and/or position.
DefWindowProc's handling of these messages is, in turn, to send WM_GETMINMAXINFO and then WM_SIZE or WM_MOVE messages.
Anyway, handle WM_WINDOWPOSCHANGING to filter both user- and code-based attempts to position a window out of bounds.
Keep in mind that users with multi-monitor setups may have a desktop that extends into negative x- and y-coordinates, or that is not rectangular. Also, some users use alternative window managers such as LiteStep, which implement virtual desktops by moving them off-screen; if you try to fight this, your application will break for these users.
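A minimal sketch of such a WM_WINDOWPOSCHANGING handler, clamping the proposed position to the work area (which excludes the taskbar) of the window's nearest monitor; the multi-monitor and negative-coordinate caveats above still apply:

    // Called from the window procedure:
    // case WM_WINDOWPOSCHANGING:
    //     return OnWindowPosChanging(hwnd, (WINDOWPOS*)lParam);
    LRESULT OnWindowPosChanging(HWND hwnd, WINDOWPOS* wp)
    {
        if (wp->flags & SWP_NOMOVE)
            return 0;

        HMONITOR hMon = MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST);
        MONITORINFO mi = { sizeof(mi) };
        GetMonitorInfo(hMon, &mi);                 // rcWork excludes the taskbar

        int w = wp->cx, h = wp->cy;
        if (wp->flags & SWP_NOSIZE)                // size unchanged: use the current size
        {
            RECT rc;
            GetWindowRect(hwnd, &rc);
            w = rc.right - rc.left;
            h = rc.bottom - rc.top;
        }

        if (wp->x + w > mi.rcWork.right)  wp->x = mi.rcWork.right - w;
        if (wp->y + h > mi.rcWork.bottom) wp->y = mi.rcWork.bottom - h;
        if (wp->x < mi.rcWork.left)       wp->x = mi.rcWork.left;
        if (wp->y < mi.rcWork.top)        wp->y = mi.rcWork.top;
        return 0;
    }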
You can do this by handling the WM_MOVING message and changing the RECT pointed to by the lParam.
lParam: Pointer to a RECT structure with the current position of the window, in screen coordinates. To change the position of the drag rectangle, an application must change the members of this structure.
You may also want to handle WM_ENTERSIZEMOVE to know when the window is beginning to move, and WM_EXITSIZEMOVE to know when the move has finished.
WM_GETMINMAXINFO is what you seem to be looking for.
I am seeking a solution for Windows first.
I need to add visual effects to the screen image without breaking the interactivity of any of the controls that are on this screen.
The solution can be straightforward:
1. take a screenshot
2. show this screenshot in a separate window, above other windows
3. apply the effect to the image shown in this window
But this window (containing the screenshot) makes the buttons and other controls below it unreachable for mouse interaction (clicking, hovering).
Is there any way to do this?
From MSDN's documentation on layered windows...
"Hit testing of a layered window is based on the shape and transparency of the window. This means that the areas of the window that are color-keyed or whose alpha value is zero will let the mouse messages through. However, if the layered window has the WS_EX_TRANSPARENT extended window style, the shape of the layered window will be ignored and the mouse events will be passed to other windows underneath the layered window."
Here's an article about it. Notice this additional warning:
"setting the WS_EX_TRANSPARENT attribute affects the entire window: the user can't close the window using the 'x' button, select it with the mouse, or select any controls on the window. The application can still close the window programmatically."
Depending on what you're actually drawing and where, it might be more appropriate to use Owner-Drawn Controls rather than hover an "onion skin" over your whole window.