Creating a program that creates a full screen overlay - c++

I want to write a program that would create a transparent overlay filling the entire screen in Windows 7, preferably with C++ and OpenGL. Though, if there is an API written in another language that makes this super easy, I would be more than willing to use that too. In general, I assume I would have to be able to read the pixels that are already on the screen somehow.
Using the same method screen-capture software uses to grab the pixels from the screen and then redrawing them would work initially, but the problem arises when the screen updates: my program would then have to minimize/close and reappear in order to read the underlying pixels again.

Windows Vista introduced a new flag into the PIXELFORMATDESCRIPTOR: PFD_SUPPORT_COMPOSITION. If the OpenGL context is created with an alpha channel, i.e. the cAlphaBits field of the PFD is nonzero, the alpha channel of the OpenGL framebuffer is respected by the Windows compositor.
Then by creating a full-screen, borderless, undecorated window you get exactly the kind of overlay you desire. However, this window will still receive all input events, so you'll have to do some grunt work and pass all input events on to the underlying windows manually.
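Here is a minimal sketch of that setup, under the assumption that `hwnd` is a full-screen WS_POPUP window you have already created and that DWM composition is enabled; the empty blur region is the common trick for getting per-pixel alpha honored without actually blurring anything, and the helper name is mine:

```cpp
// Minimal sketch (not a drop-in solution): request a pixel format with
// PFD_SUPPORT_COMPOSITION and a nonzero alpha channel, then ask DWM to
// compose the window using that alpha.
#include <windows.h>
#include <dwmapi.h>
#pragma comment(lib, "opengl32.lib")
#pragma comment(lib, "dwmapi.lib")

bool SetupCompositedGL(HWND hwnd)
{
    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL |
                     PFD_DOUBLEBUFFER | PFD_SUPPORT_COMPOSITION;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cAlphaBits = 8;          // the nonzero alpha channel the compositor respects
    pfd.cDepthBits = 24;

    HDC hdc = GetDC(hwnd);
    int pf  = ChoosePixelFormat(hdc, &pfd);
    if (!pf || !SetPixelFormat(hdc, pf, &pfd))
        return false;

    HGLRC ctx = wglCreateContext(hdc);
    wglMakeCurrent(hdc, ctx);

    // Empty blur region: per-pixel alpha is honored, but nothing is blurred.
    DWM_BLURBEHIND bb = {};
    bb.dwFlags  = DWM_BB_ENABLE | DWM_BB_BLURREGION;
    bb.fEnable  = TRUE;
    bb.hRgnBlur = CreateRectRgn(0, 0, -1, -1);
    DwmEnableBlurBehindWindow(hwnd, &bb);
    return true;
}
```

After this, anything you clear to alpha 0 stays see-through, anything you draw with alpha 1 is solid, and SwapBuffers presents as usual.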

Related

winapi - How to use LayeredWindows properly

I am having trouble understanding the concept of the UpdateLayeredWindow API, how it works and how to implement it. Say, for example, I want to override CFrameWnd and draw a custom, alpha-blended frame with UpdateLayeredWindow. As I understand it, the only way to draw child controls is to either: blend them to the frame's bitmap buffer (created with CreateCompatibleBitmap) and redraw the whole frame, or create another window that sits on top of the layered frame and draws child controls regularly (which defeats the whole idea of layered windows, because the window region wouldn't update anyway).
If I use the first method, the whole frame is redrawn; surely this is impractical for a large application? Or is the frame constantly updated anyway, so that modifying the bitmap buffer wouldn't cause extra redrawing?
An example of a window similar to what I would like to achieve is the Skype notification box/incoming call box: a translucent frame/window with child controls sitting on top, that you can move around the screen.
In a practical, commercial world, how do I do it? Please don't refer me to the documentation, I know what it says; I need someone to explain practical methods of the infrastructure I should use to implement this.
Thanks.
It is very unclear exactly what aspect of layered windows gives you a problem, so I'll just noodle on about how they are implemented and explain their limitations from that.
Layered windows are implemented by using a hardware feature of the video adapter called "layers". The adapter has the basic ability to combine the pixels from distinct chunks of video memory, mixing them before sending them to the monitor. Obvious examples of that are the mouse cursor, which gets superimposed on the pixels of the desktop frame buffer so it doesn't take a lot of effort to animate it when you move the mouse; the overlay used to display a video, where the video stream decoder writes the video pixels directly to a separate frame buffer; or the shadow cast by the frame of a top-level window on top of the windows behind it.
The video adapter allows a few simple logical operations on the two pixel values when combining their values. The first one is an obvious one, the mixing operation that lets some of the pixel value overlap the background pixel. That effect provides opacity, you can see the background partially behind the window.
The second one is color-keying, the kind of effect you see used when the weather man on TV stands in front of a weather map. He actually stands in front of a green screen, the camera mixing panel filters out the green and replaces it with the pixels from the weather map. That effect provides pure transparency.
You see this back in the arguments passed to UpdateLayeredWindow(), the function you must call in your code to set up the layered window. The dwFlags argument selects the basic operations supported by the video hardware: the ULW_ALPHA flag enables the opacity effect, the ULW_COLORKEY flag enables the transparency effect. The transparency effect requires the color key, which is specified with the crKey argument value. The opacity effect is controlled with the pblend argument. This one is built for future expansion, expansion that hasn't happened yet. The only interesting field in the BLENDFUNCTION struct is SourceConstantAlpha, which controls the amount of opacity.
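For concreteness, here is a minimal sketch of the per-pixel alpha case, assuming the window was created with WS_EX_LAYERED and that `hBitmap` holds premultiplied 32-bit ARGB pixels (the helper name is mine):

```cpp
// Push a premultiplied ARGB bitmap to a WS_EX_LAYERED window.
#include <windows.h>

void PresentLayered(HWND hwnd, HBITMAP hBitmap, SIZE size)
{
    HDC screenDC = GetDC(NULL);
    HDC memDC    = CreateCompatibleDC(screenDC);
    HGDIOBJ old  = SelectObject(memDC, hBitmap);

    BLENDFUNCTION blend = {};
    blend.BlendOp             = AC_SRC_OVER;
    blend.SourceConstantAlpha = 255;          // don't scale the per-pixel alpha
    blend.AlphaFormat         = AC_SRC_ALPHA; // bitmap carries per-pixel alpha

    POINT srcPos = { 0, 0 };
    UpdateLayeredWindow(hwnd, screenDC, NULL, &size,
                        memDC, &srcPos, 0, &blend, ULW_ALPHA);

    SelectObject(memDC, old);
    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);
}
```

With ULW_ALPHA and AC_SRC_ALPHA, the per-pixel alpha in the bitmap drives the blending, while SourceConstantAlpha scales the whole window's opacity on top of that.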
So one basic effect available for a layered window is opacity, overlapping the background windows and leaving them partially visible. One restriction is that the entire window is partially opaque, including the border and the title bar. That doesn't look good, so you typically want to create a borderless window and take on the burden of creating your own window frame. That requires a bunch of code, by the way.
And the other basic effect is transparency, completely hiding parts of a window. You often want to combine the two effects, and that requires two layered windows: one that provides the partial opacity, and another on top, owned by the bottom one, that displays the parts of the window that are opaque, like the controls, using the color key to make its background transparent and let the bottom window show through.
Beyond this, another important feature for custom windows is enabled by SetWindowRgn(). It lets you give the window a shape other than a rectangle. Again it is important to omit the border and title bar, they don't work on a shaped window. The programming effort is to combine these features in a tasteful way that isn't too grossly different from the look-and-feel of windows created by other applications and write the code that paints the replacement window parts and still makes the window functional like a regular window. Things like resizing and moving the window for example, you typically do so by custom handling the WM_NCHITTEST message.
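The WM_NCHITTEST part in particular tends to trip people up, so here is a hedged sketch of a hit-test helper for a borderless window; the helper name, border width and caption height are assumptions you would tune yourself:

```cpp
// Hypothetical hit-test helper, called from a window procedure's
// WM_NCHITTEST case as: return HitTest(hwnd, lParam);
#include <windows.h>
#include <windowsx.h>

LRESULT HitTest(HWND hwnd, LPARAM lParam)
{
    // WM_NCHITTEST delivers screen coordinates; convert to client space.
    POINT pt = { GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam) };
    ScreenToClient(hwnd, &pt);

    RECT rc;
    GetClientRect(hwnd, &rc);
    const int border  = 8;   // assumed resize-border width in pixels
    const int caption = 30;  // assumed height of the custom title bar

    bool left   = pt.x < border;
    bool right  = pt.x >= rc.right  - border;
    bool top    = pt.y < border;
    bool bottom = pt.y >= rc.bottom - border;

    if (top && left)     return HTTOPLEFT;
    if (top && right)    return HTTOPRIGHT;
    if (bottom && left)  return HTBOTTOMLEFT;
    if (bottom && right) return HTBOTTOMRIGHT;
    if (left)            return HTLEFT;
    if (right)           return HTRIGHT;
    if (top)             return HTTOP;
    if (bottom)          return HTBOTTOM;
    if (pt.y < caption)  return HTCAPTION;  // drag anywhere in the "title" strip
    return HTCLIENT;
}
```

Reporting HTCAPTION and the HT* border codes yourself is what makes dragging and edge-resizing work on a window that has no real frame.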

How to efficiently render double buffered window without any tearing effect?

I want to create my own tiny windowless GUI system, and for that I am using GDI+. I cannot post code here because it got huge (C++), but below are the main steps I am following...
Create a bitmap of size equal to the application window.
For all mouse and keyboard events, update the custom control states (e.g. whether the mouse is currently held over a particular control, etc.).
For the WM_PAINT event, paint the background to the offscreen bitmap, then paint all the updated controls on top of it, and finally copy the entire offscreen image to the front buffer via a Graphics::DrawImage() call.
For WM_SIZE/WM_SIZING, delete the previous offscreen bitmap and create another one with the new window size.
There are also some checks to prevent repeated drawing of controls, i.e. a control is drawn only when it needs repainting; in other words, it is painted only when its state has changed.
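A compressed sketch of that WM_PAINT flow, assuming GDI+ has already been started with GdiplusStartup and that `backBuffer` is the offscreen Bitmap recreated on resize (the function name is mine):

```cpp
#include <windows.h>
#include <gdiplus.h>
#pragma comment(lib, "gdiplus.lib")
using namespace Gdiplus;

void PaintWindow(HWND hwnd, Bitmap* backBuffer)
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);

    // 1. Draw into the offscreen bitmap.
    Graphics offscreen(backBuffer);
    offscreen.Clear(Color(255, 240, 240, 240));   // background
    // ... draw only the controls whose state changed ...

    // 2. Copy the finished frame to the window in one call.
    Graphics screen(hdc);
    screen.DrawImage(backBuffer, 0, 0);

    EndPaint(hwnd, &ps);
}
```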
The system is working fine, with one exception: when the window is being resized, a sort of tearing effect appears. Let me try to explain what I mean by a tearing effect...
On the sizing edge/border there is a flickering gap as I drag the border. It is as if my DrawImage() call returns immediately, and while one swap operation is half done another image drawing starts up.
You may think this is a common artifact in many other applications, because resizing the backbuffer is not always as fast as resizing the window, but I noticed that in other applications, although there is a lag between the window size and the client-area size as the window grows, nothing flickers near the edge (it's usually just white background that shows up as thin uniform strips along the border).
Also, the dynamic controls which move with the window resize act jerky during sizing.
At first it seemed to me that using a constant, full-screen-sized offscreen surface could minimize the artifact, but when I tried it the results were not that satisfactory. I also tried calling Sleep() during sizing so that one flip completes before another starts, but strangely even that didn't work for me!
I have heard that GDI on Vista is not hardware accelerated; could that be the problem?
I also wonder how frameworks such as Qt render a windowless GUI so smoothly; even if you resize a complex Qt GUI window very fast, negligibly few artifacts appear. As far as I know Qt can use OpenGL for GUI rendering, but that is a secondary option.
If I use DirectX then real-time resizing is even harder; OpenGL, on the other hand, seems to handle resizing without any problem, but I would lose all the 2D drawing capability of GDI+.
If any of you have done anything like this before, please guide me. Also, if you have any pointers I should consider for custom user interface design, please provide the links.
Thanks!
I always wished to design interfaces like Windows Media Player 11, but can someone tell me whether there is a straightforward solution for a C++ programmer (I want to know how, rather than use some existing framework, etc.)? Subclassing, owner drawing, custom drawing: nothing seems to give that level of control, and I don't know of a way to draw a semi-transparent control with the common controls, so I think this question deserves some special attention. Thanks again.
Could it be a WM_ERASEBKGND message that's causing it?
see this question: GDI+ double buffering in C++
Also, if you need fast response from your GUI I would advise against GDI+.
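If WM_ERASEBKGND is indeed the culprit, the usual fix is simply to claim the erase yourself so the offscreen bitmap isn't preceded by a white flash. A minimal window-procedure sketch, assuming the offscreen bitmap already covers the whole client area:

```cpp
#include <windows.h>

LRESULT CALLBACK OverlayWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_ERASEBKGND:
        return 1;   // claim the background is handled; WM_PAINT draws everything

    case WM_PAINT:
    {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);
        // ... blit the finished offscreen bitmap here (Graphics::DrawImage) ...
        EndPaint(hwnd, &ps);
        return 0;
    }

    default:
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }
}
```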

Draw OpenGL on the windows desktop without a window

I've seen things like this and I was wondering if it was possible: say I run my application, and it shows the render on whatever is below it.
So basically, rendering on the screen without a window.
Possible or a lie?
Note: I want to do this on Windows and in C++.
It is possible to use your application to draw on other applications' windows. Once you have found the window you want and have its HWND, you can then use it just as if it were your own window for the purposes of drawing. But since that window doesn't know you have done this, it will probably mess up whatever you have drawn on it when it tries to redraw itself.
There are some very complicated ways of getting around this, some of them involve using windows "hooks" to intercept drawing messages to that window so you know when it has redrawn so that you can do your redrawing as well.
Another option is to use clipping regions on a window. This can allow you to give your window an unusual shape, and have everything behind it still look correct.
There are also ways to take over drawing of the desktop background window, and you can actually run an application that draws animations and stuff on the desktop background (while the desktop is still usable). At least, this was possible up through XP, not sure if it has changed in Vista/Win7.
Unfortunately, all of these options are too complex to go into in depth without more information on what you are trying to do.
You can use GetDesktopWindow() to get the HWND of the desktop. But as a previous answer (SoapBox) says, be careful: you may mess up the desktop, because the OS expects to own it.
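A quick sketch of that hack, just to show the mechanics; whatever you paint this way is wiped out the next time the desktop repaints, so treat it as a demo rather than a real overlay:

```cpp
#include <windows.h>

void ScribbleOnDesktop()
{
    HWND desktop = GetDesktopWindow();
    HDC  hdc     = GetDC(desktop);

    HBRUSH brush = CreateSolidBrush(RGB(255, 0, 0));
    RECT   box   = { 100, 100, 300, 200 };
    FillRect(hdc, &box, brush);   // overwritten on the next desktop repaint

    DeleteObject(brush);
    ReleaseDC(desktop, hdc);
}
```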
I wrote an open source project a few years ago to achieve this on the desktop background. It's called Uberdash. If you follow the window hierarchy, the desktop is just a window in a sort of "background" container. Then there is a main container and a front container. The front container is how windows become full screen or "always on top." You may be able to use Aero composition to render a window with alpha in the front container, but you will need to pass events on to the lower windows. It won't be pretty.
Also, there's a technology in some video cards called overlays/underlays. You used to be able to render directly to an overlay. Your GPU would apply it directly, with no interference to main memory. So even if you took a screen capture, your overlay/underlay would not show up in the screen cap. Unfortunately MS banned that technology in Vista...

Shatter Glass desktop Win32 effect for windows?

I would like a Win32 program that takes the desktop, makes it look like it is shattering like glass, and in the end puts the pieces back together. Is there any reference on doing this kind of effect with C++?
I wrote a program (unfortunately now lost) to do something like this a few years ago.
The desktop image can be retrieved by creating a DC for the screen, creating a compatible bitmap, then using BitBlt to copy the screen contents into the bitmap. Then use GetDIBits to get the pixels from this bitmap in a known format.
This link doesn't do exactly that, but it demonstrates the principle, albeit using MFC. I couldn't find a Win32-specific example:
http://www.flounder.com/screencapture.htm
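For reference, here is a rough Win32-only sketch of the capture step described above; the helper name and the 32-bit top-down pixel format are my choices, not requirements:

```cpp
#include <windows.h>
#include <vector>

// Copies the primary screen into a vector of 32-bit BGRA pixels, top-down.
std::vector<BYTE> CaptureScreen(int& width, int& height)
{
    width  = GetSystemMetrics(SM_CXSCREEN);
    height = GetSystemMetrics(SM_CYSCREEN);

    HDC screenDC = GetDC(NULL);
    HDC memDC    = CreateCompatibleDC(screenDC);
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, width, height);
    HGDIOBJ old  = SelectObject(memDC, bmp);

    BitBlt(memDC, 0, 0, width, height, screenDC, 0, 0, SRCCOPY);
    SelectObject(memDC, old);     // deselect: GetDIBits wants the bitmap unselected

    BITMAPINFO bi = {};
    bi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bi.bmiHeader.biWidth       = width;
    bi.bmiHeader.biHeight      = -height;   // negative height = top-down rows
    bi.bmiHeader.biPlanes      = 1;
    bi.bmiHeader.biBitCount    = 32;
    bi.bmiHeader.biCompression = BI_RGB;

    std::vector<BYTE> pixels(static_cast<size_t>(width) * height * 4);
    GetDIBits(memDC, bmp, 0, height, pixels.data(), &bi, DIB_RGB_COLORS);

    DeleteObject(bmp);
    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);
    return pixels;
}
```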
For the shattering effect, it's best to use Direct3D or OpenGL. (Further details are up to you.) Create a texture using the bitmap data saved earlier.
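If you go the OpenGL route, uploading the captured pixels might look something like the sketch below; GL_BGRA_EXT matches the BGRA byte order that GetDIBits produces, and the fallback #define is only there because the stock Windows gl.h is old:

```cpp
#include <windows.h>
#include <GL/gl.h>

#ifndef GL_BGRA_EXT
#define GL_BGRA_EXT 0x80E1
#endif

// Assumes a current OpenGL context; width/height/pixels come from the capture above.
GLuint MakeDesktopTexture(const unsigned char* pixels, int width, int height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels);
    return tex;
}
```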
As for the window to associate with OpenGL or D3D, create a borderless window that fills the entire screen and doesn't do any painting or background erasing. This will prevent any flicker when switching from the desktop image to the copy of the desktop image being used for drawing.
(If using D3D, you'll also find GetMonitorInfo useful in conjunction with IDirect3D9::GetAdapterMonitor and friends, as you'll need to create a separate device for each monitor and you'll therefore need to know which portion of the desktop corresponds to that device.)

Acquire direct write access to window's backbuffer, yet still allow read access to what is on the screen already

I was wondering: is it possible to acquire write access to the graphics card's primary buffer through the Windows API, yet still allow read access to what should be there? To clarify, here is what I want:
1. Create a DirectX device on a window and hide it. Use the stencil buffer to apply an alpha channel to pixels not written to by my code.
2. Acquire the entirety of the current display adapter's buffer, i.e. have a pointer to a buffer, in the current bit depth and resolution, that contains the current screen without whatever I drew to the screen. I was thinking of, instead of hiding my window, simply using a LAYERED window and somehow acquiring the buffer before my window's pixels get blitted to it.
3. Copy the buffer acquired in step 2 into a new memory location.
4. Blit my DirectX device's primary buffer to the buffer built in step 3.
5. Blit the buffer from step 4 to the screen.
6. GOTO 2.
So the end result is drawing hardware-accelerated 3D directly onto the Windows desktop while other applications still render normally.
There are better ways to create a window without borders. You might try experimenting with the dwStyle parameter of CreateWindow, for example. It looks as if passing in WS_OVERLAPPED | WS_POPUP results in a borderless window, which is what you appear to want (see this forum post).
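A hedged sketch of that suggestion; window-class registration and the message loop are assumed to exist already, and the WS_EX_TOPMOST style is optional:

```cpp
#include <windows.h>

// Creates a full-screen window with no border or title bar.
HWND CreateBorderlessWindow(HINSTANCE hInstance, const wchar_t* className)
{
    int w = GetSystemMetrics(SM_CXSCREEN);
    int h = GetSystemMetrics(SM_CYSCREEN);

    return CreateWindowExW(
        WS_EX_TOPMOST,                  // keep it above normal windows (optional)
        className, L"Overlay",
        WS_POPUP | WS_VISIBLE,          // popup style: no frame, no caption
        0, 0, w, h,                     // cover the whole primary monitor
        NULL, NULL, hInstance, NULL);
}
```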
I also think the term "borderless window" is not correct, because I'm hardly getting any results in Google for searches including those words.
Is there any reason why you wouldn't just do this normally with GDI and use windowed mode for DirectX? Why bother with full-screen mode when you need to render with a window?