How to screencapture window that uses OpenGL? - c++

I'm trying to capture the pixels of an OpenGL application (specifically, MEmu) and later convert them to an OpenCV Mat. I used the hwnd2mat() function from this post as a reference but ran into some problems.
Only the window frame is captured; the OpenGL content itself does not appear.
Further investigation led me to believe that the problem is that StretchBlt (or BitBlt) can't capture the OpenGL pixels.
I tried to use glReadPixels (and convert to a Mat using this), but it does not read any pixels: wglCreateContext returns NULL, probably because my application does not own MEmu's DC, so wglMakeCurrent does nothing and no pixels are read.
I was able to create a modified, workaround version of hwnd2mat that gets the WindowRect of MEmu's hwnd but then uses GetDC(NULL) to capture only the portion of the screen where MEmu is located. This works, but any windows that sit on top of MEmu get captured as well.
I can work with this, sure, but I was wondering if there is a way to use glReadPixels on a window I don't own, or a way to make hwnd2mat work on the contents of a window that renders with OpenGL.
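For reference, a minimal sketch of that GetDC(NULL) workaround, assuming OpenCV is available; the function name and the 32-bit BGRA format are my own choices, and error handling is omitted:

```cpp
#include <windows.h>
#include <opencv2/core.hpp>

// Grab the screen region covered by the target window and wrap it in a cv::Mat.
// Anything drawn on top of the window in that region gets captured as well.
cv::Mat captureWindowRegion(HWND hwnd)
{
    RECT rc;
    GetWindowRect(hwnd, &rc);
    int width  = rc.right  - rc.left;
    int height = rc.bottom - rc.top;

    HDC hScreenDC = GetDC(NULL);                       // DC for the whole screen
    HDC hMemDC    = CreateCompatibleDC(hScreenDC);
    HBITMAP hBmp  = CreateCompatibleBitmap(hScreenDC, width, height);
    HGDIOBJ hOld  = SelectObject(hMemDC, hBmp);

    // Copy the screen area where the window sits.
    BitBlt(hMemDC, 0, 0, width, height, hScreenDC, rc.left, rc.top, SRCCOPY);

    cv::Mat mat(height, width, CV_8UC4);               // 32-bit BGRA, matches the DIB below
    BITMAPINFOHEADER bi = {};
    bi.biSize        = sizeof(BITMAPINFOHEADER);
    bi.biWidth       = width;
    bi.biHeight      = -height;                        // negative height = top-down rows
    bi.biPlanes      = 1;
    bi.biBitCount    = 32;
    bi.biCompression = BI_RGB;
    GetDIBits(hMemDC, hBmp, 0, height, mat.data,
              reinterpret_cast<BITMAPINFO*>(&bi), DIB_RGB_COLORS);

    SelectObject(hMemDC, hOld);
    DeleteObject(hBmp);
    DeleteDC(hMemDC);
    ReleaseDC(NULL, hScreenDC);
    return mat;
}
```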

Related

Taking a screenshot, analyzing it, then deleting it

I've been trying to code an auto clicker for a simple online game (a PHP-coded one), but I've had trouble analyzing the colors on-screen. (English isn't my first language, sorry!) I've already done a bit of C++ in university, but only for simple science-oriented console programs. (Edit: I'm working on Windows, forgot to mention!)
I've already tried the GetPixel function, but since my Chrome window is zoomed out to 80% to get the full game in frame, it seems I'm having some DPI-related issues, and looking into those made my head dizzy.
After watching a Codebullet video, I thought a better approach would be to take a screenshot of the problematic area, analyze it to see if the condition is met, then delete the screenshot. The problem is, I have no idea how I could achieve this and Google didn't help much this time :\
My code is extremely messy so I can't show it right now, but it's basically just:
-click there
-click there after 5 seconds
-click there if this pixel is this color
-repeat
Is there an easy answer to this? I'd be really thankful if there is. Have a nice day! :)
You don't need to save the screenshot if you don't want to:
Pass the target window handle to GetDC(); it will return the device context (DC) of the window.
Pass the device context to CreateCompatibleDC() to create a compatible DC.
Use CreateCompatibleBitmap(), passing in the DC and the size of the window. This returns a handle to a bitmap.
Use SelectObject() to select the bitmap into the compatible DC.
Use BitBlt() to do a bit-block transfer of the selected pixels from the regular DC into the compatible DC, using the SRCCOPY raster operation code to do a normal copy.
Create a BITMAP object. Use GetObject() and pass the handle to the bitmap you created.
Create a BITMAPINFOHEADER and define the member vars. Create an array of unsigned chars big enough to fit all the pixels from your bitmap.
Use GetDIBits() passing in the handle to the compatible bitmap, the bitmap header and a pointer to the pixel array. This loads the pixels from the bitmap into the pixel array.
Now parse all that juicy pixel data, search for the colors you're looking for and test the results against your conditionals to decide what to do next.
Don't forget to delete objects and release memory & device contexts.
I believe this is the tutorial I followed where I learned this, courtesy of MSDN: https://learn.microsoft.com/en-us/windows/desktop/gdi/capturing-an-image
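Roughly, the steps above might look like this in code; this is a sketch only, with the function name and the color check as placeholders and error handling omitted:

```cpp
#include <windows.h>
#include <vector>

// Capture the target window with GDI, pull the pixels out with GetDIBits,
// and test the pixel at (x, y) against an RGB color.
bool IsPixelColorAt(HWND hwnd, int x, int y, BYTE r, BYTE g, BYTE b)
{
    RECT rc;
    GetClientRect(hwnd, &rc);
    int width  = rc.right  - rc.left;
    int height = rc.bottom - rc.top;

    HDC hWndDC   = GetDC(hwnd);                                    // window DC
    HDC hMemDC   = CreateCompatibleDC(hWndDC);                     // compatible DC
    HBITMAP hBmp = CreateCompatibleBitmap(hWndDC, width, height);  // compatible bitmap
    HGDIOBJ hOld = SelectObject(hMemDC, hBmp);                     // select it
    BitBlt(hMemDC, 0, 0, width, height, hWndDC, 0, 0, SRCCOPY);    // copy the pixels

    BITMAPINFOHEADER bi = {};
    bi.biSize        = sizeof(bi);
    bi.biWidth       = width;
    bi.biHeight      = -height;     // negative height = top-down pixel rows
    bi.biPlanes      = 1;
    bi.biBitCount    = 32;          // BGRA, 4 bytes per pixel
    bi.biCompression = BI_RGB;

    std::vector<BYTE> pixels(static_cast<size_t>(width) * height * 4);
    GetDIBits(hMemDC, hBmp, 0, height, pixels.data(),
              reinterpret_cast<BITMAPINFO*>(&bi), DIB_RGB_COLORS);

    // Inspect the pixel you care about (bytes are in BGRA order).
    size_t idx = (static_cast<size_t>(y) * width + x) * 4;
    bool match = pixels[idx] == b && pixels[idx + 1] == g && pixels[idx + 2] == r;

    SelectObject(hMemDC, hOld);     // cleanup
    DeleteObject(hBmp);
    DeleteDC(hMemDC);
    ReleaseDC(hwnd, hWndDC);
    return match;
}
```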

Direct2D/C++ - Offscreen Rendering using Bitmap

I have already implemented a Direct2D Windows desktop application in C++, where I show the graphical results (points, lines, and ellipses) during the simulation. I keep a buffer for storing the simulation values for as long as the simulation remains running, and at every time interval I simply plot the values. Right now, I draw directly to the Hwnd (ID2D1HwndRenderTarget), like:
pRenderTarget->BeginDraw();
for (values of simulation results)
    pRenderTarget->DrawLine(....);
pRenderTarget->EndDraw();
Now I want to use offscreen rendering/drawing to a bitmap, as I need to store the bitmap as an image on the computer (equivalent to capturing a screenshot to store the simulation results). How should I proceed in this case (with or without a Direct2D IWICBitmapFactory, for later screen capturing)?
create ID2D1HwndRenderTarget pHwndRenderTarget - using pD2DFactory->CreateHwndRenderTarget()
create ID2D1BitmapRenderTarget pBitmapRenderTarget - using pHwndRenderTarget->CreateCompatibleRenderTarget()
create an empty ID2D1Bitmap pBmp - using pBitmapRenderTarget->CreateBitmap()
Should I draw the lines on this bitmap? If not, where should I draw them?
And in the end, between whose BeginDraw() and EndDraw() should I place the bitmap?
Later at some point, I'd capture a screenshot of this bitmap. Can I achieve this without IWICBitmapFactory? Any code samples would be appreciated.
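For what it's worth, here is a minimal sketch of the direction I have in mind, assuming a working pHwndRenderTarget; names like pBitmapRT are placeholders, and the drawing happens between the compatible (bitmap) render target's own BeginDraw()/EndDraw():

```cpp
#include <d2d1.h>

// Sketch only: error handling omitted, brush and geometry hard-coded for illustration.
void DrawOffscreen(ID2D1HwndRenderTarget* pHwndRenderTarget)
{
    // 1. Create an offscreen (bitmap) render target compatible with the window target.
    ID2D1BitmapRenderTarget* pBitmapRT = nullptr;
    pHwndRenderTarget->CreateCompatibleRenderTarget(&pBitmapRT);

    // 2. Draw the simulation results between the *bitmap* target's BeginDraw/EndDraw.
    ID2D1SolidColorBrush* pBrush = nullptr;
    pBitmapRT->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::Black), &pBrush);
    pBitmapRT->BeginDraw();
    pBitmapRT->Clear(D2D1::ColorF(D2D1::ColorF::White));
    pBitmapRT->DrawLine(D2D1::Point2F(0.0f, 0.0f), D2D1::Point2F(100.0f, 100.0f), pBrush, 2.0f);
    pBitmapRT->EndDraw();

    // 3. Retrieve the rendered content as an ID2D1Bitmap. This bitmap can be drawn
    //    to the window target, or handed to a WIC encoder to be saved as an image file.
    ID2D1Bitmap* pBmp = nullptr;
    pBitmapRT->GetBitmap(&pBmp);

    // 4. Optionally present it in the window.
    pHwndRenderTarget->BeginDraw();
    pHwndRenderTarget->DrawBitmap(pBmp);
    pHwndRenderTarget->EndDraw();

    pBmp->Release();
    pBrush->Release();
    pBitmapRT->Release();
}
```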

Creating a program that creates a full screen overlay

I want to write a program that would create a transparent overlay filling the entire screen in Windows 7, preferably with C++ and OpenGL. Though, if there is an API written in another language that makes this super easy, I would be more than willing to use that too. In general, I assume I would have to be able to read the pixels that are already on the screen somehow.
Using the same method screen capture software uses to get the pixels from the screen and then redrawing them would work initially, but the problem would then be if the screen updates. My program would then have to minimize/close and reappear in order for me to be able to read the underlying pixels.
Windows Vista introduced a new flag into the PIXELFORMATDESCRIPTOR: PFD_SUPPORT_COMPOSITION. If the OpenGL context is created with an alpha channel, i.e. the cAlphaBits member of the PFD is nonzero, the alpha channel of the OpenGL framebuffer is respected by the Windows compositor.
Then, by creating a full-screen, borderless, undecorated window, you get exactly the kind of overlay you desire. However, this window will still receive all input events, so you'll have to do some grunt work and pass all input events on to the underlying windows manually.
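A minimal sketch of setting up such a context, assuming hwnd is already a borderless full-screen window; the fallback #define is only needed with older SDK headers:

```cpp
#include <windows.h>
#include <GL/gl.h>

#ifndef PFD_SUPPORT_COMPOSITION
#define PFD_SUPPORT_COMPOSITION 0x00008000  // not present in some older SDK headers
#endif

void SetupCompositedGL(HWND hwnd)
{
    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL |
                     PFD_DOUBLEBUFFER   | PFD_SUPPORT_COMPOSITION;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cAlphaBits = 8;        // nonzero, so the compositor respects the framebuffer alpha
    pfd.cDepthBits = 24;
    pfd.iLayerType = PFD_MAIN_PLANE;

    HDC hdc    = GetDC(hwnd);
    int format = ChoosePixelFormat(hdc, &pfd);
    SetPixelFormat(hdc, format, &pfd);

    HGLRC ctx = wglCreateContext(hdc);
    wglMakeCurrent(hdc, ctx);

    // Anything cleared or drawn with alpha = 0 stays transparent,
    // letting the desktop underneath show through.
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
}
```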

Shatter Glass desktop Win32 effect for windows?

I would like a Win32 program that takes the desktop, acts like it is shattering glass, and in the end puts the pieces back together. Is there any reference on doing this kind of effect with C++?
I wrote a program (unfortunately now lost) to do something like this a few years ago.
The desktop image can be retrieved by creating a DC for the screen, creating a compatible bitmap, then using BitBlt to copy the screen contents into the bitmap. Then use GetDIBits to get the pixels from this bitmap in a known format.
This link doesn't do exactly that, but it demonstrates the principle, albeit using MFC. I couldn't find a Win32-specific example:
http://www.flounder.com/screencapture.htm
For the shattering effect, best to use Direct3D or OpenGL. (Further details are up to you.) Create a texture using the bitmap data saved earlier.
By way of window for associating with OpenGL or D3D, create a borderless window that fills the entire screen and doesn't do painting or background erasing. This will prevent any flicker when switching from the desktop image to the copy of the desktop image being used to draw.
(If using D3D, you'll also find GetMonitorInfo useful in conjunction with IDirect3D9::GetAdapterMonitor and friends, as you'll need to create a separate device for each monitor and you'll therefore need to know which portion of the desktop corresponds to that device.)
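For the texture step, a sketch along these lines could work, assuming the pixels were fetched by GetDIBits as 32-bit data and an OpenGL context is already current; GL_BGRA may need the fallback define with the stock Windows headers:

```cpp
#include <windows.h>
#include <GL/gl.h>

#ifndef GL_BGRA
#define GL_BGRA 0x80E1  // same value as GL_BGRA_EXT; not defined in the GL 1.1 headers
#endif

// Upload the captured desktop pixels as a texture for the shatter effect.
GLuint CreateDesktopTexture(const void* pixels, int width, int height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // A 32-bpp GetDIBits buffer is laid out as BGRA, hence GL_BGRA here.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}
```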

Acquire direct write access to window's backbuffer, yet still allow read access to what is on the screen already

I was wondering: is it possible to acquire write access to the graphics card's primary buffer through the Windows API, yet still allow read access to what should be there? To clarify, here is what I want:
1. Create a DirectX device on a window and hide it. Use the stencil buffer to apply an alpha channel to pixels not written to by my code.
2. Acquire the entirety of the current display adapter's buffer, i.e. have a pointer to a buffer, in the current bit depth and resolution, that contains the current screen without whatever I drew to the screen. I was thinking of, instead of hiding my window, simply using a LAYERED window and somehow acquiring the buffer before my window's pixels get blitted to it.
3. Copy the buffer acquired in step 2 into a new memory location.
4. Blit my DirectX device's primary buffer to the buffer built in step 3.
5. Blit the buffer from step 4 to the screen.
6. GOTO 2.
So the end result is drawing hardware-accelerated 3D directly onto the desktop while other applications continue to render normally.
There are better ways to create a window without borders. You might try experimenting with the dwStyle parameter of CreateWindow, for example. It looks as if passing in WS_OVERLAPPED | WS_POPUP results in a borderless window, which is what you appear to want (see this forum post).
I also think the term "borderless window" is not correct, because I'm hardly getting any results in Google for searches including those words.
Is there any reason why you wouldn't just do this normally with GDI and use windowed mode for DirectX? Why bother with full-screen mode when you need to render with a window?
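For illustration, a borderless window covering the screen can be created along the lines suggested above; the window class name here is a placeholder that is assumed to be registered elsewhere:

```cpp
#include <windows.h>

HWND CreateBorderlessFullscreenWindow(HINSTANCE hInstance)
{
    int screenW = GetSystemMetrics(SM_CXSCREEN);
    int screenH = GetSystemMetrics(SM_CYSCREEN);

    // WS_POPUP gives a window with no caption, border, or menu
    // (WS_OVERLAPPED is 0, so OR-ing it in changes nothing).
    return CreateWindowEx(
        0,                              // extended styles; WS_EX_LAYERED could go here
        TEXT("OverlayWindowClass"),     // placeholder class name, registered elsewhere
        TEXT("Overlay"),
        WS_POPUP | WS_VISIBLE,
        0, 0, screenW, screenH,
        NULL, NULL, hInstance, NULL);
}
```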