Problem with CreateDC and wglMakeCurrent - C++

PIXELFORMATDESCRIPTOR pfd = { /* otherwise fine for a window with 32-bit color */ };
HDC hDC = CreateDC(TEXT("Display"), NULL, NULL, NULL); // always OK
int ipf = ChoosePixelFormat(hDC, &pfd);                // always OK
SetPixelFormat(hDC, ipf, &pfd);                        // always OK
HGLRC hRC = wglCreateContext(hDC);                     // always OK
wglMakeCurrent(hDC, hRC); // ! read error: 0xbaadf039 (debug, obviously)
But the following works with the same hRC:
wglMakeCurrent(hSomeWindowDC,hRC);
The above is part of an OpenGL 3.0+ initialization system for Windows.
I am trying to avoid creating a dummy window for the sake of aesthetics.
I have never used CreateDC before, so perhaps I've missed something.
edit: hSomeWindowDC would point to a window DC with an appropriate pixel format.
More info:
I wish to create a window-independent OpenGL rendering context.
Due to the answer selected, it seems I need to use a dummy window (not really a big deal, just a handle to pass around all the same).
Why I would want to do this: since the same rendering context can be made current with multiple windows that share a pixel format in the same thread, it is possible to create a rendering context (really, just a container for GL-related objects) that is independent of any particular window. In this way, one can create a clean separation between the graphics and UI initializations.

The purpose of the context initially isn't rendering (although I believe one could render into textures using it). If one wants to change the contents of a buffer within a particular context, the desired context object itself just needs to be made current (since it carries the dummy window around with it, this is possible). Rendering into a window is then simple: as implied above, the window's DC only needs to have the same pixel format. Simply make the rendering context and the window's DC current, and render.

Please note that, at the time of this writing, this idea is still in testing. I will update this post should this change (or if I can remember :P).
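To illustrate, here is a minimal sketch of the intended usage, where hDummyDC, hWindowDC, and hRC are hypothetical handles set up elsewhere with matching pixel formats:

// One HGLRC, made current against different DCs that share a pixel format.
wglMakeCurrent(hDummyDC, hRC);   // off-window: create GL objects, upload textures
// ... gl* calls that populate the context ...
wglMakeCurrent(hWindowDC, hRC);  // same context, now bound to a real window's DC
// ... render, then present:
SwapBuffers(hWindowDC);
wglMakeCurrent(NULL, NULL);      // release the context when done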

I've got a dormant brain cell from reading Petzold 15 years ago that just sprang back to life. The DC from CreateDC() is restricted. Good for getting info about the display device, measurement, that sort of stuff. Not good to use as a regular painting DC. You almost certainly need GetDC().
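For reference, a minimal sketch of the dummy-window route, assuming a window class named "DummyClass" has already been registered and pfd is filled in as above (error handling omitted):

// Invisible dummy window, created only to obtain a usable DC for wgl* calls.
HWND hWnd = CreateWindow(TEXT("DummyClass"), TEXT(""), WS_POPUP,
                         0, 0, 1, 1, NULL, NULL, GetModuleHandle(NULL), NULL);
HDC hDC = GetDC(hWnd);                    // a real window DC, unlike CreateDC()
int ipf = ChoosePixelFormat(hDC, &pfd);
SetPixelFormat(hDC, ipf, &pfd);
HGLRC hRC = wglCreateContext(hDC);
wglMakeCurrent(hDC, hRC);                 // succeeds with a window DC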

My current OpenGL 3+ initialization routine doesn't require a dummy window. You can simply attempt to make a second RC and make it current using the DC of the real window. Take a look at the OpenGL wiki Tutorial: OpenGL 3.1 The First Triangle (C++/Win).

Related

Ways for Direct2D and Direct3D interoperability

I want to make a Direct2D GUI that will live in a DLL and render with the Direct3D device of the application I inject it into.
I know that I can simply use ID2D1Factory::CreateDxgiSurfaceRenderTarget to make a DXGI surface and use it as a D2D render target, but this requires the D3D11_CREATE_DEVICE_BGRA_SUPPORT flag on the Direct3D device.
The problem is that the application creates its device without this flag and, for that reason, ID2D1Factory::CreateDxgiSurfaceRenderTarget fails.
I am trying to find another way to draw on the application's window (externally or inside the window's render target) that also works when that window is in full-screen mode.
These are the alternatives I have tried so far:
Create a D2D render target with ID2D1Factory::CreateDCRenderTarget. This worked, but the part I rendered was blinking/flashing (shown and hidden very quickly in a loop). I also tried calling ID2D1DCRenderTarget::BindDC before ID2D1RenderTarget::BeginDraw; it blinked a bit less, but I still had the same issue.
Create a new window that always stays on top of every other window and render there with D2D. But if the application goes into full-screen mode, this window does not show on screen.
Create a second D3D device with the D3D11_CREATE_DEVICE_BGRA_SUPPORT flag enabled and share an ID3D11Texture2D resource between the window's device and my own, but I wasn't able to make it work... There are not a lot of examples of how to do it. The idea was to create a second device, draw with D2D on that device, and then sync the two D3D devices – I followed this example (with Direct3D 11).
Create a D2D device and share the D2D device's data with the D3D device; but when I call ID2D1Factory1::CreateDevice to create the device, it fails because the D3D device was created without the D3D11_CREATE_DEVICE_BGRA_SUPPORT flag. I started with this example.
I've heard of hardware overlays, but they only work on some graphics cards, and I think I would run into problems with that approach: https://learn.microsoft.com/el-gr/windows/win32/medfound/hardware-overlay-support.
I am currently at a dead end; I don't know what to do. Does anyone have any idea that may help me?
Is there perhaps a way to draw on the screen that works even when a window is in full-screen mode?
Approach #3 is the correct one. Here are a few tips.
Don’t use keyed mutexes. Don’t use NT handles. The only flag you need is D3D11_RESOURCE_MISC_SHARED.
To properly synchronize access to the shared texture across devices, use queries. Specifically, you need a query of type D3D11_QUERY_EVENT. The workflow should look like the following.
Create a shared texture on one device and open it on the other; it doesn't matter where it's created and where it's imported. Don't forget the D3D11_BIND_RENDER_TARGET flag. Also create a query.
Create a D2D render target with CreateDxgiSurfaceRenderTarget over the shared texture, then render your overlay into the shared texture with D2D and/or DirectWrite.
On the immediate D3D device context of the BGRA-enabled device you use for D2D rendering, call ID3D11DeviceContext::End once, passing the query. Then wait for ID3D11DeviceContext::GetData to return S_OK. If you care about electricity/thermals use Sleep(1); if you prioritize latency, busy-wait with _mm_pause() instructions.
Once GetData has returned S_OK for that query, the GPU has finished rendering your 2D scene. You can now use that texture on the other device to compose it into the 3D scene. A rough sketch of the setup and the wait loop follows.
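Under the assumptions above, a sketch with error handling omitted; d2dSideDevice and d2dSideContext are placeholder names for the BGRA-enabled device and its immediate context:

// Shared texture, created once on the BGRA-enabled device.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = width;                                   // overlay size
desc.Height = height;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;             // BGRA for D2D interop
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED;          // legacy (non-NT) sharing
ID3D11Texture2D* sharedTex = NULL;
d2dSideDevice->CreateTexture2D(&desc, NULL, &sharedTex);

// Event query used as a GPU fence.
D3D11_QUERY_DESC qd = {};
qd.Query = D3D11_QUERY_EVENT;
ID3D11Query* query = NULL;
d2dSideDevice->CreateQuery(&qd, &query);

// After rendering the overlay with D2D into sharedTex:
d2dSideContext->End(query);                           // insert the fence
d2dSideContext->Flush();
while (d2dSideContext->GetData(query, NULL, 0, 0) != S_OK)
    _mm_pause();                                      // or Sleep(1); needs <emmintrin.h>

To open the texture on the other device, query IDXGIResource from it, call GetSharedHandle, and pass the handle to that device's ID3D11Device::OpenSharedResource.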
How you compose your 2D content into the render target depends on how you want to draw that content.
If that’s a small opaque quad, you can probably CopySubresourceRegion into the render target texture.
Or, if your 2D content has a transparent background, you need vertex and pixel shaders to render a quad (4 vertices) textured with your shared texture. BTW, you don't necessarily need a vertex/index buffer for that; there's a well-known trick to do without one (see the sketch below). Don't forget about blend state (you probably want alpha blending), depth/stencil state (you probably want the depth test disabled when rendering that quad), and the D3D11_BIND_SHADER_RESOURCE flag for the shared texture.
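A hedged sketch of that bufferless quad draw; the shader and state objects (quadVS, quadPS, alphaBlend, noDepth, overlaySRV) are hypothetical and created elsewhere, with the vertex shader deriving positions/UVs from SV_VertexID:

// Draw a textured quad with no vertex/index buffer bound.
context->IASetInputLayout(NULL);
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
context->VSSetShader(quadVS, NULL, 0);         // VS generates corners from SV_VertexID
context->PSSetShader(quadPS, NULL, 0);
context->PSSetShaderResources(0, 1, &overlaySRV);
context->OMSetBlendState(alphaBlend, NULL, 0xFFFFFFFF);
context->OMSetDepthStencilState(noDepth, 0);   // depth test disabled
context->Draw(4, 0);                           // 4 vertices -> 2 triangles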
P.S. There's another way. Make sure your code runs in that process before the process creates its Direct3D device. Then use something like MinHook to intercept the call to D3D11.dll::D3D11CreateDeviceAndSwapChain; in the intercepted function, set the BGRA bit you need, then call the original function. This is slightly less reliable because there are multiple ways to create a D3D device, but it's easier to implement, will work faster, and will use less memory.

What is necessary to toggle fullscreen in DirectX 11?

I've just started learning DX, so I know almost nothing about it, although I do know OpenGL (to a certain extent). I'm following a tutorial (http://www.rastertek.com/tutdx11.html) and I have a working window rendering just a white background (clear).
Now, how do I actually switch from windowed mode to fullscreen and vice versa? I know there are many tutorials, and some even provide code for doing that, but since I'm a newbie that's not really helpful. Why? Because every code sample is different, and trying to find a pattern in all of them is apparently too difficult for me.
So I'm not asking for code; instead I would like you to tell me what things I need to release/recreate/change to toggle correctly (and all of them). I know I need to change the display settings, and I know I have to change something about the swap chain and release/recreate some buffers, but I'm not really sure what exactly.
You can use SetFullscreenState on your swap chain:
swapChain->SetFullscreenState(TRUE, NULL);
See IDXGISwapChain::SetFullscreenState on MSDN.
The main thing you have to do is release all references to the swap chain's back buffers, call IDXGISwapChain::ResizeBuffers, then re-create everything.
Since Win32 sends the WM_SIZE message upon window initialization, it's entirely possible to put all of the following
Clear the previous window-size-specific context
If the swap chain already exists, resize it, otherwise create one
Obtain the back buffer for this window, which will be the final 3D render target
Create a view interface on the render target to use on bind
Allocate a 2D surface as the depth/stencil buffer and create a depth-stencil view on this surface to use on bind
Create a viewport descriptor of the full window size
Set the current viewport using the descriptor
inside a static function (unless WinMain has an object to call it on), and call that function whenever the WM_SIZE message is triggered; a sketch of these steps follows.
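Here is a hedged sketch of those steps, assuming device, context, swapChain, renderTargetView, and depthStencilView are the usual D3D11 objects living at class or file scope (names are illustrative; error handling omitted):

void OnSize(UINT width, UINT height)
{
    // 1. Clear the previous window-size-specific context.
    context->OMSetRenderTargets(0, NULL, NULL);
    if (renderTargetView) { renderTargetView->Release(); renderTargetView = NULL; }
    if (depthStencilView) { depthStencilView->Release(); depthStencilView = NULL; }

    // 2. Resize the existing swap chain (creation not shown here).
    swapChain->ResizeBuffers(0, width, height, DXGI_FORMAT_UNKNOWN, 0);

    // 3+4. Obtain the back buffer and create a render target view on it.
    ID3D11Texture2D* backBuffer = NULL;
    swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&backBuffer);
    device->CreateRenderTargetView(backBuffer, NULL, &renderTargetView);
    backBuffer->Release();

    // 5. Allocate a depth/stencil surface and a view on it.
    D3D11_TEXTURE2D_DESC dsDesc = {};
    dsDesc.Width = width;
    dsDesc.Height = height;
    dsDesc.MipLevels = 1;
    dsDesc.ArraySize = 1;
    dsDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
    dsDesc.SampleDesc.Count = 1;
    dsDesc.Usage = D3D11_USAGE_DEFAULT;
    dsDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
    ID3D11Texture2D* depthStencil = NULL;
    device->CreateTexture2D(&dsDesc, NULL, &depthStencil);
    device->CreateDepthStencilView(depthStencil, NULL, &depthStencilView);
    depthStencil->Release();

    context->OMSetRenderTargets(1, &renderTargetView, depthStencilView);

    // 6+7. Describe and set a viewport covering the full window.
    D3D11_VIEWPORT vp = { 0.0f, 0.0f, (FLOAT)width, (FLOAT)height, 0.0f, 1.0f };
    context->RSSetViewports(1, &vp);
}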
You can check out how the DirectXTK does it here:
https://directxtk.codeplex.com/

How to efficiently render double buffered window without any tearing effect?

I want to create my own tiny windowless GUI system, and for that I am using GDI+. I cannot post the code here because it got huge (C++), but below are the main steps I am following...
Create a bitmap of size equal to the application window.
For all mouse and keyboard events, update the custom control states (e.g. whether the mouse is currently held over a particular control, etc.).
For the WM_PAINT event, paint the background to the offscreen bitmap, then paint all the updated controls on top of it, and finally copy the entire offscreen image to the front buffer via a Graphics::DrawImage(..) call (sketched after this list).
For WM_SIZE/WM_SIZING, delete the previous offscreen bitmap and create another one with the new window size.
There are also some checks to prevent repeated drawing of controls, i.e. a control is drawn only when it needs repainting; in other words, it is painted only when its state has changed.
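For reference, the WM_PAINT step might look roughly like this, where backBuffer is a hypothetical pointer to the offscreen Gdiplus::Bitmap from the first step:

case WM_PAINT:
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);
    Gdiplus::Graphics offscreen(backBuffer);           // draw into the offscreen bitmap
    offscreen.Clear(Gdiplus::Color(255, 240, 240, 240));
    // ... paint the controls that need repainting into 'offscreen' ...
    Gdiplus::Graphics screen(hdc);
    screen.DrawImage(backBuffer, 0, 0);                // single blit to the window
    EndPaint(hwnd, &ps);
    return 0;
}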
The system is working fine, with one exception: when the window is being resized, a sort of tearing effect appears. Let me try to explain what I mean by tearing effect...
On the sizing edge/border there is a flickering gap as I drag the border. It is as if my DrawImage() call returns immediately, and while one swap operation is half done another image drawing starts up.
Now you may think this is a common artifact that happens in many other applications, since resizing the backbuffer is not always as fast as resizing the window; but I noticed that in other applications, although there is a lag between the window size and the client area size as the window grows, nothing flickers near the edge (usually it's just the white background that shows up as thin uniform strips along the border).
Also, the dynamic controls which move with the window resize act jerky during sizing.
At first it seemed to me that using a constant full-screen-sized offscreen surface could minimize the artifact, but when I tried it the results were not that satisfactory. I also tried calling Sleep() during sizing so that one flip completes before another starts, but strangely even that didn't work for me!
I have heard that GDI on Vista is not hardware accelerated; could that be the problem?
I also wonder how frameworks such as Qt render windowless GUIs so smoothly; even if you resize a complex Qt GUI window very fast, negligibly little artifacting appears. As far as I know, Qt can use OpenGL for GUI rendering, but that is a secondary option.
If I use DirectX then real-time resizing is even harder; OpenGL, on the other hand, seems nice for resizing without any problem, but I would lose all the 2D drawing capability of GDI+.
If any of you have done anything like this before, please guide me. Also, if you have any pointers I should consider for custom user interface design, please provide the links.
Thanks!
I always wished to design interfaces like Windows Media Player 11, but can someone tell me whether there is a straightforward solution for a C++ programmer (I want to know how, rather than use some existing framework)? Subclassing, owner drawing, custom drawing: nothing seems to give you that level of control, and I don't know a way to draw a semitransparent control with the common controls, so I think this question deserves some special attention. Thanks again.
Could it be a WM_ERASEBKGND message that's causing it?
See this question: GDI+ double buffering in C++.
Also, if you need a fast response from your GUI, I would advise against GDI+.
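If WM_ERASEBKGND is the culprit, the usual fix is to handle the erase yourself so that only WM_PAINT touches the client area; a minimal sketch of the window-procedure case:

case WM_ERASEBKGND:
    return 1;  // claim the background as erased; WM_PAINT repaints everything anyway

Returning a nonzero value tells Windows the background has been handled, which removes the flash of background color between the erase and the subsequent paint.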

Shatter Glass desktop Win32 effect for windows?

I would like a Win32 program that takes the desktop, acts like it is shattering glass, and in the end puts the pieces back together. Is there any reference on doing this kind of effect with C++?
I wrote a program (unfortunately now lost) to do something like this a few years ago.
The desktop image can be retrieved by creating a DC for the screen, creating a compatible bitmap, then using BitBlt to copy the screen contents into the bitmap. Then use GetDIBits to get the pixels from this bitmap in a known format.
This link doesn't do exactly that, but it demonstrates the principle, albeit using MFC. I couldn't find a Win32-specific example:
http://www.flounder.com/screencapture.htm
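A plain Win32 sketch of the capture sequence described above (error handling omitted; std::vector requires <vector>):

HDC screenDC = GetDC(NULL);                           // DC for the whole screen
int w = GetSystemMetrics(SM_CXSCREEN);
int h = GetSystemMetrics(SM_CYSCREEN);
HDC memDC = CreateCompatibleDC(screenDC);
HBITMAP bmp = CreateCompatibleBitmap(screenDC, w, h);
HGDIOBJ old = SelectObject(memDC, bmp);
BitBlt(memDC, 0, 0, w, h, screenDC, 0, 0, SRCCOPY);   // snapshot the desktop
SelectObject(memDC, old);                             // deselect before GetDIBits

BITMAPINFO bmi = {};
bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
bmi.bmiHeader.biWidth = w;
bmi.bmiHeader.biHeight = -h;                          // negative: top-down pixel rows
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;                        // known 32-bpp format
bmi.bmiHeader.biCompression = BI_RGB;
std::vector<BYTE> pixels(w * h * 4);
GetDIBits(memDC, bmp, 0, h, pixels.data(), &bmi, DIB_RGB_COLORS);

DeleteObject(bmp);
DeleteDC(memDC);
ReleaseDC(NULL, screenDC);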
For the shattering effect, best to use Direct3D or OpenGL. (Further details are up to you.) Create a texture using the bitmap data saved earlier.
By way of a window for associating with OpenGL or D3D, create a borderless window that fills the entire screen and does no painting or background erasing. This will prevent any flicker when switching from the desktop itself to the copy of the desktop image being used to draw.
(If using D3D, you'll also find GetMonitorInfo useful in conjunction with IDirect3D9::GetAdapterMonitor and friends, as you'll need to create a separate device for each monitor and you'll therefore need to know which portion of the desktop corresponds to that device.)

Acquire direct write access to window's backbuffer, yet still allow read access to what is on the screen already

I was wondering: is it possible to acquire write access to the graphics card's primary buffer through the Windows API, yet still allow read access to what should be there? To clarify, here is what I want:
1. Create a DirectX device on a window and hide it. Use the stencil buffer to apply an alpha channel to pixels not written to by my code.
2. Acquire the entirety of the current display adapter's buffer. I.e., have a pointer to a buffer, in the current bit depth and resolution, that contains the current screen without whatever I drew to the screen. I was thinking of, instead of hiding my window, simply using a LAYERED window and somehow acquiring the buffer before my window's pixels get blitted to it.
3. Copy the buffer acquired in step 2 into a new memory location.
4. Blit my DirectX device's primary buffer to the buffer built in step 3.
5. Blit the buffer from step 4 to the screen.
6. GOTO 2.
So the end result is drawing hardware accelerated 3D directly to the window's desktop while still rendering other applications.
There are better ways to create a window without borders. You might try experimenting with the dwStyle parameter of CreateWindow, for example. It looks as if passing in WS_OVERLAPPED | WS_POPUP results in a borderless window, which is what you appear to want (see this forum post).
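A tiny sketch of that suggestion, assuming a registered window class named "MyClass" and an hInstance from WinMain (both hypothetical here):

// Borderless window covering the whole screen.
HWND hwnd = CreateWindow(TEXT("MyClass"), TEXT(""),
                         WS_OVERLAPPED | WS_POPUP,    // no caption, no border
                         0, 0,
                         GetSystemMetrics(SM_CXSCREEN),
                         GetSystemMetrics(SM_CYSCREEN),
                         NULL, NULL, hInstance, NULL);
ShowWindow(hwnd, SW_SHOW);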
I also think the term "borderless window" is not correct, because I'm hardly getting any results in Google for searches including those words.
Is there any reason why you wouldn't just do this normally with GDI and use windowed mode for DirectX? Why bother with full-screen mode when you need to render with a window?