Why does CreateWICTextureFromFile require ShowWindow?

I'm writing some D3D11 apps and am using DirectXTK's CreateWICTextureFromFile to load a texture file into an SRV. I wanted to show my rendering window only when I start to draw the scene (after initializing models, textures, shaders, constant buffers, etc.), so I've created the window early on but omit the ShowWindow call until later.
Unfortunately I get an error unless I show the window prior to creating the texture:
// ShowWindow(hwnd, SW_SHOW); // works
hr = DirectX::CreateWICTextureFromFile(device.Get(), L"../../Common/Resources/Textures/green_grid.png", nullptr, psTexture.GetAddressOf());
ShowWindow(hwnd, SW_SHOW); // fails
HResult error:
No such interface supported
Also it seems to work fine if I show the window at the end of initialization as long as I don't load any textures with this function.
Maybe I don't have a good understanding of how a window works with respect to the D3D API. Looking at CreateWICTextureFromFile's parameters, I only see a dependency on the device and the SRV, so I'm not sure why there's a dependency on window visibility.

Before you call WICTextureLoader (which uses the Windows Imaging Component) you need to initialize COM as noted in the documentation.
In your main entry-point, add:
if (FAILED(CoInitializeEx(nullptr, COINIT_MULTITHREADED)))
    return 1; // handle the error
The fact that ShowWindow happens to initialize COM is an interesting side-effect, but that's definitely not a function you are required to call to use my GitHub libraries.
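For reference, a minimal sketch of the resulting initialization order, assuming a wWinMain entry point; CreateRenderWindow, CreateDeviceAndResources and RunMessageLoop are hypothetical helpers standing in for the poster's own setup code:

int WINAPI wWinMain(HINSTANCE hInstance, HINSTANCE, PWSTR, int nCmdShow)
{
    // Initialize COM before any WICTextureLoader call
    if (FAILED(CoInitializeEx(nullptr, COINIT_MULTITHREADED)))
        return 1;

    HWND hwnd = CreateRenderWindow(hInstance);   // window can stay hidden for now
    CreateDeviceAndResources(hwnd);              // CreateWICTextureFromFile is safe here
    ShowWindow(hwnd, nCmdShow);                  // visibility is irrelevant to WIC

    int ret = RunMessageLoop();
    CoUninitialize();
    return ret;
}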

Related

Using d3d9's backbuffer functionalities correctly

I'm trying to write a simple DLL which allows me to take a screenshot of the program calling it.
The program gives me its window handle via the DLL call.
The CreateDevice and CreateOffscreenPlainSurface functions work fine using the provided handle, according to the HRESULT values returned.
But the output image is blank, so I think I'm not using DirectX's GetBackBuffer function correctly.
Here is how I proceed:
HRESULT GFBD_r = g_pd3dDevice->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &pSurface);
HRESULT LR_r = pSurface->LockRect(&lr, &rect, D3DLOCK_READONLY);
memcpy(&Frame[0], lr.pBits, Width*Height*4);
pSurface->UnlockRect();
My Frame vector remains empty. Maybe the problem comes from multisampling? I don't exactly understand how it works, except that I'm using d3dpp.MultiSampleType = D3DMULTISAMPLE_NONE; in my D3DPRESENT_PARAMETERS.
Creating a D3D device for an existing window does not allow you to capture its contents, even if that window is rendered to using hardware acceleration. If there is already a D3D device for the window, the API call to create the device should fail (iirc). If the window is drawn to using regular Windows APIs (i.e. GDI), the D3D content will, depending on the Windows version, be either composed with that other content or behave in an undefined way. Such composition would happen after you access the back buffer, and (likely) elsewhere.
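For completeness: if the goal is just a screenshot of what is visible on screen, the usual D3D9 route (not covered by the answer above, and sketched here only under the assumption that g_pd3dDevice and a byte vector Frame exist as in the question) is to copy the front buffer, i.e. the visible desktop, into a system-memory surface and crop to the window afterwards:

int screenW = GetSystemMetrics(SM_CXSCREEN);
int screenH = GetSystemMetrics(SM_CYSCREEN);
Frame.resize(screenW * screenH * 4);

// GetFrontBufferData needs an A8R8G8B8 system-memory surface the size of the screen
IDirect3DSurface9 *pCapture = nullptr;
HRESULT hr = g_pd3dDevice->CreateOffscreenPlainSurface(
    screenW, screenH, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &pCapture, nullptr);
if (SUCCEEDED(hr))
    hr = g_pd3dDevice->GetFrontBufferData(0, pCapture);   // copies the visible desktop

if (SUCCEEDED(hr))
{
    D3DLOCKED_RECT lr;
    if (SUCCEEDED(pCapture->LockRect(&lr, nullptr, D3DLOCK_READONLY)))
    {
        for (int y = 0; y < screenH; ++y)                 // copy row by row, honouring lr.Pitch
            memcpy(&Frame[y * screenW * 4],
                   (BYTE *)lr.pBits + y * lr.Pitch,
                   screenW * 4);
        pCapture->UnlockRect();
    }
    // crop Frame to the target window's rectangle (GetWindowRect) as needed
}
if (pCapture) pCapture->Release();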

Coping with lost device in VMR9 custom Allocator-Presenter

First, some necessary preamble:
I'm using DirectX9 and have no choice I'm afraid. Dealing with legacy code. Sorry!
I'm using DirectX9 SDK June 2010.
What I'm trying to achieve is the following. I have an application which can render in both windowed mode and fullscreen using DirectX. I need to be able to play video when in fullscreen mode and having a visible transition between rendering of the application and playing video is unacceptable (i.e. kick out of fullscreen and go back in again). Thus I need to use the same DirectX device to render the application and the video.
I've achieved this by putting together a custom allocator/presenter as per the vmr9allocator example in the DirectX SDK (see e.g. C:\Program Files\Microsoft SDKs\Windows\v7.1\Samples\multimedia\directshow\vmr9\vmr9allocator). My version differs somewhat in that the DirectX device used by the allocator class isn't owned by that class but is passed in from the rendering part of the application (actually from the Angle GL layer, to be precise, but this isn't particularly relevant for this discussion). However the setup of the filter is all the same.
Everything works fine apart from the following scenario. If the application loses focus then the fullscreen DirectX device is lost. In this situation I want to stop the video and terminate the device, enter windowed mode and create a new DirectX device for rendering in windowed mode. I can achieve this, but I seem to leave DirectShow in some unstable state. The problem manifests itself when I try to subsequently render video in windowed mode. Rather than using the DX device in this case, I do this by creating a subwindow of my main window for DirectShow to render to. So in particular for the DX rendering methodology I call
SetRenderingMode(VMR9Mode_Renderless)
and for the windowed version I call
SetRenderingMode(VMRMode_Windowless)
on the IVMRFilterConfig interface.
What I see (after the fullscreen device loss during video) is that the windowed video will not render to my manually specified window. Instead it insists on opening its own parentless window, exactly as if
SetRenderingMode(VMRMode_Windowed)
had been called. If I debug the code then I see that SetRenderingMode(VMRMode_Windowless) returns an "unknown error"!
It'll be difficult for me to get advice here on what's wrong with my code, as there's a lot of it and posting it all is probably not helpful. So what I'd like to know is what the correct way to deal with loss of device during video rendering is. Maybe then I can pinpoint what's going wrong with my code. With reference to the aforementioned DX sample, the key problem occurs in the CAllocator::PresentImage function:
HRESULT CAllocator::PresentImage(
    /* [in] */ DWORD_PTR dwUserID,
    /* [in] */ VMR9PresentationInfo *lpPresInfo)
{
    HRESULT hr;
    CAutoLock Lock(&m_ObjectLock);

    if (NeedToHandleDisplayChange())
    {
    }

    hr = PresentHelper(lpPresInfo);

    if (hr == D3DERR_DEVICELOST)
    {
        if (m_D3DDev->TestCooperativeLevel() == D3DERR_DEVICENOTRESET)
        {
            DeleteSurfaces();
            FAIL_RET(CreateDevice());

            HMONITOR hMonitor = m_D3D->GetAdapterMonitor(D3DADAPTER_DEFAULT);
            FAIL_RET(m_lpIVMRSurfAllocNotify->ChangeD3DDevice(m_D3DDev, hMonitor));
        }
        hr = S_OK;
    }
    return hr;
}
This function is called every frame on a thread which is managed by DirectShow and which is started when the video is played. The example code indicates that you'd recreate the device here. Firstly, this doesn't make any sense to me, as you're only supposed to call create/reset/TestCooperativeLevel on the window message thread for DX9, so this breaks that usage rule! Secondly, I can't actually do this anyway, as the DX device is provided externally, so we can't reset it. However, I can't find any sensible way to tell the system not to continue rendering. I can do nothing and return S_OK or some failure code, but the problem persists.
So finally, the question! Does anyone know what the correct approach is to handling this situation, i.e. on device lost, just stop the video?
N.B. I'm not ruling out some other problem deep in my code somewhere. But if I at least know what the correct approach to doing what I want is then I can hopefully rule in/out at least one part of the code.

How can I draw inside two separate 3D windows within the same application on Windows using OpenGL?

I am implementing a plug-in inside a 3rd party program in C++ on Windows.
The 3rd party program has a window that displays 3D graphics using OpenGL.
However I need the plug-in to create another window that also displays 3D graphics using OpenGL.
Do I need to create a new OpenGL rendering context for my window or is there some way that I can "reuse" the OpenGL rendering context used by the 3rd party program?
I assumed that I had to create a new OpenGL rendering context and tried the following:
// create a rendering context
hglrc = wglCreateContext (hdc);
// make it the calling thread's current rendering context
wglMakeCurrent (hdc, hglrc);
However the last function failed.
Reading the documentation of wglMakeCurrent I notice that
A thread can have one current rendering context. A process can have multiple rendering contexts by means of multithreading.
Does this mean that my window needs to run in a separate thread from the 3rd party program?
You didn't post the error code generated by wglMakeCurrent(), so I won't guess at the reason. It's not the binding itself, however. The sentence 'A thread can have one current rendering context' means that a new context will 'replace' the old one and become current. I don't know why you are trying to set two contexts as current (or run another thread), but it's not the way to go. Avoid multithreading in rendering unless it's absolutely necessary.
So, answering your question:
Yes, you CAN 'reuse' an OpenGL rendering context.
Why, you may ask? A rendering context is created for a specific device context (HDC), which is the exclusive property of each window (HWND)! How is this possible, then?
Well, it seems impossible because of the function prototypes:
HWND my_window = CreateWindow(...);
HDC my_dc = GetDC(my_window);
//Setup pixel format for 'my_dc'...
HGLRC my_rc = wglCreateContext(my_dc);
wglMakeCurrent(my_dc, my_rc);
This really makes you think that the rendering context is bound to this specific device context and valid only for it. But it's not.
The critical part is the comment (setup pixel format). A rendering context is created for a specific CLASS of DCs, to be more precise: for DCs with the same pixel format. So the code below is perfectly valid:
//window_1 = main window, window_2 = your window
HDC dc_1 = GetDC(window_1);
Set_pixel_format_for_dc_1(); //Usual stuff
HGLRC rc = wglCreateContext(dc_1);
wglMakeCurrent(dc_1, rc);
ultra_super_draw();
//.....
HDC dc_2 = GetDC(window_2);
//Get dc_1's PF to make sure it's compatible with rc.
int pf_index = GetPixelFormat(dc_1);
PIXELFORMATDESCRIPTOR pfd;
ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
DescribePixelFormat(dc_1, pf_index, sizeof(PIXELFORMATDESCRIPTOR), &pfd);
SetPixelFormat(dc_2, pf_index, &pfd);
wglMakeCurrent(dc_2, rc);
another_awesome_render();
wglMakeCurrent(NULL, NULL);
If you are still not convinced, MSDN:
wglMakeCurrent(hdc, hglrc): The hdc parameter must refer to a drawing surface supported by OpenGL. It need not be the same hdc that was passed to wglCreateContext when hglrc was created, but it must be on the same device and have the same pixel format.
I guess you are already familiar with these calls. Now, I don't know what conditions your rendering must meet, but without additional requirements, I don't see any difficulties from this point:
HDC my_dc = Create_my_DC();
//...
void my_new_render()
{
    //Probably you want to save the current binding:
    HDC current_dc = wglGetCurrentDC();
    HGLRC current_context = wglGetCurrentContext();

    wglMakeCurrent(my_dc, current_context);
    MyUltraSuperRender(...);

    wglMakeCurrent(current_dc, current_context);
}
Hope this helps :)
First things first: you actually should create a separate OpenGL context for your plugin, for the simple reason that it gives you a separate state space that doesn't interfere with the main program's OpenGL context.
You misunderstood the part about multiple rendering contexts, though. It's perfectly possible to have an arbitrary number of OpenGL contexts in a process, but each thread of the process can bind only one context at a time. That binding also includes the window DC the context is bound to. It is, however, perfectly legal to change a context binding at any time: either you change the window a given context is bound to, or you switch the context, or you do both at the same time.
So in your situation I suggest you create a custom context for your plug-in that you use for all the windows your plug-in creates.
That your simple context "creation" code fails has one simple reason: your window will most likely not have a pixel format descriptor set.
I suggest you use the following method to create your new windows and contexts:
/* first get hold of the HDC/HGLRC of the parent */
HDC parentDC = wglGetCurrentDC();
HGLRC parentRC = wglGetCurrentContext();
int pixelformatID = GetPixelFormat(parentDC);

/* we use the same PFD as the parent */
PIXELFORMATDESCRIPTOR pixelformat;
memset(&pixelformat, 0, sizeof(pixelformat));
DescribePixelFormat(parentDC, pixelformatID, sizeof(pixelformat), &pixelformat);

/* create a window and set its pixel format to the parent one's */
HWND myWND = create_my_window();
HDC myDC = GetDC(myWND);
SetPixelFormat(myDC, pixelformatID, &pixelformat);

/* finally we can create a rendering context;
 * it doesn't matter if we create it against
 * the parent or our own DC.
 */
HGLRC myRC = wglCreateContext(myDC);
/* we're done here... */
Now whenever your plugin wants to render something it should bind its own context, do its thing and bind the context that was bound before:
HDC prevDC = wglGetCurrentDC();
HGLRC prevRC = wglGetCurrentContext();

wglMakeCurrent(myDC, myRC);
/* do OpenGL stuff */
wglMakeCurrent(prevDC, prevRC);

Problem with CreateDC and wglMakeCurrent

PIXELFORMATDESCRIPTOR pfd = { /* otherwise fine for a window with 32-bit color */ };
HDC hDC = CreateDC(TEXT("Display"),NULL,NULL,NULL); // always OK
int ipf = ChoosePixelFormat(hDC,&pfd); // always OK
SetPixelFormat(hDC,ipf,&pfd); // always OK
HGLRC hRC = wglCreateContext(hDC); // always OK
wglMakeCurrent(hDC,hRC); // ! read error: 0xbaadf039 (debug, obviously)
But the following works with the same hRC:
wglMakeCurrent(hSomeWindowDC,hRC);
The above is part of an OpenGL 3.0+ initialization system for Windows.
I am trying to avoid creating a dummy window for the sake of aesthetics.
I have never used CreateDC before, so perhaps I've missed something.
edit: hSomeWindowDC would point to a window DC with an appropriate pixel format.
More info:
I wish to create a window-independent OpenGL rendering context.
Due to the answer selected, it seems I need to use a dummy window (not really a big deal, just a handle to pass around all the same).
Why I would want to do this: Since it is possible to use the same rendering context for multiple windows with the same pixel format in the same thread, it is possible to create a rendering context (really, just a container for GL-related objects) that is independent of a particular window. In this way, one can create a clean separation between the graphics and UI initializations.
The purpose of the context initially isn't for rendering (although I believe one could render into textures using it). If one wanted to change the contents of a buffer within a particular context, the desired context object itself would just need to be made current (since it's carrying the dummy window around with it, this is possible). Rendering into a window is simple: as implied by the above, the window's DC only needs to have the same pixel format. Simply make the rendering context and the window's DC current, and render.
Please note that, at the time of this writing, this idea is still in testing. I will update this post should this change (or if I can remember :P).
I've got a dormant brain cell from reading Petzold 15 years ago that just sprang back to life. The DC from CreateDC() is restricted. Good for getting info about the display device, measurement, that sort of stuff. Not good to use as a regular painting DC. You almost certainly need GetDC().
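For what it's worth, a minimal sketch of that dummy-window route (the names here are made up for illustration; the window is simply never shown):

// invisible 1x1 window whose DC we only use for the pixel format and the context
HWND dummyWnd = CreateWindow(TEXT("STATIC"), TEXT(""), WS_OVERLAPPED,
                             0, 0, 1, 1, NULL, NULL, GetModuleHandle(NULL), NULL);
HDC dummyDC = GetDC(dummyWnd);   // a real window DC, unlike CreateDC(TEXT("Display"), ...)

PIXELFORMATDESCRIPTOR pfd = {0};
pfd.nSize      = sizeof(pfd);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;

int ipf = ChoosePixelFormat(dummyDC, &pfd);
SetPixelFormat(dummyDC, ipf, &pfd);

HGLRC hRC = wglCreateContext(dummyDC);
wglMakeCurrent(dummyDC, hRC);    // succeeds, because the DC belongs to a real window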
My current OpenGL 3+ initialization routine doesn't require a dummy window. You can simply attempt to make a second RC and make it current using the DC of the real window. Take a look at the OpenGL wiki Tutorial: OpenGL 3.1 The First Triangle (C++/Win)
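A rough sketch of that approach, assuming a window DC (hWindowDC) that already has a pixel format set; the constants come from the WGL_ARB_create_context extension (normally pulled in via wglext.h) and are defined here only so the snippet stands alone:

#define WGL_CONTEXT_MAJOR_VERSION_ARB 0x2091
#define WGL_CONTEXT_MINOR_VERSION_ARB 0x2092

typedef HGLRC (WINAPI *PFNWGLCREATECONTEXTATTRIBSARBPROC)(HDC, HGLRC, const int *);

// 1. create and bind a legacy context so wglGetProcAddress works
HGLRC legacyRC = wglCreateContext(hWindowDC);
wglMakeCurrent(hWindowDC, legacyRC);

// 2. load the extension entry point and ask for a 3.1 context
PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
    (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");
const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 1,
    0
};
HGLRC modernRC = wglCreateContextAttribsARB(hWindowDC, 0, attribs);

// 3. replace the legacy binding with the 3.1 context and discard the old one
wglMakeCurrent(hWindowDC, modernRC);
wglDeleteContext(legacyRC);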

When using SDL_SetVideoMode, is there a way to get the internal SDL_Window pointer or ID?

If you create a window by using SDL_SetVideoMode(), you are returned a surface, not a window handle. Is there a way to get the SDL_Window handle? I know there is a SDL_GetWindowFromID function, but I'm also not sure how to get the ID, other than the SDL_GetWindowID function, which would require me to already have the window handle.
Any suggestions? Note that it is very important that I maintain cross platform portability, so I prefer to stick with built in SDL functionality if at all possible.
If it helps any, I'm trying to get and set the window position and window size, and those functions require a window handle.
Thanks!
EDIT: I should mention also that I am changing video modes at the user's request, so I cannot just use the default ID of 1, since this ID changes every time I call SDL_SetVideoMode().
I had the same problem with SDL-1.2.15 for Windows, but it was solved by GetActiveWindow.
You can get the SDL window handle like this:
...
screen = SDL_SetVideoMode(w, h, 0, flags);
...
HWND hnd = GetActiveWindow();
See this:
GetActiveWindow function
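Once you have the HWND, getting and setting the position and size is plain Win32 (note this part is Windows-only, unlike the rest of SDL); a rough sketch:

RECT rc;
GetWindowRect(hnd, &rc);                            // current position and size
int width  = rc.right  - rc.left;
int height = rc.bottom - rc.top;
SetWindowPos(hnd, NULL, 100, 100, width, height,    // move to (100,100), keep the size
             SWP_NOZORDER | SWP_NOACTIVATE);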
I had this exact problem - old SDL 1.2 only uses one window, so it keeps the handle to itself. Here's the method I found from reading the source code:
Include SDL_syswm.h then get the window handle using SDL_GetWMInfo
e.g. my code for getting the handle in Windows:
SDL_SysWMinfo wmInfo;
SDL_VERSION(&wmInfo.version); // SDL_GetWMInfo checks that the struct's version is filled in
SDL_GetWMInfo(&wmInfo);
HWND window = wmInfo.window;
SDL_SetVideoMode returns a surface based on the video frame buffer, not on a window (just like SDL_GetVideoSurface). You seem to assume that all surfaces correspond to windows, but that is not the case.