OK, I managed to create an OpenGL context with wglCreateContextAttribsARB, requesting version 3.2 in my attribute list (so I have initialized a 3.2 OpenGL context).
It works, but the strange thing is that when I use glBindBuffer, e.g., I still get an unresolved external symbol linker error. Shouldn't a newer context prevent this?
I'm on Windows, BTW; Linux doesn't have to deal with older and newer contexts (it directly exposes the core functions of its version).
The code:
PIXELFORMATDESCRIPTOR pfd;
HGLRC tmpRC;
int iFormat;
if (!(hDC = GetDC(hWnd)))
{
CMsgBox("Unable to create a device context. Program will now close.", "Error");
return false;
}
ZeroMemory(&pfd, sizeof(pfd));
pfd.nSize = sizeof(pfd);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = attribs->colorbits;
pfd.cDepthBits = attribs->depthbits;
pfd.iLayerType = PFD_MAIN_PLANE;
if (!(iFormat = ChoosePixelFormat(hDC, &pfd)))
{
CMsgBox("Unable to find a suitable pixel format. Program will now close.", "Error");
return false;
}
if (!SetPixelFormat(hDC, iFormat, &pfd))
{
CMsgBox("Unable to initialize the pixel formats. Program will now close.", "Error");
return false;
}
if (!(tmpRC=wglCreateContext(hDC)))
{
CMsgBox("Unable to create a rendering context. Program will now close.", "Error");
return false;
}
if (!wglMakeCurrent(hDC, tmpRC))
{
CMsgBox("Unable to activate the rendering context. Program will now close.", "Error");
return false;
}
strncpy(vers, (char*)glGetString(GL_VERSION), 3);
vers[3] = '\0';
if (sscanf(vers, "%i.%i", &glv, &glsubv) != 2)
{
CMsgBox("Unable to retrieve the OpenGL version. Program will now close.", "Error");
return false;
}
hRC = NULL;
if (glv > 2) // Have OpenGL 3.+ support
{
if ((wglCreateContextAttribsARB = (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB")))
{
int attribs[] = { WGL_CONTEXT_MAJOR_VERSION_ARB, glv, WGL_CONTEXT_MINOR_VERSION_ARB, glsubv, WGL_CONTEXT_FLAGS_ARB, 0, 0 };
hRC = wglCreateContextAttribsARB(hDC, 0, attribs);
if (hRC) // only drop the temporary context once the new one has actually been created
{
wglMakeCurrent(NULL, NULL);
wglDeleteContext(tmpRC);
if (!wglMakeCurrent(hDC, hRC))
{
CMsgBox("Unable to activate the rendering context. Program will now close.", "Error");
return false;
}
moderncontext = true;
}
}
}
if (hRC == NULL)
{
hRC = tmpRC;
moderncontext = false;
}
You will still need to
Declare function pointers with the appropriate names and function signatures.
Fetch the correct memory locations for those pointers with wglGetProcAddress
#define the actual OpenGL API names to the corresponding function pointers.
That's right, the OpenGL API functions are actually function pointers.
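For a single entry point such as glBindBuffer, a minimal sketch of that manual approach could look like this (my own illustration; it assumes the PFNGLBINDBUFFERPROC typedef from glext.h is available and that the loader is called while a context is current):
#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>

static PFNGLBINDBUFFERPROC pglBindBuffer = NULL;

// Fetch the entry point; must be called while an OpenGL context is current.
static bool LoadGLPointers()
{
    pglBindBuffer = (PFNGLBINDBUFFERPROC)wglGetProcAddress("glBindBuffer");
    return pglBindBuffer != NULL;
}

// Map the real API name onto the loaded pointer so existing GL code compiles unchanged.
#define glBindBuffer pglBindBuffer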
If you don't have the time and patience to do this, then it is advisable to use an OpenGL loader library like GL3W or GLEW. This will also save you the burden of first creating your dummy context and then the "real" context.
Also see the OpenGL wiki page on loading function pointers.
Related
So I want to draw an overlay over another window, but I'm getting no real runtime error; the Visual Studio debugging tools tell me that the result of
HRESULT res = object->CreateDeviceEx(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWND, D3DCREATE_HARDWARE_VERTEXPROCESSING, &params, NULL, &device);
is 0x8876086c. So here are the snippets of my code that are important and lead to this error (D3DERR_INVALIDCALL), which leaves the device as a null pointer, which means I can't do anything with it.
I couldn't really figure out what causes this, as I pretty much followed the documentation.
int Paint::init(HWND hWND) {
if (FAILED(Direct3DCreate9Ex(D3D_SDK_VERSION, &object))) {
exit(1);
}
ZeroMemory(&params, sizeof(params));
params.BackBufferWidth = width;
params.BackBufferHeight = height;
params.Windowed = true;
params.hDeviceWindow = hWND;
params.MultiSampleQuality = D3DMULTISAMPLE_NONE;
params.BackBufferFormat = D3DFMT_A8R8G8B8;
params.EnableAutoDepthStencil = TRUE;
params.AutoDepthStencilFormat = D3DFMT_D16;
HRESULT res = object->CreateDeviceEx(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWND, D3DCREATE_HARDWARE_VERTEXPROCESSING, &params, NULL, &device);
and in the header file:
class Paint {
private:
IDirect3D9Ex* object = NULL;
IDirect3DDevice9Ex* device = NULL;
DWORD behaviorFlags = D3DCREATE_HARDWARE_VERTEXPROCESSING;
D3DPRESENT_PARAMETERS params;
ID3DXFont* font = 0;
HWND TargetHWND;
int width, height;
int init(HWND hWND);
};
D3DPRESENT_PARAMETERS params = {};
// Use Win32 BOOL "TRUE" instead of C++ "true"
params.Windowed = TRUE;
params.hDeviceWindow = m_window;
// params.BackBufferWidth, BackBufferHeight are ignored for Windowed = TRUE
// For Windowed = TRUE, use params.BackBufferFormat = D3DFMT_UNKNOWN, which is zero.
// For params.BackBufferCount zero is assumed to be 1, but best practice
// would be to set it
params.BackBufferCount = 1;
// You used D3DMULTISAMPLE_NONE for the MultiSampleQuality instead of MultiSampleType.
// It's all zero anyhow.
params.EnableAutoDepthStencil = TRUE;
params.AutoDepthStencilFormat = D3DFMT_D16;
// --->>> This is the actual bug: there is no valid SwapEffect that has a value of zero <<<---
params.SwapEffect = D3DSWAPEFFECT_DISCARD;
You are making the assumption that the Direct3D9 device supports D3DCREATE_HARDWARE_VERTEXPROCESSING, but you haven't validated it actually supports it. That said, D3DCREATE_SOFTWARE_VERTEXPROCESSING has known performance issues on Windows 10 so you should probably just require HW anyhow.
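For illustration, a hedged sketch (reusing the poster's object pointer) of checking the device caps up front and falling back when hardware T&L isn't reported:
// Pick hardware vertex processing only if the HAL device actually reports HW T&L.
D3DCAPS9 caps;
DWORD vertexProcessing = D3DCREATE_SOFTWARE_VERTEXPROCESSING;
if (SUCCEEDED(object->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)) &&
    (caps.DevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT))
{
    vertexProcessing = D3DCREATE_HARDWARE_VERTEXPROCESSING;
}
// Then pass vertexProcessing as the behavior flags to CreateDeviceEx.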
You should not be using legacy Direct3D9 or Direct3D9Ex for new projects. It's mostly emulated on newer versions of Windows, has lots of strange behaviors, and is almost 20 years old at this point. There's no support for the Direct3D 9 debug device on Windows 8.x or Windows 10. You should consider Direct3D 11 as a much better starting place for developers new to DirectX.
I'm building an application that is used for taking and sharing screenshots in real time between multiple clients over network.
I'm using the MS Desktop Duplication API to get the image data and it's working smoothly except in some edge cases.
I have been using four games as test applications in order to test how the screencapture behaves in fullscreen and they are Heroes of the Storm, Rainbow Six Siege, Counter Strike and PlayerUnknown's Battlegrounds.
On my own machine, which has a GeForce GTX 1070 graphics card, everything works fine both in and out of fullscreen for all test applications. On two other machines that run a GeForce GTX 980, however, all test applications except PUBG work. When PUBG is running in fullscreen, my desktop duplication instead produces an all-black image and I can't figure out why, as the
Desktop Duplication Sample works fine on all test machines and with all test applications.
What I'm doing is basically the same as the sample except I'm extracting the pixel data and creating my own SDL(OpenGL) texture from that data instead of using the acquired ID3D11Texture2D directly.
Why is PUBG in fullscreen on GTX 980 the only test case that fails?
Is there something wrong with the way I'm getting the frame, handling the "DXGI_ERROR_ACCESS_LOST" error or how I'm copying the data from the GPU?
Declarations:
IDXGIOutputDuplication* m_OutputDup = nullptr;
Microsoft::WRL::ComPtr<ID3D11Device> m_Device = nullptr;
ID3D11DeviceContext* m_DeviceContext = nullptr;
D3D11_TEXTURE2D_DESC m_TextureDesc;
Initialization:
bool InitializeScreenCapture()
{
HRESULT result = E_FAIL;
if (!m_Device)
{
D3D_FEATURE_LEVEL featureLevels = D3D_FEATURE_LEVEL_11_0;
D3D_FEATURE_LEVEL featureLevel;
result = D3D11CreateDevice(
nullptr,
D3D_DRIVER_TYPE_HARDWARE,
nullptr,
0,
&featureLevels,
1,
D3D11_SDK_VERSION,
&m_Device,
&featureLevel,
&m_DeviceContext);
if (FAILED(result) || !m_Device)
{
Log("Failed to create D3DDevice);
return false;
}
}
// Get DXGI device
ComPtr<IDXGIDevice> DxgiDevice;
result = m_Device.As(&DxgiDevice);
if (FAILED(result))
{
Log("Failed to get DXGI device);
return false;
}
// Get DXGI adapter
ComPtr<IDXGIAdapter> DxgiAdapter;
result = DxgiDevice->GetParent(__uuidof(IDXGIAdapter), &DxgiAdapter);
if (FAILED(result))
{
Log("Failed to get DXGI adapter);
return false;
}
DxgiDevice.Reset();
// Get output
UINT Output = 0;
ComPtr<IDXGIOutput> DxgiOutput;
result = DxgiAdapter->EnumOutputs(Output, &DxgiOutput);
if (FAILED(result))
{
Log("Failed to get DXGI output);
return false;
}
DxgiAdapter.Reset();
ComPtr<IDXGIOutput1> DxgiOutput1;
result = DxgiOutput.As(&DxgiOutput1);
if (FAILED(result))
{
Log("Failed to get DXGI output1);
return false;
}
DxgiOutput.Reset();
// Create desktop duplication
result = DxgiOutput1->DuplicateOutput(m_Device.Get(), &m_OutputDup);
if (FAILED(result))
{
Log("Failed to create output duplication);
return false;
}
DxgiOutput1.Reset();
DXGI_OUTDUPL_DESC outputDupDesc;
m_OutputDup->GetDesc(&outputDupDesc);
// Create CPU access texture description
m_TextureDesc.Width = outputDupDesc.ModeDesc.Width;
m_TextureDesc.Height = outputDupDesc.ModeDesc.Height;
m_TextureDesc.Format = outputDupDesc.ModeDesc.Format;
m_TextureDesc.ArraySize = 1;
m_TextureDesc.BindFlags = 0;
m_TextureDesc.MiscFlags = 0;
m_TextureDesc.SampleDesc.Count = 1;
m_TextureDesc.SampleDesc.Quality = 0;
m_TextureDesc.MipLevels = 1;
m_TextureDesc.CPUAccessFlags = D3D11_CPU_ACCESS_FLAG::D3D11_CPU_ACCESS_READ;
m_TextureDesc.Usage = D3D11_USAGE::D3D11_USAGE_STAGING;
return true;
}
Screen capture:
bool TeamSystem::CaptureScreen()
{
if (!m_ScreenCaptureInitialized)
{
Log("Attempted to capture screen without ScreenCapture being initialized");
return false;
}
HRESULT result = E_FAIL;
DXGI_OUTDUPL_FRAME_INFO frameInfo;
ComPtr<IDXGIResource> desktopResource = nullptr;
ID3D11Texture2D* copyTexture = nullptr;
ComPtr<ID3D11Resource> image;
int32_t attemptCounter = 0;
DWORD startTicks = GetTickCount();
do // Loop until we get a non empty frame
{
m_OutputDup->ReleaseFrame();
result = m_OutputDup->AcquireNextFrame(1000, &frameInfo, &desktopResource);
if (FAILED(result))
{
if (result == DXGI_ERROR_ACCESS_LOST) // Access may be lost when switching to/from fullscreen mode (any application); when this happens we need to reacquire the output duplication
{
m_OutputDup->ReleaseFrame();
m_OutputDup->Release();
m_OutputDup = nullptr;
m_ScreenCaptureInitialized = InitializeScreenCapture();
if (m_ScreenCaptureInitialized)
{
result = m_OutputDup->AcquireNextFrame(1000, &frameInfo, &desktopResource);
}
else
{
Log("Failed to reinitialize screen capture after access was lost");
return false;
}
}
if (FAILED(result))
{
Log("Failed to acquire next frame);
return false;
}
}
attemptCounter++;
if (GetTickCount() - startTicks > 3000)
{
Log("Screencapture timed out after " << attemptCounter << " attempts");
return false;
}
} while(frameInfo.TotalMetadataBufferSize <= 0 || frameInfo.LastPresentTime.QuadPart <= 0); // This is how you wait for an image containing image data according to SO (https://stackoverflow.com/questions/49481467/acquirenextframe-not-working-desktop-duplication-api-d3d11)
Log("ScreenCapture succeeded after " << attemptCounter << " attempt(s)");
// Get the ID3D11Texture2D interface from the acquired desktop resource
result = desktopResource.CopyTo(&copyTexture);
desktopResource.Reset();
if (FAILED(result))
{
Log("Failed to acquire texture from resource);
m_OutputDup->ReleaseFrame();
return false;
}
// Copy image into a CPU access texture
ID3D11Texture2D* stagingTexture = nullptr;
result = m_Device->CreateTexture2D(&m_TextureDesc, nullptr, &stagingTexture);
if (FAILED(result) || stagingTexture == nullptr)
{
Log("Failed to copy image data to access texture);
m_OutputDup->ReleaseFrame();
return false;
}
D3D11_MAPPED_SUBRESOURCE mappedResource;
m_DeviceContext->CopyResource(stagingTexture, copyTexture);
m_DeviceContext->Map(stagingTexture, 0, D3D11_MAP_READ, 0, &mappedResource);
void* copy = malloc(m_TextureDesc.Width * m_TextureDesc.Height * 4);
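// Note: this assumes mappedResource.RowPitch == Width * 4; a robust copy should go row by row using RowPitch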
memcpy(copy, mappedResource.pData, m_TextureDesc.Width * m_TextureDesc.Height * 4);
m_DeviceContext->Unmap(stagingTexture, 0);
stagingTexture->Release();
m_OutputDup->ReleaseFrame();
// Create a new SDL texture from the data in the copy variable
free(copy);
return true;
}
Some notes:
I have modified my original code to make it more readable so some cleanup and logging in the error handling is missing.
None of the error or timeout cases (except DXGI_ERROR_ACCESS_LOST) trigger in any testing scenario.
The "attemptCounter" never goes above 2 in any testing scenario.
The test cases are limited since I don't have access to a computer which produces the black image case.
The culprit was CopyResource() and how I created the CPU access texture.
CopyResource() returns void and that is why I didn't look into it before; I didn't think it could fail in any significant way, since I expected it to return bool or HRESULT if that were the case.
The documentation of CopyResource() does, however, disclose a couple of failure cases:
This method is unusual in that it causes the GPU to perform the copy operation (similar to a memcpy by the CPU). As a result, it has a few restrictions designed for improving performance. For instance, the source and destination resources:
Must be different resources.
Must be the same type.
Must have identical dimensions (including width, height, depth, and size as appropriate).
Must have compatible DXGI formats, which means the formats must be identical or at least from the same type group.
Can't be currently mapped.
Since the initialization code runs before the test application enters fullscreen, the CPU access texture description is set up using the desktop resolution, format, etc. This caused CopyResource() to fail silently and simply not write anything to stagingTexture in the test cases where a non-native resolution was used for the test application.
In conclusion, I just moved the m_TextureDesc setup into CaptureScreen() and used the description of copyTexture for the fields I didn't want to differ between the two textures.
// Create CPU access texture
D3D11_TEXTURE2D_DESC copyTextureDesc;
copyTexture->GetDesc(&copyTextureDesc);
D3D11_TEXTURE2D_DESC textureDesc;
textureDesc.Width = copyTextureDesc.Width;
textureDesc.Height = copyTextureDesc.Height;
textureDesc.Format = copyTextureDesc.Format;
textureDesc.ArraySize = copyTextureDesc.ArraySize;
textureDesc.BindFlags = 0;
textureDesc.MiscFlags = 0;
textureDesc.SampleDesc = copyTextureDesc.SampleDesc;
textureDesc.MipLevels = copyTextureDesc.MipLevels;
textureDesc.CPUAccessFlags = D3D11_CPU_ACCESS_FLAG::D3D11_CPU_ACCESS_READ;
textureDesc.Usage = D3D11_USAGE::D3D11_USAGE_STAGING;
ID3D11Texture2D* stagingTexture = nullptr;
result = m_Device->CreateTexture2D(&textureDesc, nullptr, &stagingTexture);
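As an extra guard (my own sketch, not part of the original fix), the two descriptions could be compared before calling CopyResource(), since the call itself reports nothing:
// Sanity-check that source and staging textures are compatible for CopyResource().
D3D11_TEXTURE2D_DESC srcDesc, dstDesc;
copyTexture->GetDesc(&srcDesc);
stagingTexture->GetDesc(&dstDesc);
if (srcDesc.Width != dstDesc.Width || srcDesc.Height != dstDesc.Height || srcDesc.Format != dstDesc.Format)
{
    Log("CopyResource would fail silently: texture descriptions do not match");
    stagingTexture->Release();
    m_OutputDup->ReleaseFrame();
    return false;
}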
While this solved the issues I was having, I still don't know why the reinitialization in the handling of DXGI_ERROR_ACCESS_LOST didn't resolve the issue anyway. Does the DXGI_OUTDUPL_DESC not use the same dimensions and format as the copyTexture?
I also don't know why this didn't fail in the same way on computers with newer graphics cards. I did, however, notice that those machines were able to capture fullscreen applications using a simple BitBlt() of the desktop surface.
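For reference, a minimal sketch of that kind of BitBlt() capture (my own illustration; error handling omitted):
// Grab the primary desktop into a memory DC via GDI.
HDC screenDC = GetDC(NULL);
HDC memDC = CreateCompatibleDC(screenDC);
int w = GetSystemMetrics(SM_CXSCREEN);
int h = GetSystemMetrics(SM_CYSCREEN);
HBITMAP bitmap = CreateCompatibleBitmap(screenDC, w, h);
HGDIOBJ old = SelectObject(memDC, bitmap);
BitBlt(memDC, 0, 0, w, h, screenDC, 0, 0, SRCCOPY);
// ... read the pixels out of 'bitmap' (e.g. with GetDIBits) ...
SelectObject(memDC, old);
DeleteObject(bitmap);
DeleteDC(memDC);
ReleaseDC(NULL, screenDC);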
I have another, hopefully not THAT stupid, question. I wanted to clean up some old code I updated from OpenGL 1.3 to 3.1, so as a final step I changed the context creation to something like the following (own code below):
https://www.opengl.org/wiki/Tutorial:_OpenGL_3.1_The_First_Triangle_%28C%2B%2B/Win%29
I am using GLEW 1.13.0, and if I only do glewInit in addition to the old context creation, everything works fine; but after switching to a pure 3.1 context:
glHint(GL_PERSPECTIVE_CORRECTION_HINT,GL_NICEST); is giving me GL_INVALID_ENUM
glAlphaFunc( GL_GREATER, 0.5 ); is giving me GL_INVALID_OPERATION
glDisable( GL_ALPHA_TEST ); is giving me GL_INVALID_ENUM
glEnable(GL_NORMALIZE); is giving me GL_INVALID_ENUM
Now, from what I read, none of these is deprecated in OpenGL 3.1 - what did I miss?
It currently looks like this:
hWnd = (HWND)GetWindow();
hDC = GetDC( hWnd );
INT DesiredColorBits = 32;
INT DesiredStencilBits = 8;
INT DesiredDepthBits = 24;
PIXELFORMATDESCRIPTOR pfd;
memset(&pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = DesiredColorBits;
pfd.cDepthBits = DesiredDepthBits;
pfd.iLayerType = PFD_MAIN_PLANE;
pfd.cStencilBits = DesiredStencilBits;
INT nPixelFormat = ChoosePixelFormat( hDC, &pfd );
printf("Using pixel format %i", nPixelFormat);
if (!SetPixelFormat( hDC, nPixelFormat, &pfd ))
return 0;
hRC = wglCreateContext( hDC );
wglMakeCurrent(hDC, hRC);
//init glew
glewExperimental = GL_TRUE;
GLenum err = glewInit();
if (GLEW_OK != err)
return 0;
INT MajorVersion=3;
INT MinorVersion=1;
if (WGLEW_ARB_create_context && WGLEW_ARB_pixel_format)
{
memset(&pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize= sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DOUBLEBUFFER | PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 24;
pfd.iLayerType = PFD_MAIN_PLANE;
const INT iPixelFormatAttribList[] =
{
WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
WGL_COLOR_BITS_ARB, DesiredColorBits,
WGL_DEPTH_BITS_ARB, DesiredDepthBits,
WGL_STENCIL_BITS_ARB, DesiredStencilBits,
0 // End of attributes list
};
INT iContextAttribs[] =
{
WGL_CONTEXT_MAJOR_VERSION_ARB, MajorVersion,
WGL_CONTEXT_MINOR_VERSION_ARB, MinorVersion,
WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
0 // End of attributes list
};
INT iPixelFormat, iNumFormats;
wglChoosePixelFormatARB(hDC, iPixelFormatAttribList, NULL, 1, &iPixelFormat, (UINT*)&iNumFormats);
// pfd oldstyle crap...
if (!SetPixelFormat(hDC, iPixelFormat, &pfd))
return 0;
hRC = wglCreateContextAttribsARB(hDC, 0, iContextAttribs);
}
if(hRC)
{
printf("GL_VENDOR : %s", glGetString(GL_VENDOR));
printf("GL_RENDERER : %s", glGetString(GL_RENDERER));
printf("GL_VERSION : %s", glGetString(GL_VERSION));
printf("GLEW Version : %s", glewGetString(GLEW_VERSION));
wglMakeCurrent(hDC, hRC);
}
else
return 0;
I stripped out the code, so sorry if I missed something, but the context is initialized and I am getting:
Using pixel format 9
GL_VENDOR : NVIDIA Corporation
GL_RENDERER : GeForce GT 740M/PCIe/SSE2
GL_VERSION : 4.4.0
GLEW Version : 1.13.0
The first part of the code initializes an old-style context in order to get GLEW initialized, and I am a bit unsure about the PFD part of the 3.1 creation (although many examples show it like this). Nevertheless, I tried a few different variants from different examples and tutorials, and it always resulted in GL_INVALID_OPERATION and GL_INVALID_ENUM when trying to set the states above.
GL_PERSPECTIVE_CORRECTION_HINT isn't in core anymore, nor was it in 3.1. Refer to the OpenGL wiki for details, but the allowed enums in 3.1 (which stay the same up to 4.5) are:
GL_LINE_SMOOTH_HINT
GL_POLYGON_SMOOTH_HINT
GL_TEXTURE_COMPRESSION_HINT
GL_FRAGMENT_SHADER_DERIVATIVE_HINT
That being said, don't bother with creating 3.1 contexts. If you can, go for 4.4/4.5; if you want to support the previous generation, 3.3 is the reasonable minimum.
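As an illustration (my own sketch, reusing the hDC and wglCreateContextAttribsARB from the question), the attribute list for a 3.3 core-profile context could look like this; use WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB instead if you still rely on the fixed-function calls listed above:
// Request a 3.3 core profile instead of a forward-compatible 3.1 context.
const INT contextAttribs[] =
{
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 3,
    WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
    0 // End of attributes list
};
HGLRC coreRC = wglCreateContextAttribsARB(hDC, 0, contextAttribs);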
I've written an OpenGL application which crashes on some machines (on my own test machines it runs: Windows 8, Windows 7, Windows Vista (x86) - but on some client machines with Windows Vista (x86) it crashes). I don't know yet why exactly, but I stripped the application down to an empty skeleton which just does glClear(). Then the application at least runs without crashing (the OpenGL context can be created, GLEW can be loaded), but the screen is not cleared to the color specified with glClearColor. I suspect there is some issue either in my PIXELFORMATDESCRIPTOR or that SwapBuffers doesn't work as expected there.
My code (I left out the window creation and main() for simplicity):
hdc = GetDC(hWnd);
int pf;
PIXELFORMATDESCRIPTOR pfd;
memset(&pfd, 0, sizeof(pfd));
pfd.nSize = sizeof(pfd);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pf = ChoosePixelFormat(hdc, &pfd);
if (pf == 0) {
MessageBox(NULL, "ChoosePixelFormat() failed: "
"Cannot find a suitable pixel format.", "Error", MB_OK);
}
if (SetPixelFormat(hdc, pf, &pfd) == FALSE) {
MessageBox(NULL, "SetPixelFormat() failed: "
"Cannot set format specified.", "Error", MB_OK);
}
DescribePixelFormat(hdc, pf, sizeof(PIXELFORMATDESCRIPTOR), &pfd);
hglrc = wglCreateContext(hdc);
if(!wglMakeCurrent(hdc, hglrc)){
MessageBox(NULL, "wglMakeCurrent() failed: "
"Cannot make context current.", "Error", MB_OK);
}
GLenum err = glewInit();
if (GLEW_OK != err){
/* Problem: glewInit failed, something is seriously wrong. */
fprintf(stdout, "Error: %s\n", glewGetErrorString(err));
}
fprintf(stdout, "Status: Using GLEW %s\n", glewGetString(GLEW_VERSION));
This code outputs: "Status: Using GLEW 1.10.0"
My main loop then:
while(true){
timeDuration = std::chrono::duration_cast<std::chrono::duration<double> >(std::chrono::high_resolution_clock::now() - lastTime);
lastTime = std::chrono::high_resolution_clock::now();
time += timeDuration.count();
glClearColor(0.7f, 0.7f, 0.7f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
SwapBuffers(hdc);
while(PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)){
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
On my own machine I get a gray screen (as expected), but on the machines where my original program crashed, the screen is just black (yet none of the MessageBoxes appear and the output is also: "Status: Using GLEW 1.10.0"). So I can't see any evidence of an error, but the output is different and glClearColor() seems to be ignored.
Any ideas on how I could hunt down this issue further?
Well, I'm still not sure why exactly it ran on my machine before, but the issue was the line:
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL;
after changing it to:
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
it ran everywhere. I hadn't specified double buffering in the PIXELFORMATDESCRIPTOR, and that seemed to bring strange issues with it (it worked on my machine for some reason, but not on others).
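A small sketch of how this could have been caught earlier (my own addition, reusing hdc and pf from the code above): after choosing and setting the pixel format, check that the format you actually got has the flags you depend on:
// Verify that the chosen pixel format really provides double buffering.
PIXELFORMATDESCRIPTOR chosen;
memset(&chosen, 0, sizeof(chosen));
DescribePixelFormat(hdc, pf, sizeof(chosen), &chosen);
if (!(chosen.dwFlags & PFD_DOUBLEBUFFER)) {
    MessageBox(NULL, "Pixel format is not double buffered.", "Warning", MB_OK);
}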
I am trying to set up parallel multi-GPU offscreen rendering contexts. I am following the "OpenGL Insights" book, chapter 27, "Multi-GPU Rendering on NVIDIA Quadro". I also looked into the wglCreateAffinityDCNV docs but still can't pin it down.
My machine has 2 NVIDIA Quadro 4000 cards (no SLI), running on Windows 7 64-bit.
My workflow goes like this:
Create default window context using GLFW.
Map the GPU devices.
Destroy the default GLFW context.
Create a new GL context for each of the devices (currently trying only one).
Set up a boost thread for each context and make it current in that thread.
Run the rendering procedures on each thread separately. (No resource sharing.)
Everything is created without errors and runs, but once I try to read pixels from an offscreen FBO I am getting a null pointer here:
GLubyte* ptr = (GLubyte*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
Also glError returns "UNKNOWN ERROR"
I thought may be the multi-threading is the problem but the same setup gives identical result when running on single thread.
So I believe it is related to contexts creations.
Here is how I do it :
////Creating default window with GLFW here .
.....
.....
Creating offscreen contexts:
PIXELFORMATDESCRIPTOR pfd =
{
sizeof(PIXELFORMATDESCRIPTOR),
1,
PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER, //Flags
PFD_TYPE_RGBA, //The kind of framebuffer. RGBA or palette.
24, //Colordepth of the framebuffer.
0, 0, 0, 0, 0, 0,
0,
0,
0,
0, 0, 0, 0,
24, //Number of bits for the depthbuffer
8, //Number of bits for the stencilbuffer
0, //Number of Aux buffers in the framebuffer.
PFD_MAIN_PLANE,
0,
0, 0, 0
};
void glMultiContext::renderingContext::createGPUContext(GPUEnum gpuIndex){
int pf;
HGPUNV hGPU[MAX_GPU];
HGPUNV GpuMask[MAX_GPU];
UINT displayDeviceIdx;
GPU_DEVICE gpuDevice;
bool bDisplay = false, bPrimary = false;
// Get a list of the first MAX_GPU GPUs in the system
if ((gpuIndex < MAX_GPU) && wglEnumGpusNV(gpuIndex, &hGPU[gpuIndex])) {
printf("Device# %d:\n", gpuIndex);
// Now get the detailed information about this device:
// how many displays it's attached to
displayDeviceIdx = 0;
if(wglEnumGpuDevicesNV(hGPU[gpuIndex], displayDeviceIdx, &gpuDevice))
{
bPrimary |= (gpuDevice.Flags & DISPLAY_DEVICE_PRIMARY_DEVICE) != 0;
printf(" Display# %d:\n", displayDeviceIdx);
printf(" Name: %s\n", gpuDevice.DeviceName);
printf(" String: %s\n", gpuDevice.DeviceString);
if(gpuDevice.Flags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP)
{
printf(" Attached to the desktop: LEFT=%d, RIGHT=%d, TOP=%d, BOTTOM=%d\n",
gpuDevice.rcVirtualScreen.left, gpuDevice.rcVirtualScreen.right, gpuDevice.rcVirtualScreen.top, gpuDevice.rcVirtualScreen.bottom);
}
else
{
printf(" Not attached to the desktop\n");
}
// See if it's the primary GPU
if(gpuDevice.Flags & DISPLAY_DEVICE_PRIMARY_DEVICE)
{
printf(" This is the PRIMARY Display Device\n");
}
}
///======================= CREATE a CONTEXT HERE
GpuMask[0] = hGPU[gpuIndex];
GpuMask[1] = NULL;
_affDC = wglCreateAffinityDCNV(GpuMask);
if(!_affDC)
{
printf( "wglCreateAffinityDCNV failed");
}
}
printf("GPU context created");
}
glMultiContext::renderingContext *
glMultiContext::createRenderingContext(GPUEnum gpuIndex)
{
glMultiContext::renderingContext *rc;
rc = new renderingContext(gpuIndex);
_pixelFormat = ChoosePixelFormat(rc->_affDC, &pfd);
if(_pixelFormat == 0)
{
printf("failed to choose pixel format");
return NULL;
}
DescribePixelFormat(rc->_affDC, _pixelFormat, sizeof(pfd), &pfd);
if(SetPixelFormat(rc->_affDC, _pixelFormat, &pfd) == FALSE)
{
printf("failed to set pixel format");
return NULL;
}
rc->_affRC = wglCreateContext(rc->_affDC);
if(rc->_affRC == 0)
{
printf("failed to create gl render context");
return NULL;
}
return rc;
}
//Call at the end to make it current :
bool glMultiContext::makeCurrent(renderingContext *rc)
{
if(!wglMakeCurrent(rc->_affDC, rc->_affRC))
{
printf("failed to make context current");
return false;
}
return true;
}
//// init OpenGL objects and rendering here :
..........
............
As I said, I am getting no errors at any stage of device and context creation.
What am I doing wrong?
UPDATE:
Well, it seems like I figured out the bug. I call glfwTerminate() after calling wglMakeCurrent(), so it seems that glfwTerminate() also makes the new context non-current. Though it is weird, as OpenGL commands keep getting executed, so it works in a single thread.
But now, if I spawn another thread using boost threads, I am getting the initial error. Here is my thread class:
GPUThread::GPUThread(void)
{
_thread =NULL;
_mustStop=false;
_frame=0;
_rc =glMultiContext::getInstance().createRenderingContext(GPU1);
assert(_rc);
glfwTerminate(); //terminate the initial window and context
if(!glMultiContext::getInstance().makeCurrent(_rc)){
printf("failed to make current!!!");
}
// init engine here (GLEW was already initiated)
engine = new Engine(800,600,1);
}
void GPUThread::Start(){
printf("threaded view setup ok");
///init thread here :
_thread=new boost::thread(boost::ref(*this));
_thread->join();
}
void GPUThread::Stop(){
// Signal the thread to stop (thread-safe)
_mustStopMutex.lock();
_mustStop=true;
_mustStopMutex.unlock();
// Wait for the thread to finish.
if (_thread!=NULL) _thread->join();
}
// Thread function
void GPUThread::operator () ()
{
bool mustStop;
do
{
// Display the next animation frame
DisplayNextFrame();
_mustStopMutex.lock();
mustStop=_mustStop;
_mustStopMutex.unlock();
} while (mustStop==false);
}
void GPUThread::DisplayNextFrame()
{
engine->Render(); //renders frame
if(_frame == 101){
_mustStop=true;
}
}
GPUThread::~GPUThread(void)
{
delete _view;
if(_rc != 0)
{
glMultiContext::getInstance().deleteRenderingContext(_rc);
_rc = 0;
}
if(_thread!=NULL)delete _thread;
}
Finally, I solved the issues myself. The first problem was that I called glfwTerminate() after I had made the other device context current; that probably made the new context non-current too.
The second problem was my "noobiness" with boost threads. I failed to init all the rendering-related objects in the custom thread, because I called the rendering-context init procedures before starting the thread, as can be seen in the example above.