glewInit function throws error code 0x0000007F (C++) [duplicate]

I am trying to write code following this tutorial. Here is my InitializeOGL():
bool Ogl::InitializeOGL(bool vSync)
{
    cout<<"Init OpenGL"<<endl;
    int pixelFormat;
    PIXELFORMATDESCRIPTOR pixelFormatDescriptor;
    int result;
    char *vendorChar, *rendererChar;

    hDC = GetDC(hWnd);
    if(!hDC)
        return false;

    pixelFormat = ChoosePixelFormat(hDC, &pixelFormatDescriptor);
    if(pixelFormat == 0)
        return false;

    result = SetPixelFormat(hDC, pixelFormat, &pixelFormatDescriptor);
    if(result != 1)
        return false;

    HGLRC tempDeviceContext = wglCreateContext(hDC);
    wglMakeCurrent(hDC, tempDeviceContext);

    // glewExperimental = GL_TRUE;
    if(glewInit() != GLEW_OK)
        return false;

    int attribList[5] =
    {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 1, 0
    };

    hGLRC = wglCreateContextAttribsARB(hDC, 0, attribList);
    if(hGLRC != NULL)
    {
        wglMakeCurrent(NULL, NULL);
        wglDeleteContext(tempDeviceContext);
        result = wglMakeCurrent(hDC, hGLRC);
        if(result != 1)
            return false;
    }

    vendorChar = (char*)glGetString(GL_VENDOR);
    rendererChar = (char*)glGetString(GL_RENDERER);
    strcpy_s(videoCardInfo, vendorChar);
    strcat_s(videoCardInfo, "-");
    strcat_s(videoCardInfo, rendererChar);

    if(vSync)
        result = wglSwapIntervalEXT(1);
    else
        result = wglSwapIntervalEXT(0);
    if(result != 1)
        return false;

    int glVersion[2] = {-1, -1};
    glGetIntegerv(GL_MAJOR_VERSION, &glVersion[0]);
    glGetIntegerv(GL_MINOR_VERSION, &glVersion[1]);

    cout<<"Initializing OpenGL"<<endl;
    cout<<"OpenGL version"<<glVersion[0]<<"."<<glVersion[1]<<endl;
    cout<<"GPU"<<videoCardInfo<<endl;
    return 0;
}
When I try to change the context version to OpenGL 3.1, wglCreateContextAttribsARB() crashes here (its pointer is obtained correctly in LoadExtensions()). When I try to create an OpenGL 4.0 context, wglSwapIntervalEXT() crashes instead. My graphics card only supports OpenGL 3.1.
My question is: how do I successfully initialize the OpenGL context here? What do I have to do to create an OpenGL 3.1 context?

There are a couple of things that need to be mentioned here:
1. Driver Version
If your graphics card / driver only supports OpenGL 3.1, then WGL_CONTEXT_MAJOR_VERSION_ARB and friends are generally going to be undefined. Before OpenGL 3.2 introduced the core / compatibility split, context versions were not particularly meaningful.
This requires support for either WGL_ARB_create_context or WGL_ARB_create_context_profile.
2. Incorrect usage of ChoosePixelFormat and SetPixelFormat
PIXELFORMATDESCRIPTOR pixelFormatDescriptor; // <--- Uninitialized
At minimum, the Win32 API needs you to initialize the size field of this structure. In years past, the size of structures was used to determine the version of Windows that a particular piece of code was written for. These days structures like PIXELFORMATDESCRIPTOR are generally static in size because they are used by a part of Windows that is deprecated (GDI), but if you do not set the size you can still thoroughly confuse Windows. Furthermore, you need to flag your pixel format to support OpenGL to guarantee that you can use it to create an OpenGL render context.
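For reference, a properly filled-in descriptor might look something like this (a minimal sketch; the specific color/depth sizes are illustrative, the essential parts are nSize, nVersion and the PFD_DRAW_TO_WINDOW / PFD_SUPPORT_OPENGL flags):
PIXELFORMATDESCRIPTOR pfd = {};                    // zero everything first
pfd.nSize      = sizeof(PIXELFORMATDESCRIPTOR);    // required by GDI
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 24;
pfd.iLayerType = PFD_MAIN_PLANE;

int pixelFormat = ChoosePixelFormat(hDC, &pfd);
if (pixelFormat == 0 || SetPixelFormat(hDC, pixelFormat, &pfd) != TRUE)
    return false;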
Also note that once you set the pixel format for a device context on Windows, it cannot be changed. Generally, this means if you want to create a dummy render context to initialize your extensions you should also create a dummy window with a dummy pixel format. After you initialize your extensions, you can use wglChoosePixelFormatARB (...) and its associated functions to select the pixel format for your main window's device context.
This is (was) particularly important back in the days before FBOs when you wanted to implement multi-sampling. You cannot get a multi-sample pixel format using ChoosePixelFormat (...), but you need to call ChoosePixelFormat (...) to setup the extension necessary to get a multi-sample pixel format. Kind of a catch 22.
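Concretely, the dummy-context bootstrap described above might look roughly like the following sketch (error checking omitted; LoadExtensions() stands for whatever the question's code already uses to resolve wglChoosePixelFormatARB / wglCreateContextAttribsARB through wglGetProcAddress):
// Throwaway window + old-style GL context, used only to load the WGL extensions.
HWND dummyWnd = CreateWindowA("STATIC", "dummy", WS_OVERLAPPEDWINDOW,
                              0, 0, 1, 1, NULL, NULL, GetModuleHandle(NULL), NULL);
HDC dummyDC = GetDC(dummyWnd);

PIXELFORMATDESCRIPTOR pfd = {};
pfd.nSize      = sizeof(pfd);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
SetPixelFormat(dummyDC, ChoosePixelFormat(dummyDC, &pfd), &pfd);

HGLRC dummyRC = wglCreateContext(dummyDC);
wglMakeCurrent(dummyDC, dummyRC);

// With this context current, wglGetProcAddress() (and glewInit()) can resolve
// the WGL extension entry points.
LoadExtensions();

// Tear the dummy objects down, then use wglChoosePixelFormatARB() on the real
// window's DC and wglCreateContextAttribsARB() to create the final context.
wglMakeCurrent(NULL, NULL);
wglDeleteContext(dummyRC);
ReleaseDC(dummyWnd, dummyDC);
DestroyWindow(dummyWnd);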

Related

imgui is not rendering with a D3D9 Hook

OK, so basically I am trying to inject a DLL into a game for an external menu for debugging. My hook works completely fine: I can render a normal square to the screen, but when I try to render imgui, in some games DirectX just dies and in others nothing renders at all. The issue makes no sense because I've tried everything: I've switched libraries, tried different compile settings, and just started doing random things, still to no avail. The library I am using for hooking is minhook (I was using kiero, but switched to manually getting the D3D9 device while trying to figure out the issue).
My hooks work entirely fine, as I said earlier; I can render a square to the screen without issues, but I can't render imgui (and yes, I checked that it is the DX9 version of imgui). Code:
long __stdcall EndSceneHook(IDirect3DDevice9* pDevice) // Our hooked EndScene
{
    D3DRECT BarRect = { 100, 100, 200, 200 };
    pDevice->Clear(1, &BarRect, D3DCLEAR_TARGET, D3DCOLOR_ARGB(255, 0, 255, 0), 0.0f, 0);

    if (!EndSceneInit) {
        ImGui::CreateContext();
        ImGuiIO& io = ImGui::GetIO();
        ImGui_ImplWin32_Init(TrackmaniaWindow);
        ImGui_ImplDX9_Init(pDevice);
        EndSceneInit = true;
        return OldEndScene(pDevice);
    }

    ImGui_ImplDX9_NewFrame();
    ImGui_ImplWin32_NewFrame();
    ImGui::NewFrame();
    ImGui::ShowDemoWindow();
    ImGui::EndFrame();
    ImGui::Render();
    ImGui_ImplDX9_RenderDrawData(ImGui::GetDrawData());

    return OldEndScene(pDevice); // Call the original EndScene so the game can draw
}
And before you say I forgot to hook Reset: I did, but the game pretty much never calls it, so I probably did that wrong too. Code for that:
long __stdcall ResetHook(IDirect3DDevice9* pDevice, D3DPRESENT_PARAMETERS Parameters) {
    /* Shut imgui down to avoid errors */
    ImGui_ImplDX9_Shutdown();
    ImGui_ImplWin32_Shutdown();
    ImGui::DestroyContext();

    /* Check if it's actually being called */
    if (!ResetInit) {
        std::cout << "Reset called correctly" << std::endl;
        ResetInit = true;
    }

    /* Return the original function */
    return OldReset(pDevice, Parameters);
}
Just in case I did mess up the hooking process for one of the functions, I will also include the code I used to actually hook them:
if (MH_CreateHook(vTable[42], EndSceneHook, (void**)&OldEndScene) != MH_OK)
    ThrowError(MinHook_Hook_Creation_Failed);
if (MH_CreateHook(vTable[16], OldReset, (void**)&OldReset) != MH_OK)
    ThrowError(MinHook_Hook_Creation_Failed);
MH_EnableHook(MH_ALL_HOOKS);
OK, so I solved the issue already, but in case anyone else needs help, here are a few fixes for why it would crash or not render.
The first one is EnumWindows(): if you are using EnumWindows() to get your target process's HWND, then that is likely part of (or your entire) issue.
For internal cheats, use GetForegroundWindow() once the game is loaded, or use FindWindow(0, "Window Name") (works for both external and internal; the game needs to be loaded):
void MainThread() {
    HWND ProcessWindow = 0;
    WaitForProcessToLoad(GameHandle); // This is just an example of waiting for the game to load
    ProcessWindow = GetForegroundWindow(); // We got the HWND
    // or
    ProcessWindow = FindWindow(0, "Thing.exe");
}
On to the second possible issue: make sure the replacement functions for the functions you are hooking actually have the right signatures and arguments (this matters if your hook instantly crashes), and make sure you are returning the original function.
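For instance, the hook and trampoline signatures should match the D3D9 vtable entries exactly, something along these lines (a sketch; note that IDirect3DDevice9::Reset takes a pointer to D3DPRESENT_PARAMETERS, and that the calling convention is __stdcall):
// Trampoline typedefs matching the COM vtable signatures; a mismatch here
// (wrong calling convention, by-value struct, etc.) corrupts the stack.
typedef HRESULT(__stdcall* EndScene_t)(IDirect3DDevice9* pDevice);
typedef HRESULT(__stdcall* Reset_t)(IDirect3DDevice9* pDevice,
                                    D3DPRESENT_PARAMETERS* pPresentationParameters);

EndScene_t OldEndScene = nullptr; // filled in by MH_CreateHook
Reset_t    OldReset    = nullptr;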
Also make sure your WndProc hook is working correctly (if you don't know how, look up DX9 hooking tutorials and adapt their code for that).
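A minimal WndProc hook that forwards input to imgui usually looks roughly like this (a sketch, assuming the game's original window procedure was saved when the hook was installed, e.g. via SetWindowLongPtr):
// Declared by the official imgui Win32 backend (imgui_impl_win32).
extern IMGUI_IMPL_API LRESULT ImGui_ImplWin32_WndProcHandler(HWND hWnd, UINT msg,
                                                             WPARAM wParam, LPARAM lParam);

WNDPROC OriginalWndProc = nullptr; // saved original window procedure

LRESULT __stdcall WndProcHook(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    // Give imgui a chance to consume the message first.
    if (ImGui_ImplWin32_WndProcHandler(hWnd, msg, wParam, lParam))
        return TRUE;
    // Otherwise forward to the game's original window procedure.
    return CallWindowProc(OriginalWndProc, hWnd, msg, wParam, lParam);
}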
The last fix concerns how you are rendering imgui to the screen. If imgui still isn't rendering after the first fix, it is likely because you aren't calling a required function. This is an example of correctly structured imgui rendering:
long __stdcall EndSceneHook(IDirect3DDevice9* pDevice) // Our hooked EndScene
{
    if (!EndSceneInit) {
        ImGui::CreateContext();
        ImGuiIO& io = ImGui::GetIO();
        ImGui::StyleColorsDark();
        ImGui_ImplWin32_Init(Window);
        ImGui_ImplDX9_Init(pDevice);
        EndSceneInit = true;
        return OldEndScene(pDevice);
    }

    ImGui_ImplDX9_NewFrame();
    ImGui_ImplWin32_NewFrame();
    ImGui::NewFrame();
    ImGui::ShowDemoWindow();
    ImGui::EndFrame();
    ImGui::Render();
    ImGui_ImplDX9_RenderDrawData(ImGui::GetDrawData());

    return OldEndScene(pDevice); // Call the original EndScene so the game can draw
}
If none of these fixes work, then search for the error or look up DX9 hooking tutorials on YouTube.

GLX Context Creation Error: GLXBadFBConfig

I used glXCreateContext to create the contexts, but the function is deprecated and always results in an OpenGL 3.0 context, whereas I would need at least 4.x. Now, if I have understood it right, GLXContext glXCreateContextAttribsARB(Display* dpy, GLXFBConfig config, GLXContext share_context, Bool direct, const int* attrib_list); replaced glXCreateContext. The "new" function allows explicitly specifying the major version, minor version, profile et cetera in its attrib_list, for example like this:
int context_attribs[] =
{
    GLX_CONTEXT_MAJOR_VERSION_ARB, 4,
    GLX_CONTEXT_MINOR_VERSION_ARB, 5,
    GLX_CONTEXT_FLAGS_ARB, GLX_CONTEXT_DEBUG_BIT_ARB,
    GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
    None
};
Then use the function:
glXCreateContextAttribsARB(dpy, config, NULL, true, context_attribs);
That is how I have done it in my program. The window is already created and dpy is a valid pointer to Display. config I have defined like this:
// GLXFBConfig config; created at the beginning of the program
int attrib_list[] =
{
    GLX_RENDER_TYPE, GLX_RGBA_BIT,
    GLX_RED_SIZE, 8,
    GLX_GREEN_SIZE, 8,
    GLX_BLUE_SIZE, 8,
    GLX_DEPTH_SIZE, 24,
    GLX_DOUBLEBUFFER, True,
    None
};
int nAttribs;
config = glXChooseFBConfig(dpy, 0, attrib_list, &nAttribs);
Checking with glxinfo, I have the correct visual for it; vi has been set to 0x120, which I can confirm with glxinfo | grep 0x120. It exactly fulfills the above.
So far, so good. But when running the application (compiling works fine), I get the following error:
X Error of failed request: GLXBadFBConfig
Major opcode of failed request: 152 (GLX)
Minor opcode of failed request: 34 ()
Serial number of failed request: 31
Current serial number in output stream: 31
Now, this is what the error is about:
If <config> does not support compatible OpenGL contexts providing the requested API major and minor version, forward-compatible flag, and debug context flag, GLXBadFBConfig is generated.
So, the problem is pretty straightforward. I don't know how to solve it though. What it essentially means is that no OpenGL context corresponding both to the attributes I specified in attrib_list[] and the attributes in context_attribs can be found. With glxinfo | grep Max I confirmed that my highest possible OpenGL Version is 4.5. I would like to hear your advice on what I should do now. I have played around with the attributes in context_attribs for a while, but did not get anywhere. Maybe the problem really is in another place. Maybe my conception of the GLX functions is flawed in general, please point it out if so!
The specification of GLX_ARB_create_context is clear about when the GLXBadFBConfig error may be returned:
* If <config> does not support compatible OpenGL contexts
providing the requested API major and minor version,
forward-compatible flag, and debug context flag, GLXBadFBConfig
is generated.
This may be confusing (as the error has nothing to do with an already created GLXFBConfig), but that's what we have. So the most obvious reason for your error is that your system doesn't actually support the OpenGL 4.5 Compatibility Profile you have requested; it might, though, support an OpenGL 4.5 Core Profile or compatibility/core profiles of lower versions. This is a pretty common situation with Mesa drivers, which support OpenGL 3.3+ Core Profiles but only an OpenGL 3.0 Compatibility Profile for many GPUs (though not all; some, like Radeons, get better Compatibility Profile support).
If you are not yet familiar with the concept of OpenGL profiles, you can start here.
glxinfo shows information about both Core and Compatibility profiles, which can be filtered like this:
glxinfo | grep -e "OpenGL version" -e "Core" -e "Compatible"
which returns this for me on a virtual Ubuntu 18.04:
OpenGL core profile version string: 3.3 (Core Profile) Mesa 19.2.8
OpenGL version string: 3.1 Mesa 19.2.8
If your application really needs OpenGL 4.5 or higher, then just try creating a context with the GLX_CONTEXT_CORE_PROFILE_BIT_ARB bit instead of GLX_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB, and make sure not to use any deprecated functionality.
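Applied to the question's attribute list, that change is simply (a sketch):
int context_attribs[] =
{
    GLX_CONTEXT_MAJOR_VERSION_ARB, 4,
    GLX_CONTEXT_MINOR_VERSION_ARB, 5,
    GLX_CONTEXT_FLAGS_ARB, GLX_CONTEXT_DEBUG_BIT_ARB,
    GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_CORE_PROFILE_BIT_ARB, // Core instead of Compatibility
    None
};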
Note that requesting a Compatibility Profile of a specific version usually makes no sense; it is enough to simply skip the version parameters to get the highest supported one, and to filter out unsupported versions via GL_VERSION/GL_MAJOR_VERSION of the already created context, as was done in the days before profiles were introduced. In the case of a Core Profile, requesting the highest supported version (e.g. without disabled functionality of versions higher than requested) can be tricky on some OpenGL drivers; the following code snippet could be useful:
//! A dummy XError handler which just skips errors
static int xErrorDummyHandler (Display* , XErrorEvent* ) { return 0; }
...
Window aWindow = ...;
Display* aDisp = ...;
GLXFBConfig anFBConfig = ...;
bool toDebugContext = false;
GLXContext aGContext = NULL;

const char* aGlxExts = glXQueryExtensionsString (aDisp, aVisInfo.screen);
if (!checkGlExtension (aGlxExts, "GLX_ARB_create_context_profile"))
{
    std::cerr << "GLX_ARB_create_context_profile is NOT supported\n";
    return;
}

// Replace default XError handler to ignore errors.
// Warning - this is global for all threads!
typedef int (*xerrorhandler_t)(Display* , XErrorEvent* );
xerrorhandler_t anOldHandler = XSetErrorHandler(xErrorDummyHandler);

typedef GLXContext (*glXCreateContextAttribsARB_t)(Display* dpy, GLXFBConfig config,
                                                   GLXContext share_context, Bool direct,
                                                   const int* attrib_list);
glXCreateContextAttribsARB_t aCreateCtxProc = (glXCreateContextAttribsARB_t )glXGetProcAddress((const GLubyte* )"glXCreateContextAttribsARB");

int aCoreCtxAttribs[] =
{
    GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
    GLX_CONTEXT_MINOR_VERSION_ARB, 2,
    GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
    GLX_CONTEXT_FLAGS_ARB, toDebugContext ? GLX_CONTEXT_DEBUG_BIT_ARB : 0,
    0, 0
};

// try to create a Core Profile of highest OpenGL version (up to 4.6)
for (int aLowVer4 = 6; aLowVer4 >= 0 && aGContext == NULL; --aLowVer4)
{
    aCoreCtxAttribs[1] = 4;
    aCoreCtxAttribs[3] = aLowVer4;
    aGContext = aCreateCtxProc (aDisp, anFBConfig, NULL, True, aCoreCtxAttribs);
}
for (int aLowVer3 = 3; aLowVer3 >= 2 && aGContext == NULL; --aLowVer3)
{
    aCoreCtxAttribs[1] = 3;
    aCoreCtxAttribs[3] = aLowVer3;
    aGContext = aCreateCtxProc (aDisp, anFBConfig, NULL, True, aCoreCtxAttribs);
}

bool isCoreProfile = aGContext != NULL;
if (!isCoreProfile)
{
    std::cerr << "glXCreateContextAttribsARB() failed to create Core Profile\n";
}

// try to create Compatible Profile
if (aGContext == NULL)
{
    int aCtxAttribs[] =
    {
        GLX_CONTEXT_FLAGS_ARB, toDebugContext ? GLX_CONTEXT_DEBUG_BIT_ARB : 0,
        0, 0
    };
    aGContext = aCreateCtxProc (aDisp, anFBConfig, NULL, True, aCtxAttribs);
}

XSetErrorHandler (anOldHandler);

// fallback to glXCreateContext() as last resort
if (aGContext == NULL)
{
    aGContext = glXCreateContext (aDisp, aVis.get(), NULL, GL_TRUE);
    if (aGContext == NULL) { std::cerr << "glXCreateContext() failed\n"; }
}

GTKmm Opengl context not initialized

I am trying to create a video player inside GTKmm, and for this I am using mpv. The documentation says that I can embed the video player using an OpenGL view. However, I am having difficulties implementing the player inside a GTKmm app.
I have a GLWindow that contains a GLArea, which should then contain the video player. The problem is that when I try to initialize the mpv render context, I get an error telling me that OpenGL was not initialized.
The following is my Constructor for the main window that I have:
GLWindow::GLWindow(): GLArea_{}
{
    set_title("GL Area");
    set_default_size(400, 600);
    setlocale(LC_NUMERIC, "C");

    VBox_.property_margin() = 12;
    VBox_.set_spacing(6);
    add(VBox_);

    GLArea_.set_hexpand(true);
    GLArea_.set_vexpand(true);
    GLArea_.set_auto_render(true);
    GLArea_.set_required_version(4, 0);
    VBox_.add(GLArea_);

    mpv = mpv_create();
    if (!mpv)
        throw std::runtime_error("Unable to create mpv context");
    mpv_set_option_string(mpv, "terminal", "yes");
    mpv_set_option_string(mpv, "msg-level", "all=v");
    if (mpv_initialize(mpv) < 0)
        throw std::runtime_error("could not initialize mpv context");

    mpv_render_param params[] = {
        {MPV_RENDER_PARAM_API_TYPE, const_cast<char*>(MPV_RENDER_API_TYPE_OPENGL)},
        {MPV_RENDER_PARAM_OPENGL_INIT_PARAMS, static_cast<void*>(new (mpv_opengl_init_params){
            .get_proc_address = get_proc_address,
        })},
        {MPV_RENDER_PARAM_INVALID}
    };
    if (mpv_render_context_create(&mpv_gl, mpv, params) < 0)
        throw std::runtime_error("Failed to create render context");
    mpv_render_context_set_update_callback(mpv_gl, GLWindow::onUpdate, this);
}
As far as I know, this should just initialize the video player view, but the problem arises when I try to create the render context with mpv_render_context_create. I get the following error on that line:
[libmpv_render] glGetString(GL_VERSION) returned NULL.
[libmpv_render] OpenGL not initialized.
Then the app terminates with a SIGSEGV signal.
The problem may come from my get_proc_address function; currently I have only implemented it for Linux, and it looks like the following:
static void *get_proc_address(void *ctx, const char *name) {
    return (void *)glXGetProcAddress(reinterpret_cast<const GLubyte *>(name));
}
To be honest, I am at a loss as to why the OpenGL context is not being created. How do I have to adjust my GTKmm app so that the mpv video player can initialize correctly?
As the error suggested, the problem was that there was no OpenGL context. The GLArea is not created instantly; there is a signal_realize event on the GLArea that fires when the OpenGL view has been created. I had to listen to that event and initialize the mpv variables there, after calling GLArea.make_current(), so that the GLArea's context is the one we connect to mpv.
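A rough sketch of that change, assuming the mpv setup from the constructor is moved into a member function (the name onGLRealize() is just an example):
GLWindow::GLWindow(): GLArea_{}
{
    // ... widget setup as before ...
    // Defer the mpv initialization until the GLArea actually has a GL context.
    GLArea_.signal_realize().connect(sigc::mem_fun(*this, &GLWindow::onGLRealize));
}

void GLWindow::onGLRealize()
{
    // Bind the GLArea's OpenGL context so get_proc_address() resolves against it.
    GLArea_.make_current();

    // ... mpv_create(), mpv_initialize(), mpv_render_context_create() and
    //     mpv_render_context_set_update_callback() exactly as in the question ...
}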

How I can get my total GPU memory using Qt's native OpenGL?

I'm trying to get the total amount of GPU memory for my video card using Qt's native OpenGL. I have tried hundreds of methods, but none of them work.
This is what I have at the moment:
QOpenGLContext context;
context.create();
QOffscreenSurface surface;
surface.setFormat(context.format());
surface.create();
QOpenGLFunctions func;
context.makeCurrent(&surface);
func.initializeOpenGLFunctions();
GLint total_mem_kb = 0;
func.glGetIntegerv(GL_GPU_MEM_INFO_TOTAL_AVAILABLE_MEM_NVX,&total_mem_kb);
qDebug()<<total_mem_kb;
The problem is that the variable total_mem_kb is always 0; it never gets a value from glGetIntegerv. Running this code, I get 0. What could be the problem? Can you please give me a hint?
First and foremost, check whether the NVX_gpu_memory_info extension is supported.
Note that the extension requires at least OpenGL 2.0.
GLint count;
glGetIntegerv(GL_NUM_EXTENSIONS, &count);

for (GLint i = 0; i < count; ++i)
{
    const char *extension = (const char*)glGetStringi(GL_EXTENSIONS, i);
    if (!strcmp(extension, "GL_NVX_gpu_memory_info"))
        printf("%d: %s\n", i, extension);
}
I know you just said that you have an Nvidia graphics card, but that doesn't guarantee support by default. Additionally, if you have an integrated graphics card, make sure you are actually using your dedicated graphics card.
If you have an Nvidia GeForce graphics card, then the following should return something along the lines of "Nvidia" and "GeForce":
glGetString(GL_VENDOR);
glGetString(GL_RENDERER);
If it returns anything but "Nvidia" then you need to open your Nvidia Control Panel and set the preferred graphics card to your Nvidia graphics card.
After you've verified that it is the Nvidia graphics card and that the extension is supported, you can try getting the total and currently available memory:
GLint totalMemoryKb = 0;
glGetIntegerv(GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX, &totalMemoryKb);
GLint currentMemoryKb = 0;
glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &currentMemoryKb);
I would also like to point out that the NVX_gpu_memory_info extension defines it as:
GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX
and not
GL_GPU_MEM_INFO_TOTAL_AVAILABLE_MEM_NVX
Note the MEMORY vs MEM difference.
So I suspect you've either defined GL_GPU_MEM_INFO_TOTAL_AVAILABLE_MEM_NVX yourself or you are relying on something else that defines it; either way, it could be wrongly defined or refer to something else.
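For reference, these are the tokens as defined by the GL_NVX_gpu_memory_info extension specification (all queries report sizes in kilobytes):
#define GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX          0x9047
#define GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX    0x9048
#define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX  0x9049
#define GL_GPU_MEMORY_INFO_EVICTION_COUNT_NVX            0x904A
#define GL_GPU_MEMORY_INFO_EVICTED_MEMORY_NVX            0x904B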
I use the following:
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////
LONG __stdcall glxGpuTotalMemory()
{
    GLint total_mem_kb = 0;
    glGetIntegerv(GL_GPU_MEM_INFO_TOTAL_AVAILABLE_MEM_NVX, &total_mem_kb);
    if (total_mem_kb == 0 && wglGetGPUIDsAMD)
    {
        UINT n = wglGetGPUIDsAMD(0, 0);
        UINT *ids = new UINT[n];
        size_t total_mem_mb = 0;
        wglGetGPUIDsAMD(n, ids);
        wglGetGPUInfoAMD(ids[0], WGL_GPU_RAM_AMD, GL_UNSIGNED_INT, sizeof(size_t), &total_mem_mb);
        total_mem_kb = total_mem_mb * 1024;
    }
    return total_mem_kb;
}

/////////////////////////////////////////////////////////////////////////////////////////////////////////////////
LONG __stdcall glxGpuAvailMemory()
{
    GLint cur_avail_mem_kb = 0;
    glGetIntegerv(GL_GPU_MEM_INFO_CURRENT_AVAILABLE_MEM_NVX, &cur_avail_mem_kb);
    if (cur_avail_mem_kb == 0 && wglGetGPUIDsAMD)
    {
        glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, &cur_avail_mem_kb);
    }
    return cur_avail_mem_kb;
}

Abnormal Execution after wglCreateContext in MFC based Application

We have a 32-bit Visual C++/MFC based (Multi-Document Interface-MDI) application which is compiled using Visual Studio 2005.
We are running the application on Windows Server 2008 R2 (64-bit) having ATI Graphics card (ATIES1000 version 8.240.50.5000 to be exact).
We are using OpenGL in certain parts of our software.
The problem is that the software randomly crashes after executing the code related to the initialization of an OpenGL based window. After adding heavy tracing to the code, we found that program execution becomes abnormal after the wglCreateContext() call: it skips the rest of the current function that initializes the OpenGL based window and instead starts executing the default view (drawing) function of the application (e.g. ApplicationView::OnDraw).
After executing a few lines of code for the default view, the program generates an exception when trying to access a main document member variable. Our unhandled exception filter is able to catch the exception and generate an execution dump, but the execution dump does not provide much useful information either, other than specifying that the exception code is c0000090.
The problem is quite random and has only been reported in this Windows Server environment.
Any help or hints in solving this situation?
EDIT:
In other words, the program structure looks like this, with execution randomly skipping the lines after the wglCreateContext() call, executing a few lines of ApplicationView::OnDraw and then raising an exception:
void 3DView::InitializeOpenGL()
{
    ....
    Init3DWindow( m_b3DEnabled, m_pDC->GetSafeHdc());
    m_Font->init();
    ....
}

bool 3DView::Init3DWindow( bool b3DEnabled, HDC pHDC)
{
    ...
    static PIXELFORMATDESCRIPTOR pd = {
        sizeof (PIXELFORMATDESCRIPTOR), // Specifies the size
        1,                              // Specifies the version of this data structure
        ...
    };
    int iPixelFormat = ChoosePixelFormat(pHDC, &pd);
    if(iPixelFormat == 0)
    {
        ...
    }
    bSuccess = SetPixelFormat(pHDC, iPixelFormat, &pd);

    m_hRC = wglCreateContext(pHDC);
    // Execution becomes abnormal afterwards and control exits from 3DView::Init3DWindow
    if(m_hRC == NULL)
    {
        ...
    }
    bSuccess = wglMakeCurrent(pHDC, m_hRC);
    ....
    return true;
}
void ApplicationView::OnDraw(CDC* pDC)
{
    ...
    CApplicationDoc* pDoc = GetDocument();
    ASSERT_VALID(pDoc);
    ...
    double currentValue = pDoc->m_CurrentValue;
    // EXCEPTION raised at the above line
    double nextValue = pDoc->nextValue;
}
}