OpenGL version too low when using ifstream - c++

I'm writing some elements of an engine for a 2D roguelike. I'm at a part where I'd like to be able to open .pngs. (I'm doing things manually because I like to learn about these things.) So I have created a PngLoader class and am starting to do basic things with it, like ... opening the file. For some reason this breaks the OpenGL GLFunctionFinder class that does something similar to GLEW, except manually.
The GLFF basically crashes out the program when the OpenGL version is too low; this is expected behavior. (Probably a segfault on an unset function pointer. I could "fix" this by making it crash more gracefully, but who cares?) The GLFF works rather well generally, since my graphics card runs OpenGL 4.3 or so, but I did have it break a few days ago when the driver switched to the integrated graphics driver (that only does OpenGL version 1.1). That was fixed by changing some settings in the graphics dashboard.
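For what it's worth, the "more graceful" crash would only take a null check on each loaded pointer; a minimal sketch of what I mean (the entry point and typedef names are just illustrative):

#include <windows.h>
#include <GL/gl.h>

// Sketch: verify a wglGetProcAddress result before calling through it,
// instead of segfaulting on the first use.
typedef void (APIENTRY *PFNGLATTACHSHADERPROC)(GLuint program, GLuint shader);

static PFNGLATTACHSHADERPROC my_glAttachShader = NULL;

void loadFunctions(void)
{
    my_glAttachShader =
        (PFNGLATTACHSHADERPROC)wglGetProcAddress("glAttachShader");
    if (my_glAttachShader == NULL) {
        // Fail loudly instead of crashing on the first call.
        MessageBox(NULL, "OpenGL 2.0+ entry points unavailable",
                   "Error", MB_OK | MB_ICONERROR);
        ExitProcess(1);
    }
}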
So the issue that I'm having crop up today is appearing when I write something like this:
class ifcontainerclass {
    std::ifstream fs;
};

/* other code */

int WINAPI WinMain(/* ... */) {
    GLFunctionFinder ff;
    ff.interrogateWindows();
    ifcontainerclass ifcc;
    /* GL code and main loop */
    return 0;
}
... the OpenGL context gets stuck on version 1.1. If I change ifstream to fstream, I get the higher version context that I expect, and the issue goes away.
I'm also finding in my testing that if I comment out the GL code and main loop area, the problem again disappears. The "version too low" checks are done in GLFunctionFinder::interrogateWindows(), not in the later GL code, so the conditions are still being checked. (After some testing, I'm finding that commenting out the MSG structure is what's making the problem go away.)
My current belief is the compiler is doing some magic that causes Windows / Intel / NVidia to only issue OpenGL 1.1 contexts / connect to the wrong driver when ... I really don't know when. The issue appears really arbitrary.
I'm probably going to look into getting rid of the global HDC and global HGLRC I was using out of laziness, since I think the problem is associated with how things are being initialized / how the compiler arranges to have these things initialized, and pulling them out of global scope will let me inspect and control that process more effectively. I did this in the GLFunctionFinder by using a file-scoped static void * GlobalAddr pointer set to this, casting it back to GLFunctionFinder * in the dummy window's WndProc, and making HDC and HGLRC member variables of GLFunctionFinder, accessible through that pointer, as sketched below. I will probably try something similar in my main window; I've been needing to clean up the global-scoped stuff anyway. The other thing I can do is run each version in a debugger and see where it diverges, although I'm reluctant to do that since debugging is not really set up properly in my IDE and I'm not looking forward to fixing that.
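In sketch form, the pattern I used in GLFunctionFinder looks roughly like this (member names illustrative):

static void *GlobalAddr = NULL;

GLFunctionFinder::GLFunctionFinder()
{
    GlobalAddr = this; // stash this so the WndProc can find the instance
}

LRESULT CALLBACK DummyWndProc(HWND h_wnd, UINT u_msg,
                              WPARAM w_param, LPARAM l_param)
{
    GLFunctionFinder *self = static_cast<GLFunctionFinder *>(GlobalAddr);
    // ... use self->h_dc_ and self->h_context_ instead of globals ...
    return DefWindowProc(h_wnd, u_msg, w_param, l_param);
}

(The more conventional Win32 route is to pass this through CreateWindowEx's lpParam and store it with SetWindowLongPtr(GWLP_USERDATA), but a single file-scoped pointer works fine for one dummy window.)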
I think I can get by in the meantime by using fstream instead of ifstream, but I'm not comfortable leaving a problem this strange un-understood: it suggests some kind of instability I ought to know about before I have 10k lines of code that arbitrarily stop running and can only be fixed by changing something apparently unrelated somewhere else.
Questions:
What in the world is happening? What is the core issue here?
Why does changing ifstream to fstream fix the problem?
Why does commenting out the MSG struct fix the problem?
PS: NvOptimusEnablement = 0x00000001 did not fix the issue.
PPS: MinGW 4.9.2 in Qt (as an IDE, no Qt libraries) with CMake
Edit: After determining that Qt's debugger works when -ggdb is passed to g++, I stepped through the code and found the PIXELFORMATDESCRIPTOR in GLFunctionFinder was never being assigned; I was assigning the properties to a temporary variable and not the member variable, while ChoosePixelFormat was using the member variable. Since the kind of context you get depends on the kind of pixel format you specify, I was effectively asking Windows to choose a pixel format from indeterminate data. The specifics of compilation determined what random junk got put in the PIXELFORMATDESCRIPTOR, and it just so happens that declaring an ifstream instead of an fstream puts the wrong random junk in that area.
The problem was fixed by adding something to the effect of this->pfd_ = pfd; to GLFunctionFinder's constructor after defining the temporary pfd.
Edit 2: To satisfy my understanding of what the "off-topic" flag means, I'll provide a minimal example of the core problem:
main.cpp:
#include <windows.h>
#include <sstream>
#include <GL/gl.h>

HDC h_dc;
HGLRC h_context;

LRESULT CALLBACK MainWndProc(_In_ HWND h_wnd,
                             _In_ UINT u_msg,
                             _In_ WPARAM w_param,
                             _In_ LPARAM l_param) {
    switch(u_msg) {
    case WM_CREATE: {
        PIXELFORMATDESCRIPTOR pfd; // <-- This was the error source (pfd not set
                                   //     to an accelerated format, but only
                                   //     sometimes), except in my code it was
                                   //     harder to see than this.
        h_dc = GetDC(h_wnd);
        int pfint = ChoosePixelFormat(h_dc, &pfd);
        SetPixelFormat(h_dc, pfint, &pfd);
        h_context = wglCreateContext(h_dc);
        wglMakeCurrent(h_dc, h_context);
        const unsigned char * version_string =
            static_cast<const unsigned char *>(glGetString(GL_VERSION));
        if(version_string[0] == '1' || version_string[0] == '2') {
            std::stringstream ss;
            ss << "OpenGL version (" << version_string << ") is too low";
            MessageBox(NULL, ss.str().c_str(), "Error", MB_OK | MB_ICONERROR);
        }
        break;
    }
    case WM_DESTROY:
        PostQuitMessage(EXIT_SUCCESS);
        break;
    default:
        return DefWindowProc(h_wnd, u_msg, w_param, l_param);
    }
    return 1;
}

int WINAPI WinMain( HINSTANCE h_inst,
                    HINSTANCE h_previnst,
                    LPSTR cmd_str_in,
                    int cmd_show_opt) {
    WNDCLASSEX wc;
    wc.cbSize = sizeof(WNDCLASSEX);
    wc.style = CS_OWNDC;
    wc.lpfnWndProc = MainWndProc;
    wc.cbClsExtra = 0;
    wc.cbWndExtra = 0;
    wc.hInstance = h_inst;
    wc.hIcon = NULL;
    wc.hCursor = LoadCursor(NULL, IDC_ARROW);
    wc.hbrBackground = (HBRUSH)(COLOR_BACKGROUND + 1);
    wc.lpszMenuName = NULL;
    wc.lpszClassName = "MAINWIN";
    wc.hIconSm = NULL;
    RegisterClassEx(&wc);
    HWND h_wnd = CreateWindowEx(0,
                                "MAINWIN",
                                "MCVE Program",
                                WS_OVERLAPPEDWINDOW,
                                CW_USEDEFAULT,
                                CW_USEDEFAULT,
                                640,
                                480,
                                NULL,
                                NULL,
                                h_inst,
                                NULL);
    return EXIT_SUCCESS;
}
CMakeLists.txt:
project(mcve_pfd_problem)
cmake_minimum_required(VERSION 2.8)
aux_source_directory(. SRC_LIST)
add_executable(${PROJECT_NAME} WIN32 ${SRC_LIST})
target_link_libraries(${PROJECT_NAME} opengl32)
In case someone skips to the end, the problem is solved, but I don't know how I'm supposed to indicate that.

So, what I was actually seeing was the effect of undefined behavior from default-initializing a struct, which left the values contained in it uninitialized:
class GLFunctionFinder {
    PIXELFORMATDESCRIPTOR pfdarr_;
    /* other code */
    GLFunctionFinder();
    void setupContext();
    /* other code */
};

GLFunctionFinder::GLFunctionFinder() {
    /* other code */
    PIXELFORMATDESCRIPTOR pfd = { /* things */ };
    // Missing: pfdarr_ = pfd;
    // pfdarr_ never gets set
}

void GLFunctionFinder::setupContext() {
    // Undefined behavior: pfdarr_ holds indeterminate values here
    int px_format_default = ChoosePixelFormat(this->h_dc_, &(this->pfdarr_));
    /* other code */
}
This gave ChoosePixelFormat whatever junk was in pfdarr_. When I initially wrote this, it behaved as though there was no problem, because apparently the junk data "looked like" an accelerated pixel format type, and ChoosePixelFormat would give me an int format that yielded the OpenGL context I was after. It stayed like that for a while, because it just kept working.
Switching from fstream to ifstream changed some specifics about the way the compiler laid out / optimized the program, and the junk data in pfdarr_ changed to "look like" an unaccelerated format. This led to getting the wrong context, which led to failing the OpenGL version check. The story with commenting out the MSG struct and part of the event loop is basically the same: it just so happens the compiler emits something that yields the OpenGL context I want.
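For reference, the fix in sketch form: either assign the temporary to the member, or zero the member up front and fill it in directly.

GLFunctionFinder::GLFunctionFinder() {
    /* other code */
    PIXELFORMATDESCRIPTOR pfd = { /* things */ };
    pfdarr_ = pfd; // the missing assignment

    // Alternatively, skip the temporary and zero everything first:
    // ZeroMemory(&pfdarr_, sizeof(pfdarr_));
    // pfdarr_.nSize    = sizeof(PIXELFORMATDESCRIPTOR);
    // pfdarr_.nVersion = 1;
    // pfdarr_.dwFlags  = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    // ...
}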
I was compiling the code I gave in Edit 2 last night, and it was giving me a 1.1 context. This morning, exact same code, no error; moved the MessageBox and found I'm getting a 4.3 context. Fun errors.

Related

Is there a _safe_ way to send a string via the PostMessage?

I want to raise this question one more time. I already wrote a comment on the accepted answer, but it seems the person who answered is inactive on SO, so I am copying my comment-question here.
In the accepted answer to the referenced question, there is a potential risk of a memory leak: PostMessage can fail because the message queue is full, or the target window may already have been destroyed (so the delete operator will never be called).
In summary, there is no strong correspondence between posting and receiving a Windows message. On the other hand, there are not many options for passing a pointer (to a string, for example) along with a Windows message. I see only two: use objects allocated on the heap, or use global variables. The former has the difficulty described here; the latter avoids that disadvantage, but requires allocating memory that may be used only rarely.
So, I have these questions:
Can someone suggest a way to be safe against memory leaks when using heap memory to "attach" something to a Windows message?
Is there some other option for reaching the goal (sending a string, for example, with a Windows message via the PostMessage call)?
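To illustrate the heap option and the leak I am worried about, here is a sketch (WM_APP_STRING is a made-up message ID). Deleting the payload when PostMessage itself fails plugs one hole; a window destroyed with messages still queued will still leak, which is exactly the part I see no clean answer for.

#include <windows.h>
#include <string>

static const UINT WM_APP_STRING = WM_APP + 1; // hypothetical message ID

// Sender: heap-allocate, hand ownership to the message, reclaim on failure.
void sendString(HWND target, const std::wstring &text)
{
    std::wstring *payload = new std::wstring(text);
    if (!PostMessage(target, WM_APP_STRING, 0,
                     reinterpret_cast<LPARAM>(payload))) {
        delete payload; // queue full / window gone: take ownership back
    }
}

// Receiver (inside the window procedure): take ownership and free.
// case WM_APP_STRING: {
//     std::wstring *payload = reinterpret_cast<std::wstring *>(l_param);
//     /* use *payload */
//     delete payload;
//     break;
// }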
Use std::wstringstream, and look at EM_STREAMOUT.
wchar_t remmi[1250];

// make stuff
int CMainFrame::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
    hc = CreateWindowEx(WS_EX_NOPARENTNOTIFY, MSFTEDIT_CLASS, remmi,
                        ES_MULTILINE | ES_AUTOVSCROLL | WS_VISIBLE | WS_CHILD
                        | WS_TABSTOP | WS_VSCROLL,
                        1, 350, 450, 201,
                        this->m_hWnd, NULL, h, NULL);
    return 0;
}

// middleware
DWORD CALLBACK E(DWORD_PTR dw, LPBYTE pb, LONG cb, LONG *pcb)
{
    std::wstringstream *fr = (std::wstringstream *)dw;
    fr->write((wchar_t *)pb, int(cb / 2));
    *pcb = cb;
    return 0;
}

// final
void CMainFrame::tr()
{
    std::wstringstream fr;
    EDITSTREAM es = {};
    es.dwCookie = (DWORD_PTR)&fr;
    es.pfnCallback = E;
    ::SendMessage(hc, EM_STREAMOUT, SF_TEXT | SF_UNICODE, (LPARAM)&es);
    ZeroMemory(remmi, 1218 * 2);
    fr.read(remmi, 747);
}

AccessException: Attempted To Read Or Write Protected/Corrupted Memory -- Known Exception, Unknown Reason?

Yes, I know there are a million threads on this exception; I've probably looked at 20-25 of them, but none of the causes seem to correlate to this, sadly (hence the title: known exception, unknown reason).
I've recently been gaining interest in InfoSec. As my first learner's project, I decided to create a basic DLL injector. It seems to be going well so far; however, this exception is grinding me up, and after some relatively extensive research I'm quite puzzled. Oddly enough, the exception is also raised after the function completely finishes.
I couldn't really figure this out myself, since external debuggers wouldn't work with my target application, and that was a whole new unrelated issue.
Solutions suggested & attempted so far:
Fix/Remove thread status checking (it was wrong)
Ensure the value behind DllPath ptr is being allocated, not the ptr
Marshaling the C# interop parameters
Anyway, here is my hunk of code:
#pragma once
#include "pch.h"
#include "injection.h" // only specifies UserInject as an exportable proto.

DWORD __stdcall UserInject(DWORD ProcessId, PCSTR DllPath, BOOL UseExtended) {
    DWORD length;
    CHAR* buffer;
    LPVOID memry;
    SIZE_T write;
    HANDLE hProc;
    HMODULE kr32;
    HANDLE thread;
    length = GetFullPathName(
        DllPath,
        NULL,
        NULL,
        NULL
    );
    AssertNonNull(length, INVALID_PATH);
    kr32 = GetModuleHandle("kernel32.dll");
    AssertNonNull(kr32, YOUREALLYMESSEDUP);
    buffer = new CHAR[length];
    GetFullPathName(
        DllPath,
        length,
        buffer,
        NULL
    );
    AssertNonNull(buffer, ERR_DEAD_BUFFER);
    hProc = OpenProcess(
        ADMIN,
        FALSE,
        ProcessId
    );
    AssertNonNull(hProc, INVALID_PROCID);
    memry = VirtualAllocEx(
        hProc,
        nullptr,
        sizeof buffer,       // note: this is the size of the pointer, not the path
        SHELLCODE_ALLOCATION,
        PAGE_EXECUTE_READWRITE
    );
    AssertNonNull(memry, INVALID_BUFSIZE);
    WriteProcessMemory(
        hProc,
        memry,
        DllPath,
        sizeof DllPath,      // note: this is the size of the pointer, not the string
        &write
    );
    AssertNonNull(write, ERR_SOLID_BUFFER);
    auto decidePrototype = [](BOOL UseExtended, HMODULE kr32) -> decltype(auto) {
        LPVOID procAddress;
        if (!UseExtended) {
            procAddress = (LPVOID)GetProcAddress(kr32, LOADLIB_ORD);
        }
        else {
            procAddress = (LPVOID)GetProcAddress(kr32, LOADLIBX_ORD);
        };
        return (LPTHREAD_START_ROUTINE)procAddress;
    };
    auto loadLibraryAddress = decidePrototype(UseExtended, kr32);
    thread = CreateRemoteThread(
        hProc,
        NULL,
        NULL,
        loadLibraryAddress,
        memry,
        NULL,
        NULL
    );
    AssertNonNull(thread, INVALID_ROUTINE);
    WaitForSingleObject(thread, INFINITE);
    // The status stuff is quite weird; it was an attempt at debugging. The error
    // occurs with or without this code. I left it because 50% of the comments
    // below wouldn't make sense. Just be assured this code is positively *not*
    // the problem (sadly).
    // LPDWORD status = (LPDWORD)1;
    // GetExitCodeThread(thread, status);
    return TRUE; // *status;
}
One obscure macro would be "ADMIN" which expands to "PROCESS_ALL_ACCESS", shortened to fit in better. Another is "AssertNonNull":
#define AssertNonNull(o, p) if (o == NULL) return p;
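As an aside, the usual hardening for a macro like that, behavior unchanged, is to parenthesize the arguments and wrap the statement so it nests safely under if/else:

#define AssertNonNull(o, p) \
    do { if ((o) == NULL) return (p); } while (0)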
I've taken a shot at debugging this code, but it doesn't halt at any specific point. I've put MessageBox tests after each operation (e.g. allocation, writing) in addition to the integrity checks, and didn't get any interesting responses.
I'm sorry I can't really add more detail, but I'm really stone-walled here; I'm not sure what to do, what information to get, or if there's anything to get. In short, I'm just not sure what to look for.
This is also being called from C#, 1% pseudocode.
[DllImport(path, CallingConvention = CallingConvention.StdCall)]
static extern int UserInject(uint ProcId, string DllPath, bool UseExtended);
uint validProcId; // integrity tested
string validDllPath; // integrity tested
UserInject(validProcId, validDllPath, true);
If you're interested in my testing application (for reproduction):
#include <iostream>
#include <string>
#include <Windows.h>

static const std::string toPrint = "Hello, World!\n";

int main()
{
    while (true)
    {
        Sleep(1000);
        std::cout << toPrint;
    }
}
To my surprise, this wasn't as much an issue with the code as much as it was with the testing application.
The basic injection technique I used is prevented by various exploit protections & security mitigations that Visual Studio 2010+ applies to any applications built in release mode.
If I build my testing application in debug mode, there is no exception. If I use a non-VS built application, there is no exception.
I still need to fix how I create my threads, because no thread is actually created; but now that I've figured this out, that should be easy enough.
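For that follow-up, the first diagnostic I plan to try is simply reading the error code when CreateRemoteThread returns NULL (a sketch, not yet verified against my code):

thread = CreateRemoteThread(hProc, NULL, 0, loadLibraryAddress, memry, 0, NULL);
if (thread == NULL) {
    DWORD err = GetLastError(); // e.g. 5 == ERROR_ACCESS_DENIED
    // surface err (MessageBox, log) before bailing out
}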

c++ how to remotely call a console window within opengl program

What I am trying to do is make a graphing calculator that takes certain character inputs to transform the graph, but in order to do that I need to be able to generate a console window within the program. Is there any way in c++ to do that?
using Dev C++
but in order to do that I need to be able to generate a console window
within the program [...]
If by that you mean:
takes certain character inputs
then no, you don't need to generate a console window. Since this is titled "opengl program", a better-fitting solution is to register keyboard callbacks for the current window (see here, under glutKeyboardFunc) and handle everything through them. Other callbacks, for the mouse etc., are documented there as well.
There's no problem downloading freeglut (which preserves the same API and extends GLUT) in case you're missing any headers/libraries. Using Dev C++ is not a limiting factor for doing so.
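A minimal sketch of that approach (assuming GLUT/freeglut is set up; the graph-transform logic is yours to fill in):

#include <GL/glut.h>
#include <cstdlib>

// Redraw handler; a real program would draw the graph here.
void onDisplay()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glutSwapBuffers();
}

// Called by GLUT whenever a printable key is pressed in the window.
void onKeyboard(unsigned char key, int x, int y)
{
    switch (key) {
    case '+': /* e.g. zoom the graph in  */ break;
    case '-': /* e.g. zoom the graph out */ break;
    case 27:  /* Esc */ std::exit(0);
    }
    glutPostRedisplay(); // request a redraw with the new transform
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("Graphing calculator");
    glutDisplayFunc(onDisplay);
    glutKeyboardFunc(onKeyboard); // register the key handler
    // glutSpecialFunc handles arrow keys and other non-printable keys.
    glutMainLoop();
}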
For the purpose you've described, you don't need to call up a console. If you don't want to use the glut method above, what you can do instead is use a few functions from the windows.h header to take input.
The best way to implement input without glut is to create a thread in your program that takes the input and modifies a few variables that the main thread can use. Let's take a simple program as an example:
#include <windows.h>
#include <pthread.h>

// The thread that takes the inputs.
void * takeInputs(void * outputVariable)
{
    // Cast the parameter so the compiler won't complain about assigning
    // through a void*.
    char * output = (char *) outputVariable;
    // Generic loop to stay alive.
    while (1 == 1) {
        // Check whether the key is currently down by testing the high bit of
        // the returned state. Here we check the A key on the keyboard; you can
        // use other keys, such as the arrow keys via VK_UP, VK_RIGHT, etc.
        if ((GetAsyncKeyState('A') & 0x8000) != 0)
        {
            *output = 1;
        }
        // Delay so that polling doesn't consume a lot of CPU time.
        Sleep(100);
    }
    pthread_exit(0);
}

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int iCmdShow)
{
    // DoUnimportantWindowsSetup
    char option = 0;
    pthread_t Inputs;
    // Order: pointer to handle, pointer to thread attributes,
    // pointer to function, pointer to the function's argument.
    pthread_create(&Inputs, NULL, takeInputs, &option);
    // Do stuff
    if (option == 1) doWorks();
    else doNotWorks();
    // Order: thread handle, pointer to a variable that receives the thread's return value.
    pthread_join(Inputs, NULL);
    return 0;
}
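If pulling a pthreads build into Windows is a hassle, the same idea works with C++11's std::thread and an atomic flag, which also removes the data race on option; a sketch:

#include <windows.h>
#include <atomic>
#include <thread>

std::atomic<char> option(0);

// Poll the keyboard from a background thread, as above.
void takeInputs()
{
    while (true) {
        if ((GetAsyncKeyState('A') & 0x8000) != 0)
            option = 1;
        Sleep(100);
    }
}

int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int)
{
    std::thread inputs(takeInputs);
    inputs.detach(); // runs until the process exits
    // ... main loop reads option ...
    return 0;
}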

Serial asynchronous I/O in Windows 7/64

I have a multi-threaded Windows program which is doing serial port asynchronous I/O through "raw" Win API calls. It is working perfectly fine on any Windows version except Windows 7/64.
The problem is that the program can find and set up the COM port just fine, but it cannot send or receive any data. No matter whether I compile the binary on Win XP or 7, I cannot send/receive on Win 7/64. Compatibility mode, running as admin, etc. do not help.
I have managed to narrow down the problem to the FileIOCompletionRoutine callback. Every time it is called, dwErrorCode is always 0, dwNumberOfBytesTransfered is always 0. GetOverlappedResult() from inside the function always return TRUE (everything ok). It seems to set the lpNumberOfBytesTransferred correctly. But the lpOverlapped parameter is corrupt, it is a garbage pointer pointing at garbage values.
I can see that it is corrupt by either checking in the debugger what address the correct OVERLAPPED struct is allocated at, or by setting a temp. global variable to point at it.
My question is: why does this happen, and why does it only happen on Windows 7/64? Is there some issue with calling convention that I am not aware of? Or is the overlapped struct treated differently somehow?
Posting relevant parts of the code below:
class ThreadedComport : public Comport
{
private:
    typedef struct
    {
        OVERLAPPED overlapped;
        ThreadedComport* caller; /* add user data to the struct */
    } OVERLAPPED_overlap;

    OVERLAPPED_overlap _send_overlapped;
    OVERLAPPED_overlap _rec_overlapped;
    ...
    static void WINAPI _send_callback (DWORD dwErrorCode,
                                       DWORD dwNumberOfBytesTransfered,
                                       LPOVERLAPPED lpOverlapped);
    static void WINAPI _receive_callback (DWORD dwErrorCode,
                                          DWORD dwNumberOfBytesTransfered,
                                          LPOVERLAPPED lpOverlapped);
    ...
};
Open/close is done in a base class that has no multi-threading or asynchronous I/O implemented:
void Comport::open (void)
{
    char port[20];
    DCB dcbCommPort;
    COMMTIMEOUTS ctmo_new = {0};

    if(_is_open)
    {
        close();
    }
    sprintf(port, "\\\\.\\COM%d", _port_number);
    _hcom = CreateFile(port,
                       GENERIC_READ | GENERIC_WRITE,
                       0,
                       0,
                       OPEN_EXISTING,
                       0,   // dwFlagsAndAttributes; note: no FILE_FLAG_OVERLAPPED (see the answer below)
                       0);
    if(_hcom == INVALID_HANDLE_VALUE)
    {
        // error handling
    }

    GetCommTimeouts(_hcom, &_ctmo_old);
    ctmo_new.ReadTotalTimeoutConstant = 10;
    ctmo_new.ReadTotalTimeoutMultiplier = 0;
    ctmo_new.WriteTotalTimeoutMultiplier = 0;
    ctmo_new.WriteTotalTimeoutConstant = 0;
    if(SetCommTimeouts(_hcom, &ctmo_new) == FALSE)
    {
        // error handling
    }

    dcbCommPort.DCBlength = sizeof(DCB);
    if(GetCommState(_hcom, &dcbCommPort) == FALSE)
    {
        // error handling
    }

    // setup DCB, this seems to work fine
    dcbCommPort.DCBlength = sizeof(DCB);
    dcbCommPort.BaudRate = baudrate_int;
    if(_parity == PAR_NONE)
    {
        dcbCommPort.fParity = 0; /* disable parity */
    }
    else
    {
        dcbCommPort.fParity = 1; /* enable parity */
    }
    dcbCommPort.Parity = (uint8)_parity;
    dcbCommPort.ByteSize = _databits;
    dcbCommPort.StopBits = _stopbits;
    SetCommState(_hcom, &dcbCommPort);
}

void Comport::close (void)
{
    if(_hcom != NULL)
    {
        SetCommTimeouts(_hcom, &_ctmo_old);
        CloseHandle(_hcom);
        _hcom = NULL;
    }
    _is_open = false;
}
The whole multi-threading and event handling mechanism is rather complex; the relevant parts are:
Send
result = WriteFileEx(_hcom,                            // handle to the file
                     (void*)_write_data,               // pointer to the data to send
                     send_buf_size,                    // number of bytes to write
                     (LPOVERLAPPED)&_send_overlapped,  // pointer to async. i/o data
                     (LPOVERLAPPED_COMPLETION_ROUTINE)&_send_callback);
Receive
result = ReadFileEx(_hcom,                             // handle to the file
                    (void*)_read_data,                 // pointer to the receive buffer
                    _MAX_MESSAGE_LENGTH,               // number of bytes to read
                    (OVERLAPPED*)&_rec_overlapped,     // pointer to async. i/o data
                    (LPOVERLAPPED_COMPLETION_ROUTINE)&_receive_callback);
Callback functions
void WINAPI ThreadedComport::_send_callback (DWORD dwErrorCode,
                                             DWORD dwNumberOfBytesTransfered,
                                             LPOVERLAPPED lpOverlapped)
{
    ThreadedComport* _this = ((OVERLAPPED_overlap*)lpOverlapped)->caller;
    if(dwErrorCode == 0) // no errors
    {
        if(dwNumberOfBytesTransfered > 0)
        {
            _this->_data_sent = dwNumberOfBytesTransfered;
        }
    }
    SetEvent(lpOverlapped->hEvent);
}

void WINAPI ThreadedComport::_receive_callback (DWORD dwErrorCode,
                                                DWORD dwNumberOfBytesTransfered,
                                                LPOVERLAPPED lpOverlapped)
{
    if(dwErrorCode == 0) // no errors
    {
        if(dwNumberOfBytesTransfered > 0)
        {
            ThreadedComport* _this = ((OVERLAPPED_overlap*)lpOverlapped)->caller;
            _this->_bytes_read = dwNumberOfBytesTransfered;
        }
    }
    SetEvent(lpOverlapped->hEvent);
}
EDIT
Updated: I have spent most of the day on the theory that the OVERLAPPED variable went out of scope before the callback executes. I have verified that this never happens, and I have even tried declaring the OVERLAPPED struct as static; the same problem remains. If the OVERLAPPED struct had gone out of scope, I would expect the callback to point at the memory location where the struct was previously allocated, but it doesn't; it points somewhere else, at an entirely unfamiliar memory location. Why it does that, I have no idea.
Maybe Windows 7/64 makes an internal hard copy of the OVERLAPPED struct? I can see how that would cause this behavior, since I am relying on additional parameters sneaked in at the end of the struct (which seems like a hack to me, but apparently I got that "hack" from official MSDN examples).
I have also tried changing the calling convention, but that doesn't work at all; if I change it, the program crashes. (The standard calling convention causes it to crash, whatever the standard is, cdecl? __fastcall also causes a crash.) The calling conventions that work are __stdcall, WINAPI and CALLBACK. I think these are all names for __stdcall, and I have read somewhere that Win64 ignores the calling convention anyhow.
It would seem that the callback is executed because of some "spurious disturbance" in Win 7/64 generating false callback calls with corrupt or irrelevant parameters.
Multi-threaded race conditions are another theory, but in the scenario I am running to reproduce the bug there is only one thread, and I can confirm that the thread calling ReadFileEx is the same one that executes the callback.
I have found the problem; it turned out to be annoyingly simple.
In CreateFile(), I did not specify FILE_FLAG_OVERLAPPED. For reasons unknown, this was not necessary on 32-bit Windows. But if you forget it on 64-bit Windows, it will apparently still generate callbacks through the FileIOCompletionRoutine, only with corrupted parameters.
I haven't found any documentation of this change of behavior anywhere; perhaps it was just an internal bug fix in Windows, since the older documentation also specifies that you must have FILE_FLAG_OVERLAPPED set.
As for my specific case, the bug appeared because I had a base class that assumed synchronous I/O, which has then been inherited by a class using asynchronous I/O.
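In sketch form, the fix in Comport::open() is just the flags argument:

_hcom = CreateFile(port,
                   GENERIC_READ | GENERIC_WRITE,
                   0,
                   0,
                   OPEN_EXISTING,
                   FILE_FLAG_OVERLAPPED, // required for ReadFileEx/WriteFileEx completion callbacks
                   0);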

Uninitialized read problem

The program works fine (apart from random crashes), and Memory Validator reports an "uninitialized read" problem at pD3D = Direct3DCreate9.
What could be the problem ?
init3D.h
class CD3DWindow
{
public:
    CD3DWindow();
    ~CD3DWindow();
    LPDIRECT3D9 pD3D;
    HRESULT PreInitD3D();
    HWND hWnd;
    bool killed;
    VOID KillD3DWindow();
};
init3D.cpp
CD3DWindow::CD3DWindow()
{
    pD3D = NULL;
}

CD3DWindow::~CD3DWindow()
{
    if (!killed) KillD3DWindow();
}

HRESULT CD3DWindow::PreInitD3D()
{
    pD3D = Direct3DCreate9( D3D_SDK_VERSION ); // Here it reports a problem
    if( pD3D == NULL ) return E_FAIL;
    // Other not related code
}

VOID CD3DWindow::KillD3DWindow()
{
    if (killed) return;
    diwrap::input.UnCreate();
    if (hWnd) DestroyWindow(hWnd);
    UnregisterClass( "D3D Window", wc.hInstance );
    killed = true;
}
Inside main app .h
CD3DWindow *d3dWin;
Inside main app .cpp
d3dWin = new CD3DWindow;
d3dWin->PreInitD3D();
And here is the error report:
Error: UNINITIALIZED READ: reading register ebx
#0:00:02.969 in thread 4092
0x7c912a1f <ntdll.dll+0x12a1f> ntdll.dll!RtlUnicodeToMultiByteN
0x7e42d4c4 <USER32.dll+0x1d4c4> USER32.dll!WCSToMBEx
0x7e428b79 <USER32.dll+0x18b79> USER32.dll!EnumDisplayDevicesA
0x4fdfc8c7 <d3d9.dll+0x2c8c7> d3d9.dll!DebugSetLevel
0x4fdfa701 <d3d9.dll+0x2a701> d3d9.dll!D3DPERF_GetStatus
0x4fdfafad <d3d9.dll+0x2afad> d3d9.dll!Direct3DCreate9
0x00644c59 <Temp.exe+0x244c59> Temp.exe!CD3DWindow::PreInitD3D
c:\_work\Temp\initd3d.cpp:32
Edit: Your stack trace is very, very strange: it ends inside USER32.dll, which is part of Windows.
What I might suggest is that you're linking the multi-byte Direct3D against the Unicode D3D libraries, or something like that. You shouldn't be able to make Windows functions trigger an error.
Your Memory Validator application is reporting false positives to you. I would ignore this error and move on.
There is no copy constructor in your class CD3DWindow. This might not be the cause, but it is the very first thing that comes to mind.
If, by any chance, anywhere in your code a temporary copy is made of a CD3DWindow instance, the destructor of that copy will destroy the window handle. Afterwards, your original will try to use that same, now invalid, handle.
The same holds for the assignment operator.
This might even work for some time, as long as the memory has not been overwritten yet. Then suddenly the memory is reused, and your code crashes.
So start by adding this to your class:
private:
    CD3DWindow(const CD3DWindow&);            // left unimplemented intentionally
    CD3DWindow& operator=(const CD3DWindow&); // left unimplemented intentionally
If the compiler complains, check the code it refers to.
Update: Of course, this problem might apply to all your other classes. Please read up on the "Rule of Three".
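To make the hazard concrete, here is the kind of accidental copy this guards against (DrawSomething is a hypothetical function):

// Pass-by-value creates a temporary copy of the window object. When the copy
// is destroyed at the end of the call, its destructor runs KillD3DWindow()
// and destroys the HWND that the original still holds.
void DrawSomething(CD3DWindow win);   // bug: copies, then kills the shared handle

void DrawSomething(CD3DWindow& win);  // fine: no copy is made

With the private copy constructor in place, the first version fails to compile at the call site, which is exactly the early warning you want.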