Is there a _safe_ way to send a string via PostMessage? - c++

I want to raise this question one more time. I already wrote a comment on the accepted answer, but the person who answered seems to be inactive on SO, so I am copying my comment here as a question.
In the accepted answer to the referenced question, there is a potential risk of a memory leak. For example, PostMessage can fail because the message queue is full, or the target window may already have been destroyed (so the delete operator will never be called).
In summary, there is no strong correspondence between posting and receiving a Windows message. On the other hand, there are not many options for passing a pointer (to a string, for example) with a Windows message. I see only two: use objects allocated on the heap, or use global variables. The former has the difficulty described above; the latter avoids that disadvantage, but requires permanently allocating memory that may be used only rarely.
So, I have these questions:
Can someone suggest a way to guard against memory leaks when heap memory is used to "attach" something to a Windows message?
Is there another option for reaching the goal (sending a string, for example, with a Windows message via the PostMessage system call)?
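To make the heap-based option concrete, here is the kind of pattern I mean (a minimal sketch; the message ID and function names are made up for the example). The sender reclaims ownership when PostMessage fails, and the receiver adopts the pointer into a smart pointer immediately:

#include <windows.h>
#include <memory>
#include <string>

const UINT WM_APP_STRING = WM_APP + 1; // hypothetical private message ID

// Sender: ownership passes to the message queue only if PostMessage succeeds.
void PostString(HWND hTarget, const std::wstring& text)
{
    auto payload = std::make_unique<std::wstring>(text);
    if (PostMessageW(hTarget, WM_APP_STRING, 0,
                     reinterpret_cast<LPARAM>(payload.get())))
        payload.release(); // queued: the receiver is now responsible for deletion
    // on failure, the unique_ptr still owns the string and frees it here
}

// Receiver, called from the target window procedure for WM_APP_STRING.
// Adopting the raw pointer immediately guarantees deletion even if the
// handling code throws.
LRESULT OnAppString(LPARAM lParam)
{
    std::unique_ptr<std::wstring> text(
        reinterpret_cast<std::wstring*>(lParam));
    // ... use *text ...
    return 0;
}

Even with this, the destroyed-window case remains: messages still sitting in the queue when the window goes away are lost, strings included.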

Use std::wstringstream and look for EM_STREAMOUT:
wchar_t remmi[1250];

// create the Rich Edit control
int CMainFrame::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
    hc = CreateWindowEx(WS_EX_NOPARENTNOTIFY, MSFTEDIT_CLASS, remmi,
                        ES_MULTILINE | ES_AUTOVSCROLL | WS_VISIBLE | WS_CHILD
                        | WS_TABSTOP | WS_VSCROLL,
                        1, 350, 450, 201,
                        this->m_hWnd, NULL, h, NULL);
    return 0;
}

// stream-out callback: appends the bytes handed over by the control to the
// std::wstringstream passed in through dwCookie
DWORD CALLBACK E(DWORD_PTR dw, LPBYTE pb, LONG cb, LONG *pcb)
{
    std::wstringstream *fr = (std::wstringstream *)dw;
    fr->write((wchar_t *)pb, int(cb / 2)); // cb is in bytes, write() takes a wchar_t count
    *pcb = cb;
    return 0;
}

// stream the control's text into the wstringstream, then copy it back into
// the buffer
void CMainFrame::tr()
{
    std::wstringstream fr;
    EDITSTREAM es = {};
    es.dwCookie = (DWORD_PTR)&fr;
    es.pfnCallback = E;
    ::SendMessage(hc, EM_STREAMOUT, SF_TEXT | SF_UNICODE, (LPARAM)&es);
    ZeroMemory(remmi, 1218 * 2);
    fr.read(remmi, 747);
}

AccessException: Attempted To Read Or Write Protected/Corrupted Memory -- Known Exception, Unknown Reason?

Yes, I know there are a million threads on this exception; I've probably looked at 20-25 of them, but none of the causes seem to correlate to this, sadly (hence the title: known exception, unknown reason).
I've recently been gaining interest in InfoSec. As my first learner's project, I decided to create a basic DLL injector. It seems to be going well so far; however, this exception is grinding me up, and after some relatively extensive research I'm quite puzzled. Oddly enough, the exception is also raised after the function completely finishes.
I couldn't really figure this out myself, since external debuggers wouldn't work with my target application, and that was a whole new, unrelated issue.
Solutions suggested & attempted so far:
Fix/Remove thread status checking (it was wrong)
Ensure the value behind DllPath ptr is being allocated, not the ptr
Marshaling the C# interop parameters
Anyway, here is my hunk of code:
#pragma once
#include "pch.h"
#include "injection.h" // only specifies UserInject as an exportable proto.
DWORD __stdcall UserInject(DWORD ProcessId, PCSTR DllPath, BOOL UseExtended) {
    DWORD length;
    CHAR* buffer;
    LPVOID memry;
    SIZE_T write;
    HANDLE hProc;
    HMODULE kr32;
    HANDLE thread;

    length = GetFullPathName(
        DllPath,
        NULL,
        NULL,
        NULL
    );
    AssertNonNull(length, INVALID_PATH);

    kr32 = GetModuleHandle("kernel32.dll");
    AssertNonNull(kr32, YOUREALLYMESSEDUP);

    buffer = new CHAR[length];
    GetFullPathName(
        DllPath,
        length,
        buffer,
        NULL
    );
    AssertNonNull(buffer, ERR_DEAD_BUFFER);

    hProc = OpenProcess(
        ADMIN,
        FALSE,
        ProcessId
    );
    AssertNonNull(hProc, INVALID_PROCID);

    memry = VirtualAllocEx(
        hProc,
        nullptr,
        sizeof buffer, // note: this is sizeof(CHAR*), not the length of the path
        SHELLCODE_ALLOCATION,
        PAGE_EXECUTE_READWRITE
    );
    AssertNonNull(memry, INVALID_BUFSIZE);

    WriteProcessMemory(
        hProc,
        memry,
        DllPath,
        sizeof DllPath, // note: this is sizeof(PCSTR), so only pointer-size bytes of the path are copied
        &write
    );
    AssertNonNull(write, ERR_SOLID_BUFFER);

    auto decidePrototype = [](BOOL UseExtended, HMODULE kr32) -> decltype(auto) {
        LPVOID procAddress;
        if (!UseExtended) {
            procAddress = (LPVOID)GetProcAddress(kr32, LOADLIB_ORD);
        }
        else {
            procAddress = (LPVOID)GetProcAddress(kr32, LOADLIBX_ORD);
        };
        return (LPTHREAD_START_ROUTINE)procAddress;
    };
    auto loadLibraryAddress = decidePrototype(UseExtended, kr32);

    thread = CreateRemoteThread(
        hProc,
        NULL,
        NULL,
        loadLibraryAddress,
        memry,
        NULL,
        NULL
    );
    AssertNonNull(thread, INVALID_ROUTINE);
    WaitForSingleObject(thread, INFINITE);

    // The status stuff is quite weird; it was an attempt at debugging. The error occurs with or without this code.
    // I left it because 50% of the comments below wouldn't make sense. Just be assured this code is positively *not* the problem (sadly).
    // LPDWORD status = (LPDWORD)1;
    // GetExitCodeThread(thread, status);
    return TRUE; // *status;
}
One obscure macro would be "ADMIN", which expands to "PROCESS_ALL_ACCESS" (shortened to fit in better). Another is "AssertNonNull":
#define AssertNonNull(o, p) if (o == NULL) return p;
I've given debugging this code a shot, but it doesn't halt at any specific point. I've thrown MessageBox tests after each operation (e.g. allocation, writing) in addition to the integrity checks and didn't get any interesting responses.
I'm sorry I can't really add much more detail, but I'm really stonewalled here; I'm not sure what to do, what information to get, or if there's anything to get. In short, I'm just not sure what to look for.
This is also being called from C# (1% pseudocode):
[DllImport(path, CallingConvention = CallingConvention.StdCall)]
static extern int UserInject(uint ProcId, string DllPath, bool UseExtended);
uint validProcId; // integrity tested
string validDllPath; // integrity tested
UserInject(validProcId, validDllPath, true);
If you're interested in my testing application (for reproduction):
#include <iostream>
#include <Windows.h>

static const std::string toPrint = "Hello, World!\n";

int main()
{
    while (true)
    {
        Sleep(1000);
        std::cout << toPrint;
    }
}
To my surprise, this wasn't so much an issue with the code as with the testing application.
The basic injection technique I used is prevented by various exploit protections and security mitigations that Visual Studio 2010+ applies to any application built in release mode.
If I build my testing application in debug mode, there is no exception. If I use a non-VS-built application, there is no exception.
I still need to fix how I create my threads, because no thread is created, but I've figured that out; it should be easy enough.

Output the call stack when the program crashes along with the symbol names

I want to output the call stack when I catch an unhandled exception and my program crashes. I want to do this while the program is still alive, without any post-mortem analysis.
I would rather not use any third-party libraries, which is what most of the answers to similar questions suggest. I'm trying to use StackWalk here.
I am trying to get this to work on Windows.
Here's what I have:
DWORD machine = IMAGE_FILE_MACHINE_I386;
HANDLE process = GetCurrentProcess();
HANDLE thread = GetCurrentThread();

CONTEXT context = {};
context.ContextFlags = CONTEXT_FULL;
RtlCaptureContext(&context);

SymInitialize(process, NULL, TRUE);
SymSetOptions(SYMOPT_LOAD_LINES);

STACKFRAME frame = {};
frame.AddrPC.Offset = context.Eip;
frame.AddrPC.Mode = AddrModeFlat;
frame.AddrFrame.Offset = context.Ebp;
frame.AddrFrame.Mode = AddrModeFlat;
frame.AddrStack.Offset = context.Esp;
frame.AddrStack.Mode = AddrModeFlat;

while (StackWalk(machine, process, thread, &frame, &context, NULL, SymFunctionTableAccess, SymGetModuleBase, NULL))
{
    char * functionName;
    char symbolBuffer[sizeof(IMAGEHLP_SYMBOL) + 255];
    PIMAGEHLP_SYMBOL symbol = (PIMAGEHLP_SYMBOL)symbolBuffer;
    symbol->SizeOfStruct = sizeof(IMAGEHLP_SYMBOL) + 255;
    symbol->MaxNameLength = 254;
    if (SymGetSymFromAddr(process, frame.AddrPC.Offset, NULL, symbol))
    {
        functionName = symbol->Name;
        std::string str(functionName);
        std::wstring stemp = std::wstring(str.begin(), str.end());
        LPCWSTR sw = stemp.c_str();
        MessageBox(NULL, sw, L"Error", MB_ICONERROR | MB_OK); // for testing purposes
        if (str.find("nvd3dum") != std::string::npos) {
            // I'd put a messagebox here telling the user to do something if I find a symbol name I recognize
        }
    }
}
The problem I have with it is that instead of outputting the call stack from the moment the program crashed, I get the very function that was called, along with functions like RtlCaptureContext that I used inside this very function.
I solved it; I've seen a lot of people with the same problem. Put it into the correct context:
CONTEXT context = {};
context.ContextFlags = exceptionInfo->ContextRecord->ContextFlags;
context.Eip = exceptionInfo->ContextRecord->Eip;
context.Ebp = exceptionInfo->ContextRecord->Ebp;
context.Esp = exceptionInfo->ContextRecord->Esp;
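For completeness, a minimal sketch of where such an exceptionInfo could come from (my illustration, not part of the original answer): a top-level handler installed with SetUnhandledExceptionFilter. The filter name is made up, and the Eip/Ebp/Esp fields match the question's x86 build:

#include <windows.h>

LONG WINAPI CrashFilter(EXCEPTION_POINTERS *exceptionInfo)
{
    // Use the context captured at the moment of the crash, not a context
    // captured inside this function.
    CONTEXT context = {};
    context.ContextFlags = exceptionInfo->ContextRecord->ContextFlags;
    context.Eip = exceptionInfo->ContextRecord->Eip;
    context.Ebp = exceptionInfo->ContextRecord->Ebp;
    context.Esp = exceptionInfo->ContextRecord->Esp;
    // ... run the StackWalk loop from the question with this context ...
    return EXCEPTION_EXECUTE_HANDLER;
}

int main()
{
    SetUnhandledExceptionFilter(CrashFilter);
    // ... rest of the program ...
}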
In principle, the call stack is not required to exist in C++. Read the C++11 standard n3337 carefully: the call stack is not mentioned there. So, in theory, some C++ compilers could be clever enough to avoid any call stack (for some particular program given to that compiler). Many C++ compilers optimize tail calls (so the calling and called functions share the same memory locations for their call frames).
In practice, C++ implementations follow the as-if rule. They can, and often do, optimize to the point of inlining functions, even those you did not annotate with inline. When that happens, speaking of a call frame for an inlined function makes no sense.
Also notice that some automatic variables are, in practice, not on the call stack. A compiler is permitted to (and generally does) keep some variables only in processor registers, without spilling them into the call stack; read about register allocation. A given slot in your call frame could also be used to keep several unrelated variables.
So showing the call stack is a quality-of-implementation issue and can depend on external factors (such as the optimization level you requested when compiling your C++ code). On Linux, I recommend using Ian Taylor's libbacktrace. I guess you could find a similar library for call stack inspection on Windows.
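On Windows, one built-in starting point is CaptureStackBackTrace together with SymFromAddr from DbgHelp. Below is a minimal sketch of that approach (my illustration, not part of the original answer; it assumes debug symbols are available for the modules on the stack):

#include <windows.h>
#include <dbghelp.h>
#include <cstdio>
#pragma comment(lib, "dbghelp.lib")

void PrintBacktrace()
{
    HANDLE process = GetCurrentProcess();
    SymInitialize(process, NULL, TRUE);

    // Capture up to 62 return addresses from the current thread's stack.
    void *frames[62] = {};
    USHORT count = CaptureStackBackTrace(0, 62, frames, NULL);

    // SYMBOL_INFO is variable-length; reserve room for the name behind it.
    char symbolBuffer[sizeof(SYMBOL_INFO) + 255] = {};
    SYMBOL_INFO *symbol = reinterpret_cast<SYMBOL_INFO *>(symbolBuffer);
    symbol->SizeOfStruct = sizeof(SYMBOL_INFO);
    symbol->MaxNameLen = 255;

    for (USHORT i = 0; i < count; ++i)
    {
        if (SymFromAddr(process, (DWORD64)frames[i], NULL, symbol))
            std::printf("%u: %s\n", i, symbol->Name);
    }
    SymCleanup(process);
}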

Multithreading with _beginthread and CreateThread

I am trying to write a multithreaded Win32 application in C++, but I am running into difficulties.
One of the window procedures creates a thread which manages the output of this window. When this window procedure receives a message (from the other window procedures), it should forward it to its thread. In the beginning I worked with the _beginthread(...) function, which didn't work.
Then I tried it with the CreateThread(...) function, and it worked. What did I do wrong?
(My English isn't so good; I hope you understand my problem.)
Code with CreateThread(...):
DWORD thHalloHandle; // global
HWND hwndHallo;      // HWND of WndProc4
...
LRESULT APIENTRY WndProc4 (HWND hwnd, UINT message, WPARAM wParam, LPARAM lParam)
{
    static PARAMS params ;
    switch (message)
    {
    case WM_CREATE: {
        params.hwnd = hwnd ;
        params.cyChar = HIWORD (GetDialogBaseUnits ()) ;
        CreateThread(NULL, 0, thHallo, &params, 0, &thHalloHandle);
        return 0 ;
    }
    ...
    case WM_SPACE: {
        PostThreadMessage(thHalloHandle, WM_SPACE, 0, 0);
        return 0;
    }
    ...
}
Code with _beginthread(...):
...
case WM_CREATE: {
    params.hwnd = hwnd ;
    params.cyChar = HIWORD (GetDialogBaseUnits ()) ;
    thHalloHandle = (DWORD)_beginthread (thHallo, 0, &params) ;
    return 0;
}
...
case WM_SPACE: {
    PostThreadMessage(thHalloHandle, WM_SPACE, 0, 0);
    return 0;
}
...
thHallo for CreateThread:
DWORD WINAPI thHallo(void *pvoid)
{
    static TCHAR *szMessage[] = { TEXT(...), ...};
    // Some Declaration
    pparams = (PPARAMS) pvoid;
    while(!pparams->bKill)
    {
        MsgReturn = GetMessage(&msg, NULL, 0, 0);
        hdc = GetDC(pparams->hwnd);
        if(MsgReturn)
        {
            switch(msg.message)
            {
            // case....
            }
        }
    }
    return 0;
}
thHallo for _beginthread(...):
void thHallo(void *pvoid)
{
    ...
    // the same as for CreateThread
    ...
    _endthread();
}
The _beginthread/ex() function is proving to be radically difficult to eliminate. It was necessary back in the previous century; VS6 was the last Visual Studio version that required it. It was a band-aid to allow the CRT to allocate thread-local state for internal CRT variables, like the ones used by strtok() and gmtime(), CRT functions that maintain internal state. That state must be stored separately for each thread so that the use of, say, strtok() in one thread doesn't screw up the use of strtok() in another thread. It must be stored in thread-local state; _beginthread/ex() ensured that this state was allocated and cleaned up again.
That was reworked, necessarily so, when Windows 2000 introduced the thread pool: there is no possible way to get that internal CRT state initialized when your code gets called by a thread-pool thread. Quite an effort, by the way; the hardest problem they had to solve was ensuring that the thread-local state is automatically cleaned up again when the thread stops running. Many a program has died from that going wrong; Apple's QuickTime is a particularly nasty source of these crashes.
So forget that _beginthread() ever existed; using CreateThread() is fine.
There's a serious problem with your use of PostThreadMessage(). You used the wrong argument in your _beginthread() code, which is why it didn't work. But there are bigger problems with it: the message that is posted can only ever be retrieved in your own message loop. That works fine until it is no longer your message loop that is dispatching messages, which happens in many cases in a GUI app. Simple examples are using MessageBox(), DialogBox(), or the user resizing the window: modal code that works by Windows itself pumping a message loop.
The big problem is that the message loop in that modal code knows beans about the messages you posted. They just fall into the bit bucket and disappear without a trace. The DispatchMessage() call inside that modal loop fails because the message you posted has a NULL window handle.
You must fix this by using PostMessage() instead, which requires a window handle. You can use any window handle; the handle of your main window is a decent choice. Better yet, you can create a dedicated window, one that just isn't visible, with its own WndProc() that handles only these inter-thread messages. That is a very common choice, and DispatchMessage() can then no longer fail, which solves your bug as well.
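A minimal sketch of such a dedicated invisible window (my illustration; the class name and private message ID are made up). Creating it with HWND_MESSAGE makes it a message-only window that never becomes visible:

#include <windows.h>

const UINT WM_APP_HALLO = WM_APP + 1; // hypothetical inter-thread message

LRESULT CALLBACK InterThreadProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    if (msg == WM_APP_HALLO)
    {
        // ... handle the inter-thread message ...
        return 0;
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}

HWND CreateInterThreadWindow(HINSTANCE hInst)
{
    WNDCLASS wc = {};
    wc.lpfnWndProc = InterThreadProc;
    wc.hInstance = hInst;
    wc.lpszClassName = TEXT("InterThreadSink");
    RegisterClass(&wc);
    // HWND_MESSAGE creates a message-only window: it is never shown,
    // it only receives messages.
    return CreateWindow(wc.lpszClassName, NULL, 0, 0, 0, 0, 0,
                        HWND_MESSAGE, NULL, hInst, NULL);
}

// Any thread can then post to it safely:
//   PostMessage(hSink, WM_APP_HALLO, 0, 0);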
Your call to CreateThread puts the thread ID into thHalloHandle. The call to _beginthread puts the thread handle into thHalloHandle.
Now, the thread ID is not the same as the thread handle. When you call PostThreadMessage you need to supply a thread ID. You only do that in the CreateThread variant, which I believe explains the problem.
Your code lacks error checking. Had you checked for errors on the call to PostThreadMessage you would have found that PostThreadMessage returned FALSE. Had you then gone on to call GetLastError that would have returned ERROR_INVALID_THREAD_ID. I do urge you to include proper error checking.
In order to address this you must first be more clear on the difference between thread ID and thread handle. You should give thHalloHandle a different name: thHalloThreadId perhaps. If you wish to use _beginthread you will have to call GetThreadId, passing the thread handle, to obtain the thread ID. Alternatively, use _beginthreadex which yields the thread ID, or indeed CreateThread.
Your problem is that you need a TID (thread identifier) to use PostThreadMessage.
_beginthread doesn't return a TID; it returns a thread handle.
The solution is to use the GetThreadId function:
HANDLE hThread = (HANDLE)_beginthread (thHallo, 0, &params) ;
thHalloHandle = GetThreadId( hThread );
Better code (see the documentation here):
HANDLE hThread = (HANDLE)_beginthreadex(NULL, 0, thHallo, &params, 0, &thHalloHandle ) ;

Open process with debug privileges and read/write memory

Short version:
I'm trying to open a process handle with debug privileges and define a pointer which points to an object in the memory of the debuggee.
Long version:
I'm a computer science university student in my final year and have been tasked with building an application to be used for educational purposes by the next generation of students.
Why am I here asking for help, you might ask? Well, the target platform is Windows, and I unfortunately have no knowledge of the WinAPI whatsoever...
Okay, here is the basic requirement:
Programming language: C++
Platform: Windows (7 Professional)
Used IDE: Visual Studio 2012
No additional libraries unless they are essential to ease development
What will the application be used for?
Using this application, the students shall learn to handle addresses, in this case static ones: the debuggee process will have some static pointers which lead to other pointers, themselves forming a multi-level pointer chain.
The students have to find these base addresses using some debugging techniques (which is not part of my work!) and try to find the values at the end of these pointer chains.
My application will be used by the tutors to randomly change the values and/or structures in the debuggee process.
Some searching yielded a first answer: using ReadProcessMemory and WriteProcessMemory, one can easily change values in the memory of another process without any need for debug privileges.
What my tutors want, however, is the ability to define pointers (let's say unsigned int) which point into the memory space of the debuggee process, effectively holding the base addresses I wrote about earlier.
They really want this, and I couldn't talk them out of it, so I'm stuck doing it in the end...
And what exactly should work?
Well, I'd have accomplished my task if the following (pseudo) code works:
grantThisProcessDebugPrivileges();
openAnotherProcessWhileItsRunning("targetProcess.exe");
unsigned int * targetValue = (unsigned int*) 0xDE123F00;
// or even
myCustomClass * targetClass = (myCustomClass*) 0xDE123F00;
where the address 0xDE123F00 lies in the memory space of targetProcess.exe.
I know this is possible, else there wouldn't be debuggers which could show this information.
What I did so far (or tried...)
Okay, the thing is: I'm really confused about whether I have to activate debug privileges for my application prior to opening the target process, do it after opening, or rather give the target process these privileges.
So I found an example on MSDN and tried to implement it:
BOOL SetPrivilege(
    HANDLE hToken,         // token handle
    LPCTSTR Privilege,     // privilege to enable/disable
    BOOL bEnablePrivilege  // TRUE to enable, FALSE to disable
    )
{
    TOKEN_PRIVILEGES tp;
    LUID luid;
    TOKEN_PRIVILEGES tpPrevious;
    DWORD cbPrevious = sizeof(TOKEN_PRIVILEGES);

    if (!LookupPrivilegeValue(NULL, Privilege, &luid)) return FALSE;

    //
    // first pass: get current privilege setting
    //
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Luid = luid;
    tp.Privileges[0].Attributes = 0;
    AdjustTokenPrivileges(
        hToken,
        FALSE,
        &tp,
        sizeof(TOKEN_PRIVILEGES),
        &tpPrevious,
        &cbPrevious
    );
    if (GetLastError() != ERROR_SUCCESS) return FALSE;

    //
    // second pass: set privilege based on previous setting
    //
    tpPrevious.PrivilegeCount = 1;
    tpPrevious.Privileges[0].Luid = luid;
    if (bEnablePrivilege) {
        tpPrevious.Privileges[0].Attributes |= (SE_PRIVILEGE_ENABLED);
    }
    else {
        tpPrevious.Privileges[0].Attributes ^= (SE_PRIVILEGE_ENABLED &
            tpPrevious.Privileges[0].Attributes);
    }
    AdjustTokenPrivileges(
        hToken,
        FALSE,
        &tpPrevious,
        cbPrevious,
        NULL,
        NULL
    );
    if (GetLastError() != ERROR_SUCCESS) return FALSE;

    return TRUE;
}
And in my main:
HANDLE mainToken;

// I really don't know what this block of code does :<
if (!OpenThreadToken(GetCurrentThread(), TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, FALSE, &mainToken))
{
    if (GetLastError() == ERROR_NO_TOKEN)
    {
        if (!ImpersonateSelf(SecurityImpersonation))
            return 1;
        if (!OpenThreadToken(GetCurrentThread(), TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, FALSE, &mainToken)) {
            cout << GetLastError();
            return 1;
        }
    }
    else
        return 1;
}

if (!SetPrivilege(mainToken, SE_DEBUG_NAME, true))
{
    CloseHandle(mainToken);
    cout << "Couldn't set DEBUG MODE: " << GetLastError() << endl;
    return 1;
}

unsigned int processID = getPID("targetProcess.exe");
HANDLE hproc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, processID);
if (hproc == NULL)
{
    cout << "Couldn't open the process" << endl;
    return 1;
}

unsigned int * theValue = (unsigned int*) 0xDE123F;
Okay, this code runs without any errors, and SetPrivilege returns TRUE, so I guess it really did set SE_DEBUG_NAME, which I think is the flag I need.
But after, for example, outputting the dereferenced value of theValue, the application crashes with an access violation, which shows that my approach didn't work. I paid special attention to starting the Visual Studio debugger with admin rights (SetPrivilege failed otherwise).
I am really clueless here; the fact that I don't know whether setting SE_DEBUG_NAME is even the right approach adds to my overall confusion.
I hope you can help me out :)
My hands are tied concerning the specific requests for the application. If you have ideas for achieving my goal using an entirely different approach, feel free to enlighten me, but I won't be able to present it to my superiors, so it will only add to my knowledge :D
From your description, it appears that you have gotten to the point where you can open the process with SE_DEBUG. At that point you have a handle to the target process.
What your code appears to be missing is the use of ReadProcessMemory.
First we need to look at the definition of ReadProcessMemory:
BOOL WINAPI ReadProcessMemory(
_In_ HANDLE hProcess,
_In_ LPCVOID lpBaseAddress,
_Out_ LPVOID lpBuffer,
_In_ SIZE_T nSize,
_Out_ SIZE_T *lpNumberOfBytesRead);
This function essentially gives you the ability to copy a block of memory from one process space into your process space. So you need to use this method to read a block of memory the size of the data structure you wish to read into your process space, then you can reinterpret the memory block as that data type.
So semi-pseudocode for reading an unsigned int from your target process looks like this:
unsigned int ReadUInt(HANDLE process, const void * address)
{
// Add parameter validation
unsigned char buffer[sizeof(unsigned int)] = {};
size_t bytesRead = 0;
BOOL res = ::ReadProcessMemory(process, // The handle you opened with SE_DEBUG privs
address, // The location in the other process
buffer, // Where to transfer the memory to
sizeof(unsigned int), // The number of bytes to read
&bytesRead); // The number of bytes actually read
if (!res)
{
// Deal with the error
}
if (bytesRead != sizeof(unsigned int))
{
// Deal with error where we didn't get enough memory
}
return *reinterpret_cast<unsigned int *>(buffer);
}
Instead of using this line:
unsigned int * theValue = (unsigned int*) 0xDE123F00;
You would do this:
unsigned int theValue = ReadUInt(hproc, reinterpret_cast<const void *>(0xDE123F00));
Keep in mind that this requires that you know the size and memory layout of the types you are trying to read. Simple types that are contained in contiguous memory can be retrieved in a single ReadProcessMemory call. Types that contain pointers as well as values will require you to make extra calls to ReadProcessMemory to find the values referenced by the pointers.
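Building on the ReadUInt sketch above, following one of the multi-level pointer chains from the question could look like this (my illustration; the base address and offsets are made up, and 32-bit addresses are assumed to match the question's environment):

#include <vector>

// Follows base -> [base] + off1 -> [[base] + off1] + off2 -> ... and
// returns the final address; each hop costs one ReadProcessMemory call.
unsigned int FollowPointerChain(HANDLE process, unsigned int base,
                                const std::vector<unsigned int>& offsets)
{
    unsigned int address = base;
    for (unsigned int offset : offsets)
    {
        address = ReadUInt(process, reinterpret_cast<const void *>(address));
        address += offset;
    }
    return address;
}

// Hypothetical usage: read the value at [[0xDE123F00] + 0x10] + 0x4
//   unsigned int addr = FollowPointerChain(hproc, 0xDE123F00, {0x10, 0x4});
//   unsigned int value = ReadUInt(hproc, reinterpret_cast<const void *>(addr));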
Each process has its own virtual address space; an address in one process has meaning only in that process. De-referencing a pointer in C++ code accesses the virtual address space of the executing process.
When you de-referenced the pointer in your code, you were actually attempting to access memory in your own process. No amount of wishful thinking on the part of your tutors can make a pointer de-reference access memory in another process.
If you wish to read and write memory in other processes, you must use ReadProcessMemory and WriteProcessMemory.
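For completeness, a write-side counterpart to the ReadUInt sketch above (same assumptions; error handling reduced to a bool):

bool WriteUInt(HANDLE process, void *address, unsigned int value)
{
    SIZE_T bytesWritten = 0;
    BOOL res = ::WriteProcessMemory(process,  // handle opened with sufficient access
                                    address,  // location in the other process
                                    &value,   // data to copy into that process
                                    sizeof(unsigned int),
                                    &bytesWritten);
    return res && bytesWritten == sizeof(unsigned int);
}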
I don't think you really need to go to all those lengths with tokens and privileges. If I recall correctly, you add the debug privilege, call OpenProcess, and go straight to it. And I think you can typically skip adding the privilege.
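For reference, a compact way to enable that privilege on the current process token, without the impersonation dance from the question (a sketch of the usual pattern, not code from the original answer):

#include <windows.h>

bool EnableDebugPrivilege()
{
    HANDLE token = NULL;
    if (!OpenProcessToken(GetCurrentProcess(),
                          TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token))
        return false;

    TOKEN_PRIVILEGES tp = {};
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    if (!LookupPrivilegeValue(NULL, SE_DEBUG_NAME, &tp.Privileges[0].Luid))
    {
        CloseHandle(token);
        return false;
    }

    // AdjustTokenPrivileges can succeed yet assign nothing, reporting
    // ERROR_NOT_ALL_ASSIGNED, so check GetLastError() as well.
    BOOL adjusted = AdjustTokenPrivileges(token, FALSE, &tp, sizeof(tp), NULL, NULL);
    bool ok = adjusted && GetLastError() == ERROR_SUCCESS;
    CloseHandle(token);
    return ok;
}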
Some search did yield the first answer: using ReadProcessMemory and WriteProcessMemory one can easily change values in the memory of another process without any need to get debug privileges. What my tutors want, however, is to have the ability to define pointers (let's say unsigned int) which should point into the memory space of the debuggee process, effectively holding the base addresses I wrote about earlier. They really want this and I couldn't even talk this out of them so I'm stuck to do this at the end...
What they want is impossible. I suggest you tell them to get a better understanding of virtual memory before making impossible requirements!
@Cody Gray helpfully mentions memory-mapped files. If debuggee and debugger co-operate, then they can use memory-mapped files to share a common region of memory. In that situation both processes can map the memory into their virtual address space and access it in the normal manner.
I rather assumed that your debuggee was an unwilling victim, but if it is prepared to co-operate, then sharing memory could be an option.
Even then you'd need to be careful with any pointers in that shared memory, because the memory would, in general, be mapped at different virtual addresses in each process.
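A sketch of that co-operative setup using a named file mapping (my illustration; the section name and size are made up, and both processes would run the same few calls):

#include <windows.h>

// In each co-operating process: create or open a named 4 KiB shared
// section backed by the page file, then map a view of it.
unsigned int *MapSharedBlock()
{
    HANDLE mapping = CreateFileMapping(INVALID_HANDLE_VALUE, // page-file backed
                                       NULL,
                                       PAGE_READWRITE,
                                       0, 4096,
                                       TEXT("Local\\StudentSharedBlock"));
    if (mapping == NULL)
        return NULL;

    // The mapping handle is deliberately kept open for the process lifetime.
    // Note: the view may land at a different virtual address in each
    // process, so store offsets, never raw pointers, inside the block.
    return static_cast<unsigned int *>(
        MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, 4096));
}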
I think you are trying to access a kernel-space memory range, hence the exception.
The user-space range is 0x00000000 - 0x7FFFFFFF, so try accessing addresses in this range, as anything above is kernel space.
I am assuming you are on a 32-bit machine.
Check User Space and System Space (Microsoft Docs).
You can create a type that behaves like a pointer by implementing the appropriate operators, just like shared_ptr does:
foreign_ptr<int> ptr{0xDE123F00};
int a = *ptr;
*ptr = 1;
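A sketch of what such a foreign_ptr might look like (my illustration, not a standard type; unlike the two-line example above, it also takes the process handle explicitly). operator* returns a proxy that reads through ReadProcessMemory on conversion and writes through WriteProcessMemory on assignment:

#include <windows.h>
#include <cstdint>

template <typename T>
class foreign_ptr
{
    HANDLE process_;
    std::uintptr_t address_;

    // Proxy returned by operator*: reads on conversion, writes on assignment.
    struct reference
    {
        HANDLE process;
        std::uintptr_t address;

        operator T() const
        {
            T value{};
            ReadProcessMemory(process, reinterpret_cast<LPCVOID>(address),
                              &value, sizeof(T), NULL);
            return value;
        }
        reference& operator=(const T& value)
        {
            WriteProcessMemory(process, reinterpret_cast<LPVOID>(address),
                               &value, sizeof(T), NULL);
            return *this;
        }
    };

public:
    foreign_ptr(HANDLE process, std::uintptr_t address)
        : process_(process), address_(address) {}

    reference operator*() const { return reference{ process_, address_ }; }
};

// Usage, mirroring the example above (error handling omitted):
//   foreign_ptr<int> ptr{ hproc, 0xDE123F00 };
//   int a = *ptr;
//   *ptr = 1;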

Uninitialized read problem

The program works fine (with random crashes), and Memory Validator reports an uninitialized read problem in pD3D = Direct3DCreate9.
What could be the problem?
init3D.h
class CD3DWindow
{
public:
    CD3DWindow();
    ~CD3DWindow();
    LPDIRECT3D9 pD3D;
    HRESULT PreInitD3D();
    HWND hWnd;
    bool killed;
    VOID KillD3DWindow();
};
init3D.cpp
CD3DWindow::CD3DWindow()
{
    pD3D = NULL;
}

CD3DWindow::~CD3DWindow()
{
    if (!killed) KillD3DWindow();
}

HRESULT CD3DWindow::PreInitD3D()
{
    pD3D = Direct3DCreate9( D3D_SDK_VERSION ); // Here it reports a problem
    if( pD3D == NULL ) return E_FAIL;
    // Other not related code
}

VOID CD3DWindow::KillD3DWindow()
{
    if (killed) return;
    diwrap::input.UnCreate();
    if (hWnd) DestroyWindow(hWnd);
    UnregisterClass( "D3D Window", wc.hInstance );
    killed = true;
}
Inside main app .h
CD3DWindow *d3dWin;
Inside main app .cpp
d3dWin = new CD3DWindow;
d3dWin->PreInitD3D();
And here is the error report:
Error: UNINITIALIZED READ: reading register ebx
#0:00:02.969 in thread 4092
0x7c912a1f <ntdll.dll+0x12a1f> ntdll.dll!RtlUnicodeToMultiByteN
0x7e42d4c4 <USER32.dll+0x1d4c4> USER32.dll!WCSToMBEx
0x7e428b79 <USER32.dll+0x18b79> USER32.dll!EnumDisplayDevicesA
0x4fdfc8c7 <d3d9.dll+0x2c8c7> d3d9.dll!DebugSetLevel
0x4fdfa701 <d3d9.dll+0x2a701> d3d9.dll!D3DPERF_GetStatus
0x4fdfafad <d3d9.dll+0x2afad> d3d9.dll!Direct3DCreate9
0x00644c59 <Temp.exe+0x244c59> Temp.exe!CD3DWindow::PreInitD3D
c:\_work\Temp\initd3d.cpp:32
Edit: Your stack trace is very, very strange: inside USER32.dll? That's part of Windows.
What I might suggest is that you're linking the multi-byte Direct3D against the Unicode D3D libraries, or something like that. You shouldn't be able to cause Windows functions to trigger an error.
Your Memory Validator application is reporting false positives to you. I would ignore this error and move on.
There is no copy constructor in your class CD3DWindow. This might not be the cause, but it is the very first thing that comes to mind.
If, by any chance, anywhere in your code a temporary copy is made of a CD3DWindow instance, the destructor of that copy will destroy the window handle. Afterwards, your original will try to use that same, now invalid, handle.
The same holds for the assignment operator.
This might even work for some time, as long as the memory has not been overwritten yet. Then suddenly the memory is reused and your code crashes.
So start by adding this to your class:
private:
CD3DWindow(const CD3DWindow&); // left unimplemented intentionally
CD3DWindow& operator=(const CD3DWindow&); // left unimplemented intentionally
If the compiler complains, check the code it refers to.
Update: Of course, this problem might apply to all your other classes as well. Please read up on the "Rule of Three".
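For what it's worth, in C++11 and later the same intent is usually expressed with deleted special member functions, which turn an accidental copy into a clear compile error (a sketch based on the class from the question):

class CD3DWindow
{
public:
    CD3DWindow();
    ~CD3DWindow();

    // Non-copyable: a copy would let two objects destroy the same window.
    CD3DWindow(const CD3DWindow&) = delete;
    CD3DWindow& operator=(const CD3DWindow&) = delete;
    // ... rest of the class as before ...
};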