I've made this simple class to open a process and read memory from it:
The problem is that when I call ReadDWORD with any memory address, ReadProcessMemory fails with error code 6, ERROR_INVALID_HANDLE ("The handle is invalid"), and I can't figure out what I'm doing wrong.
If I put the OpenProcess part in the ReadDWORD function it works fine. Is there something wrong with how I store the handle? Why does it become invalid before I use it?
Memory.h
#ifndef MEMORY_H
#define MEMORY_H
#include <windows.h>
#include <psapi.h>
#pragma comment(lib, "psapi.lib")
#include <iostream>
class Memory
{
public:
Memory();
Memory(DWORD offset);
~Memory();
DWORD ReadDWORD(DWORD addr);
private:
HANDLE m_hProc;
DWORD m_Offset;
};
#endif
Memory.cpp
#include "Memory.h"
Memory::Memory()
{
Memory(0);
}
Memory::Memory(DWORD offset)
{
m_hProc = OpenProcess(PROCESS_VM_READ | PROCESS_QUERY_INFORMATION, false, 5444); // 5444 is the PID of a process I'm testing this with
m_Offset = offset;
}
Memory::~Memory()
{
CloseHandle(m_hProc);
}
DWORD Memory::ReadDWORD(DWORD addr)
{
// Optional memory offset
addr += m_Offset;
DWORD value = -1;
int result = ReadProcessMemory(m_hProc, (LPVOID)addr, &value, sizeof(DWORD), NULL);
if (result == 0)
std::cout << "ReadProcessMemory error: " << GetLastError() << std::endl;
return value;
}
Memory::Memory()
{
Memory(0);
}
This isn't doing what you think it's doing: it's not actually calling the other constructor; instead it creates a temporary Memory object that is immediately discarded. So you are opening the process, but in a separate temporary object, while this object remains uninitialized.
A safer approach is to have a separate Initialize(offset) method that you call from both constructors, as sketched below.
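For example (a sketch; Initialize is my name for it and would need a private declaration in the class, and since C++11 you could instead delegate directly with Memory::Memory() : Memory(0) {}):

Memory::Memory()
{
    Initialize(0);
}

Memory::Memory(DWORD offset)
{
    Initialize(offset);
}

void Memory::Initialize(DWORD offset)
{
    // Both constructors funnel through this one code path.
    m_hProc = OpenProcess(PROCESS_VM_READ | PROCESS_QUERY_INFORMATION, FALSE, 5444);
    m_Offset = offset;
}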
(The advice in the other answers is also good: check your return values, and where you get an ERROR_INVALID_HANDLE, check that the handle is something that looks like a handle. Or set a breakpoint at the OpenProcess and ReadProcessMemory calls and check that the same value is being used in both places. C++ is full of surprises, and there's often no substitute for stepping through the code to make sure it's doing what you think it's doing.)
To access other processes, you often need to enable certain privileges. SeDebugPrivilege comes to mind. See here. Otherwise see the suggestion from Hans Passant (i.e. GetLastError).
You can use RtlAdjustPrivilege function to get SeDebugPrivilege.
NTSTATUS NTAPI RtlAdjustPrivilege(ULONG, BOOLEAN, BOOLEAN, PBOOLEAN); // prototype of RtlAdjustPrivilege
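For completeness, here is a sketch of enabling SeDebugPrivilege with the documented token APIs instead (the process must already be running elevated for this to succeed):

#include <windows.h>

// Enable SeDebugPrivilege for the current process.
BOOL EnableDebugPrivilege(void)
{
    HANDLE hToken;
    TOKEN_PRIVILEGES tp;
    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &hToken))
        return FALSE;
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    if (!LookupPrivilegeValue(NULL, SE_DEBUG_NAME, &tp.Privileges[0].Luid))
    {
        CloseHandle(hToken);
        return FALSE;
    }
    // AdjustTokenPrivileges can return TRUE even if the privilege was not
    // actually granted, so check GetLastError() as well.
    BOOL ok = AdjustTokenPrivileges(hToken, FALSE, &tp, sizeof(tp), NULL, NULL)
              && GetLastError() == ERROR_SUCCESS;
    CloseHandle(hToken);
    return ok;
}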
I need a way to self-delete the executable within my Win32 C++ project, and I found a program that does it in C:
selfdel.c:
// http://www.catch22.net/tuts/win32/self-deleting-executables
// selfdel.c
//
// Self deleting executable for Win9x/WinNT (works for all versions of windows)
//
// J Brown 1/10/2003
//
// This source file must be compiled with /GZ turned OFF
// (basically, disable run-time stack checks)
//
// Under debug build this is always on (MSVC6)
//
//
/**
* The way this works is:
* Firstly a child process is created in a suspended state (any process will do - i.e. explorer.exe).
* Some code is then injected into the address-space of the child process.
* The injected code waits for the parent-process to exit.
* The parent-process is then deleted.
* The injected code then calls ExitProcess, which terminates the child process.
*/
#include <windows.h>
#include <tchar.h>
#pragma pack(push, 1)
#define CODESIZE 0x200
//
// Structure to inject into remote process. Contains
// function pointers and code to execute.
//
typedef struct _SELFDEL
{
struct _SELFDEL *Arg0; // pointer to self
BYTE opCodes[CODESIZE]; // code
HANDLE hParent; // parent process handle
FARPROC fnWaitForSingleObject;
FARPROC fnCloseHandle;
FARPROC fnDeleteFile;
FARPROC fnSleep;
FARPROC fnExitProcess;
FARPROC fnRemoveDirectory;
FARPROC fnGetLastError;
BOOL fRemDir;
TCHAR szFileName[MAX_PATH]; // file to delete
} SELFDEL;
#pragma pack(pop)
#ifdef _DEBUG
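// In debug builds, incremental linking makes a function symbol point at a
// JMP thunk (0xE9 + rel32), so this macro follows the jump to find the
// address of the real code.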
#define FUNC_ADDR(func) (PVOID)(*(DWORD *)((BYTE *)func + 1) + (DWORD)((BYTE *)func + 5))
#else
#define FUNC_ADDR(func) func
#endif
//
// Routine to execute in remote process.
//
static void remote_thread(SELFDEL *remote)
{
// wait for parent process to terminate
remote->fnWaitForSingleObject(remote->hParent, INFINITE);
remote->fnCloseHandle(remote->hParent);
// try to delete the executable file
while(!remote->fnDeleteFile(remote->szFileName))
{
// failed - try again in one second's time
remote->fnSleep(1000);
}
// finished! exit so that we don't execute garbage code
remote->fnExitProcess(0);
}
//
// Delete currently running executable and exit
//
BOOL SelfDelete(BOOL fRemoveDirectory)
{
STARTUPINFO si = { sizeof(si) };
PROCESS_INFORMATION pi;
CONTEXT context;
DWORD oldProt;
SELFDEL local;
DWORD entrypoint;
TCHAR szExe[MAX_PATH] = _T("explorer.exe");
//
// Create executable suspended
//
if(CreateProcess(0, szExe, 0, 0, 0, CREATE_SUSPENDED|IDLE_PRIORITY_CLASS, 0, 0, &si, &pi))
{
local.fnWaitForSingleObject = (FARPROC)WaitForSingleObject;
local.fnCloseHandle = (FARPROC)CloseHandle;
local.fnDeleteFile = (FARPROC)DeleteFile;
local.fnSleep = (FARPROC)Sleep;
local.fnExitProcess = (FARPROC)ExitProcess;
local.fnRemoveDirectory = (FARPROC)RemoveDirectory;
local.fnGetLastError = (FARPROC)GetLastError;
local.fRemDir = fRemoveDirectory;
// Give remote process a copy of our own process handle
DuplicateHandle(GetCurrentProcess(), GetCurrentProcess(),
pi.hProcess, &local.hParent, 0, FALSE, 0);
GetModuleFileName(0, local.szFileName, MAX_PATH);
// copy in binary code
memcpy(local.opCodes, FUNC_ADDR(remote_thread), CODESIZE);
//
// Allocate some space on process's stack and place
// our SELFDEL structure there. Then set the instruction pointer
// to this location and let the process resume
//
context.ContextFlags = CONTEXT_INTEGER|CONTEXT_CONTROL;
GetThreadContext(pi.hThread, &context);
// Allocate space on stack (aligned to cache-line boundary)
entrypoint = (context.Esp - sizeof(SELFDEL)) & ~0x1F;
//
// Place a pointer to the structure at the bottom-of-stack
// this pointer is located in such a way that it becomes
// the remote_thread's first argument!!
//
local.Arg0 = (SELFDEL *)entrypoint;
context.Esp = entrypoint - 4; // create dummy return address
context.Eip = entrypoint + 4; // offset of opCodes within structure
// copy in our code+data at the exe's entry-point
VirtualProtectEx(pi.hProcess, (PVOID)entrypoint, sizeof(local), PAGE_EXECUTE_READWRITE, &oldProt);
WriteProcessMemory(pi.hProcess, (PVOID)entrypoint, &local, sizeof(local), 0);
FlushInstructionCache(pi.hProcess, (PVOID)entrypoint, sizeof(local));
SetThreadContext(pi.hThread, &context);
// Let the process continue
ResumeThread(pi.hThread);
CloseHandle(pi.hThread);
CloseHandle(pi.hProcess);
return TRUE;
}
return FALSE;
}
Compiling this alone using the Visual Studio Developer Command Prompt and cl creates an executable that runs just as expected. Putting it in my C++ project produces an error within remote_thread:
I can't figure out this error; building in x64 and x86 gives the same one. I have tried dropping the file in by itself, wrapping it in extern "C" {}, and bringing it in as a .c file linked through a header, but the header approach broke all the other Windows header files (probably because the compiler couldn't differentiate between C headers and C++ headers?). That's beside the point, though: what am I missing here?
FARPROC does not work the same way in C++ as it does in C. This is actually described in the CallWindowProc documentation:
...
The FARPROC type is declared as follows:
int (FAR WINAPI * FARPROC) ()
In C, the FARPROC declaration indicates a callback function that has an unspecified parameter list. In C++, however, the empty parameter list in the declaration indicates that a function has no parameters. This subtle distinction can break careless code.
...
So, you will have to use proper function-pointer signatures instead, eg:
typedef struct _SELFDEL
{
...
DWORD (WINAPI *fnWaitForSingleObject)(HANDLE, DWORD);
BOOL (WINAPI *fnCloseHandle)(HANDLE);
BOOL (WINAPI *fnDeleteFile)(LPCTSTR);
void (WINAPI *fnSleep)(DWORD);
void (WINAPI *fnExitProcess)(UINT);
BOOL (WINAPI *fnRemoveDirectory)(LPCTSTR);
DWORD (WINAPI *fnGetLastError)();
...
} SELFDEL;
SELFDEL local;
...
local.fnWaitForSingleObject = &WaitForSingleObject;
local.fnCloseHandle = &CloseHandle;
local.fnDeleteFile = &DeleteFile;
local.fnSleep = &Sleep;
local.fnExitProcess = &ExitProcess;
local.fnRemoveDirectory = &RemoveDirectory;
local.fnGetLastError = &GetLastError;
...
As covered in the comments, the problem here is the code is not valid when compiled as C++ because the function prototypes are incorrect. You can force the compiler to compile the code as C by giving the file a .c suffix.
Your calling C++ code must then be told that SelfDelete is a C function rather than a C++ function, because C++ functions have mangled names to allow for things like overloading. You do this by declaring the function extern "C" in your C++ code:
extern "C" BOOL SelfDelete(BOOL fRemoveDirectory);
Just put this in your calling C++ code, somewhere before you actually call the function. Without this, your program will not link, because the C code and C++ code will disagree on what the function is actually called behind the scenes.
Yes, I know there are a million threads on this exception; I've probably looked at 20-25 of them, but none of the causes seem to correlate to this one, sadly (hence the title: known exception, unknown reason).
I've recently been gaining interest in InfoSec, and as my first learner's project I decided to create a basic DLL injector. It seems to be going well so far, but this exception is grinding me up, and after some relatively extensive research I'm quite puzzled. Oddly enough, the exception is also raised after the function completely finishes.
I couldn't really figure this out myself since external debuggers wouldn't work with my target application, and that was a whole new unrelated issue.
Solutions suggested & attempted so far:
Fix/Remove thread status checking (it was wrong)
Ensure the value behind DllPath ptr is being allocated, not the ptr
Marshaling the C# interop parameters
Anyway, here is my hunk of code:
#pragma once
#include "pch.h"
#include "injection.h" // only specifies UserInject as an exportable proto.
DWORD __stdcall UserInject(DWORD ProcessId, PCSTR DllPath, BOOL UseExtended) {
DWORD length;
CHAR* buffer;
LPVOID memry;
SIZE_T write;
HANDLE hProc;
HMODULE kr32;
HANDLE thread;
length = GetFullPathName(
DllPath,
NULL,
NULL,
NULL
);
AssertNonNull(length, INVALID_PATH);
kr32 = GetModuleHandle("kernel32.dll");
AssertNonNull(kr32, YOUREALLYMESSEDUP);
buffer = new CHAR[length];
GetFullPathName(
DllPath,
length,
buffer,
NULL
);
AssertNonNull(buffer, ERR_DEAD_BUFFER);
hProc = OpenProcess(
ADMIN,
FALSE,
ProcessId
);
AssertNonNull(hProc, INVALID_PROCID);
memry = VirtualAllocEx(
hProc,
nullptr,
sizeof buffer,
SHELLCODE_ALLOCATION,
PAGE_EXECUTE_READWRITE
);
AssertNonNull(memry, INVALID_BUFSIZE);
WriteProcessMemory(
hProc,
memry,
DllPath,
sizeof DllPath,
&write
);
AssertNonNull(write, ERR_SOLID_BUFFER);
auto decidePrototype = [](BOOL UseExtended, HMODULE kr32) -> decltype(auto) {
LPVOID procAddress;
if (!UseExtended) {
procAddress = (LPVOID)GetProcAddress(kr32, LOADLIB_ORD);
}
else {
procAddress = (LPVOID)GetProcAddress(kr32, LOADLIBX_ORD);
};
return (LPTHREAD_START_ROUTINE)procAddress;
};
auto loadLibraryAddress = decidePrototype(UseExtended, kr32);
thread = CreateRemoteThread(
hProc,
NULL,
NULL,
loadLibraryAddress,
memry,
NULL,
NULL
);
AssertNonNull(thread, INVALID_ROUTINE);
WaitForSingleObject(thread, INFINITE);
// The status stuff is quite weird; it was an attempt at debugging. The error occurs with or without this code.
// I left it because 50% of the comments below wouldn't make sense. Just be assured this code is positively *not* the problem (sadly).
// LPDWORD status = (LPDWORD)1;
// GetExitCodeThread(thread, status);
return TRUE; // *status;
}
One obscure macro would be "ADMIN", which expands to PROCESS_ALL_ACCESS (shortened to fit in better). Another is "AssertNonNull":
#define AssertNonNull(o, p) if (o == NULL) return p;
I've given a shot at debugging this code, but it doesn't halt at any specific point. I've thrown MessageBox tests past each operation (e.g allocation, writing) in addition to the integrity checks and didn't get any interesting responses.
I'm sorry I can't really add much extensive detail, but I'm really stone-walled here, not sure what to do, what information to get, or if there's anything to get. In short, I'm just not sure what to look for.
This is also being called from C# (the snippet below is 1% pseudocode):
[DllImport(path, CallingConvention = CallingConvention.StdCall)]
static extern int UserInject(uint ProcId, string DllPath, bool UseExtended);
uint validProcId; // integrity tested
string validDllPath; // integrity tested
UserInject(validProcId, validDllPath, true);
If you're interested in my testing application (for reproduction):
#include <iostream>
#include <Windows.h>
static const std::string toPrint = "Hello, World!\n";
int main()
{
while (true)
{
Sleep(1000);
std::cout << toPrint;
}
}
To my surprise, this wasn't so much an issue with the code as it was with the testing application.
The basic injection technique I used is prevented by various exploit protections & security mitigations that Visual Studio 2010+ applies to any applications built in release mode.
If I build my testing application in debug mode, there is no exception. If I use a non-VS built application, there is no exception.
I still need to fix how I create my threads, because no thread is actually created, but I've figured that part out; it should be easy enough.
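For what it's worth, there is also a classic size bug lurking in the injector that is worth fixing while you're in there: sizeof buffer and sizeof DllPath evaluate to the size of a pointer (4 or 8 bytes), not the length of the path, so only the first few characters of the DLL path ever reach the target process. A sketch of the corrected allocation and write, reusing the names from the question (PAGE_READWRITE is enough, since the region only holds a string, not code):

SIZE_T pathBytes = strlen(DllPath) + 1;  // include the terminating NUL
memry = VirtualAllocEx(hProc, nullptr, pathBytes, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
WriteProcessMemory(hProc, memry, DllPath, pathBytes, &write);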
I'm calling Windows APIs from C++ code and I have a helper method to do the FormatMessage stuff and throw an exception for error handling. The signature of the function is
void throw_system_error(const std::string& msg_prefix, DWORD messageid)
I've noticed something strange. This code does not work properly:
handle = ...;
if (handle == NULL) {
throw_system_error("something bad happened", GetLastError());
}
The error code that is passed to throw_system_error is always zero.
Where as this works just fine:
handle = ...;
if (handle == NULL) {
DWORD error = GetLastError();
throw_system_error("something bad happened", error);
}
Some more investigation showed that this version has the same problem:
handle = ...;
if (handle == NULL) {
std::string msg("something bad happened");
DWORD error = GetLastError();
throw_system_error(msg, error);
}
It looks for all the world as if the constructor of std::string is resetting the error code.
My guess would be that std::string is allocating memory internally which causes some system call that then sets the last error back to zero.
Anyone knows what is actually going on here?
Visual C++ 2015, 64bit.
Let's see the GetLastError documentation:
Most functions that set the thread's last-error code set it when they
fail. However, some functions also set the last-error code when they
succeed.
You should call the GetLastError function immediately when a
function's return value indicates that such a call will return useful
data. That is because some functions call SetLastError with a zero
when they succeed, wiping out the error code set by the most recently
failed function.
So there must be some function calling SetLastError, very likely one that allocates memory:
When you construct a string, new is called to allocate memory.
Now, let's look at new's implementation in VC++. There is a very good answer about this on Stack Overflow: https://softwareengineering.stackexchange.com/a/293209
It depends on whether you are in debug or release mode. In release mode there is HeapAlloc/HeapFree, which are kernel functions, while in debug mode (with Visual Studio) there is a hand-written version of free and malloc (to which new/delete are redirected) with thread locks and more exception detection, so that you can more easily detect mistakes with your heap pointers when running your code in debug mode.
So in release mode, the function called is HeapAlloc, which does NOT call SetLastError. From the documentation:
If the function fails, it does not call SetLastError
So the code should work properly in release mode.
However, the debug implementation calls the function FlsGetValue, and that function calls SetLastError when it succeeds.
It's very easy to check this:
#include <iostream>
#include <Windows.h>
int main() {
DWORD t = FlsAlloc(nullptr);
SetLastError(23); //Set error to 23
DWORD error1 = GetLastError(); //store error
FlsGetValue(t); //If success, it is going to set error to 0
DWORD error2 = GetLastError(); //store second error code
std::cout << error1 << std::endl;
std::cout << error2 << std::endl;
system("PAUSE");
return 0;
}
It outputs the following:
23
0
So FlsGetValue has called SetLastError(). To prove that it is called only in debug mode, we can do the following test:
#include <iostream>
#include <Windows.h>
int main() {
DWORD t = FlsAlloc(nullptr);
SetLastError(23); //Set error to 23
DWORD error1 = GetLastError(); //store error
int* test = new int; //allocate int
DWORD error2 = GetLastError(); //store second error code
std::cout << error1 << std::endl; //output errors
std::cout << error2 << std::endl;
delete test; //free allocated memory
system("PAUSE");
return 0;
}
If you run it in debug mode, it will give you the following, because new calls FlsGetValue:
23
0
However, if you run it in release mode, it produces the following, because new calls HeapAlloc:
23
23
Per the documentation for GetLastError:
The Return Value section of the documentation for each function that sets the last-error code notes the conditions under which the function sets the last-error code. Most functions that set the thread's last-error code set it when they fail. However, some functions also set the last-error code when they succeed. If the function is not documented to set the last-error code, the value returned by this function is simply the most recent last-error code to have been set; some functions set the last-error code to 0 on success and others do not.
At some point during the construction of std::string, SetLastError is called. The standard library on windows uses Win32 calls as part of its implementation.
Your second method (that works) is the correct way to use GetLastError
You should call the GetLastError function immediately when a function's return value indicates that such a call will return useful data. That is because some functions call SetLastError with a zero when they succeed, wiping out the error code set by the most recently failed function.
This is normal - the "last error" can be set indirectly through any function call.
Some functions set it to "no error" on success, so if you want to use it reliably you need to store it immediately before you do anything else.
If you've ever encountered an "A serious error occurred: The operation completed successfully" dialogue, this is probably the reason.
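One way to make the helper pattern from the question immune to this is to capture the code before any C++ machinery can run, for example with an overload that reads GetLastError() first (a sketch; throw_system_error(const std::string&, DWORD) is the helper from the question):

#include <windows.h>
#include <string>

void throw_system_error(const std::string& msg_prefix, DWORD messageid); // as in the question

// Overload that captures the last-error code before constructing any
// std::string (which may itself call SetLastError in debug builds).
void throw_system_error(const char* msg_prefix)
{
    DWORD error = GetLastError();
    throw_system_error(std::string(msg_prefix), error);
}

// Usage:
//   if (handle == NULL)
//       throw_system_error("something bad happened");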
I have a multi-threaded Windows program which is doing serial port asynchronous I/O through "raw" Win API calls. It is working perfectly fine on any Windows version except Windows 7/64.
The problem is that the program can find and set up the COM port just fine, but it cannot send or receive any data. It doesn't matter whether I compile the binary on Win XP or 7; I cannot send/receive on Win 7/64. Compatibility mode, running as admin, etc. do not help.
I have managed to narrow down the problem to the FileIOCompletionRoutine callback. Every time it is called, dwErrorCode is always 0, dwNumberOfBytesTransfered is always 0. GetOverlappedResult() from inside the function always return TRUE (everything ok). It seems to set the lpNumberOfBytesTransferred correctly. But the lpOverlapped parameter is corrupt, it is a garbage pointer pointing at garbage values.
I can see that it is corrupt by either checking in the debugger what address the correct OVERLAPPED struct is allocated at, or by setting a temp. global variable to point at it.
My question is: why does this happen, and why does it only happen on Windows 7/64? Is there some issue with calling convention that I am not aware of? Or is the overlapped struct treated differently somehow?
Posting relevant parts of the code below:
class ThreadedComport : public Comport
{
private:
typedef struct
{
OVERLAPPED overlapped;
ThreadedComport* caller; /* add user data to struct */
} OVERLAPPED_overlap;
OVERLAPPED_overlap _send_overlapped;
OVERLAPPED_overlap _rec_overlapped;
...
static void WINAPI _send_callback (DWORD dwErrorCode,
DWORD dwNumberOfBytesTransfered,
LPOVERLAPPED lpOverlapped);
static void WINAPI _receive_callback (DWORD dwErrorCode,
DWORD dwNumberOfBytesTransfered,
LPOVERLAPPED lpOverlapped);
...
};
Open/close is done in a base class that implements neither multi-threading nor asynchronous I/O:
void Comport::open (void)
{
char port[20];
DCB dcbCommPort;
COMMTIMEOUTS ctmo_new = {0};
if(_is_open)
{
close();
}
sprintf(port, "\\\\.\\COM%d", TEXT(_port_number));
_hcom = CreateFile(port,
GENERIC_READ | GENERIC_WRITE,
0,
0,
OPEN_EXISTING,
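// note: no FILE_FLAG_OVERLAPPED in the flags argument below; this
// turns out to be the bug (see the answer at the end of this thread)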
0,
0);
if(_hcom == INVALID_HANDLE_VALUE)
{
// error handling
}
GetCommTimeouts(_hcom, &_ctmo_old);
ctmo_new.ReadTotalTimeoutConstant = 10;
ctmo_new.ReadTotalTimeoutMultiplier = 0;
ctmo_new.WriteTotalTimeoutMultiplier = 0;
ctmo_new.WriteTotalTimeoutConstant = 0;
if(SetCommTimeouts(_hcom, &ctmo_new) == FALSE)
{
// error handling
}
dcbCommPort.DCBlength = sizeof(DCB);
if(GetCommState(_hcom, &(DCB)dcbCommPort) == FALSE)
{
// error handling
}
// setup DCB, this seems to work fine
dcbCommPort.DCBlength = sizeof(DCB);
dcbCommPort.BaudRate = baudrate_int;
if(_parity == PAR_NONE)
{
dcbCommPort.fParity = 0; /* disable parity */
}
else
{
dcbCommPort.fParity = 1; /* enable parity */
}
dcbCommPort.Parity = (uint8)_parity;
dcbCommPort.ByteSize = _databits;
dcbCommPort.StopBits = _stopbits;
SetCommState(_hcom, &(DCB)dcbCommPort);
}
void Comport::close (void)
{
if(_hcom != NULL)
{
SetCommTimeouts(_hcom, &_ctmo_old);
CloseHandle(_hcom);
_hcom = NULL;
}
_is_open = false;
}
The whole multi-threading and event handling mechanism is rather complex, relevant parts are:
Send
result = WriteFileEx (_hcom, // handle to output file
(void*)_write_data, // pointer to input buffer
send_buf_size, // number of bytes to write
(LPOVERLAPPED)&_send_overlapped, // pointer to async. i/o data
(LPOVERLAPPED_COMPLETION_ROUTINE )&_send_callback);
Receive
result = ReadFileEx (_hcom, // handle to output file
(void*)_read_data, // pointer to input buffer
_MAX_MESSAGE_LENGTH, // number of bytes to read
(OVERLAPPED*)&_rec_overlapped, // pointer to async. i/o data
(LPOVERLAPPED_COMPLETION_ROUTINE )&_receive_callback);
Callback functions
void WINAPI ThreadedComport::_send_callback (DWORD dwErrorCode,
DWORD dwNumberOfBytesTransfered,
LPOVERLAPPED lpOverlapped)
{
ThreadedComport* _this = ((OVERLAPPED_overlap*)lpOverlapped)->caller;
if(dwErrorCode == 0) // no errors
{
if(dwNumberOfBytesTransfered > 0)
{
_this->_data_sent = dwNumberOfBytesTransfered;
}
}
SetEvent(lpOverlapped->hEvent);
}
void WINAPI ThreadedComport::_receive_callback (DWORD dwErrorCode,
DWORD dwNumberOfBytesTransfered,
LPOVERLAPPED lpOverlapped)
{
if(dwErrorCode == 0) // no errors
{
if(dwNumberOfBytesTransfered > 0)
{
ThreadedComport* _this = ((OVERLAPPED_overlap*)lpOverlapped)->caller;
_this->_bytes_read = dwNumberOfBytesTransfered;
}
}
SetEvent(lpOverlapped->hEvent);
}
EDIT
Updated: I have spent most of the day on the theory that the OVERLAPPED variable went out of scope before the callback is executed. I have verified that this never happens and I have even tried to declare the OVERLAPPED struct as static, same problem remains. If the OVERLAPPED struct had gone out of scope, I would expect the callback to point at the memory location where the struct was previously allocated, but it doesn't, it points somewhere else, at an entirely unfamiliar memory location. Why it does that, I have no idea.
Maybe Windows 7/64 makes an internal hardcopy of the OVERLAPPED struct? I can see how that would cause this behavior, since I am relying on additional parameters sneaked in at the end of the struct (which seems like a hack to me, but apparently I got that "hack" from official MSDN examples).
I have also tried to change the calling convention, but this doesn't work at all; if I change it, the program crashes. (The default calling convention, whatever that is (cdecl?), causes it to crash; __fastcall also causes a crash.) The calling conventions that work are __stdcall, WINAPI and CALLBACK. I think these are all just names for __stdcall, and I read somewhere that Win64 ignores the calling convention anyhow.
It would seem that the callback is executed because of some "spurious disturbance" in Win 7/64 generating false callback calls with corrupt or irrelevant parameters.
Multi-thread race conditions is another theory, but in the scenario I am running to reproduce the bug, there is only one thread, and I can confirm that the thread calling ReadFileEx is the same one that is executing the callback.
I have found the problem, it turned out to be annoyingly simple.
In CreateFile(), I did not specify FILE_FLAG_OVERLAPPED. For reasons unknown, this was not necessary on 32-bit Windows. But if you forget it on 64-bit Windows, it will apparently still generate callbacks with the FileIOCompletionRoutine, but they have corrupted parameters.
I haven't found any documentation of this change of behavior anywhere; perhaps it was just an internal bug fix in Windows, since the older documentation also specifies that you must have FILE_FLAG_OVERLAPPED set.
As for my specific case, the bug appeared because I had a base class that assumed synchronous I/O, which has then been inherited by a class using asynchronous I/O.
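For reference, the corrected CreateFile call in Comport::open() looks like this (everything else stays the same):

_hcom = CreateFile(port,
                   GENERIC_READ | GENERIC_WRITE,
                   0,
                   0,
                   OPEN_EXISTING,
                   FILE_FLAG_OVERLAPPED,  // required for ReadFileEx/WriteFileEx completion routines
                   0);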
I have a server application that uses "a lot" of threads. Without wanting to get into an argument about how many threads it really should be using, it would be nice to be able to see some descriptive text in the debugger "threads" window describing what each one is, without having to click through to it and determine from the context what it is.
They all have the same start address so generally the threads window says something like "thread_base::start" or something similar. I'd like to know if there is an API call or something that allows me to customise that text.
Here is the code I use.
This goes in a header file.
#pragma once
#define MS_VC_EXCEPTION 0x406d1388
#pragma warning(disable: 6312)
#pragma warning(disable: 6322)
typedef struct tagTHREADNAME_INFO
{
DWORD dwType; // must be 0x1000
LPCSTR szName; // pointer to name (in same addr space)
DWORD dwThreadID; // thread ID (-1 = caller thread)
DWORD dwFlags; // reserved for future use, must be zero
} THREADNAME_INFO;
inline
void SetThreadName(DWORD dwThreadID, LPCSTR szThreadName)
{
#ifdef _DEBUG
THREADNAME_INFO info;
info.dwType = 0x1000;
info.szName = szThreadName;
info.dwThreadID = dwThreadID;
info.dwFlags = 0;
__try
{
RaiseException(MS_VC_EXCEPTION, 0, sizeof(info) / sizeof(DWORD), (DWORD *)&info);
}
__except (EXCEPTION_CONTINUE_EXECUTION)
{
}
#else
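// Reference the parameters so release builds don't warn about them being unused.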
dwThreadID;
szThreadName;
#endif
}
Then I call it like this inside the thread proc.
SetThreadName(GetCurrentThreadId(), "VideoSource Thread");
It is worth noting that this is the exact code that David posted a link to (Thanks! I had forgotten where I got it). I didn't delete this post because I'd like the code to still be available if MSDN decides to reorganize its links (again).
Use SetThreadName
Windows 10 added SetThreadDescription(); on that platform, it is the best method.
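A minimal sketch (the description shows up in the Visual Studio debugger and in tools like WinDbg; the thread name below is just an example):

#include <windows.h>

// Available since Windows 10 version 1607 (declared in processthreadsapi.h,
// exported by Kernel32). Returns an HRESULT.
SetThreadDescription(GetCurrentThread(), L"VideoSource Thread");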