I have a function that is supposed to launch another process:
DWORD WINAPI StartCalc(LPVOID lpParam) {
    STARTUPINFOW info;
    PROCESS_INFORMATION processInfo;
    std::wstring cmd = L"C:\\Windows\\System32\\calc.exe";
    BOOL hR = CreateProcessW(NULL, (LPWSTR)cmd.c_str(), NULL, NULL, TRUE, 0, NULL, NULL,
                             &info, &processInfo);
    if (hR == 0) {
        DWORD errorMessageID = ::GetLastError();
        printf("Error creating process\n");
        return 1;
    }
    return 0;
}
I get an exception in ntdll.dll: "Access violation reading location 0xFFFFFFFFFFFFFFFF". I know there are a few common mistakes that might cause this:
Inconsistent calling conventions. I am using __stdcall.
Encoding issues. I am storing strings as wide chars.
The problem happens in both x64 and x86 builds.
The problem also happens when I try to create other Windows processes.
What am I doing wrong?
EDIT: This actually isn't a problem with casting cmd.c_str() to (LPWSTR); that part appears to be fine. I needed to initialize the STARTUPINFO struct: STARTUPINFO info = { 0 };
BOOL hR = CreateProcessW(NULL, (LPWSTR)cmd.c_str(), NULL, NULL, TRUE, 0, NULL, NULL, &info, &processInfo);
^^^^^
That's a cast. A great rule of thumb is "spot the cast, you spot the error".
It's true here. CreateProcessW must be passed a writable string. That means no string literal, and no c_str() result either.
From the documentation:
The Unicode version of this function, CreateProcessW, can modify the contents of this string. Therefore, this parameter cannot be a pointer to read-only memory (such as a const variable or a literal string). If this parameter is a constant string, the function may cause an access violation.
Pass a real non-const pointer; don't play hide-the-const. &cmd[0] should work: that's guaranteed to be a writable buffer. To be extra safe, increase your wstring's capacity beyond what you need, because CreateProcessW is going to use it as a working buffer.
Yes, I know there are a million threads on this exception; I've probably looked at 20-25 of them, but none of the causes seem to correlate to this, sadly (hence the title: known exception, unknown reason).
I've recently been gaining interest in InfoSec. As my first learner's project, I decided to create a basic DLL injector. It seemed to be going well so far, but this exception is grinding me up, and after some relatively extensive research I'm quite puzzled. Oddly enough, the exception is also raised after the function completely finishes.
I couldn't really figure this out myself since external debuggers wouldn't attach to my target application, and that was a whole new, unrelated issue.
Solutions suggested & attempted so far:
Fix/Remove thread status checking (it was wrong)
Ensure the value behind DllPath ptr is being allocated, not the ptr
Marshaling the C# interop parameters
Anyway, here is my hunk of code:
#pragma once
#include "pch.h"
#include "injection.h" // only specifies UserInject as an exportable proto.
DWORD __stdcall UserInject(DWORD ProcessId, PCSTR DllPath, BOOL UseExtended) {
DWORD length;
CHAR* buffer;
LPVOID memry;
SIZE_T write;
HANDLE hProc;
HMODULE kr32;
HANDLE thread;
length = GetFullPathName(
DllPath,
NULL,
NULL,
NULL
);
AssertNonNull(length, INVALID_PATH);
kr32 = GetModuleHandle("kernel32.dll");
AssertNonNull(kr32, YOUREALLYMESSEDUP);
buffer = new CHAR[length];
GetFullPathName(
DllPath,
length,
buffer,
NULL
);
AssertNonNull(buffer, ERR_DEAD_BUFFER);
hProc = OpenProcess(
ADMIN,
FALSE,
ProcessId
);
AssertNonNull(hProc, INVALID_PROCID);
memry = VirtualAllocEx(
hProc,
nullptr,
sizeof buffer,
SHELLCODE_ALLOCATION,
PAGE_EXECUTE_READWRITE
);
AssertNonNull(memry, INVALID_BUFSIZE);
WriteProcessMemory(
hProc,
memry,
DllPath,
sizeof DllPath,
&write
);
AssertNonNull(write, ERR_SOLID_BUFFER);
auto decidePrototype = [](BOOL UseExtended, HMODULE kr32) -> decltype(auto) {
LPVOID procAddress;
if (!UseExtended) {
procAddress = (LPVOID)GetProcAddress(kr32, LOADLIB_ORD);
}
else {
procAddress = (LPVOID)GetProcAddress(kr32, LOADLIBX_ORD);
};
return (LPTHREAD_START_ROUTINE)procAddress;
};
auto loadLibraryAddress = decidePrototype(UseExtended, kr32);
thread = CreateRemoteThread(
hProc,
NULL,
NULL,
loadLibraryAddress,
memry,
NULL,
NULL
);
AssertNonNull(thread, INVALID_ROUTINE);
WaitForSingleObject(thread, INFINITE);
// The status stuff is quite weird; it was an attempt at debugging. The error occurs with or without this code.
// I left it because 50% of the comments below wouldn't make sense. Just be assured this code is positively *not* the problem (sadly).
// LPDWORD status = (LPDWORD)1;
// GetExitCodeThread(thread, status);
return TRUE; // *status
}
One obscure macro is "ADMIN", which expands to PROCESS_ALL_ACCESS (shortened to fit in better). Another is "AssertNonNull":
#define AssertNonNull(o, p) if (o == NULL) return p;
I've given debugging this code a shot, but it doesn't halt at any specific point. I've thrown MessageBox tests after each operation (e.g. allocation, writing) in addition to the integrity checks and didn't get any interesting responses.
I'm sorry I can't really add much extensive detail, but I'm really stone-walled here, not sure what to do, what information to get, or if there's anything to get. In short, I'm just not sure what to look for.
This is also being called from C#, 1% pseudocode.
[DllImport(path, CallingConvention = CallingConvention.StdCall)]
static extern int UserInject(uint ProcId, string DllPath, bool UseExtended);
uint validProcId; // integrity tested
string validDllPath; // integrity tested
UserInject(validProcId, validDllPath, true);
If you're interested in my testing application (for reproduction)
#include <iostream>
#include <Windows.h>
static const std::string toPrint = "Hello, World!\n";
int main()
{
while (true)
{
Sleep(1000);
std::cout << toPrint;
}
}
To my surprise, this wasn't as much an issue with the code as much as it was with the testing application.
The basic injection technique I used is prevented by various exploit protections & security mitigations that Visual Studio 2010+ applies to any applications built in release mode.
If I build my testing application in debug mode, there is no exception. If I use a non-VS built application, there is no exception.
I still need to fix how I create my threads, because no thread is created, but now that I've figured this out, that should be easy enough.
I have a service that creates a process and then frees its memory.
I use the CreateProcess() function and then close the handles of the process and thread.
This is my code:
if (CreateProcessA(NULL,
        zAppName,      // pointer to command line string
        NULL,          // pointer to process security attributes
        NULL,          // pointer to thread security attributes
        TRUE,          // handle inheritance flag
        0,
        NULL,          // pointer to new environment block
        NULL,          // pointer to current directory name
        &StartupInfo,  // pointer to STARTUPINFO
        &ProcessInfo))
{
    WaitForSingleObject(ProcessInfo.hProcess, WaitMiliSec);
    GetExitCodeProcess(ProcessInfo.hProcess, &Result);
    CloseHandle(ProcessInfo.hProcess);
    CloseHandle(ProcessInfo.hThread);
    CloseHandle(hRead);
    CloseHandle(hWrite);
    return (Result);
}
But after the CloseHandle() calls, hProcess and hThread still hold values!
And each time my service runs this code, its memory usage increases and doesn't decrease. Is this a memory leak? What should I do?
As a process is loaded into memory, the following elements are loaded:
1. Code
2. Memory
3. PCB
4. Data, etc.
When you close the handles, only part of the heap (assuming you are using malloc to create the handles) is freed; the rest of those elements still remain with the process. So you can't assume all of the memory is freed.
Yes, hProcess and hThread may still hold values, which can create confusion, but after CloseHandle() those values are stale, much like "dangling pointers". It is good practice to assign NULL to them after de-allocation.
Short version:
I'm trying to open a process handle with debug privileges and define a pointer which points to an object in the memory of the debuggee.
Long version
I'm a computer science university student in my final year of graduation and got tasked to build an application which should be used for educational purposes for the next generation of students.
Why am I here asking for help, you might ask? Well, the target platform is Windows and I have unfortunately no knowledge of the WinAPI whatsoever...
Okay, here is the basic requirement:
Programming language: C++
Platform: Windows (7 Professional)
Used IDE: Visual Studio 2012
No additional libraries if they aren't essential to ease the development
What will the application be used for?
Using this application, the students shall learn to handle addresses, in this case static ones: the debuggee process will have some static pointers which lead to other pointers themselves, forming a multi-level pointer chain.
The students have to find these base addresses using some debugging techniques (which is not part of my work!) and try to find the values at the end of these pointers.
My application will be used by the tutors to randomly change the values and/or structures in the debuggee process.
Some search did yield the first answer: using ReadProcessMemory and WriteProcessMemory one can easily change values in the memory of another process without any need to get debug privileges.
What my tutors want, however, is the ability to define pointers (let's say unsigned int) which point into the memory space of the debuggee process, effectively holding the base addresses I wrote about earlier.
They really want this, and I couldn't talk them out of it, so I'm stuck doing it in the end...
And what exactly should work?
Well, I'd have accomplished my task if the following (pseudo) code works:
grantThisProcessDebugPrivileges();
openAnotherProcessWhileItsRunning("targetProcess.exe");
unsigned int * targetValue = (unsigned int*) 0xDE123F00;
// or even
myCustomClass * targetClass = (myCustomClass*) 0xDE123F00;
where the address 0xDE123F00 lies in the memory space of targetProcess.exe.
I know this is possible, else there wouldn't be debuggers which could show this information.
What I did so far (or tried...)
Okay, the thing is: I'm really confused about whether I have to activate debug privileges for my application prior to opening the target process, do it after opening, or rather give the target process these privileges.
So I found an example in MSDN and tried to implement it:
BOOL SetPrivilege(
HANDLE hToken, // token handle
LPCTSTR Privilege, // Privilege to enable/disable
BOOL bEnablePrivilege // TRUE to enable. FALSE to disable
)
{
TOKEN_PRIVILEGES tp;
LUID luid;
TOKEN_PRIVILEGES tpPrevious;
DWORD cbPrevious=sizeof(TOKEN_PRIVILEGES);
if(!LookupPrivilegeValue( NULL, Privilege, &luid )) return FALSE;
//
// first pass. get current privilege setting
//
tp.PrivilegeCount = 1;
tp.Privileges[0].Luid = luid;
tp.Privileges[0].Attributes = 0;
AdjustTokenPrivileges(
hToken,
FALSE,
&tp,
sizeof(TOKEN_PRIVILEGES),
&tpPrevious,
&cbPrevious
);
if (GetLastError() != ERROR_SUCCESS) return FALSE;
//
// second pass. set privilege based on previous setting
//
tpPrevious.PrivilegeCount = 1;
tpPrevious.Privileges[0].Luid = luid;
if(bEnablePrivilege) {
tpPrevious.Privileges[0].Attributes |= (SE_PRIVILEGE_ENABLED);
}
else {
tpPrevious.Privileges[0].Attributes ^= (SE_PRIVILEGE_ENABLED &
tpPrevious.Privileges[0].Attributes);
}
AdjustTokenPrivileges(
hToken,
FALSE,
&tpPrevious,
cbPrevious,
NULL,
NULL
);
if (GetLastError() != ERROR_SUCCESS) return FALSE;
return TRUE;
};
And in my main:
HANDLE mainToken;
// I really don't know what this block of code does :<
if(!OpenThreadToken(GetCurrentThread(), TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, FALSE, &mainToken))
{
if (GetLastError() == ERROR_NO_TOKEN)
{
if (!ImpersonateSelf(SecurityImpersonation))
return 1;
if(!OpenThreadToken(GetCurrentThread(), TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, FALSE, &mainToken)){
cout << GetLastError();
return 1;
}
}
else
return 1;
}
if (!SetPrivilege(mainToken, SE_DEBUG_NAME, true))
{
CloseHandle(mainToken);
cout << "Couldn't set DEBUG MODE: " << GetLastError() << endl;
return 1;
};
unsigned int processID = getPID("targetProcess.exe");
HANDLE hproc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, processID);
if (hproc == NULL)
{
cout << "Couldn't open the process" << endl;
return 1;
};
unsigned int * theValue = (unsigned int*) 0xDE123F;
Okay, this code runs without any errors, and SetPrivilege returns TRUE, so I guess it really did set SE_DEBUG_NAME, which I think is the flag I need.
But after, for example, outputting the dereferenced value of theValue, the application crashes with an access violation, which shows that my approach didn't work. I did pay special attention to start the Visual Studio debugger with admin rights (SetPrivilege failed otherwise).
I am really clueless here, and the fact that I don't know whether setting SE_DEBUG_NAME is even the right approach adds to my overall confusion.
I hope you can help me out :)
My hands are tied concerning the specific requirements of the application. If you have ideas to achieve my goal using an entirely different approach, you're free to enlighten me, but I won't be able to present it to my superiors, so it will only add to my knowledge :D
From your description, it appears that you have gotten to the point where you can open the process with SE_DEBUG. At this point you now have a handle to the target process.
What your code appears to be missing is the use of ReadProcessMemory.
First we need to look at the definition of ReadProcessMemory:
BOOL WINAPI ReadProcessMemory(
_In_ HANDLE hProcess,
_In_ LPCVOID lpBaseAddress,
_Out_ LPVOID lpBuffer,
_In_ SIZE_T nSize,
_Out_ SIZE_T *lpNumberOfBytesRead);
This function essentially gives you the ability to copy a block of memory from one process space into your process space. So you need to use this method to read a block of memory the size of the data structure you wish to read into your process space, then you can reinterpret the memory block as that data type.
So semi pseudocode for reading an unsigned int from your target process looks like this:
unsigned int ReadUInt(HANDLE process, const void * address)
{
// Add parameter validation
unsigned char buffer[sizeof(unsigned int)] = {};
size_t bytesRead = 0;
BOOL res = ::ReadProcessMemory(process, // The handle you opened with SE_DEBUG privs
address, // The location in the other process
buffer, // Where to transfer the memory to
sizeof(unsigned int), // The number of bytes to read
&bytesRead); // The number of bytes actually read
if (!res)
{
// Deal with the error
}
if (bytesRead != sizeof(unsigned int))
{
// Deal with error where we didn't get enough memory
}
return *reinterpret_cast<unsigned int *>(buffer);
}
Instead of using this line:
unsigned int * theValue = (unsigned int*) 0xDE123F00;
You would do this:
unsigned int theValue = ReadUInt(hproc, (const void *) 0xDE123F00);
Keep in mind that this requires that you know the size and memory layout of the types you are trying to read. Simple types that are contained in contiguous memory can be retrieved in a single ReadProcessMemory call. Types that contain pointers as well as values will require you to make extra calls to ReadProcessMemory to find the values referenced by the pointers.
Each process has its own virtual address space. An address in one process only has meaning in that process. De-referencing a pointer in C++ code will access the virtual address space of the executing process.
When you de-referenced the pointer in your code you were actually attempting to access memory in your process. No amount of wishful thinking on the part of your tutors can make pointer de-reference access memory in another process.
If you wish to read and write memory from other processes then you must use ReadProcessMemory and WriteProcessMemory.
I don't think you really need to go to all those lengths with tokens and privileges. If I recall correctly you add the debug privilege, call OpenProcess and go straight to it. And I think you can typically skip adding the privilege.
Some search did yield the first answer: using ReadProcessMemory and WriteProcessMemory one can easily change values in the memory of another process without
any need to get debug privileges. What my tutors want, however, is to have the ability to define pointers (let's say unsigned int) which should point into the memory space of the debuggee process, effectively holding the base addresses I wrote about earlier. They really want this and I couldn't even talk this out of them so I'm stuck to do this at the end...
What they want is impossible. I suggest you tell them to get a better understanding of virtual memory before making impossible requirements!
@Cody Gray helpfully mentions memory-mapped files. If debuggee and debugger co-operate, then they can use memory-mapped files to share a common region of memory. In that situation, both processes can map the memory into their virtual address spaces and access it in the normal manner.
I rather assumed that your debuggee was an unwilling victim, but if it is prepared to co-operate then sharing memory could be an option.
Even then you'd need to be careful with any pointers in that shared memory because the memory would, in general, be mapped onto different virtual addresses in each process.
I think you are trying to access a kernel-land memory range, hence the exception.
On a 32-bit machine (which I am assuming you are on), the user-land range is 0x00000000-0x7FFFFFFF, so try accessing addresses in this range; anything above is kernel space.
Check User Space and System Space (Microsoft Docs).
You can create a type that behaves like a pointer by implementing the appropriate operators, just like shared_ptr does:
foreign_ptr<int> ptr{0xDE123F00};
int a = *ptr;
*ptr = 1;
The following code is copied from the MS Async Filter. It is supposed to call either CancelIo or CancelIoEx, and the typedef presumably represents CancelIoEx, but I don't see where CancelIoEx is ever looked up or called. What exactly is the line bResult = (pfnCancelIoEx)(m_hFile, NULL); doing?
// Cancel: Cancels pending I/O requests.
HRESULT CFileStream::Cancel()
{
CAutoLock lock(&m_CritSec);
// Use CancelIoEx if available, otherwise use CancelIo.
typedef BOOL (*CANCELIOEXPROC)(HANDLE hFile, LPOVERLAPPED lpOverlapped);
BOOL bResult = 0;
CANCELIOEXPROC pfnCancelIoEx = NULL;
HMODULE hKernel32 = LoadLibrary(L"Kernel32.dll");
if (hKernel32){
//probably bad code !!! Take care.
bResult = (pfnCancelIoEx)(m_hFile, NULL);
FreeLibrary(hKernel32);
}
else {
bResult = CancelIo(m_hFile);
}
if (!bResult) {
return HRESULT_FROM_WIN32(GetLastError());
}
return S_OK;
}
Assuming this is all the code, it has a serious bug in it. This:
CANCELIOEXPROC pfnCancelIoEx = NULL;
defines pfnCancelIoEx as a pointer to function whose signature matches that of CancelIoEx. The pointer is initialised to a null value and the obvious intention is to point it at CancelIoEx later.
That function is defined in Kernel32.dll, so loading this is the logical next step. If this succeeds, the code should proceed by doing this:
pfnCancelIoEx = (CANCELIOEXPROC)GetProcAddress(hKernel32, "CancelIoEx");
And then it should check the result. However, the code does neither of these things.
Next, on this line:
bResult = (pfnCancelIoEx)(m_hFile, NULL);
it attempts to call the function pointed to by pfnCancelIoEx. However, this pointer is never changed from its initial null value, so this will attempt to dereference a null pointer, resulting in undefined behaviour and likely a crash.
Is there a function in the Win API which can be used to extract the string representation of an HRESULT value?
The problem is that not all return values are documented in MSDN. For example, the ExecuteInDefaultAppDomain() function is not documented to return "0x80070002 - The system cannot find the file specified.", yet it does! I was therefore wondering whether there is a function that can be used in the general case.
You can use _com_error:
_com_error err(hr);
LPCTSTR errMsg = err.ErrorMessage();
If you don't want to use _com_error for whatever reason, you can still take a look at its source, and see how it's done.
Don't forget to include the header comdef.h
Since c++11, this functionality is built into the standard library:
#include <system_error>
std::string message = std::system_category().message(hr);
The Windows API for this is FormatMessage. Here is a link that explains how to do it: Retrieving Error Messages.
For Win32 messages (messages with an HRESULT that begins with 0x8007, which is FACILITY_WIN32), you need to remove the high-order word. For example, for 0x80070002, you need to call FormatMessage with 0x0002.
However, it does not always work for any type of message. And for some specific messages (specific to a technology, a vendor, etc.), you need to load the corresponding resource DLL, which is not always an easy task, because you need to find this DLL.
Here's a sample using FormatMessage()
LPTSTR SRUTIL_WinErrorMsg(int nErrorCode, LPTSTR pStr, WORD wLength )
{
try
{
LPTSTR szBuffer = pStr;
int nBufferSize = wLength;
//
// prime buffer with error code
//
wsprintf( szBuffer, _T("Error code %u"), nErrorCode);
//
// if we have a message, replace default with msg.
//
FormatMessage(FORMAT_MESSAGE_FROM_SYSTEM,
NULL, nErrorCode,
MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), // Default language
(LPTSTR) szBuffer,
nBufferSize,
NULL );
}
catch(...)
{
}
return pStr;
} // End of SRUTIL_WinErrorMsg()