Short version:
I'm trying to open a process handle with debug privileges and define a pointer which points to an object in the memory of the debuggee.
Long version:
I'm a computer science student in my final year at university and have been tasked with building an application to be used for educational purposes by the next generation of students.
Why am I here asking for help, you might ask? Well, the target platform is Windows and I unfortunately have no knowledge of the WinAPI whatsoever...
Okay, here are the basic requirements:
Programming language: C++
Platform: Windows (7 Professional)
Used IDE: Visual Studio 2012
No additional libraries unless they are essential to ease development
What will the application be used for?
Using this application, the students are supposed to learn to handle addresses, in this case static ones: the debuggee process will have some static pointers which lead to other pointers, which themselves form a multi-level pointer chain.
The students have to find these base addresses using some debugging techniques (which is not part of my work!) and try to find the values at the end of these pointer chains.
My application will be used by the tutors to randomly change the values and/or structures in the debuggee process.
Some searching yielded a first answer: using ReadProcessMemory and WriteProcessMemory one can easily change values in the memory of another process without any need for debug privileges.
What my tutors want, however, is to have the ability to define pointers (let's say unsigned int) which should point into the memory space of the debuggee process, effectively holding the base addresses I wrote about earlier.
They really want this, and I couldn't talk them out of it, so I'm stuck doing it in the end...
And what exactly should work?
Well, I'd have accomplished my task if the following (pseudo) code works:
grantThisProcessDebugPrivileges();
openAnotherProcessWhileItsRunning("targetProcess.exe");
unsigned int * targetValue = (unsigned int*) 0xDE123F00;
// or even
myCustomClass * targetClass = (myCustomClass*) 0xDE123F00;
where the address 0xDE123F00 lies in the memory space of targetProcess.exe.
I know this is possible, else there wouldn't be debuggers which could show this information.
What I did so far (or tried...)
Okay, the thing is: I'm really confused about whether I have to activate debug privileges for my application prior to opening the target process, activate them after opening it, or rather give the target process these privileges.
So I found an example in MSDN and tried to implement it:
BOOL SetPrivilege(
    HANDLE hToken,          // token handle
    LPCTSTR Privilege,      // Privilege to enable/disable
    BOOL bEnablePrivilege   // TRUE to enable. FALSE to disable
    )
{
    TOKEN_PRIVILEGES tp;
    LUID luid;
    TOKEN_PRIVILEGES tpPrevious;
    DWORD cbPrevious = sizeof(TOKEN_PRIVILEGES);

    if (!LookupPrivilegeValue(NULL, Privilege, &luid)) return FALSE;

    //
    // first pass. get current privilege setting
    //
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Luid = luid;
    tp.Privileges[0].Attributes = 0;

    AdjustTokenPrivileges(
        hToken,
        FALSE,
        &tp,
        sizeof(TOKEN_PRIVILEGES),
        &tpPrevious,
        &cbPrevious
    );

    if (GetLastError() != ERROR_SUCCESS) return FALSE;

    //
    // second pass. set privilege based on previous setting
    //
    tpPrevious.PrivilegeCount = 1;
    tpPrevious.Privileges[0].Luid = luid;

    if (bEnablePrivilege) {
        tpPrevious.Privileges[0].Attributes |= (SE_PRIVILEGE_ENABLED);
    }
    else {
        tpPrevious.Privileges[0].Attributes ^= (SE_PRIVILEGE_ENABLED &
            tpPrevious.Privileges[0].Attributes);
    }

    AdjustTokenPrivileges(
        hToken,
        FALSE,
        &tpPrevious,
        cbPrevious,
        NULL,
        NULL
    );

    if (GetLastError() != ERROR_SUCCESS) return FALSE;

    return TRUE;
}
And in my main:
HANDLE mainToken;

// I really don't know what this block of code does :<
if (!OpenThreadToken(GetCurrentThread(), TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, FALSE, &mainToken))
{
    if (GetLastError() == ERROR_NO_TOKEN)
    {
        if (!ImpersonateSelf(SecurityImpersonation))
            return 1;

        if (!OpenThreadToken(GetCurrentThread(), TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, FALSE, &mainToken)) {
            cout << GetLastError();
            return 1;
        }
    }
    else
        return 1;
}

if (!SetPrivilege(mainToken, SE_DEBUG_NAME, true))
{
    CloseHandle(mainToken);
    cout << "Couldn't set DEBUG MODE: " << GetLastError() << endl;
    return 1;
}

unsigned int processID = getPID("targetProcess.exe");

HANDLE hproc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, processID);
if (hproc == NULL)
{
    cout << "Couldn't open the process" << endl;
    return 1;
}

unsigned int * theValue = (unsigned int*) 0xDE123F;
Okay, this code runs without any errors, and SetPrivilege returns TRUE, so I guess it really did enable SE_DEBUG_NAME, which I think is the privilege I need.
But as soon as I, for example, output the dereferenced value of theValue, the application crashes with an access violation, which shows that my approach didn't work. I made sure to start the Visual Studio debugger with admin rights (SetPrivilege failed otherwise).
I am really clueless here; the fact that I don't know whether setting SE_DEBUG_NAME is the right approach at all adds to my overall confusion.
I hope you can help me out :)
My hands are tied concerning the specific requirements of the application. If you have ideas to achieve my goal using an entirely different approach, feel free to enlighten me, but I won't be able to present it to my superiors, so it will only add to my knowledge :D
From your description, it appears that you have gotten to the point where you can open the process with SE_DEBUG. At this point you now have a handle to the target process.
What your code appears to be missing is the use of ReadProcessMemory.
First we need to look at the definition of ReadProcessMemory:
BOOL WINAPI ReadProcessMemory(
_In_ HANDLE hProcess,
_In_ LPCVOID lpBaseAddress,
_Out_ LPVOID lpBuffer,
_In_ SIZE_T nSize,
_Out_ SIZE_T *lpNumberOfBytesRead);
This function essentially gives you the ability to copy a block of memory from one process space into your process space. So you need to use this method to read a block of memory the size of the data structure you wish to read into your process space, then you can reinterpret the memory block as that data type.
So semi pseudocode for reading an unsigned int from your target process looks like this:
unsigned int ReadUInt(HANDLE process, const void * address)
{
    // Add parameter validation
    unsigned char buffer[sizeof(unsigned int)] = {};
    SIZE_T bytesRead = 0;
    BOOL res = ::ReadProcessMemory(process,               // The handle you opened with SE_DEBUG privs
                                   address,               // The location in the other process
                                   buffer,                // Where to transfer the memory to
                                   sizeof(unsigned int),  // The number of bytes to read
                                   &bytesRead);           // The number of bytes actually read
    if (!res)
    {
        // Deal with the error
    }
    if (bytesRead != sizeof(unsigned int))
    {
        // Deal with error where we didn't get enough memory
    }
    return *reinterpret_cast<unsigned int *>(buffer);
}
Instead of using this line:
unsigned int * theValue = (unsigned int*) 0xDE123F00;
You would do this:
unsigned int theValue = ReadUInt(hproc, (const void *) 0xDE123F00);
Keep in mind that this requires that you know the size and memory layout of the types you are trying to read. Simple types that are contained in contiguous memory can be retrieved in a single ReadProcessMemory call. Types that contain pointers as well as values will require you to make extra calls to ReadProcessMemory to find the values referenced by the pointers.
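For example, the multi-level pointer chains your tutors describe could be resolved with repeated ReadProcessMemory calls. A minimal sketch building on the ReadUInt helper above, assuming a 32-bit target process (the function name and the offsets in the usage comment are made up for illustration):

// Sketch: follow a 32-bit pointer chain: read the pointer stored at the
// current address in the target, add the next offset, repeat, then read
// the final value.
unsigned int FollowPointerChain(HANDLE process,
                                unsigned int baseAddress,
                                const unsigned int *offsets,
                                size_t offsetCount)
{
    unsigned int address = baseAddress;
    for (size_t i = 0; i < offsetCount; ++i)
    {
        // Read the pointer stored at 'address' in the other process.
        address = ReadUInt(process, (const void *) address);
        address += offsets[i];
    }
    // 'address' now points at the value itself.
    return ReadUInt(process, (const void *) address);
}

// Usage, e.g. base -> +0x10 -> +0x4 (offsets purely illustrative):
// unsigned int offsets[] = { 0x10, 0x4 };
// unsigned int value = FollowPointerChain(hproc, 0xDE123F00, offsets, 2);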
Each process has its own virtual address space. An address in one process only has meaning in that process. De-referencing a pointer in C++ code will access the virtual address space of the executing process.
When you de-referenced the pointer in your code, you were actually attempting to access memory in your own process. No amount of wishful thinking on the part of your tutors can make a pointer de-reference access memory in another process.
If you wish to read and write memory from other processes then you must use ReadProcessMemory and WriteProcessMemory.
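A write helper is the mirror image of ReadUInt; a minimal sketch, assuming the hproc handle you already opened (error handling trimmed):

// Sketch: write an unsigned int into the target process at the given address.
bool WriteUInt(HANDLE process, void *address, unsigned int value)
{
    SIZE_T bytesWritten = 0;
    BOOL res = ::WriteProcessMemory(process,              // handle with sufficient access rights
                                    address,              // location in the other process
                                    &value,               // local copy of the value to write
                                    sizeof(unsigned int), // number of bytes to write
                                    &bytesWritten);       // number of bytes actually written
    return res != FALSE && bytesWritten == sizeof(unsigned int);
}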
I don't think you really need to go to all those lengths with tokens and privileges. If I recall correctly you add the debug privilege, call OpenProcess and go straight to it. And I think you can typically skip adding the privilege.
Some searching yielded a first answer: using ReadProcessMemory and WriteProcessMemory one can easily change values in the memory of another process without any need for debug privileges. What my tutors want, however, is to have the ability to define pointers (let's say unsigned int) which should point into the memory space of the debuggee process, effectively holding the base addresses I wrote about earlier. They really want this, and I couldn't talk them out of it, so I'm stuck doing it in the end...
What they want is impossible. I suggest you tell them to get a better understanding of virtual memory before making impossible requirements!
@Cody Gray helpfully mentions memory-mapped files. If debuggee and debugger co-operate then they can use memory-mapped files to share a common region of memory. In that situation both processes can map the memory into their virtual address space and access it in the normal manner.
I rather assumed that your debuggee was an unwilling victim, but if it is prepared to co-operate then sharing memory could be an option.
Even then you'd need to be careful with any pointers in that shared memory because the memory would, in general, be mapped onto different virtual addresses in each process.
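For completeness, a minimal sketch of the cooperative route: both processes open a mapping with the same name and exchange data through the view. The name and the 4096-byte size are illustrative only; the other process would open the same name (via OpenFileMapping or an identical CreateFileMapping call):

#include <windows.h>

int main()
{
    // Create (or open) a named, pagefile-backed shared memory region.
    HANDLE mapping = CreateFileMapping(INVALID_HANDLE_VALUE, NULL,
                                       PAGE_READWRITE, 0, 4096,
                                       TEXT("Local\\StudentExercise"));
    if (!mapping)
        return 1;

    void *view = MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, 4096);
    if (!view)
        return 1;

    // Both sides must agree on a layout, e.g. an unsigned int at offset 0.
    // Share offsets within the view, not raw pointers: the view can land at a
    // different base address in each process.
    *static_cast<unsigned int *>(view) = 42;

    UnmapViewOfFile(view);
    CloseHandle(mapping);
    return 0;
}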
I think you are trying to access the kernel-land memory range, hence the exception.
The user-land range is 0x00000000 - 0x7FFFFFFF, so try accessing addresses in that range, as anything above is kernel space.
I am assuming you are on a 32-bit machine.
Check User Space and System Space (Microsoft Docs).
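If you want to check what a given address in the target actually is before touching it, VirtualQueryEx can tell you. A small sketch, assuming hproc is the handle from OpenProcess above (PAGE_GUARD and no-access pages are ignored for brevity):

// Sketch: is 'address' committed, readable memory in the target process?
bool IsReadable(HANDLE process, const void *address)
{
    MEMORY_BASIC_INFORMATION info = {};
    if (VirtualQueryEx(process, address, &info, sizeof(info)) == 0)
        return false;                 // query failed, e.g. address out of range
    if (info.State != MEM_COMMIT)
        return false;                 // reserved or free, nothing to read there
    // Any protection value that allows reads.
    return (info.Protect & (PAGE_READONLY | PAGE_READWRITE |
                            PAGE_EXECUTE_READ | PAGE_EXECUTE_READWRITE)) != 0;
}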
You can create a type that behaves like a pointer by implementing the appropriate operators, just like shared_ptr does:
foreign_ptr<int> ptr{0xDE123F00};
int a = *ptr;
*ptr = 1;
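A minimal sketch of how such a class could be put together on top of ReadProcessMemory/WriteProcessMemory. foreign_ptr and foreign_ref are made-up names, the process handle is passed explicitly here (unlike the two-line usage above), and this only makes sense for trivially copyable T; every dereference is a full cross-process round trip:

#include <windows.h>
#include <stdint.h>

template <typename T>
class foreign_ptr
{
public:
    // Proxy returned by operator* so that both reads and writes work.
    class foreign_ref
    {
    public:
        foreign_ref(HANDLE process, uintptr_t address)
            : process_(process), address_(address) {}

        operator T() const                      // read:  T value = *ptr;
        {
            T value = T();
            ReadProcessMemory(process_, (LPCVOID)address_, &value, sizeof(T), NULL);
            return value;
        }
        foreign_ref &operator=(const T &value)  // write: *ptr = value;
        {
            WriteProcessMemory(process_, (LPVOID)address_, &value, sizeof(T), NULL);
            return *this;
        }
    private:
        HANDLE process_;
        uintptr_t address_;
    };

    foreign_ptr(HANDLE process, uintptr_t address)
        : process_(process), address_(address) {}

    foreign_ref operator*() const { return foreign_ref(process_, address_); }

private:
    HANDLE process_;
    uintptr_t address_;
};

// Usage (hproc as opened earlier):
// foreign_ptr<int> ptr(hproc, 0xDE123F00);
// int a = *ptr;
// *ptr = 1;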
Related
I want to raise this question one more time. I wrote a comment on the accepted answer already, but it seems the person who answered is no longer active on SO, so I am copying my comment-question here.
In the accepted answer on the referred question, there is a potential risk of a memory leak. For example, PostMessage can fail because the message queue is full. Or the target window may already have been destroyed (so the delete operator will never be called).
In summary, there is no strong correspondence between posting and receiving a Windows message. On the other hand, there are not many options for passing a pointer (to a string, for example) with a Windows message. I see only two: use objects allocated on the heap, or use global variables. The former has the difficulty described here. The latter avoids that disadvantage, but it requires allocating memory that may be used only rarely.
So, I have these questions:
Can someone suggest a way to guard against memory leaks when using heap memory to "attach" something to a Windows message?
Is there another option for reaching the goal (sending a string, for example, with a Windows message via PostMessage)?
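For reference, the heap-based pattern I have in mind looks roughly like this (WM_APP_STRING is just an illustrative message ID). It covers a failed PostMessage, but not the case where the message is posted successfully and the window dies before processing it, which is exactly the gap I'm asking about:

#include <windows.h>
#include <string>

const UINT WM_APP_STRING = WM_APP + 1;   // made-up private message

// Sender: ownership passes to the receiver only if PostMessage succeeds.
bool PostString(HWND target, const std::wstring &text)
{
    std::wstring *payload = new std::wstring(text);
    if (!PostMessage(target, WM_APP_STRING, 0, reinterpret_cast<LPARAM>(payload)))
    {
        delete payload;   // queue full, invalid window, etc.: nobody else will free it
        return false;
    }
    return true;
}

// Receiver, inside the window procedure:
// case WM_APP_STRING:
// {
//     std::wstring *payload = reinterpret_cast<std::wstring *>(lParam);
//     // ... use *payload ...
//     delete payload;
//     return 0;
// }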
Use std::wstringstream and look at EM_STREAMOUT.
wchar_t remmi[1250];
//make stuff
int CMainFrame::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
    hc = CreateWindowEx(WS_EX_NOPARENTNOTIFY, MSFTEDIT_CLASS, remmi,
                        ES_MULTILINE | ES_AUTOVSCROLL | WS_VISIBLE | WS_CHILD
                        | WS_TABSTOP | WS_VSCROLL,
                        1, 350, 450, 201,
                        this->m_hWnd, NULL, h, NULL);
    return 0;
}

//middleware
DWORD CALLBACK E(DWORD_PTR dw, LPBYTE pb, LONG cb, LONG *pcb)
{
    std::wstringstream *fr = (std::wstringstream *)dw;
    fr->write((wchar_t *)pb, int(cb / 2));
    *pcb = cb;
    return 0;
}

//final
void CMainFrame::tr()
{
    std::wstringstream fr;

    EDITSTREAM es = {};
    es.dwCookie = (DWORD_PTR)&fr;
    es.pfnCallback = E;

    ::SendMessage(hc, EM_STREAMOUT, SF_TEXT | SF_UNICODE, (LPARAM)&es);

    ZeroMemory(remmi, 1218 * 2);
    fr.read(remmi, 747);
}
Yes, I know there are a million threads on this exception; I've probably looked at 20-25 of them, but none of the causes seem to correlate with this, sadly (hence the title: known exception, unknown reason).
I've recently been gaining interest in InfoSec. As my first learner's project, I decided to create a basic DLL injector. It seems to be going well so far; however, this exception is grinding me up, and after some relatively extensive research I'm quite puzzled. Oddly enough, the exception is also raised after the function completely finishes.
I couldn't really figure this out myself since external debuggers wouldn't work with my target application, and that was a whole new, unrelated issue.
Solutions suggested & attempted so far:
Fix/Remove thread status checking (it was wrong)
Ensure the value behind DllPath ptr is being allocated, not the ptr
Marshaling the C# interop parameters
Anyway, here is my hunk of code:
#pragma once
#include "pch.h"
#include "injection.h" // only specifies UserInject as an exportable proto.
DWORD __stdcall UserInject(DWORD ProcessId, PCSTR DllPath, BOOL UseExtended) {
DWORD length;
CHAR* buffer;
LPVOID memry;
SIZE_T write;
HANDLE hProc;
HMODULE kr32;
HANDLE thread;
length = GetFullPathName(
DllPath,
NULL,
NULL,
NULL
);
AssertNonNull(length, INVALID_PATH);
kr32 = GetModuleHandle("kernel32.dll");
AssertNonNull(kr32, YOUREALLYMESSEDUP);
buffer = new CHAR[length];
GetFullPathName(
DllPath,
length,
buffer,
NULL
);
AssertNonNull(buffer, ERR_DEAD_BUFFER);
hProc = OpenProcess(
ADMIN,
FALSE,
ProcessId
);
AssertNonNull(hProc, INVALID_PROCID);
memry = VirtualAllocEx(
hProc,
nullptr,
sizeof buffer,
SHELLCODE_ALLOCATION,
PAGE_EXECUTE_READWRITE
);
AssertNonNull(memry, INVALID_BUFSIZE);
WriteProcessMemory(
hProc,
memry,
DllPath,
sizeof DllPath,
&write
);
AssertNonNull(write, ERR_SOLID_BUFFER);
auto decidePrototype = [](BOOL UseExtended, HMODULE kr32) -> decltype(auto) {
LPVOID procAddress;
if (!UseExtended) {
procAddress = (LPVOID)GetProcAddress(kr32, LOADLIB_ORD);
}
else {
procAddress = (LPVOID)GetProcAddress(kr32, LOADLIBX_ORD);
};
return (LPTHREAD_START_ROUTINE)procAddress;
};
auto loadLibraryAddress = decidePrototype(UseExtended, kr32);
thread = CreateRemoteThread(
hProc,
NULL,
NULL,
loadLibraryAddress,
memry,
NULL,
NULL
);
AssertNonNull(thread, INVALID_ROUTINE);
WaitForSingleObject(thread, INFINITE);
// The status stuff is quite weird; it was an attempt at debugging. The error occurs with or without this code.
// I left it because 50% of the comments below wouldn't make sense. Just be assured this code is positively *not* the problem (sadly).
// LPDWORD status = (LPDWORD)1;
// GetExitCodeThread(thread, status);
return TRUE; // *status;
}
One obscure macro would be "ADMIN" which expands to "PROCESS_ALL_ACCESS", shortened to fit in better. Another is "AssertNonNull":
#define AssertNonNull(o, p) if (o == NULL) return p;
I've taken a shot at debugging this code, but it doesn't halt at any specific point. I've thrown MessageBox tests after each operation (e.g. allocation, writing) in addition to the integrity checks and didn't get any interesting responses.
I'm sorry I can't really add much extensive detail, but I'm really stone-walled here, not sure what to do, what information to get, or if there's anything to get. In short, I'm just not sure what to look for.
This is also being called from C#, 1% pseudocode.
[DllImport(path, CallingConvention = CallingConvention.StdCall)]
static extern int UserInject(uint ProcId, string DllPath, bool UseExtended);
uint validProcId; // integrity tested
string validDllPath; // integrity tested
UserInject(validProcId, validDllPath, true);
If you're interested in my testing application (for reproduction)
#include <iostream>
#include <Windows.h>
static const std::string toPrint = "Hello, World!\n";
int main()
{
while (true)
{
Sleep(1000);
std::cout << toPrint;
}
}
To my surprise, this wasn't so much an issue with the code as it was with the testing application.
The basic injection technique I used is prevented by various exploit protections & security mitigations that Visual Studio 2010+ applies to any applications built in release mode.
If I build my testing application in debug mode, there is no exception. If I use a non-VS built application, there is no exception.
I still need to fix how I create my threads, because no thread is currently created, but now that I've figured this out, that should be easy enough.
I developed a WDM filter driver on top of the disk driver. I want to send an asynchronous request to write data to the disk. Windows crashes when I delete the writeBuffer memory in the WriteDataIRPCompletion function.
My question is: How can I safely free the writeBuffer memory without crashing?
This is my send-request code:
#pragma PAGEDCODE
NTSTATUS WriteToDeviceRoutine() {
PMYDRIVER_WRITE_CONTEXT context = (PMYDRIVER_WRITE_CONTEXT)ExAllocatePool(NonPagedPool,sizeof(PMYDRIVER_WRITE_CONTEXT));
context->writeBuffer = new(NonPagedPool) unsigned char[4096];
PIRP pNewIrp = IoBuildAsynchronousFsdRequest(IRP_MJ_WRITE,
pdx->LowerDeviceObject,
context->writeBuffer,(wroteRecordNodeCount<<SHIFT_BIT),
&startingOffset,NULL);
IoSetCompletionRoutine(pNewIrp,WriteDataIRPCompletion,context,TRUE,TRUE,TRUE);
IoCallDriver(pdx->LowerDeviceObject,pNewIrp);
}
This is my completion routine code:
#pragma LOCKEDCODE
NTSTATUS WriteDataIRPCompletion(IN PDEVICE_OBJECT DeviceObject,IN PIRP driverIrp,IN PVOID Context) {
PMDL mdl,nextMdl;
KdPrint((" WriteDataIRPCompletion \n"));
PMYDRIVER_WRITE_CONTEXT writeContext = (PMYDRIVER_WRITE_CONTEXT) Context;
if(driverIrp->MdlAddress!=NULL){
for(mdl=driverIrp->MdlAddress;mdl!=NULL;mdl = nextMdl) {
nextMdl = mdl->Next;
MmUnlockPages(mdl);
IoFreeMdl(mdl);
KdPrint(("mdl clear\n"));
}
driverIrp->MdlAddress = NULL;
}
delete [] writeContext->writeBuffer;
if(Context)
ExFreePool(Context);
KdPrint(("leave WriteDataIRPCompletion \n"));
return STATUS_CONTINUE_COMPLETION;
}
Your error is in the next line:
context = ExAllocatePool(NonPagedPool,sizeof(PMYDRIVER_WRITE_CONTEXT));
when it must be
context = ExAllocatePool(NonPagedPool,sizeof(MYDRIVER_WRITE_CONTEXT));
Not sizeof(PMYDRIVER_WRITE_CONTEXT) but sizeof(MYDRIVER_WRITE_CONTEXT): you are allocating not the structure but only a pointer's worth of memory.
This happens not to produce an error only if your MYDRIVER_WRITE_CONTEXT contains the single field writeBuffer and no more data; otherwise you overwrite the allocated memory (which is only sizeof(PVOID) bytes), and that creates the bug.
And about completion for IoBuildAsynchronousFsdRequest: unfortunately the documentation is not very good. It states that:
Before calling IoFreeIrp, an additional step is required to free the
buffer for an IRP built by IoBuildAsynchronousFsdRequest if the
following are all true:
The buffer was allocated from system memory pool.
but then all the attention goes to
The Irp->MdlAddress field is non-NULL.
However, we must also check for IRP_DEALLOCATE_BUFFER|IRP_BUFFERED_IO; without this we can leak Irp->AssociatedIrp.SystemBuffer. We need the following code:
if (Irp->Flags & IRP_BUFFERED_IO)
{
if (Irp->Flags & IRP_INPUT_OPERATION)
{
if (!NT_ERROR(Irp->IoStatus.Status) && Irp->IoStatus.Information)
{
memcpy( Irp->UserBuffer, Irp->AssociatedIrp.SystemBuffer, Irp->IoStatus.Information );
}
}
if (Irp->Flags & IRP_DEALLOCATE_BUFFER)
{
ExFreePool(Irp->AssociatedIrp.SystemBuffer);
Irp->AssociatedIrp.SystemBuffer = 0;
}
Irp->Flags &= ~(IRP_DEALLOCATE_BUFFER|IRP_BUFFERED_IO);
}
And checking if (Context) only after already using writeContext->writeBuffer is senseless. Really, you need the check for context != NULL back in WriteToDeviceRoutine().
I'm not too familiar with the specifics of what you're working with, so here are a few details that caught my attention.
In WriteDataIRPCompletion function
PMYDRIVER_WRITE_CONTEXT writeContext = (PMYDRIVER_WRITE_CONTEXT) Context;
// ...
delete [] writeContext->writeBuffer;
if(Context)
ExFreePool(Context);
Notice that your writeContext originates from your Context argument. However, you seem to be deleting/freeing the allocated memory twice.
The ExFreePool function docs state:
Specifies the address of the block of pool memory being deallocated.
It looks like the delete [] writeContext->writeBuffer; line might be causing the problem and it just needs to be removed.
As it is right now, part of the memory that should be freed by the function has already been manually deleted by the time you invoke ExFreePool, but not set to NULL, which in turn causes ExFreePool to receive a now-invalid pointer (i.e. a non-null pointer pointing to de-allocated memory) in its Context argument, causing the crash.
In WriteToDeviceRoutine function
The documentation for ExFreePool explicitly states that it deallocates memory that has been allocated with other functions, such as ExAllocatePool and other friends.
However, your code is trying to allocate/deallocate the writeContext->writeBuffer directly using the new/delete operators respectively. It seems like you should be allocating your memory with ExAllocatePool and then deallocating with ExFreePool instead of trying to do things manually like that.
These functions may be organizing the memory in a specific way and if/when this pre-condition is not met in ExFreePool, it could end up in a crash.
On a separate note, it seems odd that you check whether Context is null before invoking ExFreePool, but not above, before you cast it to your local writeContext variable and use it.
Maybe you should also check at that first point of use? If Context is always non-null, then the check might also be unnecessary prior to invoking ExFreePool.
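As a sketch of keeping the allocations consistent inside WriteToDeviceRoutine, assuming MYDRIVER_WRITE_CONTEXT really only carries the buffer pointer (the 'rdym' pool tag is arbitrary):

// Allocate the context and the buffer from non-paged pool with a tag...
PMYDRIVER_WRITE_CONTEXT context = (PMYDRIVER_WRITE_CONTEXT)
    ExAllocatePoolWithTag(NonPagedPool, sizeof(MYDRIVER_WRITE_CONTEXT), 'rdym');
if (context == NULL)
    return STATUS_INSUFFICIENT_RESOURCES;

context->writeBuffer = (unsigned char *)
    ExAllocatePoolWithTag(NonPagedPool, 4096, 'rdym');
if (context->writeBuffer == NULL)
{
    ExFreePoolWithTag(context, 'rdym');
    return STATUS_INSUFFICIENT_RESOURCES;
}

// ...and free both the same way in WriteDataIRPCompletion:
// ExFreePoolWithTag(writeContext->writeBuffer, 'rdym');
// ExFreePoolWithTag(writeContext, 'rdym');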
I have some event handles and I add them to a list. I want to know if I can store these handles, close the local ones and still use the stored ones later.
Example:
std::map<std::string, HANDLE> Events;
DWORD OpenSingleEvent(std::string EventName, bool InheritHandle, DWORD dwDesiredAccess, DWORD dwMilliseconds)
{
Handle hEvent = OpenEvent(dwDesiredAccess, InheritHandle, EventName.c_str()); //Local Handle.
if (hEvent)
{
DeleteSingleEvent(EventName); //Delete the correct/old handle in the map.
Events.insert(EventName, hEvent); //Add this new handle to the map.
DWORD Result = WaitForSingleObject(hEvent, dwMilliseconds);
CloseHandle(hEvent); //Close THIS handle. Not the one in my Map.
return Result;
}
CloseHandle(hEvent); //Close this handle.
return WAIT_FAILED;
}
Will the above work? If not, is there another way to do this? It's for shared-memory communication, so I cannot duplicate handles across processes since I only have the client's PID, not the server's.
Also can someone explain what InheritHandle does? The function I use is OpenEvent and it has that parameter but I'm not sure what it does.
A HANDLE is simply a void *; it's a token which actually represents an object in kernel space. Calling CloseHandle releases that kernel object, so the short answer to your question is no, you can't keep a list of them and then close all the local ones. All you'd have is a list of void *s which don't represent anything.
What you can do is use DuplicateHandle which actually creates another kernel object on your behalf. However... why not just close the handles when you've finished with the entry in the list?
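If you do want the map to own its own copy, a minimal sketch with DuplicateHandle (duplicating within the current process):

// Store a private duplicate of hEvent in the map so the original can be
// closed without invalidating the stored copy.
HANDLE stored = NULL;
if (DuplicateHandle(GetCurrentProcess(), hEvent,    // source process + handle
                    GetCurrentProcess(), &stored,   // target process + receives the new handle
                    0, FALSE,                       // access/inherit ignored with DUPLICATE_SAME_ACCESS
                    DUPLICATE_SAME_ACCESS))
{
    Events[EventName] = stored;   // the map now owns this duplicate
}
CloseHandle(hEvent);              // safe: 'stored' still refers to the kernel object

The simpler route, though, is the one suggested above: don't close the handle in OpenSingleEvent at all, and only close the stored handle when you remove the entry from the map.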
I'm trying to create a trainer for Icy Tower 1.4 for educational purposes.
I wrote a function that shortens the WriteProcessMemory call, like this:
void WPM(HWND hWnd,int address,byte data[])
{
DWORD proc_id;
GetWindowThreadProcessId(hWnd, &proc_id);
HANDLE hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, proc_id);
if(!hProcess)
return;
DWORD dataSize = sizeof(data);
WriteProcessMemory(hProcess,(LPVOID)address,&data,dataSize,NULL);
CloseHandle(hProcess);
}
And this is the function that should stop the Icy Tower clock:
void ClockHack(int status)
{
if(status==1)//enable
{
//crashes the game
byte data[]={0xc7,0x05,0x04,0x11,0x45,0x00,0x00,0x00,0x00,0x00};
WPM(FindIcyTower(),0x00415E19,data);
}
else if(status==0)//disable
{
byte data[]={0xA3,0x04,0x11,0x45,0x00};
}
}
In the else statement there's the original AOB (array of bytes) of the opcode.
When I call the ClockHack function with the status parameter set to 1, the game crashes.
In Cheat Engine I wrote a script for this which doesn't write to exactly the same address (because I used a code cave), and it works great.
Does someone know why? Thank you.
By the way: it is for educational purposes only.
You can't pass an array to a function like that. Having a byte[] parameter is the same as a byte * parameter, and sizeof(data) will just give you the size of a pointer. Also, you shouldn't use &data since it's already a pointer.
So your function should look like:
void WPM(HWND hWnd,int address, byte *data, int dataSize)
{
//....
WriteProcessMemory(hProcess,(LPVOID)address,data,dataSize,NULL);
//...
}
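With the extra size parameter, the caller passes the real array size, e.g.:

byte data[] = {0xc7,0x05,0x04,0x11,0x45,0x00,0x00,0x00,0x00,0x00};
WPM(FindIcyTower(), 0x00415E19, data, sizeof(data));  // sizeof works here: 'data' is still an array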
When an array is passed into a function it decays to a pointer, so byte[] is the same as byte*, and you are only writing the first sizeof(byte*) bytes of your code, i.e. 4 bytes on x86 platforms.
Also, it looks like what you are writing is object code; if not, then ignore the rest of this answer.
Well, assuming that you are writing to the correct location, and what you are writing is correct, you still have a problem - WriteProcessMemory isn't guaranteed to be atomic with respect to the thread that is running in the target process.
You need to make sure that the target thread is suspended and not executing in that part of the code. And I have no idea what sort of thing you (may) have to do to flush the instruction decoding pipeline and/or L1 cache.
Edit: Now that I've thought some more, I think that using a mutex to protect this piece of code from being overwritten while it is being executed is better than suspending the thread.
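If you do go the suspend route, a rough sketch of the idea (the function name is made up; hThread must have THREAD_SUSPEND_RESUME access, and FlushInstructionCache is the documented call for making freshly patched code bytes visible to the CPU):

// Suspend the target thread, patch the code bytes, flush the instruction
// cache, then let the thread run again. VirtualProtectEx may be needed first
// if the page is not writable.
bool PatchCode(HANDLE hProcess, HANDLE hThread, void *address,
               const unsigned char *patch, SIZE_T size)
{
    if (SuspendThread(hThread) == (DWORD)-1)
        return false;

    SIZE_T written = 0;
    BOOL ok = WriteProcessMemory(hProcess, address, patch, size, &written);
    FlushInstructionCache(hProcess, address, size);

    ResumeThread(hThread);
    return ok != FALSE && written == size;
}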