WriteProcessMemory C++

I've pasted only what's necessary. The memory addresses aren't actually being written, even though my logging shows that WriteProcessMemory() succeeded. I've also double-checked that I have the correct memory addresses. Thanks for any help.
char* offsets[][3] = {
{ "0x3E264", "0", "char[1]" },
{ "0x45848", "Auto-Mine", "char[10]" },
{ "0x458C0", "Auto-Build", "char[10]" },
//to be continued...
};
HANDLE scHandle = OpenProcess(PROCESS_VM_WRITE | PROCESS_VM_OPERATION, FALSE, ID);
if (scHandle == NULL) {
log << "ERROR: OpenProcess() returned " << GetLastError() << endl;
return false;
}
DWORD bytesOut;
for (int a = 0; a < (int)(sizeof(offsets) / sizeof(offsets[0])); a++) { // loop over the entries actually in the table
if (WriteProcessMemory(scHandle, (LPVOID)(wDetectorBaseAddress + (int)strtol(offsets[a][0], NULL, 0)), offsets[a][1], strlen(offsets[a][1]) + 1, &bytesOut))
{
log << "WriteProcessMemory() to address " << wDetectorBaseAddress << " + " << (int)strtol(offsets[a][0], NULL, 0) << " = " << wDetectorBaseAddress + (int)strtol(offsets[a][0], NULL, 0) << " with '" << offsets[a][1] << "'; " << bytesOut << " bytes were written" << endl;
}
else
{
log << "ERROR: WriteProcessMemory() returned " << GetLastError() << endl;
return false;
}
}
CloseHandle(scHandle);

You need to call VirtualProtectEx (the cross-process variant of VirtualProtect) with PAGE_EXECUTE_READWRITE on the target region before you can write to another process's memory. After writing, you need to restore the original protection.
Another thing: how exactly do you know those addresses are always the same? Can you confirm that they never change?
Note: You MIGHT also have to call FlushInstructionCache after writing.


How do I get stack call information in an x64 program?

I found a way to get the information of the stack call in an x86 program. The code is not difficult to understand:
#ifdef X86
void TestGetCallStack_X86()
{
std::cout << "Call -> TestGetCallStack_X86" << std::endl;
std::cout << "***************************************" << std::endl;
DWORD _ebp, _esp;
__asm mov _ebp, ebp
__asm mov _esp, esp
for (unsigned int index = 0; index < CALLSTACK_NUM; index++) {
void* pAddr = (void*)ULongToPtr(*(((DWORD*)ULongToPtr(_ebp)) + 1));
if (!pAddr)
return;
IMAGEHLP_LINE64 Line;
Line.SizeOfStruct = sizeof(Line);
memset(&Line, 0, sizeof(Line));
DWORD Offset = 0;
if (fnSymGetLineFromAddr64(s_Process, (DWORD64)pAddr, &Offset, &Line))
{
std::cout << index << " [" << pAddr << "]";
std::cout << " File Name:" << Line.FileName << " " << "Line Count:" << Line.LineNumber << std::endl;
std::cout << std::endl;
}
else
{
DWORD error = GetLastError();
if (error == 487)
{
OutputDebugString(TEXT("No debug info in current module\n"));
}
else if (error == 126)
{
OutputDebugString(TEXT("Debug info in current module has not loaded\n"));
}
else
{
OutputDebugString(TEXT("SymGetLineFromAddr64 failed\n"));
}
std::cout << std::endl;
}
_ebp = *(DWORD*)ULongToPtr(_ebp);
if (_ebp == 0 || 0 != (_ebp & 0xFC000000) || _ebp < _esp)
break;
}
std::cout << "***************************************" << std::endl;
}
#endif // X86
I wrote an x64 version and found that MSVC doesn't support inline assembly for x64.
I tried writing a .asm file and calling the function from C++ code, but it didn't work (admittedly, I'm not good at assembly language).
After looking at many approaches, so far I have managed the following using _ReturnAddress:
void TestGetCallStack_X64()
{
void* pAddr = _ReturnAddress();
if (!pAddr)
return;
std::cout << "0" << "\t" << pAddr << std::endl;
std::cout << "Use GetLineFromAddress64:" << std::endl;
IMAGEHLP_LINE64 Line;
Line.SizeOfStruct = sizeof(Line);
memset(&Line, 0, sizeof(Line));
DWORD Offset = 0;
if (fnSymGetLineFromAddr64(s_Process, (DWORD64)pAddr, &Offset, &Line))
{
std::cout << "File Name:" << Line.FileName << "\t" << "Line Count:" << Line.LineNumber << std::endl;
std::cout << std::endl;
}
else
{
//Can not find...
}
}
But this way cannot display the full call stack. How can I achieve the same effect as the x86 version?
PS: I also can't understand the break condition in the x86 code. Can anyone explain it, please? Thank you in advance.
// what does (_ebp & 0xFC000000) mean?
if (_ebp == 0 || 0 != (_ebp & 0xFC000000) || _ebp < _esp)

How do I use the WinHTTP API with "Transfer-Encoding: chunked"

I'm trying to send some data to a web service which requires the "Transfer-encoding: chunked" header. It works fine with a normal POST request.
But as soon as I add the header, I always get:
The content could not be delivered due to the following condition:
Received invalid request from client
This is the part where the request is sent:
std::vector<std::wstring> m_headers;
m_headers.push_back(TEXT("Transfer-encoding: chunked"));
std::wstring m_verb(TEXT("POST"));
std::vector<unsigned __int8> m_payload;
HINTERNET m_connectionHandle = WinHttpConnect(m_http->getSessionHandle(), hostName.c_str(), m_urlParts.nPort, 0);
if (!m_connectionHandle) {
std::cout << "InternetConnect failed: " << GetLastError() << std::endl;
return;
}
__int32 requestFlags = WINHTTP_FLAG_SECURE | WINHTTP_FLAG_REFRESH;
HINTERNET m_requestHandle = WinHttpOpenRequest(m_connectionHandle, m_verb.c_str(), (path + extra).c_str(), NULL, WINHTTP_NO_REFERER, WINHTTP_DEFAULT_ACCEPT_TYPES, requestFlags);
if(!m_requestHandle) {
std::cout << "HttpOpenRequest failed: " << GetLastError() << std::endl;
return;
}
for(auto header : m_headers) {
if(!WinHttpAddRequestHeaders(m_requestHandle, (header + TEXT("\r\n")).c_str(), -1, WINHTTP_ADDREQ_FLAG_ADD)) {
std::cout << "WinHttpAddRequestHeaders failed: " << GetLastError() << std::endl;
return;
}
}
if(!WinHttpSendRequest(m_requestHandle, WINHTTP_NO_ADDITIONAL_HEADERS, 0, WINHTTP_NO_REQUEST_DATA, 0, WINHTTP_IGNORE_REQUEST_TOTAL_LENGTH, (DWORD_PTR)this)) {
std::cout << "HttpSendRequest failed: " << GetLastError() << std::endl;
return;
}
unsigned chunkSize = 1024;
unsigned chunkCount = m_payload.size() / chunkSize;
char chunksizeString[128];
for (unsigned i = 0; i <= chunkCount; i++) {
unsigned actualChunkSize = std::min<unsigned>(chunkSize, m_payload.size() - i * chunkSize);
sprintf_s(chunksizeString, "%d\r\n", actualChunkSize);
if (!WinHttpWriteData(m_requestHandle, chunksizeString, strlen(chunksizeString), (LPDWORD)&m_totalBytesWritten)) {
std::cout << "HttpWriteData failed: " << GetLastError() << std::endl;
return;
}
if (!WinHttpWriteData(m_requestHandle, m_payload.data() + i * chunkSize, actualChunkSize, (LPDWORD)&m_totalBytesWritten)) {
std::cout << "HttpWriteData failed: " << GetLastError() << std::endl;
return;
}
}
// terminate chunked transfer
if (!WinHttpWriteData(m_requestHandle, "0\r\n", strlen("0\r\n"), (LPDWORD)&m_totalBytesWritten)) {
std::cout << "HttpWriteData failed: " << GetLastError() << std::endl;
return;
}
if(!WinHttpReceiveResponse(m_requestHandle, NULL)) {
std::wcout << "HttpReceiveResponse failed: " << GetLastError() << std::endl;
return;
}
I had to copy it from different files, so I hope I got all the important variable definitions. Right now I only use it synchronously since I thought it easier to debug.
As it works with normal POST requests (where I just use WinHttpSendRequest with the payload) I'm guessing it must have to do with the way I use WinHttpSendRequest & WinHttpWriteData, I just don't see how else it should be used.
Any help is appreciated!
You need to split the data into chunks manually, like this (note that the chunk size goes on the wire in hexadecimal, and each chunk body is terminated by CRLF):
int chunkSize = 512; // can be anything
char chunkSizeString[128]; // large enough string buffer
for (int i=0; i<chunksCount; ++i) {
int actualChunkSize = chunkSize; // may be less for the last chunk (when the payload isn't a multiple of chunkSize)
sprintf(chunkSizeString, "%X\r\n", actualChunkSize); // chunk size in hex
WinHttpWriteData(m_requestHandle, chunkSizeString, strlen(chunkSizeString), (LPDWORD)&m_totalBytesWritten);
WinHttpWriteData(m_requestHandle, m_payload.data() + i*chunkSize, actualChunkSize, (LPDWORD)&m_totalBytesWritten);
WinHttpWriteData(m_requestHandle, "\r\n", 2, (LPDWORD)&m_totalBytesWritten); // CRLF ending the chunk body
}
WinHttpWriteData(m_requestHandle, "0\r\n\r\n", 5, (LPDWORD)&m_totalBytesWritten); // the final zero-length chunk, end of transmission
Thanks to the link provided by @anton-malyshev I was able to find the solution; I replaced all the calls to WinHttpWriteData above with this:
/* Chunk header */
char chunksizeString[64];
sprintf_s(chunksizeString, "%zX\r\n", m_payload.size()); // size in hex; %zX matches the size_t argument
if (!WinHttpWriteData(m_requestHandle, chunksizeString, strlen(chunksizeString), (LPDWORD)&m_totalBytesWritten)) {
std::wcout << "WinHttpWriteData chunk header failed: " << getHttpErrorMessage(GetLastError()) << std::endl;
return;
}
/* Chunk body */
if (!WinHttpWriteData(m_requestHandle, m_payload.data(), m_payload.size(), (LPDWORD)&m_totalBytesWritten)) {
std::wcout << "WinHttpWriteData chunk body failed: " << getHttpErrorMessage(GetLastError()) << std::endl;
return;
}
/* Chunk footer */
if (!WinHttpWriteData(m_requestHandle, "\r\n", 2, (LPDWORD)&m_totalBytesWritten)) {
std::wcout << "WinHttpWriteData chunk footer failed: " << getHttpErrorMessage(GetLastError()) << std::endl;
return;
}
/* Terminate chunk transfer */
if (!WinHttpWriteData(m_requestHandle, "0\r\n\r\n", 5, (LPDWORD)&m_totalBytesWritten)) {
std::wcout << "WinHttpWriteData chunk termination failed: " << getHttpErrorMessage(GetLastError()) << std::endl;
return;
}

Win API ReadProcessMemory at base address of DLL returning unexpected data

I'm trying to read the contents of a DLL from memory for some academic research. Specifically, the NTDSA.DLL library for the purpose of mutating specific instructions to simulate programming errors to force the system to fail. The failure will then be recorded to train machine learning algorithms to predict future failures (this is an attempt to generalize previously published research seen here).
I'm getting what I believe to be the base address in virtual memory of the lsass.exe process (which loads the target DLL) through the process outlined here. I'm then calling ReadProcessMemory with an allocated buffer and a handle to lsass obtained by calling OpenProcess with PROCESS_ALL_ACCESS. ReadProcessMemory returns error code 299 (partial read) about 80% of the time, with zero bytes read. My assumption is that the area I'm trying to access is in use when the call is made. Fortunately, it will occasionally return the number of bytes I'm requesting. Unfortunately, the data returned does not match the static DLL on disk in the System32 directory.
So the question is, is ReadProcessMemory doing something funny with the address that I give it, or is my virtual address wrong? Is there another way to figure out where that DLL gets loaded into memory? Any thoughts? Any help or suggestions would be greatly appreciated.
Adding Code:
// FaultInjection.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <windows.h>
#include <psapi.h>
#include <string>
#include <iostream>
#include <fstream>
#include <stdio.h>
#include <io.h>
#include <tchar.h>
using namespace std;
int _tmain(int argc, _TCHAR* argv[]) {
// Declarations
int pid = 0;
__int64* start_addr;
DWORD size_of_ntdsa;
DWORD aProcesses[1024], cbNeeded, cProcesses;
TCHAR szProcessName[MAX_PATH] = TEXT("<unknown>");
HMODULE hmods[1024];
unsigned int i;
// Get All pids
if (!EnumProcesses(aProcesses, sizeof(aProcesses), &cbNeeded)){
cout << "Failed to get all PIDs: " << GetLastError() << endl;
return -1;
}
// Find pid for lsass.exe
cProcesses = cbNeeded / sizeof(DWORD);
for (i = 0; i < cProcesses; i++) {
if (aProcesses[i] != 0) {
HANDLE hProc = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, aProcesses[i]);
if (hProc != NULL) {
HMODULE hMod;
DWORD cbNeededMod;
if (EnumProcessModules(hProc, &hMod, sizeof(hMod), &cbNeededMod)) {
GetModuleBaseName(hProc, hMod, szProcessName, sizeof(szProcessName) / sizeof(TCHAR));
}
if (wstring(szProcessName).find(L"lsass.exe") != string::npos) {
pid = aProcesses[i];
}
CloseHandle(hProc);
}
}
}
cout << "lsass pid: " << pid << endl;
HANDLE h_lsass = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
if (!h_lsass) {
cout << "Failed to open process (are you root?): " << GetLastError() << endl;
return -1;
}
// Get Process Image File Name
char filename[MAX_PATH];
if (GetProcessImageFileName(h_lsass, (LPTSTR)&filename, MAX_PATH) == 0) {
cout << "Failed to get image file name: " << GetLastError() << endl;
CloseHandle(h_lsass);
return -1;
}
// Enumerate modules within process
if (EnumProcessModules(h_lsass, hmods, sizeof(hmods), &cbNeeded)) {
for (i = 0; i < (cbNeeded / sizeof(HMODULE)); i++) {
TCHAR szModName[MAX_PATH];
if (GetModuleFileNameEx(h_lsass, hmods[i], szModName, sizeof(szModName) / sizeof(TCHAR))) {
if (wstring(szModName).find(L"NTDSA.dll") != string::npos) {
_tprintf(TEXT("%s\n"), szModName);
MODULEINFO lModInfo = { 0 };
if (GetModuleInformation(h_lsass, hmods[i], &lModInfo, sizeof(lModInfo))){
cout << "\t Base Addr: " << lModInfo.lpBaseOfDll << endl;
cout << "\t Entry Point: " << lModInfo.EntryPoint << endl;
cout << "\t Size of image: " << lModInfo.SizeOfImage << endl;
start_addr = (__int64*)lModInfo.lpBaseOfDll;
size_of_ntdsa = lModInfo.SizeOfImage;
}
else {
cout << "Failed to Print enumerated list of modules: " << GetLastError() << endl;
}
}
} else {
cout << "Failed to Print enumerated list of modules: " << GetLastError() << endl;
}
}
}
else {
cout << "Failed to enum the modules: " << GetLastError() << endl;
}
// Ready to continue?
string cont = "";
cout << "Continue? [Y|n]: ";
getline(cin, cont);
if (cont.find("n") != string::npos || cont.find("N") != string::npos) {
CloseHandle(h_lsass);
return 0;
}
void* buf = malloc(size_of_ntdsa);
if (!buf) {
cout << "Failed to allocate space for memory contents: " << GetLastError() << endl;
CloseHandle(h_lsass);
return -1;
}
SIZE_T num_bytes_read = 0;
int count = 0;
if (ReadProcessMemory(h_lsass, &start_addr, buf, size_of_ntdsa, &num_bytes_read) != 0) {
cout << "Read success. Got " << num_bytes_read << " bytes: " << endl;
} else {
int error_code = GetLastError();
if (error_code == 299) {
cout << "Partial read. Got " << num_bytes_read << " bytes: " << endl;
} else {
cout << "Failed to read memory: " << GetLastError() << endl;
CloseHandle(h_lsass);
free(buf);
return -1;
}
}
if (num_bytes_read > 0) {
FILE *fp;
fopen_s(&fp, "C:\\ntdsa_new.dll", "wb"); // binary mode, or every 0x0A byte gets expanded to CR LF
SIZE_T bytes_written = fwrite(buf, 1, num_bytes_read, fp);
fclose(fp);
cout << "Wrote " << bytes_written << " bytes." << endl;
}
CloseHandle(h_lsass);
free(buf);
return 0;
}
The code works as described, minus my amateur mistake of passing the address of the variable that held the target address in the other process. In the code above, I changed:
if (ReadProcessMemory(h_lsass, &start_addr, buf, size_of_ntdsa, &num_bytes_read) != 0) {
to
if (ReadProcessMemory(h_lsass, start_addr, buf, size_of_ntdsa, &num_bytes_read) != 0) {
Works like a charm. Thank you ssbssa for pointing out the mistake, and sorry for wasting anyone's time.

C++ ReadProcessMemory buffer always displays 0

So I think I'm not understanding this quite well. If I'm not mistaken, the pointer value should be stored in pTemp, right? So if the base pointer is 0x00001A, shouldn't pTemp display the same thing? I'm really new to C++, and any help would be appreciated!
DWORD pointer = baseAddress;
DWORD pTemp;
DWORD pointerAddress;
cout << "Base Address: " << (DWORD*) pointer << endl;
for (int i = 0; i < PointerLevel; i++)
{
if (i == 0)
{
ReadProcessMemory(handle, (LPVOID)pointer, &pTemp, sizeof(4), NULL);
cout << "pTemp: " << pTemp << endl;
Try this:
void * src_addr = reinterpret_cast<void *>(baseAddress);
std::size_t n;
if (ReadProcessMemory(handle, src_addr, &pTemp, sizeof pTemp, &n))
{
if (n == sizeof pTemp)
{
std::cout << "Success: pTemp = " << pTemp << "\n";
}
else
{
std::cout << "We only read " << n << " bytes, not the expected "
<< sizeof pTemp << " bytes.\n";
}
}
else
{
std::cout << "Failed to read process memory.\n";
}

Null pointer sqlite3 handle after passed to function call

When I use a function in C++ to initialize sqlite3, the handle is NULL after the function returns. Any idea what might cause this? I simply hand the pointer over as a parameter. If I move the open call into main, it works fine. What is happening here? Is something going out of scope?
#include <iostream>
#include "sqlite3.h"
using namespace std;
int init_table(sqlite3 *dbH, string db_name)
{
if (sqlite3_open(db_name.c_str(), &dbH) != SQLITE_OK)
{
cout << "Failed to open DB : " << sqlite3_errmsg(dbH) << endl;
abort();
}
else
{
cout << "Opened database: " << db_name << endl;
}
if (sqlite3_exec(dbH, "PRAGMA synchronous = OFF", NULL, NULL, NULL) != SQLITE_OK)
{
cout << "Failed to set synchronous: " << sqlite3_errmsg(dbH) << endl;
}
if (sqlite3_exec(dbH, "PRAGMA journal_mode = WAL", NULL, NULL, NULL) != SQLITE_OK)
{
cout << "Failed to set journal mode: " << sqlite3_errmsg(dbH) << endl;
}
cout << "dbH 2: " << dbH << endl;
}
int main()
{
sqlite3 * dbH;
dbH = NULL;
cout << "dbH 1: " << dbH << endl;
string dbName = "foo1.db";
init_table(dbH, dbName);
cout << "dbH 3: " << dbH << endl;
}
And when Run
$ ./a.out
dbH 1: 0
Opened database: foo1.db
dbH 2: 0x5baa048
dbH 3: 0
Should it be
int init_table(sqlite3 **dbH, string db_name)
and should I pass a pointer to a pointer?
Maybe there is no problem with the sqlite handling itself. You either pass the pointer by reference or as a pointer to pointer.
Of course, after that modification you need to pass &dbH to init_table.
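That answer is the heart of it: dbH is passed by value, so sqlite3_open fills in a local copy and main's pointer stays NULL. This is exactly why sqlite3_open itself takes a sqlite3**. The pattern in miniature (the Handle type here is made up for illustration):

```cpp
struct Handle { int id; };
static Handle g_handle{42};

// Pass-by-value: 'h' is a copy of the caller's pointer, so this
// assignment is lost when the function returns.
void openByValue(Handle* h) { h = &g_handle; }

// Pass the pointer's address: writing through 'h' updates the
// caller's pointer -- the same shape as sqlite3_open(..., &dbH).
void openByAddress(Handle** h) { *h = &g_handle; }
```

Passing a `sqlite3*&` reference works the same way; the only requirement is that the function can write to the caller's pointer, not to a copy of it.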