My goal is to read large chunks of executable memory from a target app.
ReadProcessMemory() sometimes fails, but that is okay; I can still examine the rest of the bytes that were read.
I don't modify anything in the target application, such as values.
My problem is that the target app crashes after a minute or so, or when certain reallocations happen in it.
I went to extremes like reading without VirtualProtectEx(), so as not to modify even the protection attributes of the memory regions in question.
I'm curious what could cause a target application to crash after reading from its memory, without modifying values or access rights.
Side note: the memory in question is probably being accessed simultaneously by the target application and by my application. (From the target app's perspective it is being read, executed and written.)
You can take a look at my code here:
UINT64 pageNum = 0;
BYTE page[4096];
for (UINT64 i = start; i < end; i += 0x1000)
{
    // A failed read is tolerated on purpose; whatever was read into the buffer is scanned anyway.
    ReadProcessMemory(qtHandle, (void*)i, page, sizeof(page), &bytesRead);
    foundCode = findCode(page, pageNum);
    if (foundCode != 0)
    {
        foundCode += start - 11; // convert the page-relative offset to an absolute address (the -11 is pattern-specific)
        break;
    }
    pageNum++;
}
cout << hex << foundCode << endl;
CloseHandle(qtHandle);
return 0;
}

UINT64 findCode(BYTE* pg, UINT64 pageNum)
{
    for (size_t i = 0; i < 4096; i++)
    {
        if (findPattern(asm2, sizeof(asm2), pg, i)) { // asm2 is an array of bytes
            return (pageNum * 4096 + i);
        }
    }
    return 0;
}

bool findPattern(const BYTE* pattern, size_t patternLen, const BYTE* page, size_t index)
{
    // The pattern length is passed in explicitly: sizeof on a pointer parameter
    // would only yield the size of the pointer, not of the array.
    for (size_t i = 0; i < patternLen; i++)
    {
        if (index + i >= 4096 || page[index + i] != pattern[i])
        {
            return false;
        }
    }
    return true;
}
ReadProcessMemory() cannot cause the target program to crash.
An anticheat/antidebug mechanism might be detecting you and terminating the application.
If you use VirtualProtectEx() to change permissions, that can certainly cause a crash.
We would need to see more code to tell you what the problem is.
It was the usage of VirtualProtectEx() that caused the problem.
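For anyone hitting the same thing, here is a minimal read-only sketch that never touches page protection at all; the process id, address range, and pattern scan are placeholders, not part of the original code:
#include <windows.h>
#include <cstdint>

// Sketch: scan a remote range page by page using only PROCESS_VM_READ.
// VirtualProtectEx() is never called, so page protections stay untouched.
int main()
{
    DWORD pid = 1234; // placeholder: the target process id
    HANDLE process = OpenProcess(PROCESS_VM_READ, FALSE, pid);
    if (!process) return 1;

    const uint64_t start = 0x7FF600000000ULL; // placeholder address range
    const uint64_t end   = 0x7FF600100000ULL;
    BYTE page[4096];

    for (uint64_t addr = start; addr < end; addr += sizeof(page))
    {
        SIZE_T bytesRead = 0;
        if (!ReadProcessMemory(process, (LPCVOID)addr, page, sizeof(page), &bytesRead))
            continue; // unreadable page (guard page, uncommitted, ...): just skip it
        // ... scan the first bytesRead bytes of page for the pattern here ...
    }

    CloseHandle(process);
    return 0;
}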
I am curious to know whether it is possible to extend a process module, such as a DLL, to be larger.
As an example: I inject test.dll into test.exe and make test.dll 0x1000 bytes larger; these new bytes now have PAGE_EXECUTE protection and I can modify them however I like.
I want to be able to do this only after injecting the DLL, not by adding fake code to the DLL to overwrite, as I will likely have no control over the execution of the actual test subject.
I have already tried using VirtualAlloc at the end of the DLL's sections to increase its size, but it hasn't worked; I usually get ERROR_INVALID_ADDRESS. I made sure to verify that the process had enough free space after the DLL to allocate and commit.
uint64_t module_contents = (uint64_t)GetModuleHandleA(NULL);
IMAGE_DOS_HEADER* dos_header = (IMAGE_DOS_HEADER*)module_contents;
IMAGE_NT_HEADERS64* nt_header = (IMAGE_NT_HEADERS64*)(module_contents + dos_header->e_lfanew);
IMAGE_SECTION_HEADER* section = (IMAGE_SECTION_HEADER*)(nt_header + 1);
for (unsigned short i = 0; i < nt_header->FileHeader.NumberOfSections; i++) {
    if (!section->SizeOfRawData && section->Characteristics & IMAGE_SCN_MEM_EXECUTE) {
        char name[9] = {0}; // section names are not guaranteed to be null-terminated
        memcpy(name, section->Name, 8);
        if (std::string(name).find("text") == std::string::npos) {
            //std::cout << name << std::endl;
            break;
        }
    }
    section++;
}
std::cout << VirtualAlloc((void*)(module_contents + section->VirtualAddress + section->Misc.PhysicalAddress),
                          0x1000, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE) << std::endl;
The expected result is to fill in and commit the empty space after the module's sections, which I verified has not been committed, both via the cout and by looking at the memory region in Cheat Engine. The result I get is error 0x1E7, ERROR_INVALID_ADDRESS. I have gotten other errors, such as ERROR_INVALID_PARAMETER, when using a smaller size like 0x1.
Any help with this is appreciated!
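Not an answer, but one way to see why the call fails is to query the region first. A small diagnostic sketch (the module handle and probe offset are placeholders):
#include <windows.h>
#include <cstdio>

// Sketch: inspect the region that follows a loaded module before trying to
// VirtualAlloc over it. Reserving on top of pages that are already part of
// the mapped image typically fails with ERROR_INVALID_ADDRESS.
void inspect_after_module(HMODULE module, SIZE_T probe_offset)
{
    MEMORY_BASIC_INFORMATION mbi = {};
    BYTE* probe = (BYTE*)module + probe_offset; // e.g. the end of the last section
    if (VirtualQuery(probe, &mbi, sizeof(mbi)))
    {
        printf("base=%p state=%#lx protect=%#lx type=%#lx\n",
               mbi.BaseAddress, mbi.State, mbi.Protect, mbi.Type);
        // State == MEM_FREE  -> a fresh MEM_RESERVE | MEM_COMMIT can work, but the
        //                       base is rounded to the 64 KB allocation granularity.
        // Type  == MEM_IMAGE -> the pages belong to the mapped PE image; they cannot
        //                       be re-reserved, only their protection can be changed.
    }
}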
I'm experimenting with http://www.capstone-engine.org on macOS, with a macOS x86_64 binary. It more or less works; however, I have two concerns.
I'm loading a test dylib:
[self custom_logging:[NSString stringWithFormat:@"Module Path:%@", clientPath]];
NSMutableData *ModuleNSDATA = [NSMutableData dataWithContentsOfFile:clientPath];
[self custom_logging:[NSString stringWithFormat:@"Client Module Size: %lu MB", (ModuleNSDATA.length/1024/1024)]];
[ModuleNSDATA replaceBytesInRange:NSMakeRange(0, 20752) withBytes:NULL length:0];
uint8_t *bytes = (uint8_t*)[ModuleNSDATA bytes];
long size = [ModuleNSDATA length]/sizeof(uint8_t);
[self custom_logging:[NSString stringWithFormat:@"UInt8_t array size: %lu", size]];
ModuleASM = [NSString stringWithCString:disassembly(bytes,size,0x5110).c_str() encoding:[NSString defaultCStringEncoding]];
As far as my research goes, it seems I need to trim the first bytes from the binary to strip the header and metadata, until real instructions are encountered. However, I'm not really sure whether Capstone provides any API for this, or whether I need to scan for byte patterns to locate the address of the first instruction.
In fact, I've applied a simple workaround: I found a safe address that will certainly contain instructions in most modules I load, but I would like to apply a proper solution.
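For reference, the usual "proper" route is to read the Mach-O section table yourself and hand Capstone only the __TEXT,__text bytes. A rough sketch, assuming a thin (non-fat) 64-bit image; this is not from the original post:
#include <mach-o/loader.h>
#include <cstdint>
#include <cstring>

// Sketch: walk the load commands of a thin x86_64 Mach-O and return the file
// offset and size of the __TEXT,__text section, i.e. where the code actually starts.
bool find_text_section(const uint8_t* file, uint64_t* offset, uint64_t* size)
{
    const mach_header_64* header = (const mach_header_64*)file;
    if (header->magic != MH_MAGIC_64)
        return false; // fat binaries need an extra fat_header pass first

    const uint8_t* cmd = file + sizeof(mach_header_64);
    for (uint32_t i = 0; i < header->ncmds; i++) {
        const load_command* lc = (const load_command*)cmd;
        if (lc->cmd == LC_SEGMENT_64) {
            const segment_command_64* seg = (const segment_command_64*)lc;
            if (strcmp(seg->segname, "__TEXT") == 0) {
                const section_64* sect = (const section_64*)(seg + 1);
                for (uint32_t j = 0; j < seg->nsects; j++, sect++) {
                    if (strcmp(sect->sectname, "__text") == 0) {
                        *offset = sect->offset; // file offset of the first instruction
                        *size = sect->size;     // number of code bytes
                        return true;
                    }
                }
            }
        }
        cmd += lc->cmdsize;
    }
    return false;
}
The offset/size pair returned here could then replace the hard-coded replaceBytesInRange: trim and the 0x5110 start address used above.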
I've successfully loaded and disassembled part of the module's code using the workaround described above. Sadly, however, cs_disasm returns no more than 5000-6000 instructions, which is confusing; it seems to break on ordinary instructions that it shouldn't break on. I'm not really sure what I'm doing wrong. The module contains more than 15 MB of code, so there are far more than 5k instructions to disassemble.
Below is the function I based on the Capstone docs example:
string disassembly(uint8_t *bytearray, long size, uint64_t startAddress){
    csh handle;
    cs_insn *insn;
    size_t count;
    string output;
    if (cs_open(CS_ARCH_X86, CS_MODE_64, &handle) == CS_ERR_OK){
        count = cs_disasm(handle, bytearray, size, startAddress, 0, &insn);
        printf("\nCOUNT:%lu",count);
        if (count > 0) {
            size_t j;
            for (j = 0; j < count; j++) {
                char buffer[512];
                int i=0;
                i = sprintf(buffer, "0x%" PRIx64":\t%s\t\t%s\n", insn[j].address, insn[j].mnemonic,insn[j].op_str);
                output += buffer;
            }
            cs_free(insn, count);
        } else {
            output = "ERROR: Failed to disassemble given code!\n";
        }
    }
    cs_close(&handle);
    return output;
}
I will really appreciate any help on this.
Warmly,
David
The answer is simply to use SKIPDATA mode. Capstone is great, but the docs are very poor.
A working example is below. This mode is still quite buggy, so preferably the detection of data sections should be custom code. For me it works well only with small chunks of code; however, it does indeed disassemble up to the end of the file.
string disassembly(uint8_t *bytearray, long size, uint64_t startAddress){
    csh handle;
    cs_insn *insn;
    size_t count;
    string output;
    // Emit undecodable bytes as a pseudo "db" instruction instead of stopping.
    cs_opt_skipdata skipdata = {
        .mnemonic = "db",
    };
    if (cs_open(CS_ARCH_X86, CS_MODE_64, &handle) == CS_ERR_OK){
        cs_option(handle, CS_OPT_DETAIL, CS_OPT_ON);
        cs_option(handle, CS_OPT_SKIPDATA, CS_OPT_ON);
        cs_option(handle, CS_OPT_SKIPDATA_SETUP, (size_t)&skipdata);
        count = cs_disasm(handle, bytearray, size, startAddress, 0, &insn);
        if (count > 0) {
            for (size_t j = 0; j < count; j++) {
                char buffer[512];
                snprintf(buffer, sizeof(buffer), "0x%" PRIx64 ":\t%s\t\t%s\n",
                         insn[j].address, insn[j].mnemonic, insn[j].op_str);
                output += buffer;
            }
            cs_free(insn, count);
        } else {
            output = "ERROR: Failed to disassemble given code!\n";
        }
        cs_close(&handle); // only close a handle that was actually opened
    }
    return output;
}
Shame to those trolls who down-voted this question.
My goal is to lock the virtual memory allocated for my process heaps (to prevent the possibility of it being swapped out to disk).
I use the following code:
//pseudo-code, error checks are omitted for brevity
struct MEM_PAGE_TO_LOCK{
    const BYTE* pBaseAddr;    //Base address of the page
    size_t szcbBlockSz;       //Size of the block in bytes

    MEM_PAGE_TO_LOCK()
        : pBaseAddr(NULL)
        , szcbBlockSz(0)
    {
    }
};

void WorkerThread(LPVOID pVoid)
{
    //Called repeatedly from a worker thread
    HANDLE hHeaps[256] = {0};    //Assume large array for the sake of this example
    UINT nNumberHeaps = ::GetProcessHeaps(256, hHeaps);
    if(nNumberHeaps > 256)
        nNumberHeaps = 256;

    std::vector<MEM_PAGE_TO_LOCK> arrPages;
    for(UINT i = 0; i < nNumberHeaps; i++)
    {
        lockUnlockHeapAndWalkIt(hHeaps[i], arrPages);
    }

    //Now lock collected virtual memory
    for(size_t p = 0; p < arrPages.size(); p++)
    {
        ::VirtualLock((void*)arrPages[p].pBaseAddr, arrPages[p].szcbBlockSz);
    }
}

void lockUnlockHeapAndWalkIt(HANDLE hHeap, std::vector<MEM_PAGE_TO_LOCK>& arrPages)
{
    if(::HeapLock(hHeap))
    {
        __try
        {
            walkHeapAndCollectVMPages(hHeap, arrPages);
        }
        __finally
        {
            ::HeapUnlock(hHeap);
        }
    }
}

void walkHeapAndCollectVMPages(HANDLE hHeap, std::vector<MEM_PAGE_TO_LOCK>& arrPages)
{
    PROCESS_HEAP_ENTRY phe = {0};
    MEM_PAGE_TO_LOCK mptl;
    SYSTEM_INFO si = {0};
    ::GetSystemInfo(&si);

    for(;;)
    {
        //Get next heap block
        if(!::HeapWalk(hHeap, &phe))
        {
            if(::GetLastError() != ERROR_NO_MORE_ITEMS)
            {
                //Some other error
                ASSERT(NULL);
            }
            break;
        }

        //We need to skip heap regions & uncommitted areas
        //We're interested only in allocated blocks
        if((phe.wFlags & (PROCESS_HEAP_REGION |
            PROCESS_HEAP_UNCOMMITTED_RANGE | PROCESS_HEAP_ENTRY_BUSY)) == PROCESS_HEAP_ENTRY_BUSY)
        {
            if(phe.cbData &&
                phe.lpData)
            {
                //Get address aligned at the page size boundary
                size_t nRmndr = (size_t)phe.lpData % si.dwPageSize;
                BYTE* pBegin = (BYTE*)((size_t)phe.lpData - nRmndr);

                //Get segment size, also page aligned (round it up though)
                BYTE* pLast = (BYTE*)phe.lpData + phe.cbData;
                nRmndr = (size_t)pLast % si.dwPageSize;
                if(nRmndr)
                    pLast += si.dwPageSize - nRmndr;
                size_t szcbSz = pLast - pBegin;

                //Do we have such a block already, or an adjacent one?
                std::vector<MEM_PAGE_TO_LOCK>::iterator itr = arrPages.begin();
                for(; itr != arrPages.end(); ++itr)
                {
                    const BYTE* pLPtr = itr->pBaseAddr + itr->szcbBlockSz;

                    //See if they intersect or are adjacent
                    if(pLPtr >= pBegin &&
                        itr->pBaseAddr <= pLast)
                    {
                        //Intersected with another memory block
                        //Get the larger of the two
                        if(pBegin < itr->pBaseAddr)
                            itr->pBaseAddr = pBegin;
                        itr->szcbBlockSz = pLPtr > pLast ? pLPtr - itr->pBaseAddr : pLast - itr->pBaseAddr;
                        break;
                    }
                }

                if(itr == arrPages.end())
                {
                    //Add new page
                    mptl.pBaseAddr = pBegin;
                    mptl.szcbBlockSz = szcbSz;
                    arrPages.push_back(mptl);
                }
            }
        }
    }
}
This method works, except that, rarely, the following happens: the app hangs (UI and everything), and even if I run it under the Visual Studio debugger and try Break All, it shows an error message that no user-mode threads are running:
The process appears to be deadlocked (or is not running any user-mode
code). All threads have been stopped.
I tried it several times. The second time the app hung, I used Task Manager to create a dump file, then loaded the .dmp file into Visual Studio and analyzed it. The debugger showed that the deadlock happened somewhere in the kernel:
and if you review the call stack:
It points to this location in the code:
CString str;
str.Format(L"Some formatting value=%d, %s", value, etc);
Experimenting further, if I remove the HeapLock and HeapUnlock calls from the code above, it doesn't seem to hang anymore. But then HeapWalk may sometimes raise an unhandled exception, an access violation.
So, any suggestions on how to resolve this?
The problem is that you're using the C runtime's memory management, and more specifically the CRT's debug heap, while holding the operating system's heap lock.
The call stack you've posted includes _free_dbg, which always claims the CRT debug heap lock before taking any other action, so we know the thread holds the CRT debug heap lock. We can also see that the CRT was inside an operating system call made by _CrtIsValidHeapPointer when the deadlock occurred; the only such call is to HeapValidate and HEAP_NO_SERIALIZE is not specified.
So the thread whose call stack has been posted is holding the CRT debug heap lock and attempting to claim the operating system's heap lock.
The worker thread, on the other hand, holds the operating system's heap lock and makes calls that attempt to claim the CRT debug heap lock.
QED. Classic deadlock situation.
In a debug build, you will need to refrain from using any C or C++ library functions that might allocate or free memory while you are holding the corresponding operating system heap lock.
Even in a release build, you would still need to avoid any library functions that might allocate or release memory while holding a lock, which might be a problem if, for example, a hypothetical future implementation of std::vector was changed to make it thread-safe.
I recommend that you avoid the issue entirely, which is probably best done by creating a dedicated heap for your worker thread and making all the necessary memory allocations from that heap. It would probably also be best to exclude this heap from the processing; the documentation for HeapWalk does not explicitly say that you should not modify the heap during enumeration, but it seems risky.
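A minimal sketch of that idea; the allocator below is illustrative only (not from the answer) and simply routes the worker's own bookkeeping allocations to a private heap:
#include <windows.h>
#include <new>
#include <vector>

// Sketch: a private heap for the worker thread's bookkeeping, so growing the
// std::vector never allocates from the CRT heap or from any heap being walked
// while HeapLock() is held.
static HANDLE g_workerHeap = ::HeapCreate(0, 0, 0);

template <class T>
struct WorkerHeapAllocator
{
    typedef T value_type;
    WorkerHeapAllocator() {}
    template <class U> WorkerHeapAllocator(const WorkerHeapAllocator<U>&) {}

    T* allocate(size_t n)
    {
        void* p = ::HeapAlloc(g_workerHeap, 0, n * sizeof(T));
        if (!p) throw std::bad_alloc();
        return static_cast<T*>(p);
    }
    void deallocate(T* p, size_t) { ::HeapFree(g_workerHeap, 0, p); }
};

template <class T, class U>
bool operator==(const WorkerHeapAllocator<T>&, const WorkerHeapAllocator<U>&) { return true; }
template <class T, class U>
bool operator!=(const WorkerHeapAllocator<T>&, const WorkerHeapAllocator<U>&) { return false; }

// The page list then becomes:
//   std::vector<MEM_PAGE_TO_LOCK, WorkerHeapAllocator<MEM_PAGE_TO_LOCK> > arrPages;
// and g_workerHeap should be skipped when iterating the handles returned by
// GetProcessHeaps(), so the private heap is never locked or walked.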
I just ran into a free(): invalid next size (fast) problem while writing a C++ program, and unfortunately I have failed to figure out why it happens. The code is given below.
bool not_corrupt(struct packet *pkt, int size)
{
    if (!size) return false;

    bool result = true;
    char *exp_checksum = (char*)malloc(size * sizeof(char));
    char *rec_checksum = (char*)malloc(size * sizeof(char));
    char *rec_data = (char*)malloc(size * sizeof(char));

    //memcpy(rec_checksum, pkt->data+HEADER_SIZE+SEQ_SIZE+DATA_SIZE, size);
    //memcpy(rec_data, pkt->data+HEADER_SIZE+SEQ_SIZE, size);
    for (int i = 0; i < size; i++) {
        rec_checksum[i] = pkt->data[HEADER_SIZE+SEQ_SIZE+DATA_SIZE+i];
        rec_data[i] = pkt->data[HEADER_SIZE+SEQ_SIZE+i];
    }

    do_checksum(exp_checksum, rec_data, DATA_SIZE);

    for (int i = 0; i < size; i++) {
        if (exp_checksum[i] != rec_checksum[i]) {
            result = false;
            break;
        }
    }

    free(exp_checksum);
    free(rec_checksum);
    free(rec_data);
    return result;
}
The macros used are:
#define RDT_PKTSIZE 128
#define SEQ_SIZE 4
#define HEADER_SIZE 1
#define DATA_SIZE ((RDT_PKTSIZE - HEADER_SIZE - SEQ_SIZE) / 2)
The struct used is:
struct packet {
char data[RDT_PKTSIZE];
};
This piece of code doesn't go wrong every time; it sometimes crashes with free(): invalid next size (fast) at the free(exp_checksum); line.
What's even worse is that sometimes the contents of rec_checksum are simply not equal to the pkt->data[HEADER_SIZE+SEQ_SIZE+DATA_SIZE] bytes they were copied from, even though they should be the same according to the watch expressions in my debugging tools. I tried both the memcpy and the for-loop approach, but the problem remains.
I don't quite understand why this would happen. I would be very thankful if anyone could explain this to me.
Edit:
Here's the do_checksum() method, which is very simple:
void do_checksum(char* checksum, char* data, int size)
{
    for (int i = 0; i < size; i++)
    {
        checksum[i] = ~data[i];
    }
}
Edit 2:
Thanks, everyone.
I switched another part of my code from an STL queue to an STL vector, and then the results turned out fine.
But I still haven't figured out why. I am sure that I never pop from an empty queue.
The error you report is indicative of heap corruption. Heap corruption can be hard to track down, and tools like valgrind can be extremely helpful. It is often hard to debug with a simple debugger because the runtime error typically occurs long after the actual corruption happened.
That said, the most obvious potential cause of your heap corruption, given the code posted so far, is if DATA_SIZE is greater than size. If that occurs then do_checksum will write beyond the end of exp_checksum.
Three immediate suggestions:
Check for size <= 0 (instead of "!size")
Check for size >= DATA_SIZE
Check for malloc returning NULL
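Put together, the three checks might look roughly like this (a sketch only; it also passes size to do_checksum, as suggested elsewhere in this thread):
bool not_corrupt(struct packet *pkt, int size)
{
    // Reject sizes the buffers cannot hold: non-positive, or larger than the
    // checksum region that is read out of pkt->data.
    if (size <= 0 || size > DATA_SIZE) return false;

    char *exp_checksum = (char*)malloc(size);
    char *rec_checksum = (char*)malloc(size);
    char *rec_data     = (char*)malloc(size);
    if (!exp_checksum || !rec_checksum || !rec_data) {
        free(exp_checksum); free(rec_checksum); free(rec_data); // free(NULL) is a no-op
        return false;
    }

    for (int i = 0; i < size; i++) {
        rec_checksum[i] = pkt->data[HEADER_SIZE + SEQ_SIZE + DATA_SIZE + i];
        rec_data[i]     = pkt->data[HEADER_SIZE + SEQ_SIZE + i];
    }

    do_checksum(exp_checksum, rec_data, size); // size, not DATA_SIZE

    bool result = true;
    for (int i = 0; i < size; i++) {
        if (exp_checksum[i] != rec_checksum[i]) { result = false; break; }
    }

    free(exp_checksum);
    free(rec_checksum);
    free(rec_data);
    return result;
}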
Have you tried Valgrind?
Also, make sure never to pass more than RDT_PKTSIZE as size to not_corrupt():
bool not_corrupt(struct packet *pkt, int size)
{
    if (!size) return false;
    if (size > RDT_PKTSIZE) return false;

    /* ... */
Valgrind is good... but validating all your inputs and checking all error conditions is even better.
Stepping through the code in the debugger isn't a bad idea, either.
I would also call do_checksum with size (your actual size) instead of DATA_SIZE (presumably the maximum size).
DATA_SIZE is a macro defining the max length in my program, so size
should be less than DATA_SIZE
Even if that is true, your logic only allocates enough memory to hold size characters.
So you should call
do_checksum(exp_checksum, rec_data, size);
And if you do not want to use std::string (which is fine), you should switch from malloc/free to new/delete when writing C++.
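For illustration, the manual malloc/free pairs inside not_corrupt() could also become std::vector buffers that release themselves (a sketch, not the poster's code):
#include <vector>

// Sketch: drop-in replacement for the three malloc'd buffers in not_corrupt();
// nothing to free, and the size bookkeeping stays explicit.
std::vector<char> exp_checksum(size);
std::vector<char> rec_checksum(size);
std::vector<char> rec_data(size);
// ... fill and compare exactly as before ...
do_checksum(exp_checksum.data(), rec_data.data(), size);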
Problem solved, thank you all for the help
I've got a bit of a problem here. It's not something that's blowing my program up, but it bothers me that I can't fix it. I have a function that reads in some data from a file; at the end of its execution, the stack around the variable longGarbage is corrupted. I've looked around a bit and found that a possible cause is writing to invalid memory. I cleaned up some memory leaks that I had, but the problem persists. What's confusing me is that it happens when the function finishes executing, so it appears to happen when the variable goes out of scope. Here's the code...
CHCF::CHCF(std::string fileName)
    : PAKID("HVST84838672")
{
    FILE * archive = fopen(fileName.c_str(), "rb");
    std::string strGarbage = "";
    unsigned int intGarbage = 0;
    unsigned long longGarbage = 0;
    unsigned char * data = 0;
    char charGarbage = '0';

    if (!archive)
    {
        fclose (archive);
        return;
    }

    for (int i = 0; i < 12; i++)
    {
        fread(&charGarbage, 1, 1, archive);
        strGarbage += charGarbage;
    }

    if (strGarbage != PAKID)
    {
        fclose(archive);
        throw "Incorrect archive format";
    }

    strGarbage = "";
    fread(&_gameID, sizeof(_gameID),1,archive);
    fread(&_fileCount, sizeof(_fileCount),1,archive);

    for (int i = 0; i < _fileCount; i++)
    {
        fread(&longGarbage, 8,1,archive); //file offset
        fread(&intGarbage, 4, 1, archive);//fileName
        for (int i = 0; i < intGarbage; i++)
        {
            fread(&charGarbage, 1, 1, archive);
            strGarbage += charGarbage;
        }
        fread(&longGarbage, 8, 1, archive); //fileSize
        fread(&intGarbage, 4, 1, archive); //fileType

        data = new unsigned char[longGarbage];
        for (long i = 0; i < longGarbage; i++)
        {
            fread(&charGarbage, 1, 1, archive);
            data[i] = charGarbage;
        }

        switch ((FILETYPES)intGarbage)
        {
        case MAP:
            _maps.append(strGarbage, new CFileData(strGarbage, FILETYPES::MAP, data, longGarbage));
            break;
        default:
            break;
        }

        delete [] data;
        data = 0;
        strGarbage.clear();
        longGarbage = 0;
    }

    fclose(archive);
} //error happens here
Here is the CFileData constructor:
CFileData::CFileData(std::string fileName, FILETYPES type, unsigned char *data, long fileSize)
{
    _fileName = fileName;
    _type = type;
    _data = new unsigned char[fileSize];
    for (int i = 0; i < fileSize; i++)
        _data[i] = data[i];
}
Might I suggest std::vector instead of calling new and delete manually? Your code is not exception-safe: you leak if an exception is thrown.
fread(&longGarbage, 8, 1, archive); //fileSize -- are you sure sizeof(long) is 8? I suspect it's 4. I believe on Linux boxes it is sometimes 8, but almost everywhere else sizeof(long) is 4 and sizeof(long long) is 8.
What about the constructors of this class's members? They can corrupt the stack too.
What's happening is that something is writing to memory around or over the location of longGarbage, which is causing the corruption.
You don't say what development environment you are using. One way to diagnose this would be to set a data breakpoint that triggers when a specific memory location changes. Choose a memory location that overlaps the corrupted area and wait for the breakpoint to fire unexpectedly.
Another way to diagnose this would be to examine the code that changes memory around or over longGarbage. That could be almost anything, of course, but the likely candidates are modifications to data, modifications to intGarbage, and modifications to longGarbage itself.
We can narrow things down even further because we can (usually) be fairly sure the assignment operator itself is safe. Code like data = new... isn't likely to be the culprit, so really we need to focus on memory changes that involve taking the address of data, intGarbage, or longGarbage, in particular memory changes that write more bytes than they should.
Several others have already pointed out that a long is probably not eight bytes in length. If you pass the wrong length to fread, the extra bytes retrieved have to go somewhere.
You are using a lot of magic numbers for data sizes, so I would check that first. In particular, I doubt that sizeof(unsigned long) == 8 and sizeof(unsigned int) == 4 in all possible circumstances. Refer to your compiler's documentation, but you should still be wary, as this is very likely to change from one compiler/platform to another.
Check these bits first:
fread(&longGarbage, 8,1,archive); //file offset
You also might want to use the C++ <iostream> library instead of the C FILE* functions for reading. It would allow a much shorter version, because you wouldn't need to close the file in three different places.
From the other comments and the information provided, it seems the fix on the C++ side is to use explicitly sized types for the 8-byte fields: either __int64 in a Windows environment, or int64_t for cross-platform code.
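A small sketch of what the fixed-width reads might look like (the field names follow the original code; the file layout itself is an assumption):
#include <cstdint>
#include <cstdio>

// Sketch: read one entry's fixed-width fields with explicitly sized types, so
// fread() never writes more bytes than the destination variable actually holds.
bool read_entry_header(FILE* archive, uint64_t& fileOffset, uint32_t& nameLength)
{
    if (fread(&fileOffset, sizeof(fileOffset), 1, archive) != 1) return false; // exactly 8 bytes
    if (fread(&nameLength, sizeof(nameLength), 1, archive) != 1) return false; // exactly 4 bytes
    return true;
}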