I'm experimenting with http://www.capstone-engine.org on macOS with a macOS x86_64 binary. It more or less works, but I have two concerns.
I'm loading a test dylib:
[self custom_logging:[NSString stringWithFormat:@"Module Path:%@",clientPath]];
NSMutableData *ModuleNSDATA = [NSMutableData dataWithContentsOfFile:clientPath];
[self custom_logging:[NSString stringWithFormat:@"Client Module Size: %lu MB",(ModuleNSDATA.length/1024/1024)]];
[ModuleNSDATA replaceBytesInRange:NSMakeRange(0, 20752) withBytes:NULL length:0];
uint8_t *bytes = (uint8_t*)[ModuleNSDATA bytes];
long size = [ModuleNSDATA length]/sizeof(uint8_t);
[self custom_logging:[NSString stringWithFormat:@"UInt8_t array size: %lu",size]];
ModuleASM = [NSString stringWithCString:disassembly(bytes,size,0x5110).c_str() encoding:[NSString defaultCStringEncoding]];
From my research, it seems I need to trim the first bytes of the binary, removing the header and metadata, until the real instructions begin. However, I'm not sure whether Capstone provides any API for this, or whether I need to scan by byte patterns and locate the address of the first instruction myself.
As a simple workaround I found a "safe" address that is sure to contain instructions in most modules I will load, but I would like to apply a proper solution.
Using that workaround I've successfully loaded and disassembled part of the module's code. Sadly, however, cs_disasm returns mostly no more than 5000-6000 instructions, which is confusing: it seems to break on ordinary instructions that it shouldn't break on. I'm not sure what I'm doing wrong. The module contains more than 15 MB of code, so there are far more than 5k instructions to disassemble.
Below is my function, based on the example in the Capstone docs:
string disassembly(uint8_t *bytearray, long size, uint64_t startAddress){
    csh handle;
    cs_insn *insn;
    size_t count;
    string output;
    if (cs_open(CS_ARCH_X86, CS_MODE_64, &handle) == CS_ERR_OK){
        count = cs_disasm(handle, bytearray, size, startAddress, 0, &insn);
        printf("\nCOUNT:%lu",count);
        if (count > 0) {
            for (size_t j = 0; j < count; j++) {
                char buffer[512];
                snprintf(buffer, sizeof(buffer), "0x%" PRIx64 ":\t%s\t\t%s\n",
                         insn[j].address, insn[j].mnemonic, insn[j].op_str);
                output += buffer;
            }
            cs_free(insn, count);
        } else {
            output = "ERROR: Failed to disassemble given code!\n";
        }
        cs_close(&handle); // only close a handle that was actually opened
    }
    return output;
}
I would really appreciate any help with this.
Warmly,
David
The answer is to simply use SKIPDATA mode. Capstone is great, but its docs are very bad.
Working example below. This mode is still quite buggy, so the detection of data sections should preferably be done by custom code (see the callback sketch after the example). For me it works reliably only with small chunks of code; however, it does disassemble all the way to the end of the file.
string disassembly(uint8_t *bytearray, long size, uint64_t startAddress){
    csh handle;
    cs_insn *insn;
    size_t count;
    string output;
    cs_opt_skipdata skipdata = {
        .mnemonic = "db",
    };
    if (cs_open(CS_ARCH_X86, CS_MODE_64, &handle) == CS_ERR_OK){
        cs_option(handle, CS_OPT_DETAIL, CS_OPT_ON);
        cs_option(handle, CS_OPT_SKIPDATA, CS_OPT_ON);
        cs_option(handle, CS_OPT_SKIPDATA_SETUP, (size_t)&skipdata);
        count = cs_disasm(handle, bytearray, size, startAddress, 0, &insn);
        if (count > 0) {
            for (size_t j = 0; j < count; j++) {
                char buffer[512];
                snprintf(buffer, sizeof(buffer), "0x%" PRIx64 ":\t%s\t\t%s\n",
                         insn[j].address, insn[j].mnemonic, insn[j].op_str);
                output += buffer;
            }
            cs_free(insn, count);
        } else {
            output = "ERROR: Failed to disassemble given code!\n";
        }
        cs_close(&handle);
    }
    return output;
}
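Since the built-in heuristics are the buggy part, you can plug your own data detection into the same option. Below is a minimal sketch of my own (not from the Capstone docs) of a custom callback wired into cs_opt_skipdata; the padding heuristic is just an assumption to illustrate the mechanism:
#include <capstone/capstone.h>

// Called whenever Capstone hits bytes it cannot decode. Return how many
// bytes to skip before retrying; returning 0 stops disassembly.
static size_t my_skipdata_cb(const uint8_t *code, size_t code_size,
                             size_t offset, void *user_data)
{
    // Illustrative heuristic: treat runs of 0x00 / 0xCC (common padding)
    // as data and jump over the whole run at once.
    size_t skipped = 0;
    while (offset + skipped < code_size &&
           (code[offset + skipped] == 0x00 || code[offset + skipped] == 0xCC))
        skipped++;
    return skipped ? skipped : 1; // always make progress
}

// Hook it up exactly like the example above:
// cs_opt_skipdata skipdata = { "db", my_skipdata_cb, NULL };
// cs_option(handle, CS_OPT_SKIPDATA, CS_OPT_ON);
// cs_option(handle, CS_OPT_SKIPDATA_SETUP, (size_t)&skipdata);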
Shame to those trolls who down-voted this question.
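As for the original first concern (where the real instructions start): Capstone only disassembles raw bytes and has no notion of Mach-O headers, so you have to locate the __TEXT,__text section yourself. A rough sketch of my own, assuming a thin 64-bit Mach-O (no fat-binary or 32-bit handling):
#include <mach-o/loader.h>
#include <cstring>
#include <cstdint>

// Walk the load commands and find the __TEXT,__text section,
// which holds the first real instructions.
bool find_text_section(const uint8_t *file, uint32_t &offset, uint64_t &addr, uint64_t &size)
{
    const mach_header_64 *mh = (const mach_header_64 *)file;
    if (mh->magic != MH_MAGIC_64) return false;
    const uint8_t *p = file + sizeof(mach_header_64);
    for (uint32_t i = 0; i < mh->ncmds; i++) {
        const load_command *lc = (const load_command *)p;
        if (lc->cmd == LC_SEGMENT_64) {
            const segment_command_64 *seg = (const segment_command_64 *)lc;
            const section_64 *sec = (const section_64 *)(seg + 1);
            for (uint32_t s = 0; s < seg->nsects; s++, sec++) {
                if (strncmp(sec->segname, "__TEXT", 16) == 0 &&
                    strncmp(sec->sectname, "__text", 16) == 0) {
                    offset = sec->offset; // file offset of the first instruction
                    addr = sec->addr;     // its virtual address
                    size = sec->size;     // size of the code section
                    return true;
                }
            }
        }
        p += lc->cmdsize;
    }
    return false;
}
With that you could call disassembly(bytes + offset, size, addr) instead of trimming a hard-coded 20752 bytes.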
My goal is to read large chunks of executable memory from a target app.
ReadProcessMemory() sometimes fails, but that is okay; I can still examine the rest of the bytes that were read.
I don't modify anything in the target application, such as values.
My problem is that the target app crashes after a minute or so, or when certain reallocations happen in it.
I went to extremes like reading without VirtualProtectEx() so as not to modify even the security attributes of the memory regions in question.
I'm curious what could cause a target application to crash after reading from its memory, without modifying values or access rights.
Side note: the memory in question is probably being accessed simultaneously by the target application and by my application. (From the target app's perspective it is being read, executed and written.)
You can take a look at my code here:
UINT64 pageNum = 0;
BYTE page[4096];
for (UINT64 i = start; i < end; i += 0x1000)
{
    ReadProcessMemory(qtHandle, (void*)i, page, sizeof(page), &bytesRead);
    foundCode = findCode(page, pageNum);
    if (foundCode != 0)
    {
        foundCode += start - 11;
        break;
    }
    pageNum++;
}
cout << hex << foundCode << endl;
CloseHandle(qtHandle);
return 0;
}
UINT64 findCode(BYTE* pg, UINT64 pageNum)
{
    // asm2 is a global array of bytes; pass its length explicitly, because
    // inside findPattern sizeof(pattern) would only measure the pointer
    for (size_t i = 0; i + sizeof(asm2) <= 4096; i++) // stay inside the page
    {
        if (findPattern(asm2, sizeof(asm2), pg, i))
        {
            return (pageNum * 4096 + i);
        }
    }
    return 0;
}
bool findPattern(BYTE* pattern, size_t patternLen, BYTE* page, size_t index)
{
    for (size_t i = 0; i < patternLen; i++)
    {
        if (page[index + i] != pattern[i])
        {
            return false;
        }
    }
    return true;
}
ReadProcessMemory() cannot cause the target program to crash.
An anticheat/antidebug mechanism might be detecting you and terminating the application.
If you use VirtualProtectEx() to change permissions, that can definitely cause a crash.
We would need to see more code to tell you what the problem is.
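For what it's worth, you can avoid touching protection entirely: query each page with VirtualQueryEx() and skip anything that isn't committed and readable. A sketch under those assumptions (readRange is a made-up helper name):
#include <windows.h>
#include <vector>

// Read [start, end) page by page; pages that are not committed/readable are
// skipped instead of having their protection changed with VirtualProtectEx().
std::vector<BYTE> readRange(HANDLE hProc, UINT64 start, UINT64 end)
{
    std::vector<BYTE> out;
    BYTE page[4096];
    for (UINT64 addr = start; addr < end; addr += sizeof(page))
    {
        MEMORY_BASIC_INFORMATION mbi;
        if (VirtualQueryEx(hProc, (LPCVOID)addr, &mbi, sizeof(mbi)) == 0)
            continue;
        if (mbi.State != MEM_COMMIT || (mbi.Protect & (PAGE_NOACCESS | PAGE_GUARD)))
            continue;
        SIZE_T bytesRead = 0;
        if (ReadProcessMemory(hProc, (LPCVOID)addr, page, sizeof(page), &bytesRead))
            out.insert(out.end(), page, page + bytesRead);
        // a failed read here is fine; just move on to the next page
    }
    return out;
}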
It was the usage of VirtualProtectEx() that caused the problem.
I have a program that generates files containing random distributions of the characters A-Z. I have written a method that reads these files (and counts each character) using fread() with different buffer sizes, in an attempt to determine the optimal block size for reads. Here is the method:
int get_histogram(FILE * fp, long *hist, int block_size, long *milliseconds, long *filelen)
{
    char *buffer = new char[block_size];
    bzero(buffer, block_size);

    struct timeb t;
    ftime(&t);
    long start_in_ms = t.time * 1000 + t.millitm;

    size_t bytes_read = 0;
    while (!feof(fp))
    {
        bytes_read += fread(buffer, 1, block_size, fp);
        if (ferror (fp))
        {
            return -1;
        }
        int i;
        for (i = 0; i < block_size; i++)
        {
            int j;
            for (j = 0; j < 26; j++)
            {
                if (buffer[i] == 'A' + j)
                {
                    hist[j]++;
                }
            }
        }
    }
    ftime(&t);
    long end_in_ms = t.time * 1000 + t.millitm;
    *milliseconds = end_in_ms - start_in_ms;
    *filelen = bytes_read;
    return 0;
}
However, when I plot bytes/second vs. block size (buffer size) using block sizes from 2 to 2^20, I get an optimal block size of 4 bytes -- which just can't be correct. Something must be wrong with my code, but I can't find it.
Any advice is appreciated.
Regards.
EDIT:
The point of this exercise is to demonstrate the optimal buffer size by recording the read times (plus computation time) for different buffer sizes. The file pointer is opened and closed by the calling code.
There are many bugs in this code:
It uses new[], which is C++.
It doesn't free the allocated memory.
It always loops over block_size bytes of input, not bytes_read as returned by fread().
Also, the actual histogram code is rather inefficient, since it seems to loop over each character to determine which character it is.
UPDATE: Removed claim that using feof() before I/O is wrong, since that wasn't true. Thanks to Eric for pointing this out in a comment.
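Putting those fixes together, a minimal corrected sketch (same signature as the original; the substantive changes are driving the loop off fread()'s return value, indexing hist directly, and releasing the buffer):
#include <cstdio>
#include <sys/timeb.h>

int get_histogram(FILE *fp, long *hist, int block_size, long *milliseconds, long *filelen)
{
    char *buffer = new char[block_size];
    struct timeb t;
    ftime(&t);
    long start_in_ms = t.time * 1000 + t.millitm;

    long total = 0;
    size_t n;
    // Loop on fread()'s return value and scan only the bytes actually read,
    // so a short final block isn't processed as stale data.
    while ((n = fread(buffer, 1, block_size, fp)) > 0)
    {
        for (size_t i = 0; i < n; i++)
        {
            if (buffer[i] >= 'A' && buffer[i] <= 'Z')
                hist[buffer[i] - 'A']++; // direct index, no inner loop over the alphabet
        }
        total += n;
    }
    delete[] buffer; // plug the leak
    if (ferror(fp))
        return -1;

    ftime(&t);
    long end_in_ms = t.time * 1000 + t.millitm;
    *milliseconds = end_in_ms - start_in_ms;
    *filelen = total;
    return 0;
}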
You're not stating what platform you're running this on, or what compile-time parameters you use.
Of course, fread() involves some overhead in leaving user mode and returning. On the other hand, instead of updating the hist[] entry directly, you're looping through the alphabet. This is unnecessary and, without optimization, causes some overhead per byte.
I'd re-test this with hist[buffer[i] - 'A']++ or something similar.
Typically, the best timing would be achieved if your buffer size equals the system's buffer size for the given media.
I just ran into a free(): invalid next size (fast) problem while writing a C++ program, and unfortunately I failed to figure out why it happens. The code is given below.
bool not_corrupt(struct packet *pkt, int size)
{
    if (!size) return false;

    bool result = true;
    char *exp_checksum = (char*)malloc(size * sizeof(char));
    char *rec_checksum = (char*)malloc(size * sizeof(char));
    char *rec_data = (char*)malloc(size * sizeof(char));

    //memcpy(rec_checksum, pkt->data+HEADER_SIZE+SEQ_SIZE+DATA_SIZE, size);
    //memcpy(rec_data, pkt->data+HEADER_SIZE+SEQ_SIZE, size);
    for (int i = 0; i < size; i++) {
        rec_checksum[i] = pkt->data[HEADER_SIZE+SEQ_SIZE+DATA_SIZE+i];
        rec_data[i] = pkt->data[HEADER_SIZE+SEQ_SIZE+i];
    }

    do_checksum(exp_checksum, rec_data, DATA_SIZE);

    for (int i = 0; i < size; i++) {
        if (exp_checksum[i] != rec_checksum[i]) {
            result = false;
            break;
        }
    }

    free(exp_checksum);
    free(rec_checksum);
    free(rec_data);
    return result;
}
The macros used are:
#define RDT_PKTSIZE 128
#define SEQ_SIZE 4
#define HEADER_SIZE 1
#define DATA_SIZE ((RDT_PKTSIZE - HEADER_SIZE - SEQ_SIZE) / 2)
The struct used is:
struct packet {
    char data[RDT_PKTSIZE];
};
This piece of code doesn't go wrong every time; sometimes it crashes with free(): invalid next size (fast) at the free(exp_checksum); line.
What's even worse, sometimes the contents of rec_checksum are simply not equal to the bytes at pkt->data[HEADER_SIZE+SEQ_SIZE+DATA_SIZE], which should be the same according to the watch expressions in my debugging tools. I tried both the memcpy and the for-loop approaches, but the problem remains.
I don't quite understand why this would happen. I would be very thankful if anyone could explain this to me.
Edit:
Here's the do_checksum() method, which is very simple:
void do_checksum(char* checksum, char* data, int size)
{
    for (int i = 0; i < size; i++)
    {
        checksum[i] = ~data[i];
    }
}
Edit 2:
Thanks, all.
I switched another part of my code from an STL queue to an STL vector, and then everything worked fine.
But I still haven't figured out why. I am sure that I never pop from an empty queue.
The error you report is indicative of heap corruption. These can be hard to track down and tools like valgrind can be extremely helpful. Heap corruptions are often hard to debug with a simple debugger because the runtime error often occurs long after the actual corruption.
That said, the most obvious potential cause of your heap corruption, given the code posted so far, is if DATA_SIZE is greater than size. If that occurs then do_checksum will write beyond the end of exp_checksum.
Three immediate suggestions:
Check for size <= 0 (instead of "!size")
Check for size >= DATA_SIZE
Check for malloc returning NULL
Have you tried Valgrind?
Also, make sure to never send more than RDT_PKTSIZE as size to not_corrupt()
bool not_corrupt(struct packet *pkt, int size)
{
    if (!size) return false;
    if (size > RDT_PKTSIZE) return false;

    /* ... */
Valgrind is good ... but validating all your inputs and checking all error conditions is even better.
Stepping through the code in the debugger isn't a bad idea, either.
I would also call "do_checksum (size)" (your actual size), instead of DATA_SIZE (presumably "maximum size").
DATA_SIZE is a macro defining the max length in my program, so size should be less than DATA_SIZE.
Even if that is true, your code only allocates enough memory to hold size characters,
so you should call
do_checksum(exp_checksum, rec_data, size);
and, if you do not want to use std::string (which is fine), you should switch from malloc/free to new/delete, since this is C++.
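Putting those suggestions together, a sketch of the repaired function (same macros, struct and do_checksum as in the question; the new/delete switch is just the C++-style choice suggested above):
bool not_corrupt(struct packet *pkt, int size)
{
    // validate the input instead of only testing for zero
    if (size <= 0 || size > DATA_SIZE) return false;

    bool result = true;
    char *exp_checksum = new char[size];
    char *rec_checksum = new char[size];
    char *rec_data = new char[size];

    for (int i = 0; i < size; i++) {
        rec_checksum[i] = pkt->data[HEADER_SIZE + SEQ_SIZE + DATA_SIZE + i];
        rec_data[i] = pkt->data[HEADER_SIZE + SEQ_SIZE + i];
    }

    // checksum only the bytes that were actually copied, so nothing is
    // written past the end of exp_checksum
    do_checksum(exp_checksum, rec_data, size);

    for (int i = 0; i < size; i++) {
        if (exp_checksum[i] != rec_checksum[i]) {
            result = false;
            break;
        }
    }

    delete[] exp_checksum;
    delete[] rec_checksum;
    delete[] rec_data;
    return result;
}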
Problem solved, thank you all for the help
I've got a bit of a problem here. It's not something that's blowing my program up, but it bothers me that I can't fix it. I have a function that reads some data from a file; at the end of its execution, the stack around the variable longGarbage is corrupted. I've looked around a bit and found that a possible cause is writing to invalid memory. I cleaned up some memory leaks I had, but the problem persists. What confuses me is that it happens when the function finishes executing, so it appears to happen when the variable goes out of scope. Here's the code...
CHCF::CHCF(std::string fileName)
    : PAKID("HVST84838672")
{
    FILE * archive = fopen(fileName.c_str(), "rb");
    std::string strGarbage = "";
    unsigned int intGarbage = 0;
    unsigned long longGarbage = 0;
    unsigned char * data = 0;
    char charGarbage = '0';

    if (!archive)
    {
        return; // nothing to close if fopen failed
    }
    for (int i = 0; i < 12; i++)
    {
        fread(&charGarbage, 1, 1, archive);
        strGarbage += charGarbage;
    }
    if (strGarbage != PAKID)
    {
        fclose(archive);
        throw "Incorrect archive format";
    }
    strGarbage = "";

    fread(&_gameID, sizeof(_gameID), 1, archive);
    fread(&_fileCount, sizeof(_fileCount), 1, archive);

    for (int i = 0; i < _fileCount; i++)
    {
        fread(&longGarbage, 8, 1, archive); //file offset
        fread(&intGarbage, 4, 1, archive);  //fileName length
        for (int i = 0; i < intGarbage; i++)
        {
            fread(&charGarbage, 1, 1, archive);
            strGarbage += charGarbage;
        }
        fread(&longGarbage, 8, 1, archive); //fileSize
        fread(&intGarbage, 4, 1, archive);  //fileType
        data = new unsigned char[longGarbage];
        for (long i = 0; i < longGarbage; i++)
        {
            fread(&charGarbage, 1, 1, archive);
            data[i] = charGarbage;
        }
        switch ((FILETYPES)intGarbage)
        {
        case MAP:
            _maps.append(strGarbage, new CFileData(strGarbage, FILETYPES::MAP, data, longGarbage));
            break;
        default:
            break;
        }
        delete [] data;
        data = 0;
        strGarbage.clear();
        longGarbage = 0;
    }
    fclose(archive);
} //error happens here
Here is the CFileData constructor:
CFileData::CFileData(std::string fileName, FILETYPES type, unsigned char *data, long fileSize)
{
    _fileName = fileName;
    _type = type;
    _data = new unsigned char[fileSize];
    for (int i = 0; i < fileSize; i++)
        _data[i] = data[i];
}
Might I suggest std::vector instead of calling new and delete manually? Your code is not exception-safe -- you leak if an exception is thrown.
fread(&longGarbage, 8, 1, archive); //fileSize
Are you sure sizeof(long) is 8? I suspect it's 4. On Linux boxes it's sometimes 8, but almost everywhere else sizeof(long) is 4 and sizeof(long long) is 8.
What about any constructors on members of this class? They can corrupt the stack too.
What's happening is that something is writing to memory around or over the location of longGarbage, which is causing the corruption.
You don't say which development environment you are using. One way to diagnose this is to set a data breakpoint that triggers when a specific memory location changes: choose a location that overlaps the corrupted area and wait for it to fire unexpectedly.
Another way is to examine code that changes memory around or over longGarbage. That could be almost anything, of course, but likely candidates are modifications to data, intGarbage and longGarbage itself.
We can narrow things down further because we can (usually) be fairly sure the assignment operator itself is safe. Code like data = new... isn't likely to be the culprit, so really we need to focus on memory changes that involve taking the address of data, intGarbage or longGarbage -- in particular, changes that write more bytes than they should.
Several others have already pointed out that a long is probably not eight bytes in length. If you pass the wrong length to fread, the extra bytes retrieved have to go somewhere.
You are using a lot of magic numbers for data sizes, so I would check those first. In particular, I doubt that sizeof(unsigned long) == 8 and sizeof(unsigned int) == 4 in all possible circumstances. Refer to your compiler's documentation, but stay wary: this is very likely to change from one compiler/platform to another.
Check for these bits:
fread(&longGarbage, 8,1,archive); //file offset
You also might want to use the C++ iostream library instead of the C FILE* stuff for reading. It would allow for a much shorter version, because you wouldn't need to close the file in three places.
From the other comments and the information provided, it seems the issue is on the C++ side: you should use either __int64 for a Windows environment, or int64_t for cross-platform code.
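To make that concrete, a minimal sketch (read_entry_header is a hypothetical helper, not from the original code) that lets sizeof drive fread, so the read size always matches the variable:
#include <cstdint>
#include <cstdio>

// uint64_t/uint32_t are 8 and 4 bytes on every platform, unlike
// unsigned long, which is commonly 4 bytes on 32-bit and Windows targets.
void read_entry_header(FILE *archive, uint64_t &fileOffset, uint32_t &nameLength)
{
    fread(&fileOffset, sizeof(fileOffset), 1, archive); // always 8 bytes
    fread(&nameLength, sizeof(nameLength), 1, archive); // always 4 bytes
}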
I'm a fairly new programmer, but I consider my google-fu quite competent and I've spent several hours searching.
I've got a simple SDL application that reads from a binary file (2 bytes as a magic number, then 5 bytes per "tile").
It then displays each tile in the buffer; the bytes determine the x, y, id, passability and such.
So it's just level loading, really.
It runs fine on any Windows computer (tested Windows Server 2008, 7/64 and 7/32),
but when I compile it on Linux, it displays random tiles in random positions.
I'd be tempted to say it's reading from the wrong portion of RAM, but I implemented the magic number so it would return an error if the first 2 bytes were wrong.
I'd love to figure this out myself, but it's bugging me to hell now, and I can't progress much further with it unless I can program on the move (my laptop runs Linux).
I'm using g++ on Linux, mingw32-g++ on Windows.
bool loadlevel(int level_number)
{
    int length;
    std::string filename;
    filename = "Levels/level";
    filename += level_number + 48;
    filename += ".lvl";

    std::ifstream level;
    level.open(filename.c_str(), std::ios::in | std::ios::binary);
    level.seekg(0, std::ios::end);
    length = level.tellg();
    level.seekg(0, std::ios::beg);

    char buffer[length];
    level.read(buffer, length);

    if (buffer[0] == 0x49 && buffer[1] == 0x14)
    {
        char tile_buffer[BYTES_PER_TILE];
        int buffer_place = 1;
        while (buffer_place < length)
        {
            for (int i = 1; i <= BYTES_PER_TILE; i++)
            {
                tile_buffer[i] = buffer[buffer_place + 1];
                buffer_place++;
            }
            apply_surface(tile_buffer[1], tile_buffer[2], tiles, screen, &clip[tile_buffer[3]]);
        }
    }
    else
    {
        // File is invalid
        return false;
    }
    level.close();
    return true;
}
Thanks in advance!
Your array handling is incorrect.
Array indexing in C/C++ begins at 0.
You have defined tile_buffer to be an array sized BYTES_PER_TILE.
If BYTES_PER_TILE were 5, your array would have elements tile_buffer[0] to tile_buffer[4].
In your inner for loop you loop from 1 to 5, so a buffer overflow will occur.
I don't know if this is the cause of your problem, but it certainly won't help matters.
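For illustration, the inner loop rewritten with 0-based indexing might look like this (assuming the two magic bytes live at buffer[0] and buffer[1], so tile data starts at index 2):
int buffer_place = 2; // first tile starts right after the 2-byte magic number
while (buffer_place + BYTES_PER_TILE <= length)
{
    char tile_buffer[BYTES_PER_TILE];
    for (int i = 0; i < BYTES_PER_TILE; i++)
        tile_buffer[i] = buffer[buffer_place++]; // fills indices 0..BYTES_PER_TILE-1
    apply_surface(tile_buffer[0], tile_buffer[1], tiles, screen, &clip[tile_buffer[2]]);
}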
This is probably not an answer, but the 1-based array handling and the unneeded copying make my head hurt.
Why not just do something along these lines?
if ((length >= 2 + BYTES_PER_TILE) && (buf[0] == CONST1) && (buf[1] == CONST2)) {
    // iterate whole tiles; the bound keeps the last tile fully inside the buffer
    for (char *tile = &buf[2]; tile + BYTES_PER_TILE <= buf + length; tile += BYTES_PER_TILE) {
        apply_surface(tile[0], tile[1], tiles, screen, &clip[tile[2]]);
    }
}