C++ HEAP CORRUPTION DETECTED - CRT

I have a BYTE array as follows:
BYTE* m_pImage;
m_pImage = new BYTE[m_someLength];
And at various stages of my program data is copied to this array like so:
BYTE* pDestinationBuffer = m_pImage + m_imageOffset;
memcpy( pDestinationBuffer, (BYTE*)data, dataLength );
But when I go to delete my buffer like so:
delete[] m_pImage;
I am getting the following error:
HEAP CORRUPTION DETECTED - CRT detected that the application wrote to memory after the end of heap buffer
Now I have experimented with a simple program to try to replicate the error and help me investigate what's going on. I can see from the following that if I create an array of size 5, write past the end of it, and then try to delete it, I get the exact same error.
int* myArray = new int[5];
myArray[0] = 0;
myArray[1] = 1;
myArray[2] = 2;
myArray[3] = 3;
myArray[4] = 4;
myArray[5] = 5; // writing beyond array bounds
delete[] myArray;
Now my question is: how can I debug or find out what is overwriting my original buffer? I know that something is writing past the end of the buffer, so is there a way for Visual Studio to help me debug this easily?
The code above that copies to the data buffer is called several times before the delete, so it's hard to keep track of the m_pImage contents and the data copied to it. (It's about 2 MB worth of data.)

Now my question is how can I possibly debug or find out what is overwriting my original buffer?
I would recommend using assert() statements as much as possible. In this case, it would be:
#include <cassert>  // assert()

BYTE* pDestinationBuffer = m_pImage + m_imageOffset;
assert( dataLength + m_imageOffset <= m_someLength );
memcpy( pDestinationBuffer, (BYTE*)data, dataLength );
Then compile in debug mode and run. The benefit of this method is that you incur no overhead in release mode, where asserts are not evaluated.

On Windows you can use the Application Verifier to find this kind of overwrite.

Heap corruption is a tough bug to find. Most times, when the error is reported, the memory has already been corrupted by some upstream code that executed earlier. If you decide to use Application Verifier (and you should), I'd also encourage you to try GFlags and PageHeap. They are additional tools that let you set registry flags for debugging these types of problems.
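Complementing these tools, the CRT debug heap can check its own guard bytes on demand. Below is a minimal sketch, assuming a Visual Studio debug build (_DEBUG defined); the copyChunk wrapper and its parameters are illustrative, not the asker's real code. Wrap the suspect copies with _CrtCheckMemory() and binary-search the call sites: the first check that fails brackets the corrupting write.

#include <windows.h>   // BYTE
#include <cassert>
#include <crtdbg.h>    // _CrtCheckMemory (debug CRT)
#include <cstring>

// Hypothetical wrapper around one of the question's copies.
void copyChunk(BYTE* m_pImage, size_t m_imageOffset,
               const void* data, size_t dataLength)
{
    assert(_CrtCheckMemory());   // heap guard bytes intact before the copy
    memcpy(m_pImage + m_imageOffset, data, dataLength);
    assert(_CrtCheckMemory());   // fires here if this copy overran a block
}

In release builds _CrtCheckMemory() compiles to a no-op, so the checks cost nothing once the bug is found.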


malloc: *** error for object 0x10003b3c4: pointer being freed was not allocated *** set a breakpoint in malloc_error_break to debug

I'm new to programming in C++, so please be patient :) My problem is that the model (a DGVM) runs until the end, but the last message I receive is "malloc: *** error for object 0x10003b3c4: pointer being freed was not allocated *** set a breakpoint in malloc_error_break to debug". The debugger points to this:
clTreePop::~clTreePop() {free(Trees);}
The debugger points to free(Trees) and gives the message: "EXC_BAD_INSTRUCTION (code=EXC_i386_INVOP, subcode=0x0)". What am I doing wrong? Thanks.
The part of the code that may be important for this question:
void clTreePop::addFirstTree(double init_mass)
{
    clTree Tree(init_mass, -1., pop_size_, count_trees_);
    Trees = (clTree *) malloc(sizeof(clTree));
    Trees[0] = Tree;
    pop_size_++;
    new_born_++;
    count_trees_++;
    root_biomass_ += Tree.getBr();
    stem_biomass_ += Tree.getBS();
    leaf_biomass_ += Tree.getBl();
    canopy_area_ += Tree.getCanopyArea();
    gc_weighted_ += Tree.getGc();
    max_height_ += MyMax(max_height_, Tree.getHeight());
    basal_area_ += Tree.getStemArea();
    return;
}
First of all, in C++ you don't need to use malloc; allocation can be done in different, better, or at least easier ways. malloc is an old, low-level C (not C++) facility. Try using:
clTree *Trees = new clTree;
The code you copied does not show the situation fully, although what I can see is that instead of
Trees = (clTree *) malloc(sizeof(clTree));
you should use:
clTree *Trees = (clTree *) malloc(sizeof(clTree));
This way you create a pointer, to which you then attach the structure you allocated.
The error "EXC_BAD_INSTRUCTION (code=EXC_i386_INVOP, subcode=0x0)" indicates some kind of incompatibility between your code and the architecture of your computer (processor, system, etc.). I do not know the details, but I think it is caused by the mistake listed above.
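That said, the more idiomatic C++ fix is to let a container own the trees so that constructors and destructors run automatically and the manual free(Trees) disappears. A minimal sketch, where clTree is a stand-in (only its constructor signature is taken from the posted code):

#include <vector>

// Stand-in for the question's clTree; only the constructor
// signature is taken from the posted code.
struct clTree
{
    clTree(double init_mass, double parent, int pop_size, int id) {}
};

class clTreePop
{
public:
    void addFirstTree(double init_mass)
    {
        // Constructs the clTree in place: no malloc, no cast, and no
        // matching free() needed in the destructor.
        trees_.emplace_back(init_mass, -1.0, pop_size_, count_trees_);
        pop_size_++;
        count_trees_++;
    }
    // No ~clTreePop() required: the vector releases its elements itself.

private:
    std::vector<clTree> trees_;
    int pop_size_ = 0;
    int count_trees_ = 0;
};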

Real Time Data Store in Buffer - Visual Studio 2013 Professional, C++, Windows 7

Currently, I am working on a real-time interface in Visual Studio C++.
The problem I am facing is that while the buffer is being filled, the .exe stops responding at the point where data is stored in the buffer. I collect data at 130 Hz from a motion sensor. I have tried increasing the computer's virtual memory, but the problem was not solved.
Code Structure:
int main(){
    int no_data = 0;
    float x_abs;
    float y_abs;
    int sensorID = 0;
    while (1){
        // Define buffer
        char before_trial_output_data[][8 * 4][128] = { { { 0, }, }, };
        // Collect real-time data
        x_abs = abs(inchtocm * record[sensorID].y);
        y_abs = abs(inchtocm * record[sensorID].x);
        // Save in buffer
        sprintf(before_trial_output_data[no_data][sensorID], "%d %8.3f %8.3f\n", no_data, x_abs, y_abs);
        // Increment point
        no_data++;
        // Break while loop when Esc key is pressed
        if (GetAsyncKeyState(VK_ESCAPE)){
            break;
        }
    }
    // Save data to file
    printf("\nSaving results to 'RecordData.txt'..\n");
    FILE *fp3 = fopen("RecordData.dat", "w");
    for (i = 0; i < no_data - 1; i++)
        fprintf(fp3, output_data[i][sensorID]);
    fclose(fp3);
    printf("Complete...\n");
}
The code you posted doesn't show how you allocate more memory for your before_trial_output_data buffer when needed. Do you want me to guess? I guess you are using some flavor of realloc(), which needs to allocate an ever-increasing amount of memory, fragmenting your heap terribly.
However, in order for you to save that data to a file later on, it doesn't need to be in contiguous memory, so some kind of list will work much better than an array (see the sketch after this answer).
Also, there is no provision in your "pseudo" code for a 130 Hz reading rate; it processes records as fast as possible, and my guess is much faster.
Is your printf() call also "pseudo code"? Otherwise you are asking for trouble with a mismatch between the % format specifications and the number and type of parameters passed in.
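To illustrate the list-style alternative, here is a minimal sketch using std::deque, which grows in chunks rather than as one ever-growing contiguous block. The Record layout, the stubbed sensor read, and the crude pacing are assumptions, since the question does not show the acquisition API:

#include <cstdio>
#include <deque>
#include <windows.h>

// Hypothetical fixed-size record matching the "%d %8.3f %8.3f" line format.
struct Record { int no; float x_abs; float y_abs; };

int main()
{
    std::deque<Record> samples;   // no fixed bound, no huge reallocations
    int no_data = 0;
    while (!GetAsyncKeyState(VK_ESCAPE))
    {
        float x_abs = 0.0f, y_abs = 0.0f;   // read the motion sensor here
        Record r = { no_data++, x_abs, y_abs };
        samples.push_back(r);
        Sleep(1000 / 130);                  // crude pacing toward ~130 Hz
    }
    // Write everything out once at the end, as in the original code.
    FILE* fp = fopen("RecordData.dat", "w");
    for (const Record& r : samples)
        fprintf(fp, "%d %8.3f %8.3f\n", r.no, r.x_abs, r.y_abs);
    fclose(fp);
    return 0;
}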

Unzip buffer with large data length is crashing

This is the function I am using to unzip a buffer:
string unzipBuffer(size_t decryptedLength, unsigned char * decryptedData)
{
    z_stream stream;
    stream.zalloc = Z_NULL;
    stream.zfree = Z_NULL;
    stream.avail_in = decryptedLength;
    stream.next_in = (Bytef *)decryptedData;
    stream.total_out = 0;
    stream.avail_out = 0;
    size_t dataLength = decryptedLength * 1.5;
    char c[dataLength];
    if (inflateInit2(&stream, 47) == Z_OK)
    {
        int status = Z_OK;
        while (status == Z_OK)
        {
            if (stream.total_out >= dataLength)
            {
                dataLength += decryptedLength * 0.5;
            }
            stream.next_out = (Bytef *)c + stream.total_out;
            stream.avail_out = (uint)(dataLength - stream.total_out);
            status = inflate(&stream, Z_SYNC_FLUSH);
        }
        if (inflateEnd(&stream) == Z_OK)
        {
            if (status == Z_STREAM_END)
            {
                dataLength = stream.total_out;
            }
        }
    }
    std::string decryptedContentStr(c, c + dataLength);
    return decryptedContentStr;
}
And it was working fine until today, when I realized that it crashes with a large data buffer (e.g. decryptedLength = 342792) on this line:
status = inflate (&stream, Z_SYNC_FLUSH);
after one or two iterations. Can anyone help me, please?
If your code generally works correctly but fails for large data sets, this could be due to a stack overflow, as indicated by @StillLearning in his comment.
A usual (default) stack size is 1 MB. When your decryptedLength is 342,792, you try to allocate 514,188 bytes in the following line:
char c[dataLength];
Together with other allocations in your code (and finally in the inflate() function), this might already be too much. To overcome this problem, you should allocate the memory dynamically:
char* c = new char[dataLength];
If you do this, then please do not forget to release the allocated memory at the end of your unzipBuffer() function:
delete[] c;
If you forget to delete the allocated memory, you will have a memory leak.
Even if this doesn't (fully) solve your problem, you should make the change anyway, because for even larger data sets your code is sure to break due to the limited stack size.
In case you need to "reallocate" your dynamically allocated buffer in your while() loop, please take a look at this Q&A. Basically, you need a combination of new, std::copy, and delete[]. However, it would be more appropriate to exchange your char array for a std::vector<char> or even std::vector<Bytef>. Then you can enlarge the buffer easily using the resize() function, and you can access the buffer directly using &my_vector[0] in order to assign it to stream.next_out.
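A minimal sketch of that vector-based variant, assuming zlib and keeping the question's windowBits value of 47 (error handling trimmed for brevity):

#include <string>
#include <vector>
#include <zlib.h>

std::string unzipBuffer(size_t decryptedLength, unsigned char* decryptedData)
{
    z_stream stream = {};   // zero-initializes zalloc/zfree/opaque
    stream.next_in = reinterpret_cast<Bytef*>(decryptedData);
    stream.avail_in = static_cast<uInt>(decryptedLength);

    // Heap-allocated output buffer; grows safely instead of overflowing.
    std::vector<char> out(decryptedLength + decryptedLength / 2 + 64);

    if (inflateInit2(&stream, 47) != Z_OK)
        return std::string();

    int status = Z_OK;
    while (status == Z_OK)
    {
        if (stream.total_out >= out.size())
            out.resize(out.size() + decryptedLength / 2 + 64);   // enlarge
        stream.next_out = reinterpret_cast<Bytef*>(&out[stream.total_out]);
        stream.avail_out = static_cast<uInt>(out.size() - stream.total_out);
        status = inflate(&stream, Z_SYNC_FLUSH);
    }
    inflateEnd(&stream);

    // On success, status is Z_STREAM_END and total_out is the final size.
    return std::string(out.data(), out.data() + stream.total_out);
}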
c is not going to get bigger just because you increase dataLength. You are probably writing past the end of c because your initial guess of 1.5 times the compressed size was wrong, causing the fault.
(It might be a stack overflow as suggested in another answer here, but I think that 8 MB stack allocations are common nowadays.)

HeapWalk not working as expected in Release mode

So I used this example of the HeapWalk function to implement it in my app. I played around with it a bit and saw that when I added
HANDLE d = HeapAlloc(hHeap, 0, sizeof(int));
int* f = new(d) int;
after creating the heap, some new output would be logged:
Allocated block Data portion begins at: 0X037307E0
Size: 4 bytes
Overhead: 28 bytes
Region index: 0
So seeing this, I thought I could check entry.wFlags to see whether it was set to PROCESS_HEAP_ENTRY_BUSY, to keep track of how much allocated memory I'm using on the heap. So I have:
HeapLock(heap);
int totalUsedSpace = 0, totalSize = 0, largestFreeSpace = 0, largestCounter = 0;
PROCESS_HEAP_ENTRY entry;
entry.lpData = NULL;
while (HeapWalk(heap, &entry) != FALSE)
{
    int entrySize = entry.cbData + entry.cbOverhead;
    if ((entry.wFlags & PROCESS_HEAP_ENTRY_BUSY) != 0)
    {
        // We have allocated memory in this block
        totalUsedSpace += entrySize;
        largestCounter = 0;
    }
    else
    {
        // We do not have allocated memory in this block
        largestCounter += entrySize;
        if (largestCounter > largestFreeSpace)
        {
            // Save this value as we've found a bigger space
            largestFreeSpace = largestCounter;
        }
    }
    // Keep track of the total size of this heap
    totalSize += entrySize;
}
HeapUnlock(heap);
And this appears to work when built in Debug mode (totalSize and totalUsedSpace end up as different values). However, when I run it in Release mode, totalUsedSpace is always 0.
I stepped through it with the debugger in Release mode, and for each heap it loops three times; I get the following flags in entry.wFlags from calling HeapWalk:
1 (PROCESS_HEAP_REGION)
0
2 (PROCESS_HEAP_UNCOMMITTED_RANGE)
It then exits the while loop and GetLastError() returns ERROR_NO_MORE_ITEMS as expected.
From here I found that a flag value of 0 means "the committed block which is free, i.e. not being allocated or not being used as control structure."
Does anyone know why it does not work as intended when built in Release mode? I don't have much experience with how memory is handled by the computer, so I'm not sure where the error might be coming from. Searching on Google didn't turn up anything, so hopefully someone here knows.
UPDATE: I'm still looking into this myself. If I monitor the app using vmmap, I can see that the process has 9 heaps, but calling GetProcessHeaps returns 22 heaps. Also, none of the heap handles it returns matches the return value of GetProcessHeap() or _get_heap_handle(). It seems like GetProcessHeaps is not behaving as expected. Here is the code that gets the list of heaps:
// Count how many heaps there are and allocate enough space for them
DWORD numHeaps = GetProcessHeaps(0, NULL);
HANDLE* handles = new HANDLE[numHeaps];
// Get a handle to known heaps for us to compare against
HANDLE defaultHeap = GetProcessHeap();
HANDLE crtHeap = (HANDLE)_get_heap_handle();
// Get a list of handles to all the heaps
DWORD retVal = GetProcessHeaps(numHeaps, handles);
And retVal is the same value as numHeaps, which indicates that there was no error.
Application Verifier had been set up previously to do full page heap verification of my executable, and it was interfering with the heaps returned by GetProcessHeaps. I'd forgotten it was still enabled; it had been set up for a different issue several days earlier and then closed without clearing the tests. It wasn't happening in the debug build because the application builds to a different file name for debug builds.
We managed to detect this by adding a breakpoint and looking at the call stack of the thread. We could see that the Application Verifier DLL had been injected, and that let us know where to look.

C++ Program Crashes Due To Large Array Even Though It's On The Heap

I'm allocating memory for three very large arrays (N = 990000001). I know you have to allocate this on the heap because it's so large, but even when I do that, the program keeps crashing. Am I allocating the memory incorrectly, or does my computer simply not have enough (I should have plenty)? The way I'm allocating memory right now works perfectly fine when N is small. Any help is appreciated.
int main()
{
    double *Ue = new double[N];
    double *U = new double[N];
    double *X = new double[N];
    for (int i = 0; i < N; i++)
    {
        X[i] = X0 + dx*i;
        Ue[i] = U0/pow((X0*X[i]),alpha);
    }
    //Declare Variables
    double K1; double K2; double K3; double K4;
    //Set Initial Condition
    U[0] = U0;
    for (int i = 0; i < N-1; i++)
    {
        K1 = deriv(U[i],X[i]);
        K2 = deriv(U[i]+0.5*dx*K1,X[i]+0.5*dx);
        K3 = deriv(U[i]+0.5*dx*K2,X[i]+0.5*dx);
        K4 = deriv(U[i]+dx*K3,X[i+1]);
        U[i+1] = U[i] + dx/6*(K1 + 2*K2 + 2*K3 + K4);
    }
    return 0;
}
Your program allocates and uses about 24 GB of memory: three arrays of 990,000,001 doubles at 8 bytes each is roughly 3 × 7.9 GB.
If you run the program as a 32-bit process, this will throw std::bad_alloc, and your program will exit gracefully. (Theoretically there could be an overflow bug in your toolchain, but I think that is unlikely.)
If you run the program as a 64-bit process, you might get snagged by the OOM killer and your program will exit ungracefully. If you do have 24 GB of combined RAM + swap, you might instead churn through swap at the speed of your disk. (If you actually had 24 GB of RAM, it probably wouldn't crash at all, so we can rule this out.) If overcommit is disabled, you will get std::bad_alloc instead of the OOM killer. (This paragraph is rather Linux-specific, though other kernels are similar.)
Solution: Use less memory or buy more RAM.
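To illustrate the "use less memory" option: each RK4 step reads only U[i] and X[i], and X[i] can be recomputed on the fly, so nothing in the posted loop forces all three arrays to exist at once. A sketch with hypothetical values for X0, dx, and U0 and a stand-in deriv():

#include <cstdio>

// Hypothetical stand-in for the question's derivative function.
static double deriv(double u, double x) { return -u / x; }

int main()
{
    const long long N = 990000001LL;
    const double X0 = 1.0, dx = 1e-9, U0 = 1.0;   // assumed values

    double u = U0;   // keep only the current value, not ~24 GB of history
    for (long long i = 0; i < N - 1; i++)
    {
        double x = X0 + dx * i;   // recompute X[i] instead of storing it
        double K1 = deriv(u, x);
        double K2 = deriv(u + 0.5 * dx * K1, x + 0.5 * dx);
        double K3 = deriv(u + 0.5 * dx * K2, x + 0.5 * dx);
        double K4 = deriv(u + dx * K3, x + dx);
        u += dx / 6 * (K1 + 2 * K2 + 2 * K3 + K4);
        // Stream u to a file here if every value is needed later.
    }
    std::printf("U[N-1] = %f\n", u);
    return 0;
}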
If you are on Windows, you may find this information useful: Memory Limits for Applications on Windows -
Note that the limit on static and stack data is the same in both 32-bit and 64-bit variants. This is due to the format of the Windows Portable Executable (PE) file type, which is used to describe EXEs and DLLs as laid out by the linker. It has 32-bit fields for image section offsets and lengths and was not extended for 64-bit variants of Windows. As on 32-bit Windows, static data and stack share the same first 2GB of address space.
Then, the only real improvement is in dynamic data - this is memory that is allocated during program execution. In C or C++ this is usually done with malloc or new.
64-bit limits:
  Static data: 2 GB
  Dynamic data: 8 TB
  Stack data: 1 GB
(The stack size is set by the linker; the default is 1 MB. This can be increased using the Linker property System > Stack Reserve Size.)
Allocation of a single array "should be able to be as large as the OS is willing to handle" (i.e. it is limited by RAM and fragmentation).