I have code like this:
char *verboseBuf = NULL;
if (something) {
    for (a loop) {
        for (another loop) {
            if (something else) {
                if (curl execution) {
                    if (fail) {
                        verboseBuf = (char *) malloc(sizeof(char) * (currSize + 1));
                        fread(verboseBuf, 1, currSize, verboseFd);
                        verboseBuf[currSize + 1] = '\0';
                        string verbose = verboseBuf;
                        free(verboseBuf);
                    }
                }
            }
        }
    }
}
The only place that I use verboseBuf is inside that final if block, but I get
*** glibc detected *** ./test: double free or corruption (!prev): 0x13c13290 ***
How can it be freed twice if I only use it in one place, and free it every time I use it?
I tried using addr2line to find the place where it was freed previously, but all I got was ??:0.
This line is writing one byte past the end of your buffer.
verboseBuf[currSize + 1] = '\0';
That message doesn't specifically mean that you freed something twice. It means that glibc detected heap corruption; freeing things twice is one common cause of that, but not the only one.
In this case, the line
verboseBuf[currSize + 1] = '\0';
is overflowing the end of your buffer, corrupting whatever bookkeeping data the allocator stored after it. Remove the +1 and it should work.
Change verboseBuf[currSize + 1] = '\0'; to verboseBuf[currSize] = '\0';
I'm new to programming in C++, please be patient :) My problem is that the model (a DGVM) runs until the end, but the last message I receive is "malloc: *** error for object 0x10003b3c4: pointer being freed was not allocated *** set a breakpoint in malloc_error_break to debug". The debugger points to this:
clTreePop::~clTreePop() {free(Trees);}
It points at the free(Trees) call and gives the message "EXC_BAD_INSTRUCTION(code=EXC_i386_INVOP, subcode=0x0)". What am I doing wrong? Thanks.
The part of the code that may be important for this question:
void clTreePop::addFirstTree(double init_mass)
{
    clTree Tree(init_mass, -1., pop_size_, count_trees_);
    Trees = (clTree *) malloc(sizeof(clTree));
    Trees[0] = Tree;
    pop_size_++;
    new_born_++;
    count_trees_++;
    root_biomass_ += Tree.getBr();
    stem_biomass_ += Tree.getBS();
    leaf_biomass_ += Tree.getBl();
    canopy_area_ += Tree.getCanopyArea();
    gc_weighted_ += Tree.getGc();
    max_height_ += MyMax(max_height_, Tree.getHeight());
    basal_area_ += Tree.getStemArea();
    return;
}
First of all, in C++ you don't need to use malloc, as the allocation can be done in different, better, or at least easier ways. malloc is an old, low-level C (not C++) mechanism. Try using
clTree *Trees = new clTree;
The code you copied does not show the situation fully, although what I can see is that instead of
Trees = (clTree *) malloc(sizeof(clTree));
you should use:
clTree *Trees = (clTree *) malloc(sizeof(clTree));
This way you create a pointer to which you then attach the structure you allocated.
The error "EXC_BAD_INSTRUCTION(code=EXC_i386_INVOP, subcode=0x0)" indicates some kind of incompatibility between your code and the architecture of your computer (processor, system, etc.). I do not know the details, but I think it is caused by the mistake listed above.
This is the function I am using to unzip a buffer.
string unzipBuffer(size_t decryptedLength, unsigned char *decryptedData)
{
    z_stream stream;
    stream.zalloc = Z_NULL;
    stream.zfree = Z_NULL;
    stream.avail_in = decryptedLength;
    stream.next_in = (Bytef *)decryptedData;
    stream.total_out = 0;
    stream.avail_out = 0;
    size_t dataLength = decryptedLength * 1.5;
    char c[dataLength];
    if (inflateInit2(&stream, 47) == Z_OK)
    {
        int status = Z_OK;
        while (status == Z_OK)
        {
            if (stream.total_out >= dataLength)
            {
                dataLength += decryptedLength * 0.5;
            }
            stream.next_out = (Bytef *)c + stream.total_out;
            stream.avail_out = (uint)(dataLength - stream.total_out);
            status = inflate(&stream, Z_SYNC_FLUSH);
        }
        if (inflateEnd(&stream) == Z_OK)
        {
            if (status == Z_STREAM_END)
            {
                dataLength = stream.total_out;
            }
        }
    }
    std::string decryptedContentStr(c, c + dataLength);
    return decryptedContentStr;
}
And it was working fine until today, when I realized that it crashes with a large data buffer (e.g. decryptedLength: 342792) on this line:
status = inflate (&stream, Z_SYNC_FLUSH);
after one or two iterations. Can anyone help me, please?
If your code generally works correctly but fails for large data sets, then this could be due to a stack overflow, as indicated by @StillLearning in his comment.
A usual (default) stack size is 1 MB. When your decryptedLength is 342,792, you try to allocate 514,188 bytes in the following line:
char c[dataLength];
Together with other allocations in your code (and finally in the inflate() function), this might already be too much. To overcome this problem, you should allocate the memory dynamically:
char* c = new char[dataLength];
If you do this, then please do not forget to release the allocated memory at the end of your unzipBuffer() function:
delete[] c;
If you forget to delete the allocated memory, then you will have a memory leak.
In case this doesn't (fully) solve your problem, you should do it anyway, because for even larger data sets your code will break for sure due to the limited size of the stack.
In case you need to "reallocate" your dynamically allocated buffer in your while() loop, please take a look at this Q&A. Basically you need a combination of new, std::copy, and delete[]. However, it would be more appropriate to exchange your char array for a std::vector<char> or even std::vector<Bytef>. Then you can enlarge the buffer easily by using its resize() function, and you can access the vector's buffer directly with &my_vector[0] in order to assign it to stream.next_out.
c is not going to get bigger just because you increase dataLength. You are probably writing past the end of c because your initial guess of 1.5 times the compressed size was wrong, causing the fault.
(It might be a stack overflow as suggested in another answer here, but I think that 8 MB stack allocations are common nowadays.)
I have a BYTE array as follows:
BYTE* m_pImage;
m_pImage = new BYTE[m_someLength];
And at various stages of my program data is copied to this array like so:
BYTE* pDestinationBuffer = m_pImage + m_imageOffset;
memcpy( pDestinationBuffer, (BYTE*)data, dataLength );
But when I go to delete my buffer like so:
delete[] m_pImage;
I am getting the
HEAP CORRUPTION DETECTED - CRT detected that the application wrote to memory after the end of heap buffer
Now I have experimented with a simple program to try to replicate the error, in order to help me investigate what's going on. I see from the following that if I create an array of size 5 but write past the end of it and then try to delete it, I get the exact same error.
int* myArray = new int[5];
myArray[0] = 0;
myArray[1] = 1;
myArray[2] = 2;
myArray[3] = 3;
myArray[4] = 4;
myArray[5] = 5; // writing beyond array bounds
delete[] myArray;
Now my question is: how can I debug this and find out what is overwriting my original buffer? I know that something is writing past the end of the buffer; is there a way for Visual Studio to help me debug this easily?
The code above that copies into the data buffer is called several times before the delete, so it's hard to keep track of the m_pImage contents and the data copied into it. (It's about 2 MB worth of data.)
How can I possibly debug or find out what is overwriting my original buffer?
I would recommend using assert() statements as much as possible. In this case it should be:
BYTE* pDestinationBuffer = m_pImage + m_imageOffset;
assert( dataLength + m_imageOffset <= m_someLength );
memcpy( pDestinationBuffer, (BYTE*)data, dataLength );
Then compile in debug mode and run. The benefit of this method is that you have no overhead in release mode, where asserts are not evaluated.
On Windows you can use the Application Verifier to find this kind of overwrite.
Heap corruption is a tough bug to find. Most times, when the error is reported, the memory has already been corrupted by some upstream code that executed previously. If you decide to use Application Verifier (and you should), I'd also encourage you to try GFlags and PageHeap. They are additional tools that let you set registry flags for debugging these kinds of problems.
This is a printing thread that prints the statistics of my currently running program:
void StatThread::PrintStat(){
    clock_t now = 0;
    UINT64 oneMega = 1 << 20;
    const char* CUnique = 0;
    const char* CInserted = 0;
    while ((BytesInserted <= fileSize.QuadPart) && flag){
        Sleep(1000);
        now = clock();
        CUnique = FormatNumber(nUnique);
        CInserted = FormatNumber(nInserted);
        printf("[ %.2f%%] %u / %u dup %.2f%% # %.2fM/s %.2fMB/s %3.2f%% %uMB\n",
               (double)BytesInserted*100/(fileSize.QuadPart),
               nUnique, nInserted, (nInserted-nUnique)*100/(double)nInserted,
               ((double)nInserted/1000000)/((now - start)/(double)CLOCKS_PER_SEC),
               ((double)BytesInserted/oneMega)/((now - start)/(double)CLOCKS_PER_SEC),
               cpu.GetCpuUtilization(NULL), cpu.GetProcessRAMUsage(true));
        if (BytesInserted == fileSize.QuadPart)
            flag = false;
    }
    delete[] CUnique;  // would have worked, with a memory leak, if commented out
    delete[] CInserted; // crash here! heap corruption
}
This is FormatNumber, which returns a pointer to a char array:
const char* StatThread::FormatNumber(const UINT64& number) const{
    char* result = new char[100];
    result[0] = '\0';
    _i64toa_s(number, result, 100, 10);
    DWORD nDigits = ceil(log10((double)number));
    result[nDigits] = '\0';
    if (nDigits > 3){
        DWORD nComma = 0;
        if (nDigits % 3 == 0)
            nComma = (nDigits/3) - 1;
        else
            nComma = nDigits/3;
        char* newResult = new char[nComma+nDigits+1];
        newResult[nComma+nDigits] = '\0';
        for (DWORD i = 1; i <= nComma+1; i++){
            memcpy(newResult+strlen(newResult)-i*3-(i-1), result+strlen(result)-i*3, 3);
            if (i != nComma+1){
                *(newResult+strlen(newResult)-4*i) = ',';
            }
        }
        delete[] result;
        return newResult;
    }
    return result;
}
What is really weird is that it crashed only in release mode because of heap corruption, but ran smoothly in debug mode. I've already checked everywhere and found no obvious memory leaks, and even the memory leak detector said so:
Visual Leak Detector Version 2.2.3 installed.
The thread 0x958 has exited with code 0 (0x0).
No memory leaks detected.
Visual Leak Detector is now exiting.
The program '[5232] Caching.exe' has exited with code 0 (0x0).
However, when I ran it in release mode, it threw an error saying my program had stopped working; I clicked Debug, and it pointed to the line that caused the heap corruption.
The thread 0xe4c has exited with code 0 (0x0).
Unhandled exception at 0x00000000770E6AE2 (ntdll.dll) in Caching.exe: 0xC0000374: A heap has been corrupted (parameters: 0x000000007715D430).
If I commented out that line, it worked fine, but then the leak detector would complain about a memory leak! I don't understand how there can be heap corruption when there are no memory leaks (at least according to the leak detector). Please help; thank you in advance.
Edit:
The heap corruption is fixed: in the very last iteration I was still copying 3 bytes to the front instead of whatever was left over. Thank you all for your help!
const char* StatThread::FormatNumber(const UINT64& number) const{
    char* result = new char[100];
    result[0] = '\0';
    _ui64toa_s(number, result, 100, 10);
    DWORD nDigits = (DWORD)ceil(log10((double)number));
    if (number % 10 == 0){
        nDigits++;
    }
    result[nDigits] = '\0';
    if (nDigits > 3){
        DWORD nComma = 0;
        if (nDigits % 3 == 0)
            nComma = (nDigits/3) - 1;
        else
            nComma = nDigits/3;
        char* newResult = new char[nComma+nDigits+1];
        DWORD lenNewResult = nComma + nDigits;
        DWORD lenResult = nDigits;
        for (DWORD i = 1; i <= nComma+1; i++){
            if (i != nComma+1){
                memcpy(newResult+lenNewResult-4*i+1, result+lenResult-3*i, 3);
                *(newResult+lenNewResult-4*i) = ',';
            }
            else{
                memcpy(newResult, result, lenNewResult-4*(i-1));
            }
        }
        newResult[nComma+nDigits] = '\0';
        delete[] result;
        return newResult;
    }
    return result;
}
Sorry to be blunt, but the code to "format" a string is horrible.
First of all, you pass in an unsigned 64-bit int value, which you formatted as a signed value instead. If you claim to sell bananas, you shouldn't give your customers plantains instead.
But what's worse is that what you do return (when you don't crash) isn't even right. If a user passes in 0, well, then you return nothing at all. And if a user passes in 1000000 you return 100,000 and if he passes in 10000000 you return 1,000,000. Oh well, what's a factor of 10 for some numbers between friends? ;)
These, along with the crash, are symptoms of the crazy pointer arithmetic your code does. Now, to the bugs:
First of all, when you allocate newResult you leave the buffer in a very weird state: the first nComma + nDigits bytes are random values, followed by a NULL. You then call strlen on that buffer. The result of that strlen can be any number between 0 and nComma + nDigits, because any one of the nComma + nDigits characters may contain a null byte, which will cause strlen to terminate prematurely. In other words, the code is non-deterministic after that point.
Sidenote: If you're curious why it works in debug builds, it's because the compiler and the debug version of the runtime libraries try to help you catch bugs by initializing memory for you. In Visual C++ the fill mask is usually 0xCC. This made sure that the bug in your strlen() was covered up in debug builds.
Fixing that bug is pretty simple: simply initialize the buffer with spaces, followed by a NULL.
char* newResult = new char[nComma+nDigits+1];
memset(newResult, ' ', nComma+nDigits);
newResult[nComma+nDigits]='\0';
But there's one more bug. Let's try to format the number 1152921504606846975, which should become 1,152,921,504,606,846,975. Let's see what those fancy pointer arithmetic operations give us:
memcpy(newResult + 25 - 3 - 0, result + 19 - 3, 3)
*(newResult + 25 - 4) = ','
memcpy(newResult + 25 - 6 - 1, result + 19 - 6, 3)
*(newResult + 25 - 8) = ','
memcpy(newResult + 25 - 9 - 2, result + 19 - 9, 3)
*(newResult + 25 - 12) = ','
memcpy(newResult + 25 - 12 - 3, result + 19 - 12, 3)
*(newResult + 25 - 16) = ','
memcpy(newResult + 25 - 15 - 4, result + 19 - 15, 3)
*(newResult + 25 - 20) = ','
memcpy(newResult + 25 - 18 - 5, result + 19 - 18, 3)
*(newResult + 25 - 24) = ','
memcpy(newResult + 25 - 21 - 6, result + 19 - 21, 3)
As you can see, your very last operation copies data 2 bytes before the beginning of the buffer you allocated. This is because you assume that you will always be copying 3 characters. Of course, that's not always the case.
Frankly, I don't think your version of FormatNumber should be fixed. All that pointer arithmetic and those calculations are bugs waiting to happen. Here's the version I wrote, which you can use if you want. I consider it much more sane, but your mileage may vary:
const char *StatThread::FormatNumber(UINT64 number) const
{
// The longest 64-bit unsigned integer, 0xFFFFFFFFFFFFFFFF, is equal
// to 18,446,744,073,709,551,615. That's 26 characters,
// so our buffer will be big enough to hold two of those
// although, technically, we only need 6 extra characters
// at most.
const int buflen = 64;
char *result = new char[buflen];
int cnt = -1, idx = buflen;
do
{
cnt++;
if((cnt != 0) && ((cnt % 3) == 0))
result[--idx] = ',';
result[--idx] = '0' + (number % 10);
number = number / 10;
} while(number != 0);
cnt = 0;
while(idx != buflen)
result[cnt++] = result[idx++];
result[cnt] = 0;
return result;
}
P.S.: The "off by a factor of 10" thing is left as an exercise to the reader.
At the line
DWORD nDigits = ceil(log10((double)number));
you need three digits for 100, but log10(100) = 2. This means you are allocating one character too few in char* newResult = new char[nComma+nDigits+1];, so the end of your heap cell is overwritten, resulting in the heap corruption you are seeing. The debug heap allocator may be more forgiving, which is why the crash shows up only in release mode.
Heap corruption is usually caused by overwriting the heap data structures. There is a lot of use of "result" and "newResult" without good boundary checking. When you do a debug build, the whole alignment changes, and by chance the error doesn't happen.
I would start by adding checks like this:
DWORD nDigits = ceil(log10((double)number));
if(nDigits>=100){printf("error\n");exit(1);}
result[nDigits] = '\0';
Two things in your StatThread::PrintStat function.
This is a memory leak if the loop body executes more than once: you reassign these pointers without calling delete[] on the previous values.
while((BytesInserted<=fileSize.QuadPart)&&flag){
...
CUnique = FormatNumber(nUnique);
CInserted = FormatNumber(nInserted);
...
}
Is this supposed to be an assignment = or a comparison ==?
if(BytesInserted=fileSize.QuadPart)
flag=false;
Edit to add:
In your StatThread::FormatNumber function, this statement adds a null terminator at the end of the block, but the preceding chars may contain garbage (new doesn't zero allocated memory). Subsequent calls to strlen() may therefore return an unexpected length.
newResult[nComma+nDigits]='\0';
I have a simple question. I have a few files; one file is around ~20000 lines.
It has 5 fields, and some other ADTs (vectors and lists), but those do not cause a segfault.
The map itself will store a key value, equivalent to about 1 per line.
When I added a map to my code, I would instantly get a segfault. I copied 5000 of the 20000 lines and still received a segfault; then I tried 1000, and it worked.
In Java there is a way to increase the amount of virtual memory allocated; is there a way to do so in C++? I have even deleted elements as they are no longer used, and I can get to around 2000 lines, but not more.
Here is gdb:
(gdb) exec-file readin
(gdb) run
Starting program: /x/x/x/readin readin
Program exited normally.
valgrind:
HEAP SUMMARY:
==7948== in use at exit: 0 bytes in 0 blocks
==7948== total heap usage: 20,206 allocs, 20,206 frees, 2,661,509 bytes allocated
==7948==
==7948== All heap blocks were freed -- no leaks are possible
code:
....
Flow flw = endQueue.top();
stringstream str1;
stringstream str2;
if (flw.getSrc() < flw.getDest()){
    str1 << flw.getSrc();
    str2 << flw.getDest();
    flw_src_dest = str1.str() + "-" + str2.str();
} else {
    str1 << flw.getSrc();
    str2 << flw.getDest();
    flw_src_dest = str2.str() + "-" + str1.str();
}
while (int_start > flw.getEnd()){
    if (flw.getFlow() == 1){
        ava_bw[flw_src_dest] += 5.5;
    } else {
        ava_bw[flw_src_dest] += 2.5;
    }
    endQueue.pop();
}
A segmentation fault doesn't necessarily indicate that you're out of memory. In fact, with C++, it's highly unlikely: you would usually get a bad_alloc or somesuch in this case (unless you're dumping everything in objects with automatic storage duration?!).
More likely, you have a memory corruption bug in your code, that just so happens to only be noticeable when you have more than a certain number of objects.
At any rate, the solution to memory faults is not to blindly throw more memory at the program.
Run your code through valgrind and through a debugger, and see what the real problem is.
Be careful erasing elements from a container while you are iterating over the container.
for (pos = ava_bw.begin(); pos != ava_bw.end(); ++pos) {
    if (pos->second == INIT){
        ava_bw.erase(pos);
    }
}
I believe the erase will leave pos pointing to the next value, but then ++pos advances it yet again. And if erase(pos) results in pos pointing at ava_bw.end(), the ++pos will fail.
I know that if you tried this with a vector, pos would be invalidated.
Edit
In the while loop you do
while (int_start > flw.getEnd()){
    if (flw.getFlow() == 1){
        ava_bw[flw_src_dest] += 5.5;
    } else {
        ava_bw[flw_src_dest] += 2.5;
    }
    endQueue.pop();
}
You need to do flw = endQueue.top() again.
Generally speaking, in C/C++ the maximum amount of available heap isn't fixed at the start of the program -- you can always allocate more memory, either by using new/malloc directly or by using STL containers such as std::list, which allocate by themselves.
I don't think the problem is memory, as C++ gets as much memory as it asks for, even hogging all available memory on your PC. Check whether you delete something that you access later on.