New vector with fixed size causes crash - C++

Problem
I had a problem randomly appearing when creating a new vector (pointer) with a fixed initial size.
std::vector<double> * ret = new std::vector<double>(size);
This sometimes causes my program to crash and I don't really get why ... maybe stack corruption? Sadly, I didn't find any explanation on the web for what can cause this issue.
Example:
Code
// <- ... Some independent code
// [size] is an unsigned int passed as parameter to the function
cout << size << endl;
std::vector<double> * ret = new std::vector<double>(size);
cout << "Debug text" << endl;
// More code ... ->
EDIT: I will update the code as soon as possible to have a clear, minimal, reproducible example and make this a proper question according to: How to create a Minimal, Complete, and Verifiable example
Output
100
... Then it crashes (the trace "Debug text" is not printed and the size is correct)
I tried putting the critical line of code inside a try/catch as some people suggested (for memory-related errors), but no exception is caught and I still get the crash.
This code is inside a function called multiple times (with various values of size, always between 1 and 1000). Sometimes the function ends without a problem, sometimes not (the value of size does not seem to have any influence, but maybe I'm wrong).
My "solution" (you can skip this part)
I adapted my code to use a pointer to vector without initial size
std::vector<double> * ret;
and I use push_back() instead of [].
[] was quicker for my algorithm due to how the vector is filled at first (element order is important and I get positions from an external file, but I still need a vector rather than an array for its dynamic aspect later in the code). I adapted everything to use push_back() (less efficient in my case, since I now need more iterations, but nothing critical).
Question
In short: does anyone know what could be causing this issue, OR how I can track down what is causing it?

It looks like your program stopped crashing not because you create the vector without a size, but because you use push_back(). The fact that replacing operator[] with push_back() removes your symptom points to somewhere else where you access a vector element out of bounds, corrupting your memory until the program crashes. Check the code where you access the data.

As you wrote it, it seems like you are trying to access it using ret[...], right :-)? Sorry for my smile, but this happens when you use a pointer to a vector...
If this is the case, you need to replace it with (*ret)[...]

Related

Underlying reasons for different output after removing a print statement in an "out of range of vector error"

I have minimal code that causes said behavior:
vector<int> result(9);
int count = 0;
cout << "test1\n"; // removing this line causes 'core dump'
for (int j=0; j < 12; j++)
result[count++] = 1;
cout << "test2\n";
result is a vector of size 9, and inside the 'for' loop I am accessing elements out of range.
Now, removing the test1 line, the code runs without any errors; but with this cout line, I get
*** Error in `./out_of_range_vector2': free(): invalid next size (fast): 0x0000000001b27c20 ***
I understand that this is telling me that free() encountered some memory that was not allocated by malloc(), but what role does this cout line play here? I'd like to know a little bit more about what's going on. More specifically, I have two questions:
Is this caused by the different state of heap on these 2 cases? If so, what exactly is different?
Why sometimes accessing out of range elements does not cause error? Is it because it hasn't exceeded vector's capacity?
The line
cout << "test1\n";
does a lot of things, and it can possibly allocate and free memory.
Writing outside the bounds of a vector is "undefined behavior", and this is very different from "getting an error". "Undefined behavior" means that whoever writes the compiler and the runtime library is simply free to ignore those possibilities, and whatever happens, happens. They can do so because a programmer is never supposed to do those things, and whoever does is 100% at fault. Preventing, protecting against, or even simply reporting errors when accessing std::vector elements using operator[] is not part of the contract.
Technically, what could have happened is that the write operation destroyed some internal data structure used by the memory allocator, and this resulted in crazy behavior later on that may or may not end in a segfault.
A segfault is when things get crazy to the point that even the operating system can detect that the program is not doing what it is supposed to do, because it requests access to locations that do not even exist (so they certainly cannot be the correct locations that were supposed to contain the data being looked for).
You can, however, get undefined behavior and corrupted data without reaching that "segfault" point, and the program will simply read or write incorrect data from or to the wrong locations, even without any observable difference from a correct program. This is actually what happens most of the time (unfortunately).
So what happens when you read or write outside the size of an std::vector using the unchecked operator[]? Most of the time, nothing (apparently). It may, however, do whatever it likes after that mistake, including misbehaving in places where the code is instead correct, one billion machine instructions later, and only when that provokes serious damage. Just don't do that.
When programming in C++ you simply cannot make any mistake. There are no "runtime error angels" to protect you like in other, higher-level languages.

Buffer overrun with STL vector

I am copying the contents of one STL vector to another.
The program is something like this
std::vector<uint_8> l_destVector(100); //Just for illustration let us take the size as 100.
std::vector<uint_8> l_sourceVector; //Let us assume that source vector is already populated.
memcpy( l_destVector.data(), l_sourceVector.data(), l_sourceVector.size() );
The above example is pretty simplistic but in my actual code the size
of destination vector is dynamically calculated.
Also the source vector is getting populated dynamically making it possible to have different length of data.
Hence it increases the chance of buffer overrun.
The problem I faced is my program is not crashing at the point of memcpy when there is a buffer overrun but sometime later making it hard to debug.
How do we explain this behavior?
/******************************************************************************************************/
Based on the responses I am editing the question to make my concern more understandable.
So, this is legacy code, there are a lot of places where vectors have been copied using memcpy, and we do not intend to change the existing code. My main concern here is: "Shouldn't memcpy guarantee an immediate crash, and if not, why?" I would honestly admit that this is not very well written code.
A brief illustration of actual use is as follows.
In the below method, i_DDRSPDBuffer and i_dataDQBuffer where generated based on some logic in the calling method.
o_dataBuffer was assigned a memory space that would have been sufficient to take the data from two input buffers, but some recent changes in method that calls updateSPDDataToRecordPerDimm, is causing overrun in one of the flows.
typedef std::vector<uint8_t> DataBufferHndl;
errHdl_t updateSPDDataToRecordPerDimm(
dimmContainerIterator_t i_spdMmap,
const DataBufferHndl & i_DDRSPDBuffer,
const DataBufferHndl & i_dataDQBuffer,
DataBufferHndl & o_dataBuffer)
{
uint16_t l_dimmSPDBytes = (*i_spdMmap).second.dimmSpdBytes;
// Get the Data Buffer Handle for the input and output vectors
uint8_t * l_pOutLDimmSPDData = o_dataBuffer.data();
const uint8_t * l_pInDDRSPDData = i_DDRSPDBuffer.data();
const uint8_t * l_pInDQData = i_dataDQBuffer.data();
memcpy(l_pOutLDimmSPDData, l_pInDDRSPDData, l_dimmSPDBytes);
memcpy(l_pOutLDimmSPDData + l_dimmSPDBytes,
l_pInDQData, LDIMM_DQ_DATA_BYTES);
memcpy(l_pOutLDimmSPDData,
l_pInDQData, LDIMM_DQ_DATA_BYTES); // ====> Expecting the crash here, but the crash happens somewhere after the method updateSPDDataToRecordPerDimm returns.
}
It doesn't have to crash, it's undefined behaviour.
If you had used std::copy instead with std::vector<uint_8>::iterators in debug mode, you probably would've hit an assertion which would've caught it.
Do not do that! It will eventually bite you.
Use either std::copy and an output_iterator, or, since you know the size of the destination, resize the vector to the correct size, create a vector of the correct size and pipe the contents straight in, or simply use the assignment operator.
It doesn't crash right at the moment of memcpy because you 'only' overwrite the memory behind the allocated vector. As long as your program does not read from that corrupt memory and use the data, your program will continue to run.
As already mentioned, using memcpy is not the recommended way to copy the contents of STL containers. You'd be on the safe side with
std::copy
std::vector::assign
And in both cases you'd also get the aforementioned iterator debugging which will trigger close to the point where the error actually is.

C++ program...overshoots?

I'm decent at C++, but I may have missed some nuance that applies here. Or maybe I completely missed a giant concept, I have no idea. My program was instantly crashing ("blah.exe is not responding") about 1/5 times it was run (other times it ran completely fine) and I tracked the problem down to a constructor for a world class that was called once in the beginning of the main function. Here is the code (in the constructor) that causes the problem:
int ii;
for(ii=0;ii<=255;ii++)
{
cout<<"ent "<<ii<<endl;
entity_list[ii]=NULL;
}
for(ii=0;ii<=255;ii++)
{
cout<<"sec "<<ii<<endl;
sector_list[ii]=NULL;
}
entity_list[0] = new Entity(0,0);
entity_list[0]->_world = this;
Specifically, the second for loop. The cout calls are new, added for the sake of telling where it is having trouble. It would print the entire "ent 1" to "ent 255", then "sec 1" to "sec 255", and then crash right after, as if it were going for a 257th run-through of the second for loop. I set the second for loop to go until "ii<=254", which stopped all crashes. Does C++ code tend to "overshoot" for loops or something? What is causing it to crash at this specific loop, seemingly at random?
By the way, entity_list and sector_list point to classes called Entity and Sector, respectively, but they are not constructing anything so I didn't think it would be relevant. I also have a forward declaration for the Entity class in a header for this, but since none were being constructed I didn't think it was relevant either.
You are going beyond the bounds of your array.
Based on your comment in Charles' answer, you stated:
I just declared them in the world class as entity_list[255] and
sector_list[255]
And therein lies your problem. By declaring them to have 255 elements, that means you can only access elements a[0] through a[254] (If you count them up, you'll find that that is 255 elements. If index a[255] existed, then it would mean that there were 256 elements).
Now for the question: Why did it act so erratically when you accessed an element outside of the bounds of the array?
The reason is because accessing elements outside of the bounds of the array is undefined behavior in C++. I can't tell you what it should do, because it has been intentionally left undefined (don't ask me why--maybe someone who knows can comment?).
What this means is that the results will be sporadic and unpredictable, especially when you run it on different machines.
It might work just fine. It might crash. It might delete your hard drive! (This one is unlikely, but doing so wouldn't violate the C++ standard!)
Bottom line: just because you got a strange or non-existent error message does NOT mean it's OK. Just don't do it.
How did you declare entity_list and sector_list? Remember that you are using 0-based indexing, so if you go from ii = 0 to ii <= 255 you need 256 buckets, not 255.

segfault when copying an array to a vector in Linux

I'm trying to debug a legacy code written for Linux. Sometimes the application gets a segfault when it reaches the memcpy call in the following method:
std::vector<uint8> _storage;
size_t _wpos;
void append(const uint8 *src, size_t cnt)
{
if (!cnt)
return;
if (_storage.size() < _wpos + cnt)
_storage.resize(_wpos + cnt);
memcpy(&_storage[_wpos], src, cnt);
_wpos += cnt;
}
The values are as follows:
_storage.size() is 1000
_wpos is 0
*src points to an array of uint8 with 3 values: { 3, 110, 20 }
cnt is 3
I have no idea why this happens since this method gets called thousands of times during the application's runtime but it sometimes gets a segfault.
Any one has any idea how to solve this?
Your code looks good in terms of the data that is written. Are you absolutely sure that you're passing in the right src pointer? What happens when you run the code with a debugger such as gdb? It should halt on the segfault, and then you can print out the values of _storage.size(), src, and cnt.
I'm sure you'll find that (at least) one of those is not at all what you're expecting. You might have passed an invalid src; you might have passed an absurdly large cnt.
I'd suggest to run valgrind on your program.
It's very valuable for spotting memory corruption early, as may be the case with your program (since the crash you get is not systematic).
For the values you give, I can't see why that would segfault. It's possible that your segfault is a delayed failure due to an earlier memory management mistake. Writing past the end of the vector in some earlier function could cause some of the vector's internal members to be corrupted, or you may have accidentally freed part of the memory used by the vector earlier. I'd check the other functions that manipulate the vector to see if any of them are doing any suspicious casting.
I see the size of the vector increasing. I never see it decreasing.
Next to that, vector has exquisite memory management support built in. You can insert your values right at the end:
_storage.insert( _storage.end(), src, src + cnt );
This will both expand the vector to the right size, and copy the values.
The only thing I can think of is that _storage.resize() fails (which should throw a bad_alloc exception).
Another alternative would be to append each value separately with a call to push_back() (probably far slower though).
I see one problem here.
The memcpy() function copies n bytes of memory, so if cnt is a number of elements, you need a * sizeof(uint8) in the call to memcpy. (Here sizeof(uint8) is 1, so it happens to make no difference.)
In a comment to my other answer, you said that "The vector gets cleaned up in another method since it is a class member variable. I'll test insert and see what happen".
What about thread-safety? Are you absolutely sure that the clearing method does not clear 'while' the resize is happening, or immediately after it? Since it's a 'sometimes' problem, it may be induced by concurrent access to the memory management in the vector.

C++ What's the max number of bytes you can dynamically allocate using the new operator in Windows XP using VS2005?

I have C++ code that attempts to dynamically allocate a 2D array of bytes measuring approx 151 MB in size. When I attempt to go back and index through the array, my program crashes in exactly the same place every time with an "Access violation reading location 0x0110f000" error, but the indices appear to be in range. That leads me to believe the memory at those indices wasn't allocated correctly.
1) What's the max number of bytes you can dynamically allocate using the new operator?
2) If it is the case that I'm failing to dynamically allocate memory, would it make sense that my code crashes when attempting to access the array at exactly the same two indices every time? For some reason, I feel like they would be different every time the program is run, but what do I know ;)
3) If you don't think the problem is from an unsuccessful call to new, any other ideas what could be causing this error and crash?
Thanks in advance for all your help!
*Edit
Here's my code to allocate the 2d array...
#define HD_WIDTH 960
#define HD_HEIGHT 540
#define HD_FRAMES 100
//pHDVideo is a char**
pHDVideo->VideoData = new char* [HD_FRAMES];
for(int iFrame = 0; iFrame < HD_FRAMES; iFrame++)
{
//Create the new HD frame
pHDVideo->VideoData[iFrame] = new char[HD_WIDTH * HD_HEIGHT * 3];
memset(pHDVideo->VideoData[iFrame], 0, HD_WIDTH * HD_HEIGHT * 3);
}
and here's a screenshot of the crashing code and debugger (Dead Link) it will help.
I should add that the call to memset never fails, which to me means the allocation is successful, but I could be wrong.
EDIT
I found a fix everyone, thanks for all your help. Somehow, and I still need to figure out how, there was one extra horizontal line being upscaled, so I changed...
for(int iHeight = 0; iHeight < HD_HEIGHT; iHeight++)
to
for(int iHeight = 0; iHeight < HD_HEIGHT-1; iHeight++)
and it suddenly worked. Anyhow, thanks so much again!
Some possibilities to look at or things to try:
It may be that pHDVideo->VideoData[iFrame] or pHDVideo->VideoData is being freed somewhere. I doubt this is the case, but I'd check all the places this can happen anyway. Output a debug statement each time you free one of those AND just before your crash statement.
Something might be overwriting the pHDVideo->VideoData[iFrame] values. Print them out when allocated and just before your crash statement to see if they've changed. If 0x0110f000 isn't within the range of one of them, that's almost certainly the case.
Something might be overwriting the pHDVideo value. Print it out when allocated and just before your crash statement to see if it's changed. This depends on what else is within your pHDVideo structure.
Please show us the code that crashes, with a decent amount of context so we can check that out as well.
In answer to your specific questions:
1/ It's implementation- or platform-specific, and it doesn't matter in this case. If your calls to new were failing, you'd get an exception or a null return, not a dodgy pointer.
2/ It's not the case: see (1).
3/ See above for some possibilities and things to try.
Following addition of your screenshot:
You do realize that the error message says "Access violation reading ..."?
That means it's not complaining about writing to pHDVideo->VideoData[iFrame][3*iPixel+2] but reading from this->VideoData[iFrame][3*iPixelIndex+2].
iPixelIndex is set to 25458, so can you confirm that this->VideoData[iFrame][76376] exists? I can't see from your screenshot how this->VideoData is allocated and populated.
How are you accessing the allocated memory? Does it always die on the same statement? It looks very much like you're running off the end of either the one-dimensional array of pointers, or one of the big blocks of chars that it points to. As you say, the memset pretty much proves that the memory was allocated correctly. The total amount of memory you're allocating is around 0x9450C00 bytes, so the address you quoted is off the end of the allocated memory if it was allocated contiguously.
Your screenshot appears to show that iPixel is in range, but it doesn't show what the value of iFrame was. Is it outside the range 0-99?
Update: The bug isn't in allocating the memory, it's in your conversion from HD to SD coordinates. The value you're reading from on the SD buffer is out of range, because it's at coordinates (144,176), which isn't in the range (0,0)-(143,175).
If it is the case that I'm failing to dynamically allocate memory, would it make sense that my code is crashing when attempting to access the array at exactly the same two indicies every time?
No, it wouldn't make sense to me.
If your call to operator new fails, I'd expect it to throw an exception or to return a null pointer (but not to return a non-null pointer to memory that's OK for some indices but not others).
Why are you using floating point math to calculate an integer index?
It looks like you are indexing out of range of the SD image buffer. 25344==SD_WIDTH*SD_HEIGHT, which is less than iPixelIndex, 25458.
Note that heap allocation (e.g. using new) is efficient when allocating many small objects (that's what a heap is for). If you're in the business of very large memory allocations, it might be better to use VirtualAlloc and friends.