Strange segfault with tinyxml2 - c++

I have a segfault that I don't understand.
It always occurs at i = 0 and j between 1000 and 1100.
Here is the backtrace and all the sources required to see the problem: https://gist.github.com/Quent42340/7592902
Please help me.
EDIT: Oh, I forgot: in my gist, map.cpp:72 is commented out. It's commented out in my source code too. I did that to see where the problem came from, but even without that line the problem is still there.

I see you allocate an array of pointers here:
m_data = new u16*[m_layers];
But I never see you allocate the second dimension of this array. You need to allocate the rows of your map as well, either as one large chunk of memory that you partition yourself, or with a separate new for each row.
For example, if you add one line to your for (i ...) loop:
for (u8 i = 0; i < m_layers; i++) {
    m_data[i] = new u16[m_width * m_height];
    // ... rest of the existing loop body ...
}
If you go that route, you'll also need to upgrade your destructor:
Map::~Map() {
    // WARNING: This doesn't handle the case where the map failed to load...
    // Exercise for the reader.
    for (u8 i = 0; i < m_layers; i++) {
        delete[] m_data[i];
    }
    delete[] m_data;
}
An alternate approach would be to use std::vector and let the C++ standard library manage the memory for you (std::array won't work here, since the dimensions aren't known until runtime).
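A minimal sketch of that approach, assuming the same m_layers/m_width/m_height dimensions as in the gist (u16 standing in for its 16-bit typedef):

#include <cstdint>
#include <vector>

using u16 = std::uint16_t; // stand-in for the gist's typedef

class Map {
public:
    Map(int layers, int width, int height)
        : m_width(width),
          m_data(layers, std::vector<u16>(width * height)) {}

    // No destructor needed: each inner vector releases its own storage.
    u16& tile(int layer, int x, int y) {
        return m_data[layer][y * m_width + x];
    }

private:
    int m_width;
    std::vector<std::vector<u16>> m_data; // m_data[layer][y * m_width + x]
};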

Related

why can't I populate a vector using a pointer to an array of characters?

For the following code snippet, I have been stepping through in the debugger, and when I print the value of the memory pointed to by dataPtr, I can see it has values. However, after the loop executes, if I print the value of dataStream, it is all 0's. Any idea why I cannot populate the vector from the pointer?
std::vector<uint8_t>* dataStream = new std::vector<uint8_t>();
uint8_t* dataPtr = udpPacket->getData();
for (int i = 0; i < head->m_cb; ++i) {
    dataStream->push_back(*(dataPtr + i));
}
EDIT: I have attached a screenshot of what is happening to make it clear what I am doing. I am new to C++, so maybe I am debugging this wrong?
Thanks to the comments, it seems that initializing the vector with a range solved the problem! The solution is to use
std::vector<uint8_t>* dataStream = new std::vector<uint8_t>(dataPtr, dataPtr + head->m_cb);
rather than a loop to fill the vector.
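For reference, here is a minimal standalone version of that fix; since the udpPacket and head types aren't shown in the question, a plain buffer stands in for udpPacket->getData() and head->m_cb:

#include <cstddef>
#include <cstdint>
#include <vector>

int main() {
    std::uint8_t buffer[] = {1, 2, 3, 4}; // stands in for udpPacket->getData()
    std::size_t count = sizeof buffer;    // stands in for head->m_cb
    // The range constructor copies [first, last) in one step; there is also
    // no reason to allocate the vector itself with new.
    std::vector<std::uint8_t> dataStream(buffer, buffer + count);
    return dataStream.size() == count ? 0 : 1;
}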

Read the number of lines from a text file and store it as a constant int for an array size in c++

I'm new to C++ and having some problems. Basically, what I have to do is read different kinds of text files and use the number of lines as the size for the rows of a 2-dimensional array.
The input file looks like this:
int_n1 int_n2 (These are 2 integers needed later on for processing)
(blank line)
[number of nurses][140] (too much to type out)
Here is a link to what it actually looks like: http://puu.sh/lEh2y/e4f740d30f.png
My code looks like this:
// prepare the input stream for use
ifstream prefFile(INPUTPREF);
// test whether the input stream can be opened
if (prefFile.is_open())
{
    // new lines will be skipped unless we stop it from happening:
    prefFile.unsetf(std::ios_base::skipws);
    // count the newlines with an algorithm specialized for counting:
    unsigned line_count = std::count(std::istream_iterator<char>(prefFile),
                                     std::istream_iterator<char>(), '\n');
    int aantNurse = line_count + 1 - 2;
    int nursePref[aantNurse][140];
}
Of course, just putting 'const' in front of 'int aantNurse' doesn't work.
Does anybody have a suggestion on how to solve this? I'd prefer not to have to use an oversized array that could fit everything, although that could be a possibility.
One possible solution is to allocate the memory for your nursePref array dynamically and release it at the end.
Something like this:
int** nursePref = new int*[aantNurse];
for (int i = 0; i < aantNurse; ++i) {
    nursePref[i] = new int[140];
}
Then release it properly using delete[]:
for (int i = 0; i < aantNurse; ++i) {
    delete[] nursePref[i];
}
delete[] nursePref;
Also, as already said, using vectors is a better idea:
std::vector<std::vector<int> > nursePref(aantNurse, std::vector<int>(140));
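Putting the pieces together, here is a minimal sketch of the whole flow; "preferences.txt" is a hypothetical stand-in for the INPUTPREF constant in the question:

#include <algorithm>
#include <fstream>
#include <iterator>
#include <vector>

int main() {
    std::ifstream prefFile("preferences.txt"); // hypothetical name for INPUTPREF
    if (prefFile.is_open()) {
        prefFile.unsetf(std::ios_base::skipws); // don't skip newlines
        unsigned line_count = std::count(std::istream_iterator<char>(prefFile),
                                         std::istream_iterator<char>(), '\n');
        int aantNurse = line_count + 1 - 2; // same formula as in the question
        // Counting consumed the stream, so rewind before reading the real data.
        prefFile.clear();
        prefFile.seekg(0);
        std::vector<std::vector<int>> nursePref(aantNurse, std::vector<int>(140));
    }
}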

My Visual Studio 2012 program works in Debug and in Release without debugging (Ctrl+F5), but not in Release. How do I fix this?

As stated above, my program works in Debug and in Release without debugging (Ctrl+F5); however, it does not work in plain Release.
Just to clarify: I have already checked for uninitialized variables and found none (to the best of my knowledge, anyway, and I have spent quite some time looking).
I believe I have localized the issue, and what I have come across is, in my opinion, very bizarre. First I set up the breakpoints as shown in the picture below:
Then I run the program in Release, and instantly the top breakpoint moves:
I found this extremely odd. Now note the number 6302 assigned to 'n'. This number is correct and what I hoped to pass through. Now watch as I continue through the program.
We are still in good shape, but then it takes a turn for the worse.
'n' changes to 1178521344, which messes up the rest of my code.
Would someone be able to shed some light on the situation and, even better, offer a solution?
Thanks,
Kevin
Here is the rest of the function if it helps:
NofArr = n;
const int NA = n;
const int NAless = n - 1;
double k_0 = (2 * PI) / wavelength;
double *E = new double[NAless]; // array to hold the off-diagonal entries
double *D = new double[NA];     // array to hold the diagonal entries on input and eigenvalues on output
int sizeofeach = 0;
trisolver Eigen;
int* start; int* end;
vector< vector< complex<double> > > thebreakup = BreakUp(refidx, posandwidth, start, end);
for (int j = 0; j < (int)thebreakup.size(); j++) {
    // load the diagonal entries into D
    for (int i = 0; i < (int)thebreakup[j].size(); i++) {
        D[i] = -((double)2.0 / (dx * dx)) + (k_0 * k_0 * thebreakup[j][i].real() * thebreakup[j][i].real());
    }
    // load the off-diagonal entries into E
    for (int i = 0; i < (int)thebreakup[j].size(); i++) {
        E[i] = (double)1.0 / (dx * dx);
    }
    sizeofeach = (int)thebreakup[j].size();
    double *arr1 = new double[sizeofeach];
    arr1 = Eigen.EigenSolve(E, D, sizeofeach, mode);
    complex<double> tmp(PhaseAndAmp[j][1] * cos(PhaseAndAmp[j][0]),
                        PhaseAndAmp[j][1] * sin(PhaseAndAmp[j][0]));
    // rebuild the breakup with the mode
    for (int i = 0; i < (int)thebreakup[j].size(); i++) {
        thebreakup[j][i] = (complex<double>(arr1[i], 0.0)) * tmp;
    }
    delete[] arr1;
}
vector< complex<double> > sol = rebuild(thebreakup, start, end);
delete[] E;
delete[] D;
delete[] start;
delete[] end;
return sol;
I'm writing this as an answer, because it's way harder to write as a comment.
What strikes me immediately is the array "arr1".
First you allocate new memory and store a pointer to it in the variable arr1
double *arr1 = new double[sizeofeach];
Then, immediately, you overwrite the address.
arr1 = Eigen.EigenSolve(E, D, sizeofeach, mode);
Later, you delete something. Is it safe?
delete []arr1;
It's not the double array you allocated, but something EigenSolve returned. Are you sure you have the right to delete it? Try removing the delete here. Also fix the memory leak by removing the allocation in the first line I quoted.
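A minimal sketch of both suggestions combined, assuming (as the usage implies, though the snippet doesn't prove it) that EigenSolve allocates its result with new[] and hands ownership to the caller:

// No local allocation: just take ownership of whatever EigenSolve returns.
double *arr1 = Eigen.EigenSolve(E, D, sizeofeach, mode);
// ... use arr1 ...
delete[] arr1; // only valid if EigenSolve really allocated this with new[]

If EigenSolve instead returns a pointer into its own internal storage, the delete[] has to go as well; check its documentation or source.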
What worries me even more is that the "this" pointer changes. There is some nasty problem somewhere; at this point, your program has already been corrupted. Look for the issue somewhere else. Valgrind would be a GREAT tool if you can compile under Linux.
It seems that there is some sort of code optimization going on in your program. It is not always easy to debug optimized code step-by-step since the optimization may reorder instructions.
I cannot see why the fact that 'n' changes to an apparently uninitialized value would be the root cause of the problem, since 'n' is no longer used in your function anyway. It seems the memory is simply being reused as part of the optimization.
I have discovered my mistake. Earlier in the program I was comparing pointers, not what they were pointing at. A stupid mistake, but one I wouldn't have spotted without a long debugging session. My boss explained that the information given at the bottom of Visual Studio whilst in Release mode cannot be trusted, so to "debug" I had to use std::cout and check variables that way.
So here is the mistake in the code:
if (start > end) {
    int tmp = start[i];
    start[i] = end[i];
    end[i] = tmp;
}
Where start and end were defined earlier as:
int* start = new int[NofStacks];
int* end = new int[NofStacks];
And initialized.
Thanks to all those who helped and I feel I must apologise for the stupid error.
The fix being:
if (start[i] > end[i]) {
    int tmp = start[i];
    start[i] = end[i];
    end[i] = tmp;
}
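As a side note, the standard library already provides this swap. Here is a minimal equivalent of the fix, with fixRange as a purely illustrative wrapper name:

#include <algorithm>

void fixRange(int* start, int* end, int i) {
    if (start[i] > end[i]) {
        std::swap(start[i], end[i]); // same effect as the manual three-line swap
    }
}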

Pointer to a 3D Array of Pointers

I'm trying to create a pointer to a dynamic 3D array full of pointers. I'm working with voxels, so let's say that t_cube is my object.
First, I tried doing this:
t_cube* (*m_Array)[][][];
I thought I could then do something like:
m_Array = new t_cube[sizeX][sizeZ][sizeY];
Compiling this failed, however.
Next I tried this:
t_cube *(m_Model[]); // This is in my .h
{
    t_cube *model_Tempo[sizeX][sizeZ][sizeY]; // And this is in my class constructor.
    m_Model = model_Tempo;
}
Again, this failed to compile.
I hope this example helps solve your problem.
We are dealing with a pointer to a 3D array. Written as a static declaration in C++, it would look like:
t_cube *array[x_size][y_size][z_size];
But, as you already mentioned, that fails.
Now do the same thing using a dynamic allocation approach:
t_cube ****array; // a pointer to the 3D array of pointers
array = new t_cube ***[x_size];
for (int i = 0; i < x_size; i++) {
    array[i] = new t_cube **[y_size];
    for (int j = 0; j < y_size; j++) {
        array[i][j] = new t_cube *[z_size];
    }
} /* I'm sure this will work */
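If you allocate this way, the matching cleanup releases the innermost arrays first:

for (int i = 0; i < x_size; i++) {
    for (int j = 0; j < y_size; j++) {
        delete[] array[i][j]; // the z_size row of t_cube* pointers
    }
    delete[] array[i];
}
delete[] array;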
And the reasons you were running into trouble:
The size of the static array could be very large: x_size * y_size * z_size * sizeof(t_cube*).
You defined model_Tempo locally (inside the constructor), so m_Model was left pointing at an array that no longer exists once the constructor returns; that is the major reason for the malfunction.
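An alternative that avoids the triple indirection entirely is a single flat std::vector indexed by hand. A minimal sketch, with VoxelGrid as a hypothetical wrapper name:

#include <cstddef>
#include <vector>

struct t_cube; // the question's voxel type, defined elsewhere

class VoxelGrid { // hypothetical name, for illustration only
public:
    VoxelGrid(int sx, int sy, int sz)
        : m_sy(sy), m_sz(sz),
          m_cells(static_cast<std::size_t>(sx) * sy * sz, nullptr) {}

    // One contiguous block; index arithmetic replaces the pointer-to-pointer layers.
    t_cube*& at(int x, int y, int z) {
        return m_cells[(static_cast<std::size_t>(x) * m_sy + y) * m_sz + z];
    }

private:
    int m_sy, m_sz;
    std::vector<t_cube*> m_cells;
};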

Why does this fix a heap corruption?

So I've got code:
float **array = new float*[width + 1]; // old line was '= new float*[width]'
// Create dynamic 2D array
for (int i = 0; i < width; ++i) {
    array[i] = new float[height + 1]; // old line was '= new float[height]'
}
// Hardcode 2D array for testing
for (int i = 0; i < height; ++i) {
    for (int j = 0; j < width; ++j) {
        array[i][j] = i + j;
    }
}
// Deallocate heap memory
for (int i = 0; i < width; ++i) {
    delete[] array[i]; // Where corrupted memory error used to be
}
delete[] array;
(For the record, I know it would be more efficient to allocate a single block of memory, but I work closely with scientists who would never understand why/how to use it. Since it's run on servers, the bosses say this is preferred.)
My question is: why does the height+1/width+1 fix the corrupted memory issue? I know the extra space is for the null terminator, but why is it necessary? And why did it work when height and width were the same, but break when they were different?
SOLN:
I had my height/width backwards while filling my array... -.-; Thank you to NPE.
The following comment is a red herring:
delete [] array[i]; //Where corrupted memory error used to be
This isn't where the memory error occurred. This is where it got detected (by the C++ runtime). Note that the runtime isn't obliged to detect this sort of error, so in a way it's doing you a favour. :-)
You have a buffer overrun (probably an off-by-one error in a loop) in the part of your code that you're not showing.
If you can't find it by examining the code, try Valgrind or -fsanitize=address in GCC.
edit: The issue with the code that you've added to the question:
// Hardcode 2D array for testing
for (int i = 0; i < height; ++i) {
    for (int j = 0; j < width; ++j) {
        array[i][j] = i + j;
    }
}
is that it has width and height (or, equivalently, i and j) the wrong way round. Unless width == height, your code has undefined behaviour.
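With the indices matching the allocation (width rows, each holding height entries), the original sizes work. A corrected version of that loop:

// Corrected fill loop: the outer index matches the outer allocation (width
// rows), the inner index matches the row length (height entries per row).
for (int i = 0; i < width; ++i) {
    for (int j = 0; j < height; ++j) {
        array[i][j] = i + j;
    }
}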
Changing height and width to height+1 and width+1 is probably not going to be enough.
The code you posted was already correct with plain height and width.
This means that something was likely writing just past the end of those arrays in some other part of the code, and when you grew those arrays, it made the faulty code write right at the end of the arrays instead of crashing. You didn't fix the issue, you just hid it.
The code actually crashed on the delete[] due to limitations in how the runtime detects heap corruption. Very often, off-by-one errors on the heap are detected by the next call to new/delete/malloc/free, not when they actually happen.
You can use tools like Valgrind if you want to know exactly when and where your program does illegal things with pointers.
You didn't fix the code; by changing it you merely produced a different executable, which moved the corruption to another part of your program.
One thing you should not do is change your program to the version you say "works" with the +1 and then accept it. I know it may be tempting when the bug is hard to diagnose, but don't go down this route.
What you must do is go back to the non-working version and really fix the issue. By "fix", I mean that you can explain what the fix does, why it fixes the problem, etc.