Error when attempting to use the heap - c++

I've written a program that works for most input, but if I ask it to increase precision by using a larger array (around 320x320 is when I started seeing trouble) it crashes. I searched for my issue online and found this similar problem and this tutorial on what to do about it. The problematic part of my original code is below - I had precision=320 and holepop=770.
double spacing = 2.0/(precision+1);
int lattice_int[precision][precision];
for (i=0; i<precision; ++i){
    for (ii=0; ii<precision; ++ii){
        mindist_sq = 2.0;
        lattice_int[i][ii] = 0;
        for (iii=0; iii<holepop; ++iii){
            xdist = abs(xcoord[iii] + 1.0 - spacing/2 - spacing*i);
            ydist = abs(ycoord[iii] - 1.0 + spacing/2 + spacing*ii);
            thisdist_sq = xdist*xdist+ydist*ydist;
            if (thisdist_sq < mindist_sq){
                lattice_int[i][ii] = dint[iii];
                mindist_sq = thisdist_sq;
            }
        }
    }
}
I tried to fix it with this change in the first two lines:
int * lattice_int;
double spacing = 2.0/(precision+1);
lattice_int = new int[precision][precision];
(I also put in "delete lattice_int[][];" at the end.) However, I received this error: 'precision' cannot occur in a constant expression
Is it because I'm trying to work with multiple indices? What can I do to work around my problem? Thank you!

Don't use new[], it'll only cause you pain, suffering, memory leaks, use-after-frees, etc.
You can use std::vector for this.
std::vector<std::vector<int>> lattice_int(precision, std::vector<int>(precision));
No memory freeing necessary.
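For illustration, the question's first two lines could then look like this (a minimal sketch; precision stays a runtime value):

    #include <vector>

    double spacing = 2.0/(precision+1);
    std::vector<std::vector<int>> lattice_int(precision, std::vector<int>(precision, 0));

Indexing with lattice_int[i][ii] works exactly like the built-in array, and every element starts out as 0.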

Your lattice_int variable is a 2D array. You can allocate it using the following code:
int precision = 500;
int ** lattice_int;
double spacing = 2.0/(precision+1);
lattice_int = new int*[precision];
for (int i = 0; i < precision; i++)
{
    lattice_int[i] = new int[precision];
}
You have to iterate the same way to delete each sub-array; a sketch follows.
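For completeness, a minimal sketch of that cleanup, matching the allocation above:

    for (int i = 0; i < precision; i++)
    {
        delete [] lattice_int[i];
    }
    delete [] lattice_int;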
Note: this is purely an illustration of using pointers to create a two-dimensional array. The better way would be to use a vector for this.

Related

How would I overload the + operator for this bigint class which uses arrays? C++

I am currently taking an online data structures course using C++, and I'm working on a personal project to help me better understand the basics. The project is an implementation of a bigint class: a class that supports storing and calculating arbitrary-precision integers using arrays, not vectors or strings. I am struggling with the implementation of the major arithmetic operators.
The numbers are stored in the array from least to most significant digit (201 would be stored as {1,0,2}) and the calculations are performed in this order as well.
I have found some material relating to this, but the vast majority uses vectors/strings and did not help me much. A couple of other resources, such as this and this, did help, but did not work when I tried to implement them in my code. For example, this code to implement the addition operator does not work: I either get a bad_alloc exception or the answer is just way off, and I can't seem to figure out why or how to solve it. I've been at it for days now:
bigint& operator+(const bigint& lhs, const bigint& rhs){
    bool minus_sign = rhs.is_negative();
    size_t amt_used = 0; // to keep track of items in the array
    // initial size and size of resulting array
    // set initial size to the size of the larger array
    // set result_size to ini size plus one in case of carry
    size_t ini_size = lhs.get_digit_count() > rhs.get_digit_count() ?
                      lhs.get_digit_count() : rhs.get_digit_count();
    const size_t INITIAL_SIZE = ini_size;
    const size_t RESULT_SIZE = INITIAL_SIZE+1;
    uint8_t temp[RESULT_SIZE], // temporary array
            result_arr[RESULT_SIZE],
            lhs_arr[INITIAL_SIZE], rhs_arr[INITIAL_SIZE]; // new arrays for lhs/rhs of the same size to avoid overflow if one is smaller
    //assign corresponding values to the new arrays
    for (size_t i = 0; i < lhs.get_digit_count(); i++){
        lhs_arr[i] = lhs.get_digit(i);
    }
    for (size_t i = 0; i < rhs.get_digit_count(); i++){
        rhs_arr[i] = rhs.get_digit(i);
    }
    // perform addition
    int carry = 0; //carry variable
    size_t j = 0;
    for ( ; j < INITIAL_SIZE; j++){
        uint8_t sum = lhs_arr[j] + rhs_arr[j] + carry;
        if (sum > 9){
            result_arr[j] = sum - 10;
            carry = 1;
            amt_used++;
        }
        else{
            result_arr[j] = sum;
            carry = 0;
            amt_used++;
        }
    }
    if (carry == 1){
        result_arr[j] = 1;
        amt_used++;
    }
    // flip the array to most sig to least sig, since the constructor performs a switch to least-most sig.
    size_t decrement_index = amt_used - 1;
    for (int i = 0; i < RESULT_SIZE; i++){
        temp[i] = result_arr[decrement_index];
        decrement_index--;
    }
    for (int i = 0; i < RESULT_SIZE; i++){
        result_arr[i] = temp[i];
    }
    // create new bigint using the just-flipped array and return it
    bigint result(result_arr, amt_used, minus_sign);
    return result;
}
Here's the error I get: Thread 1: EXC_BAD_ACCESS (code=1, address=0x5)
Either that or I get a really large number when I'm just adding 8700 + 2100
There are several issues with this code.
The use of the VLA extension (for temp etc.) is not standard C++. These stack-based arrays are not initialized, so they contain random data, and when you fill them you do not assign to every element. This produces garbage results when, for example, the left number is shorter than the right (so that several elements of lhs_arr hold garbage), and those bad values are then used in the addition. Using std::vector would be standards-compliant and would leave every element initialized to something appropriate (like 0). This could be where your "really large number" comes from.
When you "flip the array", decrement_index can be negative if not all of the result slots were used. This could be a cause of you EXC_BAD_ACCESS crashes.
Returning a reference to a local variable results in Undefined Behavior, since that local will be destroyed when the function returns resulting in a dangling reference. This could be a cause of either of your stated problems.
Your handling of negative numbers is completely wrong, since you don't really handle them at all.
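For illustration, here is a minimal sketch of the addition rewritten with std::vector and return-by-value, assuming the accessors shown in the question (is_negative, get_digit_count, get_digit) and a constructor that takes a most-significant-first digit array, its length, and a sign flag. It deliberately leaves the sign logic as a stub, since that needs a real design of its own:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    bigint operator+(const bigint& lhs, const bigint& rhs){ // by value: never return a reference to a local
        size_t n = std::max(lhs.get_digit_count(), rhs.get_digit_count());
        std::vector<uint8_t> digits; // least significant digit first while we work
        digits.reserve(n + 1);
        int carry = 0;
        for (size_t i = 0; i < n; i++){
            int sum = carry;
            if (i < lhs.get_digit_count()) sum += lhs.get_digit(i); // digits past the end count as 0
            if (i < rhs.get_digit_count()) sum += rhs.get_digit(i);
            digits.push_back(sum % 10);
            carry = sum / 10;
        }
        if (carry) digits.push_back(1);
        std::reverse(digits.begin(), digits.end()); // constructor expects most-significant-first
        return bigint(digits.data(), digits.size(), /* sign handling still needed */ false);
    }

Every element that is read has been written first, the result is returned by value, and the vector cleans up after itself.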

My Visual Studio 2012 program works in Debug, release without debug (ctrl + F5) but not release. How do I fix?

As stated above, my program works in Debug and in Release without debugging (Ctrl + F5), but it does not work in plain Release.
Just to clarify, I have already checked for uninitialized variables and haven't found any (to the best of my knowledge anyway, and I have spent quite some time looking).
I believe I have localized the issue, and what I have come across is, in my opinion, very bizarre. First I set up the breakpoints as shown in the picture below:
Then I run the program in Release, and instantly the top breakpoint moves:
I found this extremely odd. Now note the number 6302 assigned to 'n'. This number is correct and what I hoped to pass through. Now watch as I continue through the program.
We are still in good shape, but then it takes a turn for the worse.
'n' changes to 1178521344, which messes up the rest of my code.
Would someone be able to shed some light on the situation, and even better, offer a solution?
Thanks,
Kevin
Here is the rest of the function if it helps:
NofArr = n;
const int NA = n;
const int NAless = n-1;
double k_0 = (2*PI) / wavelength;
double *E = new double[NAless]; // array to hold the off-diagonal entries
double *D = new double[NA]; // array to hold the diagonal entries on input and eigenvalues on output
int sizeofeach = 0;
trisolver Eigen;
int* start; int* end;
vector< vector<complex <double>> > thebreakup = BreakUp(refidx, posandwidth, start, end);
for(int j = 0; j < (int)thebreakup.size(); j++){
    // load the diagonal entries to D
    for(int i =0; i < (int)thebreakup[j].size(); i++){
        D[i] = -((double)2.0/(dx*dx)) + (k_0*k_0*thebreakup[j][i].real()*thebreakup[j][i].real());
    }
    // load the off diagonal
    for(int i = 0; i < (int)thebreakup[j].size(); i++){
        E[i] = (double)1.0 / (dx*dx);
    }
    sizeofeach = (int)thebreakup[j].size();
    double *arr1= new double[sizeofeach];
    arr1 = Eigen.EigenSolve(E, D, sizeofeach, mode);
    complex <double> tmp( PhaseAndAmp[j][1]*cos(PhaseAndAmp[j][0]), PhaseAndAmp[j][1]*sin(PhaseAndAmp[j][0]));
    // rebuild the break up with the mode
    for(int i = 0; i < (int)thebreakup[j].size(); i++){
        thebreakup[j][i] = (complex<double>(arr1[i],0.0)) * tmp ;
    }
    delete []arr1;
}
vector<complex<double>> sol = rebuild(thebreakup, start, end);
delete [] E;
delete [] D;
delete [] start;
delete [] end;
return sol;
I'm writing this as an answer, because it's way harder to write as a comment.
What strikes me immediately is the array "arr1".
First you allocate new memory and store a pointer to it in the variable arr1
double *arr1= new double[sizeofeach];
Then, immediately, you overwrite the address.
arr1 = Eigen.EigenSolve(E, D, sizeofeach, mode);
Later, you delete something. Is it safe?
delete []arr1;
It's not the double array you allocated, but something EigenSolve returned. Are you sure you have the right to delete it? Try removing the delete here. Also fix the memory leak by removing the allocation in the first line I quoted.
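A minimal sketch of that fix (whether the returned buffer may be deleted at all depends on EigenSolve's ownership contract, which isn't shown in the question):

    double *arr1 = Eigen.EigenSolve(E, D, sizeofeach, mode); // use the solver's buffer directly: no new above, no delete [] arr1 below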
What worries me even more is that the "this" pointer changes. There is some nasty problem somewhere. At this point, your program has already been corrupted. Look for the issue somewhere else. Valgrind would be a GREAT tool if you can try to compile under Linux.
It seems that there is some sort of code optimization going on in your program. It is not always easy to debug optimized code step-by-step since the optimization may reorder instructions.
I cannot see why the fact that 'n' changes to an apparently uninitialized value would be the root cause of the problem, since 'n' is no longer used in your function anyway. It seems the memory is simply being released as part of the optimization.
I have discovered my mistake. Earlier in the program I was comparing pointers, not what they were pointing at. A stupid mistake but one I wouldn't have spotted without a long debugging session. My boss explained that the information given at the bottom of Visual Studio whilst in release mode cannot be trusted. So to "debug" I had to use std::cout and check variables that way.
So here is the mistake in the code:
if(start > end){
    int tmp = start[i];
    start[i] = end[i];
    end[i] = tmp;
}
Where start and end were defined earlier as:
int* start = new int[NofStacks];
int* end = new int[NofStacks];
And initialized.
Thanks to all those who helped and I feel I must apologise for the stupid error.
The Fix being:
if(start[i] > end[i]){
    int tmp = start[i];
    start[i] = end[i];
    end[i] = tmp;
}
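As an aside, the hand-written three-line swap can be expressed with std::swap; a minimal sketch (assuming the same start/end arrays):

    #include <utility> // std::swap

    if(start[i] > end[i]){
        std::swap(start[i], end[i]);
    }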

Why does this fix a heap corruption?

So I've got code:
float **array = new float*[width + 1]; //old line was '= new float*[width]'
//Create dynamic 2D array
for (int i = 0; i < width; ++i) {
    array[i] = new float[height + 1]; //old line was '= new float[height]'
}
//Hardcode 2D array for testing
for (int i = 0; i < height; ++i) {
    for (int j = 0; j < width; ++j) {
        array[i][j] = i + j;
    }
}
//deallocate heap memory
for (int i = 0; i < width; ++i) {
    delete [] array[i]; //Where corrupted memory error used to be
}
delete [] array;
(For the record, I know it would be more efficient to allocate a single block of memory, but I work closely with scientists who would never understand why/how to use it. Since it's run on servers, the bosses say this is preferred.)
My question is why does the height+1/width+1 fix the corrupted memory issue? I know the extra space is for the null terminator, but why is it necessary? And why did it work when height and width were the same, but break when they were different?
SOLN:
I had my height/width backwards while filling my array... -.-; Thank you to NPE.
The following comment is a red herring:
delete [] array[i]; //Where corrupted memory error used to be
This isn't where the memory error occurred. This is where it got detected (by the C++ runtime). Note that the runtime isn't obliged to detect this sort of error, so in a way it's doing you a favour. :-)
You have a buffer overrun (probably an off-by-one error in a loop) in the part of your code that you're not showing.
If you can't find it by examining the code, try Valgrind or -fsanitize=address in GCC.
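For example, a typical AddressSanitizer invocation looks like this (main.cpp and prog are placeholder names, not from the question):

    g++ -g -fsanitize=address main.cpp -o prog
    ./prog

With the sanitizer enabled, the out-of-bounds write is reported with a stack trace at the moment it happens, instead of surfacing later at the delete [].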
edit: The issue with the code that you've added to the question:
//Hardcode 2D array for testing
for (int i = 0; i < height; ++i) {
    for (int j = 0; j < width; ++j) {
        array[i][j] = i + j;
    }
}
is that it has width and height (or, equivalently, i and j) the wrong way round. Unless width == height, your code has undefined behaviour.
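A minimal corrected sketch of that loop: the first index has to range over width (the dimension of the outer new float*[width] allocation) and the second over height:

    for (int i = 0; i < width; ++i) {
        for (int j = 0; j < height; ++j) {
            array[i][j] = i + j;
        }
    }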
Changing height and width to height+1 and width+1 is probably not going to be enough.
The code you posted was correct with plain height and width.
This means that something was likely writing just past the end of those arrays in some other part of the code, and when you grew those arrays, it made the faulty code write right at the end of the arrays instead of crashing. You didn't fix the issue, you just hid it.
The code actually crashed on the delete[] due to limitations in how the OS detects heap corruption. Very often, off-by-one errors on the heap are detected by the next call to new/delete/malloc/free, not when they actually happen.
You can use tools like Valgrind if you want to know exactly when and where your program does illegal things with pointers.
You didn't fix the code. What you are doing is changing the executable with the new code, thus moving the corruption bug to another part of your program.
One thing you should not do -- do not change your program to the one you say "works" with the + 1 and then accept it. I know it may be tempting if the bug is hard to diagnose, but don't go this route.
What you must do is go back to the non-working version, and really fix the issue. By "fix", meaning you can explain what the fix does, why it fixes the problem, etc.

C++ Segmentation fault: 11

I am getting a Segmentation fault: 11 error when trying to run my program (I'm quite a n00b with C++, so take it easy on me). I know it has something to do with memory allocation, but I'm not sure what exactly I am doing wrong. Can anyone please help and spot the problem(s)?
Basically I'm trying to chop one vector into many small vectors, and analyse each one separately.
std::vector<double> test::getExactHit(std::vector<double> &hitBuffer, double threshold){
    int resolution = 100;
    int highestRMSBin = 0;
    std::vector<double> exactHit(8192);
    double* rmsInEachBin = new double[hitBuffer.size()/resolution];
    double highestRMSValue = threshold;
    for(int i = 0; i<hitBuffer.size()-resolution; i+=resolution){
        std::vector<double>::const_iterator first = hitBuffer.begin() + i;
        std::vector<double>::const_iterator last = hitBuffer.begin() + i + resolution;
        std::vector<double> hitBufferBin(first, last);
        rmsInEachBin[i/resolution] = calcRMS(hitBufferBin);
        if(rmsInEachBin[i/resolution]>highestRMSValue){
            highestRMSValue = rmsInEachBin[i/resolution];
            highestRMSBin = i;
        }
    }
    for(int j = 0 ; j < exactHit.size(); j++) {
        exactHit[j]=hitBuffer[j+highestRMSBin];
    }
    return exactHit;
}
Please deallocate all the memory allocated with new, or it will cause a memory leak; other bugs might also be introduced because of this.
http://cs.baylor.edu/~donahoo/tools/gdb/tutorial.html
You can debug using GDB; it is handy to know a debugger if you are programming in C++.
Hope this info helps.
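For illustration, a minimal sketch of the same buffer written with std::vector instead of new, so no manual deallocation is needed (the rest of the function can index it exactly as before):

    #include <vector>

    std::vector<double> rmsInEachBin(hitBuffer.size() / resolution, 0.0); // zero-initialized, frees itself
    // ... fill and read rmsInEachBin exactly as in the original loop ...

As a bonus, rmsInEachBin.at(i) throws an exception on an out-of-range index instead of silently corrupting memory, which helps when hunting a segfault.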

Radix Sort using C++

Suppose I have a bunch of numbers. I have to first put each number into the bucket corresponding to its least significant digit. Ex: 530 goes into bucket 0 first. The number 61 goes into bucket 1.
I planned to use a multidimensional array to do this. So I create a 2-dimensional array, where nrows is 10 (for 0~9) and ncolumns is 999999 (because I don't know how large the list will be):
int nrows = 10;
int ncolumns = 999999;
int **array_for_bucket = (int **)malloc(nrows * sizeof(int *));
for(i = 0; i < nrows; i++)
    array_for_bucket[i] = (int *)malloc(ncolumns * sizeof(int));

left = (a->value)%10;
array_for_bucket[left][?? ] = a->value;
Then I created one node called a. In this node a, there is a value 50. To find out which bucket I want to put it in, I calculate "left" and get 0. So I want to put this a->value into bucket 0. But now I am stuck: how do I put this value into the bucket? I have to use a pointer array to do this.
I thought for a long time but still couldn't find a good way to do it. So please share some ideas with me. Thank you!
There is a much easier way of doing this, and instead of radix*nkeys space you only need an nkeys-sized buffer.
Allocate a second buffer that can fit nkeys keys. Now do a first pass through your data and simply count how many keys end up in each bucket. You can now create a radix-sized array of pointers, where each pointer points to the start of that bucket in the output buffer. Finally, the second pass through the data moves the keys. Every time you move a key, increment that bucket pointer.
Here's some C code to make into C++:
#include <stdlib.h>
#include <string.h>

void radix_sort(int *keys, int nkeys)
{
    int *shadow = malloc(nkeys * sizeof(*keys));
    int bucket_count[10];
    int *bucket_ptrs[10];
    int i;

    for (i = 0; i < 10; i++)
        bucket_count[i] = 0;

    /* first pass: count how many keys land in each bucket */
    for (i = 0; i < nkeys; i++)
        bucket_count[keys[i] % 10]++;

    /* turn the counts into start-of-bucket pointers into shadow */
    bucket_ptrs[0] = shadow;
    for (i = 1; i < 10; i++)
        bucket_ptrs[i] = bucket_ptrs[i-1] + bucket_count[i-1];

    /* second pass: move each key and bump its bucket pointer */
    for (i = 0; i < nkeys; i++)
        *(bucket_ptrs[keys[i] % 10]++) = keys[i];

    /* shadow now has the keys ordered by this digit; copy them back */
    memcpy(keys, shadow, nkeys * sizeof(*keys));
    free(shadow);
}
But I may have misunderstood the question. If you are doing something a little different than radix sort, please add some details.
Look at the Boost Pointer Container library if you want to store pointers.
C++ isn't my forte, but the code from Wikipedia's Radix Sort article is very comprehensive and probably more C++-ish than what you've implemented so far. Hope it helps.
This is C++, we don't use malloc anymore. We use containers. A two-dimensional array is a vector of vectors.
vector<vector<int> > bucket(10);
left = (a->value)%10;
bucket[left].push_back(a->value);
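For illustration, a minimal self-contained sketch of one least-significant-digit pass built on those buckets (lsd_pass and values are placeholder names, not from the question):

    #include <vector>
    using namespace std;

    void lsd_pass(vector<int>& values)
    {
        vector<vector<int> > bucket(10);
        // distribute each value by its last digit
        for (size_t i = 0; i < values.size(); i++)
            bucket[values[i] % 10].push_back(values[i]);
        // read the buckets back in order 0..9
        size_t out = 0;
        for (int d = 0; d < 10; d++)
            for (size_t k = 0; k < bucket[d].size(); k++)
                values[out++] = bucket[d][k];
    }

Repeating the pass for the tens digit, hundreds digit, and so on (with values[i] / divisor % 10) completes the radix sort, and no size has to be guessed up front.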