The following code throws a Segmentation fault (core dumped) error when I run it. The code is compiled with g++:
struct SomeClass {
    int *available;
    int **need;
    int **allocation;

    SomeClass(int nR, int nT); // constructor declared here, defined below
};
SomeClass::SomeClass(int nR, int nT) {
    available = new int[nR];
    for (int i = 0; i < nR; i++) {
        available[i] = 1;
    }
    *allocation = new int[nT];
    *need = new int[nT];
    for (int i = 0; i < nT; i++) {
        allocation[i] = new int[nR];
        need[i] = new int[nR];
        for (int j = 0; j < nR; j++) {
            allocation[i][j] = 0;
            need[i][j] = 1; // should equal 1
        }
    }
}
Am I sure that this code is generating the error? YES! Because I commented it out and everything works fine.
I checked this question:
A segmentation fault error with 2D array
The answer says to set the stack size with ulimit -s unlimited... but that didn't fix the problem.
Because your types are:
int **need;
int **allocation;
these lines:
*allocation = new int[nT]; // dereferencing uninitialized pointer
*need = new int[nT];
should be:
allocation = new int*[nT]; // proper allocation
need = new int*[nT];
Didn't you think you'd need elements of int* type for allocation[i] = new int[nR]; to work?
I strongly suggest (and strongly feel déjà vu) moving away from attempts to emulate 2-D arrays with pointers to pointers. It is hard to get right. Pack all your values into a single-dimensional array instead, as sketched below.
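A minimal sketch of that single-array approach, keeping the member names from the question (the Matrix2D helper is my own illustration, not part of the original code):

#include <vector>

// Hypothetical helper: an nT x nR matrix packed into one contiguous buffer.
struct Matrix2D {
    int rows, cols;
    std::vector<int> data;
    Matrix2D(int r, int c, int init) : rows(r), cols(c), data(r * c, init) {}
    int& at(int i, int j) { return data[i * cols + j]; }
};

struct SomeClass {
    std::vector<int> available;
    Matrix2D need;
    Matrix2D allocation;
    SomeClass(int nR, int nT)
        : available(nR, 1),        // every available[i] == 1
          need(nT, nR, 1),         // every need[i][j] == 1
          allocation(nT, nR, 0) {} // every allocation[i][j] == 0
};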
Related
My task is to implement the simplex method (the simplex algorithm). The simplex method is a popular algorithm for linear programming based on repeatedly rebuilding matrices, and my program should return an optimal solution. I have a C++ project in CLion. It works correctly when I run the program, but during debugging I get a SIGSEGV signal (segmentation fault) in one of the methods. It happens when I try to allocate memory for the matrix. Here is the relevant part of the code:
double **newTable;
newTable = new double *[rows];
for (int i = 0; i < rows; ++i) {
    for (int j = 0; j < cols; ++j) {
        newTable[i] = new double [cols];
    }
}
I free the memory at the end of the method using delete[], but it doesn’t work.
I've already tried running the program in another IDE (CodeBlocks), where it works properly too, and I have no idea why this happens or where the problem is.
No need for this nested loop. You only need one loop to allocate memory for this jagged array:
int main() {
    int rows = 5, cols = 10;
    double **newTable;
    newTable = new double *[rows];
    for (int i = 0; i < rows; ++i)
        newTable[i] = new double[cols];
    for (int i = 0; i < rows; ++i)
        delete[] newTable[i];   // array delete[] to match the array new[]
    delete[] newTable;
}
The way your code is written now, it will leak memory (each row is allocated cols times and only the last allocation is kept), but that alone won't cause a segmentation fault. There might also be a mistake in how you're freeing the memory.
Also, since this is C++, may I recommend using std::vector instead?
#include <vector>

int main() {
    std::vector<std::vector<double>> newTable(5, std::vector<double>(10));
}
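If rows and cols are only known at runtime, the same idea still works. A minimal sketch using the question's variable names (the concrete values are placeholders):

#include <vector>

int main() {
    int rows = 5, cols = 10;   // in the real program these come from elsewhere
    std::vector<std::vector<double>> newTable(rows, std::vector<double>(cols, 0.0));
    newTable[rows - 1][cols - 1] = 1.0;   // element access works as before
    // no delete needed: the memory is released when newTable goes out of scope
}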
I have a function which returns a 2D array in C++, as follows:
float** Input_data(float** train_data, int Nv, int N) {
    float** x_train = new float*[Nv];
    int a = 0, b = 0;
    for (a = 1; a <= Nv; a++) {
        x_train[a] = new float[N+1];
        for (b = 1; b <= N+1; b++) {
            if (b == 1) {
                x_train[a][b] = 1;
            } else {
                x_train[a][b] = train_data[a][b-1];
            }
        }
    }
    return x_train;
}
The purpose of the above code is to put ones in the first column and copy the remaining data from the train_data pointer into x_train. After processing and using x_train, I try to deallocate it as follows:
void destroyx_array(float** x_train, int Nv) {
    for (int free_x = 1; free_x <= Nv; free_x++) {
        delete[] x_train[free_x];
    }
    delete[] x_train;
}
and I call the destroy function as follows:
destroyx_array(x_train,Nv)
The Input_data function works fine, but when I call destroyx_array it gives me double free or corruption (out), aborted (core dumped). Can anybody explain what I am doing wrong? Thank you.
Simply put, your code corrupts memory. The best thing is to not use raw pointers at all and instead use container classes such as std::vector (see the sketch at the end of this answer).
Having said that, to fix your current code, the issue is that you're writing beyond the bounds of the memory here:
for(a = 1;a<= Nv;a++)
when a == Nv, you are writing one "row" beyond what was allocated (new float*[Nv] gives valid indices 0 through Nv-1). This looks like an attempt to fake 1-based arrays. Arrays in C++ start from 0, not 1; trying to fake 1-based arrays almost invariably leads to bugs and memory corruption.
The fix is to rewrite your function to start from 0, not 1, and ensure your loop iterates to n-1, where n is the total number of rows:
for (a = 0; a < Nv; ++a)
the purpose of the above code is to add ones in the first column and
add remaining data from train_data pointer into x_train
Instead of the loop you wrote to test for the first column, you could simplify this by using memcpy (declared in <cstring>):
for (int i = 0; i < Nv; ++i)
{
    x_train[i][0] = 1;
    memcpy(&x_train[i][1], &train_data[i][0], N * sizeof(float));
}
Thus the entire function would look like this:
float** Input_data(float** train_data, int Nv, int N)
{
    float** x_train = new float*[Nv];
    for (int i = 0; i < Nv; i++)
        x_train[i] = new float[N+1];
    for (int i = 0; i < Nv; i++)
    {
        x_train[i][0] = 1;
        memcpy(&x_train[i][1], &train_data[i][0], N * sizeof(float));
    }
    return x_train;
}
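As suggested at the top of this answer, a std::vector version removes the manual new/delete (and the separate destroy function) entirely. A minimal sketch, keeping the original signature's train_data as a raw float**:

#include <vector>

// Builds an Nv x (N+1) table: a leading 1 in column 0, then N values copied
// from train_data. No destroy function is needed; the vectors free themselves.
std::vector<std::vector<float>> Input_data(float** train_data, int Nv, int N)
{
    std::vector<std::vector<float>> x_train(Nv, std::vector<float>(N + 1));
    for (int i = 0; i < Nv; ++i)
    {
        x_train[i][0] = 1.0f;
        for (int j = 0; j < N; ++j)
            x_train[i][j + 1] = train_data[i][j];
    }
    return x_train;
}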
I get a very frustrating error in the following piece of code. That's my array:
int **tab2 = new int*[3];
I allocate it like this:
for(i = 0; i < 10; i++) {
    tab2[i] = new int[3];
    tab2[i][0] = 40;
    tab2[i][1] = 10;
    tab2[i][2] = 100;
}
Then, after using it, I want to destroy it:
for(i = 0; i < 10; i++) {
    delete [] tab2[i];
}
delete [] tab2;
And this causes a core dump every single time. I have tried many different ways to destroy it and I get this error every time. What am I doing wrong here?
This
int **tab2 = new int*[3];
does not do what you think it does.
You want an array that will contain TEN (10) pointers, each to an array of THREE ints.
new int*[3] is an array that contains THREE pointers.
What you want is this (live at coliru):
#include <iostream>

int main() {
    int **tab2 = new int*[10];
    for(int i = 0; i < 10; i++) {
        tab2[i] = new int[3];
        tab2[i][0] = 40;
        tab2[i][1] = 10;
        tab2[i][2] = 100;
    }
    for(int i = 0; i < 10; i++) {
        delete [] tab2[i];
    }
    delete [] tab2;
}
With
int **tab2 = new int*[3];
you allocate an array of pointers of size 3. But then with
for(i = 0; i < 10; i++) {
    tab2[i] = new int[3];
    //...
}
you access it with up to index 9. That will surely go wrong.
The deletion process looks fine to me. To fix it, you should allocate an array of pointers with size 10 instead of 3, e.g.
int **tab2 = new int*[10];
Looks like what you're trying to do is to create an N by M array, where N is known at runtime and M is fixed (in this case, 3).
Why not just do this?
#include <array>
#include <vector>

int main() {
    std::array<int, 3> defaults = {{ 40, 10, 100 }};
    std::vector<std::array<int, 3>> thing(10, defaults);
}
The vector thing is automatically deallocated when it goes out of scope, and its size can be set at runtime. You still access the structure in the same way:
thing[1][2] = 3;
Manual memory management can easily be avoided by using standard containers and smart pointers. Doing so will keep your code cleaner and leave fewer opportunities for dangling pointers and memory leaks.
So my program executes as expected and prints out the correct result. The only issue is that after it is done, it does not exit. If I wait a few more seconds, Windows pops up an error message saying "bignumbs.exe has stopped working". Here is the code for the new function which seems to be causing the problem:
void BigInt::u_basic_mult(const BigInt& n, int digs)
{
    const base_int* tptr = n.used > used ? n.data : data;
    const base_int* bptr = tptr == data ? n.data : data;
    const int tlen = tptr == data ? used : n.used;
    const int blen = bptr == data ? used : n.used;

    if(digs < 1)
        digs = tlen + blen + 1;

    base_int* new_data = new base_int[digs];
    for(int i = 0; i < digs; ++i)
        *new_data++ = 0;

    for(int i = 0; i < blen; ++i)
    {
        int stop_pt = MIN(tlen, digs - i);
        overflow_int carry = 0;
        overflow_int btmp = bptr[i];

        for(int j = 0; j < stop_pt; ++j)
        {
            overflow_int prod = btmp * tptr[j] + carry;
            carry = prod >> BASE_BITS;
            overflow_int sum = new_data[i + j] + carry + (prod & MAX_DIG);
            carry += sum >> BASE_BITS;
            new_data[i + j] = sum;
        }
    }

    //delete[] data; these two lines cause the error
    //data = new_data;

    used = digs;
    alloc = digs;
    strip_zeros();
}
Notice the two lines I commented out. Without them the program executes and finishes (although now the result is incorrect). What is it about changing the value of a pointer or deleting it which could make my program have this strange error? Also I am pretty sure data is valid since I use it in the code above.
Also I am compiling with G++ through Netbeans.
After inspecting further, it seems that the problem may be with my destructor. If I comment out the delete[] data in the destructor, the error seems to go away. I don't know why.
BigInt::~BigInt()
{
    if(data) delete[] data;
}
for(int i = 0; i < digs; ++i)
*new_data++ = 0;
This code modifies where the new_data pointer points, so it no longer points at the start of the original array when you enter the subsequent loop, or do anything else with it for that matter. The pointer you pass to delete[] must be the same address that new[] returned.
The correct way to zero-initialize the array is to do this instead:
for(int i = 0; i < digs; ++i)
new_data[i] = 0;
Or, get rid of the loop and just use memset() instead:
memset(new_data, 0, digs * sizeof(base_int));
You must be very careful about matching up uses of new and delete. If you allocate something using the array form of new, you must delete it using the array form of delete. If you mix-and-match the array and non-array forms, you'll get crashes like this. You also must never delete something that wasn't allocated with new, and you must never delete the same thing twice.
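For illustration (my own minimal example, not code from the question), the forms must pair up like this:

int main() {
    int* a = new int[10];   // array new...
    delete[] a;             // ...must be released with array delete[]

    int* b = new int;       // scalar new...
    delete b;               // ...must be released with scalar delete
}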
I can't give you any more specific advice about this particular program, because you do not show us where the pointer named data is allocated.
I got it. I managed to screw up my new_data pointer.
for(int i = 0; i < digs; ++i)
*new_data++ = 0;
I changed it to this.
for(int i = 0; i < digs; ++i)
new_data[i] = 0;
I have this 3D matrix I allocated as one block of memory, but when I try to write to the darn thing, it gives me a segmentation fault. The thing works fine for two dimensions, but for some reason, I'm having trouble with the third...I have no idea where the error is in the allocation. It looks perfect to me.
Here's the code:
phi = new double**[xlength];
phi[0] = new double*[xlength*ylength];
phi[0][0] = new double[xlength*ylength*tlength];
for (int i=0; i<xlength; i++)
{
    phi[i] = phi[0] + ylength*i;
    for (int j=0; j<ylength; j++)
    {
        phi[i][j] = phi[i][0] + tlength*j;
    }
}
Any help would be greatly appreciated. (Yes, I want a 3D matrix)
Also, this is where I get the segmentation fault if it matters:
for (int i = 0; i < xlength; i++)
{
    for (int j = 0; j < ylength; j++)
    {
        phi[i][j][1] = 0.1*(4.0*i*h-i*i*h*h)
                       *(2.0*j*h-j*j*h*h);
    }
}
This does work for two dimensions though!
phi = new double*[xlength];
phi[0] = new double[xlength*ylength];
for (int i=0; i<xlength; i++)
{
    phi[i] = phi[0] + ylength*i;
}
You did not allocate the other submatrices, e.g. phi[1] or phi[0][1].
You need at least
phi = new double**[xlength];
for (int i=0; i<xlength; i++) {
    phi[i] = new double*[ylength];
    for (int j=0; j<ylength; j++) {
        phi[i][j] = new double[zlength];
        for (int k=0; k<zlength; k++) phi[i][j][k] = 0.0;
    }
}
and you should consider using std::vector (or even, in C++11, std::array), i.e.
std::vector<std::vector<std::vector<double>>> phi;
and then with std::vector you'll need to call phi.resize(xlength) and write a loop resizing each subelement with phi[i].resize(ylength), and so on for the innermost level, as sketched below.
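A minimal sketch of that resizing, with placeholder values standing in for xlength, ylength and zlength:

#include <vector>

int main() {
    int xlength = 4, ylength = 5, zlength = 6;   // placeholder sizes
    std::vector<std::vector<std::vector<double>>> phi;
    phi.resize(xlength);
    for (int i = 0; i < xlength; i++) {
        phi[i].resize(ylength);
        for (int j = 0; j < ylength; j++)
            phi[i][j].resize(zlength, 0.0);   // innermost level: zlength zeros
    }
    phi[2][3][1] = 0.5;   // indexed just like the pointer-based version
}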
If you want to allocate all the memory at once, you could have
double* phi = new double[xlength*ylength*zlength];
but then you cannot use the phi[i][j][k] notation, so you should
#define inphi(I,J,K) phi[(I)*ylength*zlength+(J)*zlength+(K)]
and write inphi(i,j,k) instead of phi[i][j][k].
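In modern C++ the same flat layout can also be wrapped in a small helper type instead of a macro; a minimal sketch (the Grid3D name and the sizes are mine, not from the original answer):

#include <vector>

// A flat xlength*ylength*zlength buffer with a row-major indexing helper.
struct Grid3D {
    int nx, ny, nz;
    std::vector<double> data;
    Grid3D(int x, int y, int z) : nx(x), ny(y), nz(z), data(x * y * z, 0.0) {}
    double& at(int i, int j, int k) { return data[(i * ny + j) * nz + k]; }
};

int main() {
    Grid3D phi(4, 5, 6);
    phi.at(3, 4, 5) = 1.0;   // plays the role of phi[3][4][5]
}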
Your second code does not really work: it is undefined behavior (it doesn't crash only because you are lucky; it could crash on other systems), plus it leaks memory, which doesn't crash yet but could cause trouble later, perhaps even when re-running the program. Use a memory leak detector like valgrind.