Why is terminate called after throwing an instance of 'std::bad_alloc'? - C++

The function below runs every second. My system is Linux. The program runs, then suddenly dies.
-----global-------
static int nArray[33000];
-------------------
void function()
{
    unsigned short int** US_INT;
    US_INT = new unsigned short int*[255];
    for (int i = 0; i < 255; i++)
    {
        US_INT[i] = new unsigned short int[128];
        memset(US_INT[i], 0, sizeof(unsigned short int) * 128);
    }
    double x;
    double y;
    int cnt = 0;
    int nArrayCount = 0;
    for (int i = 0; i < 255; i++)
    {
        for (int j = 0; j < 128; j++) {
            x = j;
            y = cnt;
            nArray[nArrayCount] = US_INT[i][j];
            nArrayCount++;
        }
        cnt = cnt + (256 / 255);
    }
    for (int i = 0; i < 255; i++)
    {
        delete US_INT[i];
    }
    delete[] US_INT;
}
The program stops with this message:
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc

The bad_alloc exception is thrown when a memory allocation fails (so by one of your new expressions). terminate() is called automatically because you don't catch this exception.
The root cause of the bad_alloc is that you don't have enough memory (or the free store is corrupted). This could, for example, happen if you repeatedly fail to free memory in some loop.
In fact, in your code, it appears that you don't delete the arrays US_INT[i] correctly. You must use delete[] US_INT[i]. As a general rule, every time you use new[], you must use delete[].
P.S.: You could also opt for vectors instead of raw arrays and free your mind from memory management issues.
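A minimal sketch of the same function using std::vector (the global nArray buffer is kept from the question; the unused x, y, and cnt variables are omitted):
#include <vector>

static int nArray[33000];

void function()
{
    // 255 rows of 128 elements, zero-initialized; the vectors own the
    // memory, so there is no new[]/delete[] pair to mismatch and nothing
    // leaks if an allocation throws.
    std::vector<std::vector<unsigned short int>> US_INT(
        255, std::vector<unsigned short int>(128, 0));

    int nArrayCount = 0;
    for (int i = 0; i < 255; i++)
    {
        for (int j = 0; j < 128; j++)
        {
            nArray[nArrayCount] = US_INT[i][j];
            nArrayCount++;
        }
    }
    // No manual cleanup: US_INT releases its memory when it goes out of scope.
}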

Related

Locally defined Double array makes the program crash

I have the following variables defined locally in a function member of a class in C++:
double coeff, mincoeff, minratio,
equality[100][5000],
tableau[51][5052],
x[50][100];
When I run it, the program crashes. If I comment out the equality array it works, but if I do not comment it out, the program crashes. That is not the case for the tableau array: the program always works with 'tableau' and without 'equality'. I saw a post suggesting the use of malloc() to allocate the space dynamically, like:
double *equality;
equality = malloc(500000*sizeof(double));
But it gives me an error saying there is no conversion from void* to double*. Is there another way?
Allocate equality on the heap, and free the memory when you're done with it:
#include <iostream>

int main()
{
    double** equality = new double*[100];
    for (int i(0); i < 100; i++)
        equality[i] = new double[5000];

    for (int i = 0; i < 100; i++)
        delete[] equality[i];
    delete[] equality;
    equality = NULL;

    std::cout << std::endl;
    return 0;
}
As #user657267 mentioned, you are asking for a 4 MB contiguous chunk of memory for equality. The best thing to do here is to allocate the memory dynamically.
#include <new> // for std::nothrow

double** equality = new (std::nothrow) double*[100]; // or do an exception check to make sure you have enough memory
if (equality != nullptr)
{
    for (int i(0); i < 100; i++)
    {
        equality[i] = new (std::nothrow) double[5000]; // again, or do an exception check to handle not getting the requested memory
        if (equality[i] == nullptr)
        {
            // Handle the situation where memory could not be allocated
            ...
        }
    }
}
else
{
    // Handle not being able to allocate memory
}
Regarding C-style malloc (which works in C++ as well), you have to cast to the correct data type, like the following:
double *equality;
equality = (double*) malloc(500000*sizeof(double));
Note: do not forget to free what you have allocated.
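If the matrix does not need to interoperate with double**, a minimal sketch of the std::vector alternative (zero-initialized, with the dimensions taken from the question's equality array) would be:
#include <vector>

int main()
{
    // 100 x 5000 matrix of doubles, zero-initialized, backed by heap storage.
    std::vector<std::vector<double>> equality(100, std::vector<double>(5000, 0.0));

    equality[0][0] = 1.0; // index it like a 2D array

    return 0; // memory is released automatically when equality goes out of scope
}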

Deleting pointers in destructor

I have some pointers that I allocate in the constructor of a class and then attempt to delete in its destructor:
TileMap::TileMap(int x, int y) {
    mapSize.x = x;
    mapSize.y = y;
    p_p_map = new Tile*[x];
    for (int i = 0; i < x; i++) {
        p_p_map[i] = new Tile[y];
    }
    randomize();
}

TileMap::~TileMap() {
    for (int i = 0; i < mapSize.x; i++) {
        delete p_p_map[i];
    }
    delete p_p_map;
}

void TileMap::randomize() {
    for (int i = 0; i < mapSize.x; i++) {
        for (int j = 0; j < mapSize.y; j++) {
            p_p_map[i][j] = *new Tile(Tile::TileSize * i, Tile::TileSize * j, TileType::randomType());
        }
    }
}
At the end of the program the destructor is called to free the memory of the pointers I allocated, but when it reaches "delete p_p_map[i];" in the destructor, Xcode informs me that the pointer was not allocated. I am new to C++, but I feel that I allocated memory to the pointers quite explicitly in the randomize() function.
What error am I making?
You have to match delete with new and delete[] with new[]. Mixing one up with the other leads to issues. So if you do:
p_p_map = new Tile*[x];
you have to delete it like:
delete[] p_p_map;
and same with
delete[] p_p_map[i];
If you create something like:
pSomething = new Type;
then you delete it like:
delete pSomething;
What error am I making?
A few:
First, as #uesp pointed out, you mismatch new and delete calls.
Second, you are using the "memory leak operator":
p_p_map[i][j] = *new Tile(Tile::TileSize * i, Tile::TileSize * j, TileType::randomType());
The construct new Tile(...) allocates memory. Then, this memory (not stored anywhere) is dereferenced, and the result is assigned to p_p_map[i][j].
Because the pointer is not stored anywhere, it is leaked.
Third, you are not respecting RAII. While this is not technically an error in itself, the way you write the code is unsafe, and in low memory conditions, you will get UB.
For example, here's what happens if you construct a Tile instance with large values for x and y:
TileMap::TileMap(int x, int y) { // e.g. (x = 1024 * 1024, y = 1024 * 1024 * 1024)
mapSize.x = x;
mapSize.y = y;
p_p_map = new Tile*[x]; // allocate 1049600 pointers block
for(int i = 0; i < x; i++) {
p_p_map[i] = new Tile[y]; // run out of memory (for example) half way through the loop
}
randomize();
}
Depending on where your allocations fail, your constructor will not finish executing, meaning your TileMap instance is "half-constructed" (i.e. in an invalid state) and the destructor will not be called.
In this case, everything the class allocated is leaked, and (especially if you allocated a large size) your application is left in low memory conditions.
To fix this, make sure each pointer is managed by a different instance of a class (part of RAII). This ensures that if an allocation fails, the already-allocated resources are released before exiting the scope, as part of stack unwinding (as #CaptainObvlious said, use std::vector for the array and std::unique_ptr for each element).
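A minimal sketch of what that could look like with std::vector owning the tiles (Tile and TileType are stubbed here only so the sketch compiles; the real classes come from the question, and the sketch assumes Tile is default-constructible):
#include <cstddef>
#include <cstdlib>
#include <vector>

// Hypothetical stand-ins for the question's Tile and TileType.
struct TileType { static int randomType() { return std::rand() % 4; } };
struct Tile {
    static constexpr int TileSize = 32;
    Tile() = default;
    Tile(int, int, int) {}
};

// The vectors own the tiles, so no destructor is needed and nothing leaks
// if an allocation throws part-way through construction.
class TileMap {
public:
    TileMap(int x, int y)
        : grid(x, std::vector<Tile>(y)) // x rows of y default-constructed tiles
    {
        randomize();
    }

private:
    void randomize() {
        for (std::size_t i = 0; i < grid.size(); ++i)
            for (std::size_t j = 0; j < grid[i].size(); ++j)
                grid[i][j] = Tile(static_cast<int>(Tile::TileSize * i),
                                  static_cast<int>(Tile::TileSize * j),
                                  TileType::randomType());
    }

    std::vector<std::vector<Tile>> grid; // replaces p_p_map and the manual new/delete
};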

Allocate multidimensional array using new

When I allocate multidimensional arrays using new, I am doing it this way:
void manipulateArray(unsigned nrows, unsigned ncols[])
{
    typedef Fred* FredPtr;
    FredPtr* matrix = new FredPtr[nrows];
    for (unsigned i = 0; i < nrows; ++i)
        matrix[i] = new Fred[ ncols[i] ];
}
where ncols[] contains the length of each row of matrix, and nrows is the number of rows in matrix.
If I want to populate matrix, I then have
for (unsigned i = 0; i < nrows; ++i) {
    for (unsigned j = 0; j < ncols[i]; ++j) {
        someFunction( matrix[i][j] );
    }
}
But I am reading the C++ FAQ, which tells me to be very careful. I should initialize each row to NULL first. Then, I should wrap the allocation of the rows in a try/catch. I really do not understand why all of this is needed. I have always (though I am just a beginner) initialized in C style, as in the code above.
The FAQ wants me to do this:
void manipulateArray(unsigned nrows, unsigned ncols[])
{
    typedef Fred* FredPtr;
    FredPtr* matrix = new FredPtr[nrows];
    for (unsigned i = 0; i < nrows; ++i)
        matrix[i] = NULL;
    try {
        for (unsigned i = 0; i < nrows; ++i)
            matrix[i] = new Fred[ ncols[i] ];
        for (unsigned i = 0; i < nrows; ++i) {
            for (unsigned j = 0; j < ncols[i]; ++j) {
                someFunction( matrix[i][j] );
            }
        }
    }
    catch (...) {
        for (unsigned i = nrows; i > 0; --i)
            delete[] matrix[i-1];
        delete[] matrix;
        throw; // Re-throw the current exception
    }
}
1/ Is it far-fetched, or is it proper, to always initialize so cautiously?
2/ Are they proceeding this way because they are dealing with non-built-in types? Would the code be the same (with the same level of caution) with double* matrix = new double[nrows];?
Thanks
EDIT
Part of the answer is in the next item of the FAQ:
The reason for being this careful is that you'll have memory leaks if any of those allocations fail, or if the Fred constructor throws. If you were to catch the exception higher up the call stack, you would have no handles to the memory you allocated, which is a leak.
1) It's correct, but generally if you're going to this much trouble to protect against memory leaks, you'd prefer to use std::vector and std::shared_ptr (and so on) to manage memory for you.
2) It's the same for built-in types, though at least then the only exception that will be thrown is std::bad_alloc if the allocation fails.
I would think that it depends on the target platform and the requirements of your system. If safety is a high priority and/or you can run out of memory, then no, this is not far-fetched. However, if you are not too concerned with safety and you know that the users of your system will have ample free memory, then I would not do this either.
It does not depend on whether built-in types are used or not. The FAQ solution nulls the pointers to the rows so that, in the event of an exception, only those rows which have already been created are deleted (and not some random memory location).
That said, I can only second R. Martinho Fernandes' comment that you should use STL containers for this. Managing your own memory is tedious and dangerous.
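A minimal sketch of the container-based version of the FAQ function (Fred and someFunction are stubbed here only so the sketch compiles; the jagged row lengths still come from ncols):
#include <vector>

struct Fred {};                  // stand-in for the FAQ's Fred
void someFunction(Fred&) {}      // stand-in for the FAQ's someFunction

void manipulateArray(unsigned nrows, unsigned ncols[])
{
    // One inner vector per row, sized from ncols[i]. Nothing has to be
    // deleted, and if any allocation throws, the rows built so far are
    // released during stack unwinding.
    std::vector<std::vector<Fred>> matrix;
    matrix.reserve(nrows);
    for (unsigned i = 0; i < nrows; ++i)
        matrix.emplace_back(ncols[i]);

    for (unsigned i = 0; i < nrows; ++i)
        for (unsigned j = 0; j < ncols[i]; ++j)
            someFunction(matrix[i][j]);
}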

Dynamic matrix in C++

I am working with a multidimensional array but I get an exception. I have searched a lot, but I only find the same answer I'm already using. The exception occurs when I try to allocate matriz[i] = new double[n]. I have tried both the commented and uncommented solutions with no luck.
void interpol(double *arr_x, double *arr_y, int n, double *results) {
    //double** matriz = new double*[n];
    double** matriz;
    matriz = (double**) malloc(n * sizeof(double*));
    for (int i = 0; i < n; i++) {
        //matriz[i] = new double[n+1];
        matriz[i] = (double*) malloc((n + 1) * sizeof(double));
        for (int j = 0; j < n; j++) {
            matriz[i][j] = pow(arr_x[i], j);
        }
        matriz[i][n] = arr_y[i];
    }
    gaussiana(matriz, n, results);
}
--- EDIT---
The function gaussiana is working fine, since I have tested it outside this function. The exception is thrown at either:
//matriz[i] = new double[n];
matriz[i] = (double*) malloc(n * sizeof(double));
n is never more than 10.
The exception thrown is:
First-chance exception at 0x00071c4d in Interpolacion.exe: 0xC0000005:
Access violation reading location 0x00000000.
Unhandled exception at 0x774b15de in Interpolacion.exe: 0xC0000005: Access violation reading location 0x00000000.
The program '[8012] Interpolacion.exe: Native' has exited with code -1073741819 (0xc0000005).
----EDIT----
I finally got it working. The issue was not with matriz but with arr_x/arr_y: the external routine was sending the data incorrectly (oddly, the error and the stack trace always pointed me to the new double[n] assignment).
If you want to use the std::vector route, you can use something like below (untested, shown just as a guide). Keep in mind that std::vector<std::vector<double> > is not compatible with double**, so your gaussiana function might need to be rewritten to accept the new type:
// Include the header!
#include <vector>
// Be careful about the use of "using namespace std", I'm only using it here
// because it's a small example
using namespace std;
vector<vector<double> > matriz;
for (int i = 0; i < n; i++)
{
    // Create a new vector "v" with n+1 elements
    vector<double> v(n + 1);
    // fill this vector
    for (int j = 0; j < n; j++)
        v[j] = pow(arr_x[i], j);
    v[n] = arr_y[i];
    // add it to the matrix
    matriz.push_back(v);
}
I don't see anything in the code presented which would cause an exception. It must be gaussiana() causing the trouble. Try commenting that call out and see if the program still faults.
It would be useful to know the range of n. As long as it is relatively small (< 1000) on a modern 32- or 64-bit machine, malloc() should not fail. However, if the program runs with restricted memory, or n is large, it is likely that some of the malloc calls would fail. Since the return value is never checked for NULL, the program would indicate trouble by segfaulting when trying to dereference the pointer.
If the function is called multiple times, the leaked memory could add up to a significant heap shortage and induce malloc() to fail.
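A minimal sketch of the checked-malloc pattern that answer describes, applied to the question's function (gaussiana is stubbed here only so the sketch compiles; the (n+1)-column layout follows the question):
#include <cmath>
#include <cstdio>
#include <cstdlib>

// Stand-in so the sketch compiles; the real gaussiana comes from the question.
static void gaussiana(double** /*matriz*/, int /*n*/, double* /*results*/) {}

void interpol(double* arr_x, double* arr_y, int n, double* results)
{
    double** matriz = (double**) std::malloc(n * sizeof(double*));
    if (matriz == NULL) {                        // check the outer allocation
        std::fprintf(stderr, "out of memory\n");
        return;
    }
    for (int i = 0; i < n; i++) {
        matriz[i] = (double*) std::malloc((n + 1) * sizeof(double));
        if (matriz[i] == NULL) {                 // check every row allocation
            for (int k = 0; k < i; k++)          // release rows built so far
                std::free(matriz[k]);
            std::free(matriz);
            std::fprintf(stderr, "out of memory\n");
            return;
        }
        for (int j = 0; j < n; j++)
            matriz[i][j] = std::pow(arr_x[i], j);
        matriz[i][n] = arr_y[i];
    }
    gaussiana(matriz, n, results);

    for (int i = 0; i < n; i++)                  // free everything so repeated
        std::free(matriz[i]);                    // calls do not leak
    std::free(matriz);
}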

delete multidimensional arrays

In the C++ FAQ, item [16.16] gives the following example:
void manipulateArray(unsigned nrows, unsigned ncols[])
{
    typedef Fred* FredPtr;
    FredPtr* matrix = new FredPtr[nrows];
    // Set each element to NULL in case there is an exception later.
    // (See comments at the top of the try block for rationale.)
    for (unsigned i = 0; i < nrows; ++i)
        matrix[i] = NULL;
    try {
        for (unsigned i = 0; i < nrows; ++i)
            matrix[i] = new Fred[ ncols[i] ];
        for (unsigned i = 0; i < nrows; ++i) {
            for (unsigned j = 0; j < ncols[i]; ++j) {
                someFunction( matrix[i][j] );
            }
        }
        if (today == "Tuesday" && moon.isFull()) {
            for (unsigned i = nrows; i > 0; --i)
                delete[] matrix[i-1];
            delete[] matrix;
            return;
        }
        ...code that fiddles with the matrix...
    }
    catch (...) {
        for (unsigned i = nrows; i > 0; --i)
            delete[] matrix[i-1];
        delete[] matrix;
        throw; // Re-throw the current exception
    }
    for (unsigned i = nrows; i > 0; --i)
        delete[] matrix[i-1];
    delete[] matrix;
}
Why do we have to use delete this way? I mean,
first delete[] matrix[i-1];
and then delete[] matrix;
Moreover, why, after the whole try...catch block, do we still have to put
for (unsigned i = nrows; i > 0; --i)
    delete[] matrix[i-1];
delete[] matrix;
at the end of this function?
What you're missing is the horribly evil indentation.
delete[] matrix[i-1]; happens once per loop iteration and deletes the nested arrays.
delete[] matrix; happens just one time after the loop completes and deletes the outer array.
Never write code like this in C++, use vector<vector<T> > instead.
The reason the deletes also exist in the catch block is that if you catch an exception you're still responsible for cleaning up the memory you allocated.
The try/catch block is necessary to ensure proper clean-up even if an exception is thrown anywhere in the code before the normal clean-up happens. This includes an exception in one of the new expressions. The delete[] is safe because all the relevant pointers were initially set to zero, so that the deletion is valid even if no allocation ever occurred.
(Note that if any exception does occur, it will still be propagated outside the function. The local try/catch block only ensures that the function itself doesn't leak any memory.)
There are two sets of arrays: one is the outer array matrix, which is an array of pointers. This array gets allocated first and deleted last. Second, each element matrix[i] is itself a pointer to an array of Fred elements. Each array gets allocated in the first for loop, and thus has to be deleted in another loop at the end.
When you delete each of the rows in a loop, you're freeing the memory allocated to the corresponding row. Then you need to free the memory allocated for the array of pointers to the rows.
Think of it this way:
FredPtr* matrix = new FredPtr[nrows];
allocates an array of pointers to rows - and it will need to be freed up at the end.
Then for each of the rows,
matrix[i] = new Fred[ ncols[i] ];
allocates memory for an array of Fred objects (the columns of that row) - and each of these arrays will need to be freed up separately.
Yes, that is not high-quality example code, but it works fine. The copy-pasted code in the catch block and after the catch block is needed because, in case of an exception, the memory should be freed, and in this case the exception is forwarded to the caller of the function. If you don't want to forward the exception, you can delete the code inside the catch block (but at least a console output would be nice ;) )
The catch block in the try...catch is there to delete the matrix if an exception was thrown, and then re-throw the exception.
If no exception is thrown, the catch block never gets hit, and the matrix has to be deleted on the way out through the normal exit of the routine.
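A minimal sketch of how the same jagged structure can be made exception-safe without the manual try/catch cleanup, using std::unique_ptr for each row (C++14 for std::make_unique; Fred and someFunction are stubbed here only so the sketch compiles):
#include <memory>
#include <vector>

struct Fred {};                  // stand-in for the FAQ's Fred
void someFunction(Fred&) {}      // stand-in for the FAQ's someFunction

void manipulateArray(unsigned nrows, unsigned ncols[])
{
    // Each unique_ptr owns one row's Fred[] array; the vector owns the rows.
    // If any allocation throws, the rows built so far are released during
    // stack unwinding, so no catch-and-cleanup block is needed.
    std::vector<std::unique_ptr<Fred[]>> matrix;
    matrix.reserve(nrows);
    for (unsigned i = 0; i < nrows; ++i)
        matrix.push_back(std::make_unique<Fred[]>(ncols[i]));

    for (unsigned i = 0; i < nrows; ++i)
        for (unsigned j = 0; j < ncols[i]; ++j)
            someFunction(matrix[i][j]);
}   // every row and the outer array are released here automatically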