I've written a program that allocates a new object of the class T like this:
T* obj = new T(tid);
where tid is an int
Somewhere else in my code, I'm trying to release the object I've allocated, which is inside a vector, using:
delete(myVec[i]);
and then:
myVec[i] = NULL;
Sometimes it passes without any errors, and in some cases it causes a crash—a segmentation fault.
I've checked before calling delete, and that object is there—I haven't deleted it elsewhere before.
What can cause this crash?
This is my code for inserting objects of the type T to the vector:
_myVec is global
int add() {
int tid = _myVec.size();
T* newT = new T (tid);
if (newT == NULL){
return ERR_CODE;
}
_myVec.push_back(newT);
// _myVec.push_back(new T (tid));
return tid;
}
As it is, the program sometimes crashes.
When I replace the push_back line with the commented-out line and leave the rest as it is, it works.
but when I replace this code with:
int add() {
int tid = _myVec.size();
if (newT == NULL){
return ERR_CODE;
}
_myVec.push_back(new T (tid));
return tid;
}
it crashes at a different stage...
The newT in the second version is unused, and still it changes the whole behavior... what is going on here?
Segfaulting means trying to manipulate a memory location that shouldn't be accessible to the application.
That means your problem can come from three cases:
1) Trying to do something with a pointer that points to NULL;
2) Trying to do something with an uninitialized pointer;
3) Trying to do something with a pointer that pointed to a now-deleted object.
1) is easy to check, and I assume you already do it since you nullify the pointers in the vector. If you don't check, then do it before the delete call. That will point out the case where you are trying to delete an object twice.
3) can't happen if you set the pointer in the vector to NULL.
2) might happen too. In your case, you're using a std::vector, right? Make sure that implicit manipulations of the vector (like reallocation of the internal buffer when it's no longer big enough) don't corrupt your list.
So, first check whether you are deleting NULL pointers (note that delete(NULL) will not throw; that is standard, valid behaviour). In your case you shouldn't ever reach the point of calling delete(NULL).
Then, if that never happens, check that your vector isn't being filled with pointers pointing to trash. For example, make sure you're familiar with the [Remove-Erase idiom][1].
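For illustration, here is a hedged sketch of that idiom applied to the question's vector of pointers (assuming _myVec is a std::vector<T*>): delete first, null the slot, then compact the vector in one pass.
#include <algorithm>  // std::remove

delete _myVec[i];
_myVec[i] = NULL;
// Later, remove all null slots in one pass (note: this shifts the remaining
// elements, so index-based ids become invalid, as discussed further below).
_myVec.erase(std::remove(_myVec.begin(), _myVec.end(), static_cast<T*>(NULL)),
             _myVec.end());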
Now that you've added some code, I think I can see the problem:
int tid = _myVec.size();
You're using indices as ids.
Now, it all depends on the way you delete your objects (please show that code for a more complete answer). Either:
1) You just set the pointer to NULL, or
2) You remove the pointer from the vector.
If you only do 1), then it should be safe (provided you don't mind a vector that grows and never shrinks, and ids aren't re-used).
If you do 2), then this is all wrong: each time you remove an object from the vector, every object stored after the removed position shifts down by one, making any stored id/index invalid.
Make sure you're consistent on this point; it is certainly a source of errors.
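A minimal sketch of the difference, assuming _myVec is the question's std::vector<T*>:
// 1) Null the slot only: positions never move, so index-based ids stay valid.
delete _myVec[1];
_myVec[1] = NULL;

// 2) Erase the slot: every element after position 1 shifts down by one,
//    so an id handed out as index 2 now refers to the wrong object
//    (or is past the end of the vector).
delete _myVec[1];
_myVec.erase(_myVec.begin() + 1);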
That segmentation fault is most probably a memory access violation. Some reasons:
1) The object was already deallocated; be sure you set that vector position to NULL after delete.
2) You are out of the vector's bounds.
3) If you access that vector from multiple threads, make sure you are synchronizing correctly.
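A minimal sketch of a guard covering reasons 1) and 2), assuming the question's _myVec and index i:
if (i < _myVec.size() && _myVec[i] != NULL) {
    delete _myVec[i];
    _myVec[i] = NULL;   // a later pass over this slot now does nothing
}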
If you're completely certain that pointer points to a valid object, and that the act of deleting it causes the crash, then you have heap corruption.
You should try using a ptr_vector (e.g. Boost's); unlike your code, it's guaranteed to be exception-safe.
Hint: if you write delete, you're doing it wrong
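For instance, a minimal sketch of that approach using Boost's ptr_vector (the container owns its elements, so there is no explicit delete anywhere):
#include <boost/ptr_container/ptr_vector.hpp>

struct T { explicit T(int id) : tid(id) {} int tid; };

int main() {
    boost::ptr_vector<T> vec;
    vec.push_back(new T(0));   // the container takes ownership immediately
    vec.push_back(new T(1));
    // elements are accessed as references, e.g. vec[0].tid
}                              // all T objects are destroyed here automatically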
You can't be sure that the object is still valid: the memory that was occupied by the object is not necessarily cleaned, so you may be seeing something that appears to be your object but isn't anymore.
You can use a mark to check that the object is still alive, and clear that mark in the destructor.
class A {
public:
static const unsigned int Inactive;
static const unsigned int Active;
A();
~A();
/* more things ...*/
private:
unsigned int mark;
};
const unsigned int A::Inactive = 0xDEADBEEF;
const unsigned int A::Active = 0x11BEBEEF;
A::A() : mark( Active )
{}
A::~A()
{
mark = Inactive;
}
This way, by checking the first 4 bytes of your object, you can easily verify whether your object has finished its life or not.
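For example, a hedged usage sketch; isAlive() is a hypothetical accessor you would add to A for this purpose, and reading a destroyed object is still undefined behaviour, so treat this as a debugging heuristic rather than a guarantee.
// Hypothetical accessor added to class A:
//     bool isAlive() const { return mark == Active; }

A* a = new A();
// ... later, before deleting:
if (a->isAlive()) {   // heuristic: the mark still says "Active"
    delete a;
    a = NULL;
}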
Related
This question came about because of a statement made by user Georg Schölly in his question Should one really set pointers to `NULL` after freeing them?
If that statement is true, then how does the data get corrupted? I am not seeing it.
Code
#include <iostream>
#include <cstdlib>
int main()
{
int count_1=1, count_2=11, i;
int *p=(int*)malloc(4*sizeof(int));
std::cout<<p<<"\n";
for(i=0;i<=3;i++)
{
*(p+i)=count_1++;
}
for(i=0;i<=3;i++)
{
std::cout<<*(p+i)<<" ";
}
std::cout<<"\n";
free(p);
p=(int*)malloc(6*sizeof(int));
std::cout<<p<<"\n";
for(i=0;i<=5;i++)
{
*(p+i)=count_2++;
}
for(i=0;i<=3;i++)
{
std::cout<<*(p+i)<<" ";
}
}
Output
0xb91a50
1 2 3 4
0xb91a50
11 12 13 14
It allocates the same memory location again after freeing (0xb91a50), but it works fine, doesn't it?
You do not reuse the old pointer in your code. After p=(int*)malloc(6*sizeof(int));, p points to a freshly allocated array and you can use it without any problem. The data corruption problem quoted by Georg would occur in code similar to this:
int *p=(int*)malloc(4*sizeof(int));
...
free(p);
// use a different pointer but will get same address because of previous free
int *pp=(int*)malloc(6*sizeof(int));
std::cout<<p<<"\n";
for(i=0;i<=5;i++)
{
*(pp+i)=count_2++;
}
p[2] = 23; // erroneously using the old pointer will corrupt the new array
for(i=0;i<=3;i++)
{
std::cout<<*(pp+i)<<" ";
}
Setting the pointer to NULL after you free a block of memory is a precaution with the following advantages:
it is a simple way to indicate that the block has been freed, or has not been allocated.
the pointer can be tested, thus preventing access attempts or erroneous calls to free the same block again. Note that free(p) with p a null pointer is OK, as well as delete p;.
it may help detect bugs: if the program tries to access the freed object, a crash is certain on most targets if the pointer has been set to NULL whereas if the pointer has not been cleared, modifying the freed object may succeed and result in corrupting the heap or another object that would happen to have been allocated at the same address.
Yet this is not a perfect solution:
the pointer may have been copied and these copies still point to the freed object.
In your example, you reuse the pointer immediately so setting it to NULL after the first call to free is not very useful. As a matter of fact, if you wrote p = NULL; the compiler would probably optimize this assignment out and not generate code for it.
Note also that using malloc() and free() in C++ code is frowned upon. You should use new and delete, or std::vector.
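For illustration, here is a minimal sketch of the std::vector alternative, mirroring the example above with no manual allocation or freeing:
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v(4);
    for (int i = 0; i < 4; i++) v[i] = i + 1;      // 1 2 3 4
    for (int i = 0; i < 4; i++) std::cout << v[i] << " ";
    std::cout << "\n";

    v.assign(6, 0);                                 // resize to 6 elements
    for (int i = 0; i < 6; i++) v[i] = 11 + i;      // 11 12 13 14 15 16
    for (int i = 0; i < 6; i++) std::cout << v[i] << " ";
    std::cout << "\n";
}   // storage is released automatically; there is no stale pointer to misuse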
int main() {
int* i = new int(1);
i++;
*i=1;
delete i;
}
Here is my logic:
I increment i by 1, and then assign a value to it. Then I delete i, so I free that memory location while leaking the original memory. Where is my problem?
I also tried different versions. Every time, as long as I do the arithmetics and delete the pointer, my program crashes.
What your program shows is several cases of undefined behaviour:
You write to memory that hasn't been allocated (*i = 1)
You free something that you didn't allocate, effectively delete i + 1.
You MUST call delete on exactly the same pointer-value that you got back from new - nothing else. Assuming the rest of your code was valid, it would be fine to do int *j = i; after int *i = new int(1);, and then delete j;. [For example int *i = new int[2]; would then make your i++; *i=1; valid code]
Whoever allocates is whoever deallocates. So you should not delete something you did not new yourself. Furthermore, i++; *i = 1; is UB since you may be accessing a restricted memory area or read-only memory...
The code makes no sense. I think you have an XY problem. If you post your original problem, there will be a better chance of helping you.
In this case you need a short overview of how heap memory management works. In a typical implementation, when you allocate an object you receive a pointer to the start of the memory that is available for you to work with. However, the 'really' allocated memory starts a bit 'earlier': the allocated block is a bit larger than what you requested, and the start of the block is the address you received minus some offset. Thus, when you pass the incremented pointer to delete, it tries to find the internal bookkeeping information just before it. Because your address has been incremented, this lookup fails, which results in a crash. That's it in short.
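To illustrate the idea, here is a conceptual sketch only; real allocators differ in the details:
#include <cstddef>

// Many heap implementations keep bookkeeping just before the address they
// hand out, roughly like this:
struct BlockHeader {
    std::size_t size;    // size of the user block, plus other internal fields
};
//   [ BlockHeader ][ user data .................. ]
//                   ^ pointer returned by new / malloc
//
// delete / free step back from the pointer they are given to find the header.
// If you pass an incremented pointer, the bytes found "to the left" of it are
// not a valid header, which typically corrupts the heap or crashes.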
The problem lies here:
i++;
This line doesn't increment the value i points to, but the pointer itself, by the size of one int (typically 4 bytes).
You meant to do this:
(*i)++;
Let's take it step by step:
int* i = new int(1); // 1. Allocate a memory.
i++; // 2. Increment a pointer. The pointer now points to
// another location.
*i=1; // 3. Dereference a pointer which points to unknown
// memory. This could cause segmentation fault.
delete i; // 4. Delete the unknown memory which is undefined
// behavior.
In short: if you don't own a piece of memory, you can't do arithmetic on it, nor delete it!
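For comparison, a corrected sketch of the original snippet: the pointed-to value is incremented and delete receives exactly the pointer that new returned.
int main() {
    int* i = new int(1);
    (*i)++;        // increment the value, not the pointer
    delete i;      // same pointer-value that came back from new
}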
I'm having a memory leak issue and it's related to an array of structs inside a class (not sure if it matters that they're in a class). When I call delete on the struct array, the memory is not released. When I use the exact same process with an int or double array, it works fine and frees the memory as it should.
I've created very simple examples and they work correctly so it's related to something else in the code but I'm not sure what that could be. I never get any errors and the code executes correctly. However, the allocation / deallocation occurs in a loop so the memory usage continually rises.
In other words, here's a summary of the problem:
struct myBogusStruct {
int bogusInt1, bogusInt2;
};
class myBogusClass {
public:
myBogusStruct *bogusStruct;
};
int main() {
int i, arraySize;
double *bogusDbl;
myBogusClass bogusClass;
// arraySize is read in from an input file
for(i=0;i<100;i++) {
bogusDbl = new double[arraySize];
bogusClass.bogusStruct = new myBogusStruct[arraySize];
// bunch of other code
delete [] bogusDbl; // this frees memory
delete [] bogusClass.bogusStruct; // this does not free memory
}
}
When I remove the bunch of other code, both delete lines work correctly. When it's there, though, the second delete line does nothing. Again, I never get any errors from the code, just memory leaks. Also, if I replace arraySize with a fixed number like 5000, then both delete lines work correctly.
I'm not really sure where to start looking - what could possibly cause the delete line not to work?
There is no reason at all for you to either allocate or delete bogusDbl inside the for loop, because arraySize never changes inside the loop.
Same goes for bogusClass.bogusStruct. There is no reason to allocate/delete it inside the loop at all:
bogusDbl = new double[arraySize];
bogusClass.bogusStruct = new myBogusStruct[arraySize];
for (i = 0; i < 100; i++) {
// bunch of other code
}
delete [] bogusDbl;
delete [] bogusClass.bogusStruct;
You should also consider using std::vector instead of using raw memory allocation.
Now to the possible reason why the second delete in the original code doesn't do anything: deleting a NULL pointer does, by definition, nothing; it's a no-op. So for debugging purposes, try introducing a test before the delete to see whether the pointer is NULL and, if so, call abort(). (I'd use a debugger instead though, as it's much quicker to set up a watch expression there than to write debug code.)
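A minimal sketch of that debugging check, assuming it goes right before the delete[] in the question's loop:
#include <cstdlib>   // std::abort

// ...
if (bogusClass.bogusStruct == NULL) {
    std::abort();   // pointer is unexpectedly null, so the delete[] below would be a no-op
}
delete [] bogusClass.bogusStruct;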
In general though, we need to see that "bunch of other code".
I have a quick question regarding the scope of dynamic arrays, which I assume is causing a bug in a program I'm writing. This snippet checks a function parameter and branches to either the first or the second block, depending on what the user passes.
When I run the program, however, I get a scope related error:
error: ‘Array’ was not declared in this scope
Unless my knowledge of C++ fails me, I know that variables created within a conditional fall out of scope when the branch is finished. However, I dynamically allocated these arrays, so I cannot understand why I can't manipulate the arrays later in the program, since the pointer should remain.
//Prepare to store integers
if (flag == 1) {
int *Array;
Array = new int[input.length()];
}
//Prepare to store chars
else if (flag == 2) {
char *Array;
Array = new char[input.length()];
}
Can anyone shed some light on this?
Declare Array before the if. Also, you can't declare arrays of different types as one variable, so I think you should use two pointers:
char *char_array = nullptr;
int *int_array = nullptr;
//Prepare to store integers
if (flag == 1) {
int_array = new int[input.length()];
}
//Prepare to store chars
else if (flag == 2) {
char_array = new char[input.length()];
}
if (char_array)
{
//do something with char_array
}
else if (int_array)
{
//do something with int_array
}
Also, as j_random_hacker points out, you might want to change your program design to avoid lots of ifs.
While you are right that since you dynamically allocated them on the heap, the memory won't be released to the system until you explicitly delete it (or the program ends), the pointer to the memory falls out of scope when the block it was declared in exits. Therefore, your pointer(s) need to exist at a wider scope if they will be used after the block.
The memory remains allocated (i.e. taking up valuable space), there's just no way to access it after the closing }, because at that point the program loses the ability to address it. To avoid this, you need to assign the pointer returned by new[] to a pointer variable declared in an outer scope.
As a separate issue, it looks as though you're trying to allocate memory of one of 2 different types. If you want to do this portably, you're obliged to either use a void * to hold the pointer, or (less commonly done) a union type containing a pointer of each type. Either way, you will need to maintain state information that lets the program know which kind of allocation has been made. Usually, wanting to do this is an indication of poor design, because every single access will require switching on this state information.
If I understand your intent correctly, what you are trying to do is: depending on some logic, allocate memory to store n elements of either int or char, and then later in your function access that array as either int or char without needing a single if statement.
If the above understanding is correct, then the simple answer is: "C++ is a strongly-typed language and what you want is not possible".
However... C++ is also an extremely powerful and flexible language, so here's what can be done:
Casting. Something like the following:
void * Array;
if (flag) Array = new int[len];
else      Array = new char[len];
// ... later in the function
if (flag) // access as int array
    int i = ((int*)Array)[0];
Yes, this is ugly, and you'll have those ifs sprinkled all over the function. So here's an alternative: a template.
template<class T> T foo(size_t _len)
{
    T* Array = new T[_len]();   // value-initialize so element 0 is defined
    T element = Array[0];
    delete [] Array;            // free the array before returning
    return element;
}
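Hypothetical usage of the template sketch above; the element type is picked at the call site instead of via a runtime flag (input is the string from the question):
int  first_int  = foo<int>(input.length());    // allocates, reads and frees an int array
char first_char = foo<char>(input.length());   // same, but with a char array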
Yet another, even more obscure way of doing things, could be the use of unions:
union int_or_char {int i; char c;};
int_or_char *Array = new int_or_char[len];
if(flag) // access as int
int element = Array[0].i;
But one way or the other (or the third) there's no way around the fact that the compiler has to know how to deal with the data you are trying to work with.
Turix's answer is right. You need to keep in mind that two things are being allocated here: the memory for the array and the memory where the location of the array is stored.
So even though the memory for the array is allocated from the heap and remains available to the code wherever required, the memory where the location of the array is stored (the Array variable itself) is allocated on the stack and is lost as soon as it goes out of scope, which in this case is when the if block ends. You can't even use it in the else part of the same if.
A slightly different code suggestion from Andrew's that I would give is:
void *Array = nullptr;
if (flag == 1) {
Array = new int[input.length()];
} else if (flag == 2) {
Array = new char[input.length()];
}
Then you can use it directly as you intended.
This part I am not sure about: in case you want to know whether it's an int or a char, you might try typeid, but that doesn't work here; at least I can't get it to work.
Alternatively, you can use your flag variable to track what type it is.
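A minimal sketch of that flag-based approach, assuming the void *Array and flag from the snippet above; casting back to anything other than the type that was actually allocated would be undefined behaviour.
if (flag == 1) {
    static_cast<int*>(Array)[0] = 42;     // allocated as int[], so cast back to int*
} else if (flag == 2) {
    static_cast<char*>(Array)[0] = 'a';   // allocated as char[], so cast back to char*
}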
I have a zap() function written to deallocate a 1-d array as follows.
void zap(double *(&data))
{
if (data != NULL)
{
delete [] data;
data = NULL;
}
return;
}
I was under the impression that the if (data != NULL) check would prevent trying to deallocate memory that had never been allocated, but I think I am mistaken. I am having the following implementation problem.
void fun()
{
int condition = 0;
double *xvec;
double *yvec;
allocate_memory_using_new(yvec); //a function that allocates memory
if (condition == 1) allocate_memory_using_new(xvec);
//some code
//deallocate memory:
zap (yvec);
zap (xvec); //doesn't work
return;
}
The output is the following:
Unhandled exception at 0x6b9e57aa (msvcr100d.dll) in IRASC.exe: 0xC0000005: Access
violation reading location 0xccccccc0.
So I realize it is not a desirable thing to try to call zap when it is obvious that the pointer was never actually used. I am just wondering if there is a way to check the address of the pointer at some point in the zap() function to avoid the exception. Thanks in advance for your help and insight!
Pointers do not get magically initialized to 0, except when they are global or static. You need to do it yourself:
double *xvec = NULL;
double *yvec = NULL;
If you do not, they contain whatever junk was left on the stack where they are created, and that junk is most of the time not NULL.
Also, you do not need to check against NULL, as delete is a no-op in that case:
double* xvec = NULL;
delete xvec; // perfectly valid
Further, if you're working with Visual Studio 2010, I recommend using nullptr instead of NULL.
The values of xvec and yvec are random, not NULL. I think your allocate_memory function isn't working the way you expect; usually such a function would return a pointer to a block of memory, which you would then assign to xvec and yvec.
In C++, pointers are not automagically initialized to NULL as in other languages (think Java), so the value of xvec (a pointer) is undefined and might or might not be NULL when you test it.
void fun()
{
double *xvec; // value of xvec undefined, might be 0 or not
// ...
zap (xvec); // if it is not 0, you will try to delete: Undefined Behavior
}
The simple solution is to initialize the pointer in its definition: double *xvec = 0;. Also, you do not need to test for NULL (or 0) in your zap function; delete will not cause undefined behavior if called on a null pointer:
template <typename T>
inline void zap( T *& p ) {
    delete [] p;   // matches the new[] used to allocate the arrays in question
    p = 0;
}
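Put together, a sketch of the calling code from the question with the pointers initialized, so that the second zap is harmless (allocate_memory_using_new is the question's own helper):
void fun()
{
    int condition = 0;
    double *xvec = 0;   // stays null unless the branch below allocates it
    double *yvec = 0;
    allocate_memory_using_new(yvec);
    if (condition == 1) allocate_memory_using_new(xvec);
    // ... some code ...
    zap(yvec);
    zap(xvec);          // deleting a null pointer is a no-op, so this is now safe
}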
I've concluded that the array forms of new and delete qualify as C++ anti-patterns -- they seem reasonable, but in reality almost any and all use of them is basically guaranteed to lead to more grief and problems than usable code.
As such, I'd say that trying to fix your zap is a bit like finding a woman who's just been in a fire and gotten 3rd degree burns on at least 85% of her body, and trying to make her better by trimming the fingernail she broke while escaping from the fire.