After implementing the C++ code below, I ran valgrind --leak-check=full to check for memory leaks. The result was that 0 bytes were in use at exit and no leaks were possible.
However, I later discovered that I'd forgotten to use delete[] x instead of just delete x inside the destructor.
I searched for some explanations (for instance: delete vs delete[] operators in C++), and everything I read said that using delete without [] can cause a memory leak, since it calls just the destructor for the first object in the array.
I changed the code to delete[] and the valgrind output was the same (as expected). But now I'm confused: is there a problem with valgrind, or does delete really work fine for arrays even without the [] operator?
#include <iostream>
#include <string.h>

using namespace std;

class Foo {
private:
    char *x;

public:
    Foo(const char* px) {
        this->x = new char[strlen(px) + 1];
        strcpy(this->x, px);
    }

    ~Foo() {
        delete x; // the bug in question: x was allocated with new[], so this should be delete[] x
    }

    void printInfo() { cout << "x: " << x << endl; }
};

int main() {
    Foo *objfoo = new Foo("ABC123");
    objfoo->printInfo();
    delete objfoo;
    return 0;
}
"using delete without [] causes memory leak."
No, it causes Undefined Behavior.
Your program, where you allocate using new[] and deallocate using delete, has Undefined Behavior. Actually, you are lucky (rather, unlucky) that it doesn't show some weird behavior.
As a side note, you also need to follow the Rule of Three for your class. Currently you don't, and that is asking for trouble in the near future; a sketch is given below.
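A Rule-of-Three-compliant Foo would add a copy constructor and a copy assignment operator alongside the corrected destructor. A minimal sketch (not the only way to write it; printInfo omitted):

#include <cstring>
#include <utility>

class Foo {
private:
    char *x;

public:
    Foo(const char* px) : x(new char[std::strlen(px) + 1]) {
        std::strcpy(x, px);
    }

    // Copy constructor: deep-copy the buffer instead of sharing the pointer.
    Foo(const Foo& other) : x(new char[std::strlen(other.x) + 1]) {
        std::strcpy(x, other.x);
    }

    // Copy assignment via copy-and-swap: "other" is a copy we can steal from.
    Foo& operator=(Foo other) {
        std::swap(x, other.x);
        return *this;
    }

    ~Foo() {
        delete[] x; // matches the new[] in the constructors
    }
};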
The main difference between delete and delete[] is not about memory deallocation, but about destructor calls.
While formally undefined behavior, in practice it will more or less work... except that delete will only call the destructor of the first item.
As a side note, you may get a runtime error. This program:

#include <iostream>

struct A {
    ~A() { std::cout << "destructor\n"; }
};

int main() {
    A* a = new A[10];
    delete a; // should be delete[] a
}
produces output a bit like:
*** glibc detected *** ./prog: munmap_chunk(): invalid pointer: 0x093fe00c ***
======= Backtrace: =========
/lib/libc.so.6[0xb759dfd4]
/lib/libc.so.6[0xb759ef89]
/usr/lib/gcc/i686-pc-linux-gnu/4.3.4/libstdc++.so.6(_ZdlPv+0x21)[0xb77602d1]
./prog(__gxx_personality_v0+0x18f)[0x8048763]
./prog(__gxx_personality_v0+0x4d)[0x8048621]
======= Memory map: ========
See it at ideone.
Related
Here I have a class definition. It is a little long, but the focus will be on the move constructor and the destructor. Below the class definition is a short test.
#include <cassert>
#include <iostream>
#include <utility>

template <typename T>
class SharedPtr {
public:
    SharedPtr() {}

    explicit SharedPtr(T* input_pointer)
        : raw_ptr_(input_pointer), ref_count_(new size_t(1)) {}

    SharedPtr(const SharedPtr& other)
        : raw_ptr_(other.raw_ptr_), ref_count_(other.ref_count_) {
        if (ref_count_) {
            ++*ref_count_;
        }
    }

    SharedPtr(SharedPtr&& other)
        : raw_ptr_(other.raw_ptr_), ref_count_(other.ref_count_) {}

    SharedPtr& operator=(SharedPtr other) {
        swap(other, *this);
        return *this;
    }

    size_t use_count() const {
        return ref_count_ ? *ref_count_ : 0;
    }

    ~SharedPtr() {
        if (ref_count_) {
            --*ref_count_;
            if (*ref_count_ == 0) {
                delete raw_ptr_;
                delete ref_count_;
            }
        }
    }

private:
    T* raw_ptr_ = nullptr;
    size_t* ref_count_ = nullptr;

    friend void swap(SharedPtr<T>& left, SharedPtr<T>& right) {
        std::swap(left.raw_ptr_, right.raw_ptr_);
        std::swap(left.ref_count_, right.ref_count_);
    }
};

int main() {
    // Pointer constructor
    {
        SharedPtr<int> p(new int(5));
        SharedPtr<int> p_move(std::move(p));
        assert(p_move.use_count() == 1);
    }
    std::cout << "All tests passed." << std::endl;
    return 0;
}
If I run the code I get an error message indicating memory corruption:
*** Error in `./a.out': corrupted size vs. prev_size: 0x0000000001e3dc0f ***
======= Backtrace: =========
...
======= Memory map: ========
...
Aborted (core dumped)
We may suspect something is wrong with the move constructor: if we move from a SharedPtr and then later destroy that SharedPtr, its destructor will still run as if it were an "active" SharedPtr. So we could fix that by setting the other object's pointers to nullptr in the move constructor, as sketched below.
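For instance, one possible fix (a minimal sketch):

SharedPtr(SharedPtr&& other)
    : raw_ptr_(other.raw_ptr_), ref_count_(other.ref_count_) {
    // Empty out the moved-from object so its destructor
    // doesn't free the same pointers a second time.
    other.raw_ptr_ = nullptr;
    other.ref_count_ = nullptr;
}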
But that's not the interesting thing about this code. The interesting thing is what happens if I don't do that, and instead simply add std::cout << "x" << std::endl; to the move constructor.
The new move constructor is given below, and the rest of the code is unchanged.
SharedPtr(SharedPtr&& other)
    : raw_ptr_(other.raw_ptr_), ref_count_(other.ref_count_) {
    std::cout << "x" << std::endl;
}
The code now runs without error on my machine and yields the output:
x
All tests passed.
So my questions are:
Do you get the same results as I do?
Why does adding a seemingly innocuous std::cout line cause the program to run "successfully"?
Please note: I am not under the impression that the error message being gone implies the bug is gone.
bolov's answer explains the cause of the undefined behavior (UB): the move constructor of SharedPtr does not invalidate the moved-from pointer.
I disagree with bolov's view that it is pointless to understand UB. The question of why code changes result in different behavior in the face of UB is extremely interesting. Knowing what happens can help with debugging, on one hand, and it can help attackers compromise the system, on the other.
The difference in the code in question comes from adding std::cout << something. In fact, the following change also makes the crash go away:
{
    SharedPtr<int> p(new int(5));
    SharedPtr<int> p_move(std::move(p));
    assert(p_move.use_count() == 1);
    std::cout << "hi\n"; // <-- added
}
The first std::cout << allocates an internal buffer that std::cout uses from then on. This allocation happens only once, and the question is whether it happens before or after the double free. Without the additional std::cout, the allocation happens after the double free, when the heap is already corrupted, and it is this allocation that triggers the crash. But when there is a std::cout << before the double free, there is no allocation after it.
Let's run a few other experiments to validate this hypothesis:
Remove all std::cout << lines. All works fine.
Move two calls to new int(some number) to right before the end of main:
int main() {
    int *p2 = nullptr;
    int *cnt = nullptr;
    // Pointer constructor
    {
        SharedPtr<int> p(new int(5));
        SharedPtr<int> p_move(std::move(p));
        assert(p_move.use_count() == 1);
    }
    p2 = new int(100);
    cnt = new int(1); // <--- crash
    return 0;
}
This crashes, since the new is attempted on a corrupted heap.
(you can try it out here)
Now move the two new lines slightly up, to right before the closing } of the inner block. In this case the new is performed before the heap is corrupted, so nothing triggers a crash. The delete simply puts the data in the free list, which is not corrupted. As long as the corrupted part of the heap is not touched, things will work fine: one can call new int, get back one of the recently released blocks, and nothing bad will happen.
{
    SharedPtr<int> p(new int(5));
    SharedPtr<int> p_move(std::move(p));
    assert(p_move.use_count() == 1);
    p2 = new int(100);
    cnt = new int(1);
}
delete p2;
delete cnt;
p2 = new int(100); // No crash. We are reusing one of the released blocks.
cnt = new int(1);
(you can try it out here)
The interesting fact is that heap corruption can go undetected until much later in the program. The process may run millions of unrelated lines of code and then suddenly crash on a completely unrelated new in a completely different part of the code. This is why sanitizers and tools like valgrind are needed: memory corruption can be practically impossible to debug otherwise.
Now, the really interesting question is: can this be exploited for more than denial of service? Yes, it can. It depends on the kind of object that is destroyed twice and on what its destructor does. It also depends on what happens between the first destruction of the pointer and its second free. In this trivial example, nothing substantial seems to be possible.
SharedPtr(SharedPtr&& other)
    : raw_ptr_(other.raw_ptr_), ref_count_(other.ref_count_) {}
When you move, the moved-from object remains unchanged. This means that at some point in your program you will delete raw_ptr_ twice for the same memory, and the same goes for ref_count_. This is Undefined Behavior.
The behaviour you observe falls well within Undefined Behavior, because that's what UB means: the standard doesn't mandate any particular behavior from your program. Trying to understand exactly why it happens on your particular compiler, your particular version, your particular platform, with your specific flags is ... kind of pointless.
I am new to C++ and I was wondering why...
#include <iostream>

using namespace std;

class myClass {
public:
    void myMethod() {
        cout << "It works!" << endl;
    }
    myClass() {
        cout << "myClass is constructed!" << endl;
    }
    ~myClass() {
        cout << "This class is destructed!" << endl;
    }
};

int main()
{
    myClass c;
    c.myMethod();
    myClass *e = &c;
    delete e;
    cout << "This is from main" << endl;
    return 0;
}
So up there is the code, and the output is:
myClass is constructed!
It works!
This class is destructed!
I am wondering where the "This is from main" output went. Does C++ not execute code after the delete keyword?
You can only delete objects that have been created with new.
What you're doing is UB, by means of double deletion.
By the way, while in this case your program stopped execution right after the statement that had UB, it doesn't necessarily have to happen this way, because:
However, if any such execution contains an undefined operation, this International Standard places no requirement on the implementation executing that program with that input (not even with regard to operations preceding the first undefined operation).
You have undefined behavior: you are not allowed to delete something that was not allocated with new. In doing so, your program is allowed to do whatever it wants.
Most likely you received some sort of hard fault that stopped the program from running.
You caused undefined behavior by deleting something that was not newed. You don't delete pointers that merely point at some location; you only delete what you created by calling new (not placement new!).
Perhaps it is a good idea for you to read about stack and heap memory. A minimal sketch of the distinction follows.
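My own illustration (reusing the myClass from the question):

int main() {
    myClass stackObj;                 // automatic storage: destroyed at scope exit, never delete it
    myClass *heapObj = new myClass(); // dynamic storage: must be paired with delete
    heapObj->myMethod();
    delete heapObj;                   // correct: delete pairs with new
    return 0;
}                                     // stackObj's destructor runs here automatically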
You must call free only on what you allocated with malloc (or its variations, such as calloc).
For example:

char *c = (char *)malloc(255);
...
free(c);

You must call delete only on what you allocated with new:

MyClass *e = new MyClass();
...
delete e;

And you must call delete[] only on what you allocated with new[]:

char *data = new char[20];
...
delete[] data;
Now, if you do something like this:
...
{
    int x;
    x = 3;
}
x will be destroyed after the braces because it goes out of scope; this x lives on the stack. However, if you use malloc or new, the pointer variable itself can be lost if you are not careful, while the memory stays allocated. That is a memory leak.
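For instance, a minimal sketch of such a leak (my own illustration):

void leak() {
    int *p = new int(3); // allocated on the heap
    p = nullptr;         // the only pointer to the allocation is overwritten
}                        // the int can never be freed now: a memory leak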
What you have is even more dangerous: you are deleting something that was never allocated. The behavior is undefined. With time and patience, a skilled attacker can study the behavior and may be able to break into your system and acquire the same privileges as your program.
I have a question about delete[] p. I've written a code snippet to test it, but I found that after executing delete[] p, only the first two array elements appear to be deallocated while the remaining ones are not. Please see my test snippet and the output below. Can anyone tell me why, and how I can clear the whole memory for the unused array? Thanks!
#include <iostream>

using namespace std;

int main()
{
    int *p;
    p = new int[20];
    for (int i = 0; i < 20; i++)
    {
        p[i] = i + 1;
    }

    cout << "------------------------Before delete-------------------------" << endl;
    for (int i = 0; i < 20; i++)
    {
        cout << "p+" << i << ":" << p + i << endl;
        cout << "p[" << i << "]:" << p[i] << endl;
    }

    delete[] p;

    cout << "-------------------After delete------------------------" << endl;
    for (int i = 0; i < 20; i++) // undefined behavior: reading freed memory
    {
        cout << "p+" << i << ":" << p + i << endl;
        cout << "p[" << i << "]:" << p[i] << endl;
    }
    return 0;
}
OUTPUT IN www.compileronline.com
Compiling the source code....
$g++ main.cpp -o demo -lm -pthread -lgmpxx -lgmp -lreadline 2>&1
Executing the program....
$demo
------------------------Before delete-------------------------
p+0:0xa90010
p[0]:1
p+1:0xa90014
p[1]:2
p+2:0xa90018
p[2]:3
p+3:0xa9001c
p[3]:4
p+4:0xa90020
p[4]:5
p+5:0xa90024
p[5]:6
p+6:0xa90028
p[6]:7
p+7:0xa9002c
p[7]:8
p+8:0xa90030
p[8]:9
p+9:0xa90034
p[9]:10
p+10:0xa90038
p[10]:11
p+11:0xa9003c
p[11]:12
p+12:0xa90040
p[12]:13
p+13:0xa90044
p[13]:14
p+14:0xa90048
p[14]:15
p+15:0xa9004c
p[15]:16
p+16:0xa90050
p[16]:17
p+17:0xa90054
p[17]:18
p+18:0xa90058
p[18]:19
p+19:0xa9005c
p[19]:20
-------------------After delete------------------------
p+0:0xa90010
p[0]:0
p+1:0xa90014
p[1]:0
p+2:0xa90018
p[2]:3
p+3:0xa9001c
p[3]:4
p+4:0xa90020
p[4]:5
p+5:0xa90024
p[5]:6
p+6:0xa90028
p[6]:7
p+7:0xa9002c
p[7]:8
p+8:0xa90030
p[8]:9
p+9:0xa90034
p[9]:10
p+10:0xa90038
p[10]:11
p+11:0xa9003c
p[11]:12
p+12:0xa90040
p[12]:13
p+13:0xa90044
p[13]:14
p+14:0xa90048
p[14]:15
p+15:0xa9004c
p[15]:16
p+16:0xa90050
p[16]:17
p+17:0xa90054
p[17]:18
p+18:0xa90058
p[18]:19
p+19:0xa9005c
p[19]:20
The memory is freed - there's just no requirement that the compiler actually zero out an int when it is deallocated. What you're seeing is that the compiler didn't think that was necessary, so the old values are still there. The destructor is called, though, and the memory is returned to the heap.
You can see that more clearly if you did something like:
struct A {
    int i;
    A() : i(7) { }
    ~A() {
        std::cout << "deleting A." << std::endl;
        i = 0;
    }
};
And repeat your experiment with A* p = new A[20];.
It is important to distinguish between cleared memory and deleted memory.
When memory is cleared, it is initialized to some value, perhaps all zeros. But when memory is deleted or freed, it is returned to the heap. When the memory is released, the memory manager is under no obligation to clear it. The two values you see being changed are likely a marker used by the memory manager, for bookkeeping or to help track buffer overruns.
If you want to clear all of the memory, you need to do it before you delete it; only at that point do you still have ownership of that memory. Clearing it after deleting it is operating on memory you do not own.
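For instance, a minimal sketch (using std::fill; memset would work equally well for ints):

#include <algorithm>

int main() {
    int *p = new int[20];
    // ... use the array ...
    std::fill(p, p + 20, 0); // clear the memory while we still own it
    delete[] p;              // then return it to the heap
    p = nullptr;             // and don't dereference it afterwards
    return 0;
}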
I created example classes (only for learning purposes) which don't use constructor initialization lists, because I want to reproduce the same effects using new/delete and malloc/free. What are the other constraints, besides not using a constructor initialization list? Do you think the following code simulates the new/delete behavior correctly?
#include <iostream>
#include <cstdlib> // for malloc, free, system

using namespace std;

class X
{
private:
    int* a;
public:
    X(int x)
    {
        this->a = new int;
        *(this->a) = x;
    }
    ~X() { delete this->a; }
    int getA() { return *(this->a); }
};

class Y
{
private:
    int* a;
public:
    void CTor(int x)
    {
        this->a = new int;
        *(this->a) = x;
    }
    void DTor() { delete this->a; }
    int getA() { return *(this->a); }
};

int main() // note: main must return int; void main() is non-standard
{
    X *xP = new X(44);
    cout << xP->getA() << endl;
    delete xP;

    Y *yP = static_cast<Y*>(malloc(sizeof(Y)));
    yP->CTor(44);
    cout << yP->getA() << endl;
    yP->DTor();
    free(yP);

    system("pause"); // Windows-specific pause
    return 0;
}
Without delete xP, the destructor will be called automatically when the program ends, but the free store won't be freed (i.e. the free store for xP itself; the free store for field a will be freed). When using delete xP, the destructor is called and then the free store is completely freed.
Please correct me if I'm wrong.
The actual answer to your question is that you are incorrect: not calling delete xP will lead to a memory leak. It is very likely that the OS then cleans up the memory for you, but this is not GUARANTEED by the C++ runtime library; it's something that OSes generally offer as a nice service. And very handy it is too, since it allows us to write imperfect code, code that does things like if (condition) { cout << "Oh horror, condition is true, I can't continue" << endl; exit(2); } without having to worry about cleaning everything up when something went horribly wrong.
My main point, however, is that this sort of stuff gets very messy as soon as you add members that themselves need destruction, e.g.:
class X
{
private:
    std::string str;
    ...
};
There is no trivial way to call the destructor for str. And it gets even worse if you decide to use exceptions and automated cleanup.
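To illustrate, here is a rough sketch of my own (not from the question) of what the malloc-based approach would need once a member like str is involved: malloc only hands you raw bytes, so the object would have to be constructed with placement new and destroyed with an explicit destructor call:

#include <cstdlib>
#include <new>
#include <string>

struct Z {
    std::string str;
};

int main() {
    void *raw = std::malloc(sizeof(Z));
    Z *z = new (raw) Z{};  // placement new: actually constructs str
    z->str = "hello";
    z->~Z();               // explicit destructor call: destroys str
    std::free(raw);
    return 0;
}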
There are many subtle differences that distinguish new from malloc and delete from free. I believe they are only equivalent if the class is a POD type.
I should warn that you can never mix the two freely: a new must always be balanced by a delete and never by a free; similarly, malloc must be balanced only by free. Mixing the two results in undefined behaviour.
I would like to know why in the following code the first delete won't free the memory:
#include <list>
#include <stdio.h>

struct abc {
    long a;
    abc() {
        puts("const");
    }
    ~abc() {
        puts("desc");
    }
};

int main() {
    std::list<abc*> test;

    abc* pA = new abc;
    printf("pA: 0x%lX\n", (unsigned long int)pA);
    test.push_back(pA);

    abc* pB = test.back();
    printf("pB: 0x%lX\n", (unsigned long int)pB);
    delete pB; // just ~abc()
    test.pop_back();

    delete pA; // ~abc() and free (works)
    puts("before double-free");
    delete pA; // ~abc() and second free (crash)
    return 0;
}
Output is:
const
pA: 0x93D8008
pB: 0x93D8008
desc
desc
before double-free
desc
*** glibc detected *** ./test: double free or corruption (fasttop): 0x093d8008 ***
...
I tried it with free() also, but saw the same behavior.
delete pA; // ~abc() and free (works)
puts("before double-free");
delete pA; // ~abc() and second free (crash)
These delete statements are not needed once you write delete pB. You have a misconception that delete pB only calls the destructor. No, it calls the destructor and also deallocates the memory.
Also, since you've already written delete pB, the next two delete expressions invoke undefined behavior, which means anything can happen: the program may or may not crash! A corrected version is sketched below.
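For reference, a minimal corrected version of the question's main (my sketch, reusing struct abc and the includes from the question, with exactly one delete per new):

int main() {
    std::list<abc*> test;

    abc* pA = new abc;
    test.push_back(pA);

    abc* pB = test.back(); // pB and pA point to the same object
    test.pop_back();       // removes the pointer from the list, frees nothing
    delete pB;             // one delete for the one new: destructor + deallocation
    // pA is now dangling; don't delete or dereference it again
    return 0;
}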
Have a look at these topics:
Undefined, unspecified and implementation-defined behavior
Is there any way to predict the undefined behaviour or implementation defined behaviour?
What are all the common undefined behaviours that a C++ programmer should know about?
Your first delete renders the pointer invalid, so any further use of that pointer leads to undefined behaviour.

int* p = new int;
delete p;
delete p; // undefined behaviour

This is guaranteed to be fine by the standard (as pointed out in the first comment), because deleting a null pointer is a no-op:

int* p = new int;
delete p;
p = 0;
delete p; // fine
First, you can't free memory allocated with new; you have to use delete.
Secondly, just because glibc doesn't detect the double free immediately, how do you know delete pB; isn't freeing the memory? That's exactly what delete does, and your own logging shows you're passing the same address to delete each time.
Thirdly: what are you actually trying to accomplish?
It's just an idiosyncrasy of your compiler/platform that you had to call delete twice on pA before glibc complained. For instance, on my current platform, which is OSX 10.6.8 with gcc version i686-apple-darwin10-gcc-4.2.1, I get the following output:
const
pA: 0x100100080
pB: 0x100100080
desc
desc
test(14410) malloc: *** error for object 0x100100080: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Abort trap
So you can see that the first call to delete on pA, which by the standard results in undefined behavior since you didn't set the pointer to NULL or 0 first, did get caught by the C++ runtime on my platform as an attempt to deallocate already-deallocated memory.
Since the results of the double delete are "undefined behavior", it's really up to the implementation and platform what happens.
You may be able to detect allocation/deallocation errors sooner if you link against the glibc memory consistency checker by compiling with the -lmcheck flag.