Is the following code prone to a memory leak? - c++

I am new to C++ and I want to know whether the following code is prone to a memory leak.
Here I am using a std::ostream pointer to redirect output either to the console or to a file.
For this I am calling the new operator for std::ofstream.
#include <iostream>
#include <fstream>

int main() {
    bool bDump;
    std::cout << "bDump bool" << std::endl;
    std::cin >> bDump;
    std::ostream *osPtr;
    if (bDump) {
        osPtr = new std::ofstream("dump.txt");
    } else {
        osPtr = &std::cout;
    }
    *osPtr << "hello";
    return 0;
}
And one more thing: I have not closed the file that I opened when calling the constructor for ofstream. Do we have a potential data-loss situation here, since the file is not closed?

Yes. Definitely. Any time you call new without a matching delete, there's a memory leak.
After your code has executed, you need to add this:
if (bDump)
{
    delete osPtr;
}

As @Mahmoud Al-Qudsi mentioned, anything you new must also be deleted, otherwise it is leaked.
In most situations you do not want to use delete directly; rather, you want a smart pointer that deletes the object automatically. This is because in the presence of exceptions you could again leak memory, whereas with RAII the smart pointer guarantees that the object is deleted and thus the destructor is called.
It is important that the destructor is called (especially in this case). If you do not call the destructor, there is a potential that not everything in the stream will be flushed to the underlying file.
#include <iostream>
#include <fstream>
#include <memory>

void doStuff()
{
    bool bDump;
    std::cout << "bDump bool" << std::endl;
    std::cin >> bDump;

    // Smart pointer to store the dynamically allocated stream.
    // (The original answer used std::auto_ptr, which has since been
    // deprecated and removed from the standard; std::unique_ptr is
    // its replacement.)
    std::unique_ptr<std::ofstream> osPtr;
    if (bDump)
    {
        // If needed, create a file stream.
        osPtr.reset(new std::ofstream("dump.txt"));
    }

    // Create a reference to the correct stream.
    std::ostream& log = bDump ? *osPtr : std::cout;
    log << "hello";
} // The smart pointer correctly deletes the fstream if it exists.
  // This makes sure the destructor is called,
  // and that is guaranteed even if an exception is thrown.

Yes, anything that is newed but never deleted leaks.
In some cases it is perfectly reasonable to allocate left, right, and center and then just exit, particularly for short-lived batch-style programs, but as a general rule you should delete everything you new and delete[] everything you new[].
Especially in the case above, leaking the object is unsafe, since the leaked object is an ostream whose unflushed content will never be written.

There is no leak in the code shown. At all times during the execution, all allocated objects are referenceable. There is only a memory leak if an object has been allocated and cannot be referenced in any way.
If the pointer goes out of scope, or its value is changed, without the object being deallocated, that is a memory leak. But so long as the pointer is in the outermost scope and nothing else changes its value, there is no leak.
"In object-oriented programming, a memory leak happens when an object is stored in memory but cannot be accessed by the running code." Wikipedia -- 'Memory leak'
The other answers suggest that any program that uses typical singleton patterns or doesn't free all allocated objects prior to termination has a memory leak. This is, IMO, quite silly. In any event, if you accept that definition, almost every real world program or library has a memory leak, and memory leaks are certainly not all bad.
In a sense, this kind of coding is prone to a memory leak, because it's easy to change the value of the pointer or let it go out of scope. In that case, there is an actual leak. But as shown, there is no leak.
You may have a different problem though: If the destructor has side-effects, not calling it can result in incorrect operation. For example, if you never call the destructor on a buffered output stream writing to a file, the last writes may never actually happen because the buffer didn't get flushed to the file.


C++ member array destructor

I have a simple C++ code, but I don't know how to use the destructor:
#include <iostream>
using namespace std;

class date {
public:
    int day;
    date(int m)
    {
        day = m;
    }
    ~date() {
        cout << "I wish you have entered the year \n" << day;
    }
};

int main()
{
    date ob2(12);
    ob2.~date();
    cout << ob2.day;
    return 0;
}
The question that I have is: what should I write in my destructor so that, after calling the destructor, the day variable is deleted?
Rarely do you ever need to call the destructor explicitly. Instead, the destructor is called when an object is destroyed.
For an object like ob2 that is a local variable, it is destroyed when it goes out of scope:
int main()
{
    date ob2(12);
} // ob2.~date() is called here, automatically!
If you dynamically allocate an object using new, its destructor is called when the object is destroyed using delete. If you have a static object, its destructor is called when the program terminates (if the program terminates normally).
Unless you create something dynamically using new, you don't need to do anything explicit to clean it up (so, for example, when ob2 is destroyed, all of its member variables, including day, are destroyed). If you create something dynamically, you need to ensure it gets destroyed when you are done with it; the best practice is to use what is called a "smart pointer" to ensure this cleanup is handled automatically.
You do not need to call the destructor explicitly. This is done automatically at the end of the scope of the object ob2, i.e. at the end of the main function.
Furthermore, since the object has automatic storage, its storage doesn’t have to be deleted. This, too, is done automatically at the end of the function.
Calling destructors manually is almost never needed (only in low-level library code) and deleting memory manually is only needed (and only a valid operation) when the memory was previously acquired using new (when you’re working with pointers).
Since manual memory management is prone to leaks, modern C++ code tries not to use new and delete explicitly at all. When it’s really necessary to use new, then a so-called “smart pointer” is used instead of a regular pointer.
You should not call your destructor explicitly.
When you create your object on the stack (like you did) all you need is:
int main()
{
    date ob2(12);
    // ob2.day holds 12
    return 0; // ob2's destructor will get called here, after which its memory is freed
}
When you create your object on the heap, you need to delete it explicitly; the delete call invokes its destructor and then frees its memory:
int main()
{
    date* ob2 = new date(12);
    // ob2->day holds 12
    delete ob2; // ob2's destructor will get called here, after which its memory is freed
    return 0;   // ob2 is invalid at this point.
}
(Failing to call delete in this last example will result in a memory leak.)
Both ways have their advantages and disadvantages. The stack way is VERY fast at allocating the memory the object will occupy, and you do not need to explicitly delete it, but the stack has limited space and you cannot move those objects around easily, quickly, and cleanly.
The heap is the preferred way of doing it, but allocation is slower and you have to deal with pointers. In exchange you get much more flexibility with your object: pointers are cheap to pass around, and you have more control over the object's lifetime.
Only in very specific circumstances do you need to call the destructor directly. By default the destructor is called automatically when a variable with automatic storage falls out of scope, or when an object dynamically allocated with new is destroyed with delete.
#include <iostream>

struct test {
    test(int value) : value(value) {}
    ~test() { std::cout << "~test: " << value << std::endl; }
    int value;
};

int main()
{
    test t(1);
    test *d = new test(2);
    delete d; // prints: ~test: 2
}             // prints: ~test: 1 (t falls out of scope)
For completeness (this should not be used in general), the syntax to call the destructor is similar to a method call. After the destructor has run, the memory no longer holds an object of that type and should be treated as raw memory:
#include <new> // for placement new

int main()
{
    test t(1);
    t.~test(); // prints: ~test: 1
    // after this instruction 't' is no longer a 'test' object
    new (&t) test(2); // recreate a new test object in place
}                     // t falls out of scope, prints: ~test: 2
Note: after calling the destructor on t, that memory location is no longer a test, that is the reason for recreation of the object by means of the placement new.
In this case your destructor does not need to delete the day variable.
You only need to call delete on memory that you have allocated with new.
Here's how your code would look if you were using new and delete to trigger invoking the destructor
#include <iostream>
using namespace std;

class date {
public:
    int* day;
    date(int m) {
        day = new int;
        *day = m;
    }
    ~date() {
        delete day;
        cout << "now the destructor gets called explicitly";
    }
};

int main() {
    date *ob2 = new date(12);
    delete ob2;
    return 0;
}
Even though the destructor seems like something you need to call to get rid of or "destroy" your object when you are done using it, you aren't supposed to use it that way.
The destructor is something that is automatically called when your object goes out of scope, that is, when the computer leaves the "curly braces" that you instantiated your object in. In this case, when you leave main(). You don't want to call it yourself.
You may be confused by undefined behavior here. The C++ standard has no rules as to what happens if you use an object after its destructor has been run, as that's undefined behavior, and therefore the implementation can do anything it likes. Typically, compiler designers don't do anything special for undefined behavior, and so what happens is an artifact of what other design decisions were made. (This can cause really weird results sometimes.)
Therefore, once you've run the destructor, the compiler has no further obligation regarding that object. If you don't refer to it again, it doesn't matter. If you do refer to it, that's undefined behavior, and from the Standard's point of view the behavior doesn't matter, and since the Standard says nothing most compiler designers will not worry about what the program does.
In this case, the easiest thing to do is to leave the object untouched, since it isn't holding on to resources, and its storage was allocated as part of starting up the function and will not be reclaimed until the function exits. Therefore, the value of the data member will remain the same. The natural thing for the compiler to do when it reads ob2.day is to access the memory location.
Like any other example of undefined behavior, the results could change under any change in circumstances, but in this case they probably won't. It would be nice if compilers would catch more cases of undefined behavior and issue diagnostics, but it isn't possible for compilers to detect all undefined behavior (some occurs at runtime) and often they don't check for behavior they don't think likely.

What are the reasons to allocate a pointer on the heap?

Probably this question has already been asked, but I couldn't find it. Please redirect me if you saw something.
Question :
what is the benefit of using :
myClass* pointer;
over
myClass* pointer = new(myClass);
From reading other topics, I understand that the first option allocates space on the stack and makes the pointer point to it, while the second allocates space on the heap and makes a pointer point to it.
But I also read that the second option is tedious because you have to deallocate the space with delete.
So why would one ever use the second option?
I am kind of a noob, so please explain in detail.
edit
#include <iostream>
using namespace std;

class Dog
{
public:
    void bark()
    {
        cout << "wouf!!!" << endl;
    }
};

int main()
{
    Dog* myDog = new(Dog);
    myDog->bark();
    delete myDog;
    return 0;
}
and
#include <iostream>
using namespace std;

class Dog
{
public:
    void bark()
    {
        cout << "wouf!!!" << endl;
    }
};

int main()
{
    Dog* myDog;
    myDog->bark();
    return 0;
}
both compile and give me "wouf!!!". So why should I use the "new" keyword?
I understand that the first option allocates a space on the stack and
makes the pointer point to it while the second allocates a space on
the heap and make a pointer point to it.
The above is incorrect -- the first option allocates space for the pointer itself on the stack, but doesn't allocate space for any object for the pointer to point to. That is, the pointer isn't pointing to anything in particular, and thus isn't useful to use (unless/until you set the pointer to point to something)
In particular, it's only pure blind luck that this code appears to "work" at all:
Dog* myDog;
myDog->bark(); // ERROR, calls a method on an invalid pointer!
... the above code is invoking undefined behavior, and in an ideal world it would simply crash, since you are calling a method on an invalid pointer. But C++ compilers typically prefer maximizing efficiency over handling programmer errors gracefully, so they typically don't put in a check for invalid-pointers, and since your bark() method doesn't actually use any data from the Dog object, it is able to execute without any obvious crashing. Try making your bark() method virtual, OTOH, and you will probably see a crash from the above code.
the second allocates a space on the heap and make a pointer point to
it.
That is correct.
But I read also that the second option is tedious because you have to
deallocate the space with delete.
Not only tedious, but error-prone -- it's very easy (in a non-trivial program) to end up with a code path where you forgot to call delete, and then you have a memory leak. Or, alternatively, you could end up calling delete twice on the same pointer, and then you have undefined behavior and likely crashing or data corruption. Neither mistake is much fun to debug.
So why would one ever use the second option.
Traditionally you'd use dynamic allocation when you need the object to remain valid for longer than the scope of the calling code -- for example, if you needed the object to stick around even after the function you created the object in has returned. Contrast that with a stack allocation:
myClass someStackObject;
... in which someStackObject is guaranteed to be destroyed when the calling function returns, which is usually a good thing -- but not if you need someStackObject to remain in existence even after your function has returned.
These days, most people would avoid using raw/C-style pointers entirely, since they are so dangerously error-prone. The modern C++ way to allocate an object on the heap would look like this:
std::shared_ptr<myClass> pointer = std::make_shared<myClass>();
... and this is preferred because it gives you a heap-allocated myClass object whose pointed-to-object will continue to live for as long as there is at least one std::shared_ptr pointing to it (good), but also will automagically be deleted the moment there are no std::shared_ptr's pointing to it (even better, since that means no memory leak and no need to explicitly call delete, which means no potential double-deletes)

Deleting an object declared on the stack

I wanted to know what happens if we delete an object declared on the stack two times. In order to test this, I've written this simple program:
#include <iostream>
using namespace std;

class A {
public:
    A() {}
    virtual ~A() {
        cout << "test" << endl;
    }
};

int main()
{
    A a;
    a.~A();
}
I was actually expecting a segmentation fault, as I'm destroying A once explicitly in the code and it will be destroyed again when it goes out of scope. However, surprisingly, the program produces the following output:
"test"
"test"
Can anybody explain why this code works?
There are three reasons:
The destructor does not deallocate the object, it performs whatever cleanup operation you find useful (and by default, nothing). It is implicitly called before a variable goes out of scope or is explicitly deleted, but you are free to call it as well.
Deallocation usually does not cause memory to cease existing nor to be inaccessible. It is rather marked as reusable. (Anyway, double deallocation should raise a memory management error condition.)
Last but not least, an object allocated on the stack is not deallocated (when you exit the function, the stack pointer moves to the previous frame, leaving the stack unchanged).
Your program has undefined behaviour, so it may just as easily have segfaulted, or stolen my car, or gone into space to start an exciting new colony of lesbian parrots.
But, in practice, the behaviour you've witnessed can be explained. Calling the destructor does not "delete" an object; it just calls the destructor. A destructor call is one part of object deletion; yours just prints to standard output, so there's really nothing to trigger a memory access violation here.
More generally, "expecting a segmentation fault" is always folly.
However, if you'd actually attempted to delete the object with delete, I would be surprised if your program didn't crash at runtime.
You called the destructor of a. This does not delete the variable; it just calls the destructor function. The variable will be removed from the stack at the exit of the function.
As the name says, a is a "variable with automatic lifetime", which means you can't end its lifetime prematurely yourself. It is bound to the scope it was created in. You're just manually calling the destructor, which is just another method that prints something. Then it's called again automatically by the runtime on actual destruction at the end of its lifetime. You can somewhat control the lifetime of an automatically managed object by defining its scope:
int main()
{
    {
        A a;
    } // 'a' is destroyed here.
}     // instead of here.

Deconstructing a *Thing

I'm sure this is answered somewhere, but I'm lacking the vocabulary to formulate a search.
#include <iostream>

class Thing
{
public:
    int value;
    Thing(int newval);
    virtual ~Thing() { std::cout << "Destroyed a thing with value " << value << std::endl; }
};

Thing::Thing(int newval)
{
    value = newval;
}

int main()
{
    Thing *myThing1 = new Thing(5);
    std::cout << "Value 1: " << myThing1->value << std::endl;
    Thing myThing2 = Thing(6);
    std::cout << "Value 2: " << myThing2.value << std::endl;
    return 0;
}
The output indicates that myThing2 was destroyed, but myThing1 was not.
So... do I need to deconstruct it manually somehow? Is this a memory leak? Should I avoid using the * in this situation, and if so, when would it be appropriate?
The golden rule is, wherever you use a new you must use a delete. You are creating dynamic memory for myThing1, but you never release it, hence the destructor for myThing1 is never called.
The difference between this and myThing2 is that myThing2 is a scoped object. The operation:
Thing myThing2 = Thing(6);
is not similar at all to:
Thing *myThing1 = new Thing(5);
Read more about dynamic allocation here. But as some final advice, you should be using the new keyword sparingly, read more about that here:
Why should C++ programmers minimize use of 'new'?
myThing1 is a Thing* not a Thing. When a pointer goes out of scope nothing happens except that you leak the memory it was holding as there is no way to get it back. In order for the destructor to be called you need to delete myThing1; before it goes out of scope. delete frees the memory that was allocated and calls the destructor for class types.
The rule of thumb is for every new/new[] there should be a corresponding delete/delete[]
You need to explicitly delete myThing1 or use shared_ptr / unique_ptr.
delete myThing1;
The problem is not related to using a pointer Thing *. A pointer can also point to an object with automatic storage duration.
The problem is that in this statement
Thing *myThing1 = new Thing(5);
an object is created with the new operator. Such an object must be deleted with the delete operator:
delete myThing1;
Otherwise the memory is not reclaimed until the program finishes.
Thing myThing2 = Thing(6);
This line creates a Thing in main's stack with automatic storage duration. When main() ends it will get cleaned up.
Thing *myThing1 = new Thing(5);
This, on the other hand, creates a pointer to a Thing. The pointer resides on the stack, but the actual object is in the heap. When the pointer goes out of scope nothing happens to the pointed-to thing, the only thing reclaimed is the couple of bytes used by the pointer itself.
In order to fix this you have two options, one good, one less good.
Less good:
Put a delete myThing1; towards the end of your function. This will free the allocated object. As noted in other answers, every allocation of memory must have a matching deallocation, else you will leak memory.
However, in modern C++, unless you have good reason not to, you should really be using shared_ptr / unique_ptr to manage your memory. If you had instead declared myThing1 thusly:
shared_ptr<Thing> myThing1(new Thing(5));
Then the code you have now would work the way you expect. Smart pointers are powerful and useful in that they greatly reduce the amount of work you have to do to manage memory (although they do have some gotchas, circular references take extra work, for example).

Should pointers to "raw" resources be zeroed in destructors?

When I wrap "raw" resources in a C++ class, in destructor code I usually simply release the allocated resource(s), without paying attention to additional steps like zeroing out pointers, etc.
e.g.:
class File
{
public:
    ...
    ~File()
    {
        if (m_file != NULL)
            fclose(m_file);
    }
private:
    FILE * m_file;
};
I wonder if this code style contains a potential bug: i.e., is it possible that a destructor is called more than once? In that case, the right thing to do in the destructor would be to clear the pointer to avoid double/multiple destruction:
~File()
{
    if (m_file != NULL)
    {
        fclose(m_file);
        m_file = NULL; // avoid double destruction
    }
}
A similar example could be made for heap-allocated memory: if m_ptr is a pointer to memory allocated with new[], is the following destructor code OK?
// In destructor:
delete [] m_ptr;
or should the pointer be cleared, too, to avoid double destruction?
// In destructor:
delete [] m_ptr;
m_ptr = NULL; // avoid double destruction
No. It is useful if you have a Close() function or the like:
void Close()
{
    if (m_file != NULL)
    {
        fclose(m_file);
        m_file = NULL;
    }
}

~File()
{
    Close();
}
This way, the Close() function is idempotent (you can call it as many times as you want), and you avoid one extra test in the destructor.
But since destructors in C++ can only be called once, assigning NULL to pointers there is pointless.
Unless, of course, it is for debugging purposes, particularly if you suspect a double delete.
If a destructor is called more than once, you already have undefined behavior. This will also not affect clients that may have a pointer to the resource themselves, so this is not preventing a double delete. A unique_ptr or scoped_ptr seem to be better solutions to me.
In a buggy application (for example, improper use of std::unique_ptr<> can result in two std::unique_ptr<> holding the same raw pointer), you can end up with a double delete, as the second one goes out of scope.
We care about these bad cases - otherwise, what's the point of discussing setting a pointer to nullptr in the destructor? It's going away anyways!
Hence, in this example, at least, it would be better to let the program seg-fault inside a debugger during a unit-test, so you can trace the real cause of the problem.
So, in general, I don't find setting pointers to nullptr to be particularly useful for memory management.
You could do it, but a more robust alternative is to do unit tests and to judiciously use a memory checker like valgrind.
After all, with some memory errors, your program can seemingly run ok many times, until it crashes unexpectedly - much safer to do quality assurance with a memory checker, especially as your program gets larger, and memory errors become less obvious.