Clearing std::map inside map [duplicate] - c++

I'm new to the language and I have a basic question about memory leaks.
Is it possible to have a leak if I don't use the new keyword? (i.e having my variables in the stack and using data containers like std::vector)
Should I worry about this issue?
If that is the case, can someone give me an example of a situation that creates a leak without dynamically allocating memory?

i.e having my variables in the stack and using data containers like std::vector
No, with std::vector or other standard containers you shouldn't have to worry.
can someone give me an example of a situation that creates a leak without dynamically allocating memory?
One popular mistake is circularly dependent smart pointers of the form:
class Child;
class Parent {
    std::vector<std::shared_ptr<Child>> childs;
};
class Child {
    std::shared_ptr<Parent> parent;
};
Since the reference counters of the shared pointers never drop to zero, those instances will never be deleted, causing a memory leak.
More info about what causes that and how to avoid it can be found here
How to avoid memory leak with shared_ptr?

I think it is not possible to leak memory if you do not reserve memory dynamically. Global variables are probably not going to be freed until the program exits, but I would not call that a memory leak.
However, there are more ways to dynamically reserve memory than using the keyword new.
For example, malloc allocates a memory block. Also calloc reserves memory and zeroes it.
Your operating system or runtime library can also give you functions that allocate memory, for example strdup on POSIX systems, which allocates a copy of a string that you must later free.
You can also be using smart pointers and calling std::make_unique or std::make_shared. Both methods dynamically allocate memory.
For std::unique_ptr you can leak if you call release() and forget to delete the pointer.
std::make_unique<int>(3).release(); // Memory leak
For std::shared_ptr you can leak if you create a circular reference. You can find more information here.
Also, when you use static variables, the destructor is not called when the variable goes out of scope but at the end of execution. This is not exactly a memory leak, because the destructor is eventually called, but in the meantime you may have memory that is allocated and no longer used.
For example, consider the following code:
#include <iostream>
#include <string>
#include <vector>
void f()
{
    static std::vector<int> v;
    v.insert(v.begin(), 100*1024*1024, 0);
    v.clear();
}
int main()
{
    f();
    return 0;
}
std::vector::clear() is not required to free the memory allocated by the vector. So, after calling f(), you will have 400 MB of memory allocated but only accessible inside f(). Not exactly a memory leak, but it is an allocated resource that is not automatically freed until the end of the program.

In addition to the other answers, an easy source of memory leaks is external libraries. A lot of them, especially C or C-like libraries, have functions like create_* and destroy_* for their data types. Even though you never explicitly call new, it is still just as easy to have a memory leak.

Related

Can I encounter memory leaks when using vectors of classes? (C++)

I'm using numerous vectors in a program and I want to avoid memory leaks. Here is an example of a vector containing classes I have created myself.
vector<MyClass> objects;
objects = vector<MyClass>(10);
As you can see, I haven't used the "new" operator and the vector is not of a pointer type. Will I still encounter memory leaks without deleting the vector in some way? If so, how can I delete the vector and deallocate the memory?
No, you won't directly encounter memory leaks related to the vector this way. Indeed, objects is a variable with automatic storage duration. What this means is that the variable lives in the scope where you created it: a function body, an if/for/while block, or even a raw block scope. It will be cleaned up at the end of that scope, without the need for any action on your part.
Then, nothing is preventing your class itself from leaking, e.g. if it has ownership of some memory and doesn't release it when an instance of your class goes away.
A memory leak is defined as memory that exists, but you don't have access to anymore because you lost the pointer. Do this in a loop and you will be out of available memory real quick.
If you don't use new to allocate memory dynamically, you cannot have a memory leak.
Let's assume you add instances to this vector in a loop. This consumes a lot of memory, but it's not a leak, because you know exactly where your memory went, and you can still release it if it's no longer needed.

How compiler is going to know which memory is allocated using which operator or function?

Suppose I have allocated memory for two arrays, one using the new operator and the other using the malloc function. As far as I know, both are allocated in the heap segment, so my question is: how is the compiler going to know which memory was allocated using which operator or function? Or is there some other mechanism behind this?
The compiler doesn't have to know how the memory behind a pointer was allocated; that is the programmer's responsibility. You should always use matching allocate/deallocate functions and operators. For example, operator new can be overloaded. In that case, if you allocate an object with new and release it with free(), you're in trouble, because free() has no idea what kind of book-keeping the overloaded new has done. Here's a simplified example of this situation:
#include <iostream>
#include <stdlib.h>
struct MyClass
{
    // Really dumb allocator.
    static void* operator new(size_t s)
    {
        std::cout << "Allocating MyClass " << s << " bytes.\n";
        void* res = Pool + N * sizeof(MyClass);
        ++N;
        return res;
    }
    // matching operator delete not implemented on purpose.
    static char Pool[];  // take memory from this statically allocated array.
    static unsigned N;   // keep track of allocated objects.
};
char MyClass::Pool[10*sizeof(MyClass)];
unsigned MyClass::N = 0;
int main(int argc, char** argv)
{
    MyClass* p = new MyClass();
    if (argc == 1)
    {
        std::cout << "Trying to delete\n";
        delete p; // boom - non-matching deallocator used.
    }
    else
    {
        std::cout << "Trying to free\n";
        free(p); // also boom - non-matching deallocator used.
    }
}
If you mix and match the allocators and deallocators you will run into similar problems.
Internally, both allocation mechanisms may or may not end up using the same machinery, but pairing new with free, or malloc with delete, mixes conceptually different things and causes undefined behaviour.
You must not use delete for malloc or free for new. Although for basic data types you might get away with it on most compilers, it is still wrong. It is not guaranteed to work. malloc and new could deal with different heaps and not the same one. Furthermore, delete will call destructors of objects whereas free will not.
Compilers don't have to keep track of which memory blocks are allocated by malloc or new. They might as a debug help, or they might not. Don't rely on that.
It does not know. It just calls a function that returns a pointer, and pointers do not carry the information of how they got to be or what kind of memory they point to. It just passes along that pointer and does not care about it any further.
However, the function you use to deallocate the memory (i.e. free/delete) might depend on information that got stored somewhere hidden by malloc/new. So if you allocate memory by malloc and try to deallocate it by using delete (or new and free), it might not work (apart from the obvious problems with constructors/destructors).
"Might not work" here means the behaviour is undefined. This is a huge bonus for compiler developers and for performance, because they simply don't have to care. On the other hand, the effort is shifted to the developers, who have to keep track of how each piece of memory was allocated. The easiest way to do that is to stick to just one of the two methods.
new/delete is the C++ way to allocate and deallocate memory from the heap,
whereas
malloc/free and family are the C way to allocate and free memory from the heap.
I don't know why you want the compiler to know who allocated the heap memory, but
if you want to track it yourself there is a way:
new initializes the allocated memory by calling a constructor, so you can instrument that constructor to record who allocated each object on the heap.
As far as I know both of the memories are allocated in heap segment then my question is how compiler is going to know which memory is allocated using which operator or function?
What is this thing you call the "heap segment"?
There is no such thing as far as the C and C++ standards are concerned. The "heap" and "stack" are implementation-specific concepts. They are very widely used concepts, but neither standard mandates a "heap" or a "stack".
How the implementation (not the compiler!) knows where things are allocated is up to the implementation. Your best bet, and the only safe bet, is to follow what the standards say to do:
If you allocate memory using new[] you must deallocate it with delete[] (or leave it undeleted).
Any other deallocation is undefined behavior.
If you allocate memory using new you must deallocate it with delete (or leave it undeleted).
Any other deallocation is undefined behavior.
If you allocate memory using malloc or its kin you must deallocate it with free (or leave it undeleted).
Any other deallocation is undefined behavior.
Not freeing allocated memory can sometimes be a serious problem. If you continuously allocate big chunks of memory and never free a single one, you will run into trouble. Other times it's not a problem at all: forgetting to free one chunk allocated at program start is often harmless, because that memory is released when the program terminates. It's up to you to determine whether a given memory leak truly is a problem.
The easiest way to avoid these larger issues is to have the program properly free every single byte of allocated memory before the program exits.
Note well: Doing that doesn't guarantee that you don't have a memory problem. Just because your program eventually should free every single one of the multiple terabytes allocated over the course of the program's execution doesn't necessarily mean that the program is okay memory-wise.

dynamic object creation in vector

so i would like to have vector<OtherClassName> theVector as a member of a BaseClass
i am wondering in many ways that i can get a memory leaks...
will doing this results in memory leaks?
BaseClass::someFunction(){
    OtherClassName * c = new OtherClassName();
    theVector.push_back((*c));
}
i'm a beginner in C++, learning from the internet.
will doing this result in memory leaks?
Yes, this will result in a memory leak. Every object allocated with new must be destroyed with delete. Failing to do so causes a memory leak.
In particular, what you are storing in your vector here is a copy of the object allocated with new. If you want your container to hold objects of a certain class, it is enough to do:
void BaseClass::someFunction()
{
    OtherClassName c;
    theVector.push_back(c);
}
Notice that the vector, like all containers of the C++ Standard library, has value semantics: this means that what you are inserting in the vector is a copy of the object you pass to push_back(). Further modifications to the original objects won't be reflected by the state of the object contained in the vector, and vice versa.
If you want this to happen, i.e. if you need reference semantics, you will have to let your vector contain (possibly smart) pointers. For instance:
#include <memory>
// theVector would be declared as:
// std::vector<std::shared_ptr<OtherClassName>> theVector;
void BaseClass::someFunction()
{
    std::shared_ptr<OtherClassName> pC = std::make_shared<OtherClassName>();
    theVector.push_back(pC);
}
Manual memory management through new and delete is considered bad programming practice in Modern C++, because it easily leads to memory leaks or undefined behavior, and negatively affects the design of your program in terms of robustness, readability, and ease of maintenance.
Classes that dynamically allocate anything should have a destructor that frees the memory when the object is destroyed; if you don't have one, you have memory leaks. Any memory obtained with a new statement must have a corresponding delete statement, or you will have a memory leak. As your class is written now it will leak, since you never free the memory. Your destructor should simply go through the vector and delete every pointer it stores.

fully deallocating the memory of a std::vector container

From the vector docs it would appear that the proper way to completely deallocate a vector of values to which you have a class member pointer such as:
std::vector<MyObject>* mvMyObjectVector_ptr;
...
//In the class constructor:
mvMyObjectVector_ptr = new std::vector<MyObject>();
would be to invoke the following, in order, in the class's destructor implementation
mvMyObjectVector_ptr->clear();
delete mvMyObjectVector_ptr;
However, this appears to be leading to SIGABRT 'pointer being freed was not allocated' errors. Is the above idiom the correct way to completely deallocate the memory held at the address pointed to by a pointer to a vector (if it is, I assume my errors are coming from something else)? If not, what is the correct way?
Yes, it is correct, provided mvMyObjectVector_ptr has been allocated using new.
Additionally, MyObject needs to satisfy certain requirements before it can be used with std::vector.
The call to clear() is redundant and can be omitted.
Some likely reasons for the SIGABRT include:
mvMyObjectVector_ptr hasn't been allocated using new;
MyObject violates the Rule of Three;
the class that contains the vector violates the Rule of Three.
I don't think your problem lies in the code you have shown us.
This line:
//In the class constructor:
suggests you are using this inside a class and not implementing the rule of three correctly.
A better idea is not to use a pointer to a vector.
Just declare it as a normal automatic member.
class MyClassContainingVector
{
public:
    std::vector<MyObject> mvMyObjectVector;
    // ^^^^^ notice no pointer.
};
Now it will be created and destroyed correctly and automatically.
Neither your constructor nor your destructor will need any code to manage this object.
Yes, dynamically allocating a std::vector object by calling new and invoking its destruction by calling delete will result in memory that has been internally used for holding its elements being freed.
However it would be much simpler and more reliable to follow RAII idiom and use an object with automatic storage duration:
{
    std::vector<MyObject> myVector;
    ...
} // <-- memory is freed here
when execution leaves this scope (note that it could be also caused by exception being thrown etc.), it is guaranteed that myVector object will be destructed and memory will be freed.
Vectors usually shouldn't be held as dynamically allocated pointers. They should just be member variables of your class, in most situations (though there are always exceptions). If they are pointers allocated by 'new', a simple 'delete' should work.
Vectors handle memory allocation for you, so you don't have to. However, if you want to truly free all the storage memory held in a vector, even if it's on the stack, there's a new function for that.
Because vector.reserve() only expands memory, but never shrinks it, in C++11 a new function was added for the purpose of freeing the reserved memory: vector.shrink_to_fit(). Implementations are free to do what they want with it, but you're explicitly asking the implementation to free the memory, so they should answer the request in a reasonable way.
You can truly clear a vector by going like this:
vector.clear(); //Erases the elements.
vector.shrink_to_fit(); //Erases the memory no longer needed by the elements.
The reason vectors hold onto their memory is performance, so you should only do this if you actually understand why they keep memory in reserve, and are willing to deal with (or know you won't have to deal with) any performance hits that result from micro-managing the memory.

How large is the attributes can a class object hold? how to determine the stack/heap limit?

I have a class that requires a large amount of memory.
class BigClass {
public:
    BigClass() {
        bf1[96000000-1] = 1;
    }
    double bf1[96000000];
};
I can only instantiate the class by "new"-ing an object in heap memory.
BigClass *c = new BigClass();
assert( c->bf1[96000000-1] == 1 );
delete c;
If I instantiate it without "new", I get a segmentation fault at runtime.
BigClass c; // SIGSEGV!
How can I determine the memory limit? or should I better always use "new"?
First of all, since you've entitled this C++ and not C, why are you using raw arrays? Instead may I suggest vector<double> or, if contiguous memory is causing problems, deque<double>, which relaxes the constraint on contiguous memory without giving up nearly constant-time lookup.
Using vector or deque may also alleviate other seg-fault issues which could plague your project at a later date, for instance overrunning the bounds of your array. If you convert to vector or deque you can use the .at(x) member function to retrieve and set values in your collection. Should you attempt to access out of bounds, that function will throw an exception (std::out_of_range) instead of silently corrupting memory.
The stack has a fixed size that depends on compiler options. See your compiler documentation to change the stack size for your executable.
Anyway, for big objects, prefer using new or, better, smart pointers like shared_ptr (from Boost, std::tr1, or std:: if you have a very recent compiler).
You shouldn't play that game ever. Your code could be called from another function or on a thread with a lower stack size limit and then your code will break nastily. See this closely related question.
If you're in doubt use heap-allocation (new) - either directly with smart pointers (like auto_ptr) or indirectly using std::vector.
There is no platform-independent way of determining the memory limit. For "large" amounts of memory, you're far safer allocating on the heap (i.e. using new); you can check for success by catching std::bad_alloc exceptions (or, if you use the std::nothrow form of new, by comparing the resulting pointer against NULL).
The way your class is designed is, as you discovered, quite fragile. Instead of always allocating your objects on the heap, instead your class itself should allocate the huge memory block on the heap, preferably with std::vector, or possibly with a shared_ptr if vector doesn't work for some reason. Then you don't have to worry about how your clients use the object, it's safe to put on the stack or the heap.
On Linux, in the Bash shell, you can check the stack size with ulimit -s. Variables with automatic storage duration will have their space allocated on the stack. As others have said, there are better ways of approaching this:
Use a std::vector to hold your data inside your BigClass.
Allocate the memory for bf1 inside BigClass's constructor and then free it in the destructor.
If you must have a large double[] member, allocate an instance of BigClass with some kind of smart pointer; if you don't need shared access something as simple as std::auto_ptr will let you safely construct/destroy your object:
std::auto_ptr<BigClass> myBigClass(new BigClass);
myBigClass->bf1; // your array