Have you overloaded operator new in C++?
If yes, why?
An interview question, for which I humbly request some of your thoughts.
We had an embedded system where new was only rarely allowed, and the memory could never be deleted, as we had to prove a maximum heap usage for reliability reasons.
We had a third party library developer who didn't like those rules, so they overloaded new and delete to work against a chunk of memory we allocated just for them.
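For illustration, a class-level overload along those lines might look roughly like this (a minimal sketch with invented names and an arbitrary arena size, not the actual library code):

#include <cstddef>
#include <new>

// Hypothetical reserve handed to the third-party library; 64 KiB is an arbitrary figure.
alignas(std::max_align_t) static unsigned char thirdparty_arena[64 * 1024];
static std::size_t arena_offset = 0;

struct ThirdPartyObject {
    static void* operator new(std::size_t size) {
        // Round up so every object is suitably aligned.
        size = (size + alignof(std::max_align_t) - 1) & ~(alignof(std::max_align_t) - 1);
        if (arena_offset + size > sizeof(thirdparty_arena))
            throw std::bad_alloc();   // the provable maximum has been exceeded
        void* p = thirdparty_arena + arena_offset;
        arena_offset += size;
        return p;
    }
    static void operator delete(void*) noexcept {
        // Deliberately a no-op: memory is never returned, matching the
        // "never deleted" rule described above.
    }
    // ... the library's actual members would go here ...
};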
Yes.
Overloading operator new gives you a chance to control where an object lives in memory. I did this because I knew some details about the lifetime of my objects, and wanted to avoid fragmentation on a platform which didn't have virtual memory.
You would overload new if you're using your own allocator, doing something fancy with reference counting, instrumenting for garbage collection, debugging object lifetimes or something else entirely; you're replacing the allocator for objects. I've personally had to do it to ensure certain objects get allocated on specific mmap'ed pages of memory.
Yes, for two reasons: Custom allocator, and custom allocation tracking.
Overloading operator new may look like a good idea at first glance if you want to do custom allocation for some reason (e.g. avoiding the memory fragmentation intrinsic to the C runtime allocator, or avoiding locks on memory-management calls in multithreaded programs). But when you get to the implementation you may realize that in most cases you want to pass some additional context to the call, for example a thread-specific heap for objects of a given size. Overloading new/delete simply doesn't work there, so eventually you may want to create your own facade to your custom memory management subsystem.
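A minimal sketch of the kind of facade meant here, where the caller passes the context explicitly instead of relying on a global operator new (ThreadHeap and the helper names are hypothetical):

#include <cstddef>
#include <new>      // placement new
#include <utility>  // std::forward

// Hypothetical per-thread heap type; the real subsystem would live elsewhere.
class ThreadHeap {
public:
    void* allocate(std::size_t size);
    void  deallocate(void* p) noexcept;
};

ThreadHeap& heap_for_current_thread();   // assumed lookup, e.g. via a thread_local

// The facade: construct T in memory drawn from an explicitly chosen heap.
template <typename T, typename... Args>
T* create_in(ThreadHeap& heap, Args&&... args) {
    void* raw = heap.allocate(sizeof(T));
    try {
        return new (raw) T(std::forward<Args>(args)...);
    } catch (...) {
        heap.deallocate(raw);
        throw;
    }
}

template <typename T>
void destroy_in(ThreadHeap& heap, T* obj) noexcept {
    if (obj) { obj->~T(); heap.deallocate(obj); }
}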
I found it very handy to overload operator new when writing Python extension code in C++. I wrapped the Python C-API code for allocation and deallocation in operator new and operator delete overloads, respectively – this allows for PyObject*-compatible structures that can be created with new MyType() and managed with predictable heap-allocation semantics.
It also allows for a separation of the allocation code (normally in the Python __new__ method) and the initialization code (in Python’s __init__) into, respectively, the operator new overloads and any constructors one sees fit to define.
Here’s a sample:
struct ModelObject {
    static PyTypeObject* type_ptr() { return &ModelObject_Type; }

    /// operator new performs the role of tp_alloc / __new__
    /// Not using the 'new size' value here
    void* operator new(std::size_t) {
        PyTypeObject* type = type_ptr();
        ModelObject* self = reinterpret_cast<ModelObject*>(
            type->tp_alloc(type, 0));
        if (self != NULL) {
            self->weakrefs = NULL;
            self->internal = std::make_unique<buffer_t>(nullptr);
        }
        return reinterpret_cast<void*>(self);
    }

    /// operator delete acts as our tp_dealloc
    void operator delete(void* voidself) {
        ModelObject* self = reinterpret_cast<ModelObject*>(voidself);
        PyObject* pyself = reinterpret_cast<PyObject*>(voidself);
        if (self->weakrefs != NULL) { PyObject_ClearWeakRefs(pyself); }
        self->cleanup();
        type_ptr()->tp_free(pyself);
    }

    /// Data members
    PyObject_HEAD
    PyObject* weakrefs = nullptr;
    bool clean = false;
    std::unique_ptr<buffer_t> internal;

    /// Constructors fill in data members, analogous to __init__
    ModelObject()
        :internal(std::make_unique<buffer_t>())
        {}

    explicit ModelObject(buffer_t* buffer)
        :clean(true)
        ,internal(std::make_unique<buffer_t>(buffer))
        {}

    ModelObject(ModelObject const& other)
        :internal(im::buffer::heapcopy(other.internal.get()))
        {}

    /// No virtual methods are defined to keep the struct as a POD
    /// ... instead of using destructors I defined a 'cleanup()' method:
    void cleanup(bool force = false) {
        if (clean && !force) {
            internal.release();
        } else {
            internal.reset(nullptr);
            clean = !force;
        }
    }

    /* … */
};
Background:
For hardware-dependent reasons, I need to allocate memory using the DOS Protected Mode Interface in order to communicate with some low-level interfaces (e.g. VESA BIOS Extensions).
Situation:
So I can overload new and delete for dynamically allocated memory, which is great, but I really want to overload the allocators for statically allocated memory. The project I'm working on is a rather old library and therefore requires a fair number of static global variables.
Question:
Is there some way I could overload the allocation process for these variables? If not, is there a template to dynamically allocate these variables that wouldn't require explicit allocation or deletion and be almost entirely transparent?
Sadly, there is no keyword to specify the type of memory allocation you wish to use, and there are no standard templates to force heap allocation. The best answer I could find was to make a small wrapper class that allocates and frees memory how you want and provides access to the pointer. It's nothing fancy, but here's some of my code.
template<typename T>
class mem
{
public:
mem(void) { m_data = reinterpret_cast<T*>(dos::malloc(sizeof(T))); }
~mem(void) { dos::free(m_data); m_data = nullptr; }
T* operator ->(void) noexcept { return m_data; }
operator T*(void) noexcept { return m_data; }
private:
T* m_data;
};
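For what it's worth, usage then looks roughly like this; VbeInfoBlock is a made-up placeholder for one of the VBE structures, the real layout comes from the spec:

// Hypothetical POD describing a VBE structure.
struct VbeInfoBlock { char signature[4]; unsigned short version; /* ... */ };

// A static global whose storage now comes from dos::malloc (during static
// initialization) instead of the normal data segment.
static mem<VbeInfoBlock> g_vbe_info;

void example_use()
{
    g_vbe_info->version = 0;          // operator-> gives member access
    VbeInfoBlock* raw = g_vbe_info;   // implicit conversion to T* for C-style APIs
    (void)raw;
}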
I was reading an article that stated that, due to something called RAII, you no longer need to write cleanup code.
What prompted this research was I am currently coding something that requires cleanup before exiting the function.
For example, I have created a file, and mapped a view of a file.
Normally, I'd just use goto or do {break;} while(false); to exit. However, is it true this is no longer necessary with C++11?
I.e. no more
if( fail ) {
UnmapViewOfFile(lpMapView);
CloseHandle(hFileMap);
CloseHandle(hFile);
}
every few lines of code?
Does the compiler automatically wrap this up once the function exits? It just seems hard to believe that it actually cleans up after function calls like the article said it did. (I may have misinterpreted it somehow.) What seems more likely is that it just cleans up instances of C++ library classes by calling their destructors.
EDIT: The article in question is from Wikipedia.
It doesn't necessarily state that it cleans up these function calls, but it does imply that it does so for C++ library objects (such as the FILE* objects returned by fopen, etc.).
Does it work for WinAPI too?
The C++ standard surely says nothing about the usage of Windows API functions like UnmapViewOfFile or CloseHandle. RAII is a programming idiom; you can use it or not, and it's a lot older than C++11.
One of the reasons why RAII is recommended is that it makes life easier when working with exceptions. Destructors will always safely release any resources - mostly memory, but also handles. For memory you have classes in the standard library, like unique_ptr and shared_ptr, but also vector and lots of others. For handles like those from the WinAPI, you must write your own, like:
class handle_ptr {
public:
    handle_ptr() {
        // acquire handle
    }
    ~handle_ptr() {
        // release handle
    }
};
Cleanup is still necessary, but due to the possibility of exceptions the code should not do cleanup simply by executing cleanup operations at the end of a function. That end may never be reached! Instead,
Do cleanup in destructors.
In C++11 it is particularly easy to do any kind of cleanup in a destructor without defining a custom class, since it's now much easier to define a scope guard class. Scope guards were invented by Petru Marginean, who published an article about them with Andrei Alexandrescu in DDJ. That original C++03 implementation was pretty complex.
In C++11, a bare bones scope guard class:
#include <functional>   // std::function
#include <utility>      // std::move

class Scope_guard
    : public Non_copyable
{
private:
    std::function<void()> f_;
public:
    void cancel() { f_ = []{}; }

    ~Scope_guard()
    { f_(); }

    Scope_guard( std::function<void()> f )
        : f_( std::move( f ) )
    {}
};
where Non_copyable provides move assignment and move construction, as well as default construction, but makes copy assignment and copy construction private.
Now, right after successfully acquiring some resource, you can declare a Scope_guard object that is guaranteed to clean up at the end of the scope, even in the face of exceptions or other early returns, like
Scope_guard unmapping( [&](){ UnmapViewOfFile(lpMapView); } );
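The same pattern covers the other two handles from the question; note that guards run in reverse order of declaration, so declare each one right after the corresponding acquisition succeeds (the guard names below are just my own):

Scope_guard closing_file( [&](){ CloseHandle(hFile); } );        // declared once CreateFile succeeds
Scope_guard closing_mapping( [&](){ CloseHandle(hFileMap); } );  // declared once the mapping succeeds
// ... and the unmapping guard from above, declared last, runs first.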
Addendum:
I should also mention the standard library smart pointers shared_ptr and unique_ptr, which take care of pointer ownership, calling a deleter when the number of owners goes to 0. As the names imply, they implement shared and unique ownership respectively. Both of them can take a custom deleter as an argument, but only shared_ptr supports calling the custom deleter with the original pointer value when the smart pointer is copied/moved to a base class pointer.
I should also mention the standard library container classes, in particular vector, which provides a dynamically sized copyable array with automatic memory management, and string, which provides much the same for the particular case of an array of char used to represent a string. These classes free you from having to deal directly with new and delete, and they get those details right.
So in summary,
use standard library and/or 3rd party containers when you can,
otherwise use standard library and/or 3rd party smart pointers,
and if even that doesn't cut it for your cleanup needs, define custom classes that do cleanup in their destructors.
As #zero928 said in the comment, RAII is a way of thinking. There is no magic that cleans up instances for you.
With RAII, you can use the object lifecycle of a wrapper to regulate the lifecycle of legacy types such as you describe. The shared_ptr<> template coupled with an explicit "free" function can be used as such a wrapper.
As far as I know, C++11 won't take care of cleanup unless you use facilities that do. For example, you could put this cleanup code into the destructor of a class and manage an instance of it with a smart pointer. Smart pointers delete the object they own when it is no longer used or shared. If you make a unique_ptr and it goes out of scope, it automatically calls delete on its target; hence your destructor is called and you don't need to delete/destroy/clean up yourself.
See http://www.cplusplus.com/reference/memory/unique_ptr/
This is what C++11 newly offers for automatic cleanup. Of course an ordinary class instance going out of scope calls its destructor, too.
No!
RAII is not about leaving clean-up aside, but doing it automatically. The clean-up can be done in a destructor call.
A pattern could be:
void f() {
ResourceHandler handler(make_resource());
...
}
Where the ResourceHandler is destructed (and does the clean-up) at the end of the scope or if an exception is thrown.
The WIN32 API is a C API - you still have to do your own clean up. However nothing stops you from writing C++ RAII wrappers for the WIN32 API.
Example without RAII:
void foo()
{
HANDLE h = CreateFile(_T("C:\\File.txt"), FILE_READ_DATA, FILE_SHARE_READ,
NULL, OPEN_ALWAYS, 0, NULL);
if ( h != INVALID_HANDLE_VALUE )
{
CloseHandle(h);
}
}
And with RAII:
class smart_handle
{
public:
explicit smart_handle(HANDLE h) : m_H(h) {}
~smart_handle() { if (m_H != INVALID_HANDLE_VALUE) CloseHandle(m_H); }
private:
HANDLE m_H;
// This is a basic example and could be implemented much more elegantly!
// (Maybe a template param for the "valid" handle value, since sometimes 0 and
// sometimes -1 / INVALID_HANDLE_VALUE is used; implement proper copying/moving,
// etc., or use std::unique_ptr/std::shared_ptr with a custom deleter as
// mentioned in the comments below.)
};
void foo()
{
smart_handle h(CreateFile(_T("C:\\File.txt"), FILE_READ_DATA, FILE_SHARE_READ,
NULL, OPEN_ALWAYS, 0, NULL));
// The destructor of the smart_handle class calls CloseHandle if the handle is valid
}
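As the comment above hints, with C++11 you could also skip the hand-written wrapper and let std::unique_ptr own the HANDLE via a custom deleter. A rough sketch (assuming INVALID_HANDLE_VALUE is the failure value for the API in question):

#include <memory>
#include <type_traits>
#include <windows.h>

struct handle_deleter {
    void operator()(HANDLE h) const noexcept {
        if (h != NULL && h != INVALID_HANDLE_VALUE)
            CloseHandle(h);
    }
};
// HANDLE is void*, so remove_pointer gives the pointee type unique_ptr expects.
using unique_handle = std::unique_ptr<std::remove_pointer<HANDLE>::type, handle_deleter>;

void foo()
{
    unique_handle h(CreateFile(TEXT("C:\\File.txt"), FILE_READ_DATA, FILE_SHARE_READ,
                               NULL, OPEN_ALWAYS, 0, NULL));
    // CloseHandle runs automatically when h leaves scope (skipped if the handle is invalid).
}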
RAII can be used in C++98 or C++11.
I really liked the explanation of RAII in The C++ Programming Language, Fourth Edition
Specifically, sections 3.2.1.2, 5.2 and 13.3 explain how it works for managing leaks in the general context, but also the role of RAII in properly structuring your code with exceptions.
The two main reasons for using RAII are:
Reducing the use of naked pointers that are prone to causing leaks.
Reducing leaks in the cases of exception handling.
RAII works on the concept that each constructor should secure one and only one resource. Destructors are guaranteed to be called if a constructor completes successfully (i.e. during stack unwinding when an exception is thrown). Therefore, if you have 3 types of resources to acquire, you should have one class per type of resource (classes A, B, C) and a fourth aggregate type (class D) that acquires the other 3 resources (via A, B and C's constructors) in D's constructor initialization list.
So, if resource 1 (class A) succeeded in being acquired, but 2 (class B) failed and threw, resource 3 (class C) would never be acquired. Because resource 1 (class A)'s constructor had completed, its destructor is guaranteed to be called. However, none of the other destructors (B, C or D) will be called.
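A bare skeleton of that layout, using the placeholder names A, B, C and D from above (the file resource in A is just an invented example):

#include <cstdio>
#include <stdexcept>

// Each class secures exactly one resource in its constructor and releases it
// in its destructor. (Copying is omitted here; a real class should disable it.)
class A {
    std::FILE* f_;
public:
    explicit A(const char* path) : f_(std::fopen(path, "r")) {
        if (!f_) throw std::runtime_error("A: fopen failed");
    }
    ~A() { std::fclose(f_); }
};

class B { /* acquires resource 2 the same way */ };
class C { /* acquires resource 3 the same way */ };

// The aggregate acquires all three via its member initializer list.
// If B's constructor throws, A's destructor still runs; C is never constructed.
class D {
    A a_;
    B b_;
    C c_;
public:
    explicit D(const char* path) : a_(path), b_(), c_() {}
};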
It does NOT clean up FILE*.
If you open a file, you must close it. I think you may have misread the article slightly.
For example:
class RAII
{
private:
char* SomeResource;
public:
RAII() : SomeResource(new char[1024]) {} //allocated 1024 bytes.
~RAII() {delete[] SomeResource;} //cleaned up allocation.
RAII(const RAII& other) = delete;
RAII(RAII&& other) = delete;
RAII& operator = (RAII &other) = delete;
};
The reason it is an RAII class is because all resources are allocated in the constructor or allocator functions. The same resource is automatically cleaned up when the class is destroyed because the destructor does that.
So creating an instance:
void NewInstance()
{
RAII instance; //creates an instance of RAII which allocates 1024 bytes on the heap.
} //instance is destroyed as soon as this function exits and thus the allocation is cleaned up
//automatically by the instance destructor.
See the following also:
void Break_RAII_And_Leak()
{
RAII* instance = new RAII(); //breaks RAII because instance is leaked when this function exits.
}
void Not_RAII_And_Safe()
{
RAII* instance = new RAII(); //fine..
delete instance; //fine..
//however, you've done the deleting and cleaning up yourself / manually.
//that defeats the purpose of RAII.
}
Now take for example the following class:
class RAII_WITH_EXCEPTIONS
{
private:
char* SomeResource;
public:
RAII_WITH_EXCEPTIONS() : SomeResource(new char[1024]) {} //allocated 1024 bytes.
void ThrowException() {throw std::runtime_error("Error.");}
~RAII_WITH_EXCEPTIONS() {delete[] SomeResource;} //cleaned up allocation.
RAII_WITH_EXCEPTIONS(const RAII_WITH_EXCEPTIONS& other) = delete;
RAII_WITH_EXCEPTIONS(RAII_WITH_EXCEPTIONS&& other) = delete;
RAII_WITH_EXCEPTIONS& operator = (RAII_WITH_EXCEPTIONS &other) = delete;
};
and the following functions:
void RAII_Handle_Exception()
{
RAII_WITH_EXCEPTIONS RAII; //create an instance.
RAII.ThrowException(); //throw an exception.
//Even though an exception was thrown above,
//RAII's destructor is still called
//and the allocation is automatically cleaned up.
}
void RAII_Leak()
{
RAII_WITH_EXCEPTIONS* RAII = new RAII_WITH_EXCEPTIONS();
RAII->ThrowException();
//Bad because not only is the destructor not called, it also leaks the RAII instance.
}
void RAII_Leak_Manually()
{
RAII_WITH_EXCEPTIONS* RAII = new RAII_WITH_EXCEPTIONS();
RAII->ThrowException();
delete RAII;
//Bad because you manually created a new instance, it throws and delete is never called.
//If delete was called, it'd have been safe but you've still manually allocated
//and defeated the purpose of RAII.
}
fstream always did this. When you create an fstream instance on the stack, it opens a file. When the calling function exits, the fstream is automatically closed.
The same is NOT true for FILE* because FILE* is NOT a class and does NOT have a destructor. Thus you must close the FILE* yourself!
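That said, nothing stops you from giving a FILE* destructor-like behaviour yourself; a minimal sketch using std::unique_ptr with fclose as a custom deleter (the file name is just an example):

#include <cstdio>
#include <memory>

void read_file()
{
    std::unique_ptr<std::FILE, int(*)(std::FILE*)> file(std::fopen("data.txt", "r"),
                                                        &std::fclose);
    if (!file) return;                 // fopen failed, nothing to close
    // ... read via file.get() ...
}   // std::fclose runs here automatically, even if an exception is thrown above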
EDIT: As pointed out in the comments below, there was a fundamental problem with the code above. It was missing a copy constructor, a move constructor and an assignment operator.
Without these, trying to copy the class would create a shallow copy of its inner resource (the pointer). When the copies are destructed, delete would be called on the same pointer twice! The code has been edited to disallow copying and moving.
For a class to conform with the RAII concept, it must follow the rule of three: What is the copy-and-swap idiom?
If you do not want to add copying or moving, you can simply use delete as shown above or make the respective functions private.
I am writing a performance-critical application in which I am creating a large number of objects of similar type to place orders. I am using boost::singleton_pool for allocating memory. Finally my class looks like this.
class MyOrder{
std::vector<int> v1_;
std::vector<double> v2_;
std::string s1_;
std::string s2_;
public:
MyOrder(const std::string &s1, const std::string &s2): s1_(s1), s2_(s2) {}
~MyOrder(){}
static void * operator new(size_t size);
static void operator delete(void * rawMemory) throw();
static void operator delete(void * rawMemory, std::size_t size) throw();
};
struct MyOrderTag{};
typedef boost::singleton_pool<MyOrderTag, sizeof(MyOrder)> MyOrderPool;
void* MyOrder:: operator new(size_t size)
{
if (size != sizeof(MyOrder))
return ::operator new(size);
while(true){
void * ptr = MyOrderPool::malloc();
if (ptr != NULL) return ptr;
std::new_handler globalNewHandler = std::set_new_handler(0);
std::set_new_handler(globalNewHandler);
if(globalNewHandler) globalNewHandler();
else throw std::bad_alloc();
}
}
void MyOrder::operator delete(void * rawMemory) throw()
{
if(rawMemory == 0) return;
MyOrderPool::free(rawMemory);
}
void MyOrder::operator delete(void * rawMemory, std::size_t size) throw()
{
if(rawMemory == 0) return;
if(size != sizeof(MyOrder)) {
    ::operator delete(rawMemory);
    return;
}
MyOrderPool::free(rawMemory);
}
I recently posted a question about the performance benefit of using boost::singleton_pool. When I compared the performance of boost::singleton_pool and the default allocator, I did not see any performance benefit. When someone pointed out that my class had members of type std::string, whose allocation was not being governed by my custom allocator, I removed the std::string variables and reran the tests. This time I noticed a considerable performance boost.
Now, in my actual application, I cannot get rid of member variables of type std::string and std::vector. Should I be using boost::pool_allocator with my std::string and std::vector member variables?
boost::pool_allocator allocates memory from an underlying boost::singleton_pool. Will it matter if different member variables use the same memory pool? (I have more than one std::string/std::vector member in my MyOrder class, and I am also employing pools for classes other than MyOrder which contain std::string/std::vector members.) If it does, how do I make sure that they do, one way or the other?
Now, in my actual application, I cannot get rid of member variables of type std::string and std::vector. Should I be using boost::pool_allocator with my std::string and std::vector member variables?
I have never looked into that part of Boost, but if you want to change where strings allocate their memory, you need to pass a different allocator to std::basic_string<> at compile time. There is no other way. However, you need to be aware of the downsides: for example, such strings will no longer be assignable to std::string. (Although employing c_str() would work, it might impose a small performance penalty.)
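If you do go down that route, the typedefs would look roughly like this; whether it actually buys you anything is exactly the open question, so treat it as a sketch:

#include <string>
#include <vector>
#include <boost/pool/pool_alloc.hpp>

// Strings and vectors whose dynamic memory comes from Boost's singleton pools
// instead of the general-purpose allocator.
typedef std::basic_string<char, std::char_traits<char>,
                          boost::pool_allocator<char> > pool_string;
typedef std::vector<int, boost::pool_allocator<int> >   pool_int_vector;

// Note: a pool_string is a distinct type, so it no longer converts to
// std::string; interoperating means going through c_str()/assign().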
boost::pool_allocator allocates memory from an underlying boost::singleton_pool. Will it matter if different member variables use the same memory pool? (I have more than one std::string/std::vector member in my MyOrder class, and I am also employing pools for classes other than MyOrder which contain std::string/std::vector members.) If it does, how do I make sure that they do, one way or the other?
The whole point of a pool is to put more than one object into it. If it were just one, you wouldn't need a pool. So, yes, you can put several objects into it, including the dynamic memory of several std::string objects.
Whether this gets you any performance gains, however, remains to be seen. You use a pool because you have reasons to assume that it is faster than the general-purpose allocator (rather than using it to, e.g., allocate memory from a specific area, like shared memory). Usually such a pool is faster because it can make assumptions about the size of the objects allocated within it. That's certainly true for your MyOrder class: its objects always have the same size, and anything else (e.g. larger derived classes) won't be allocated in the pool anyway.
That's different for std::string. The whole point of using a dynamically allocating string class is that it adapts to any string length. The memory chunks needed for that are of differing sizes (otherwise you could just use char arrays instead). I see little room for a pool allocator to improve over the general-purpose allocator there.
On a side note: Your overloaded operator new() returns the result of invoking the global one, but your operator delete just passes anything coming its way to that pool's free(). That seems very suspicious to me.
Using a custom allocator for the std::string/std::vector in your class would work (assuming the allocator is correct) - but only performance testing will see if you really see any benefits from it.
Alternatively, if you know that the std::string/std::vector will have upper limits, you could implement a thin wrapper around a std::array (or a normal array if you don't have C++11) that makes it a drop-in replacement.
Even if the size is unbounded, if there is some size that most values would be smaller than, you could extend the std::array-based implementation above to grow by allocating from your pooled allocator if it fills up.
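A sketch of the kind of thin wrapper meant here, as a fixed-capacity stand-in for small strings with a known upper bound (the growable, pool-backed variant is left out for brevity):

#include <array>
#include <cstddef>
#include <cstring>
#include <stdexcept>

// Hypothetical fixed-capacity replacement for small strings with a known upper bound.
template <std::size_t N>
class fixed_string {
    std::array<char, N + 1> buf_;   // +1 for the terminating '\0'
    std::size_t             len_;
public:
    fixed_string() : len_(0) { buf_[0] = '\0'; }
    fixed_string(const char* s) : len_(std::strlen(s)) {
        if (len_ > N) throw std::length_error("fixed_string: too long");
        std::memcpy(buf_.data(), s, len_ + 1);
    }
    const char* c_str() const { return buf_.data(); }
    std::size_t size()  const { return len_; }
};

// e.g. MyOrder could hold fixed_string<31> members instead of std::string,
// so the whole object (including its text) comes out of the singleton_pool.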
I know you can overload operator new. When you do, your method gets sent a size_t parameter by default. However, is it possible to send the size_t parameter, as well as additional user-provided parameters, to the overloaded operator new method?
For example
int a = 5;
Monkey* monk = new Monkey(a);
Because I want to override new operator like this
void* Monkey::operator new(size_t size, int a)
{
...
}
Thanks
EDIT: Here's what I want to accomplish:
I have a chunk of virtual memory allocated at the start of the app (a memory pool). All objects that inherit my base class will inherit its overloaded new operator.
The reason I want to sometimes pass an argument in overloaded new is to tell my memory manager if I want to use the memory pool, or if I want to allocate it with malloc.
Invoke new with that additional operand, e.g.
Monkey *amonkey = new (1275) Monkey(a);
Addenda:
A practical example of passing argument[s] to your new operator is given by Boehm's garbage collector, which enables you to code
Monkey *acollectedmonkey = new(UseGc) Monkey(a);
and then you don't have to bother about delete-ing acollectedmonkey (assuming its destructor doesn't do weird things; see this answer). These are the rare situations where you want to pass an explicit Allocator argument to template collections like std::vector or std::map.
When using memory pools, you often want to have some MemoryPool class, and pass instances of that class (or pointers to them) to your new and delete operations. For readability reasons, I wouldn't recommend referencing memory pools by some obscure integer number.
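Sketched out, that might look like the following; MemoryPool and its member functions are placeholders for whatever your memory manager actually provides:

#include <cstddef>
#include <new>

// Hypothetical pool interface; the real one would live in your memory manager.
class MemoryPool {
public:
    void* allocate(std::size_t size);
    void  deallocate(void* p) noexcept;
};

class Monkey {
public:
    explicit Monkey(int a);

    // Placement-style overload selecting a pool explicitly:
    //   Monkey* m = new (pool) Monkey(a);
    static void* operator new(std::size_t size, MemoryPool& pool) {
        return pool.allocate(size);
    }
    // Matching placement delete: called only if the constructor throws
    // during `new (pool) Monkey(a)`. A plain `delete m` still goes through the
    // ordinary operator delete, so pooled objects need an explicit destroy path.
    static void operator delete(void* p, MemoryPool& pool) noexcept {
        pool.deallocate(p);
    }
    // Plain overloads falling back to the global allocator:
    static void* operator new(std::size_t size) { return ::operator new(size); }
    static void operator delete(void* p) noexcept { ::operator delete(p); }
};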
In the following code, there is a memory leak if Info::addPart1() is called multiple times by accident:
typedef struct
{
}part1;
typedef struct
{
}part2;
class Info
{
private:
part1* _ptr1;
part2* _ptr2;
public:
Info()
{
_ptr1 = _ptr2 = NULL;
}
~Info()
{
delete _ptr1;
delete _ptr2;
}
void addPart1()
{
_ptr1 = new part1;
}
void addPart2()
{
_ptr2 = new part2;
}
};
Info _wrapper;
_wrapper.addPart1();
_wrapper.addPart2();
Is there a C++ idiom to handle this problem ?
I could rewrite addPart1 and addPart2 like this to defend against the memory leak:
void addPart1()
{
if(_ptr1 != NULL) delete _ptr1;
_ptr1 = new part1;
}
Is that a good solution?
Use a smart pointer such as boost::shared_ptr or boost::scoped_ptr to manage the raw pointer. auto_ptr is tricky to work with; you need to pay attention when using it.
You should read about the smart pointer idiom and about RAII.
I suggest taking a look into the new technical report (TR1).
Take a good look here and here.
Also take a look at boost's smart pointers.
I recommend loki-lib's SmartPtr or StrongPtr classes.
Bear with me here...
In the distant past, programmers used constructs like "jump" and "goto" for flow control. Eventually common patterns emerged and constructs like for, do/while, function call and try/catch emerged, and the spaghetti was tamed. Those named constructs give a lot more information about intent than a generic goto, where you have to examine the rest of the code for context to understand what it's doing. In the unlikely event you see a goto in modern code by a competent coder, you know something pretty unusual is going on.
In my opinion, "delete" is the "goto" of memory management. There are enough smart pointer and container classes available to the modern developer that there's very little reason for most code to contain a single explicit delete (other than in the smart pointer implementations of course). When you see a plain "delete" you get no information about intent; when you see a scoped_ptr/auto_ptr/shared_ptr/ptr_container you get a lot more.
I.e., the idiom should be to aspire to write delete-free code by using appropriate smart pointer types (as recommended by just about every other answer here).
Update 2013-01-27: I note that Herb Sutter's excellent talk on C++11 includes some similar sentiments regarding delete-free code.
Checking for a non-null pointer before delete is redundant: delete on a null pointer is guaranteed to be a no-op.
A common way to handle this is
delete _ptr1;
_ptr1 = 0;
_ptr1 = new part1;
Zeroing the pointer ensures there won't be any dangling pointers, for example in case the construction of part1 throws an exception.
Your suggested fix will work (though of course you're still at risk for a memory leak if addPart2() is called twice). A much safer approach is to use scoped_ptr from the Boost library collection (www.boost.org), which is a container that acts like a pointer, but guarantees that its target is deleted when the container is destroyed. Your revised class would then look like
class Info
{
private:
boost::scoped_ptr<part1> _ptr1;
boost::scoped_ptr<part2> _ptr2;
public:
Info() {} // scoped_ptrs default to null
// You no longer need an explicit destructor- the implicit destructor
// works because the scoped_ptr destructor handles deletion
void addPart1()
{
_ptr1.reset(new part1);
}
void addPart2()
{
_ptr2.reset(new part2);
}
};
As a general principle, it's a good idea to avoid writing code that requires you to explicitly delete pointers. Instead, try to use containers that do it automatically at the appropriate time. Boost is a good resource for this kind of thing.
All this assumes you have a reason why _ptr1 and _ptr2 need to be pointers. If not, it's much better to make them ordinary objects; then you get memory management for free.
Use construction is initialization instead.
class Info
{
private:
part1* _ptr1;
part2* _ptr2;
public:
Info() : _ptr1(new part1), _ptr2(new part2)
{
}
~Info()
{
delete _ptr1;
delete _ptr2;
}
};
But in this case you might as well create the parts on the stack, so no new and delete is required.
class Info
{
private:
part1 _part1;
part2 _part2;
public:
Info()
{
}
~Info()
{
}
};
But I guess you want the pointers to be lazily created; in that case I wouldn't suggest creating public class methods that take care of the initialization. This should be handled internally by the class, when the class needs to allocate them.
If you want it to have a lazy behavior you might consider this:
void addPart1()
{
if(_ptr1 == NULL) {
_ptr1 = new part1;
}
}
The way you suggested is also an alternative, depending on how you want it to behave.
Other people have suggested better ways to do it, but we really don't know why you made it this way or how the surrounding code works...
I agree with the group that you should use some kind of smart pointer.
If you do decide to continue with bare pointers, be aware that your class above does not have a copy constructor defined by you. Therefore, the C++ compiler has defined one for you that will just do a simple copy of all the pointers, which will lead to a double delete. You'll need to define your own copy constructor (or at least declare a private stub copy constructor if you don't think you need one).
Info(const Info &rhs)
{
    _ptr1 = rhs._ptr1 ? new part1(*rhs._ptr1) : NULL;
    _ptr2 = rhs._ptr2 ? new part2(*rhs._ptr2) : NULL;
}
You will have a similar problem with the default assignment operator.
If you choose the correct smart pointer, these problems will go away. :)
Option 1: Use Java :)
Option 2: Use auto_ptr
std::auto_ptr<part1> _ptr1;
std::auto_ptr<part2> _ptr2;
public:
void addPart1()
{
    _ptr1 = std::auto_ptr<part1>(new part1);
}
...
// no destructor is needed
You should take a look at RAII
On the far extreme of possible ways to deal with memory leaks is the Boehm garbage collector, a conservative mark & sweep collector. Interestingly, this can be used in addition to all the good advice offered in the other answers.