Overriding operator new/delete in a derived class - C++

I have a stateless, abstract base class from which various concrete classes inherit. Some of these derived classes are stateless as well. Because many of them are created during a run, I'd like to save memory and overhead by having all stateless derived classes emulate a singleton, by overriding operator new()/delete(). A simplified example would look something like this:
#include <memory>

struct Base {
    virtual ~Base() {}
protected:
    Base() {} // prevent concrete Base objects
};

struct D1 : public Base { // stateful object--default behavior
    int dummy;
};

struct D2 : public Base { // stateless object--don't allocate memory
    void* operator new(size_t size)
    {
        static D2 d2;
        return &d2;
    }
    void operator delete(void* p) {}
};

int main() {
    Base* p1 = new D1();
    Base* p2 = new D1();
    Base* s1 = new D2();
    Base* s2 = new D2();
    delete p1;
    delete p2;
    delete s1;
    delete s2;
    return 0;
}
This example doesn't work: delete s2; fails because delete s1; called ~Base(), which deallocated the shared Base in d2. This can be addressed by adding the same trick with new/delete overloading to Base. But I'm not sure this is the cleanest solution, or even a correct one (valgrind doesn't complain, FWIW). I'd appreciate advice or critique.
edit: actually, the situation is worse. The Base class in this example isn't abstract, as I claimed. If it's made abstract, through the addition of a pure virtual method, then I can no longer apply the new/delete overriding trick, because I cannot have a static variable of type Base. So I don't have any solution for this problem!

You just can't do that - it would violate the "object identity" requirement, which states that distinct objects must have distinct addresses. You have to allocate a distinct memory block for each object. This can still be done rather fast if you override operator new to use a fast block allocator tailored for objects of a fixed size.
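For illustration, a minimal sketch of that idea, reusing Base and D1 from the question; the FixedPool type below is hypothetical, not a standard facility:

#include <cstddef>
#include <new>

struct Base {
    virtual ~Base() {}
protected:
    Base() {}
};

// Hypothetical fixed-size pool: recycles freed blocks through an intrusive
// free list and falls back to the global heap when the list is empty.
// Assumes each block is at least sizeof(Node) bytes, which holds for D1.
class FixedPool {
    struct Node { Node* next; };
    Node* free_list_ = nullptr;
public:
    void* allocate(std::size_t size) {
        if (free_list_) {                  // reuse a previously freed block
            Node* n = free_list_;
            free_list_ = n->next;
            return n;
        }
        return ::operator new(size);       // otherwise take a fresh block
    }
    void deallocate(void* p) {
        if (!p) return;
        Node* n = static_cast<Node*>(p);   // push the block back for reuse
        n->next = free_list_;
        free_list_ = n;
    }
};

struct D1 : public Base {                  // stateful class: every D1 keeps its own address
    int dummy;
    static void* operator new(std::size_t size) { return pool().allocate(size); }
    static void operator delete(void* p) { pool().deallocate(p); }
private:
    static FixedPool& pool() { static FixedPool p; return p; }
};

Each new D1 still yields a distinct object, but repeated new/delete cycles reuse blocks instead of hitting the general-purpose allocator every time.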

I would say the best solution here is to make your stateless derived class an actual singleton. Make your derived constructor private and provide a static Base* getInstance() method that either creates the required object or returns the static instance. That way the only way to get a D2 object is via this method, since calling new D2 would be illegal.
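A minimal sketch of that suggestion, applied to the stateless D2 from the question (the getInstance name follows the answer's wording):

struct Base {
    virtual ~Base() {}
protected:
    Base() {}
};

struct D2 : public Base {
    static Base* getInstance() {
        static D2 instance;        // constructed once, on first use
        return &instance;
    }
private:
    D2() {}                        // `new D2` outside the class no longer compiles
};

// Usage: Base* s1 = D2::getInstance();
// Do not delete the returned pointer; it refers to a static object.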

Related

Must I either mark the destructor of the base class as virtual, or mark it as protected?

As per this article, which says that [emphasis mine]:
Making base class destructor virtual guarantees that the object of derived class is destructed properly, i.e., both base class and derived class destructors are called.
As a guideline, any time you have a virtual function in a class, you should immediately add a virtual destructor (even if it does nothing). This way, you ensure against any surprises later.
I think even if your base class has no virtual function, you should either add a virtual destructor or mark the destructor of the base class as protected. Otherwise, you may face a memory leak when deleting a derived instance through a base pointer. Am I right?
For example, here is a demo code snippet:
#include <iostream>

class Base {
public:
    Base() {}
    ~Base() { std::cout << "~Base()" << std::endl; }
};

class Derived : public Base {
private:
    double val;
public:
    Derived(const double& _val) : val(_val) {}
    ~Derived() { std::cout << "~Derived()" << std::endl; } // It would not be called
};

void do_something() {
    Base* p = new Derived{1};
    delete p;
}

int main()
{
    do_something();
}
Here is the output of the said code snippet:
~Base()
Could anybody shed some light on this matter?
The behavior of the program in the question is undefined. It deletes an object of a derived type through a pointer to its base type and the base type does not have a virtual destructor. So don't do that.
Some people like to write code that has extra overhead in order to "ensure against any surprises later". I prefer to write code that does what I need and to document what it does. If I decide later that I need it to do more, I can change it.
This question leads to a bunch of other questions about whether a programmer should always be super safe by protecting their code even against currently non-existent problems.
In your current code, the Derived class only adds a trivial double to its base class and a rather useless destructor that only contains a trace print. If you delete an object through a pointer to the base class, the Derived destructor will not be called, but here that is harmless. Furthermore, as you were told in the comments, using polymorphism (converting a pointer to a base-class pointer) with no virtual functions does not really make sense.
Long story made short, if you have a class hierarchy with no virtual functions, users are aware of it and never delete an object through a pointer to its base class. So you have no strong reason to make the destructor virtual or protected. But IMHO, you should at least leave a comment on the base class to warn future maintainers about that possible problem.
Without a virtual destructor your call to delete is arguably wrong. It should be:
void do_something() {
    Base* p = new Derived{1};
    Derived* t = dynamic_cast<Derived*>(p);
    if (t) {
        delete t;
    } else {
        delete p;
    }
}
You can use up/down casting of objects to store them in a common array or vector and still be able to call methods of the derived objects, all without a virtual destructor. It's usually a sign of bad design, but it is legal C++ code. The cost is that you have to cast back to the original type before delete, as above.
Note: The dynamic_cast can only be done when Base has at least one virtual function. And if you have a virtual function, you should just add the virtual destructor.
And if you don't need to dynamic_cast anywhere, then show me a case where you can't use an array of Base instead of Base*.
The point of making the destructor of the base class protected, I guess, is to generate an error when someone deletes through a Base*, so you can then make the destructor virtual. That means you don't pay the overhead of a virtual destructor until you actually need it.
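A short sketch of that trade-off: with a protected, non-virtual destructor in the base, deleting through a Base* is rejected at compile time, while deleting the most-derived type still works.

struct Base {
protected:
    ~Base() {}                         // non-virtual, but unreachable from outside code
};

struct Derived : Base {
};

int main() {
    Derived* d = new Derived;
    delete d;                          // fine: Derived's implicit destructor is public
    Base* b = new Derived;
    // delete b;                       // error: ~Base() is protected, caught at compile time
    delete static_cast<Derived*>(b);   // cast back to the real type to destroy it
}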
FWIW, it's happening because you delete the object through a Base pointer. If you really want to hold a reference/pointer of only the base type, there are some alternatives.
void do_something() {
    Base&& b = Derived{1};   // the temporary's lifetime is extended to the end of the
                             // scope, and ~Derived() runs when it ends
}

void do_something() {
    Derived d{1};            // an automatic object is always destroyed as a Derived
    Base* p = &d;            // pointing or referring to it through Base is fine
    Base& b = d;
}

void do_something() {
    // OK: std::make_shared records a deleter for Derived, so ~Derived() runs
    // even though Base has no virtual destructor (requires #include <memory>)
    std::shared_ptr<Base> sb = std::make_shared<Derived>(1);
}

// NOTE: `std::unique_ptr` does not work here: it deletes through Base*,
// which is undefined behavior without a virtual destructor
void do_something() {
    std::unique_ptr<Base> ub = std::make_unique<Derived>(1); // warning: do not do this
}

Polymorphic pointer change at run time

I am really confused about polymorphic pointers. I have 2 classes derived from an interface, as shown in the code below.
#include <iostream>
using namespace std;

class Base {
public:
    virtual ~Base() { }
    virtual void addTest() = 0;
};

class B : public Base {
public:
    B() {}
    ~B() {}
    void addTest() {
        cout << "Add test B\n";
    }
};

class C : public Base {
public:
    C() {}
    ~C() {}
    void addTest() {
        cout << "Add test C\n";
    }
private:
    void deleteTest() {
    }
};

int main()
{
    Base *base = new B();
    base->addTest();
    base = new C();
    base->addTest();
    return 0;
}
I want to change what the pointer points to according to a condition at run time, so I can use the same pointer in different kinds of scenarios.
Derived classes are different from each other, so what happens in memory when the polymorphic pointer object changes?
If that usage is not good practice, how can I change the polymorphic pointer object dynamically at run time?
It's perfectly fine to change what a pointer points to. A Base* is not an instance of Base, it is a pointer that points to an instance of a Base (or something derived from it -- in this case B or C).
Thus in your code, base = new B() sets it to point to a new instance of a B, and then base = new C() sets it to point to a new instance of a C.
Derived classes are different from each other, so what happens in memory when the polymorphic pointer object changes?
Because Base* points to an instance of a Base, all this is doing is changing which instance (or derived instance) the Base* points to. In effect, it just changes the address stored in the pointer.
From that Base* pointer, you still have access to anything defined in that Base class -- which still allows polymorphic calls to functions satisfied by derived types if the function is defined as virtual.
The exact mechanism for how this is dispatched to derived types is technically an implementation detail of the language, but it is generally done through dynamic dispatch using a "v-table" (virtual table). This is additional type information stored alongside any class that contains virtual functions (it's conceptually just a struct of function pointers, where the function pointers are filled in by the concrete types).
See: Why do we need a virtual table? for more information on vtables.
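Purely as a mental model (this is not how you would write real code, and actual vtable layout is an implementation detail), the dispatch can be pictured like this:

#include <iostream>

// Conceptual model: each polymorphic object carries a hidden pointer to a
// per-class table of function pointers, and a virtual call indexes that table.
struct ConceptualVTable {
    void (*addTest)();
};

void b_addTest() { std::cout << "Add test B\n"; }
void c_addTest() { std::cout << "Add test C\n"; }

const ConceptualVTable b_vtable{ b_addTest };
const ConceptualVTable c_vtable{ c_addTest };

struct Object {
    const ConceptualVTable* vptr;   // set when the object is constructed
};

int main() {
    Object b{ &b_vtable };
    Object c{ &c_vtable };
    Object* p = &b;
    p->vptr->addTest();             // "virtual call": indirect jump through the table
    p = &c;
    p->vptr->addTest();             // same expression, different function
}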
What is problematic, however, is the use of new here. new allocates memory that must be cleaned up with delete to avoid a memory leak. By doing the following:
Base *base = new B();
base->addTest();
base = new C(); // overwriting base without deleting the old instance
base->addTest();
The B object's destructor is never run, no resources are cleaned up, and the memory for B itself is never reclaimed. This should be:
Base *base = new B();
base->addTest();
delete base;
base = new C(); // safe now: the old B instance was deleted on the line above
base->addTest();
delete base;
Or, better yet, this should be using smart pointers like std::unique_ptr to do this for you. In that case you don't use new and delete explicitly; you use std::make_unique for allocation, and the destructor automatically cleans up for you:
auto base = std::make_unique<B>();
base->addTest();
base = std::make_unique<C>();   // destroys the old instance before reassigning
base->addTest();
This is the recommended, modern way to write dynamic allocations.
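For completeness, here is the whole example with std::unique_ptr; only the includes, the override keywords, and the main wrapper are added (std::make_unique requires C++14):

#include <iostream>
#include <memory>

class Base {
public:
    virtual ~Base() {}
    virtual void addTest() = 0;
};

class B : public Base {
public:
    void addTest() override { std::cout << "Add test B\n"; }
};

class C : public Base {
public:
    void addTest() override { std::cout << "Add test C\n"; }
};

int main() {
    std::unique_ptr<Base> base = std::make_unique<B>();
    base->addTest();
    base = std::make_unique<C>();   // ~B() runs here, before the reassignment
    base->addTest();
    // ~C() runs automatically when base goes out of scope
}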

Using "rule of zero" when I have pointers for polymorphism

For the "rule of zero", I understand that I want to separate data management out into simple classes implementing rule of 3, rule of 5, whatever, so that the the more complicated classes can use constructors, assignment operators, etc, as automatically provided.
How does this work when a class member has to be a pointer because of polymorphism?
E.g., suppose I have a class
class MyClass {
private:
    s_array<int> mynumbers;
    s_array<double> mydoubles;
    Base* object;
    ...
};
Here, Base is a base class with multiple derived classes, and object may point to an instance of one of the derived classes. So object is a pointer in order to get polymorphism.
If it wasn't for the presence of this Base pointer, I could use the rule-of-zero for MyClass assuming s_array<> is properly implemented. Is there a way to set things up so that MyClass can use the rule of zero, even though object is a pointer? The behavior that I want on copy is that a new instance of MyClass gets a pointer to a new copy of object.
If you want to apply the rule of 0 with pointers, you need to use a shared pointer:
shared_ptr<Base> object;
However, this doesn't fully fulfil your requirement: shared_ptr provides for the rule of 5, but copied pointers will all refer to the same original object rather than to a new copy.
To get the behavior that you want, you'd need to create your own smart pointer that provides for the rule of 3 or 5.
If multiple MyClass objects can point to the same Base object, then simply use std::shared_ptr<Base> instead of Base* for your object member, as other responders mentioned.
But, if each MyClass object needs to point to its own Base object, then you have no choice but to implement the Rule of 3/5 in MyClass so that it can create its own Base object and/or clone a Base object from another MyClass object.
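A sketch of that last option, assuming Base exposes a virtual clone() (which the question does not show), and keeping only the pointer member for brevity:

#include <algorithm>

struct Base {
    virtual ~Base() {}
    virtual Base* clone() const = 0;   // assumed: each derived class returns a copy of itself
};

class MyClass {
    Base* object = nullptr;
public:
    explicit MyClass(Base* obj) : object(obj) {}
    MyClass(const MyClass& other)
        : object(other.object ? other.object->clone() : nullptr) {}
    MyClass& operator=(MyClass other) {   // copy-and-swap also handles self-assignment
        std::swap(object, other.object);
        return *this;
    }
    ~MyClass() { delete object; }
};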
Just for the record, what I am using to solve this is the following (basically as suggested above):
template <class myClass>
class clone_ptr
{
public:
    clone_ptr() { location = nullptr; }
    clone_ptr(myClass* d) { location = d; }
    ~clone_ptr() { delete location; }
    clone_ptr(const clone_ptr<myClass>& source) {
        if (source.location != nullptr)
            location = source.location->Clone();
        else
            location = nullptr;
    }
    clone_ptr& operator=(const clone_ptr<myClass>& source) {
        if (&source != this) {
            delete location;   // release the object we currently own before taking a copy
            if (source.location != nullptr)
                location = source.location->Clone();
            else
                location = nullptr;
        }
        return *this;
    }
    myClass* operator->() { return location; }
    myClass& operator*() { return *location; }
private:
    myClass* location;
};
I am implementing the Clone() function in the appropriate classes as follows:
class myClass : public parentClass {
    ...
public:
    myClass* Clone()
    {
        return new myClass(*this);
    }
};
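With that clone_ptr in place, MyClass itself can follow the rule of zero. A sketch, reusing the members from the question and assuming Base declares a virtual Clone() as above:

// Assumes Base, s_array and clone_ptr from above.
class MyClass {
private:
    s_array<int> mynumbers;
    s_array<double> mydoubles;
    clone_ptr<Base> object;   // copying MyClass now deep-copies *object via Clone()
public:
    MyClass(Base* b) : object(b) {}
    // no destructor, copy constructor, or assignment operator needed: rule of zero
};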

Is the memory layout of C++ single inheritance the same as this C code?

I'm working with a library written in C, which does inheritance like so:
#include <stdlib.h> /* for malloc/free */

struct Base
{
    int exampleData;
    int (*function1)(struct Base* param1, int param2);
    void (*function2)(struct Base* param1, float param2);
    //...
};

struct Derived
{
    struct Base super;
    //other data...
};

struct Derived* GetNewDerived(/*params*/)
{
    struct Derived* newDerived = malloc(sizeof(struct Derived));
    newDerived->super.function1 = /*assign function*/;
    newDerived->super.function2 = /*assign function*/;
    //...
    return newDerived;
}

int main()
{
    struct Derived* newDerived = GetNewDerived(/*params*/);
    FunctionExpectingBase((struct Base*) newDerived);
    free(newDerived);
}
It is my understanding this works because a pointer to Derived is the same as a pointer to the first member of Derived, so casting the pointer type is sufficient to treat an object as its "base class." I can write the functions assigned to function1 and function2 to cast the incoming Base* back to Derived* to access the new data.
I am extending the functionality of code like this, but I am using a C++ compiler. I'm wondering if the code below is equivalent to the above.
class MyDerived : public Base
{
    int mydata1;
    //...
public:
    MyDerived(/*params*/)
    {
        function1 = /*assign function pointer*/;
        function2 = /*assign function pointer*/;
        //...
    }
    //...
};

int main()
{
    MyDerived* newDerived = new MyDerived(/*params*/);
    FunctionExpectingBase(static_cast<Base*>(newDerived));
    delete newDerived;
}
Can I expect the compiler to lay out the memory in MyDerived in the same way so I can do the same pointer cast to pass my object into their code? Or must I continue to write this more like their C architecture, and not take advantage of the C++ compiler doing some of the more tedious bits for me?
I'm only considering single inheritance for the scope of this question.
According to Adding a default constructor to a base class changes sizeof() a derived type and When extending a padded struct, why can't extra fields be placed in the tail padding?, the memory layout can change even if you just add a constructor to MyDerived or make it non-POD in any other way. So I am afraid there is no such guarantee. In practice you can make it work using compile-time asserts that validate the layout of both structures (see the sketch below), but such a solution is not guaranteed by standard C++.
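If you do go down the validation route, a hedged sketch of the kind of checks that help, assuming the Base and MyDerived from the question; these asserts verify only what they state and are not a blanket layout guarantee:

#include <cassert>
#include <type_traits>

// The C side reads Base directly, so Base itself must keep a C-compatible layout.
static_assert(std::is_standard_layout<Base>::value,
              "Base must remain standard-layout for the C code to use it");

// At run time (e.g. in a unit test): the Base subobject should sit at offset 0,
// so converting MyDerived* to Base* must not adjust the address.
void check_layout(MyDerived& d) {
    assert(static_cast<void*>(static_cast<Base*>(&d)) == static_cast<void*>(&d));
}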
On another note, why can't your C++ wrapper MyDerived inherit from Derived? That would be safe (as safe as it can be when Derived is cast to Base and back, but I assume that is out of your control). It may make the initialization code in MyDerived::MyDerived() more verbose, but I guess that is a small price for a proper solution.
For your problem it does not really matter, since the "client" code only cares about having a valid Base* pointer: they aren't going to downcast it to Derived or whatever, or copy it.

Is a derived class destructor definition required if base class destructor is virtual?

I am trying the following example:
#include <iostream>
#include <list>
using namespace std;

class base // base class
{
public:
    std::list<base*> values;
    base() {}
    void initialize(base* b) {
        values.push_front(b);
    }
    virtual ~base()
    {
        values.clear();
        cout << "base called" << endl;
    }
};

class derived : public base // derived class
{
public:
    ~derived() {
        cout << "derived called" << endl;
    }
};

int main()
{
    derived* d = new derived;
    base* b = new base;
    b->initialize(static_cast<base*>(d)); /* filling list */
    delete b;
    return 0;
}
Q.1) Why does the destructor of the derived class not get called, given that in the base class destructor I am calling values.clear()?
Q.2) Is a derived class destructor definition required if the base class destructor is virtual?
Q1. Because you're not deleting an object of type derived. You only do delete b;, which deletes a base. You should also call delete d;.
Also, you should specify which object is responsible for memory management; your design is prone to error. You're better off using a smart pointer to prevent ambiguity. Also, for it to behave as you expect, the destructor should be:
virtual ~base()
{
    for (base* v : values)   // std::list has no operator[], so iterate over it directly
        delete v;
    values.clear();
    cout << "base called" << endl;
}
Of course, with this approach, it would be undefined behavior to call delete d; in your main (a double delete).
Q2. No, the definition is not required.
Why does the destructor of the derived class not get called, given that in the base class destructor I am calling values.clear()?
values.clear() removes all the pointers from this list. It does not delete the objects being pointed to; that would be extremely dangerous, since the list has no way of knowing whether it's responsible for their lifetime, or whether they are just being used to refer to objects managed elsewhere.
If you want the list to own the objects, then you must either delete them yourself when removing them, or store smart pointers such as std::unique_ptr<base>. If your compiler doesn't support the new smart pointers, then you might find Boost's Pointer Container library useful.
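A sketch of the owning-list variant (C++14 for std::make_unique; names mirror the question):

#include <iostream>
#include <list>
#include <memory>

class base {
public:
    std::list<std::unique_ptr<base>> values;   // the list now owns its elements
    void initialize(std::unique_ptr<base> b) {
        values.push_front(std::move(b));
    }
    virtual ~base() { std::cout << "base called" << std::endl; }
};

class derived : public base {
public:
    ~derived() { std::cout << "derived called" << std::endl; }
};

int main() {
    auto b = std::make_unique<base>();
    b->initialize(std::make_unique<derived>());
    // When b is destroyed, the list's unique_ptrs delete their elements, and
    // because ~base() is virtual, ~derived() runs for the stored element too.
}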
Is a derived class destructor definition required if the base class destructor is virtual?
It's only needed if there is something in the derived class that needs cleaning up. There's no need to define an empty one if there's nothing for it to do.
You don't actually delete d, so of course its destructor is not called. Either give d automatic storage duration (derived d instead of derived* d = new derived) or call delete d.
If you don't declare a destructor in the derived class, a default one will be created. The base class destructor will still be called; see the FAQ (11.12). Note also that since the base class destructor is virtual, the derived class destructor is automatically virtual (whether you define one or not); see FAQ (20.7).
Why do you think the destructor of the derived class should be called? You only delete b, and it is an instance of the base class.
No, the definition of the destructor is not required; you may omit it.
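A tiny demonstration of that point, as a sketch: derived declares no destructor at all, yet deleting through a base pointer still destroys the derived part, because the implicitly generated ~derived() is virtual and overrides ~base().

#include <iostream>
#include <string>

struct base {
    virtual ~base() { std::cout << "~base()\n"; }
};

struct derived : base {
    std::string name = "derived";   // a member that needs destroying
    // no destructor declared: the compiler generates one, and it is virtual
};

int main() {
    base* p = new derived;
    delete p;   // runs the implicit ~derived() (destroying name), then ~base()
}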