I was looking at some of the new features in C++11, and due to my current version of GCC I am unable to use constructor delegation. But it got me thinking about replicating the feature like this:
class A
{
public:
A() : num( 42 ) {}
A( int input ) { *this = A(); num *= input; }
int num;
};
It certainly compiles and works fine. For example, this code:
A a;
cout << "a: " << a.num << endl;
A b( 2 );
cout << "b: " << b.num << endl;
prints the following, which is correct:
42
84
Obviously this is a very trivial example, but other than the memory inefficiencies (two A's created and one overwritten by the other before being destroyed), what problems could arise? It certainly looks like a code smell, but I can't think of a really good reason why.
You are not initializing your object with the integer, but modifying a default-initialized object. This might or might not be an issue. Quite often people factor common stuff out into some init() function to get functionality similar to delegating ctors. However, there are some situations in which this is undesirable, wrong, or impossible:
When you have a reference member, you must initialize it in the ctor's initializer list; you cannot default-initialize it and overwrite it later (see the sketch after this list). Using a pointer instead can help.
For certain members, default initialization does something, and overwriting does something additional. Performance-wise, it would have been more efficient to initialize the member to its final value right away. Depending on what the initialization does, this might just be a performance hit, but if the initialization has side effects it might even be plainly wrong.
The member might not be assignable.
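As a minimal sketch of the reference-member point from the list above (the class name is made up):
struct RefHolder
{
    int& ref;                            // reference member
    // RefHolder( int& r ) { ref = r; } // does not compile: a reference member
    //                                  // must be bound in the initializer list
    RefHolder( int& r ) : ref( r ) {}   // OK: initialized, not assigned later
};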
Additionally, this is just considered bad style by some people. I personally consider it bad style because I think you should always initialize instead of assigning later, even for simple cases, since one day you will forget it for an important case and then the lost performance bites you.
But YMMV.
Your code is not actually C++11 at all. I was wondering whether move constructors might help here, as you are essentially moving one A into another and then modifying it slightly.
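For reference, with a compiler that supports it, the actual C++11 delegating-constructor version of the original class would look something like this:
class A
{
public:
A() : num( 42 ) {}
A( int input ) : A() { num *= input; } // C++11 delegating constructor
int num;
};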
As with C++03, you can share the initialisation you want to perform once in all your constructors by putting it into either a sub-class or a base class (often with protected or private inheritance, as it's an implementation detail). Using a base class:
class ABase
{
protected:
int num;
ABase() : num(42) {}
};
class A : protected ABase
{
public:
A() = default; // or in C++03 just {}
explicit A(int input) : ABase()
{
num *= input;
}
};
(You can modify the access levels to taste.) The point here is that I only ever create one "ABase" object, and if it has more than just a trivial int member, that might be significant. I quite like the inheritance, as I can then use the member in A as if it were a direct class member rather than a member of some aggregated object. I prefer the inheritance to be protected or private here, but sometimes, if the base class has members I want to be public, I will use public inheritance and give the base class a protected destructor. This is assuming there is no v-table and thus no further derivation is expected. (You could actually make A final here by making the inheritance virtual and private, but you probably don't want to.)
I can understand defaulted constructors, since user-defined constructors will disable the compiler-generated ones, making the object non-trivially-copyable, etc.
In the destructor case, apart from changing the access level, what use is there in defining a defaulted destructor, considering that no user-defined member function can disable it (you can't overload destructors anyway)?
// Which version should I choose ?
struct Example
{
//1. ~Example() = default;
//2. ~Example() {}
//3. (nothing: let the compiler generate it)
};
Even in the case of virtual destructors, defaulting them would not make them trivial, so what good does it do?
The exemption regarding trivial destructors concerns the derived class's destructor, not the base one. So virtual ~Foo() = default; is a useful construct to keep the default destructor behaviour, but virtualize it.
One use is making the destructor protected or private while potentially keeping the class trivial: just list it after the desired access specifier.
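A minimal sketch of that idea (the class name is made up):
struct MixinBase
{
    // ...
protected:
    ~MixinBase() = default; // still trivial, but outside code cannot delete
                            // an object through a MixinBase*
};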
Another: when writing classes, some programmers like to order the class's functions: e.g. constructors, then the destructor, then the non-const "mutating" members, then the const "accessor" members, then static functions. By being able to explicitly = default the destructor, you can list it in the expected order and the reader looking there knows there can't be another misplaced version of it. In large classes, that may have some documentary/safety value.
It also gives you something concrete around which to add comments, which can help some documentation tools realise the comments relate to destruction.
Basically it is about communicating the intent, but pretty redundant.
But in case you're using std::unique_ptr as a member of the class (typically when its pointee type is incomplete in the header), you'll need to declare the destructor (but only declare it) in the header. Then you can make it use the default implementation in the source file like so:
MyClass::~MyClass() = default;
Considering your options, I would use the first or the third one.
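To make the std::unique_ptr point above concrete, here is a rough sketch of the usual header/source split (the names are illustrative):
// MyClass.h
#include <memory>
class Impl;                      // forward declaration only

class MyClass
{
public:
    MyClass();
    ~MyClass();                  // declared here; defined where Impl is complete
private:
    std::unique_ptr<Impl> pImpl; // unique_ptr's deleter needs the complete type
};

// MyClass.cpp
class Impl { /* ... */ };

MyClass::MyClass() : pImpl( new Impl ) {}
MyClass::~MyClass() = default;   // Impl is complete here, so this compiles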
As mentioned by Nikos Athanasiou in a comment, a defaulted destructor keeps the type trivially destructible, whereas a user-defined one does not. A little code sample will show it:
#include <iostream>
#include <type_traits>
struct A { ~A() = default; };
struct B { ~B() {} };
struct C { ~C() noexcept {} };
int main() {
std::cout
<< std::is_trivially_destructible<A>::value
<< std::is_trivially_destructible<B>::value
<< std::is_trivially_destructible<C>::value
<< std::endl;
return 0;
}
Displays
100
As for virtual destructors, consistency with non-virtual ones and Quentin's answer are appropriate reasons. My personal advice is that you should always use the default when you can, as this is a way to stick to the most canonical behavior.
I'm pretty sure this is dangerous code. However, I wanted to check to see if anyone had an idea of what exactly would go wrong.
Suppose I have this class structure:
class A {
protected:
int a;
public:
A() { a = 0; }
int getA() { return a; }
void setA(int v) { a = v; }
};
class B: public A {
protected:
int b;
public:
B() { b = 0; }
};
And then suppose I want to have a way of automatically extending the class like so:
class Base {
public:
virtual ~Base() {}
};
template <typename T>
class Test: public T, public Base {};
One really important guarantee that I can make is that neither Base nor Test will have any other member variables or methods. They are essentially empty classes.
The (potentially) dangerous code is below:
int main() {
B *b = new B();
// dangerous part?
// forcing Test<B> to point to an address of type B
Test<B> *test = static_cast<Test<B> *>(b);
//
A *a = dynamic_cast<A *>(test);
a->setA(10);
std::cout << "result: " << a->getA() << std::endl;
}
The rationale for doing something like this is that I'm using a class similar to Test, but in order for it to work currently, a new instance of T (i.e. Test) necessarily has to be made, along with a copy of the instance passed in. It would be really nice if I could just point Test at T's memory address.
If Base did not add a virtual destructor, and since nothing gets added by Test, I would think this code is actually okay. However, the addition of the virtual destructor makes me worried that the type info might get added to the class. If that's the case, then it would potentially cause memory access violations.
Finally, I can say this code works fine on my computer/compiler (clang), though this of course is no guarantee that it's not doing bad things to memory and/or won't completely fail on another compiler/machine.
The virtual destructor Base::~Base will be called when you delete the pointer. Since B doesn't have the proper vtable (none at all in the code posted here) that won't end very well.
It only works in this case because you have a memory leak: you never delete test.
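A minimal sketch of the failure mode described here, continuing the code from the question (this is undefined behaviour and is shown only to illustrate the point):
B *b = new B();
Test<B> *test = static_cast<Test<B> *>(b); // the object is really just a B
// delete test;  // would look for a vtable the B object does not have:
//               // undefined behaviour, most likely a crash
delete b;        // the only safe way to release the allocation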
Your code produces undefined behaviour, as it violates strict aliasing. Even if it did not, you would still be invoking UB: neither B nor A is a polymorphic class, and the object actually pointed to is not of a polymorphic type, so the dynamic_cast cannot succeed. When using dynamic_cast you are attempting to determine the runtime type by accessing a Base subobject that does not exist.
One really important guarantee that I can make is that neither Base
nor Test will have any other member variables or methods. They are
essentially empty classes.
It's not important at all; it's utterly irrelevant. The Standard would have to mandate EBO for this to even begin to matter, and it doesn't.
As long as you perform no operations with Test<B>* and avoid any magic like smart pointers or automatic memory management, you should be fine.
You should be sure to look for obscure code like debug prints or logging that will inspect the object. I've had debuggers crash on me for attempting to inspect the value of a pointer set up like this. I will bet this will cause you some pain, but you should be able to make it work.
I think the real problem is maintenance. How long will it be before some developer does an operation on Test<B>*?
I'm wondering if there's any reason to explicitly write code that does the same as default behavior of C++.
Here's some code:
class BaseClass
{
public:
virtual ~BaseClass() {}
virtual void f() { /* do something */ }
};
class ExplicitClass
: public BaseClass
{
public:
ExplicitClass()
: BaseClass() // <-- explicit call of base class constructor
{
// empty function
}
virtual ~ExplicitClass() {} // <-- explicit empty virtual destructor
virtual void f() { BaseClass::f(); } // <-- function just calls base
};
class ImplicitClass
: public BaseClass
{
};
I'm mainly curious about this in the context of refactoring and a changing code base. I don't think many coders intend to write code like this, but it can end up looking like this when code changes over time.
Is there any point to leaving the code present in the ExplicitClass? I can see the bonus that it shows you what is happening, but it comes across as lint-prone and risky.
Personally I prefer to remove any code that is default behavior code(like ImplicitClass).
Is there any consensus favoring one way or the other?
There are two approaches to this issue:
1. Define everything, even if the compiler would generate the same.
2. Do not define anything that the compiler will do better.
Believers in (1) use rules like: "Always define the default c-tor, copy c-tor, assignment operator and d-tor".
Believers in (1) think it is safer to have more than to miss something.
Unfortunately, (1) is especially loved by managers: they believe it is better to have than not to have. So rules like "always define the big four" end up in the "Coding standard" and must be followed.
I believe in (2). And in firms where such coding standards exist, I always put the comment "Do not define the copy c-tor, as the compiler does it better".
As the question is about consensus, I can't answer, but I find ildjarn's comment amusing and correct.
As for your question whether there is a reason to write it like that: there is not, as the explicit and implicit classes behave the same. People sometimes do it for 'maintenance' reasons, e.g. so that if the derived f is ever implemented in a different way, they remember to call the base class. I personally don't find this useful.
Either way is fine, so long as you understand what really happens there, and the problems which may arise from not writing the functions yourself.
EXCEPTION-SAFETY:
The compiler will generate the functions implicitly, adding the necessary exception specifications. For an implicitly created constructor, this would be every exception specification from the base classes and the members.
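For instance, under the old (pre-C++17) dynamic-exception-specification rules, the implicitly generated constructor behaves roughly as if it were declared with the union of everything it can call (a sketch; the class names are made up):
struct BError {};
struct MError {};

struct M    { M() throw( MError ); };
struct Base { Base() throw( BError ); };

struct X : Base
{
    M m_;
    // the implicitly generated X::X() behaves as if declared:
    //     X() throw( BError, MError );
};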
ILL-FORMED CODE
There are some tricky cases where some of the automatically generated members functions will be ill-formed. Here is an example:
// exception classes used in the specifications below
class B1 {}; class B2 {}; class M1 {}; class M2 {};
class Derived;
class Base
{
public:
virtual Base& /* or Derived& */
operator=( const Derived& ) throw( B1 );
virtual ~Base() throw( B2 );
};
class Member
{
public:
Member& operator=( const Member& ) throw( M1 );
~Member() throw( M2 );
};
class Derived : public Base
{
Member m_;
// Derived& Derived::operator=( const Derived& )
// throw( B1, M1 ); // error, ill-formed
// Derived::~Derived()
// throw( B2, M2 ); // error, ill-formed
};
The operator= is ill-formed because its exception specification should be at least as restrictive as the one in its base class, meaning that it should throw either B1, or nothing at all. This makes sense, as a Derived object can also be seen as a Base object.
Note that it is perfectly legal to have an ill-formed function as long as you never invoke it.
I am basically rewriting GotW #69 here, so if you want more details, you can find them here
it depends on how you like to structure and read the programs. of course, there are preferences and reasons for and against each.
class ExplicitClass
: public BaseClass
{
public:
initialization is very important. not initializing a base or member can produce warnings, rightly so, or catch a bug in some cases. so this really starts to make sense if that collection of warnings is enabled, you keep the warning levels way up, and you keep the warning counts down. it also helps to demonstrate intention:
ExplicitClass()
: BaseClass() // <-- explicit call of base class constructor
{
// empty function
}
an empty virtual destructor is, IME, statistically the best place to export a virtual (of course, that definition would be elsewhere if the class is visible to more than one translation unit). you want this exported because there is a ton of rtti and vtable information that could end up as unnecessary bloat in your binary. i actually define empty destructors very regularly for this reason:
virtual ~ExplicitClass() {} // <-- explicit empty virtual destructor
perhaps it is a convention in your group, or it documents that this is exactly what the implementation is designed to do. this can also be helpful (subjective) in large codebases or within complex hierarchies because it can also help remind you of the dynamic interface the type is expected to adopt. some people prefer all declarations in the subclass because they can see all the class' dynamic implementation in one place. so the locality helps them in the event the class hierarchy/interface is larger than the programmer's mental stack. like the destructor, this virtual may also be a good place to export the typeinfo:
virtual void f() { BaseClass::f(); } // <-- function just calls base
};
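a sketch of the "export the destructor from one translation unit" idea mentioned above, restating the class but split across a header and a source file; on common ABIs (e.g. the Itanium C++ ABI) the out-of-line virtual destructor anchors the vtable and rtti in that one object file (the file layout is illustrative):
// ExplicitClass.h  (BaseClass as defined in the question)
class ExplicitClass
    : public BaseClass
{
public:
    ExplicitClass();
    virtual ~ExplicitClass();          // declared only, not inline
    virtual void f();
};

// ExplicitClass.cpp
ExplicitClass::ExplicitClass() {}
ExplicitClass::~ExplicitClass() {}     // vtable/rtti typically emitted here only
void ExplicitClass::f() { BaseClass::f(); }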
of course, it gets hard to follow a program or its rationale if you only define these when there is a specific reason. so some codebases can end up easier to follow if you just stick to conventions, because that is clearer than documenting at every turn why an empty destructor is exported.
a final reason (which swings both ways) is that explicit default definitions can increase or decrease build and link times.
fortunately, it's easier and unambiguous now to specify default and deleted methods and constructors.
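for example (a made-up class, just to show the C++11 syntax):
struct Widget
{
    Widget() = default;                          // keep the compiler-generated one
    Widget( const Widget& ) = delete;            // explicitly forbid copying
    Widget& operator=( const Widget& ) = delete;
    virtual ~Widget() = default;                 // defaulted, but virtual
};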
First off, I know I cannot do this, and I don't think it's a duplicate question (this and this question deal with the same problem, but they only ask for an explanation of why it does not work).
So, I have a similar concept of classes and inheritance and I would, somehow, elegantly, want to do something that's forbidden. Here's a very simple code snippet that reflects what I want to do:
#include <iostream>
#include <vector>
class A{
protected:
int var;
std::vector <double> heavyVar;
public:
A() {var=1;}
virtual ~A() {}
virtual void func() {
std::cout << "Default behavior" << this->var << std::endl;
}
// somewhere along the way, heavyVar is filled with a lot of stuff
};
class B: public A{
protected:
A* myA;
public:
B(A &a) : A() {
this->myA = &a;
this->var = this->myA->var;
// copy some simple data, e.g. flags
// but don't copy a heavy vector variable
}
virtual ~B() {}
virtual void func() {
this->myA->func();
std::cout << "This class is a decorator interface only" << std::endl;
}
};
class C: public B{
private:
int lotsOfCalc(const std::vector <double> &hv){
// do some calculations with the vector contents
}
public:
C(A &a) : B(a) {
// the actual decorator
}
virtual ~C() {}
virtual void func() {
B::func(); // base functionality
int heavyCalc = lotsOfCalc(this->myA->heavyVar); // illegal
// here, I actually access a heavy object (not int), and thus
// would not like to copy it
std::cout << "Expanded functionality " << heavyCalc << std::endl;
}
};
int main(void){
A a;
B b(a);
C c(a);
a.func();
b.func();
c.func();
return 0;
}
The reason for doing this is that I'm actually trying to implement a Decorator Pattern (class B has the myA inner variable that I want to decorate), but I would also like to use some of the protected members of class A while doing the "decorated" calculations (in class B and all of its subclasses). Hence, this example is not a proper example of a decorator (not even a simple one). In the example, I only focused on demonstrating the problematic functionality (what I want to use but can't). Not even all the classes/interfaces needed to implement a Decorator pattern are used in this example (I don't have an abstract base class interface, inherited by concrete base class instances, as well as an abstract decorator interface to be used as a superclass for concrete decorators). I only mention Decorators for context (the reason I want an A* pointer).
In this particular case, I don't see much sense in making (my equivalent of) int var public (or even, writing a publicly accessible getter) for two reasons:
the more obvious one, I do not want the users to actually use the information directly (I have some functions that return the information relevant to and/or written in my protected variables, but not the variable value itself)
the protected variable in my case is much heavier to copy than an int (it's a 2D std::vector of doubles), and copying it into the instance of a derived class would be unnecessarily time- and memory-consuming
Right now, I have two different ways of making my code do what I want it to do, but I don't like either of them, and I'm searching for a C++ concept that was actually intended for doing something of this sort (I can't be the first person to desire this behavior).
What I have so far and why I don't like it:
1. declaring all the (relevant) inherited classes friends to the base class:
class A{
....
friend class B;
friend class C;
};
I don't like this solution because it would force me to modify my base class every time I write a new subclass, and this is exactly what I'm trying to avoid. (I want to use only the 'A' interface in the main modules of the system.)
2. casting the A* pointer into a pointer of the inherited class and working with that
void B::func(){
B *uglyHack = static_cast<B*>(myA);
std::cout << uglyHack->var + 1 << std::endl;
}
The variable name is pretty suggestive of my feelings about this approach, but it is the one I am using right now. Since I designed these classes, I know how to be careful and to use only the stuff that is actually implemented in class A while treating it as a class B. But if somebody else continues the work on my project, they might not be so familiar with the code. Also, casting a pointer into something that I am very well aware it is not just feels like pure evil to me.
I am trying to keep this projects' code as nice and cleanly designed as possible, so if anybody has any suggestions towards a solution that does not require the modification of a base class every now and then or usage of evil concepts, I would very much appreciate it.
I do believe that you might want to reconsider the design, but a solution to the specific question of how can I access the member? could be:
class A{
protected:
int var;
static int& varAccessor( A& a ) {
return a.var;
}
};
And then in the derived type call the protected accessor passing the member object by reference:
varAccessor( *this->myA ) = 5;
Now, if you are thinking on the decorator pattern, I don't think this is the way to go.
The source of the confusion is that most people don't realize that a type has two separate interfaces: the public interface towards users and the virtual interface for implementation providers (i.e. derived types), as in many cases functions are both public and virtual (i.e. the language allows binding of the two semantically different interfaces).
In the Decorator pattern you use the base interface to provide an implementation. Inheritance is there so that the derived type can provide the operation for the user by means of some actual work (decoration) and then forwarding the work to the actual object. The inheritance relationship is not there for you to access the implementation object in any way through protected elements, and that in itself is dangerous. If you are passed an object of a derived type that has stricter invariants regarding that protected member (i.e. for objects of type X, var must be an odd number), the approach you are taking would let a decorator (of sorts) break the invariants of that X type that should just be decorated.
I can't find any examples of the decorator pattern being used in this way. It looks like in C++ it's used to decorate and then delegate back to the decoratee's public abstract interface, not to access non-public members of it.
In fact, I don't see any decoration happening in your example. You've just changed the behavior in the child class, which indicates to me that you just want plain inheritance (consider that if you use your B to decorate another B, the effects don't chain the way they would in a normal decoration).
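For comparison, here is a minimal sketch of the conventional decorator shape described above, delegating only through the public interface (the names are made up):
// (assumes #include <iostream>)
struct Component                              // public abstract interface
{
    virtual ~Component() {}
    virtual void func() = 0;
};

struct ConcreteComponent : Component
{
    virtual void func() { std::cout << "base behaviour" << std::endl; }
};

struct Decorator : Component
{
    Decorator( Component& inner ) : inner_( inner ) {}
    virtual void func()
    {
        inner_.func();                        // delegate to the decoratee
        std::cout << "decorated behaviour" << std::endl;
    }
private:
    Component& inner_;
};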
I think I found a nice way to do what I want in the inheritance structure I have.
Firstly, in the base class (the one that is a base for all the other classes, as well as the abstract base class interface in the Decorator Pattern), I add a friend class declaration only for the first subclass (the one that acts as the abstract decorator interface):
class A{
....
friend class B;
};
Then, I add protected access functions in the subclass for all the interesting variables in the base class:
class B : public A{
...
protected:
A *myA;
int getAVar() {return myA->var;}
std::vector <double> &getAHeavyVar() {return myA->heavyVar;}
};
And finally, I can access just the things I need from all the classes that inherit class B (the ones that would be concrete decorators) in a controlled manner (as opposed to static_cast<>) through the access function without the need to make all the subclasses of B friends of class A:
class C : public B{
....
public:
virtual void func() {
B::func(); // base functionality
int heavyCalc = lotsOfCalc(this->getAHeavyVar()); // legal now!
// here, I actually access a heavy object (not int), and thus
// would not like to copy it
std::cout << "Expanded functionality " << heavyCalc << std::endl;
std::cout << "And also the int: " << this->getAVar() << std::endl;
// this time, completely legal
}
};
I was also trying to give only certain functions of class B friend access (declaring them as friend functions), but that did not work, since I would need to declare the functions inside class B before the friend declaration in class A. Since in this case class B inherits from class A, that would give me a circular dependency (a forward declaration of class B is not enough for befriending individual functions, but it works fine for a friend class declaration).
I am working on a legacy framework. Let's say 'A' is the base class and 'B' is the derived class. Both classes do some critical framework initialization. FWIW, it uses the ACE library heavily.
I have a situation wherein an instance of 'B' is created, but the ctor of 'A' depends on some initialization that can only be performed from 'B'.
As we know, when 'B' is instantiated, the ctor for 'A' is invoked before that of 'B'. The virtual mechanism doesn't work from ctors, and using static functions is ruled out (due to the static-initialization-order fiasco).
I considered using the CRTP pattern as follows:
template<class Derived>
class A {
public:
A(){
static_cast<Derived*>(this)->fun();
}
};
class B : public A<B> {
public:
B() : a(0) {
a = 10;
}
void fun() { std::cout << "Init Function, Variable a = " << a << std::endl; }
private:
int a;
};
But when fun() is called from A's constructor, the class members that are initialized in the initializer list still have undefined values, as the initializer list has not yet been executed (e.g. 'a' in the above case). In my case there are a number of such framework-based initialization variables.
Are there any well-known patterns to handle this situation?
Thanks in advance,
Update:
Based on the idea given by dribeas, I conjured up a temporary solution to this problem (a full-fledged refactoring does not fit my timeline for now). The following code demonstrates it:
// move all A's dependent data in 'B' to a new class 'C'.
class C {
public:
C() : a(10)
{ }
int getA() { return a; }
private:
int a;
};
// enhance class A's ctor with a pointer to the newly split class
class A {
public:
A(C* cptr)
{
std::cout << "O.K. B's Init Data From C:- " << cptr->getA() <<
std::endl;
}
};
// now modify the actual derived class 'B' as follows
class B : public C, public A {
public:
B()
: A(static_cast<C*>(this))
{ }
};
For some more discussion on the same see this link on c.l.c++.m. There is a nice generic solution given by Konstantin Oznobikhin.
Probably the best thing you can do is refactoring. It does not make sense to have a base class depend on one of its derived types.
I have seen this done before, and it caused the developers quite some pain: the ACE_Task class was extended to provide a periodic thread that could be extended with concrete functionality, and the thread was activated from the periodic thread's constructor, only to find out that, while it worked in testing more often than not, in some situations the thread actually started before the most derived object was initialized.
Inheritance is a strong relationship that should be used only when required. If you take a look at the boost thread library (just the docs, no need to go into detail), or the POCO library, you will see that they split the problem in two: thread classes control thread execution and call a method that is passed to them at construction. The thread control is separated from the actual code that will be run, and the fact that the code to be run is received as an argument to the constructor guarantees that it was constructed before the thread constructor was called.
Maybe you could use the same approach in your own code. Divide the functionality in two: whatever the derived class is doing now should be moved outside of the hierarchy (boost uses functors, POCO uses interfaces; use whatever seems to fit you most). Without a better description of what you are trying to do, I cannot really go into more detail.
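A rough sketch of that split, POCO-style with an interface (the names are illustrative, not the actual ACE/boost/POCO API):
#include <iostream>

struct Runnable                      // the "actual code to be run"
{
    virtual ~Runnable() {}
    virtual void run() = 0;
};

class PeriodicTask                   // the "thread control" part
{
public:
    explicit PeriodicTask(Runnable& work) : work_(work) {}
    void runOnce() { work_.run(); }  // real code would schedule this periodically
private:
    Runnable& work_;
};

struct FrameworkInit : Runnable
{
    virtual void run() { std::cout << "framework initialization" << std::endl; }
};

int main()
{
    FrameworkInit work;              // fully constructed first
    PeriodicTask task(work);         // then handed to the "thread"
    task.runOnce();
    return 0;
}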
Another thing you could try (this is fragile and I would recommend against) is breaking the B class into a C class that is independent of A and a B class that inherits from both, first from C then from A (with HUGE warning comments there). This will guarantee that C will be constructed prior to A. Then make the C subobject an argument of A (through an interface or as a template argument). This will probably be the fastest hack, but not a good one. Once you are willing to modify the code, just do it right.
First, I think your design is bad if the constructor of a base class depends on something done in the constructor of a derived class. It really shouldn't be that way. At the time the constructor of the base class runs, the object of the derived class basically doesn't exist.
A solution might be to have a helper object passed from the derived class to the constructor of the base class.
Perhaps lazy initialization does it for you. Store a flag in A indicating whether it's initialized or not. Whenever you call a method, check the flag; if it's false, initialize A (the ctor of B has run by then) and set the flag to true.
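A minimal sketch of that idea; the virtual hook is my own guess at where B would plug in its part:
class A
{
public:
    A() : initialized_(false) {}

    void someMethod()
    {
        if (!initialized_)      // by the time any method runs, B's ctor is done
        {
            lazyInit();
            initialized_ = true;
        }
        // ... the actual work of someMethod ...
    }

protected:
    virtual void lazyInit() {}  // B overrides this with the initialization
                                // that A's ctor could not perform
private:
    bool initialized_;
};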
It is a bad design, and as already said it is UB. Please consider moving such dependencies to some other method, say 'initialize', and calling this initialize method from your derived class constructor (or anywhere before you actually need the base class data to be initialized).
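A small sketch of that suggestion, applied to the classes from the question:
// (assumes #include <iostream>)
class A
{
public:
    A() {}                          // no longer depends on derived-class state
    void initialize(int value)      // called once the derived part exists
    {
        std::cout << "Init Function, Variable a = " << value << std::endl;
    }
};

class B : public A
{
public:
    B() : a(10)
    {
        initialize(a);              // safe: B's members are initialized by now
    }
private:
    int a;
};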
Hmm. So, if I'm reading into this correctly, "A" is part of the legacy code, and you're pretty damn sure the right answer to some problem is to use a derived class, B.
It seems to me that the simplest solution might be to make a functional (non-OOP) style static factory function;
static B& B::makeNew(...);
Except that you say you run into static initialization order fiasco? I wouldn't think you would with this kind of setup, since there's no initialization going on.
Alright, looking at the problem more, "C" needs to have some setup done by "B" that "A" needs done, only "A" gets first dibs, because you want to have inheritance. So... fake inheritance, in a way that lets you control construction order...?
class B; // forward declaration: A holds a B* (in real code, the bodies that use
         // it would be defined out of line, after B is complete)
class A
{
    B* pB;
public:
    rtype fakeVirtual(params) { return pB->fakeVirtual(params); }
    ~A()
    {
        pB->deleteFromA();
        delete pB;
        //Deletion stuff
    }
protected:
    friend class B; // B's destructor needs to call deleteFromB()
    void deleteFromB()
    {
        //Deletion stuff
        pB = NULL;
    }
};
class B
{
    A* pA;
public:
    rtype fakeInheritance(params) { return pA->fakeInheritance(params); }
    ~B()
    {
        //deletion stuff
        pA->deleteFromB();
    }
protected:
    friend class A; // A's destructor needs to call deleteFromA()
    void deleteFromA()
    {
        //deletion stuff
        pA = NULL;
    }
};
While it's verbose, I think this should safely fake inheritance and allow you to wait to construct A until after B has done its thing. It's also encapsulated, so when you eventually can pull A out, you shouldn't have to change anything other than A and B.
Alternatively, you may also want to take a few steps back and ask yourself: what is the functionality that inheritance gives me that I am trying to use, and how might I accomplish that via other means? For instance, CRTP can be used as an alternative to virtual functions, and policies as an alternative to function inheritance. (I think that's the right phrasing of that.) I'm using these ideas above, just dropping the templates because I'm only expecting A to template on B and vice versa.