For some complicated reasons I want to create a default constructor (alongside my normal constructors) that always throws. I want it to exist, but I also want it never to be called. Obviously I can check for that thrown exception at runtime and, for example, terminate the program when I catch it, but the ideal solution would be to have this checked during compilation.
So my question is: can I somehow statically assert that a function will never be called? I've looked at the facilities in <type_traits> but I don't see anything there that would help me. Is there some C++ dark magic that I could use to achieve my goal?
I don't have a code example, because what would even be in there?
PS. Yes, I am sure that I want to have a function and disallow everybody from calling it. As I stated previously, the reasons for that are complicated and irrelevant to my question.
EDIT. I can't delete this constructor or make it private. It has to be accessible to deriving classes, but they shouldn't call it. I have a case of virtual inheritance and want to "allow" calling this constructor in classes that directly virtually derive from Base (they won't call it, but C++ still requires it to be accessible), but not in any other classes deeper in the inheritance chain.
EDIT 2. As requested I give a simplified example of my code.
#include <stdexcept>

class Base {
protected:
    Base() { throw std::logic_error{"Can't be called"}; }
    Base(int); // proper constructor
private:
    // some members initialized by Base(int)
};

class Left: virtual public Base {
protected:
    Left(int) {}
    // ^ initializes members of Left, but does not call Base()!
    // Though it seems that it implicitly does, Base() is never actually called.
};

class Right: virtual public Base {
protected:
    Right(int) {} // The same as in Left
};

class Bottom: public Left, public Right {
public:
    Bottom(int b, int l, int r): Base{b}, Left{l}, Right{r} {}
    // ^ Explicitly calling constructors of Base, Left, Right.
    // If I forget to call Base(int) it silently passes
    // and throws during runtime. Can I prevent this?
};
EDIT 3. Added body to Left's and Right's constructors, so that they implicitly "call" Base().
As you've stated in your comments that you never want to instantiate a Base, Left or Right object, you should make them abstract, even if only via some empty method:
class Base {
private:
    // ...
    virtual void DefineIfNonAbstract() = 0;
};

class Bottom: public Left, public Right {
    void DefineIfNonAbstract() final {}
    // ...
};
Trust your compiler. When it sees that DefineIfNonAbstract is private and none of its parents implemented it, it's not going to put it into a vtable.
Your Bottom class is already 16 bytes in your example under both gcc and clang (likely a pointer for each virtual base). Adding the abstract method doesn't change that.
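For illustration, here is hypothetical usage of the classes above, assuming the pure virtual has been added:

int main() {
    Bottom ok{1, 2, 3};  // fine: Bottom overrides DefineIfNonAbstract
    // Left oops{2};     // would not compile: Left is abstract
                         // (and its constructor is protected anyway)
}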
In the comments you expressed concern that this might not be safe, and sent me a link to the CppCoreGuidelines:
I.25: Prefer empty abstract classes as interfaces to class hierarchies
Reason
Abstract classes that are empty (have no non-static member data) are more likely to be stable than base classes with state.
They're referring to design choices here, not to undefined behaviour or anything of that sort. In our case we're actually enforcing your design, not changing it.
The whole thing likely needs a serious design rework. Inheritance in general is rarely a good choice, and virtual inheritance even more rarely.
If your link time optimization is up to the challenge, you might be able to have the problematic function call a never-defined function.
#include "Base.h"
// An anonymous namespace (or a static function) might be caught as missing
// while compiling this translation unit, so use a suggestive namespace name.
namespace DoNotDefine {
// Declare a function without a definition.
// The name is intended to make the error message easier to digest.
void DisallowedConstruction();
} // namespace DoNotDefine
Base::Base()
{
DisallowedConstruction();
}
In theory – I am not claiming that any particular linker is up to the task – the linker could eliminate unused function definitions before checking for missing definitions.
If nothing actually calls Base::Base() then such a linker would eliminate its definition before complaining that DisallowedConstruction() has no definition. After eliminating Base::Base(), there is no longer a call to DisallowedConstruction() so no problem.
If something did actually call Base::Base(), then linking would fail because there is no definition for DisallowedConstruction().
Again, I am not claiming that this will actually work with your compiler chain, only that it could work in theory.
For lesser compilers, I would suggest declaring a pure virtual function in Base. Keep it pure virtual until you hit a class that uses non-default construction for Base. That ensures that no one instantiates a default-constructed Base. A bit awkward, but effective.
However, this has the drawback that the compiler cannot enforce this convention. This convention could be innocently broken. For example, someone might change a class from using non-default construction of Base to default construction and forget to remove the definition of the special function.
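A minimal sketch of that convention, assuming the classes from the question (the marker function's name is mine, purely illustrative):

#include <stdexcept>

class Base {
protected:
    Base(int) {}  // the sanctioned constructor
    Base() { throw std::logic_error{"Can't be called"}; }
    // Keep this pure until the class that actually calls Base(int) is
    // reached; no intermediate class can then be instantiated with a
    // default-constructed Base.
    virtual void MustConstructBaseExplicitly() = 0;
};

class Left : virtual public Base {
protected:
    Left(int) {}  // still abstract: the marker stays pure here
};

class Bottom : public Left {
public:
    Bottom(int b, int l) : Base{b}, Left{l} {}
    void MustConstructBaseExplicitly() final {}  // defined where Base(int) is called
};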
I got this question when I received a code review comment saying virtual functions need not be inline.
I thought inline virtual functions could come in handy in scenarios where functions are called on objects directly. But the counter-argument that came to my mind is: why would one want to make a function virtual and then call it on objects directly?
Is it best not to use inline virtual functions, since they're almost never expanded anyway?
Code snippet I used for analysis:
#include <iostream>

using std::cout;
using std::endl;

class Temp
{
public:
    virtual ~Temp()
    {
    }
    virtual void myVirtualFunction() const
    {
        cout << "Temp::myVirtualFunction" << endl;
    }
};

class TempDerived : public Temp
{
public:
    void myVirtualFunction() const
    {
        cout << "TempDerived::myVirtualFunction" << endl;
    }
};

int main(void)
{
    TempDerived aDerivedObj;
    // Compiler thinks it's safe to expand the virtual functions
    aDerivedObj.myVirtualFunction();
    // The type of object pTemp points to is always known;
    // does the compiler still expand virtual functions?
    // I doubt the compiler would be this intelligent!
    Temp* pTemp = &aDerivedObj;
    pTemp->myVirtualFunction();
    return 0;
}
Virtual functions can sometimes be inlined. An excerpt from the excellent C++ FAQ:
"The only time an inline virtual call can be inlined is when the compiler knows the "exact class" of the object which is the target of the virtual function call. This can happen only when the compiler has an actual object rather than a pointer or reference to an object. I.e., either with a local object, a global/static object, or a fully contained object inside a composite."
C++11 added final. This changes the accepted answer: it's no longer necessary to know the exact class of the object; it's sufficient to know that the object has at least the class type in which the function was declared final:
class A {
public:
    virtual void foo();
};

class B : public A {
public:
    inline virtual void foo() final { }
};

class C : public B
{
};

void bar(B& b) {
    A& a = b; // Allowed, every B is an A.
    a.foo();  // Call to B::foo() can be inlined, even if b is actually a class C.
}
There is one category of virtual functions where it still makes sense to have them inline. Consider the following case:
class Base {
public:
    inline virtual ~Base() { }
};

class Derived1 : public Base {
    inline virtual ~Derived1() { } // Implicitly calls Base::~Base();
};

class Derived2 : public Derived1 {
    inline virtual ~Derived2() { } // Implicitly calls Derived1::~Derived1();
};

void foo(Base* base) {
    delete base; // Virtual call
}
The call to delete base will perform a virtual call to invoke the correct derived class destructor; this call is not inlined. However, because each destructor calls its parent's destructor (which in these cases is empty), the compiler can inline those calls, since they do not call the base class functions virtually.
The same principle applies to base class constructors, or to any set of functions where the derived implementation also calls the base class implementation.
I've seen compilers that don't emit any vtable if no non-inline function exists at all (and the class is defined in one implementation file instead of a header). They throw errors like "missing vtable for class A" or something similar, and you will be confused as hell, as I was.
Indeed, that's not conformant with the Standard, but it happens, so consider putting at least one virtual function outside the header (if only the virtual destructor), so that the compiler can emit a vtable for the class in that place. I know it happens with some versions of gcc.
As someone mentioned, inline virtual functions can be a benefit sometimes, but of course most often you will use it when you do not know the dynamic type of the object, because that was the whole reason for virtual in the first place.
The compiler however can't completely ignore inline. It has other semantics apart from speeding up a function call. The implicit inline for in-class definitions is the mechanism which allows you to put the definition into a header: only inline functions can be defined multiple times throughout the whole program without violating any rules. In the end, it behaves as if you had defined it only once in the whole program, even though you included the header multiple times in different files linked together.
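For example (a minimal sketch of that rule): both of these definitions may live in a header included by many translation units, without violating the one-definition rule:

// widget.h -- safe to include from any number of .cpp files
struct Widget {
    virtual ~Widget() {}                    // implicitly inline (in-class definition)
    virtual int id() const { return 42; }   // implicitly inline as well
};

inline int twice(int x) { return 2 * x; }   // explicitly inline, same effect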
Well, actually virtual functions can always be inlined, as long as they're statically linked together: suppose we have an abstract class Base with a virtual function F and derived classes Derived1 and Derived2:
class Base {
    virtual void F() = 0;
};

class Derived1 : public Base {
    virtual void F();
};

class Derived2 : public Base {
    virtual void F();
};
A hypothetical call b->F(); (with b of type Base*) is obviously virtual. But you (or the compiler...) could rewrite it like so (suppose typeof is a typeid-like function that returns a value that can be used in a switch):
switch (typeof(b)) {
    case Derived1: b->Derived1::F(); break; // static, inlineable call
    case Derived2: b->Derived2::F(); break; // static, inlineable call
    case Base:     assert(!"pure virtual function call!"); break;
    default:       b->F(); break; // virtual call (dyn-loaded code)
}
While we still need RTTI for the typeof, the call can effectively be inlined by, basically, embedding the vtable inside the instruction stream and specializing the call for all the involved classes. This can also be generalized by specializing for only a few classes (say, just Derived1):
switch (typeof(b)) {
    case Derived1: b->Derived1::F(); break; // hot path
    default:       b->F(); break; // default virtual call, cold path
}
inline really doesn't do anything by itself; it's a hint. The compiler might ignore it, or it might inline a call even without inline if it sees the implementation and likes the idea. If code clarity is at stake, the inline should be removed.
Marking a virtual method inline helps in further optimizing virtual functions in the following two cases (a sketch of the first idea follows the links):
Curiously recurring template pattern (http://www.codeproject.com/Tips/537606/Cplusplus-Prefer-Curiously-Recurring-Template-Patt)
Replacing virtual methods with templates (http://www.di.unipi.it/~nids/docs/templates_vs_inheritance.html)
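As a rough illustration of the CRTP idea (my own minimal example, not taken from the linked articles): the "virtual" call is replaced by a static downcast that the compiler can always inline:

template <typename Derived>
struct Shape {
    // Statically dispatches to the derived implementation; no vtable is
    // involved, so the compiler is free to inline the call.
    double area() const {
        return static_cast<const Derived*>(this)->areaImpl();
    }
};

struct Circle : Shape<Circle> {
    double r = 1.0;
    double areaImpl() const { return 3.14159265 * r * r; }
};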
Virtual functions declared inline are inlined when called directly on objects, and the hint is ignored when they are called via a pointer or reference.
With modern compilers, it won't do any harm to inline them. Some ancient compiler/linker combos might have created multiple vtables, but I don't believe that is an issue anymore.
A compiler can only inline a function when the call can be resolved unambiguously at compile time.
Virtual functions, however, are resolved at runtime, and so the compiler cannot inline the call, since at compile time the dynamic type (and therefore the function implementation to be called) cannot be determined.
In the cases where the function call is unambiguous and the function is a suitable candidate for inlining, the compiler is smart enough to inline the code anyway.
The rest of the time "inline virtual" is nonsense, and indeed some compilers won't compile that code.
It does make sense to make functions virtual and then call them on objects rather than through references or pointers. Scott Meyers recommends, in his book "Effective C++", never to redefine an inherited non-virtual function. That makes sense: when you make a class with a non-virtual function and redefine that function in a derived class, you may be sure to use it correctly yourself, but you can't be sure others will use it correctly, and you may even use it incorrectly yourself at a later date. So, if you make a function in a base class and you want it to be redefinable, you should make it virtual. If it makes sense to make functions virtual and call them on objects, it also makes sense to inline them.
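A small example of the trap Meyers warns about (hypothetical classes): a redefined non-virtual function is statically bound, so the same object behaves differently depending on the static type through which it is called:

#include <iostream>

struct B { void who() const { std::cout << "B\n"; } };     // non-virtual
struct D : B { void who() const { std::cout << "D\n"; } }; // hides, does not override

int main() {
    D d;
    B* pb = &d;
    d.who();   // prints "D"
    pb->who(); // prints "B" -- static binding, probably not what callers expect
}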
Actually, in some cases adding inline to a virtual final override can make your code fail to compile, so there is sometimes a difference (at least under VS2017's compiler)!
I was writing a virtual inline final override function in VS2017, compiling with the C++17 standard, and for some reason it failed when I was using two projects.
I had a test project and an implementation DLL that I was unit testing. In the test project I have a "linker_includes.cpp" file that #includes the *.cpp files from the other project that are needed. I know... I know I can set up msbuild to use the object files from the DLL, but please bear in mind that that is a Microsoft-specific solution, whereas including the cpp files is independent of the build system, and it is much easier to version a cpp file than xml files, project settings and such...
What was interesting is that I was constantly getting a linker error from the test project, even when I added the definition of the missing functions by copy-paste instead of through the include! So weird. The other project built fine, and there is no connection between the two other than a project reference, which only ensures a build order so that both are always built...
I think it is some kind of bug in the compiler. I have no idea if it exists in the compilers shipped with newer Visual Studio versions, because I am using an older version, as some SDK only works properly with that :-(
I just wanted to add that marking a function inline not only can mean something, but might even make your code not build in some rare circumstances! This is weird, yet good to know.
PS: The code I am working on is computer-graphics related, so I prefer inlining, and that is why I used both final and inline. I kept the final specifier in the hope that the release build is smart enough to build the DLL by inlining the function even without my hinting at it directly...
PS (Linux): I expect the same does not happen with gcc or clang, as I routinely used to do this kind of thing. I am not sure where this issue comes from... I prefer doing C++ on Linux, or at least with some gcc, but projects differ in their needs.
I have a virtual base class function which should never be used in a particular derived class. Is there a way to 'delete' it? I can of course just give it an empty definition, but I would rather make its attempted use a compile-time error. The C++11 delete specifier seems like what I want, but
class B
{
    virtual void f();
};

class D : public B
{
    virtual void f() = delete; // Error
};
won't compile; gcc, at least, explicitly won't let me delete a function that has a non-deleted base version. Is there another way to get the same functionality?
It is not allowed by the standard; however, you could use one of the following two workarounds to get similar behaviour.
The first would be to use a using-declaration to change the visibility of the method to private, thus preventing others from using it. The problem with that solution is that calling the method on a pointer to the superclass does not result in a compilation error.
class B
{
public:
    virtual void f();
};

class D : public B
{
private:
    using B::f;
};
The best solution I have found so far to get a compile-time error when calling D's method is to use a static_assert with a generic struct that inherits from std::false_type. As long as no one ever calls the method, the template is never instantiated and the static_assert won't fail.
If the method is called, however, the struct is instantiated, its value is false, and the static_assert fails.
If the method is not called directly, but through a pointer to the superclass, note that the template does not participate in virtual dispatch, so the call resolves to B's method instead (D no longer overrides it).
#include <type_traits>

template <typename T>
struct fail : std::false_type
{
};

class B
{
public:
    virtual void f()
    {
    }
};

class D : public B
{
public:
    template <typename T = bool>
    void f()
    {
        static_assert(fail<T>::value, "Do not use!");
    }
};
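Usage then looks like this (a sketch): the static_assert only fires for code that actually instantiates the template by calling f on a D directly:

int main() {
    D d;
    B& b = d;
    b.f();     // fine: resolves virtually to B::f
    // d.f();  // would not compile: static_assert "Do not use!" fires
}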
Another workaround would be to throw an exception when the method is used, but that would only throw up on run-time.
The standard does not allow you to delete a member of a base class in a derived class, for good reason:
doing so breaks inheritance, specifically the "is-a" relationship.
For related reasons, it does not allow a derived class to define a function deleted in the base class:
the hook is no longer part of the base class contract, and thus it would stop you from relying on previous guarantees which no longer hold.
If you want to get tricky, you can force an error, but it will have to be at link time instead of compile time:
declare the member function but don't ever define it (this is not 100% guaranteed to work for virtual functions, though).
Better, also take a look at the GCC deprecated attribute for earlier warnings: __attribute__((deprecated)).
For details and similar MS magic: C++ mark as deprecated
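For illustration (a sketch; since C++14 there is also the portable [[deprecated]] attribute): marking the override produces a warning at every call site, which gcc/clang can promote to an error with -Werror=deprecated-declarations:

class B
{
public:
    virtual void f();
};

class D : public B
{
public:
    // GCC spelling: __attribute__((deprecated)) void f() override;
    [[deprecated("do not call f() on a D")]] void f() override;
};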
"I have a virtual base class function which should never be used in a particular derived class."
In some respects that is a contradiction. The whole point of virtual functions is to provide different implementations of the contract provided by the base class. What you are trying to do is break the contract. The C++ language is designed to prevent you from doing that. This is why it forces you to implement pure virtual functions when you instantiate an object. And that is why it won't let you delete part of the contract.
What is happening is a good thing. It is probably preventing you from implementing an inappropriate design choice.
However:
Sometimes it can be appropriate to have a blank implementation that does nothing:
void MyClass::my_virtual_function()
{
    // nothing here
}
Or a blank implementation that returns a "failed" status:
bool MyClass::my_virtual_function()
{
    return false;
}
It all depends what you are trying to do. Perhaps if you could give more information as to what you are trying to achieve someone can point you in the right direction.
EDIT
If you think about it, to avoid calling the function for a specific derived type, the caller would need to know which type it is calling. The whole point of calling through a base class reference/pointer is that you don't know which derived type will receive the call.
What you can do is simply throw an exception in the derived implementation. For example, the Java Collections framework does this quite extensively: when an update operation is performed on an immutable collection, the corresponding method simply throws an UnsupportedOperationException. You can do the same in C++.
Of course, this will reveal a wrong use of the function only at runtime, not at compile time. However, with virtual methods you are unable to catch such errors at compile time anyway, because of polymorphism. E.g.:

B* b = new D();
b->f();

Here, you store a D in a B* variable. So, even if there were a way to tell the compiler that you are not allowed to call f on a D, the compiler would be unable to report this error here, because it only sees B.
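A minimal C++ counterpart of that Java pattern (a sketch; the class names are illustrative):

#include <stdexcept>

class B {
public:
    virtual ~B() = default;
    virtual void f() { /* normal behaviour */ }
};

class D : public B {
public:
    void f() override {
        // Mirrors Java's UnsupportedOperationException.
        throw std::logic_error{"f() is not supported on D"};
    }
};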
I have a virtual base class function which should never be used in a particular derived class.
C++11 provides the keyword final, which prevents a virtual function from being overridden.
Look: http://en.cppreference.com/w/cpp/language/final .
class B
{
    virtual void f() final;
};

class D : public B
{
    // virtual void f();  // a compile-time error
    // void f() override; // a compile-time error
    // void f();          // also an error: same signature, so it would
    //                    //   still implicitly override the final function
    void f(int);          // different signature: does not override, it's ok
};
I'm wondering if there's any reason to explicitly write code that does the same thing as the default behavior of C++.
Here's some code:
class BaseClass
{
public:
    virtual ~BaseClass() {}
    virtual void f() { /* do something */ }
};

class ExplicitClass
    : public BaseClass
{
public:
    ExplicitClass()
        : BaseClass() // <-- explicit call of base class constructor
    {
        // empty function
    }
    virtual ~ExplicitClass() {} // <-- explicit empty virtual destructor
    virtual void f() { BaseClass::f(); } // <-- function just calls base
};

class ImplicitClass
    : public BaseClass
{
};
I'm mainly curious about this in the context of refactoring and a changing code base. I don't think many coders intend to write code like this, but it can end up looking like this as code changes over time.
Is there any point in leaving the code present in ExplicitClass? I can see the bonus that it shows you what is happening, but it comes across as lint-prone and risky.
Personally I prefer to remove any code that merely restates default behavior (like ImplicitClass).
Is there any consensus favoring one way or the other?
There are two approaches to this issue:
Define everything, even if the compiler would generate the same.
Do not define anything the compiler will do better.
The believers of (1) use rules like: "Always define the default constructor, copy constructor, assignment operator and destructor."
The believers of (1) think it is safer to have more than to miss something.
Unfortunately (1) is especially loved by our managers: they believe it is better to have than not to have. So rules like "always define the big four" go into the "Coding standard" and must be followed.
I believe in (2). And for firms where such coding standards exist, I always put in a comment: "Do not define the copy constructor, as the compiler does it better."
As the question asks about consensus, I can't answer, but I find ildjarn's comment amusing and correct.
As for whether there is a reason to write it like that: there is not, as the explicit and implicit classes behave the same. People sometimes do it for 'maintenance' reasons, e.g. so that if the derived f is ever implemented differently, one remembers to call the base class. Personally, I don't find this useful.
Either way is fine, so long as you understand what really happens there, and the problems which may arise from not writing the functions yourself.
EXCEPTION-SAFETY:
The compiler will generate the functions implicitly, adding the necessary exception specifications. For an implicitly created constructor, this would be every exception specification from the base classes and the members combined.
ILL-FORMED CODE
There are some tricky cases where some of the automatically generated member functions will be ill-formed. Here is an example:
class Derived;

class Base
{
public:
    virtual Base& /* or Derived& */
    operator=( const Derived& ) throw( B1 );
    virtual ~Base() throw( B2 );
};

class Member
{
public:
    Member& operator=( const Member& ) throw( M1 );
    ~Member() throw( M2 );
};

class Derived : public Base
{
    Member m_;

    // Derived& Derived::operator=( const Derived& )
    //   throw( B1, M1 ); // error, ill-formed
    // Derived::~Derived()
    //   throw( B2, M2 ); // error, ill-formed
};
The operator= is ill-formed because its exception specification should be at least as restrictive as that of its base class, meaning that it should throw either B1, or nothing at all. This makes sense, as a Derived object can also be seen as a Base object.
Note that it is perfectly legal to have an ill-formed function as long as you never invoke it.
I am basically rewriting GotW #69 here, so if you want more details, you can find them here
It depends on how you like to structure and read your programs. Of course, there are preferences and reasons for and against each.
class ExplicitClass
    : public BaseClass
{
public:
Initialization is very important. Not initializing a base or member can produce warnings (rightly so) or catch a bug in some cases. So this really starts to make sense if that collection of warnings is enabled, you keep the warning level way up, and the warning count down. It also helps to demonstrate intention:
    ExplicitClass()
        : BaseClass() // <-- explicit call of base class constructor
    {
        // empty function
    }
An empty virtual destructor is, in my experience, statistically the best place to export a virtual function (of course, that definition would live elsewhere if it were visible to more than one translation unit). You want this exported because there is a ton of RTTI and vtable information that could otherwise end up as unnecessary bloat in your binary. I actually define empty destructors very regularly for this reason:
    virtual ~ExplicitClass() {} // <-- explicit empty virtual destructor
Perhaps it is a convention in your group, or it documents that this is exactly what the implementation is designed to do. This can also be helpful (subjectively) in large codebases or within complex hierarchies, because it can remind you of the dynamic interface the type is expected to adopt. Some people prefer all declarations in the subclass because they can see the class' entire dynamic implementation in one place; that locality helps when the class hierarchy/interface is larger than the programmer's mental stack. Like the destructor, this virtual function may also be a good place to export the type info:
    virtual void f() { BaseClass::f(); } // <-- function just calls base
};
Of course, it gets hard to follow a program or its rationale if you define these members only when some criterion is met, so some codebases end up easier to follow if you just stick to conventions, because that is clearer than documenting why an empty destructor is exported at every turn.
A final reason (which swings both ways) is that explicit default definitions can increase or decrease build and link times.
Fortunately, it is now easier and unambiguous to specify defaulted and deleted methods and constructors.
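For example (following the question's class names), C++11 lets you state the intent without hand-writing the default bodies:

class ExplicitClass : public BaseClass {
public:
    ExplicitClass() = default;                     // documents "the default is intended"
    virtual ~ExplicitClass() = default;            // still virtual, still trivial
    ExplicitClass(const ExplicitClass&) = delete;  // or forbid an operation outright
};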
I'm working on building Cppcheck on AIX with the xlC compiler (see previous question). Checker classes all derive from a Check class, whose constructor registers each object in a global list:
check.h
#include <list>
#include <string>

class Check {
public:
    Check() {
        instances().push_back(this);
        instances().sort();
    }
    static std::list<Check *> &instances();
    virtual std::string name() const = 0;
private:
    bool operator<(const Check *other) const {
        return (name() < other->name());
    }
};
checkbufferoverrun.h
class CheckBufferOverrun : public Check {
public:
    // ...
    std::string name() const {
        return "Bounds checking";
    }
};
The problem I appear to be having is with the instances().sort() call. sort() will call Check::operator<() which calls Check::name() on each pointer in the static instances() list, but the Check instance that was just added to the list has not yet had its constructor fully run (because it's still inside Check::Check()). Therefore, it should be undefined behaviour to call ->name() on such a pointer before the CheckBufferOverrun constructor has completed.
Is this really undefined behaviour, or am I missing a subtlety here?
Note that I don't think the call to sort() is strictly required; its effect is that Cppcheck runs all its checkers in a deterministic order. This only affects the order in which errors are detected, which causes some test cases to fail because they expect the output in a particular order.
Update: The question as above still (mostly) stands. However, I think the real reason why the call to sort() in the constructor wasn't causing problems (i.e. crashing by calling a pure virtual function) is that Check::operator<(const Check *) is never actually called by sort()! Rather, sort() compares the pointer values themselves. This happens with both g++ and xlC, indicating a problem in the Cppcheck code itself.
Yes, it's undefined. The standard specifically says so in 10.4/6
Member functions can be called from a constructor (or destructor) of an abstract class; the effect of making a virtual call (10.3) to a pure virtual function directly or indirectly for the object being created (or destroyed) from such a constructor (or destructor) is undefined.
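A minimal illustration of why this is undefined (a sketch, not taken from the question's code): during Base's constructor the dynamic type of the object is still Base, so a virtual call cannot reach the derived override, and for a pure virtual function there is nothing valid to reach at all:

#include <string>

struct Base {
    Base() {
        // A call to name() here would dispatch to Base::name -- which is pure.
        // Typical implementations abort with a "pure virtual method called"
        // diagnostic, but the behaviour is formally undefined.
        // name();  // undefined behaviour if uncommented
    }
    virtual ~Base() = default;
    virtual std::string name() const = 0;
};

struct Derived : Base {
    std::string name() const override { return "Derived"; }
};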
It is true that calling a pure virtual function from a constructor is always undefined behaviour.
The virtual pointer cannot be assumed to be fully set up until the constructor has run completely (reached its closing "}"), and hence any call to a virtual function (or pure virtual function) from within it has to be bound at compile time (a statically bound call).
Now, if the virtual function is a pure virtual function, the compiler will generally insert its own implementation for it, the default behavior of which is to abort the program (often with a segmentation fault or a "pure virtual method called" message). The Standard does not dictate what the implementation of a pure virtual function should be, but most C++ compilers adopt this style.
If your code is not causing any runtime misbehaviour, then the pure virtual function is not getting called in the said call sequence. If you could post the implementation code for the two functions below
instances().push_back(this);
instances().sort();
then maybe it will help to see what's going on.
As long as object construction isn't finished, a pure virtual function may not be called. However, if it's declared pure virtual in a base class A and then defined in B (derived from A), the constructor of C (derived from B) may call it, since B's construction is complete by then.
In your case, use a static factory function instead:

class Check {
private:
    Check() { /* ... */ }
public:
    static Check* createInstance() {
        Check* check = new Check();
        instances().push_back(check);
        instances().sort();
        return check;
    }
    // ...
};
I think your real problem is that you've conflated two things: the Check base class, and the mechanism for registering (derived) Check instances.
Among other things, this isn't particularly robust: I may want to use your Check classes, but register them differently.
Maybe you could do something like this: Check gets a protected constructor (it's abstract anyway, so only derived classes ought to be calling the Check constructor).
Derived classes also get protected constructors, and a public static method (the "named constructor" pattern) to create instances. That creation method news up a Check subclass, then passes it (fully created at this point) to a CheckerRegister class (which is also abstract, so users can implement their own if need be).
You use whatever singleton pattern, or dependency injection mechanism, you prefer to instantiate a CheckerRegister and make it available to Check subclasses.
One simple way to do this would be a getCheckerRegister static method on Check.
So a Check subclass might look like this:
class CheckBufferOverrun : public Check {
protected:
    CheckBufferOverrun() : Check("Bounds checking") {
        // since every derived class has a name, why not just pass it as an arg?
    }
public:
    static CheckBufferOverrun* makeCheckBufferOverrun() {
        CheckBufferOverrun* that = new CheckBufferOverrun();
        // get the singleton, pass it something fully constructed
        // (named registerCheck because 'register' is a C++ keyword)
        Check::getCheckerRegister().registerCheck(that);
        return that;
    }
};
If it looks like this will end up being a lot of boilerplate code, write a template. If you worry about template bloat (each template instance in C++ is a real and unique class), write a non-templated base class that will register any Check-derived instance.
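For illustration, a sketch of that templated variant (the names RegisteredCheck and create are mine, not Cppcheck's; instances() is given an inline definition to keep the sketch self-contained): a CRTP helper whose factory registers an object only after it is fully constructed:

#include <list>
#include <string>

class Check {
public:
    virtual ~Check() = default;
    virtual std::string name() const = 0;
    static std::list<Check*>& instances() {
        static std::list<Check*> list;
        return list;
    }
};

// CRTP helper: the constructor registers nothing and calls no virtual
// functions; registration happens in create(), on a fully constructed
// object, so calling name() there would be safe.
template <typename T>
class RegisteredCheck : public Check {
public:
    static T* create() {
        T* check = new T();
        Check::instances().push_back(check);
        return check;
    }
};

class CheckBufferOverrun : public RegisteredCheck<CheckBufferOverrun> {
public:
    std::string name() const override { return "Bounds checking"; }
};

Any deterministic ordering (the sort() call) can then be applied once, after all checks have been created, rather than from inside a constructor.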