Why does std::logic_error not virtually inherit from std::exception? - c++

I'm trying to implement a custom exception hierarchy and still allow the appropriate std::* exceptions to be caught by client code.
class my_exception : public virtual std::exception {
};
class my_bad_widget_state : public virtual my_exception, public virtual std::logic_error {
public:
    my_bad_widget_state() : std::logic_error("widget oops") {}
};
Obviously my_bad_widget_state is a my_exception and is also a std::logic_error, but the compiler rejects this code because std::logic_error doesn't say virtual when inheriting from std::exception, so there's an ambiguity. The compiler is right, but I think the standard library might be wrong, or?
edit:
Obviously my_bad_widget_state is a my_exception and a std::logic_error, so it is also a std::exception, yet when my_bad_widget_state is thrown it is not caught by a std::exception handler.
edit:
I am interested in knowing whether the standard library is designed this way for a particular reason that I have failed to understand so far (if so, what is that reason, please), or whether it is some kind of oversight. My research indicates that many people seem to think this is a problem, but I didn't find any reason why the inheritance shouldn't be virtual.
Q1: why is the inheritance in the standard library not virtual?
Q2: how can this be implemented correctly? [answered]

[W]hy is the inheritance [w.r.t. exceptions] in the standard library not virtual?
Simply put, multiple inheritance was never intended to be supported in the standard exception hierarchy. The bases are not virtually derived, and that is, in effect, what this signals.
By contrast, where in the standard library is such multiple inheritance supported? I/O streams is the first example that comes to mind, in particular the use of basic_ios all the way down the hierarchy to basic_iostream. In that case the base was intentionally derived virtually, so that multiple inheritance is supported and the "diamond problem" is avoided.
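As an illustration, a minimal sketch of that shape with made-up class names, mirroring how basic_istream and basic_ostream both derive virtually from basic_ios; because the common base is virtual, the "diamond" collapses to a single subobject:
#include <iostream>
// Hypothetical classes mirroring the iostream shape.
struct ios_like { int state = 0; };
struct istream_like : virtual ios_like {};
struct ostream_like : virtual ios_like {};
struct iostream_like : istream_like, ostream_like {};
int main() {
    iostream_like s;
    s.state = 1;        // unambiguous: exactly one ios_like subobject
    ios_like& base = s; // unambiguous conversion, too
    std::cout << base.state << '\n';
}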
So why is this, how should std::exception be used?
std::exception has multiple exceptions derived from it; in particular, note std::logic_error and std::runtime_error. The standard library has already given us a broad pattern for the classification and organisation of our exceptions, namely:
class logic_error;
Defines a type of object to be thrown as an exception. It reports errors that are a consequence of faulty logic within the program, such as violating logical preconditions or class invariants, and may be preventable.
And
class runtime_error;
Defines a type of object to be thrown as an exception. It reports errors that are due to events beyond the scope of the program and cannot be easily predicted.
Of course these are not the only two, but they capture and are a base of a significant number of other standard library exceptions.
Where to root the exception hierarchy?
If you wish to use the standard library exception hierarchy, it is better to choose a point at which to extend the hierarchy and work from that point on. Hence, if there is a desire to have a custom root exception, then have std::exception as a base class and derive further custom exceptions from that custom base onwards.
If the custom exceptions are divisible between runtime and logic errors, then derive the custom exception hierarchy from that level onwards.
Using a custom exception hierarchy rooted somewhere in the standard library exceptions is generally a good idea. At what point that root(s) should be is dependent on the actual intended use of the code. See here for a broader Q&A on this.
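For example, a minimal sketch (the class names are illustrative) of rooting a custom hierarchy at std::logic_error and std::runtime_error using single inheritance only:
#include <stdexcept>
#include <string>

// Custom exceptions rooted in the standard classification.
class my_bad_widget_state : public std::logic_error {
public:
    explicit my_bad_widget_state(const std::string& what)
        : std::logic_error(what) {}
};

class my_widget_io_failure : public std::runtime_error {
public:
    explicit my_widget_io_failure(const std::string& what)
        : std::runtime_error(what) {}
};

// catch (const std::logic_error&), catch (const std::runtime_error&) and
// catch (const std::exception&) all behave as expected for these types.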
What about boost exceptions?
Boost uses virtual inheritance; it does this precisely to support the multiple inheritance that the standard library does not. It also supports some additional features not found in the standard library.
That said, boost still uses the std::exception as a base class.
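For reference, a small sketch of the usual Boost.Exception pattern; the header and the error_info/get_error_info facilities come from Boost, while the exception type, tag and function names here are made up:
#include <boost/exception/all.hpp>
#include <iostream>
#include <stdexcept>

// Boost's recommended shape: derive virtually from both bases so the type
// can be combined with others and enriched with data at the throw site.
struct my_io_error : virtual std::exception, virtual boost::exception {};

typedef boost::error_info<struct tag_errno, int> errno_info;

void open_file() {
    throw my_io_error() << errno_info(2); // attach data when throwing
}

int main() {
    try {
        open_file();
    } catch (const boost::exception& e) {
        if (const int* err = boost::get_error_info<errno_info>(e))
            std::cout << "errno: " << *err << '\n';
    }
}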
Ultimately this becomes a design decision based on the inheritance structures you wish to support in the hierarchy.

std::logic_error has no default constructor, so a class deriving from it must declare a constructor. If you are using C++11, you can inherit the base class constructors with a using declaration:
class MyException : public std::logic_error {
public:
using std::logic_error::logic_error;
};
In C++03, you have to explicitly write a constructor that takes a std::string and forwards it to the base-class constructor, like so:
class MyException : public std::logic_error {
public:
MyException(std::string const& msg) : std::logic_error(msg) { }
};

Virtual inheritance is rather awkward to use in a concrete hierarchy, because you need to initialize a virtual base in all descendant classes (children, grandchildren, ...)
If you want to add functionality to all standard exception classes, you can do this
class my_exception_additions {
// no inheritance from std::exception
};
template <class E>
class my_exception : public E,
public my_exception_additions {
...
};
...
throw my_exception<std::logic_error>("oops");
Of course the template will need to forward constructors to E.
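A minimal sketch of that forwarding, assuming C++11 variadic templates; the class names follow the outline above:
#include <stdexcept>
#include <utility>

class my_exception_additions {
    // extra data/behaviour shared by all custom exceptions
};

template <class E>
class my_exception : public E, public my_exception_additions {
public:
    // Forward whatever arguments the chosen standard exception E expects.
    template <class... Args>
    explicit my_exception(Args&&... args)
        : E(std::forward<Args>(args)...) {}
};

// usage: throw my_exception<std::logic_error>("oops");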
Now if you want two separate hierarchies, like std::exception and your sql_exception from the comments, the template machinery becomes too complicated and it's better to resort to manually defining all classes:
class abstract_sql_exception {...};
class sql_exception : public abstract_sql_exception,
public std::exception {...};
class abstract_sql_disconnected : public abstract_sql_exception {...};
class sql_disconnected : public abstract_sql_disconnected,
public std::runtime_error {...};
class abstract_sql_invalid_input : public abstract_sql_exception {...};
class sql_invalid_input : public abstract_sql_invalid_input,
public std::logic_error {...};
Here, the abstract_sql hierarchy exists completely independently from the std:: hierarchy. Only concrete leaf classes tie the two together.
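As an illustration, here is one branch of that outline filled in (constructor signatures are illustrative). The concrete leaf can be caught through either hierarchy without ambiguity, because std::exception appears only once among its bases:
#include <stdexcept>
#include <string>

class abstract_sql_exception {
public:
    virtual ~abstract_sql_exception() = default;
};

class abstract_sql_disconnected : public abstract_sql_exception {};

// The concrete leaf ties the custom hierarchy to the standard one.
class sql_disconnected : public abstract_sql_disconnected,
                         public std::runtime_error {
public:
    explicit sql_disconnected(const std::string& what)
        : std::runtime_error(what) {}
};

void demo() {
    try {
        throw sql_disconnected("connection lost");
    } catch (const abstract_sql_exception&) {
        // caught via the custom hierarchy
    }
    try {
        throw sql_disconnected("connection lost");
    } catch (const std::exception&) {
        // caught via the standard hierarchy
    }
}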
I must say that this is a (more or less ugly) workaround, not an ideal solution. The standard should have probably specified virtual inheritance throughout the exception hierarchy.

std::logic_error doesn't inherit virtually from std::exception because the standard doesn't say it does. The likely reason is that virtual inheritance is largely unneeded for the way the standard library itself uses exceptions. Virtual inheritance also adds complexity and cost (albeit insignificant compared to the cost of exception handling).
You can certainly do what you want by not inheriting virtually with the caveat that you'd have two base std::exception objects in my_bad_widget_state. The primary issue with that is that you then can't catch a my_bad_widget_state exception by catch (std::exception& e) ... because the conversion to std::exception is ambiguous.
My advice is not to use virtual inheritance and instead either stick to the standard exception classes (logic_error, runtime_error, etc.) or have all your exceptions inherit exclusively from my_exception. If you are pursuing this model because of shared functionality in my_exception, you'd probably opt for the latter.
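For illustration, a small sketch of the ambiguity described above, using the question's class names but plain (non-virtual) inheritance, so that my_bad_widget_state ends up with two std::exception base subobjects:
#include <stdexcept>

class my_exception : public std::exception {};

class my_bad_widget_state : public my_exception, public std::logic_error {
public:
    my_bad_widget_state() : std::logic_error("widget oops") {}
};

void demo() {
    try {
        throw my_bad_widget_state();
    } catch (const std::logic_error&) {
        // Caught: there is exactly one std::logic_error base subobject.
    }
    try {
        throw my_bad_widget_state();
    } catch (const std::exception&) {
        // Never reached: the conversion to std::exception is ambiguous,
        // so this handler does not match and the exception propagates.
    }
}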

Related

Intruding privacy - How does the C++ standard handle it?

Consider the below code snippet.
The method Sayhi() has public access in class Base.
Sayhi() has been overridden as a private method by the class Derived.
In this way, we can intrude into someone's privacy, and C++ has no way to detect it because the call happens at run time.
I understand it is a "purely" compile-time check. But when working with a thick inheritance hierarchy, programmers may incorrectly change access specifiers. Shouldn't the standard have some say, at least some kind of warning message?
Why doesn't the compiler issue at least a warning message whenever the access specifiers of overridden or virtual functions differ?
Q1. Does the C++ standard have any say about such run-time anomalies?
Q2. I want to understand, from the C++ standard's perspective, why the standard doesn't require compiler implementers to provide warning diagnostics.
#include <iostream>
class Base {
public:
virtual void Sayhi() { std::cout<<"hi from Base"<<std::endl; }
};
class Derived : public Base
{
private:
virtual void Sayhi() { std::cout<<"hi from Derived"<<std::endl; }
};
int main() {
Base *pb = new Derived;
// private method Derived::Sayhi() invoked.
// May affect the object state!
pb->Sayhi();
return 0;
}
Does the C++ standard have any say about such run-time anomalies?
No. Access control is purely compile-time, and affects which names may be used, not which functions may be called.
So in your example, you can access the name Base::Sayhi, but not Derived::Sayhi; and access to Base::Sayhi allows you to virtually call any function that overrides it.
Why doesn't the standard require compiler implementers to provide warning diagnostics?
The standard has nothing to say about warnings at all; it just defines the behaviour of well-formed code. It's up to compiler writers to decide what warnings might be useful; and warning about all private overrides just in case you didn't mean them to be overrides sounds like it would generate a lot of false positives.
Access specification cannot be loosened; it can only be tightened.
Sayhi() is public in the Base class, so all classes deriving from it and overriding the method should expect it to be callable as public through the base; there is no intrusion. The access specification for overriding functions is well specified, since the method was declared public to begin with.
Even though your question has been answered by now, I would like to add a note.
While you consider this an "anomaly" and would like to have diagnostics, it is actually useful: you can ensure that your implementation can only be used polymorphically. The derived class should have only a public constructor and no other public functions; all the re-implemented member functions should be private.
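A small sketch of that pattern, with hypothetical Shape/Circle names: the derived class exposes only its constructor, and the private override is still reachable through the public base interface:
#include <iostream>
#include <memory>

class Shape {
public:
    virtual ~Shape() = default;
    virtual void draw() const = 0;
};

class Circle : public Shape {
public:
    explicit Circle(double r) : radius_(r) {}
private:
    // Private override: callable only polymorphically, through Shape.
    void draw() const override { std::cout << "circle r=" << radius_ << '\n'; }
    double radius_;
};

int main() {
    std::unique_ptr<Shape> s(new Circle(1.0));
    s->draw();       // fine: dispatches to Circle::draw via the public base
    // Circle c(1.0);
    // c.draw();     // error: Circle::draw is private
}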

How often is non-public C++ inheritance used in practice? [duplicate]

This question already has answers here:
Possible Duplicate:
When should I use C++ private inheritance?
I wanted to make this community-wiki but don't see the button... can someone add it?
I can't think of any case where I've derived from a class in a non-public way, and I can't recall off-hand seeing code which does this.
I'd like to hear real-world examples and patterns where it is useful.
Your mileage may vary...
The hard-core answer would be that non-public inheritance is useless.
Personally, I use it in either of two cases:
I would like to trigger the Empty Base Optimization if possible (usually, in template code with predicates passed as parameters)
I would like to override a virtual function in the class
In either case, I thus use private inheritance, because the inheritance itself is an implementation detail.
I have seen people use private inheritance more liberally, and nearly systematically, instead of composition when writing wrappers or extending behaviors. C++ does not provide an "easy" delegate syntax, so doing this allows you to write using Base::method; to immediately expose the method instead of writing a proper forwarding call (and all its overloads). I would argue it is bad form, although it does save time.
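As an illustration of the first bullet, a rough sketch of the empty base optimisation with a hypothetical comparator parameter; exact sizes are implementation-dependent:
#include <functional>
#include <iostream>

// Storing an empty comparator as a member costs at least one byte (plus
// padding); inheriting from it privately usually costs nothing.
template <class T, class Compare = std::less<T>>
struct with_member {
    T value;
    Compare cmp; // empty, but still occupies space
};

template <class T, class Compare = std::less<T>>
struct with_ebo : private Compare { // implementation detail, so private
    T value;
};

int main() {
    std::cout << sizeof(with_member<int>) << ' '
              << sizeof(with_ebo<int>) << '\n'; // typically prints "8 4"
}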
If you chose inheritance for developing a wrapper, private inheritance is the way to go. You no longer need or want access to your base class' methods and members from outside your wrapper class.
class B;
class A
{
public:
    A();
    void foo(B b);
};

class BWrap;

// We no longer want A::foo to be accessible by mistake here, so inherit privately.
class AWrap : private A
{
public:
    AWrap();
    void foo(BWrap b);
};
Since private inheritance has as its only known use implementation inheritance, and since this could always be done using containment instead (which is less simple to use, but better encapsulates the relationship), I'd say it's used too often.
(Since nobody ever told me what protected inheritance means, let's assume nobody knows what it is and pretend it doesn't exist.)
Sometimes, when inheriting from classes that have neither virtual functions nor a virtual destructor (e.g. STL containers), you may have to go for non-public inheritance, e.g.
template<typename T>
struct MyVector : private std::vector<T>
{ ... };
This prevents handles (pointers or references) to the base (vector<>) from getting hold of the derived class (MyVector<>):
vector<int> *p = new MyVector<int>; // compiler error
...
delete p; // undefined behavior: ~vector() is not 'virtual'!
Since we get a compiler error on the first line, we are saved from the undefined behavior on the subsequent line.
If you are deriving from a class without a virtual destructor, then public inheritance creates a chance that users of the class call delete on a pointer-to-base, which leads to undefined behaviour.
In such a scenario it makes sense to use private inheritance.
The most common example of this is deriving privately from STL containers, which do not have virtual destructors.
The C++ FAQ has an excellent example of private inheritance which extends to many real-life scenarios.
A legitimate, long-term use for private inheritance is when you want to build a class Fred that uses code in a class Wilma, and the code from class Wilma needs to invoke member functions from your new class, Fred. In this case, Fred calls non-virtuals in Wilma, and Wilma calls (usually pure virtuals) in itself, which are overridden by Fred. This would be much harder to do with composition.
Code Example:
#include <iostream>

class Wilma {
protected:
    void fredCallsWilma()
    {
        std::cout << "Wilma::fredCallsWilma()\n";
        wilmaCallsFred();
    }
    virtual void wilmaCallsFred() = 0; // A pure virtual function
};

class Fred : private Wilma {
public:
    void barney()
    {
        std::cout << "Fred::barney()\n";
        Wilma::fredCallsWilma();
    }
protected:
    virtual void wilmaCallsFred()
    {
        std::cout << "Fred::wilmaCallsFred()\n";
    }
};
Non-public (almost always private) inheritance is used when inheriting (only) behavior, and not interface. I've used it mostly, but not exclusively, in mixins.
For a good discussion on the topic, you might want to read Barton and Nackman (Scientific and Engineering C++: An Introduction with Advanced Techniques and Examples, ISBN 0-201-53393-6). Despite the name, large parts of the book are applicable to all C++, not just scientific and engineering applications. And despite its date, it's still worth reading.

typedef std::runtime_error MyError vs class MyError : public std::runtime_error

I'm currently implementing some custom exceptions in a project and can't decide whether to typedef my exceptions or to derive a new class for every exception. I'm interested in the potential pros and cons of each, and whether one is preferable.
The pros of deriving your own class are simple: you can dispatch on it in a catch handler.
If you decide not to implement your own class, I still doubt you need this typedef: what are you abstracting from? You may as well use plain std::runtime_error.
You might also be interested in David Abrahams' article about exception handling and implementing your own exception class.
Derive a new class - then you can distinguish between them. If you typedef, there's no way for the exception handler to know which typedef was used in the throw.
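A short sketch of the difference (the class names are made up for illustration):
#include <iostream>
#include <stdexcept>
#include <string>

typedef std::runtime_error MyTypedefError;          // same type, new name

class MyDerivedError : public std::runtime_error {  // a distinct type
public:
    explicit MyDerivedError(const std::string& msg)
        : std::runtime_error(msg) {}
};

int main() {
    try {
        throw MyTypedefError("oops");
    } catch (const MyDerivedError&) {
        std::cout << "derived\n";        // not reached
    } catch (const std::runtime_error&) {
        std::cout << "runtime_error\n";  // a typedef cannot be told apart
    }
    try {
        throw MyDerivedError("oops");
    } catch (const MyDerivedError&) {
        std::cout << "derived\n";        // reached: the derived type matches
    } catch (const std::runtime_error&) {
        std::cout << "runtime_error\n";
    }
}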
I'm having a bit of difficulty understanding your question, but if you're making your own custom exceptions, I think I would use a derived class in case you want to change the behavior later on.

Where to define exception classes, inside classes or on a higher level?

Should exception classes be part of the class which may throw them or should they exist on a higher level?
For example :
class Test
{
public:
class FooException: public ExceptionBase { };
void functionThrowingFooException();
};
or
class FooException: public ExceptionBase { };
class Test
{
public:
void functionThrowingFooException();
};
(functionThrowingFooException() is the only function to ever throw a FooException)
Exceptions really model error conditions. Are those specific to each class? What if you need to raise an exception from a free function?
If you go the route of providing different exception types for various problem states: analyze/list those error states, name exceptions after them, put whatever state-specific data into those exception types, derive from a subclass of std::exception, then declare them in a separate header.
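For example, a minimal sketch of such a header; the namespace and the choice of std::runtime_error as the base are illustrative:
// errors.h -- collects the project's exception types at namespace scope
// rather than nesting them inside the classes that throw them.
#ifndef ERRORS_H
#define ERRORS_H

#include <stdexcept>
#include <string>

namespace myproject {

class FooException : public std::runtime_error {
public:
    explicit FooException(const std::string& what)
        : std::runtime_error(what) {}
};

} // namespace myproject

#endif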
I think it's all a matter of personal taste (a form of coding convention, really).
I prefer to put them outside the class definition, but that's probably because that's how it's done in .NET's BCL.
Then again, if that exception is used only there... Is there any reason to throw an exception?

Is it possible to forbid deriving from a class at compile time?

I have a value class according to the description in "C++ Coding Standards", Item 32. In short, that means it provides value semantics and does not have any virtual methods.
I don't want a class to derive from this class. Among other reasons, one is that it has a public nonvirtual destructor. But a base class should have a destructor that is public and virtual, or protected and nonvirtual.
I don't know of a way to write the value class such that it is impossible to derive from it. I want to forbid it at compile time. Is there perhaps any known idiom to do that? If not, perhaps there are some new possibilities in the upcoming C++0x? Or are there good reasons why there is no such possibility?
Bjarne Stroustrup has written about this here.
The relevant bit from the link:
Can I stop people deriving from my class?
Yes, but why do you want to? There are two common answers:
for efficiency: to avoid my function calls being virtual.
for safety: to ensure that my class is not used as a base class (for example, to be sure that I can copy objects without fear of slicing).
In my experience, the efficiency reason is usually misplaced fear. In C++, virtual function calls are so fast that their real-world use for a class designed with virtual functions does not produce measurable run-time overhead compared to alternative solutions using ordinary function calls. Note that the virtual function call mechanism is typically used only when calling through a pointer or a reference. When calling a function directly for a named object, the virtual function call overhead is easily optimized away.
If there is a genuine need for "capping" a class hierarchy to avoid virtual function calls, one might ask why those functions are virtual in the first place. I have seen examples where performance-critical functions had been made virtual for no good reason, just because "that's the way we usually do it".
The other variant of this problem, how to prevent derivation for logical reasons, has a solution. Unfortunately, that solution is not pretty. It relies on the fact that the most derived class in a hierarchy must construct a virtual base. For example:
class Usable;
class Usable_lock {
friend class Usable;
private:
Usable_lock() {}
Usable_lock(const Usable_lock&) {}
};
class Usable : public virtual Usable_lock {
// ...
public:
Usable();
Usable(char*);
// ...
};
Usable a;
class DD : public Usable { };
DD dd; // error: DD::DD() cannot access
// Usable_lock::Usable_lock(): private member
(from D&E sec 11.4.3).
If you are willing to only allow the class to be created by a factory method you can have a private constructor.
class underivable {
underivable() { }
underivable(const underivable&); // not implemented
underivable& operator=(const underivable&); // not implemented
public:
static underivable create() { return underivable(); }
};
Even if the question is not tagged C++11, for people who get here it should be mentioned that C++11 supports the new contextual keyword final. See the wiki page.
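For example, a minimal C++11 sketch:
class Value final { // no class may derive from Value
public:
    int get() const { return x_; }
private:
    int x_ = 0;
};

// class Derived : public Value {}; // error: Value is marked 'final'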
Take a good look here.
It's really cool but it's a hack.
Wonder for yourself why the standard library doesn't do this with its own containers.
Well, I had a similar problem; it is posted here on SO. The problem was the other way around, i.e. only allowing those classes that you permit to derive. Check if it solves your problem.
This is done at compile time.
I would generally achieve this as follows:
// This class is *not* suitable for use as a base class
The comment goes in the header and/or in the documentation. If clients of your class don't follow the instructions on the packet, then in C++ they can expect undefined behavior. Deriving without permission is just a special case of this. They should use composition instead.
Btw, this is slightly misleading: "a base class should have a destructor that is public and virtual or protected and nonvirtual".
That's true for classes which are to be used as bases for runtime polymorphism. But it's not necessary if derived classes are never going to be referenced via pointers to the base class type. It might be reasonable to have a value type which is used only for static polymorphism, for instance with simulated dynamic binding. The confusion is that inheritance can be used for different purposes in C++, requiring different support from the base class. It means that although you don't want dynamic polymorphism with your class, it might nevertheless be fine to create derived classes provided they're used correctly.
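As an illustration, a rough sketch of such static polymorphism via CRTP; the names are made up, and deriving is safe as long as objects are never deleted through a pointer to the base:
#include <iostream>

// The base has no virtual functions and a public non-virtual destructor.
template <class Derived>
class shape_base {
public:
    void draw() const { static_cast<const Derived&>(*this).do_draw(); }
};

class circle : public shape_base<circle> {
public:
    void do_draw() const { std::cout << "circle\n"; }
};

int main() {
    circle c;
    c.draw(); // dispatched at compile time, no virtual call involved
}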
This solution doesn't work, but I leave it as an example of what not to do.
I haven't used C++ for a while now, but as far as I remember, you get what you want by making the destructor private.
UPDATE:
On Visual Studio 2005 you'll get either a warning or an error. Check the following code:
class A
{
public:
A(){}
private:
~A(){}
};
class B : A
{
};
Now,
B b;
will produce an error "error C2248: 'A::~A' : cannot access private member declared in class 'A'"
while
B *b = new B();
will produce warning "warning C4624: 'B' : destructor could not be generated because a base class destructor is inaccessible".
It looks like a half-solution, BUT as orsogufo pointed out, doing so makes class A unusable. Leaving the answer here anyway.