C++ Design issues: Map with various abstract base classes

I'm facing design problems and could do with some external input. I am trying to avoid abstract base class casting (since I've heard that's bad).
The issues are down to this structure:
class entity... (base with pure virtual functions)
class hostile : public entity... (base with pure virtual functions)
class friendly : public entity... (base with pure virtual functions)
// Then further derived classes using these last base classes...
Initially I thought I'd get away with:
enum class FactionType : unsigned int
{ ... };
std::unordered_map<FactionType, std::vector<std::unique_ptr<CEntity>>> m_entitys;
And... I did, but this causes me problems because I need to access functions unique to hostile or friendly specifically.
I have disgracefully tried the following (it worked, but I neither like it nor does it feel safe):
for (const auto& friendly : m_entitys[FactionType::FRIENDLY])
{
    CFriendly* castFriendly = static_cast<CFriendly*>(friendly.get());
    // ...
}
I was hoping/trying to maintain the unordered_map design that uses FactionType as a key for the abstract base class type... Anyway, input is greatly appreciated.
If there are any syntactical errors, I apologise.

About casting, I agree with @rufflewind. The casts mean different things and are useful at different times.
To reinterpret a region of memory at compile time (the typing decision happens at compile time anyway), use static_cast. The sizeof(T) bytes of memory on the other end of the T* will be interpreted as a T regardless of whether that is correct.
The decisions for dynamic_cast are made entirely at runtime, sometimes requiring RTTI (Run-Time Type Information). It makes its decision at runtime and returns either a null pointer or a valid pointer to a T if one can be made.
The decision goes further than just the types of casts, though. Using a data structure to look up types and methods (member functions) imposes time costs that would not otherwise exist, compared to the relatively fast and mandatory casts. There is a way to skip the data structures, but not the casting, without major refactoring (with major refactoring you can do anything).
You can move the casts into the entity class, get them done right and just leave them encapsulated there.
class entity
{
    // Previous code
public:
    // This will be overridden in hostiles to return a valid
    // pointer, and nullptr in other types of entities.
    virtual hostile* cast_to_hostile() = 0;
    virtual const hostile* cast_to_hostile() const = 0;
    // This will be overridden in friendlies to return a valid
    // pointer, and nullptr in other types of entities.
    virtual friendly* cast_to_friendly() = 0;
    virtual const friendly* cast_to_friendly() const = 0;
    // The following helper methods are optional but
    // can make it easier to write streamlined code in
    // calling classes.
    // Hostile and friendly can each implement this to return
    // the appropriate enum member. This would be useful for making
    // decisions about friendlies and hostiles.
    virtual FactionType entity_type() const = 0;
    // These two methods delegate storage of the knowledge
    // of hostility or friendliness to the derived classes.
    // They are implemented here as non-virtual functions
    // because they shouldn't need to be overridden, but
    // could be made virtual at the cost of a pointer
    // indirection and sometimes, but not often, a cache miss.
    bool is_friendly() const
    {
        return entity_type() == FactionType::FRIENDLY;
    }
    bool is_hostile() const
    {
        return entity_type() == FactionType::HOSTILE;
    }
};
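For illustration, the derived-class side then becomes trivial - a minimal sketch using the hostile class from the question (friendly would mirror it with the roles swapped):
class hostile : public entity
{
public:
    hostile* cast_to_hostile() override { return this; }
    const hostile* cast_to_hostile() const override { return this; }
    friendly* cast_to_friendly() override { return nullptr; }
    const friendly* cast_to_friendly() const override { return nullptr; }
    FactionType entity_type() const override { return FactionType::HOSTILE; }
};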
This strategy is good and bad for a variety of reasons.
Pros:
It is conceptually simple. This is easy to understand quickly if you understand polymorphism.
It is superficially similar to your existing code, making migration easier. There is a reason hostility and friendliness are encoded in your types; this preserves that reason.
You can use static_casts safely because all the casts exist in the class they are used in, and therefore won't normally get called unless valid.
You can return shared_ptr or other custom smart pointers instead of raw pointers. And you probably should.
This avoids a potentially costly refactor that would completely eliminate casting. Casting is there to be used as a tool.
Cons:
It is conceptually simple. This does not provide a strong set of vocabulary (methods, classes and patterns) for building a smart set of tools for building advanced type mechanics.
Likely, whether or not something is hostile should be a data member, or implemented as a series of methods controlling instance behavior.
Someone might think that the pointers this returns convey ownership and delete them.
Every caller must check pointers for validity prior to use. Alternatively you can add methods to check, but then callers will need to call those methods before the cast. Checks like these are surprising for users of the class and make it harder to use correctly.
It is polymorphism dense. This will perplex people who are uncomfortable with polymorphism. Even today there are many who are not comfortable with polymorphism.
A refactor that completely avoids casting is possible. Casting is dangerous and not a tool to use lightly.

Related

Mimicing C# 'new' (hiding a virtual method) in a C++ code generator

I'm developing a system which takes a set of compiled .NET assemblies and emits C++ code which can then be compiled to any platform having a C++ compiler. Of course, this involves some extensive trickery due to various things .NET does that C++ doesn't.
One such situation is the ability to hide virtual methods, such as the following in C#:
class A
{
    public virtual void MyMethod()
    { ... }
}
class B : A
{
    public override void MyMethod()
    { ... }
}
class C : B
{
    public new virtual void MyMethod()
    { ... }
}
class D : C
{
    public override void MyMethod()
    { ... }
}
I came up with a solution to this that seemed clever and did work, as in the following example:
namespace impdetails
{
    template<class by_type>
    struct redef {};
}
struct A
{
    virtual void MyMethod( void );
};
struct B : A
{
    virtual void MyMethod( void );
};
struct C : B
{
    virtual void MyMethod( impdetails::redef<C> );
};
struct D : C
{
    virtual void MyMethod( impdetails::redef<D> );
};
This does of course require that all the call sites for C::MyMethod and D::MyMethod construct and pass the dummy object, as in this example:
C *c_d = &d;
c_d->MyMethod( impdetails::redef<C>() );
I'm not worried about this extra source code overhead; the output of this system is mainly not intended for human consumption.
Unfortunately, it turns out this actually causes runtime overhead. Intuitively, one would expect that because impdetails::redef<> is empty, it would take no space and passing it would involve no code.
However, the C++ standard, for reasons I understand but don't totally agree with, mandates that objects cannot have zero size. This leaves us with a situation where the compiler actually emits code to create and pass the object.
In fact, at least on VC2008, I found that it even went to the trouble of zeroing the dummy byte, even in release builds! I'm not sure why that was necessary, but it makes me even less inclined to do it this way.
If all else fails I could always change the actual name of the function, such as perhaps having MyMethod, MyMethod$1, and MyMethod$2. However, this causes more problems. For instance, $ is actually not legal in C++ identifiers (although compilers I've tested will allow it.) A totally acceptable identifier in the output program could also be an identifier in the input program, which suggests a more complex approach would be needed, making this a less attractive option.
It also so turns out that there are other situations in this project where it would be nice to be able to modify method signatures using arbitrary type arguments similar to how I'm passing a type to impdetails::redef<>.
Is there any other clever way to get around this, or am I stuck between adding overhead at every call site or mangling names?
After considering some other aspects of the system as well, such as interfaces in .NET, I am starting to think maybe it's better - perhaps even more-or-less necessary - not to use the C++ virtual calling mechanism at all. The more I consider it, the messier using that mechanism gets.
In this approach, each user object class would have a separate struct for the vtable (perhaps kept in a separate namespace like vtabletype::). The generated class would have a pointer member that would be initialized through some trickery to point to a static instance of the vtable. Virtual calls would explicitly use a member pointer from that vtable.
If done properly this should have the same performance as the compiler's own implementation would. I've confirmed it does on VC2008. (By contrast, just using straight C, which is what I was planning on earlier, would likely not perform as well, since compilers often optimize this into a register.)
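To make the idea concrete, here is a minimal compilable sketch of that layout - all the names (vtabletype::A_vtable, A_MyMethod) are made up for illustration, not actual generator output:
#include <cstdio>

struct A; // forward declaration so the vtable entries can name it

namespace vtabletype
{
    struct A_vtable
    {
        void (*MyMethod)(A* self); // one entry per virtual method
    };
}

struct A
{
    const vtabletype::A_vtable* vtable; // written in during allocation
};

static void A_MyMethod(A* /*self*/) { std::puts("A::MyMethod"); }
static const vtabletype::A_vtable A_vtable_instance = { &A_MyMethod };

int main()
{
    A a;
    a.vtable = &A_vtable_instance; // the "trickery" done by the generator
    a.vtable->MyMethod(&a);        // an explicit virtual call
}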
It would be hellish to write code like this manually, but of course this isn't a concern for a generator. This approach does have some advantages in this application:
Because it's a much more explicit approach, one can be more sure that it's doing exactly what .NET specifies it should be doing with respect to newslot as well as selection of interface implementations.
It might be more efficient (depending on some internal details) than a more traditional C++ approach to interfaces, which would tend to invoke multiple inheritance.
In .NET, objects are considered to be fully constructed when their .ctor runs. This impacts how virtual functions behave. With explicit knowledge of the vtables, this could be achieved by writing it in during allocation. (Although putting the .ctor code into a normal member function is another option.)
It might avoid redundant data when implementing reflection.
It provides better control and knowledge of object layout, which could be useful for the garbage collector.
On the downside, it totally loses the C++ compiler's overloading feature with regard to the vtable entries: those entries are data members, not functions, so there is no overloading. In this case it would be tempting to just number the members (say _0, _1...) This may not be so bad when debugging, since once the pointer is followed, you'll see an actual, properly-named member function anyway.
I think I may end up doing it this way but by all means I'd like to hear if there are better options, as this is admittedly a rather complex approach (and problem.)

Base class "Object", lazy/smart or stupid?

What I'm wondering is whether having a base class that every* other class inherits from is a good idea or not in C++. Basically, it has the same interface as C#'s Object, which is:
*except for straight interfaces and data structures
class Object
{
public:
    virtual ~Object() {}
    virtual std::string toString() const = 0;
    virtual Object* copy() const = 0;
    virtual void release() = 0;
private:
    // This operator overload calls toString() to print it out to the stream.
    friend std::ostream& operator<<(std::ostream& output, const Object& object);
};
Is this a good thing to do, or am I better off just making separate interfaces if I want a class to be copyable or convertible to a string?
For example
class Copyable
{
public:
    virtual ~Copyable() {}
    virtual Copyable* copy() const = 0;
};
I'm not sure about this at all, and it's doing my head in. :(
I wouldn't do it. Your Object forces each and every class to implement those pure virtual methods. What if you don't really need them?
C++ has multiple inheritance, so you can have a separate class for each purpose and let the derived classes decide which traits they need.
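For example, a sketch of that per-purpose approach (Copyable is the interface from the question; Printable and Point are hypothetical):
#include <string>

class Printable
{
public:
    virtual ~Printable() {}
    virtual std::string toString() const = 0;
};

class Copyable
{
public:
    virtual ~Copyable() {}
    virtual Copyable* copy() const = 0;
};

// Each class opts into only the traits it actually needs:
class Point : public Printable, public Copyable
{
public:
    std::string toString() const { return "Point"; }
    Copyable* copy() const { return new Point(*this); }
};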
Having those virtual functions also adds overhead, as it adds a vptr to every instance of every class. It might not have a horrible effect, but it's not the C++ spirit, IMO.
Lastly, C#'s and Java's Object have some useful methods because those languages have a lot more information on the type at runtime. This makes having a single root for all types reasonable. Some C++ frameworks have it as well (MFC's CObject comes to mind), but providing useful facilities at that level is not trivial in C++. You'll have to do more than just offer pure virtual methods - the major gain from having a single root is getting shared implementation via inheritance, not polymorphism. Using your Object in a polymorphic manner just breaks static typing, and in your case, you don't even get any code reuse.
C++ does not have a common base class for all objects primarily for performance reasons, especially because of the VMT (virtual method table). The VMT is reached through a pointer that is present in every object that has at least one virtual method. The authors of C++ wanted to support simple objects (with a body consisting of one int, for example). This is a valid and reasonable goal.
This only works if enforced at the language level (as in Java.) In C++ you'd have to deal with non-Object objects anyway. There is no way to force libraries to inherit from Object. Once you instantiate anything from std::, you have a non-Object object in your code.
It is difficult to give a one size fits all answer.
Think through what benefits it is going to give you.
Now think through how much extra work it is going to cause e.g. multiple inheritance complications etc.
Generally doing this is going to be more work than not doing it so you need to make sure that it is going to give you a substantial benefit or you are wasting time.

Why should one not derive from c++ std string class?

I wanted to ask about a specific point made in Effective C++.
It says:
A destructor should be made virtual if a class needs to act like a polymorphic class. It further adds that since std::string does not have a virtual destructor, one should never derive from it. Also, std::string is not even designed to be a base class, let alone a polymorphic base class.
I do not understand what specifically is required in a class to be eligible for being a base class (not a polymorphic one)?
Is the only reason I should not derive from std::string that it does not have a virtual destructor? For reusability purposes, a base class can be defined and multiple derived classes can inherit from it. So what makes std::string ineligible even as a plain base class?
Also, if there is a base class defined purely for reusability and there are many derived types, is there any way to prevent clients from doing Base* p = new Derived(), since the classes are not meant to be used polymorphically?
I think this statement reflects the confusion here (emphasis mine):
I do not understand what specifically is required in a class to be eligible for being a base class (not a polymorphic one)?
In idiomatic C++, there are two uses for deriving from a class:
private inheritance, used for mixins and aspect oriented programming using templates.
public inheritance, used for polymorphic situations only. EDIT: Okay, I guess this could be used in a few mixin scenarios too -- such as boost::iterator_facade -- which show up when the CRTP is in use.
There is absolutely no reason to publicly derive a class in C++ if you're not trying to do something polymorphic. The language comes with free functions as a standard feature of the language, and free functions are what you should be using here.
Think of it this way -- do you really want to force clients of your code to convert to using some proprietary string class simply because you want to tack on a few methods? Because unlike in Java or C# (or most similar object oriented languages), when you derive a class in C++ most users of the base class need to know about that kind of a change. In Java/C#, classes are usually accessed through references, which are similar to C++'s pointers. Therefore, there's a level of indirection involved which decouples the clients of your class, allowing you to substitute a derived class without other clients knowing.
However, in C++, classes are value types -- unlike in most other OO languages. The easiest way to see this is what's known as the slicing problem. Basically, consider:
#include <sstream>
#include <stdexcept>
#include <string>

int StringToNumber(std::string copyMeByValue)
{
    std::istringstream converter(copyMeByValue);
    int result;
    if (converter >> result)
    {
        return result;
    }
    throw std::logic_error("That is not a number.");
}
If you pass your own string to this method, the copy constructor for std::string will be called to make a copy, not the copy constructor for your derived object -- no matter what child class of std::string is passed. This can lead to inconsistency between your methods and anything attached to the string. The function StringToNumber cannot simply take whatever your derived object is and copy that, simply because your derived object probably has a different size than a std::string -- but this function was compiled to reserve only the space for a std::string in automatic storage. In Java and C# this is not a problem because the only thing like automatic storage involved are reference types, and the references are always the same size. Not so in C++.
Long story short -- don't use inheritance to tack on methods in C++. That's not idiomatic and results in problems with the language. Use non-friend, non-member functions where possible, followed by composition. Don't use inheritance unless you're template metaprogramming or want polymorphic behavior. For more information, see Scott Meyers' Effective C++ Item 23: Prefer non-member non-friend functions to member functions.
EDIT: Here's a more complete example showing the slicing problem. You can see its output on codepad.org:
#include <iostream>

struct Base
{
    int aMemberForASize;
    Base() { std::cout << "Constructing a base." << std::endl; }
    Base(const Base&) { std::cout << "Copying a base." << std::endl; }
    ~Base() { std::cout << "Destroying a base." << std::endl; }
};

struct Derived : public Base
{
    int aMemberThatMakesMeBiggerThanBase;
    Derived() { std::cout << "Constructing a derived." << std::endl; }
    Derived(const Derived&) : Base() { std::cout << "Copying a derived." << std::endl; }
    ~Derived() { std::cout << "Destroying a derived." << std::endl; }
};

int SomeThirdPartyMethod(Base /* SomeBase */)
{
    return 42;
}

int main()
{
    Derived derivedObject;
    {
        // Scope to show the copy behavior of copying a derived.
        Derived aCopy(derivedObject);
    }
    SomeThirdPartyMethod(derivedObject);
}
To offer the counter side to the general advice (which is sound when there are no particular verbosity/productivity issues evident)...
Scenario for reasonable use
There is at least one scenario where public derivation from bases without virtual destructors can be a good decision:
you want some of the type-safety and code-readability benefits provided by dedicated user-defined types (classes)
an existing base is ideal for storing the data, and allows low-level operations that client code would also want to use
you want the convenience of reusing functions supporting that base class
you understand that any additional invariants your data logically needs can only be enforced in code explicitly accessing the data as the derived type; depending on the extent to which that will "naturally" happen in your design, and how much you can trust client code to understand and cooperate with the logically ideal invariants, you may want member functions of the derived class to re-verify expectations (and throw, or whatever)
the derived class adds some highly type-specific convenience functions operating over the data, such as custom searches, data filtering / modifications, streaming, statistical analysis, (alternative) iterators
coupling of client code to the base is more appropriate than coupling to the derived class (as the base is either stable or changes to it reflect improvements to functionality also core to the derived class)
put another way: you want the derived class to continue to expose the same API as the base class, even if that means the client code is forced to change, rather than insulating it in some way that allows the base and derived APIs to grow out of sync
you're not going to be mixing pointers to base and derived objects in parts of the code responsible for deleting them
This may sound quite restrictive, but there are plenty of cases in real world programs matching this scenario.
Background discussion: relative merits
Programming is about compromises. Before you write a more conceptually "correct" program:
consider whether it requires added complexity and code that obfuscates the real program logic, and is therefore more error prone overall despite handling one specific issue more robustly,
weigh the practical costs against the probability and consequences of issues, and
consider "return on investment" and what else you could be doing with your time.
If the potential problems involve usage of the objects that you just can't imagine anyone attempting given your insights into their accessibility, scope and nature of usage in the program, or you can generate compile-time errors for dangerous use (e.g. an assertion that derived class size matches the base's, which would prevent adding new data members), then anything else may be premature over-engineering. Take the easy win in clean, intuitive, concise design and code.
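For instance, the size assertion just mentioned could be expressed with a C++11 static_assert - a sketch, using the Issue_Id example that appears further down:
#include <string>

class Issue_Id : public std::string
{
    // handy stuff, but deliberately no extra data members
};

static_assert(sizeof(Issue_Id) == sizeof(std::string),
              "derived class must not add data members");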
Reasons to consider derivation sans virtual destructor
Say you have a class D publicly derived from B. With no effort, the operations on B are possible on D (with the exception of construction; but even if there are a lot of constructors, you can often provide effective forwarding by having one template for each distinct number of constructor arguments: e.g. template <typename T1, typename T2> D(const T1& t1, const T2& t2) : B(t1, t2) { }. A better generalised solution exists in C++0x's variadic templates.)
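A sketch of that variadic forwarding constructor in C++0x/C++11 syntax (B's constructor signature here is just an example):
#include <utility>

struct B
{
    B(int, double) {}
};

struct D : B
{
    // Forward any set of constructor arguments to B.
    template <typename... Args>
    D(Args&&... args) : B(std::forward<Args>(args)...) {}
};

int main()
{
    D d(42, 3.14); // forwards to B(int, double)
}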
Further, if B changes then by default D exposes those changes - staying in sync - but someone may need to review extended functionality introduced in D to see if it remains valid, and the client usage.
Rephrasing this: there is reduced explicit coupling between base and derived class, but increased coupling between base and client.
This is often NOT what you want, but sometimes it is ideal, and other times a non issue (see next paragraph). Changes to the base force more client code changes in places distributed throughout the code base, and sometimes the people changing the base may not even have access to the client code to review or update it correspondingly. Sometimes it is better though: if you as the derived class provider - the "man in the middle" - want base class changes to feed through to clients, and you generally want clients to be able - sometimes forced - to update their code when the base class changes without you needing to be constantly involved, then public derivation may be ideal. This is common when your class is not so much an independent entity in its own right, but a thin value-add to the base.
Other times the base class interface is so stable that the coupling may be deemed a non issue. This is especially true of classes like Standard containers.
In summary, public derivation is a quick way to get or approximate the ideal, familiar base class interface for the derived class - in a way that's concise and self-evidently correct to both the maintainer and client coder - with additional functionality available as member functions (which IMHO - differing obviously with Sutter, Alexandrescu etc. - can aid usability, readability and assist productivity-enhancing tools including IDEs).
C++ Coding Standards - Sutter & Alexandrescu - cons examined
Item 35 of C++ Coding Standards lists issues with the scenario of deriving from std::string. As scenarios go, it's good that it illustrates the burden of exposing a large but useful API, but both good and bad as the base API is remarkably stable - being part of the Standard Library. A stable base is a common situation, but no more common than a volatile one and a good analysis should relate to both cases. While considering the book's list of issues, I'll specifically contrast the issues' applicability to the cases of say:
a) class Issue_Id : public std::string { ...handy stuff... }; <-- public derivation, our controversial usage
b) class Issue_Id : public string_with_virtual_destructor { ...handy stuff... }; <-- safer OO derivation
c) class Issue_Id { public: ...handy stuff... private: std::string id_; }; <-- a compositional approach
d) using std::string everywhere, with freestanding support functions
(Hopefully we can agree the composition is acceptable practice, as it provides encapsulation, type safety as well as a potentially enriched API over and above that of std::string.)
So, say you're writing some new code and start thinking about the conceptual entities in an OO sense. Maybe in a bug tracking system (I'm thinking of JIRA), one of them is say an Issue_Id. Data content is textual - consisting of an alphabetic project id, a hyphen, and an incrementing issue number: e.g. "MYAPP-1234". Issue ids can be stored in a std::string, and there will be lots of fiddly little text searches and manipulation operations needed on issue ids - a large subset of those already provided on std::string and a few more for good measure (e.g. getting the project id component, providing the next possible issue id (MYAPP-1235)).
On to Sutter and Alexandrescu's list of issues...
Nonmember functions work well within existing code that already manipulates strings. If instead you supply a super_string, you force changes through your code base to change types and function signatures to super_string.
The fundamental mistake with this claim (and most of the ones below) is that it promotes the convenience of using only a few types while ignoring the benefits of type safety. It's expressing a preference for d) above, rather than insight into c) or b) as alternatives to a). The art of programming involves balancing the pros and cons of distinct types to achieve reasonable reuse, performance, convenience and safety. The paragraphs below elaborate on this.
Using public derivation, the existing code can implicitly access the base class string as a string, and continue to behave as it always has. There's no specific reason to think that the existing code would want to use any additional functionality from super_string (in our case Issue_Id)... in fact it's often lower-level support code pre-existing the application for which you're creating the super_string, and therefore oblivious to the needs provided for by the extended functions. For example, say there's a non-member function to_upper(std::string&, std::string::size_type from, std::string::size_type to) - it could still be applied to an Issue_Id.
So, unless the non-member support function is being cleaned up or extended at the deliberate cost of tightly coupling it to the new code, then it needn't be touched. If it is being overhauled to support issue ids (for example, using the insight into the data content format to upper-case only leading alpha characters), then it's probably a good thing to ensure it really is being passed an Issue_Id by creating an overload ala to_upper(Issue_Id&) and sticking to either the derivation or compositional approaches allowing type safety. Whether super_string or composition is used makes no difference to effort or maintainability. A to_upper_leading_alpha_only(std::string&) reusable free-standing support function isn't likely to be of much use - I can't recall the last time I wanted such a function.
The impulse to use std::string everywhere isn't qualitatively different to accepting all your arguments as containers of variants or void*s so you don't have to change your interfaces to accept arbitrary data, but it makes for error prone implementation and less self-documenting and compiler-verifiable code.
Interface functions that take a string now need to: a) stay away from super_string's added functionality (unuseful); b) copy their argument to a super_string (wasteful); or c) cast the string reference to a super_string reference (awkward and potentially illegal).
This seems to be revisiting the first point - old code that needs to be refactored to use the new functionality, albeit this time client code rather than support code. If the function wants to start treating its argument as an entity for which the new operations are relevant, then it should start taking its arguments as that type, and the clients should generate them and accept them using that type. The exact same issues exist for composition. Otherwise, c) can be practical and safe if the guidelines I list below are followed, though it is ugly.
super_string's member functions don't have any more access to string's internals than nonmember functions because string probably doesn't have protected members (remember, it wasn't meant to be derived from in the first place)
True, but sometimes that's a good thing. A lot of base classes have no protected data. The public string interface is all that's needed to manipulate the contents, and useful functionality (e.g. get_project_id() postulated above) can be elegantly expressed in terms of those operations. Conceptually, many times I've derived from Standard containers, I've wanted not to extend or customise their functionality along the existing lines - they're already "perfect" containers - rather I've wanted to add another dimension of behaviour that's specific to my application, and requires no private access. It's because they're already good containers that they're good to reuse.
If super_string hides some of string's functions (and redefining a nonvirtual function in a derived class is not overriding, it's just hiding), that could cause widespread confusion in code that manipulates strings that started their life converted automatically from super_strings.
True for composition too - and more likely to happen, as the code doesn't default to passing things through and hence staying in sync - and also true in some situations with run-time polymorphic hierarchies. Same-named functions that behave differently in classes that initially appear interchangeable - just nasty. This is effectively the usual caution for correct OO programming, and again not a sufficient reason to abandon the benefits of type safety etc.
What if super_string wants to inherit from string to add more state [explanation of slicing]
Agreed - not a good situation, and somewhere I personally tend to draw the line as it often moves the problems of deletion through a pointer to base from the realm of theory to the very practical - destructors aren't invoked for additional members. Still, slicing can often do what's wanted - given the approach of deriving super_string not to change its inherited functionality, but to add another "dimension" of application-specific functionality....
Admittedly, it's tedious to have to write passthrough functions for the member functions you want to keep, but such an implementation is vastly better and safer than using public or nonpublic inheritance.
Well, certainly agree about the tedium....
Guidelines for successful derivation sans virtual destructor
ideally, avoid adding data members in derived class: variants of slicing can accidentally remove data members, corrupt them, fail to initialise them...
even more so - avoid non-POD data members: deletion via base-class pointer is technically undefined behaviour anyway, but with non-POD types failing to run their destructors is more likely to have non-theoretical problems with resource leaks, bad reference counts etc.
honour the Liskov Substitution Principle / you can't robustly maintain new invariants
for example, in deriving from std::string you can't intercept a few functions and expect your objects to remain uppercase: any code that accesses them via a std::string& or std::string* can use std::string's original function implementations to change the value
derive to model a higher level entity in your application, to extend the inherited functionality with some functionality that uses but doesn't conflict with the base; do not expect or try to change the basic operations - and access to those operations - granted by the base type
be aware of the coupling: base class can't be removed without affecting client code even if the base class evolves to have inappropriate functionality, i.e. your derived class's usability depends on the ongoing appropriateness of the base
sometimes even if you use composition you'll need to expose the data member due to performance, thread safety issues or lack of value semantics - so the loss of encapsulation from public derivation isn't tangibly worse
the more likely people using the potentially-derived class will be unaware of its implementation compromises, the less you can afford to make them dangerous
therefore, low-level widely deployed libraries with many ad-hoc casual users should be more wary of dangerous derivation than localised use by programmers routinely using the functionality at application level and/or in "private" implementation / libraries
Summary
Such derivation is not without issues so don't consider it unless the end result justifies the means. That said, I flatly reject any claim that this can't be used safely and appropriately in particular cases - it's just a matter of where to draw the line.
Personal experience
I do sometimes derive from std::map<>, std::vector<>, std::string etc. - I've never been burnt by the slicing or delete-via-base-class-pointer issues, and I've saved a lot of time and energy for more important things. I don't store such objects in heterogeneous polymorphic containers. But you need to consider whether all the programmers using the object are aware of the issues and likely to program accordingly. I personally like to write my code to use the heap and run-time polymorphism only when needed, while some people (due to Java backgrounds, their preferred approach to managing recompilation dependencies or switching between runtime behaviours, testing facilities etc.) use them habitually and therefore need to be more concerned about safe operations via base class pointers.
If you really want to derive from it (not discussing why you want to do it), I think you can prevent direct heap instantiation of the derived class by making its operator new private:
class StringDerived : public std::string {
    //...
private:
    static void* operator new(size_t size);
    static void operator delete(void *ptr);
};
But this way you restrict yourself from any dynamic StringDerived objects.
Not only is the destructor not virtual, std::string contains no virtual functions at all, and no protected members. That makes it very hard for the derived class to modify its functionality.
Then why would you derive from it?
Another problem with being non-polymorphic is that if you pass your derived class to a function expecting a string parameter, your extra functionality will just be sliced off and the object will be seen as a plain string again.
Why should one not derive from c++ std string class?
Because it is not necessary. If you want to use DerivedString for functionality extension, I don't see any problem in deriving from std::string. The only thing is, you should not interact between the two classes (i.e. don't use string as a receiver for DerivedString).
Is there any way to prevent client from doing Base* p = new Derived()
Yes. Make sure that you provide inline wrappers around Base methods inside Derived class. e.g.
class Derived : protected Base { // 'protected' to avoid Base* p = new Derived
public:
    const char* c_str() const { return Base::c_str(); }
    //...
};
There are two simple reasons for not deriving from a non-polymorphic class:
Technical: it introduces slicing bugs (because in C++ we pass by value unless otherwise specified)
Functional: if it is non-polymorphic, you can achieve the same effect with composition and some function forwarding
If you wish to add new functionalities to std::string, then first consider using free functions (possibly templates), like the Boost String Algorithm library does.
If you wish to add new data members, then properly wrap the class access by embedding it (Composition) inside a class of your own design.
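A minimal sketch of that compositional wrapping, reusing the Issue_Id idea from the earlier answer (the member names here are illustrative):
#include <string>

class IssueId
{
public:
    explicit IssueId(const std::string& s) : id_(s) {}

    // Forward only the pieces of std::string's API you want to expose.
    const char* c_str() const { return id_.c_str(); }

    // Plus the type-specific additions.
    std::string project() const { return id_.substr(0, id_.find('-')); }

private:
    std::string id_;
};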
EDIT:
@Tony rightly noticed that the Functional reason I cited was probably meaningless to most people. There is a simple rule of thumb in good design that says that when you can pick a solution among several, you should consider the one with the weakest coupling. Composition has weaker coupling than Inheritance, and thus should be preferred when possible.
Also, composition gives you the opportunity to nicely wrap the original class's methods. This is not possible if you pick (public) inheritance and the methods are not virtual (which is the case here).
The C++ standard states that if the base class destructor is not virtual and you delete a derived-class object through a pointer to the base class, the behavior is undefined.
C++ standard section 5.3.5/3:
if the static type of the operand is different from its dynamic type, the static type shall be a base class of the operand’s dynamic type and the static type shall have a virtual destructor or the behavior is undefined.
To be clear on non-polymorphic classes and the need for virtual destructors:
The purpose of making a destructor virtual is to facilitate the polymorphic deletion of objects through a delete-expression. If there is no polymorphic deletion of objects, then you don't need virtual destructors.
Why not to derive from String Class?
One should generally avoid deriving from any standard container class for the very reason that they don't have virtual destructors, which makes it impossible to delete objects polymorphically.
As for the string class, the string class doesn't have any virtual functions so there is nothing that you can possibly override. The best you can do is hide something.
If at all you want to have a string like functionality you should write a class of your own rather than inherit from std::string.
As soon as you add any member (variable) to your derived std::string class, won't you systematically corrupt the stack if you attempt to use the std goodies with an instance of your derived class? Because the stdc++ functions/members have their offsets fixed [and adjusted] to the size/boundary of the (base std::string) instance size.
Right?
Please correct me if I am wrong.

C++ checking the type of reference

Is it bad design to check if an object is of a particular type by having some sort of ID data member in it?
class A
{
private:
    bool m_isStub; // renamed so it doesn't collide with the accessor
public:
    A(bool isStubVal) : m_isStub(isStubVal) {}
    bool isStub() const { return m_isStub; }
};
class A1 : public A
{
public:
    A1() : A(false) {}
};
class AStub : public A
{
public:
    AStub() : A(true) {}
};
EDIT 1:
Problem is, A holds a lot of virtual functions which A1 doesn't override but the stub needs to, to indicate that you are working on a stub instead of an actual object. Maintainability is the question here: for every function that I add to A, I need to override it in the stub. Forgetting to do so means dangerous behaviour, as A's virtual function gets executed with the stub's data. Sure, I could add an abstract class ABase and let A and AStub inherit from it, but the design has become too rigid to allow this refactor.
A reference holder to A is held in another class B. B is initialized with the stub reference, but later, depending on some conditions, the reference holder in B is reinitialized with A1, A2, etc. So when I do BObj.GetA(), I can check in GetA() whether the holder holds a stub and raise an error in that case. Not doing that check means I would have to override all functions of A in AStub with the appropriate error conditions.
Generally, yes. You're half OO, half procedural.
What are you going to do once you determine the object type? You probably should put that behavior in the object itself (perhaps in a virtual function), and have different derived classes implement that behavior differently. Then you have no reason to check the object type at all.
In your specific example you have a "stub" class. Instead of doing...
if (!stub)
{
    doSomething();
}
just call
object->DoSomething();
and have the implementation in AStub be empty.
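A sketch of that suggestion (DoSomething is a hypothetical example method):
class A
{
public:
    virtual ~A() {}
    virtual void DoSomething() = 0;
};
class A1 : public A
{
public:
    void DoSomething() { /* the real work */ }
};
class AStub : public A
{
public:
    void DoSomething() { /* deliberately empty */ }
};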
Generally, yes. Usually you want not to query the object, but to expect it to BEHAVE the proper way. What you suggest is basically primitive RTTI, and this is generally frowned upon unless there are no better options.
The OO way would be to Stub the functionality, not check for it. However, in the case of a lot of functions to "stub" this may not seem optimal.
Hence, this depends on what you want the class to really do.
Also note that in this case you don't waste space:
class A
{
public:
    virtual bool isStub() = 0;
};
class A1 : public A
{
public:
    virtual bool isStub() { return false; }
};
class AStub : public A
{
public:
    virtual bool isStub() { return true; }
};
... buuut you have a virtual function - which usually is not a problem, unless it's a performance bottleneck.
If you want to find out the type of an object at runtime, you can use dynamic_cast. You must have a pointer or reference to the object, and then check the result of the dynamic_cast. If it is not NULL, then the object is of the correct type.
With polymorphic classes you can use the typeid operator to perform RTTI. Most of the time you shouldn't need to. Without polymorphism, there's no language facility to do so, but you should need it even less often.
One caveat: obviously, your type is going to be determined at construction time. If your determination of 'type' is a dynamic quantity, you can't solve this problem with the C++ type system; in that case you need to have some function. But in this case it is better to use the overridable/dynamic behavior as Terry suggested.
Can you provide some better information as what you are trying to accomplish?
This sort of thing is fine. It's generally better to put functionality in the object, so that there's no need to switch on type -- this makes the calling code simpler and localises future changes -- but there's a lot to be said for being able to check the types.
There will always be exceptions to the general case, even with the best will in the world, and being able to quickly check for the odd specific case can make the difference between having something fixed by one change in one place - a quick project-specific hack in the project-specific code - and having to make more invasive, wide-reaching changes (extra functions in the base class at the very least), possibly pushing project-specific concerns into shared or framework code.
For a quick solution to the problem, use dynamic_cast. As others have noted, this lets one check that an object is of a given type -- or a type derived from that (an improvement over the straightforward "check IDs" approach). For example:
bool IsStub( const A &a ) {
    return bool( dynamic_cast< const AStub * >( &a ) );
}
This requires no setup, and without any effort on one's part the results will be correct. It is also template-friendly in a very straightforward and obvious manner.
Two other approaches may also suit.
If the set of derived types is fixed, or there are a set of derived types that get commonly used, one might have some functions on the base class that will perform the cast. The base class implementations return NULL:
class A {
public:
    virtual AStub *AsStub() { return NULL; }
    virtual OtherDerivedClass *AsOtherDerivedClass() { return NULL; }
};
Then override as appropriate, for example:
class AStub : public A {
public:
    AStub *AsStub() { return this; }
};
Again, this allows one to have objects of a derived type treated as if they were their base type -- or not, if that would be preferable. A further advantage of this is that one need not necessarily return this, but could return a pointer to some other object (a member variable perhaps). This allows a given derived class to provide multiple views of itself, or perhaps change its role at runtime.
This approach is not especially template friendly, though. It would require a bit of work, with the result either being a bit more verbose or using constructs with which not everybody is familiar.
Another approach is to reify the object type. Have an actual object that represents the type, that can be retrieved by both a virtual function and a static function. For simple type checking, this is not much better than dynamic_cast, but the cost is more predictable across a wide range of compilers, and the opportunities for storing useful data (proper class name, reflection information, navigable class hierarchy information, etc.) are much greater.
This requires a bit of infrastructure (a couple of macros, at least) to make it easy to add the virtual functions and maintain the hierarchy data, but it provides good results. Even if this is only used to store class names that are guaranteed to be useful, and to check for types, it'll pay for itself.
With all this in place, checking for a particular type of object might then go something like this example:
bool IsStub( const A &a ) {
    return a.GetObjectType().IsDerivedFrom( AStub::GetClassType() );
}
(IsDerivedFrom might be table-driven, or it could simply loop through the hierarchy data. Either of these may or may not be more efficient than dynamic_cast, but the approximate runtime cost is at least predictable.)
As with dynamic_cast, this approach is also obviously amenable to automation with templates.
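A minimal sketch of such reified type objects, matching the GetObjectType()/GetClassType()/IsDerivedFrom() usage above (all names are hypothetical, and real infrastructure would wrap this in macros):
struct ClassType
{
    const char* name;        // proper class name, useful on its own
    const ClassType* parent; // null at the root of the hierarchy

    bool IsDerivedFrom(const ClassType& base) const
    {
        // Simple loop through the hierarchy data; could be table-driven.
        for (const ClassType* t = this; t != 0; t = t->parent)
            if (t == &base)
                return true;
        return false;
    }
};

class A
{
public:
    virtual ~A() {}
    static const ClassType& GetClassType()
    {
        static const ClassType type = { "A", 0 };
        return type;
    }
    virtual const ClassType& GetObjectType() const { return GetClassType(); }
};

class AStub : public A
{
public:
    static const ClassType& GetClassType()
    {
        static const ClassType type = { "AStub", &A::GetClassType() };
        return type;
    }
    const ClassType& GetObjectType() const { return GetClassType(); }
};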
In the general case it might not be a good design, but in some specific cases it is a reasonable design choice to provide an isStub() method for the use of a specific client that would otherwise need to use RTTI. One such case is lazy loading:
class LoadingProxy : public IInterface
{
private:
    IInterface* m_delegate;
    IInterface* loadDelegate();
public:
    LoadingProxy(IInterface* delegate) : m_delegate(delegate) {}
    int useMe()
    {
        if (m_delegate->isStub())
        {
            m_delegate = loadDelegate();
        }
        return m_delegate->useMe();
    }
};
The problem with RTTI is that it is relatively expensive (slow) compared with a virtual method call, so if your useMe() function is simple/quick, RTTI dominates the performance. On one application that I worked on, using RTTI tests to determine whether lazy loading was needed was one of the performance bottlenecks identified by profiling.
However, as many other answers have said, the application code should not need to worry about whether it has a stub or a usable instance. The test should be in one place/layer in the application. Unless you might need multiple LoadingProxy implementations there might be a case for making isStub() a friend function.

Pimpl idiom vs Pure virtual class interface

I was wondering what would make a programmer to choose either Pimpl idiom or pure virtual class and inheritance.
I understand that the pimpl idiom comes with one explicit extra indirection for each public method, plus the object creation overhead.
The pure virtual class, on the other hand, comes with an implicit indirection (vtable) for the inheriting implementation and, as I understand it, no object creation overhead.
EDIT: But you'd need a factory if you create the object from the outside.
What makes the pure virtual class less desirable than the pimpl idiom?
When writing a C++ class, it's appropriate to think about whether it's going to be
A Value Type
Copy by value; identity is never important. It's appropriate for it to be a key in a std::map. Examples: a "string" class, a "date" class, or a "complex number" class. To "copy" instances of such a class makes sense.
An Entity type
Identity is important. Always passed by reference, never by "value". Often it doesn't make sense to "copy" instances of the class at all; when it does, a polymorphic "Clone" method is usually more appropriate. Examples: a Socket class, a Database class, a "policy" class, anything that would be a "closure" in a functional language.
Both pImpl and pure abstract base class are techniques to reduce compile time dependencies.
However, I only ever use pImpl to implement Value types (type 1), and only sometimes when I really want to minimize coupling and compile-time dependencies. Often, it's not worth the bother. As you rightly point out, there's more syntactic overhead because you have to write forwarding methods for all of the public methods. For type 2 classes, I always use a pure abstract base class with associated factory method(s).
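A sketch of that abstract-base-plus-factory pattern for an entity type (Socket and its members here are hypothetical):
#include <cstddef>
#include <memory>

class Socket
{
public:
    virtual ~Socket() {}
    Socket(const Socket&) = delete;            // entity types are not copied
    Socket& operator=(const Socket&) = delete;
    virtual void send(const char* data, std::size_t len) = 0;

    // Associated factory method; the concrete class lives in the library.
    static std::unique_ptr<Socket> create();
protected:
    Socket() {}
};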
Pointer to implementation is usually about hiding structural implementation details. Interfaces are about instancing different implementations. They really serve two different purposes.
The pimpl idiom helps you reduce build dependencies and times, especially in large applications, and minimizes header exposure of the implementation details of your class to one compilation unit. The users of your class should not even need to be aware of the existence of a pimpl (except as a cryptic pointer to which they are not privy!).
Abstract classes (pure virtuals) are something of which your clients must be aware: if you try to use them to reduce coupling and circular references, you need to add some way of allowing them to create your objects (e.g. through factory methods or classes, dependency injection or other mechanisms).
I was searching for an answer to the same question.
After reading some articles and doing some practice, I prefer using "pure virtual class interfaces".
They are more straightforward (this is a subjective opinion). The pimpl idiom makes me feel I'm writing code "for the compiler", not for the "next developer" that will read my code.
Some testing frameworks have direct support for Mocking pure virtual classes
It's true that you need a factory to be accessible from the outside.
But if you want to leverage polymorphism, that's also a "pro", not a "con" - and a simple factory method does not really hurt so much.
The only drawback (I'm trying to investigate this) is that the pimpl idiom could be faster:
when the proxy calls are inlined, inheriting necessarily needs an extra access to the object's vtable at runtime;
the memory footprint of the pimpl public-proxy class is smaller (you can easily do optimizations for faster swaps and other similar optimizations).
I hate pimples! They make the class ugly and unreadable. All methods are redirected to the pimpl. You never see in the header what functionality the class has, so you cannot refactor it (e.g. simply change the visibility of a method). The class feels "pregnant". I think using interfaces is better, and they are really enough to hide the implementation from the client. You can even let one class implement several interfaces to keep them thin. One should prefer interfaces!
Note: You do not necessarily need the factory class. What matters is that the class's clients communicate with its instances via the appropriate interface.
The hiding of private methods I find a strange paranoia, and I do not see a reason for it since we have interfaces.
There's a very real problem with shared libraries that the pimpl idiom circumvents neatly and pure virtuals can't: you cannot safely modify or remove data members of a class without forcing users of the class to recompile their code. That may be acceptable under some circumstances, but not e.g. for system libraries.
To explain the problem in detail, consider the following code in your shared library/header:
// header
struct A
{
public:
    A();
    // more public interface, some of which uses the int below
private:
    int a;
};

// library
A::A()
    : a(0)
{}
The compiler emits code in the shared library that calculates the address of the integer to be initialized to be a certain offset (probably zero in this case, because it's the only member) from the pointer to the A object it knows to be this.
On the user side of the code, a new A will first allocate sizeof(A) bytes of memory, then hand a pointer to that memory to the A::A() constructor as this.
If in a later revision of your library you decide to drop the integer, make it larger, smaller, or add members, there'll be a mismatch between the amount of memory user's code allocates, and the offsets the constructor code expects. The likely result is a crash, if you're lucky - if you're less lucky, your software behaves oddly.
By pimpl'ing, you can safely add and remove data members to the inner class, as the memory allocation and constructor call happen in the shared library:
// header
struct A
{
public:
    A();
    // more public interface, all of which delegates to the impl
private:
    void * impl;
};

// library
A::A()
    : impl(new A_impl())
{}
All you need to do now is keep your public interface free of data members other than the pointer to the implementation object, and you're safe from this class of errors.
Edit: I should maybe add that the only reason I'm talking about the constructor here is that I didn't want to provide more code - the same argumentation applies to all functions that access data members.
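As an aside, the same header is often written with a forward-declared implementation class instead of void*, which keeps the pointer typed without exposing any more detail - a sketch:
// header
struct A_impl; // defined only inside the library
struct A
{
public:
    A();
    ~A(); // must be defined in the library, where A_impl is complete
private:
    A_impl* impl;
};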
We must not forget that inheritance is a stronger, closer coupling than delegation. I would also take into account all the issues raised in the answers given when deciding what design idioms to employ in solving a particular problem.
Although broadly covered in the other answers maybe I can be a bit more explicit about one benefit of pimpl over virtual base classes:
A pimpl approach is transparent from the user's point of view, meaning you can e.g. create objects of the class on the stack and use them directly in containers. If you try to hide the implementation using an abstract virtual base class, you will need to return a shared pointer to the base class from a factory, complicating its use. Consider the following equivalent client code:
// Pimpl
Object pi_obj(10);
std::cout << pi_obj.SomeFun1();
std::vector<Object> objs;
objs.emplace_back(3);
objs.emplace_back(4);
objs.emplace_back(5);
for (auto& o : objs)
std::cout << o.SomeFun1();
// Abstract Base Class
auto abc_obj = ObjectABC::CreateObject(20);
std::cout << abc_obj->SomeFun1();
std::vector<std::shared_ptr<ObjectABC>> objs2;
objs2.push_back(ObjectABC::CreateObject(13));
objs2.push_back(ObjectABC::CreateObject(14));
objs2.push_back(ObjectABC::CreateObject(15));
for (auto& o : objs2)
std::cout << o->SomeFun1();
In my understanding these two things serve completely different purposes. The purpose of the pimpl idiom is basically to give you a handle to your implementation so you can do things like fast swaps for a sort.
The purpose of virtual classes is more along the lines of allowing polymorphism, i.e. you have a pointer to an object of an unknown derived type, and when you call function x you always get the right function for whatever class the base pointer actually points to.
Apples and oranges, really.
The most annoying problem with the pimpl idiom is that it makes it extremely hard to maintain and analyse existing code. So using pimpl you pay with developer time and frustration only to "reduce build dependencies and times and minimize header exposure of the implementation details". Decide yourself if it is really worth it.
Especially "build times" is a problem you can solve by better hardware or using tools like Incredibuild ( www.incredibuild.com, also already included in Visual Studio 2017 ), thus not affecting your software design. Software design should be generally independent of the way the software is built.