Polymorphism Requires Allocation - C++

Using polymorphism in C++ usually requires dynamic allocation, use of the factory pattern, etc. Is that not a true statement? Sure, I can instantiate a derived type on the stack if I really try, but is that everyday code or an academic exercise?
Some other object-oriented languages allocate every user-defined type on the heap. However, any allocation in C++ is likely to raise performance debates with your peers.
How, then, are you to use polymorphism while keeping allocation to a minimum?
Also, are we really writing real-world code if we use polymorphism without any dynamic allocation? Are we to forget we ever learned the factory pattern?
Edit:
It seems to me in this Q&A that we have identified a difference between scenarios where the type is known at compile time and those where it isn't.
It has been brought up that the use of streams is an example of polymorphism without dynamic allocation. However, when you use streams, you know the type you need as you are typing out your program.
On the other hand, there are the scenarios where you don't know the type at compile time. This is where I reach (and have been taught to reach) for the factory pattern. Given some input, decide what concrete type to instantiate. I don't see any alternative to this. Is there one?
--
Let's try a scenario that came up in real-world code.
Let us assume a stock trading system.
Your job is to store orders that arrive over the network from customers.
Your input is JSON text.
Your output should be a collection of data structures representing the orders.
An order could be a vanilla stock purchase, an Option, or a Future.
You do not know what customers are ordering until you parse the JSON.
Naturally, I'd come up with a domain something like this, super simplified for purposes of example:
class Order
{
protected:
    double m_price;
    unsigned m_size;
};
class Option : public Order
{
protected:
    std::string m_expirationDate;
};

class Future : public Order
{
protected:
    std::string m_expirationDate;
};
And then I'd come up with some factory that parses the JSON and spits out an order:
class OrderFactory
{
public:
    Order * CreateOrder(const std::string & json);
};
The factory allocates. Therefore, your peers are likely to point out that it's slow in a system that receives millions of orders per second.
I suppose we could convert our domain to some C-like monstrosity:
struct Order
{
    enum OrderType
    {
        ORDER_TYPE_VANILLA,
        ORDER_TYPE_OPTION,
        ORDER_TYPE_FUTURE
    };
    OrderType m_type;
    double m_price;
    unsigned m_size;
    std::string m_expirationDate; // empty means it isn't used
    int m_callOrPut; // Encoder rings are great for code!
                     // -1 - not used
                     //  0 - Put
                     //  1 - Call
};
but then we are just ditching polymorphism, and what I think are good OO principles, altogether. We'd still, most likely, be allocating these and storing them as they came in, too. Unless we want to declare some statically sized container for them and mark elements used or unused... (yet more C)
Is there some alternative that would not allocate that I am not aware of?

Using polymorphism in C++ usually requires dynamic allocation, use of the factory pattern, etc. Is that not a true statement? Sure, I can instantiate a derived type on the stack if I really try, but is that everyday code or an academic exercise?
"Usually" is a bit meaningless; it's a fuzzy comparison on which there are no metrics to produce statistics. Is it possible to use polymorphism without dynamic allocation? Yes, trivially. Consider this case:
struct A {};
struct B : A {};

void foo(A& a) {}
void foo(A* a) {}

void bar() {
    B b;
    foo(b);  // polymorphic use through a reference
    foo(&b); // polymorphic use through a pointer
}
and no dynamic memory is used.

First off, in C++ people often distinguish between compile-time and run-time polymorphism. I take it your question is about run-time polymorphic behavior.
Yes, there are ways to enjoy polymorphism without dynamic allocation. A good example is the streams in the standard library. (A lot of people question their design, but that's beside the point.)
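For instance, here is a minimal sketch of that point, using only standard streams (write_greeting is an invented name): a concrete stream lives on the stack and is still used through its polymorphic base.

#include <fstream>
#include <iostream>
#include <ostream>

// Takes any std::ostream; the concrete stream type is reached
// through the usual virtual dispatch inside the stream classes.
void write_greeting(std::ostream& out)
{
    out << "hello\n";
}

int main()
{
    std::ofstream file("greeting.txt"); // concrete derived type, on the stack
    write_greeting(std::cout);          // one derived stream...
    write_greeting(file);               // ...and another; no new anywhere
    return 0;
}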
There are people who say that unless you have a container of (pointers to) polymorphic objects of different dynamic types, run-time polymorphism is really not needed and templates would work better - but since templates come at their own cost, sometimes run-time polymorphism is the better fit.
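For contrast, a rough sketch of the template alternative (write_greeting_t is an invented name): the concrete type is fixed at each call site, so dispatch is resolved at compile time and no vtable is involved.

// One instantiation per concrete stream type; no virtual call needed.
template <typename Stream>
void write_greeting_t(Stream& out)
{
    out << "hello\n";
}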

Related

C++ Design issues: Map with various abstract base classes

I'm facing design problems and could do with some external input. I am trying to avoid abstract base class casting (since I've heard that's bad).
The issues are down to this structure:
class entity... (base with pure virtual functions)
class hostile : public entity... (base with pure virtual functions)
class friendly : public entity... (base with pure virtual functions)
// Then further derived classes using these last base classes...
Initially I thought I'd get away with:
enum class FactionType : unsigned int
{ ... };
std::unordered_map<FactionType, std::vector<std::unique_ptr<CEntity>>> m_entitys;
And... I did, but this causes me problems because I need to access "unique" functions from, say, hostile or friendly specifically.
I have disgracefully tried the following (it worked, but it doesn't feel safe):
// for (const auto& friendly : m_entitys[FactionType::FRIENDLY])
CFriendly* castFriendly = static_cast<CFriendly*>(friendly.get());
I was hoping/trying to maintain the unordered_map design that uses FactionType as a key for the base abstract class type... Anyway, input is greatly appreciated.
If there are any syntactical errors, I apologise.
About casting I agree with @rufflewind. The casts mean different things and are useful at different times.
static_cast coerces a region of memory at compile time (the typing decision happens at compile time anyway). The sizeof(T) bytes of memory on the other end of the T* will be interpreted as a T regardless of whether that interpretation is correct.
The decisions for dynamic_cast are made entirely at runtime, sometimes requiring RTTI (Run-Time Type Information). It makes a decision at runtime and will return either a null pointer or a valid pointer to a T if one can be made.
The decision goes further than just the type of cast, though. Using a data structure to look up types and methods (member functions) imposes time costs that would not otherwise exist, compared to the relatively fast and mandatory casts. There is a way to skip the data structures, but not the casting, without major refactoring (with major refactoring you can do anything).
You can move the casts into the entity class, get them done right and just leave them encapsulated there.
class entity
{
    // Previous code
public:
    // This will be overridden in hostiles to return a valid
    // pointer, and nullptr in other types of entities.
    virtual hostile* cast_to_hostile() = 0;
    virtual const hostile* cast_to_hostile() const = 0;

    // This will be overridden in friendlies to return a valid
    // pointer, and nullptr in other types of entities.
    virtual friendly* cast_to_friendly() = 0;
    virtual const friendly* cast_to_friendly() const = 0;

    // The following helper methods are optional but can make it
    // easier to write streamlined code in calling classes.

    // Hostile and friendly each implement this to return the
    // appropriate enum member. This is useful for making
    // decisions about friendlies and hostiles.
    virtual FactionType entity_type() const = 0;

    // These two methods delegate storage of the knowledge of
    // hostility or friendliness to the derived classes. They are
    // implemented here as non-virtual functions because they
    // shouldn't need to be overridden, but could be made virtual
    // at the cost of a pointer indirection and, sometimes but not
    // often, a cache miss.
    bool is_friendly() const
    {
        return entity_type() == FactionType::FRIENDLY;
    }
    bool is_hostile() const
    {
        return entity_type() == FactionType::HOSTILE;
    }
};
This strategy is good and bad for a variety of reasons.
Pros:
It is conceptually simple. This is easy to understand quickly if you understand polymorphism.
It is superficially similar to your existing code, making migration easier. There is a reason hostility and friendliness are encoded in your types; this preserves that reason.
You can use static_casts safely because all the casts exist in the class they are used in, and therefore won't normally get called unless valid.
You can return shared_ptr or other custom smart pointers instead of raw pointers. And you probably should.
This avoids a potentially costly refactor that completely avoids casting. Casting is there to be used as a tool.
Cons:
It is conceptually simple. This does not provide a strong set of vocabulary (methods, classes and patterns) for building a smart set of tools for building advanced type mechanics.
Whether or not something is hostile should probably be a data member, or be implemented as a series of methods controlling instance behavior.
Someone might think that the pointers this returns convey ownership and delete them.
Every caller must check pointers for validity prior to use. You could add checking methods instead, but then callers must remember to call those before casting. Checks like these are surprising for users of the class and make it harder to use correctly.
It is polymorphism dense. This will perplex people who are uncomfortable with polymorphism. Even today there are many who are not comfortable with polymorphism.
A refactor that completely avoids casting is possible. Casting is dangerous and not a tool to use lightly.
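For illustration, calling code under this scheme might look like the following sketch. The map is the m_entitys from the question; FactionType::HOSTILE and attack() are assumed names for the example.

for (const auto& e : m_entitys[FactionType::HOSTILE])
{
    // The check and the cast happen in one step: cast_to_hostile()
    // returns nullptr for anything that is not hostile.
    if (hostile* h = e->cast_to_hostile())
    {
        h->attack(); // hypothetical member function
    }
}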

C++: should all member variables use accessors and mutators?

I have about 15-20 member variables which need to be accessed. I was wondering if it would be good just to let them be public instead of giving every one of them get/set functions.
The code would be something like
class A { // a singleton class
public:
    static A* get();
    B x, y, z;
    // ... a lot of other objects that should only have one copy
    // and don't change often
private:
    A();
    virtual ~A();
    static A* a;
};
I have also thought about putting the variables into an array, but I don't know the best way to do a lookup table. Would it be better to put them in an array?
EDIT:
Is there a better way than a singleton class to put them in a collection?
The C++ world isn't quite as hung up on "everything must be hidden behind accessors/mutators/whatever-they-decide-to-call-them-today" as some OO-supporting languages.
With that said, it's a bit hard to say what the best approach is, given your limited description.
If your class is simply a 'bag of data' for some other process, then using a struct instead of a class (the only difference is that all members default to public) can be appropriate.
If the class actually does something, however, you might find it more appropriate to group your get/set routines together by function/aspect or interface.
As I mentioned, it's a bit hard to tell without more information.
EDIT: Singleton classes are not smelly code in and of themselves, but you do need to be a bit careful with them. If a singleton is taking care of preference data or something similar, it only makes sense to make individual accessors for each data element.
If, on the other hand, you're storing generic input data in a singleton, it might be time to rethink the design.
You could place them in a POD structure and provide access to an object of that type:
struct VariablesHolder
{
    int a;
    float b;
    char c[20];
};

class A
{
public:
    A() : vh()
    {
    }
    VariablesHolder& Access()
    {
        return vh;
    }
    const VariablesHolder& Get() const
    {
        return vh;
    }
private:
    VariablesHolder vh;
};
No, that wouldn't be good. Imagine you want to change the way they are accessed in the future, for example removing one member variable and letting the get/set functions compute its value.
It really depends on why you want to give access to them, how likely they are to change, how much code uses them, how problematic having to rewrite or recompile that code is, how fast access needs to be, whether you need/want virtual access, what's more convenient and intuitive in the using code etc.. Wanting to give access to so many things may be a sign of poor design, or it may be 100% appropriate. Using get/set functions has much more potential benefit for volatile (unstable / possibly subject to frequent tweaks) low-level code that could be used by a large number of client apps.
Given your edit, an array makes sense if your client is likely to want to access the values in a loop, or a numeric index is inherently meaningful. For example, if they're chronologically ordered data samples, an index sounds good. More generally, arrays make it easier to provide algorithms that work with any or all of the indices - you have to consider whether that's useful to your clients; if not, try to avoid it, as it may make it easier to mistakenly access the wrong values, particularly if, say, two people branch some code, each add an extra value at the end, then try to merge their changes. Sometimes it makes sense to provide both arrays and named access, or an enum with meaningful names for indices (see the sketch below).
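A small sketch of that enum-plus-array idea, with invented names:

#include <array>
#include <cstddef>

// Named indices over an array of related values.
enum SampleIndex { FIRST_SAMPLE, SECOND_SAMPLE, THIRD_SAMPLE, SAMPLE_COUNT };

struct Samples
{
    std::array<double, SAMPLE_COUNT> values;

    // Named access for clients that want it...
    double first() const { return values[FIRST_SAMPLE]; }

    // ...while algorithms can still loop over all indices.
    double sum() const
    {
        double total = 0.0;
        for (std::size_t i = 0; i < values.size(); ++i)
            total += values[i];
        return total;
    }
};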
This is a horrible design choice, as it allows any component to modify any of these variables. Furthermore, since access to these variables is done directly, you have no way to impose any invariant on the values, and if suddenly you decide to multithread your program, you won't have a single set of functions that need to be mutex-protected, but rather you will have to go off and find every single use of every single data member and individually lock those usages. In general, one should:
Not use singletons or global variables; they introduce subtle, implicit dependencies between components that allow seemingly independent components to interfere with each other.
Make variables const wherever possible and provide setters only where absolutely required.
Never make variables public (unless you are creating a POD struct, and even then, it is best to create POD structs only as an internal implementation detail and not expose them in the API).
Also, you mentioned that you need to use an array. You can use vector<B> or vector<B*> to create a dynamically sized array of objects of type B or B*. Rather than using A::getA() to access your singleton instance, it would be better to have functions that need type A take a parameter of type const A&. This makes the dependency explicit, and it also limits which functions can modify the members of that class (pass A* or A& to functions that need to mutate it).
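A minimal sketch of that convention, with invented function names:

// Read-only use: the dependency on A is explicit, mutation is ruled out.
void render_report(const A& config);

// Mutating use: only functions taking A& (or A*) can change the object.
void apply_user_settings(A& config);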
As a convention, if you want a data structure to hold several public fields (plain old data), I would suggest using a struct (and use in tandem with other classes -- builder, flyweight, memento, and other design patterns).
Classes generally mean that you're defining an encapsulated data type, so the OOP rule is to hide data members.
In terms of efficiency, modern compilers optimize away calls to trivial accessors/mutators, so the impact on performance is typically non-existent.
In terms of extensibility, methods are definitely a win, because derived classes can override them (if virtual). Another benefit is that logic to check/observe/notify data can be added later if data is accessed via member functions.
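For instance, here is a sketch (names invented) of a setter that later grows validation and notification without touching any call site:

class Temperature
{
public:
    Temperature() : m_celsius(0.0) {}
    double celsius() const { return m_celsius; }
    void set_celsius(double c)
    {
        if (c < -273.15) c = -273.15; // invariant imposed after the fact
        m_celsius = c;
        notify_observers();           // observation hook added after the fact
    }
private:
    void notify_observers() { /* ... */ }
    double m_celsius;
};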
Public members in a base class are generally difficult to keep track of.

C++: checking the type of a reference

Is it bad design to check if an object is of a particular type by having some sort of ID data member in it?
class A
{
private:
    bool m_isStub;
public:
    A(bool isStubVal) : m_isStub(isStubVal) {}
    bool isStub() const { return m_isStub; }
};

class A1 : public A
{
public:
    A1() : A(false) {}
};

class AStub : public A
{
public:
    AStub() : A(true) {}
};
EDIT 1:
The problem is that A holds a lot of virtual functions which A1 doesn't override but the stub needs to, to indicate that you are working on a stub instead of an actual object. Maintainability is the question here: for every function that I add to A, I need to override it in the stub. Forgetting to do so means dangerous behaviour, as A's virtual function gets executed with the stub's data. Sure, I could add an abstract class ABase and let A and AStub inherit from it, but the design has become too rigid to allow this refactor.
A reference holder to A is held in another class B. B is initialized with the stub reference, but later, depending on some conditions, the reference holder in B is reinitialized with A1, A2, etc. So when I call BObj.GetA(), I can check in GetA() whether the holder is holding a stub and raise an error in that case. Not doing that check means I would have to override all functions of A in AStub with the appropriate error conditions.
Generally, yes. You're half OO, half procedural.
What are you going to do once you determine the object type? You probably should put that behavior in the object itself (perhaps in a virtual function), and have different derived classes implement that behavior differently. Then you have no reason to check the object type at all.
In your specific example you have a "stub" class. Instead of doing...
if (!stub)
{
    doSomething();
}
Just call
object->DoSomething();
and have the implementation in AStub be empty, as in the sketch below.
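In other words, a minimal sketch of pushing the behavior into the hierarchy (DoSomething is a stand-in name):

struct A
{
    virtual void DoSomething() = 0;
    virtual ~A() {}
};

struct A1 : A
{
    void DoSomething() { /* the real work */ }
};

struct AStub : A
{
    void DoSomething() {} // deliberately a no-op
};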
Generally, yes. Usually you want not to query the object but to expect it to BEHAVE the proper way. What you suggest is basically a primitive form of RTTI, and this is generally frowned upon unless there are no better options.
The OO way would be to Stub the functionality, not check for it. However, in the case of a lot of functions to "stub" this may not seem optimal.
Hence, this depends on what you want the class to really do.
Also note that in this case you don't waste space:

class A
{
public:
    virtual bool isStub() const = 0;
};

class A1 : public A
{
public:
    virtual bool isStub() const { return false; }
};

class AStub : public A
{
public:
    virtual bool isStub() const { return true; }
};

... buuut you have a virtual function -- which usually is not a problem, unless it's in a performance bottleneck.
If you want to find out the type of object at runtime you can use a dynamic_cast. You must have a pointer or reference to the object, and then check the result of the dynamic_cast. If it is not NULL, then the object is the correct type.
With polymorphic classes you can use the typeid operator to perform RTTI. Most of the time you shouldn't need to. Without polymorphism, there's no language facility to do so, but you should need that even less often.
One caveat: obviously your type is going to be determined at construction time. If your determination of 'type' is a dynamic quantity, you can't solve this problem with the C++ type system; in that case you need to have some function. But even then it is better to use the overridable/dynamic behavior as Terry suggested.
Can you provide some better information as what you are trying to accomplish?
This sort of thing is fine. It's generally better to put functionality in the object, so that there's no need to switch on type -- this makes the calling code simpler and localises future changes -- but there's a lot to be said for being able to check the types.
There will always be exceptions to the general case, even with the best will in the world, and being able to quickly check for the odd specific case can make the difference between having something fixed by one change in one place, a quick project-specific hack in the project-specific code, and having to make more invasive, wide-reaching changes (extra functions in the base class at the very least) -- possibly pushing project-specific concerns into shared or framework code.
For a quick solution to the problem, use dynamic_cast. As others have noted, this lets one check that an object is of a given type -- or a type derived from that (an improvement over the straightforward "check IDs" approach). For example:
bool IsStub( const A &a ) {
    return bool( dynamic_cast< const AStub * >( &a ) );
}
This requires no setup, and without any effort on one's part the results will be correct. It is also template-friendly in a very straightforward and obvious manner.
Two other approaches may also suit.
If the set of derived types is fixed, or there are a set of derived types that get commonly used, one might have some functions on the base class that will perform the cast. The base class implementations return NULL:
class A {
public:
    virtual AStub *AsStub() { return NULL; }
    virtual OtherDerivedClass *AsOtherDerivedClass() { return NULL; }
};
Then override as appropriate, for example:
class AStub : public A {
public:
    AStub *AsStub() { return this; }
};
Again, this allows one to have objects of a derived type treated as if they were their base type -- or not, if that would be preferable. A further advantage of this is that one need not necessarily return this, but could return a pointer to some other object (a member variable perhaps). This allows a given derived class to provide multiple views of itself, or perhaps change its role at runtime.
This approach is not especially template friendly, though. It would require a bit of work, with the result either being a bit more verbose or using constructs with which not everybody is familiar.
Another approach is to reify the object type. Have an actual object that represents the type, that can be retrieved by both a virtual function and a static function. For simple type checking, this is not much better than dynamic_cast, but the cost is more predictable across a wide range of compilers, and the opportunities for storing useful data (proper class name, reflection information, navigable class hierarchy information, etc.) are much greater.
This requires a bit of infrastructure (a couple of macros, at least) to make it easy to add the virtual functions and maintain the hierarchy data, but it provides good results. Even if this is only used to store class names that are guaranteed to be useful, and to check for types, it'll pay for itself.
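A rough sketch of that reified-type machinery, with all names invented (real versions usually hide the boilerplate behind macros):

struct ClassType
{
    const char* name;      // proper class name, always available
    const ClassType* base; // link to the parent in the hierarchy data

    bool IsDerivedFrom( const ClassType &other ) const
    {
        // Walk the hierarchy data; the cost is predictable.
        for ( const ClassType *t = this; t != 0; t = t->base )
            if ( t == &other ) return true;
        return false;
    }
};

class A
{
public:
    static const ClassType &GetClassType()
    {
        static const ClassType type = { "A", 0 };
        return type;
    }
    virtual const ClassType &GetObjectType() const { return GetClassType(); }
};

class AStub : public A
{
public:
    static const ClassType &GetClassType()
    {
        static const ClassType type = { "AStub", &A::GetClassType() };
        return type;
    }
    virtual const ClassType &GetObjectType() const { return GetClassType(); }
};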
With all this in place, checking for a particular type of object might then go something like this example:
bool IsStub( const A &a ) {
return a.GetObjectType().IsDerivedFrom( AStub::GetClassType() );
}
(IsDerivedFrom might be table-driven, or it could simply loop through the hierarchy data. Either of these may or may not be more efficient than dynamic_cast, but the approximate runtime cost is at least predictable.)
As with dynamic_cast, this approach is also obviously amenable to automation with templates.
In the general case it might not be a good design, but in some specific cases it is a reasonable design choice to provide an isStub() method for the use of a specific client that would otherwise need to use RTTI. One such case is lazy loading:
class LoadingProxy : public IInterface
{
private:
    IInterface* m_delegate;
    IInterface* loadDelegate();
public:
    LoadingProxy(IInterface* delegate) : m_delegate(delegate) {}
    int useMe()
    {
        if (m_delegate->isStub())
        {
            m_delegate = loadDelegate();
        }
        return m_delegate->useMe();
    }
};
The problem with RTTI is that it is relatively expensive (slow) compared with a virtual method call, so that if your useMe() function is simple/quick, RTTI determines the performance. On one application that I worked on, using RTTI tests to determine if lazy loading was needed was one of the performance bottlenecks identified by profiling.
However, as many other answers have said, the application code should not need to worry about whether it has a stub or a usable instance. The test should be in one place/layer in the application. Unless you might need multiple LoadingProxy implementations there might be a case for making isStub() a friend function.

Put different classes in a hierarchy in one container in C++

Sometimes we have to put different objects from the same hierarchy in one container. I read some articles saying there are tricks and traps, but I have no big picture of this question. Actually, this happens a lot in the real world.
For example, a parking lot has to contain different types of cars; a zoo has to contain different types of animals; a book store has to contain different types of books.
I remember one article saying that neither of the following is a good design, but I forgot where it was.
vector<vehicle> parking_lot;
vector<vehicle*> parking_lot;
Can anybody offer some basic rules for this kind of question?
I learned a lot writing my reply to a similar question by the same author, so I couldn't resist doing the same here. In short, I've written a benchmark to compare the following approaches to the problem of storing heterogeneous elements in a standard container:
Make a class for each type of element, have them all inherit from a common base, and store polymorphic base pointers in a std::vector<boost::shared_ptr<Base> >. This is probably the most general and flexible solution:
struct Shape {
...
};
struct Point : public Shape {
...
};
struct Circle : public Shape {
...
};
std::vector<boost::shared_ptr<Shape> > shapes;
shapes.push_back(boost::shared_ptr<Shape>(new Point(...)));
shapes.push_back(boost::shared_ptr<Shape>(new Circle(...)));
shapes.front()->draw(); // virtual call
Same as (1), but store the polymorphic pointers in a boost::ptr_vector<Base>. This is a bit less general because the elements are owned exclusively by the vector, but it should suffice most of the time. One advantage of boost::ptr_vector is that it has the interface of a std::vector<Base> (without the *), so it's simpler to use.
boost::ptr_vector<Shape> shapes;
shapes.push_back(new Point(...));
shapes.push_back(new Circle(...));
shapes.front().draw(); // virtual call
Use a C union that can contain all possible elements and then use a std::vector<UnionType>. This is not very flexible as we need to know all element types in advance (they are hard-coded into the union) and also unions are well known for not interacting nicely with other C++ constructs (for example, the stored types can't have constructors).
struct Point {
...
};
struct Circle {
...
};
struct Shape {
enum Type { PointShape, CircleShape };
Type type;
union {
Point p;
Circle c;
} data;
};
std::vector<Shape> shapes;
Shape s;
s.type = Shape::PointShape;
Point p = { 1, 2 };
s.data.p = p;
shapes.push_back(s);
if (shapes.front().type == Shape::PointShape)
    draw_point(shapes.front().data.p);
Use a boost::variant that can contain all possible elements and then use a std::vector<Variant>. Like the union, this is not very flexible, but the code to deal with it is much more elegant.
struct Point {
...
};
struct Circle {
...
};
typedef boost::variant<Point, Circle> Shape;
std::vector<Shape> shapes;
shapes.push_back(Point(1,2));
draw_visitor(shapes.front()); // use boost::static_visitor
Use boost::any (which can contain anything) and then a std::vector<boost::any>. That is very flexible, but the interface is a little clumsy and error-prone.
struct Point {
...
};
struct Circle {
...
};
typedef boost::any Shape;
std::vector<Shape> shapes;
shapes.push_back(Point(1,2));
if (shapes.front().type() == typeid(Point))
    draw_point(boost::any_cast<Point>(shapes.front()));
This is the code of the full benchmark program (doesn't run on codepad for some reason). And here are my performance results:
time with hierarchy and boost::shared_ptr: 0.491 microseconds
time with hierarchy and boost::ptr_vector: 0.249 microseconds
time with union: 0.043 microseconds
time with boost::variant: 0.043 microseconds
time with boost::any: 0.322 microseconds
My conclusions:
Use vector<shared_ptr<Base> > only if you need the flexibility provided by runtime polymorphism and if you need shared ownership. Otherwise you'll have significant overhead.
Use boost::ptr_vector<Base> if you need runtime polymorphism but don't care about shared ownership. It will be significantly faster than the shared_ptr counterpart, and the interface will be friendlier (stored elements are not presented as pointers).
Use boost::variant<A, B, C> if you don't need much flexibility (i.e. you have a small set of types which will not grow). It will be lightning fast and the code will be elegant.
Use boost::any if you need total flexibility (you want to store anything).
Don't use unions. If you really need speed then boost::variant is as fast.
Before I finish, I want to mention that a vector of std::unique_ptr will be a good option when it becomes widely available (I think it's already in VS2010); a sketch follows below.
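A sketch of how that would look (C++11; the constructor arguments are elided just as in the snippets above):

std::vector<std::unique_ptr<Shape> > shapes;
shapes.push_back(std::unique_ptr<Shape>(new Point(...)));
shapes.push_back(std::unique_ptr<Shape>(new Circle(...)));
shapes.front()->draw(); // virtual call; sole ownership, no ref-count overhead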
The problem with vector<vehicle> is that it holds only vehicle objects: anything derived gets sliced down to the base class on insertion. The problem with vector<vehicle*> is that you need to allocate and, more importantly, free the pointers appropriately.
This might be acceptable, depending on your project, etc...
However, one usually uses some kind of smart pointer in the vector (vector<boost::shared_ptr<vehicle>>, or a Qt equivalent, or one of your own) that handles deallocation but still permits storing different types of objects in the same container.
Update
Some people have, in other answers/comments, also mentioned boost::ptr_vector. That works well as a container of pointers too, and solves the memory deallocation problem by owning all the contained elements. I prefer vector<shared_ptr<T>>, as I can then store objects all over the place and move them in and out of containers without issues. It's a more generic usage model that I've found is easier for me and others to grasp, and it applies better to a larger set of problems.
The problems are:
You cannot place polymorphic objects directly into a container, since they may differ in size -- i.e., you must use a pointer.
Because of how containers work, a normal pointer or auto_ptr is not suitable.
The solution is:
Create a class hierarchy, and use at least one virtual function in your base class (if you can't think of any function to virtualize, virtualize the destructor -- which, as Neil pointed out, is generally a must).
Use boost::shared_ptr (this will be in the next C++ standard) -- a shared_ptr handles being copied around ad hoc, and containers may do this.
Build a class hierarchy and allocate your objects on the heap -- i.e., by using new. The pointer to the base class must be encapsulated by a shared_ptr.
Place the base-class shared_ptr into your container of choice.
Once you understand the "whys and hows" of the points above, look at the Boost ptr_containers -- thanks to Manuel for the tip.
Say vehicle is a base class that has certain properties; then, inheriting from it, you have say a car and a truck. Then you can just do something like:
std::vector<vehicle *> parking_lot;
parking_lot.push_back(new car(x, y));
parking_lot.push_back(new truck(x1, y1));
This would be perfectly valid, and in fact very useful sometimes. The only requirement for this type of object handling is a sane hierarchy of objects.
Another popular type of object used like that is people :) you see that in almost every programming book.
EDIT:
Of course, that vector can be packed with boost::shared_ptr or std::tr1::shared_ptr instead of raw pointers for ease of memory management. In fact, that's something I would recommend doing by all means possible; see the sketch below.
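A minimal sketch of that, reusing the car/truck example from above:

#include <vector>
#include <boost/shared_ptr.hpp>

std::vector<boost::shared_ptr<vehicle> > parking_lot;
parking_lot.push_back(boost::shared_ptr<vehicle>(new car(x, y)));
parking_lot.push_back(boost::shared_ptr<vehicle>(new truck(x1, y1)));
// No manual delete: the last shared_ptr to each vehicle frees it.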
EDIT2:
I removed a not very relevant example; here's a new one:
Say you are to implement some kind of AV scanning functionality and you have multiple scanning engines. So you implement some kind of engine-management class, say scan_manager, which can call the bool scan(...) function of those engines. Then you make an engine interface, say engine, with a virtual bool scan(...) = 0;. Then you make a few engines like my_super_engine and my_other_uber_engine, which both inherit from engine and implement scan(...). Your engine manager would, somewhere during initialization, fill a std::vector<engine *> with instances of my_super_engine and my_other_uber_engine, and use them by calling bool scan(...) on them either sequentially or based on whatever types of scanning you'd like to perform. What those engines do in scan(...) remains unknown; the only interesting bit is the bool, so the manager can use them all in the same way without any modification. A sketch follows below.
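Roughly, that might look like this sketch (the scan signature and engine internals are placeholders):

#include <cstddef>
#include <vector>

struct engine
{
    virtual bool scan(const char* path) = 0; // placeholder signature
    virtual ~engine() {}
};

struct my_super_engine : engine
{
    bool scan(const char* path) { /* engine-specific work */ return true; }
};

struct my_other_uber_engine : engine
{
    bool scan(const char* path) { /* different internals, same interface */ return true; }
};

class scan_manager
{
    std::vector<engine*> m_engines; // filled during initialization
public:
    bool scan_all(const char* path)
    {
        for (std::size_t i = 0; i < m_engines.size(); ++i)
            if (!m_engines[i]->scan(path)) return false;
        return true;
    }
};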
The same can be applied to various game units, like scary_enemies: those would be orks, drunks, and other unpleasant creatures. They all implement void attack_good_guys(...), and your evil_master would make many of them and call that method.
This is indeed common practice, and I would hardly call it bad design as long as all those types actually are related.
You can refer to this Stroustrup's answer to the question Why can't I assign a vector< Apple*> to a vector< Fruit*>?.

What are some 'good use' examples of dynamic casting?

We often hear/read that one should avoid dynamic casting. I was wondering what would be 'good use' examples of it, according to you?
Edit:
Yes, I'm aware of that other thread: it is indeed when reading one of the first answers there that I asked my question!
This recent thread ("Vector iterator not dereferencable") gives an example of where it comes in handy. There is a base Shape class and classes Circle and Rectangle derived from it. In testing for equality, it is obvious that a Circle cannot be equal to a Rectangle, and it would be a disaster to try to compare them. While iterating through a collection of pointers to Shapes, dynamic_cast does double duty, telling you if the shapes are comparable and giving you the proper objects to do the comparison on.
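As a sketch of that equality idiom (member names invented):

struct Shape
{
    virtual bool equals(const Shape& other) const = 0;
    virtual ~Shape() {}
};

struct Circle : Shape
{
    double radius;
    bool equals(const Shape& other) const
    {
        // Not a Circle? Then not equal, and no bogus comparison happens.
        const Circle* c = dynamic_cast<const Circle*>(&other);
        return c != 0 && c->radius == radius;
    }
};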
Here's something I do often. It's not pretty, but it's simple and useful.
I often work with template containers that implement an interface; imagine something like
template<class T>
class MyVector : public ContainerInterface
...
Where ContainerInterface has basic useful stuff, but that's all. If I want a specific algorithm on vectors of integers without exposing my template implementation, it is useful to accept the interface objects and dynamic_cast it down to MyVector in the implementation. Example:
// function prototype (public API, in the header file)
void ProcessVector( ContainerInterface& vecIfce );
// function implementation (private, in the .cpp file)
void ProcessVector( ContainerInterface& vecIfce)
{
    MyVector<int>& vecInt = dynamic_cast<MyVector<int>&>(vecIfce);
    // The cast throws std::bad_cast in case of error, but you could use a
    // more complex method to choose which low-level implementation
    // to use, basically rolling your own polymorphism by hand.

    // Process a vector of integers
    ...
}
I could add a Process() method to ContainerInterface that would be resolved polymorphically, which would be a nicer OOP approach, but I sometimes prefer to do it this way. When you have simple containers, a lot of algorithms, and you want to keep your implementation hidden, dynamic_cast offers an easy if ugly solution.
You could also look at double-dispatch techniques.
HTH
My current toy project uses dynamic_cast twice; once to work around the lack of multiple dispatch in C++ (it's a visitor-style system that could use multiple dispatch instead of the dynamic_casts), and once to special-case a specific subtype.
Both of these are acceptable, in my view, though the former at least stems from a language deficit. I think this may be a common situation, in fact; most dynamic_casts (and a great many "design patterns" in general) are workarounds for specific language flaws rather than something to aim for.
It can be used for a bit of run-time type-safety when exposing handles to objects through a C interface. Have all the exposed classes inherit from a common base class. When accepting a handle in a function, first cast to the base class, then dynamic_cast to the class you're expecting. If they passed in a nonsensical handle, you'll get an exception when the runtime can't find the RTTI. If they passed in a valid handle of the wrong type, you get a NULL pointer and can throw your own exception. If they passed in the correct pointer, you're good to go.
This isn't fool-proof, but it is certainly better at catching mistaken calls to the library than a straight reinterpret_cast from a handle and waiting until some data gets mysteriously corrupted because the wrong handle was passed in.
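A sketch of that handle-checking idea, with invented names, assuming every exposed object derives from a common HandleBase:

struct HandleBase { virtual ~HandleBase() {} };
struct Widget : HandleBase { void frob() {} };

// C-style API: callers pass opaque handles.
extern "C" int widget_frob(void* handle)
{
    HandleBase* base = static_cast<HandleBase*>(handle);
    // A valid handle of the wrong derived type yields a null pointer here;
    // a garbage pointer is still undefined behavior, so this catches
    // mistakes rather than malice.
    Widget* w = dynamic_cast<Widget*>(base);
    if (w == 0)
        return -1; // wrong handle type
    w->frob();
    return 0;
}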
Well, it would really be nice with extension methods in C#.
For example, let's say I have a list of objects and I want to get a list of all the ids from them. I can step through them all and pull them out, but I would like to segment out that code for reuse;
so something like
List<myObject> myObjectList = getMyObjects();
List<string> ids = myObjectList.PropertyList("id");
would be cool, except in the extension method you won't know the type that is coming in.
So
public static List<string> PropertyList(this object objList, string propName) {
var genList = (objList.GetType())objList;
}
would be awesome.
It is very useful; however, most of the time it is too useful: if the easiest way to get the job done is a dynamic_cast, it's more often than not a symptom of bad OO design, which in turn might lead to trouble in the future in unforeseen ways.