I'm trying to program a genetic algorithm for a project and am having difficulty keeping different functions separate. I've been reading up on policy-based design, and this seems like a solution to the problem, but I don't really understand how to implement it.
I've got an OptimizerHost, which inherits from a SelectionPolicy (to determine what solutions are evaluated) and a FitnessPolicy (to determine the fitness of any given solution). The problem is I can't figure out how the two policies can communicate with one another. The bulk of the algorithm is implemented in the SelectionPolicy, but it still needs to be able to check the fitness of its solutions. The only thing I can think of is to implement the SelectionPolicy algorithm in the OptimizerHost itself, so then it will inherit the things it needs from the FitnessPolicy. But that seems like its missing the point of using policies in the first place. Am I misunderstanding something?
I'm not very familiar with policy-based design principles (sorry), but when I read your problem, I felt like you need something like pure virtual classes (as interfaces) to help you through it.
The thing is, you cannot use something from the other if it's not previously declared: this is the basic rule. Thus, you need to use a virtual interface to tell SelectionPolicy that FitnessPolicy has some members to be used. Please follow the example, and change it according to your algorithm's needs.
First: create the interfaces for the SelectionPolicy and the FitnessPolicy
template <class T> class FitnessPolicyBase
{
public:
virtual int Fitness(T fitnessSet) = 0; // assuming you have implemented the required classes etc. here - return value can be different of course
...
}; // write your other FitnessPolicy stuff here
template <class T> class SelectionPolicyBase
{
public:
virtual T Selector(FitnessPolicyBase<T>& evaluator, Set<T> selectionSet) = 0; // assuming such a Set type exists here; the evaluator is taken by reference to avoid slicing
...
}; // write your other SelectionPolicy interface here
Now, since these classes are abstract (they contain nothing but pure virtual functions), we cannot instantiate them, only inherit from them. This is precisely what we'll do: the SelectionPolicy class and the FitnessPolicy class will inherit from them, respectively:
class SelectionPolicy : public SelectionPolicyBase<Solution> // say, our solutions are of Solution type...
{
public:
virtual Solution Selector(FitnessPolicyBase<Solution>& evaluator, Set<Solution> selectionSet); // return your selected item in this function
...
};
class FitnessPolicy : public FitnessPolicyBase<Solution> // again for the Solution type...
{
public:
virtual int Fitness(Solution solution); // return the fitness score here
...
};
Now, our algorithm can run with two types of parameters: SelectionPolicyBase and FitnessPolicyBase. Did we really need the xxxBase types at all? Not really; as long as we have the public interfaces of the SelectionPolicy and FitnessPolicy classes, we could use them directly, but doing it this way we have somewhat separated the `logic' from the problem.
Now, our SelectionPolicy algorithm can take references to the policy classes and then call the required function. Note here that the policy classes can call each other's methods as well. So this is a valid situation now:
Solution SelectionPolicy::Selector(FitnessPolicyBase<Solution>& evaluator, Set<Solution> selectionSet)
{
int score = evaluator.Fitness(selectionSet[0]); // assuming an array-type indexing here. Change according to your implementation and comparisons etc.
// ... compare the scores and return the selected Solution
}
Now, in order for this to work, though, you must have initialized a FitnessPolicy object and passed it to this Selector. Thanks to upcasting and virtual functions, it will work properly.
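To illustrate, here is a minimal usage sketch under the same assumptions (Solution and Set are hypothetical types, exactly as in the example above):
// Minimal wiring sketch - Solution and Set are assumed types from the example above.
FitnessPolicy fitness;        // concrete fitness evaluator
SelectionPolicy selection;    // concrete selection strategy
Set<Solution> population;     // fill with candidate solutions...
// The evaluator is passed by reference, so the virtual Fitness() call
// dispatches to the concrete FitnessPolicy implementation.
Solution best = selection.Selector(fitness, population);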
Please forgive me if I've been overcomplicating things - I've been away from C++ for a while (working in C# recently), so I might have gotten the syntax and such wrong, but the logic should be the same anyway.
I hesitate to ask this question, because it's a deceptively simple one. Except I fail to see a solution.
I recently made an attempt to write a simple program that would be somewhat oblivious to what engine renders its UI.
Everything looks great on paper, but in fact, theory did not get me far.
Assume my tool cares to have an IWindow with an IContainer that hosts an ILabel and an IButton. That's 4 UI elements. Abstracting each one of these is a trivial task. I can create each of these elements with Qt, Gtk, motif - you name it.
I understand that in order for an implementation (say, QtWindow with QtContainer) to work, the abstraction (IWindow along with IContainer) has to work, too: IWindow needs to be able to accept IContainer as its child. That requires either that
I can add any of the UI elements to container, or
all the UI elements inherit from a single parent
That is theory which perfectly solves the abstraction issue. Practice (or implementation) is a whole other story. In order to make the implementation work along with the abstraction, the way I see it I can either
pollute the abstraction with ugly calls exposing the implementation (or giving hints about it) - killing the concept of abstraction, or
add casting from the abstraction to something that the implementation understands (dynamic_cast<>()).
add a global map pool of ISomething instances to UI specific elements (map<IElement*, QtElement*>()) which would be somewhat like casting, except done by myself.
All of these look ugly. I fail to see other alternatives here - is this where data abstraction concept actually fails? Is casting the only alternative here?
Edit
I have spent some time trying to come up with optimal solution and it seems that this is something that just can't be simply done with C++. Not without casting, and not with templates as they are.
The solution that I eventually came up with (after messing a lot with interfaces and how these are defined) looks as follows:
1. There needs to be a parametrized base interface that defines the calls
The base interface (let's call it TContainerBase for Containers and TElementBase for elements) specifies methods that are expected to be implemented by containers or elements. That part is simple.
The definition would need to look something along these lines:
template <typename Parent>
class TElementBase : public Parent {
virtual void DoSomething() = 0;
};
template <typename Parent>
class TContainerBase : public Parent {
virtual void AddElement(TElementBase<Parent>* element) = 0;
};
2. There needs to be a template that specifies inheritance.
That is where the first stage of separation (engine vs UI) comes in. At this point it just wouldn't matter what type of backend is driving the rendering. And here's the interesting part: as I think about it, the only language successfully implementing this is Java. The template would have to look something along these lines:
General:
template<typename Engine>
class TContainer : public TContainerBase<Engine> {
void AddElement(TElementBase<Engine>* element) {
// ...
}
};
template<typename Engine>
class TElement : public TElementBase<Engine> {
void DoSomething() {
// ...
}
};
3. UI needs to be able to accept just TContainers or TElements
that is, it would have to ignore what these elements derive from. That's the second stage of separation; after all, everything it cares about is the TElementBase and TContainerBase interfaces. In Java that has been solved with the introduction of the wildcard (question mark). In my case, I could simply use in my UI:
TContainer<?> some_container;
TElement<?> some_element;
some_container.AddElement(&some_element);
There are no issues with the virtual function calls in the vtable, as they are exactly where the compiler would expect them to be. The only issue here would be ensuring that the template parameters are the same in both cases. Assuming the backend is a single library - that would work just fine.
The three above steps would allow me to write my code disregarding backend entirely (and safely), while backends could implement just about anything there was a need for.
I tried this approach and it turns out to be pretty sane. The only limitation was the compiler. Instantiating classes and casting them back and forth here is counter-intuitive, but, unfortunately, necessary, mostly because with template inheritance you can't extract just the base class itself, that is, you can't say any of:
class IContainerBase {};
template <typename Parent>
class TContainerBase : public (IContainerBase : public Parent) {}
nor
class IContainerBase {};
template <typename Parent>
typedef class IContainerBase : public Parent TContainerBase;
(note that in all the above solutions it feels perfectly natural and sane just to rely on TElementBase and TContainerBase - and the generated code works perfectly fine if you cast TElementBase<Foo> to TElementBase<Bar> - so it's just a language limitation).
Anyway, these final statements (typedef of class A inheriting from B and class X having base class A inheriting from B) are just rubbish in C++ (and would make the language harder than it already is), hence the only way out is to follow one of the supplied solutions, which I'm very grateful for.
Thank you for all help.
You're trying to use Object Orientation here. OO has a particular method of achieving generic code: by type erasure. The IWindow base class interface erases the exact type, which in your example would be a QtWindow. In C++ you can get back some erased type information via RTTI, i.e. dynamic_cast.
However, in C++ you can also use templates. Don't implement IWindow and QtWindow, but implement Window<Qt>. This allows you to state that Container<Foo> accepts a Window<Foo> for any possible Foo window library. The compiler will enforce this.
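A minimal sketch of that idea (Qt and Gtk here are just hypothetical backend tag types, not the real libraries):
struct Qt { /* backend-specific details */ };
struct Gtk { /* backend-specific details */ };

template <typename Backend>
class Window { /* render via Backend-specific calls */ };

template <typename Backend>
class Container {
public:
    // The compiler enforces that the child was built for the same backend.
    void add(Window<Backend>& child) { /* ... */ }
};

int main() {
    Container<Qt> container;
    Window<Qt> window;
    container.add(window);     // OK: same backend
    // Window<Gtk> other;
    // container.add(other);   // compile-time error: backend mismatch
    return 0;
}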
If I understand your question correctly, this is the kind of situation the Abstract Factory Pattern is intended to address.
The abstract factory pattern provides a way to encapsulate a group of individual factories that have a common theme without specifying their concrete classes. In normal usage, the client software creates a concrete implementation of the abstract factory and then uses the generic interface of the factory to create the concrete objects that are part of the theme. The client doesn't know (or care) which concrete objects it gets from each of these internal factories, since it uses only the generic interfaces of their products. This pattern separates the details of implementation of a set of objects from their general usage and relies on object composition, as object creation is implemented in methods exposed in the factory interface.
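As a rough sketch of how that might look for the UI case (all class names here are illustrative stand-ins, not real toolkit types):
// Illustrative abstract factory sketch; the concrete classes are stand-ins
// that, in a real program, would wrap the Qt (or Gtk) widgets.
class IButton { public: virtual ~IButton() {} virtual void draw() = 0; };
class IWindow { public: virtual ~IWindow() {} virtual void show() = 0; };

class IWidgetFactory {
public:
    virtual ~IWidgetFactory() {}
    virtual IButton* createButton() = 0;
    virtual IWindow* createWindow() = 0;
};

// Hypothetical Qt-backed implementations:
class QtButton : public IButton { public: void draw() { /* Qt calls */ } };
class QtWindow : public IWindow { public: void show() { /* Qt calls */ } };

class QtWidgetFactory : public IWidgetFactory {
public:
    IButton* createButton() { return new QtButton(); }
    IWindow* createWindow() { return new QtWindow(); }
};

// Client code only ever talks to the abstract interfaces:
void buildUi(IWidgetFactory& factory) {
    IWindow* window = factory.createWindow();
    IButton* button = factory.createButton();
    // ... wire them together, show the window, etc.
}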
Creating a wrapper capable of abstracting libraries like Qt and Gtk doesn't seem a trivial task to me. But talking more generally about your design problem, maybe you could use templates to do the mapping between the abstract interface and a specific implementation. For example:
Abstract interface IWidget.h
template<typename BackendT>
class IWidget
{
public:
void doSomething()
{
backend.doSomething();
}
private:
BackendT backend;
};
Qt implementation QtWidget.h:
#include <iostream>

class QtWidget
{
public:
void doSomething()
{
// qt specifics here
std::cout << "qt widget" << std::endl;
}
};
Gtk implementation GtkWidget.h:
#include <iostream>

class GtkWidget
{
public:
void doSomething()
{
// gtk specifics here
std::cout << "gtk widget" << std::endl;
}
};
Qt backend QtBackend.h:
#include "QtWidget.h"
// include all the other qt classes you implemented...
#include "IWidget.h"
typedef IWidget<QtWidget> Widget;
// map all the other classes...
Gtk backend GtkBackend.h:
#include "GtkWidget.h"
// include all the other gtk classes you implemented...
#include "IWidget.h"
typedef IWidget<GtkWidget> Widget;
// map all the other classes...
Application:
// Choose the backend here:
#include "QtBackend.h"
int main()
{
Widget* w = new Widget();
w->doSomething();
delete w;
return 0;
}
When implementing policies, one needs to follow a specific interface. From what I understand, the policies have to be able to replace each other. In the Modern C++ Design book (Ch. 1.5), three policies have the same interface, "T* Create()". Why is there no need to abstract it? It would be important if there were a number of interfaces that policies should have. From what I understand, an abstract class gives a recipe for which interfaces should be in the concrete classes (the policy classes). In the Wikipedia example, "using" defines which interface the policy should have, but it's not done through an abstract class. Isn't the point of an abstract class to make sure that derived classes have the required interfaces?
What am I missing?
There is a difference in that an interface using an abstract base class has virtual functions providing runtime polymorphism.
The policies are used to provide compile time polymorphism for the templates. The compiler will notice if your policy class has a T* Create() or not. If it doesn't, you will get a compile time error when trying to use it.
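A small sketch along the lines of the book's creator policies (names here are illustrative, not taken from the book verbatim):
// The host template expects its policy to provide T* Create().
template <class T, class CreationPolicy>
class WidgetManager : public CreationPolicy
{
public:
    T* makeOne() { return this->Create(); } // resolved at compile time
};

template <class T>
struct NewCreator
{
    T* Create() { return new T(); } // satisfies the implicit policy interface
};

struct BadPolicy { };  // provides no Create() at all
struct Widget { };

WidgetManager<Widget, NewCreator<Widget> > ok;   // fine
// WidgetManager<Widget, BadPolicy> bad;
// bad.makeOne(); // compile-time error: 'Create' is not a member of 'BadPolicy'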
I've never actually used policy-based design in practice and it's ages since I coded in C++, but here's my interpretation. As you've pointed out, the host class can enforce constraints on the policies used with it, either through interfaces, or through something like using output_policy::Print;, as the wiki example depicts.
An advantage (or difference) of the using method is that it's less proactively restrictive and less rigid, as policies have an implied contract which is represented directly by the code which uses them. In the using example, given the current state of the code, the output_policy implementation need only implement a method called Print which returns anything and takes whatever language_policy::Message() returns (in this case, all language_policies return a std::string). This is a little closer to duck typing.
One disadvantage is that the implied contract disappears once the code goes away. Another disadvantage is that policies have some level of dependency on each other. As a very contrived example, if one output_policy has a non-generic Print method which prints only strings, it cannot be used with a language_policy which prints only integers.
I don't see why you can't add policy interfaces if needed. One example is where the HelloWorld class might want to constrain the output_policy so that it prints strings and nothing else. You could achieve this by coding something like the below - note that you'd have to use SFINAE to enforce that output_policy<std::string> actually implements OutputPolicyInterface<std::string>.
template<typename message_type>
class OutputPolicyInterface
{
public:
virtual void Print( message_type message ) = 0;
};
template <template<class> class output_policy, typename language_policy>
class HelloWorld : public output_policy<std::string>, public language_policy
{
public:
void Run()
{
Print( Message() );
//Print(2); won't work anymore
}
};
While designing an interface for a class, I normally get caught in two minds about whether I should provide member functions which can be calculated / derived by using combinations of other member functions. For example:
class DocContainer
{
public:
Doc* getDoc(int index) const;
bool isDocSelected(Doc*) const;
int getDocCount() const;
//Should this method be here???
//This method returns the selected documents in the container (in selectedDocs_out)
void getSelectedDocs(std::vector<Doc*>& selectedDocs_out) const;
};
Should I provide this as a class member function or probably a namespace where I can define this method? Which one is preferred?
In general, you should probably prefer free functions. Think about it from an OOP perspective.
If the function does not need access to any private members, then why should it be given access to them? That's not good for encapsulation. It means more code that may potentially fail when the internals of the class are modified.
It also limits the possible amount of code reuse.
If you wrote the function as something like this:
template <typename T>
bool getSelectedDocs(T& container, std::vector<Doc*>&);
Then the same implementation of getSelectedDocs will work for any class that exposes the required functions, not just your DocContainer.
Of course, if you don't like templates, an interface could be used, and then it'd still work for any class that implemented this interface.
On the other hand, if it is a member function, then it'll only work for this particular class (and possibly derived classes).
The C++ standard library follows the same approach. Consider std::find, for example, which is made a free function for this precise reason. It doesn't need to know the internals of the class it's searching in. It just needs some implementation that fulfills its requirements. Which means that the same find() implementation can work on any container, in the standard library or elsewhere.
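For instance, a trivial illustration of that point:
#include <algorithm>
#include <list>
#include <vector>

void demo()
{
    std::vector<int> v(3, 7);
    std::list<int> l(3, 7);
    // One find() implementation works on two very different containers.
    std::vector<int>::iterator itV = std::find(v.begin(), v.end(), 7);
    std::list<int>::iterator itL = std::find(l.begin(), l.end(), 7);
}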
Scott Meyers argues for the same thing.
If you don't like it cluttering up your main namespace, you can of course put it into a separate namespace with functionality for this particular class.
I think it's fine to have getSelectedDocs as a member function. It's a perfectly reasonable operation for a DocContainer, so it makes sense as a member. Member functions should be there to make the class useful. They don't need to satisfy some sort of minimality requirement.
One disadvantage to moving it outside the class is that people will have to look in two places when they try to figure out how to use a DocContainer: they need to look in the class and also in the utility namespace.
The STL has basically aimed for small interfaces, so in your case, if and only if getSelectedDocs can be implemented more efficiently than a combination of isDocSelected and getDoc it would be implemented as a member function.
This technique may not be applicable everywhere, but it's a good rule of thumb to prevent clutter in interfaces.
I agree with the answers from Konrad and jalf. Unless there is a significant benefit from having "getSelectedDocs" then it clutters the interface of DocContainer.
Adding this member triggers my smelly code sensor. DocContainer is obviously a container so why not use iterators to scan over individual documents?
class DocContainer
{
public:
iterator begin ();
iterator end ();
// ...
bool isDocSelected (Doc *) const;
};
Then, use a functor that creates the vector of documents as it needs to:
typedef std::vector <Doc*> DocVector;
class IsDocSelected {
public:
IsDocSelected (DocContainer const & docs, DocVector & results)
: docs (docs)
, results (results)
{}
void operator()(Doc & doc) const
{
if (docs.isDocSelected (&doc))
{
results.push_back (&doc);
}
}
private:
DocContainer const & docs;
DocVector & results;
};
void foo (DocContainer & docs)
{
DocVector results;
std :: for_each (docs.begin ()
, docs.end ()
, IsDocSelected (docs, results));
}
This is a bit more verbose (at least until we have lambdas), but an advantage to this kind of approach is that the specific type of filtering is not coupled with the DocContainer class. In the future, if you need a new list of documents that are "NotSelected" there is no need to change the interface to DocContainer, you just write a new "IsDocNotSelected" class.
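For what it's worth, with C++11 lambdas the same filtering collapses to a few lines (a sketch assuming the same DocContainer interface as above):
void foo (DocContainer & docs)
{
    DocVector results;
    std::for_each (docs.begin (), docs.end (), [&](Doc & doc) {
        if (docs.isDocSelected (&doc))
        {
            results.push_back (&doc);
        }
    });
}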
The answer is probably "it depends"...
If the class is part of a public interface to a library that will be used by many different callers then there's a good argument for providing a multitude of functionality to make it easy to use, including some duplication and/or crossover. However, if the class is only being used by a single upstream caller then it probably doesn't make sense to provide multiple ways to achieve the same thing. Remember that all the code in the interface has to be tested and documented, so there is always a cost to adding that one last bit of functionality.
I think this is perfectly valid if the method:
fits in the class responsibilities
is not too specific to a small part of the class clients (like at least 20%)
This is especially true if the method contains complex logic/computation that would be more expensive to maintain in many places than only in the class.
If I want to make a class adaptable, and make it possible to select different algorithms from the outside -- what is the best implementation in C++?
I see mainly two possibilities:
Use an abstract base class and pass concrete object in
Use a template
Here is a little example, implemented in the various versions:
Version 1: Abstract base class
class Brake {
public: virtual void stopCar() = 0;
};
class BrakeWithABS : public Brake {
public: void stopCar() { ... }
};
class Car {
Brake* _brake;
public:
Car(Brake* brake) : _brake(brake) { brake->stopCar(); }
};
Version 2a: Template
template<class Brake>
class Car {
Brake brake;
public:
Car(){ brake.stopCar(); }
};
Version 2b: Template and private inheritance
template<class Brake>
class Car : private Brake {
using Brake::stopCar;
public:
Car(){ stopCar(); }
};
Coming from Java, I am naturally inclined to always use version 1, but the template versions seem to be preferred often, e.g. in STL code? If that's true, is it just because of memory efficiency etc. (no inheritance, no virtual function calls)?
I realize there is not a big difference between version 2a and 2b, see C++ FAQ.
Can you comment on these possibilities?
This depends on your goals. You can use version 1 if you
Intend to replace brakes of a car (at runtime)
Intend to pass Car around to non-template functions
I would generally prefer version 1, using runtime polymorphism, because it is still flexible and keeps Car a single type, whereas with templates Car<Opel> is a different type than Car<Nissan>. If your goal is great performance while using the brakes frequently, I recommend the templated approach. By the way, this is called policy based design. You provide a brake policy. Since you said you programmed in Java, you may not yet be too experienced with C++, so here is an example. One way of doing it:
template<typename Accelerator, typename Brakes>
class Car {
Accelerator accelerator;
Brakes brakes;
public:
void brake() {
brakes.brake();
}
};
If you have lots of policies you can group them together into their own struct, and pass that one, for example as a SpeedConfiguration collecting Accelerator, Brakes and some more. In my projects I try to keep a good deal of code template-free, allowing it to be compiled once into its own object files, without needing its code in headers, but still allowing polymorphism (via virtual functions). For example, you might want to keep common data and functions that non-template code will probably call on many occasions in a base class:
class VehicleBase {
protected:
std::string model;
std::string manufacturer;
// ...
public:
virtual ~VehicleBase() { }
virtual bool checkHealth() = 0;
};
template<typename Accelerator, typename Brakes>
class Car : public VehicleBase {
Accelerator accelerator;
Brakes brakes;
// ...
virtual bool checkHealth() { ... }
};
Incidentally, that is also the approach that C++ streams use: std::ios_base contains flags and state that do not depend on the char type or traits (openmode, format flags and so on), while std::basic_ios is a class template that inherits from it. This also reduces code bloat by sharing the code that is common to all instantiations of a class template.
Private Inheritance?
Private inheritance should be avoided in general. It is only very rarely useful, and containment is a better idea in most cases. A common case where the opposite is true is when size is really crucial (a policy-based string class, for example): the Empty Base Class Optimization can apply when deriving from an empty policy class (one containing only functions).
Read Uses and abuses of Inheritance by Herb Sutter.
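A quick sketch of what the Empty Base Class Optimization buys you (illustrative types; exact sizes are implementation-dependent):
#include <iostream>

struct EmptyPolicy { void apply() {} };  // stateless policy: functions only

// Containment: the empty member still occupies storage (plus padding).
struct ByContainment { EmptyPolicy p; long data; };

// Private inheritance: the empty base can be optimized away entirely.
struct ByInheritance : private EmptyPolicy { long data; };

int main()
{
    std::cout << sizeof(ByContainment) << " vs " << sizeof(ByInheritance) << '\n';
    // Typically prints something like "16 vs 8" on a 64-bit platform.
    return 0;
}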
The rule of thumb is:
1) If the choice of the concrete type is made at compile time, prefer a template. It will be safer (compile time errors vs run time errors) and probably better optimized.
2) If the choice is made at run-time (i.e. as a result of a user's action) there is really no choice - use inheritance and virtual functions.
Other options:
Use the Visitor Pattern (let external code work on your class).
Externalize some part of your class, for example via iterators, that generic iterator-based code can work on them. This works best if your object is a container of other objects.
See also the Strategy Pattern (there are c++ examples inside)
Templates are a way to let a class use a variable whose type you don't really care about. Inheritance is a way to define what a class is, based on its attributes. It's the "is-a" versus "has-a" question.
Most of your question has already been answered, but I wanted to elaborate on this bit:
Coming from Java, I am naturally inclined to always use version 1, but the template versions seem to be preferred often, e.g. in STL code? If that's true, is it just because of memory efficiency etc. (no inheritance, no virtual function calls)?
That's part of it. But another factor is the added type safety. When you treat a BrakeWithABS as a Brake, you lose type information. You no longer know that the object is actually a BrakeWithABS. If it is a template parameter, you have the exact type available, which in some cases may enable the compiler to perform better typechecking. Or it may be useful in ensuring that the correct overload of a function gets called. (If stopCar() passes the Brake object to a second function that has a separate overload for BrakeWithABS, that overload won't be called if you'd used inheritance and your BrakeWithABS had been cast to a Brake.)
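A small sketch of that overload point (illustrative types only):
#include <iostream>

struct Brake { void stopCar() {} };
struct BrakeWithABS : Brake { void stopCar() {} };

void inspect(const Brake&) { std::cout << "plain brake\n"; }
void inspect(const BrakeWithABS&) { std::cout << "ABS brake\n"; }

template <class BrakeT>
struct Car {
    BrakeT brake;
    void stop() { inspect(brake); brake.stopCar(); } // exact static type is known
};

int main()
{
    Car<BrakeWithABS> car;
    car.stop();            // prints "ABS brake"

    BrakeWithABS absBrake;
    Brake& base = absBrake;
    inspect(base);         // prints "plain brake": static type information is lost
    return 0;
}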
Another factor is that it allows more flexibility. Why do all Brake implementations have to inherit from the same base class? Does the base class actually have anything to bring to the table? If I write a class which exposes the expected member functions, isn't that good enough to act as a brake? Often, explicitly using interfaces or abstract base classes constrain your code more than necessary.
(Note, I'm not saying templates should always be the preferred solution. There are other concerns that might affect this, ranging from compilation speed to "what programmers on my team are familiar with" or just "what I prefer". And sometimes, you need runtime polymorphism, in which case the template solution simply isn't possible)
This answer is more or less correct. When you want something parametrized at compile time, you should prefer templates. When you want something parametrized at runtime, you should prefer virtual functions being overridden.
However, using templates does not preclude you from doing both (making the template version more flexible):
struct Brake {
virtual void stopCar() = 0;
};
struct BrakeChooser {
BrakeChooser(Brake *brake) : brake(brake) {}
void stopCar() { brake->stopCar(); }
Brake *brake;
};
template<class Brake>
struct Car
{
Car(Brake brake = Brake()) : brake(brake) {}
void slamTheBrakePedal() { brake.stopCar(); }
Brake brake;
};
// instantiation (assuming AntiLockBrakes derives from Brake)
Car<BrakeChooser> car(BrakeChooser(new AntiLockBrakes()));
That being said, I would probably NOT use templates for this... But its really just personal taste.
An abstract base class has the overhead of virtual calls, but it has the advantage that all derived classes really are usable through the base class. Not so when you use templates – Car<Brake> and Car<BrakeWithABS> are unrelated to each other, and you'll have to either dynamic_cast and check for null or have templates for all the code that deals with Car.
Use an interface if you need to support different Brake classes and their hierarchy at once.
Car( new Brake() )
Car( new BrakeABC() )
Car( new CoolBrake() )
And you don't know this information at compile time.
If you know which Brake you are going to use, 2b is the right choice for you to specify different Car classes. Brake in this case will be your car's "Strategy" and you can set a default one.
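A tiny sketch of the default-policy idea (StandardBrake is just an assumed default here):
struct StandardBrake { void stopCar() { /* ... */ } };

template<class Brake = StandardBrake>   // default brake "Strategy"
class Car : private Brake {
    using Brake::stopCar;
public:
    void stop() { stopCar(); }
};

// Car<> everydayCar;           // uses StandardBrake
// Car<BrakeWithABS> sportsCar; // swaps in a different policy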
I wouldn't use 2a. Instead you can add static methods to Brake and call them without an instance.
Personally I would always prefer to use interfaces over templates, for several reasons:
Template compile and link errors are sometimes cryptic
It is hard to debug code that is based on templates (at least in the Visual Studio IDE)
Templates can make your binaries bigger.
Templates require you to put all their code in the header file, which makes the template class a bit harder to understand.
Templates are hard to maintain for novice programmers.
I only use templates when the virtual tables create some kind of overhead.
Of course, this is only my own opinion.
It looks like I had a fundamental misunderstanding about C++ :<
I like the polymorphic container solution. Thank you SO, for bringing that to my attention :)
So, we have a need to create a relatively generic container type object. It also happens to encapsulate some business related logic. However, we need to store essentially arbitrary data in this container - everything from primitive data types to complex classes.
Thus, one would immediately jump to the idea of a template class and be done with it. However, I have noticed C++ polymorphism and templates do not play well together. Given that there is some complex logic that we are going to have to work through, I would rather just stick with either templates OR polymorphism, and not try to fight C++ by making it do both.
Finally, given that I want to do one or the other, I would prefer polymorphism. I find it much easier to represent constraints like "this container contains Comparable types" - a la java.
Which brings me to the topic of the question: at the most abstract, I imagine that I could have a "Container" pure virtual interface that has something akin to "push(void* data) and pop(void* data)" (for the record, I am not actually trying to implement a stack).
However, I don't really like void* at the top level, not to mention the signature is going to change every time I want to add a constraint to the type of data a concrete container can work with.
Summarizing: We have relatively complex containers that have various ways to retrieve elements. We want to be able to vary the constraints on the elements that can go into the containers. Elements should work with multiple kinds of containers (so long as they meet the constraints of that particular container).
Edit: I should also mention that the containers themselves need to be polymorphic. That is my primary reason for not wanting to use templated C++.
So - should I drop my love for Java type interfaces and go with templates? Should I use void* and statically cast everything? Or should I go with an empty class definition "Element" that declares nothing and use that as my top level class in the "Element" hierarchy?
One of the reasons why I love Stack Overflow is that many of the responses provide some interesting insight on other approaches that I wouldn't even have considered. So thank you in advance for your insights and comments.
You can look at using a standard container of boost::any if you are storing truly arbitrary data into the container.
It sounds more like you would rather have something like a boost::ptr_container, where anything that can be stored in the container has to derive from some base type, and the container itself can only give you references to the base type.
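For the first option, a minimal sketch with boost::any (assuming Boost is available):
#include <boost/any.hpp>
#include <string>
#include <vector>

int main()
{
    std::vector<boost::any> bag;
    bag.push_back(42);                    // primitive type
    bag.push_back(std::string("hello"));  // complex type

    // Retrieval requires knowing (or testing for) the stored type:
    int n = boost::any_cast<int>(bag[0]);
    std::string s = boost::any_cast<std::string>(bag[1]);
    return 0;
}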
The simple thing is to define an abstract base class called Container, and subclass it for each kind of item you may wish to store. Then you can use any standard collection class (std::vector, std::list, etc.) to store pointers to Container. Keep in mind, that since you would be storing pointers, you would have to handle their allocation/deallocation.
However, the fact that you need a single collection to store objects of such wildly different types is an indication that something may be wrong with the design of your application. It may be better to revisit the business logic before you implement this super-generic container.
Polymorphism and templates do play very well together, if you use them correctly.
Anyway, I understand that you want to store only one type of objects in each container instance. If so, use templates. This will prevent you from storing the wrong object type by mistake.
As for container interfaces: Depending on your design, maybe you'll be able to make them templated, too, and then they'll have methods like void push(T* new_element). Think of what you'll know about the object when you want to add it to a container (of an unknown type). Where will the object come from in the first place? A function that returns void*? Do you know that it'll be Comparable? At least, if all stored object classes are defined in your code, you can make them all inherit from a common ancestor, say, Storable, and use Storable* instead of void*.
Now if you see that objects will always be added to a container by a method like void push(Storable* new_element), then really there will be no added value in making the container a template. But then you'll know it should store Storables.
Can you not have a root Container class that contains elements:
template <typename T>
class Container
{
public:
// You'll likely want to use shared_ptr<T> instead.
virtual void push(T *element) = 0;
virtual T *pop() = 0;
virtual void InvokeSomeMethodOnAllItems() = 0;
};
#include <vector>

template <typename T>
class List : public Container<T>
{
std::vector<T*> items; // simple backing store for this sketch
public:
typedef typename std::vector<T*>::iterator iterator;
iterator begin() { return items.begin(); }
iterator end() { return items.end(); }

virtual void push(T *element) { items.push_back(element); }
virtual T* pop() { T* last = items.back(); items.pop_back(); return last; }
virtual void InvokeSomeMethodOnAllItems()
{
for(iterator currItem = begin(); currItem != end(); ++currItem)
{
T* item = *currItem;
item->SomeMethod();
}
}
};
These containers can then be passed around polymorphically:
class Item
{
public:
virtual void SomeMethod() = 0;
};
class ConcreteItem : public Item
{
public:
virtual void SomeMethod()
{
// Do something
}
};
void AddItemToContainer(Container<Item> &container, Item *item)
{
container.push(item);
}
...
List<Item> listInstance;
AddItemToContainer(listInstance, new ConcreteItem());
listInstance.InvokeSomeMethodOnAllItems();
This gives you the Container interface in a type-safe generic way.
If you want to add constraints to the type of elements that can be contained, you can do something like this:
class Item
{
public:
virtual void SomeMethod() = 0;
typedef int CanBeContainedInList;
};
template <typename T>
class List : public Container<T>
{
typedef typename T::CanBeContainedInList ListGuard;
// ... as before
};
First of all, templates and polymorphism are orthogonal concepts and they do play well together. Next, why do you want a specific data structure? What about the STL or boost data structures (specifically the pointer containers) doesn't work for you?
Given your question, it sounds like you would be misusing inheritance in your situation. It's possible to create "constraints" on what goes in your containers, especially if you are using templates. Those constraints can go beyond what your compiler and linker will give you by default. It's actually more awkward to do that sort of thing with inheritance, and errors are more likely to be left for run time.
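As a small illustration of the kind of compile-time constraint meant here (this sketch uses a C++11 static_assert; in older C++ a similar effect can be had with trait tricks or the typedef guard shown in the earlier answer):
#include <type_traits>
#include <vector>

struct Comparable { virtual ~Comparable() {} /* comparison interface... */ };

// A container that only accepts element types derived from Comparable.
template <typename T>
class ComparableContainer {
    static_assert(std::is_base_of<Comparable, T>::value,
                  "ComparableContainer requires a Comparable-derived element type");
    std::vector<T*> items;
public:
    void push(T* element) { items.push_back(element); }
};

// ComparableContainer<Comparable> ok;  // compiles
// ComparableContainer<int> bad;        // rejected with the message above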
Using polymorphism, you are basically left with a base class for the container, and derived classes for the data types. The base class/derived classes can have as many virtual functions as you need, in both directions.
Of course, this would mean that you would need to wrap the primitive data types in derived classes as well. If you would reconsider the use of templates overall, this is where I would use the templates. Make one derived class from the base which is a template, and use that for the primitive data types (and others where you don't need any more functionality than is provided by the template).
Don't forget that you might make your life easier by typedefs for each of the templated types -- especially if you later need to turn one of them into a class.
You might also want to check out The Boost Concept Check Library (BCCL) which is designed to provide constraints on the template parameters of templated classes, your containers in this case.
And just to reiterate what others have said, I've never had a problem mixing polymorphism and templates, and I've done some fairly complex stuff with them.
You don't have to give up Java-like interfaces to use templates as well. Josh's suggestion of a generic base template Container would certainly allow you to polymorphically pass Containers and their children around, but additionally you could certainly implement interfaces as abstract classes to be the contained items. There's no reason you couldn't create an abstract IComparable class as you suggested, such that you could have a polymorphic function as follows:
class Whatever
{
public:
void MyPolymorphicMethod(Container<IComparable> &listOfComparables);
};
This method can now take any child of Container that contains any class implementing IComparable, so it would be extremely flexible.