I have the following existing classes:
class Gaussian {
public:
    virtual Vector get_mean() = 0;
    virtual Matrix get_covariance() = 0;
    virtual double calculate_likelihood(Vector &data) = 0;
};

class Diagonal_Gaussian : public Gaussian {
public:
    virtual Vector get_mean();
    virtual Matrix get_covariance();
    virtual double calculate_likelihood(Vector &data);
private:
    Vector m_mean;
    Vector m_covariance;
};

class FullCov_Gaussian : public Gaussian {
public:
    virtual Vector get_mean();
    virtual Matrix get_covariance();
    virtual double calculate_likelihood(Vector &data);
private:
    Vector m_mean;
    Matrix m_covariance;
};
As you can see, the class Gaussian acts as an interface but doesn't have any implementation. This is all working fine.
Now I want to make a class "AdaptedGaussian" where the data vector provided to calculate_likelihood will be changed before the likelihood is calculated.
Some requirements:
The AdaptedGaussian must be a child-class of Gaussian
AdaptedGaussian must be able to "wrap" or "be an instance of" every possible Gaussian class
AdaptedGaussian must be constructed from an already existing Gaussian Object
The idea I have now is:
class Adapted_Gaussian : public Gaussian {
private:
    Gaussian* m_g;
public:
    virtual Vector get_mean() { return m_g->get_mean(); }
    virtual Matrix get_covariance() { return m_g->get_covariance(); }
    virtual double calculate_likelihood(Vector &data)
    {
        // do something with data
        return m_g->calculate_likelihood(data);
    }
};
There are some possible disadvantages:
For every method (and there are more than shown here) a dummy forwarding method must be written in the new class.
If Gaussian is ever extended and this class is forgotten, nasty bugs can appear.
Am I doing this the right way? Or are there better methods to implement this?
Is there perhaps a good way to automatically delegate every non-implemented method to the same-named method of m_g?
Looks good, I think this is a pretty classic implementation of the Adapter pattern. Just don't forget to declare a virtual destructor for your Gaussian class. As for the disadvantages:
The way the Java class library deals with the dummy-method problem is to create a dummy class that provides an empty implementation for every single method. Classes that do not want to implement every method can just inherit from this dummy class and selectively override the methods that interest them.
If you extend your Gaussian class with a few more methods, as long as you declare them as pure virtual you will get a compiler error as soon as you try to instantiate your child class anyway.
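In C++ the analogous trick is a forwarding base class: write the pass-through methods once, then derive adapters from it and override only what they need. A minimal sketch (the name GaussianForwarder is made up for illustration):

class GaussianForwarder : public Gaussian {
public:
    explicit GaussianForwarder(Gaussian* g) : m_g(g) {}
    virtual Vector get_mean() { return m_g->get_mean(); }
    virtual Matrix get_covariance() { return m_g->get_covariance(); }
    virtual double calculate_likelihood(Vector &data) { return m_g->calculate_likelihood(data); }
private:
    Gaussian* m_g; // not owned; lifetime managed by the caller
};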
As you point out, writing a lot of basic pass-through functions is tedious and adds an implied maintenance overhead. Also, having a pointer member implies extra (albeit simple) lifetime-management issues for the owned pointer. Probably the simplest way to address these issues is to make AdaptedGaussian a template, templated on the specific type of Gaussian to be adapted.
template<class BaseGaussian> class AdaptedGaussian : public BaseGaussian
{
public:
    virtual double calculate_likelihood(Vector &data)
    {
        // do something with data
        return BaseGaussian::calculate_likelihood(data);
    }
};
This does rely on all adapted instances of Gaussian being default constructible, or at least conforming to a common constructor signature.
If you want to construct an AdaptedGaussian from an existing XXXGaussian, then so long as the XXXGaussian is itself copyable you can add a suitable constructor:
template<class BaseGaussian> class AdaptedGaussian : public BaseGaussian
{
public:
    AdaptedGaussian(const BaseGaussian& other) : BaseGaussian(other)
    {
    }
    // ...
};
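Usage would then look something like this (a sketch; it assumes FullCov_Gaussian is default constructible and copyable, and that some Vector data is at hand):

FullCov_Gaussian fcg;                       // ... set up as usual ...
AdaptedGaussian<FullCov_Gaussian> ag(fcg);  // copies fcg into the adapter
Vector data;                                // some observation
double l = ag.calculate_likelihood(data);   // the adaptation runs first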
This could also be solved with the Strategy pattern.
It seems to me that duffymo was also thinking in this direction with "composition". Change the design so that the base class calls a method of another object it contains. This object holds the code for calculate_likelihood. Either the whole method can be deferred, or only the modifications (in the second case, the default would be to do nothing).
E.g.: (corrected version)
class Gaussian {
private:
    Cl_Strategy* m_cl_strategy;
public:
    Gaussian(Cl_Strategy* cl_strategy) {
        m_cl_strategy = cl_strategy;
    }
    virtual Vector get_mean() = 0;
    virtual Matrix get_covariance() = 0;
    virtual double _calc_likelihood(Vector &data) = 0;
    virtual double calculate_likelihood(Vector &data) {
        m_cl_strategy->do_your_worst(this, data);
        return _calc_likelihood(data);
    }
};
I hope I got that one right; my C++ is a little rusty ...
_calc_likelihood must be implemented by the subclasses, and calculate_likelihood ties it all together.
Of course, this solution adds a little overhead, but in some situations the overhead might be OK.
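For completeness, the strategy interface implied by the code above might look like this (a sketch; the name do_your_worst and the in-place modification of data are assumptions taken from the snippet):

class Gaussian; // forward declaration

class Cl_Strategy {
public:
    virtual ~Cl_Strategy() {}
    // Modify data in place before the likelihood is computed.
    virtual void do_your_worst(Gaussian* g, Vector &data) = 0;
};

// Default strategy: leave the data untouched.
class Null_Strategy : public Cl_Strategy {
public:
    virtual void do_your_worst(Gaussian*, Vector &) {}
};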
In Java, it's common to have both an interface and an abstract class that implements it to provide default behavior for all methods. (See Joshua Bloch's design of the Collections API in java.util package.) Perhaps that can help you here as well. You'll give clients a choice of using either the interface or the abstract class.
You can also try composition. Pass an instance of an adapted Gaussian to subclasses and defer behavior to it.
Related
Recently I've learnt about the composite pattern. I want to use it in my assignment, in which I have to implement File and Folder classes. I realize that sub-classes like CFile and CFolder will have the same attributes (name and size). So is it alright for me to put the attributes into the interface? As far as I know it is not good practice to do so, but I don't understand why I shouldn't. Or is there some other solution?
I would say it's not a problem. The difference is that instead of a pure interface class you have an abstract base class. However, if you want to retain the flexibility to use the interface for implementations that are not tied down to those specific member variables, then you can always create an interface class as well as an abstract base class. That may be getting overly complex too soon, though; you can always split the interface from the abstract base later if you need to.
using CItemUPtr = std::unique_ptr<class CItem>;

/**
 * Interface class
 */
class CItem
{
public:
    virtual ~CItem() {}
    virtual CItemUPtr findByName(std::string const& name) = 0;
    virtual void setHidden(bool a, bool b) = 0;
};

/**
 * Abstract base class
 */
class AbstractCItem
    : public CItem
{
protected:
    std::string name;
    std::size_t size;
};

class CFile
    : public AbstractCItem
{
public:
    CItemUPtr findByName(std::string const& name) override
    {
        // stuff
        return {};
    }

    void setHidden(bool a, bool b) override {}
};
It's not really a question of "is it a good practice". By creating an interface, you're defining a standard. The question is, do you NEED the implementation of the interface to contain those data members? You are in the best position to understand your implementation, so you're really the only one who can answer this.
As a general rule, the class implementing the interface should be a black box, and the outside world shouldn't have access to any internals (including member data). Interfaces define common functionality that is required to be present to be able to support the interface, and I'd expect those implementation details to be buried in the underlying implementation of the class only, as a general rule. YMMV.
The design principle for a class should be: 'It is impossible to break the class invariant from the outside.' If the constructor(s) set up the class invariant and all members uphold it, this is achieved. However, if the class does not have a class invariant, having public members achieves the same thing.
// in C++, this is a perfectly fine, first-order class
struct Pos
{
    int x, y;
    Pos& operator+=(const Pos&);
};
also see https://en.wikipedia.org/wiki/Class_invariant
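By contrast, a class that does have an invariant must keep its members private so the invariant cannot be broken from the outside. A small sketch (the Fraction class and its invariant are just an illustration):

#include <cassert>

// Invariant: m_den is never zero.
class Fraction
{
public:
    Fraction(int num, int den) : m_num(num), m_den(den)
    {
        assert(den != 0); // the constructor establishes the invariant
    }
    int num() const { return m_num; }
    int den() const { return m_den; }
private:
    int m_num, m_den; // private: no outsider can set m_den to 0
};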
I have a somewhat complicated inheritance structure which is mainly there to avoid code duplication and to provide a common interface for various classes. It relies on virtual and non-virtual inheritance and looks more or less like this:
class AbstractItem
{
    // bunch of abstract methods
};

class AbstractNode : virtual public AbstractItem
{
    // some more abstract methods
};

class AbstractEdge : virtual public AbstractItem
{
    // yet some different abstract methods
};
and then some "real" classes like this
class Item : virtual public AbstractItem
{
    // implements AbstractItem
};

class Node : public Item, public AbstractNode
{
    // implements AbstractNode
};

class Edge : public Item, public AbstractEdge
{
    // implements AbstractEdge
};
and this is packed into a graph model class so that:
class AbstractGraph
{
public:
    virtual QList<AbstractNode*> nodes() const = 0;
    virtual QList<AbstractEdge*> edges() const = 0;
};

class GraphModel : public AbstractGraph
{
public:
    virtual QList<AbstractNode*> nodes() const override; // converts m_Nodes to a list of AbstractNode*
    virtual QList<AbstractEdge*> edges() const override; // ditto
private:
    QList<Node*> m_Nodes;
    QList<Edge*> m_Edge;
};
The reason for this convoluted structure is that there are different classes implementing AbstractGraph, such as sorting and filtering models, and these come in different variants. Some store their data just as the model shown and have their own sets of classes derived from AbstractItem/Node/Edge; others are dynamic and rely on the data of an underlying graph/model without data of their own. Example:
class FilterNode : public AbstractNode
{
    // accesses the data in m_Item via the AbstractItem interface and implements the AbstractNode interface differently
private:
    AbstractItem *m_Item = nullptr; // points to some real item with actual data, such as one from GraphModel
};

class GraphFilter : public AbstractGraph
{
    // implements the interface differently from GraphModel
private:
    QList<FilterNode*> m_Nodes;
    AbstractGraph *m_Source = nullptr; // source graph...
};
I have second thoughts about this because it relies on (virtual) inheritance, abstract methods called through base pointers, etc. Is the overhead from all this significant?
The alternatives would be either:
a) Copy-paste lots of code to avoid virtual methods and most of the inheritance, but that would be a code maintenance nightmare. Plus no common interfaces...
b) Template it all out somehow... I am somewhat unsure about this and do not know whether it is even possible. I do use templates in a few places already to avoid code duplication.
So does this seem reasonable, or like overkill? I might add that in some cases I will call the methods directly (inside the models), bypassing the virtual calls, but from the outside they will pretty much always be called via the abstract base.
Trying to implement generic graph algorithms using dynamic polymorphism in C++ makes things:
Unnecessarily hard.
Unnecessarily slow.
The virtual function overhead stands out more the simpler the functions are. The quoted interface also returns containers from various functions. Even if these are copy-on-write (COW) containers, there is some work involved, and accessing the sequence casually may easily unshare (i.e., copy) the representation.
In the somewhat distant past (roughly 1990 to 1996) I experimented with a generic implementation of graph algorithms based on dynamic polymorphism and struggled with various problems to make it work. When I first read about the STL, it turned out that most of the problems could be addressed via a similar abstraction (although one key idea was still missing: property maps; see the reference to the BGL below for details).
I found it preferable to implement graph algorithms in terms of an STL-like abstraction (see the sketch after this list). The algorithms are function templates implemented in terms of specific concepts, which are sort of like base classes except for two key differences:
There are no virtual function calls involved in the abstraction, and functions can normally be inlined.
The types returned from functions only need to model an appropriate concept rather than having to be compatible, via some form of inheritance, with some specific interface.
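A minimal sketch of that style (num_vertices and out_edges are illustrative stand-ins for the concept's operations, not the BGL's actual interface):

#include <cstddef>
#include <vector>

// One concrete model of the (hypothetical) graph concept:
struct VectorGraph { std::vector<std::vector<std::size_t>> adj; };
std::size_t num_vertices(const VectorGraph& g) { return g.adj.size(); }
const std::vector<std::size_t>& out_edges(const VectorGraph& g, std::size_t v) { return g.adj[v]; }

// The algorithm is a function template: no virtual dispatch, fully inlinable.
template <typename Graph>
std::size_t count_edges(const Graph& g)
{
    std::size_t n = 0;
    for (std::size_t v = 0; v < num_vertices(g); ++v)
        n += out_edges(g, v).size(); // operations found via argument-dependent lookup
    return n;
}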
Admittedly I'm biased, because I wrote my diploma thesis on this topic. For an [independently developed] application of this approach, have a look at the Boost Graph Library (BGL).
For some performance measurements comparing different function call approaches, have a look at the function call benchmarks. They are modelled after the performance measurements for function calls from the Performance TR.
I have a class hierarchy that I designed for a project of mine, but I am not sure how to go about implementing part of it.
Here is the class hierarchy:
class Shape { };
class Colored { // Only pure virtual functions
};
class Square : public Shape { };
class Circle : public Shape { };
class ColoredSquare : public Square, public Colored { };
class ColoredCircle : public Circle, public Colored { };
In part of my project I have a std::vector of different shapes. In order to run an algorithm, though, I need to put them in a std::vector of colored objects (all of which are derived types of different concrete shapes), so I need a method to cast a Square into a ColoredSquare and a Circle into a ColoredCircle at runtime.
The tricky thing is that the 'shape' classes are in a different library than the 'colored' classes.
What is the best method to accomplish this? I have thought about doing a dynamic_cast check, but if there is a better way I would rather go with that.
Edit 1:
Here's a slightly better example:
class Traceable {
public:
    // All virtual functions
    virtual bool intersect(const Ray& r) = 0;
    // ...
};

class TraceableSphere : public Sphere, public Traceable {
};

class IO {
public:
    // Reads shapes from a file, constructs new concrete shapes, and returns them to
    // whatever class needs them.
    std::vector<Shape*> shape_reader(std::string file_name);
};

class RayTracer {
public:
    void init(const std::vector<Shape*>& shapes);
    void run();
private:
    std::vector<Traceable*> traceable_shapes;
};

void RayTracer::init(const std::vector<Shape*>& shapes) {
    // ??? traceable_shapes <- shapes
}

void RayTracer::run() {
    // Do algorithm
}
You could use the decorator pattern:
class ColorDecorator : public Colored
{
public:
    ColorDecorator(Shape* shape) : m_shape(shape) {}
    ... // forward/implement whatever you want
private:
    Shape* m_shape;
};
If you want to store a Square in a Colored vector, wrap it in such a decorator.
Whether this makes sense is questionable, though; it depends on your design and the alternatives. Just in case, also check out the visitor pattern (a.k.a. double dispatch), which you could use to visit just a subset of the objects in a container, or to treat them differently depending on their type.
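Hypothetical usage, assuming the decorator implements the Colored interface in terms of the wrapped shape:

std::vector<Colored*> colored_objects;
colored_objects.push_back(new ColorDecorator(new Square())); // a Square, now usable as Colored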
Looks like you are going to design the class library in an "is-a" style; welcome to Inheritance-Hell.
Can you elaborate a bit on your "algorithm"?
Typically it is bad design if you need to "type-test" objects, since that is what you want to avoid with polymorphism. The object should provide the proper implementation the algorithm uses (design pattern: "strategy"); more advanced concepts utilize policy-based class design.
With careful design you can avoid casting. In particular, take care of the SRP (single responsibility principle). Implement methods carefully so that they use a single interface to achieve a single goal / fulfill a single responsibility. You have not posted anything about the algorithms or how the objects will be used. Below is a hypothetical sample design:
class A {
public:
    void doSomeThing();
};

class B {
public:
    void doSomeOtherThing();
};

class C : public A, public B {};

void f1(A* a) {
    // some operation
    a->doSomeThing();
    // more operation
}

void f2(B* b) {
    // some operation
    b->doSomeOtherThing();
    // more operation
}

int main(int argc, char* argv[])
{
    C c;
    f1(&c);
    f2(&c);
    return 0;
}
Note the use of the object c in different contexts. The idea is to use only the part of C's interface that is relevant for a specific purpose. This example could have classes instead of the functions f1 and f2: for example, if you have some algorithm classes that perform operations on the objects in the inheritance hierarchy, you should design each class to have a single responsibility, which most of the time requires only a single interface, and then you can create/pass objects as instances of that interface only.
Object-oriented programming only makes sense if all implementations of an interface implement the same operations in a different way. Object-orientation is all about operations. You have not shown us any operations, so we cannot tell you if object-orientation even makes sense for your problem at all. You do not have to use object-oriented programming if it doesn't make sense, especially in C++, which offers a few other ways to manage code.
As for dynamic_cast -- in well-designed object-oriented code it should be rare. If you really need to know the concrete type in some situation (and there are such situations in real-life software engineering, especially when you maintain legacy code), then it's the best tool for the job, and much cleaner than trying to reinvent the wheel by putting something like virtual Concrete* ToConcrete() in the base class.
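Applied to the init() from the question, the dynamic_cast approach is a short loop (a sketch; RTTI requires Shape to have at least one virtual function):

void RayTracer::init(const std::vector<Shape*>& shapes) {
    for (std::size_t i = 0; i < shapes.size(); ++i) {
        // Cross-cast from the Shape base to the Traceable base of the same object.
        if (Traceable* t = dynamic_cast<Traceable*>(shapes[i]))
            traceable_shapes.push_back(t);
    }
}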
I think the simplest and cleanest solution for you would be something like the following, similar to what Chris suggests at the end:
class Colored; // forward declaration

class Shape {
public:
    virtual Colored *getColored() {
        return NULL;
    }
};

class Colored { // Only pure virtual functions
};

class Square : public Shape { };
class Circle : public Shape { };

class ColoredSquare : public Square, public Colored {
public:
    virtual Colored *getColored() {
        return this;
    }
};

class ColoredCircle : public Circle, public Colored {
public:
    virtual Colored *getColored() {
        return this;
    }
};
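With this in place, collecting the colored subset needs no dynamic_cast; a sketch mirroring the question's init():

std::vector<Colored*> colored_shapes;
for (std::size_t i = 0; i < shapes.size(); ++i)
    if (Colored* c = shapes[i]->getColored()) // non-NULL only for colored shapes
        colored_shapes.push_back(c);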
I do not completely understand this statement, though:
"The tricky thing is that the 'shape' classes are in a different library than the 'colored' classes."
How does this prevent you from doing what's being suggested here (while still allowing you to create a class ColoredSquare)?
I am a bit weak at design and I wonder whether it's good design to have simple virtual methods (not only pure virtual ones) in an interface. I have a class that is some kind of interface:
class IModel {
    virtual void initialize(...);
    virtual void render(...);
    virtual int getVertexCount() const;
    virtual int getAnotherField() const;
};
The initialize and render methods certainly need to be reimplemented, so they are good candidates for pure virtual methods. However, the two last methods are very simple and practically always have the same implementation (just returning some field). Can I leave them as virtual methods with a default implementation, or is it better to make them pure virtual and have them reimplemented, because it's an interface?
We have to point out some differences:
there is no such thing as "some kind of interface"; is this class supposed to be an Interface or an Abstract Class?
If it's supposed to be an Interface, then the answer is: all its methods must be pure virtual (no implementation) and it must not contain fields, not even one. The most you can (must, actually) do is, like jaunchopanza said, give an empty body to the virtual destructor, thus allowing the derived classes to be destructed properly.
If, instead, it's supposed to be an Abstract Class, then you're free to add the fields m_vertexCount and m_anotherField (I suppose) and implement getVertexCount() and getAnotherField() as you please. However, you should not name it IModel, because the I prefix should be used only for Interfaces.
Edit: I think I'm one of those "Believers" Bo Persson is talking about :)
You are facing a trade-off between code repetition and readability. The reader of your code will derive good help from every pure interface and from every non-overridden method. However, the default implementation will be duplicated by every subclass. Whether or not you should provide a default implementation depends on the likelihood that the default implementation will change and will then need to be changed all over the place.
Without knowing these details, a hard yes-or-no answer cannot be given.
One thing you could do is make IModel an interface and provide a base class, e.g. ModelBase, that implements the common/repeating functionality.
class IModel
{
public:
    virtual void initialize(...) = 0;
    virtual void render(...) = 0;
    virtual int getVertexCount() const = 0;
    virtual int getAnotherField() const = 0;
};

class ModelBase : public IModel
{
public:
    // common functions
    virtual int getVertexCount() const override { return vertexCount_; }
    virtual int getAnotherField() const override { return anotherField_; }
protected:
    int vertexCount_ = 0, anotherField_ = 0;
};
class MyModel : public ModelBase
{
public:
    virtual void initialize(...) override { ... }
    virtual void render(...) override { ... }
};
The one downside of this approach is that there will be some (probably negligible) performance penalty due to extra virtual functions and loss of optimizations by the compiler.
I'm developing a GUI library with a friend and we faced the problem of how to determine whether a certain element should be clickable or not (or movable, etc.).
We decided to just check whether a function exists for a specific object; all GUI elements are stored in a vector of pointers to the base class.
So for example if I have
class Base {};
class Derived : public Base
{
    void example() {}
};
vector<Base*> objects;
How would I check whether an element of objects has a function named example?
If this isn't possible, then what would be a different way to implement optional behaviour like clicking and the like?
You could just have a virtual IsClickable() method in your base class:
class Widget {
public:
    virtual bool IsClickable(void) { return false; }
};

class ClickableWidget : public Widget
{
public:
    virtual bool IsClickable(void) { return true; }
};

class SometimesClickableWidget : public Widget
{
public:
    virtual bool IsClickable(void);
    // More complex logic punted to .cc file.
};

vector<Widget*> objects;
This way, objects default to not being clickable. A clickable object either overrides IsClickable() or subclasses ClickableWidget instead of Widget. No fancy metaprogramming needed.
EDIT: To determine if something is clickable:
if(object->IsClickable()) {
// Hey, it's clickable!
}
The best way to do this is to use mixin multiple inheritance, a.k.a. interfaces.
class HasExample // note no superclass here!
{
public:
    virtual ~HasExample() {}
    virtual void example() = 0;
};

class Derived : public Base, public HasExample
{
public:
    void example()
    {
        printf("example!\n");
    }
};

vector<Base*> objects;
objects.push_back(new Derived());

// Note: for the dynamic_cast below to compile, Base must be polymorphic,
// i.e. have at least one virtual function (a virtual destructor will do).
Base* p = objects[0];
HasExample* he = dynamic_cast<HasExample*>(p);
if (he)
    he->example();
dynamic_cast<>() does a test at runtime of whether a given object implements HasExample, and returns either a HasExample* or NULL. However, if you find yourself using HasExample* a lot, it's usually a sign you need to rethink your design.
Beware! When using multiple inheritance like this, then (HasExample*)ptr != ptr. Casting a pointer to one of its parents might cause the value of the pointer to change. This is perfectly normal, and inside the method this will be what you expect, but it can cause problems if you're not aware of it.
Edit: Added example of dynamic_cast<>(), because the syntax is weird.
If you're willing to use RTTI...
Instead of checking class names, you should create Clickable, Movable, etc. classes. Then you can use a dynamic_cast to see whether the various elements implement the interface that you are interested in.
IBM has a brief example program illustrating dynamic_cast here.
I would create an interface, make the method(s) part of the interface, and then implement that interface on any class that should have the functionality.
That would make the most sense when trying to determine whether an object implements some set of functionality (rather than checking for the method name):
class IMoveable
{
public:
    virtual ~IMoveable() {}
    virtual void Move() = 0;
};

class Base {};

class Derived : public Base, public IMoveable
{
public:
    virtual void Move()
    {
        // Implementation
    }
};
Now you're no longer checking for method names, but casting to the IMoveable type and calling Move().
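The runtime check then becomes a cast instead of a name lookup (a sketch; as noted in the earlier answer, Base needs at least one virtual function for the dynamic_cast to compile):

for (std::size_t i = 0; i < objects.size(); ++i)
    if (IMoveable* m = dynamic_cast<IMoveable*>(objects[i]))
        m->Move(); // only objects implementing IMoveable are moved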
I'm not sure it is easy or good to do this by reflection. I think a better way would be to have an interface (something like GUIElement) that has an isClickable function. Make your elements implement the interface, and then the ones that are clickable will return true in their implementation of the function. All others will of course return false. When you want to know if something's clickable, just call its isClickable function. This way you can change elements from clickable to non-clickable at runtime, if that makes sense in your context.