Versions of the same class with different inheritance - C++

I am working with a mathematical software framework that has two large inheritance trees. Conceptually it looks like this: one tree is for general functions (they inherit from Func) and one is for normalized probability density functions (they inherit from Pdf).
However, there are some classes that should in principle exist in both hierarchies, for example, a Gauss function.
At the moment, there is a GaussFunc and a GaussPdf class with identical source code implementations, except for the class name and the inheritance from Func and Pdf, respectively.
I would like to improve this situation by getting rid of the second copy of the source code. I can think of several ways to solve this without messing up the entire inheritance tree, for example using preprocessor macros in combination with #include statements, or maybe templates, but I'm not sure what is the most advisable approach in this situation.
Any suggestion on how to proceed is highly welcome. However, please note that I cannot restructure the whole software project to avoid this problem a priori (which is certainly the sanest approach, but not possible within the timeframe of my work, and not within the range of things I'm allowed to decide about).

Just templatize the class you want to inherit from, like this:
template <typename Base>
class MetaGauss : public Base
{
    // ... shared Gauss implementation goes here ...
};

typedef MetaGauss<Func> GaussFunc;
typedef MetaGauss<Pdf>  GaussPdf;
Here is a live demo:
http://ideone.com/XD4E6y
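As a rough sketch of how this could look in your case (the evaluate() member, the constructor, and the Func/Pdf stand-ins below are all invented for illustration, since the real base classes aren't shown in the question):

#include <cmath>

// Hypothetical stand-ins for the real base classes.
struct Func { virtual ~Func() = default; };
struct Pdf  { virtual ~Pdf()  = default; };

// One copy of the Gauss implementation, parameterized on the base class.
template <typename Base>
class MetaGauss : public Base
{
public:
    MetaGauss(double mean, double sigma) : mean_(mean), sigma_(sigma) {}

    double evaluate(double x) const
    {
        const double pi = 3.141592653589793;
        const double z  = (x - mean_) / sigma_;
        return std::exp(-0.5 * z * z) / (sigma_ * std::sqrt(2.0 * pi));
    }

private:
    double mean_;
    double sigma_;
};

typedef MetaGauss<Func> GaussFunc;  // lives in the Func hierarchy
typedef MetaGauss<Pdf>  GaussPdf;   // lives in the Pdf hierarchy

Any fix to the shared Gauss code now applies to both hierarchies at once; only the base class differs between the two typedefs.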

Related

C++ Class Superclass

I'm currently making a C++ version of Python's svg.path. There are multiple types of paths, like a Line, CubicBezier, etc., which are separate classes (with no inheritance, except for Line and Close, which inherit from Linear, but that can be removed if necessary). There's also a Path class, which in Python holds a list of segments. But I'm not sure how to have a vector of segments in C++.
So something like this:
class Line {};
class CubicBezier {};
class Arc {};

class Path {
    // Segment should be able to store any type of segment, like Line, Arc, etc.
    std::vector<Segment> segments;
};
Currently the best thing I can think of is a Segment class that stores all the segment types and has various setters and getters for each of them, but that seems tedious and annoying, as well as inefficient.
Also, if there's a better way to do this (and there almost certainly is), please explain how to do that, and I'll try it.
If needed, I can post the python code from svg.path.
Since you're not using inheritance, you'll need a different tool. It appears you want something like std::variant<Line, CubicBezier, Arc>.
The downside of this approach is that you'll need to handle all the different cases yourself, since there's no common base class interface.
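A minimal sketch of how that could look (the Segment alias and the arcCount() helper are illustrative; the real classes would carry the actual path data):

#include <cstddef>
#include <variant>
#include <vector>

class Line {};
class CubicBezier {};
class Arc {};

// A Segment holds exactly one of the listed types at a time.
using Segment = std::variant<Line, CubicBezier, Arc>;

class Path {
public:
    std::vector<Segment> segments;

    std::size_t arcCount() const
    {
        std::size_t n = 0;
        for (const Segment& s : segments)
        {
            // std::holds_alternative asks which type is currently stored;
            // std::visit can dispatch a callable on whichever one it is.
            if (std::holds_alternative<Arc>(s))
                ++n;
        }
        return n;
    }
};

int main()
{
    Path p;
    p.segments.push_back(Line{});
    p.segments.push_back(Arc{});
    return p.arcCount() == 1 ? 0 : 1;
}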

Inheritance in C++ multiple class issue

Just a very simple question regarding inheritance in C++.
Let's say I have a few classes.
Class A inherits from class B and from class C.
I want to make class D inherit from class A, but the functionality of class C is breaking my code.
Is it possible to somehow exclude class C when inheriting from class A in class D?
edit:
@Quentin
I'm using SFML, and class A inherits from the sf::NonCopyable class. Class A is the SceneNode class on which the hierarchy for all entities/objects in the game world is based. I was making a "TileEngine" class that produces instances of "TileLayer" objects, and I wanted the TileLayers to inherit from SceneNode so that I could pass drawing calls to them through the hierarchy, but since they're non-copyable I can't fit them into a container and iterate through them in the TileEngine class.
But I think you're right, it doesn't truly break the code. I think I'll just need to add a few variables and come up with a bookkeeping system to make it work.
I was just curious whether what I asked was possible, since it would be an easy solution and I don't know all the ins and outs of using inheritance yet, so even though it seemed unlikely I decided to check. Thanks for the replies; I think I'll be able to adapt the code on my own.
Nope.
Your A is both a B and a C.
If D cannot be a C, then it cannot be an A either.
Maybe use composition instead?
Update based on your specific case: There are a couple of ways that you can sort this out.
First off, does a SceneNode really need to be non-copyable, and if so, why? If this is a pure design decision, it is now apparent that it was the wrong one, since you're now in need of a copyable SceneNode. If the decision is technical (for example, there is bookkeeping data that is hard to clone correctly), you can try solving that problem. Failing that...
Could your SceneNode be movable instead? Move semantics are generally simpler than copy semantics to implement, and standard containers are perfectly happy with movable-only values. But even in that case...
Could your SceneNode be a simple interface instead? You only mention being able to call a drawing function. This does not sound related to any copying business, so maybe an interface with a pure virtual draw function is all you need. Otherwise...
If you really can't budge these requirements (at which point I would be surprised, but let's pretend), you can simply use a container of std::unique_ptr<TileLayer>. These don't require anything from their pointee, and can be stored in containers at will.
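As a rough sketch of how the last two suggestions fit together (the SceneNode, TileLayer, and TileEngine shapes below are placeholders for your real classes; the real SceneNode would additionally inherit sf::NonCopyable):

#include <memory>
#include <vector>

// Minimal drawing interface; whether the concrete nodes are copyable
// doesn't matter, because we only ever hold them through pointers.
class SceneNode {
public:
    virtual ~SceneNode() = default;
    virtual void draw() const = 0;
};

class TileLayer : public SceneNode {
public:
    void draw() const override { /* draw this layer */ }
};

class TileEngine {
public:
    void addLayer() { layers_.push_back(std::make_unique<TileLayer>()); }

    void drawAll() const
    {
        // std::unique_ptr is movable, so the vector is happy even though
        // the pointees themselves are non-copyable.
        for (const auto& layer : layers_)
            layer->draw();
    }

private:
    std::vector<std::unique_ptr<TileLayer>> layers_;
};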
And then there's a whole batch of other techniques that could fit your case. Don't forget that OOP and inheritance are just one way to crack that nut; C++ offers many more tools and techniques besides. But first, make it work :)

What are the cons of using inheritance to reduce duplicate code?

I'm currently writing a series of specialized data structure classes for a project I'm working on, and I've noticed that they all more or less share several completely identical properties and functions.
I could either live with the duplicate code or let them inherit from a shared base class so that the total amount of code is reduced a lot (which makes the whole thing much more maintainable). But alas, I can't decide what to do at this point.
I more or less understand the pros of the inheritance route, but what are the cons when you compare it to having duplicate code lying around?
Which route is the more sane one to go for a 'long term' project?
From a design perspective, inheritance violates encapsulation. By inheriting from a class, you couple the new class to the implementation details of the parent, not all of which may be relevant to the inheriting class.
For instance, let's say you have a class Vehicle. This class has a number of private variables, e.g., weight_, maxSpeed_, and fuelCapacity_.
Now let's say you derive a class Bicycle from it. The new class carries all the details associated with fuelCapacity_, even though they are not needed. This kind of thing becomes quite a pain as the objects get more complex, because changes that break the parent can also break the inheriting classes, even if they don't actually use the volatile part of the code.
A much safer choice is composition.
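A hedged sketch of the contrast, using the example above (the member breakdown and the MassProperties helper are invented for illustration):

// Inheritance: Bicycle drags in everything Vehicle has, including
// fuelCapacity_, which is meaningless for a bicycle.
class Vehicle {
public:
    double weight() const { return weight_; }
private:
    double weight_ = 0.0;
    double maxSpeed_ = 0.0;
    double fuelCapacity_ = 0.0;
};

class InheritedBicycle : public Vehicle {};

// Composition: shared pieces live in small, focused components, and each
// data structure pulls in only what it actually uses.
class MassProperties {
public:
    double weight() const { return weight_; }
private:
    double weight_ = 0.0;
};

class ComposedBicycle {
public:
    double weight() const { return mass_.weight(); }
private:
    MassProperties mass_;  // only the parts a bicycle needs
};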

Inheritance & virtual functions Vs Generic Programming

I need to understand whether inheritance and virtual functions are really unnecessary in C++, and whether one can achieve everything using generic programming. This idea came from Alexander Stepanov; the lecture I was watching is Alexander Stepanov: STL and Its Design Principles.
I always like to think of templates and inheritance as two orthogonal concepts, in the very literal sense: To me, inheritance goes "vertically", starting with a base class at the top and going "down" to more and more derived classes. Every (publicly) derived class is a base class in terms of its interface: A poodle is a dog is an animal.
On the other hand, templates go "horizontal": Each instance of a template has the same formal code content, but two distinct instances are entirely separate, unrelated pieces that run in "parallel" and don't see each other. Sorting an array of integers is formally the same as sorting an array of floats, but an array of integers is not at all related to an array of floats.
Since these two concepts are entirely orthogonal, their application is, too. Sure, you can contrive situations in which you could replace one by another, but when done idiomatically, both template (generic) programming and inheritance (polymorphic) programming are independent techniques that both have their place.
Inheritance is about making an abstract concept more and more concrete by adding details. Generic programming is essentially code generation.
As my favourite example, let me mention how the two technologies come together beautifully in a popular implementation of type erasure: A single handler class holds a private polymorphic pointer-to-base of an abstract container class, and the concrete, derived container class is determined by a templated, type-deducing constructor. We use template code generation to create an arbitrary family of derived classes:
// internal helper base
class TEBase { /* ... */ };

// internal helper derived TEMPLATE class (unbounded family!)
template <typename T> class TEImpl : public TEBase { /* ... */ };

// single public interface class
class TE
{
    TEBase * impl;
public:
    // "infinitely many" constructors:
    template <typename T> TE(const T & x) : impl(new TEImpl<T>(x)) { }
    // ...
};
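To make the pattern concrete, here is a filled-in version where the erased operation is simply printing; the bodies are elided above, so everything inside this sketch (print(), the ownership via std::unique_ptr) is my own assumption rather than the original author's code:

#include <iostream>
#include <memory>
#include <string>
#include <vector>

class TEBase {
public:
    virtual ~TEBase() = default;
    virtual void print(std::ostream& os) const = 0;
};

template <typename T>
class TEImpl : public TEBase {
public:
    explicit TEImpl(const T& x) : value_(x) {}
    void print(std::ostream& os) const override { os << value_; }
private:
    T value_;
};

class TE {
public:
    template <typename T>
    TE(const T& x) : impl_(std::make_unique<TEImpl<T>>(x)) {}
    void print(std::ostream& os) const { impl_->print(os); }
private:
    std::unique_ptr<TEBase> impl_;
};

int main()
{
    std::vector<TE> values;
    values.emplace_back(42);                    // holds an int
    values.emplace_back(std::string("hello"));  // holds a std::string
    for (const TE& v : values) { v.print(std::cout); std::cout << '\n'; }
}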
They serve different purposes. Generic programming (at least in C++) is about compile-time polymorphism, and virtual functions are about run-time polymorphism.
If the choice of the concrete type depends on the user's input, you really need runtime polymorphism; templates won't help you.
Polymorphism (i.e. dynamic binding) is crucial for decisions that are based on runtime data. Generic data structures are great but they are limited.
Example: Consider an event handler for a discrete event simulator. It is very cheap (in terms of programming effort) to implement this with a pure virtual function, but it is verbose and quite inflexible if done purely with templated classes.
As a rule of thumb: if you find yourself switching (or if-else-ing) on the value of some input object and performing different actions depending on its value, there might exist a better (in the sense of maintainability) solution with dynamic binding.
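A small illustration of that rule of thumb, with invented event names (nothing here comes from the lecture or the simulator example above):

#include <iostream>
#include <memory>
#include <string>

// The concrete handler is chosen from runtime input; after the one factory
// decision below, further behavior is selected by virtual dispatch instead
// of repeated switches scattered through the code.
class EventHandler {
public:
    virtual ~EventHandler() = default;
    virtual void handle() const = 0;
};

class ArrivalHandler : public EventHandler {
public:
    void handle() const override { std::cout << "handling arrival\n"; }
};

class DepartureHandler : public EventHandler {
public:
    void handle() const override { std::cout << "handling departure\n"; }
};

std::unique_ptr<EventHandler> makeHandler(const std::string& kind)
{
    // The only place that inspects the input value.
    if (kind == "arrival")
        return std::make_unique<ArrivalHandler>();
    return std::make_unique<DepartureHandler>();
}

int main()
{
    std::string kind;
    std::cin >> kind;              // concrete type depends on user input
    makeHandler(kind)->handle();   // dispatched through the vtable
}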
Some time ago I thought about a similar question, and I can only dream of giving you an answer as great as the one I received. Perhaps this is helpful: interface paradigm performance (dynamic binding vs. generic programming)
It seems like a very academic question. As with most things in life, there are lots of ways to do things, and in the case of C++ you have a number of ways to solve the same problem. There is no need to take an XOR attitude to it.
In the ideal world, you would use templates for static polymorphism to give you the best possible performance in instances where the type is not determined by user input.
The reality is that templates force most of your code into headers and this has the consequence of exploding your compile times.
I have done some heavy generic programming leveraging static polymorphism to implement a generic RPC library (https://github.com/bytemaster/mace (rpc_static_poly branch)). In this instance the protocol (JSON-RPC), the transport (TCP/UDP/stream/etc.), and the types are all known at compile time, so there is no reason to do a vtable dispatch... or is there?
When I run the code through the preprocessor, a single .cpp file results in 250,000 lines and takes 30+ seconds to compile into a single object file. I implemented 'identical' functionality in Java and C#, and it compiles in about a second.
Almost every STL or Boost header you include adds thousands or tens of thousands of lines of code that must be processed per object file, most of it redundant.
Do compile times matter? In most cases they have a more significant impact on the final product than 'maximally optimized vtable elimination'. The reason is that every bug requires a 'try fix, compile, test' cycle, and if each cycle takes 30+ seconds, development slows to a crawl (note the motivation for Google's Go language).
After spending a few days with Java and C#, I decided that I needed to rethink my approach to C++. There is no reason a C++ program should compile much slower than the C that would implement the same functionality.
I now opt for runtime polymorphism unless profiling shows that the bottleneck is in vtable dispatches. I use templates to provide 'just-in-time' polymorphism and a type-safe interface on top of an underlying object that deals in void* or an abstract base class. In this way users need not derive from my 'interfaces' and still get the 'feel' of generic programming, but they also get the benefit of fast compile times. If performance becomes an issue, the generic code can be replaced with static polymorphism.
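A minimal sketch of that kind of hybrid, under my own assumptions about the shape of the wrapper (none of these names or signatures come from the library mentioned above):

#include <memory>
#include <string>

// Non-template core: can be compiled once in a .cpp file, so heavy headers
// are not pulled into every translation unit that uses the wrapper.
class ConnectionBase {
public:
    virtual ~ConnectionBase() = default;
    virtual void sendBytes(const std::string& payload) = 0;
};

// Thin templated, type-safe veneer: users pass their own message types
// without deriving from any interface; dispatch underneath is virtual.
template <typename Message>
class Connection {
public:
    explicit Connection(std::unique_ptr<ConnectionBase> impl)
        : impl_(std::move(impl)) {}

    void send(const Message& msg)
    {
        // serialize() is assumed to exist on the user's message type.
        impl_->sendBytes(msg.serialize());
    }

private:
    std::unique_ptr<ConnectionBase> impl_;
};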
The results are dramatic: compile times have fallen from 30+ seconds to about a second, and the post-preprocessor source is now a couple of thousand lines instead of 250,000.
On the other side of the discussion, I was developing a library of 'drivers' for a set of similar but slightly different embedded devices. In this instance the embedded devices had little room for extra code and no use for vtable dispatch. With C, our only options were separate object files or runtime 'polymorphism' via function pointers. Using generic programming and static polymorphism, we were able to create maintainable software that ran faster than anything we could produce in C.
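A generic sketch of what static polymorphism for such drivers might look like, with made-up device names (the real library isn't shown in the answer):

#include <cstdint>

// The device is a compile-time template parameter, so each instantiation
// inlines the device-specific calls: no vtables, no function pointers.
struct UartA {
    static void writeByte(std::uint8_t b) { /* poke UART A registers */ (void)b; }
};

struct UartB {
    static void writeByte(std::uint8_t b) { /* poke UART B registers */ (void)b; }
};

template <typename Device>
class Logger {
public:
    static void log(const char* msg)
    {
        for (const char* p = msg; *p != '\0'; ++p)
            Device::writeByte(static_cast<std::uint8_t>(*p));
    }
};

int main()
{
    Logger<UartA>::log("boot\n");
    Logger<UartB>::log("boot\n");
}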

Why did Boost Parameter choose inheritance rather than composition?

I suppose most of the people on this site will agree that implementation can be outsourced in two ways:
private inheritance
composition
Inheritance is most often abused. Notably, public inheritance is often used where another form of inheritance could have been better, and in general one should use composition rather than private inheritance.
Of course the usual caveats apply, but I can't think of any time where I really needed inheritance for an implementation problem.
For the Boost Parameter library, however, you will notice that they have chosen inheritance over composition for the implementation of the named parameter idiom (for the constructor).
I can only think of the classical EBO (empty base optimization) explanation, since there are no virtual methods at play here that I can see.
Does anyone know better, or can anyone point me to the relevant discussion?
Thanks,
Matthieu.
EDIT: Oops! I posted the answer below because I misread your post. I thought you said the Boost library used composition over inheritance, not the other way around. Still, in case it's useful for anyone... (See EDIT 2 for what I think could be the answer to your question.)
I don't know the specific answer for the Boost Parameter library. However, I can say that composition is usually the better choice. The reason is that whenever you have the option to implement a relationship in more than one way, you should choose the weakest one (low coupling/high cohesion). Since inheritance is a stronger relationship than composition...
Notice that sometimes using private inheritance can make it harder to write exception-safe code too. Take the assignment operator, for example. Using composition, you can create a temporary and do the assignment with commit/rollback logic (assuming correct construction of the object). But if you use inheritance, you'll probably call something like Base::operator=(obj) inside the derived class's operator=. If that Base::operator=(obj) call throws partway through, you risk your exception-safety guarantees.
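A sketch of that commit/rollback idea with composition, using invented Impl/Widget names (and assuming that swapping Impl cannot throw):

#include <utility>

class Impl {
    // resource-owning implementation details
};

class Widget {
public:
    Widget& operator=(const Widget& other)
    {
        Impl tmp(other.impl_);   // may throw, but *this has not been touched yet
        std::swap(impl_, tmp);   // commit: assumed non-throwing
        return *this;
    }

    // With private inheritance (class Widget : private Impl), the equivalent
    // would call Impl::operator=(other), which mutates *this in place and can
    // leave it half-assigned if it throws.

private:
    Impl impl_;
};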
EDIT 2: Now, trying to answer what you really asked. This is what I could understand from the link you provided. Since I don't know all the details of the library, please correct me if I'm wrong.
When you use composition for "implemented in terms of" you need one level of indirection for the delegation.
struct AImpl
{
    // Dummy code, just for the example.
    int get_int() const { return 10; }
};

struct A
{
    AImpl * impl_;
    int get_int() const { return impl_->get_int(); }
    /* ... */
};
In the case of the parameter-enabled constructor, you need to create an implementation class but you should still be able to use the "wrapper" class in a transparent way. This means that in the example from the link you mentioned, it's desired that you can manipulate myclass just like you would manipulate myclass_impl. This can only be done via inheritance. (Notice that in the example the inheritance is public, since it's the default for struct.)
I assume myclass_impl is supposed to be the "real" class, the one with the data, behavior, etc. Then, if you had a method like get_int() in it and you didn't use inheritance, you would be forced to write a get_int() wrapper in myclass, just like I did above.
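For contrast, a toy sketch of the inheritance version (the myclass/myclass_impl names follow the linked example, but the bodies are invented):

struct myclass_impl
{
    // The "real" class: data and behavior live here.
    int get_int() const { return 10; }
};

// Public inheritance: myclass exposes myclass_impl's interface directly,
// so no hand-written forwarding wrappers like get_int() are needed.
struct myclass : myclass_impl
{
    // parameter-enabled constructors, etc. would go here
};

int main()
{
    myclass m;
    return m.get_int() == 10 ? 0 : 1;
}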
This isn't a library I've ever used, so a glance through the documentation you linked to is the only thing I'm basing this answer on. It's entirely possible I'm about to be wrong, but...
They mention constructor delegation as a reason for using a common base class. You're right that composition could address that particular issue just as well. Putting it all in a single type, however, would not work. They want to boil multiple constructor signatures into a single user-written initialization function, and without constructor delegation that requires a second data type. My suspicion is that much of the library had already been written from the point of view of putting everything into the class itself. When they ran into the constructor delegation issue they compromised. Putting it into a base class was probably closer to what they were doing with the previous functionality, where they knew that both interface and implementation aspects of the functionality would be accessible to the class you're working with.
I'm not slamming the library in any way. I highly doubt I could put together a library like this one in any reasonable amount of time. I'm just reading between the lines. You know, speaking from ignorance but pretending I actually know something. :-)