Will metaclasses replace inheritance?

I've watched some videos and read some articles on metaclasses, and IMO they have the potential to replace inheritance by providing a sort of compile-time inheritance.
With metaclasses I can provide interfaces, default function implementations, and even enforce correctness of an implementation (not the bodies of the functions, of course) at compile time. So is there anything that can be done with inheritance, polymorphism, and the rest of the OO toolbox that can't be done with metaclasses?

Just as a general remark, I write as a "frequent" author of Python metaclasses:
Metaclasses are not meant for "day to day" use, and although C++ metaclasses may not be the same as Python's, I can hardly see such a concept being used to replace something as common as inheritance.
Inheritance has its role. If you need special rules for a whole class hierarchy, it may be useful to have a metaclass to specify those rules to start with. (The first example in both the C++ proposal and most explanatory material I've browsed is "interfaces", which mandate that all methods be virtual.) So, supposing you find a special rule you want in a whole set of classes in your system and can express it with a metaclass, that does not preclude you from creating a single base class from that metaclass and deriving all other classes through "normal" inheritance. If what changes from one such class to another is just the usual method overriding and specialization, inheritance will still be simpler: to code, for whoever reads your code, and for the toolchain ecosystem that has evolved around C++ over decades of the language's existence.
So, yes, metaclasses could, for all that's worth, allow you to encode all the common capabilities of your classes in the metaclass and then just create new classes using that metaclass instead of inheritance. But there is no motive on Earth to do that.
Just to bridge back to Python, where I am well acquainted with the concept: the language recently took a step in the opposite direction. Version 3.6 enabled two mechanisms in normal inheritance (`__init_subclass__` and `__set_name__`) that were previously only possible through metaclasses, further reducing the use cases where metaclasses are needed, precisely because of the extra complication they necessarily bring.

Related

Implement concatenative inheritance in C++

Is it possible to implement concatenative inheritance, or at least mixins, in C++?
It feels like it is impossible to do in C++, but I cannot prove it.
Thank you.
According to this article:
Concatenative inheritance is the process of combining the properties
of one or more source objects into a new destination object.
Are we speaking of class inheritance?
This is the basic way public inheritance works in C++. Thanks to multiple inheritance, you can even combine several base classes.
There might be some constraints, however: e.g. name conflicts between different sources have to be addressed, depending on the use case you might need virtual functions, and you might need to explicitly create a combined constructor.
Or is inheritance from instantiated objects meant?
If it's really about objects and not classes, the story is different. You cannot clone and combine objects of arbitrary type with each other, since C++ is a statically typed language.
But first, let's correct the misleading wording. It's not really about concatenative inheritance, since inheritance is for classes. It's rather "concatenative prototyping", since you create new objects by taking over values and behaviors of existing objects.
To realize some kind of "concatenative prototyping" in C++, you therefore need to design it, based on the principle of composition, using a set of well defined "concatenable" (i.e. composable) base classes. This can be achieved, using the prototype design pattern together with the entity-component-system architecture.
What's the purpose
You are currently looking for this kind of construct, probably because you used it heavily in a dynamically typed language.
So keep in mind the popular quote (Mark Twain? Maslow?):
If you have a hammer in your hand, every problem looks like a nail
So the question is what you are really looking for and what problem you intend to solve. IMHO, it cannot be excluded that other idioms could be more suitable in the C++ world to achieve the same objective.

Does it make sense to replace Interfaces/Pure Abstract Classes with Concepts?

As I understand it, concepts are quite similar to interfaces: like interfaces, concepts allow you to define a set of methods (a concept, an interface) that the implementation expects and needs to perform its task. Both strengthen the focus on semantic needs.
While Bjarne and many other people seem to see concepts as a way to get rid of uses of enable_if and generally complicated templates, I wonder if it makes sense to use them instead of interfaces/pure abstract classes.
Benefits are obvious:
no runtime cost (v-table)
kind of duck typing, because suitable classes do not have to explicitly implement the interface
they can even express relationships between parameters (which interfaces do not support at all)
Of course a disadvantage is not far away:
no template definition checking for concepts, at least for now
…
I wonder if there are more of these, and whether the idea turns out to make no sense after all.
I know that there are similar questions, but they are not specific about their purpose, nor is this addressed in any answer. I also found other people who had the same idea, but nowhere does anybody really encourage or discourage this, let alone argue it.
If you are using abstract classes for their intended purpose, then there is pretty much no way to replace them with concepts. Abstract base classes are for runtime polymorphism: the ability to, at runtime, have the implementation of an interface be decoupled from the site(s) where that interface gets used. You can use user input or data from a file to determine which derived class instance to create, then pass that instance to some other code that uses a pointer/reference to the base class.
Abstract classes are for defining an interface for runtime polymorphism.
A template is instantiated at compile-time. As such, everything about its interface must be verified at compile-time. You cannot vary which implementation of an interface you use for a template; it's statically written into your program, and the template gets instantiated with exactly and only the types you spell out in your code. That's compile-time polymorphism.
Concepts are for defining an interface for compile-time polymorphism. They don't work at runtime.
If you've been using abstract base classes for compile-time polymorphism, then you've been doing the wrong thing, and you should have stopped well before concepts came out.

Why were concepts (generic programming) conceived when we already had classes and interfaces?

Also on programmers.stackexchange.com:
I understand that STL concepts had to exist, and that it would be silly to call them "classes" or "interfaces" when in fact they were only documented (human) concepts and couldn't be translated into C++ code at the time. But when given the opportunity to extend the language to accommodate concepts, why didn't they simply modify the capabilities of classes and/or introduce interfaces?
Isn't a concept very similar to an interface (a 100% abstract class with no data)? By the looks of it, interfaces only lack support for axioms, but maybe axioms could be introduced into C++'s interfaces (considering a hypothetical adoption of interfaces in C++ to take over concepts), couldn't they? I think even auto concepts could easily be added to such a C++ interface (auto interface LessThanComparable, anyone?).
Isn't a concept_map very similar to the Adapter pattern? If all the methods are inline, the adapter essentially doesn't exist beyond compile time; the compiler simply replaces calls to the interface with the inlined versions, calling the target object directly during runtime.
I've heard of something called Static Object-Oriented Programming, which essentially means effectively reusing the concepts of object-orientation in generic programming, thus permitting usage of most of OOP's power without incurring execution overhead. Why wasn't this idea further considered?
I hope this is clear enough. I can rewrite this if you think I was not; just let me know.
There is a big difference between OOP and Generic Programming: Predestination.
In OOP, when you design a class, you decide which interfaces you think will be useful. And it's done.
In Generic Programming, on the other hand, as long as a class conforms to a given set of requirements (mainly methods, but also inner constants or types), it fits the bill and may be used. The Concepts proposal is about formalizing this, so that detection can occur directly when checking the method signature, rather than when instantiating the method body. It also makes checking template methods easier, since some can be rejected without any instantiation if the concepts do not match.
The advantage of Concepts is that you do not suffer from Predestination: you can pick a class from Library1, pick a method from Library2, and if it fits, you're golden (if it does not, you may be able to use a concept map). In OO, you are required to write a full-fledged Adapter every time.
You are right that both seem similar. The difference is mainly about the time of binding (and the fact that Concepts still use static dispatch instead of dynamic dispatch as with interfaces). Concepts are more open, and thus easier to use.
Classes are a form of named conformance. You indicate that class Foo conforms with interface I by inheriting from I.
Concepts are a form of structural conformance: a class Foo does not need to state up front which concepts it conforms to.
The result is that named conformance reduces the ability to reuse classes in places that were not anticipated up front, even though they would be usable there.
The concepts are in fact not part of C++; they are just concepts! In C++ there is no way to "define a concept". All you have are templates and classes (the STL being all template classes, as the name says: Standard Template Library).
If you mean C++0x and not C++ (in which case I suggest you change the tag), please read here:
http://en.wikipedia.org/wiki/Concepts_(C++)
Some parts I am going to copy-paste for you:
In the pending C++0x revision of the C++ programming language, concepts and the related notion of axioms were a proposed extension to C++'s template system, designed to improve compiler diagnostics and to allow programmers to codify in the program some formal properties of templates that they write. Incorporating these limited formal specifications into the program (in addition to improving code clarity) can guide some compiler optimizations, and can potentially help improve program reliability through the use of formal verification tools to check that the implementation and specification actually match.
In July 2009, the C++0x committee decided to remove concepts from the draft standard, as they are considered "not ready" for C++0x.
The primary motivation of the introduction of concepts is to improve the quality of compiler error messages.
So as you can see, concepts are not there to replace interfaces etc, they are just there to help the compiler optimize better and produce better errors.
While I agree with all the posted answers, they seem to have missed one point, which is performance. Unlike interfaces, concepts are checked at compile time and therefore don't require virtual function calls.

inheritance from leaf classes

A class design guideline found in Sutter & Alexandrescu's coding standards book, among others, is to make base classes abstract, so that one cannot instantiate them. This prevents, among other things, slicing and problems with polymorphic assignment and copying.
However, when using a class library such as Qt, it is common practice to inherit from concrete classes of the library to add behavior. For example, one would create a class MyListBox which inherits from QListBox, adding application-specific behavior for all list boxes in my application.
How can this common practice be reconciled with the advice to have non-instantiable base classes?
Your very first sentence contains the answer you are looking for:
A class design guideline found in Sutter&Alexandrescu's coding standards book
(emphasis mine).
It's just that: a guideline, not a rule carved in stone.
If you have specific technical limitations, for example in the library you are using, you may ignore it when the alternative at that particular moment is far worse (like having to use some pattern that triples the total amount of code, or having to rewrite the library).
The whole point of my answer: All these patterns, guidelines and best practices are means by which you can reach your programming goal. They are not the goal, themselves.
That guideline only applies to base classes that should not be instantiated (usually because doing so would make no sense). This is not the case in your example: just because you inherited from QDialog doesn't mean that you wouldn't want to use plain old QDialogs elsewhere in the code.

Is there a value in preceding the names of pure virtual base classes with the letter 'I', as is common in C#/Java?

I don't see this very often, if at all, in C++. Any reason not to do this? I think it would be easier to identify the implications and intentions of the typename by doing this, as well as the sourcefile. Thoughts?
IMHO, it is nonsense in any modern language/IDE.
This kind of notation is a relic of Microsoft COM coding standards. It was very common among Visual C++ developers, but nowadays even Microsoft discourages this kind of "Hungarian notation" habit.
Using a modern IDE, if I want to know whether a "class" is an "interface", I can look at the icon next to the name of the "class". I no longer need a confusing prefix before class names.
In addition, this coding convention is error-prone. It requires human attention and, probably, someone who checks whether an "I" was forgotten somewhere.
Nowadays it's more common to have an application/company-specific coding style than to follow some general rule about what is right or wrong(*); i.e. if your team finds it useful to prefix with an I, then why not? What is more important (as with all coding styles) is to be consistent.
(*) Speaking of C++/C, not of other languages, but it could apply there as well.
In Symbian C++ programming, pure virtual classes begin with 'M' (for Mixin) by convention.
What's the real significance of a class having pure virtual functions? It means it can't be instantiated directly and that derived classes are forced to implement some features. But is that really significantly different from deriving from a base with virtual but no pure functions, where you might want to override a default implementation anyway? During the evolution of code, a pure virtual function might be replaced with a default implementation, but that doesn't affect or undermine the usage or design of existing clients. So this distinction isn't very useful as far as I can see. But there is a cost...
To maintain the I-prefix convention as you've suggested, a decision to provide default implementations of the virtual functions would require a name change and edits to all the client code. If this is not done, then the I prefix becomes actively misleading. So you'd need to find significant advantage in the convention before you'd adopt it, and its lack of common use suggests that no such advantage is perceived.
Based on the issue above, you might reach for a convention where I only indicated a base class with some virtual members, irrespective of whether any are pure. But then, whenever a class acquired its first virtual function during the evolution of the system, you'd still need to rename the class and correct client usage.
In C++, most people have found that these kinds of distinctions don't add enough value to warrant their maintenance. One priority is to retain as much freedom as possible to vary code without affecting clients, though this can conflict with other priorities, e.g. the pImpl idiom versus inline performance.