Does it make sense to replace Interfaces/Pure Abstract Classes with Concepts? - c++

As I understand it, concepts are quite similar to interfaces: like interfaces, concepts allow you to define a set of methods (an interface, so to speak) that the implementation expects and needs in order to perform its task. Both strengthen the focus on semantic requirements.
While Bjarne and many other people seem to see concepts as a way to get rid of enable_if and generally complicated templates, I wonder whether it makes sense to use them instead of interfaces/pure abstract classes.
The benefits are obvious:
no runtime cost (no v-table dispatch)
a kind of duck typing, because suitable classes do not have to explicitly implement the interface
even relationships between parameters, which interfaces do not support at all (see the sketch below)
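For instance, a minimal sketch of a constraint relating two parameters (all names here are illustrative, not from any real library):

```cpp
#include <concepts>

// Hypothetical sketch: a concept can relate two template parameters,
// which a pure abstract class cannot express.
template <typename F, typename Arg>
concept TransformerOf = requires(F f, Arg a) {
    { f(a) } -> std::convertible_to<Arg>;
};

// Accepted only for pairs (F, Arg) where F maps an Arg back to
// something convertible to Arg.
template <typename F, typename Arg>
    requires TransformerOf<F, Arg>
Arg apply_twice(F f, Arg a) { return f(f(a)); }
```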
Of course a disadvantage is not far away:
no template definition checking for concepts, at least for now
…
I wonder whether there are more drawbacks like this, and whether the idea makes no sense after all.
I know that there are similar questions, but they are not specific about this purpose, nor do their answers address it. I have also found other people who had the same idea, but nowhere does anyone really encourage or discourage it, let alone argue the point.

If you are using abstract classes for their intended purpose, then there is pretty much no way to replace them with concepts. Abstract base classes are for runtime polymorphism: the ability to, at runtime, have the implementation of an interface be decoupled from the site(s) where that interface gets used. You can use user input or data from a file to determine which derived class instance to create, then pass that instance to some other code that uses a pointer/reference to the base class.
Abstract classes are for defining an interface for runtime polymorphism.
A template is instantiated at compile-time. As such, everything about its interface must be verified at compile-time. You cannot vary which implementation of an interface you use for a template; it's statically written into your program, and the template gets instantiated with exactly and only the types you spell out in your code. That's compile-time polymorphism.
Concepts are for defining an interface for compile-time polymorphism. They don't work at runtime.
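A minimal sketch of the contrast (type and function names are illustrative):

```cpp
#include <concepts>
#include <iostream>

// Runtime polymorphism: the concrete type behind the reference can be
// chosen at run time (user input, file contents, ...).
struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

void print_area_dynamic(const Shape& s) {       // dispatched via the v-table
    std::cout << s.area() << '\n';
}

// Compile-time polymorphism: the concrete type is fixed at each call site
// when the template is instantiated.
template <typename T>
concept HasArea = requires(const T& t) {
    { t.area() } -> std::convertible_to<double>;
};

void print_area_static(const HasArea auto& s) { // statically bound, no v-table
    std::cout << s.area() << '\n';
}
```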
If you've been using abstract base classes for compile-time polymorphism, then you've been doing the wrong thing, and you should have stopped well before concepts came out.

Related

For non-header-only libraries, and those that explicitly instantiate templates, does that mean C++20's concepts are useless?

So C++20 introduces a new feature called concepts, which from what I can see is used to constrain the types that can be passed to a template. So for a function, I could require that the argument type must have a member ::inner, or something like that.
To me, this is about making sure whoever uses that function can't just pass whatever they like as the argument. But doesn't explicit instantiation already do the same thing? Say I wrote a function library and didn't write the implementation directly into the header files, but rather in a separate .cpp file, and also explicitly instantiated the templates. Doesn't such an approach defeat the purpose of concepts? If I, the developer, instantiate the functions for certain types, I'm already guaranteeing that those types will work as expected when fed into the function's arguments. And if I didn't instantiate a function for a class, you simply couldn't call it.
In such a case, is there any reason for me to use concepts, except that C++20's concept errors seem clearer than the errors you would get without them?
I'm setting aside the design choice of using templates only to explicitly instantiate everything. Maybe you need that, maybe you don't, but concepts are a valuable tool regardless.
First of all, a well-defined concept provides in-code documentation of what the characteristics of the expected types are. If you instantiate something with int and Duck, it's not going to be clear what an int and a Duck have in common that lets them use the same template. Whereas if they both satisfied, for example, the copyable concept, it becomes apparent what the instantiations have in common and why the generalization was made.
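As a small sketch of that idea, using the standard std::copyable concept (the function name here is made up):

```cpp
#include <concepts>

// The constraint documents what otherwise-unrelated types (an int,
// a Duck, ...) must have in common to share this template.
template <std::copyable T>
T duplicate(T value) {
    return value;   // relies only on what std::copyable guarantees
}
```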
Secondly, your library might need extensions (if it's not dead code, it will need amending of some sort sooner or later). By expressing the type requirements, you communicate not only restrictions but intent as well; this is extremely valuable for code extensibility.
Lastly, it makes your design process clear(er). If you're using templates in the first place, it is good practice to be able to formally verify your type system, predict connections and dead ends, and put some extra thought into what you actually want to generalize over. An excellent example of how concepts benefit this process can be seen with named requirements. The standards committee put a tremendous effort into formalizing the properties of types when defining standard library facilities, so that e.g. an algorithm may be defined on containers of trivially copyable elements. Up until concepts, the burden of verifying and checking those types fell on the developer, since there was no formal way of expressing those requirements; now we're transitioning to concepts, which makes the definition and checking of such properties a formal process backed by the core language.
In support of the second and third points, consider SFINAE techniques vs. concepts. Using templates involves much more than what you expose in an interface, so your library may internally rely on type restrictions to choose the correct compilation path. This process is cleanly defined with concepts, whereas legacy approaches tend to overcrowd your code.
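For illustration, a minimal comparison of the two styles (both functions are made-up examples):

```cpp
#include <concepts>
#include <type_traits>

// Legacy SFINAE: the restriction hides in a default template argument.
template <typename T,
          typename = std::enable_if_t<std::is_integral_v<T>>>
T twice_sfinae(T x) { return x + x; }

// C++20 concept: the same restriction reads as part of the interface.
template <std::integral T>
T twice(T x) { return x + x; }
```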

Runtime vs compile-time polymorphism: better readability vs compile-time error checks, which is more important? [closed]

Many talks on C++ nowadays are about templates and their use for implementing compile-time polymorphism; virtual functions and run-time polymorphism are almost never discussed.
We can use compile-time polymorphism in many situations. And because it gives us compile-time checks instead of the possible run-time errors that come with runtime polymorphism, as well as some (usually insignificant) performance benefit, it looks like the most widely used libraries nowadays prefer compile-time polymorphism over the run-time kind.
However, it seems to me that compile-time polymorphism implemented with C++ templates results in much less self-documenting and readable code than a virtual type hierarchy.
As a real-life example, consider boost::iostreams. It implements a stream as a template that accepts a device class as an argument. The result is that the implementation of any specific piece of functionality is divided among many classes and files in different folders, so investigating such code is much more complex than if the streams formed a class hierarchy with virtual functions, as in Java and the .NET Framework. What is the benefit of compile-time polymorphism here? A file stream is something that reads and writes a file; a stream is something that reads and writes (anything); it is a classic example of a type hierarchy. So why not use a single FileStream class that overrides some protected functions, instead of dividing semantically united functionality into different files and classes?
Another example is the boost::process::child class. It uses a templated constructor to set up standard I/O and other process parameters. It is not well documented, and it is not obvious from the function prototype what arguments, in what format, the template will accept; member functions similar to SetStandardOutput would be much better self-documenting and would result in faster compile times, so what is the benefit of template usage here? Again, I am comparing this implementation to the .NET Framework. For member functions similar to SetStandardOutput, it is enough to read a single header file to understand how to use the class. For the templated constructor of boost::process::child, we have to read many small files instead.
There are a lot of examples like this one. For some reason, well-known open-source libraries almost never use virtual class hierarchies and prefer compile-time polymorphism (primarily template-based), like boost does.
The question: are there any clear guidelines on what to prefer (compile-time or run-time polymorphism) in situations where we could use either?
Generally speaking, in 90% of situations templates and virtual functions are interchangeable.
First of all, we need to clarify what we are talking about. If you "compare" something, the things must be equivalent under some criterion. My understanding of your statement is that you are not comparing virtual functions with templates as such, but within the context of polymorphism!
Your examples are not well chosen in that case, and dynamic_cast is more of a "big hammer" from the toolbox when we are talking about polymorphism.
Your "template example" did not need to use templates at all, as you have simple overloads that work without any templated code!
If we are talking about polymorphism and C++, we have as a first-level distinction runtime polymorphism and compile-time polymorphism. And for both we have standard solutions in C++: for runtime polymorphism we go with virtual functions; for compile-time polymorphism we have CRTP as a typical implementation, not "templates" as a general term!
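A minimal CRTP sketch (names are illustrative):

```cpp
#include <iostream>

// The base class is parameterized on the derived class, so the
// "polymorphic" call is resolved at compile time, without a v-table.
template <typename Derived>
struct Printer {
    void print() const {
        static_cast<const Derived*>(this)->print_impl();  // static dispatch
    }
};

struct Hello : Printer<Hello> {
    void print_impl() const { std::cout << "hello\n"; }
};
```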
Are there any comments or recommendations from the C++ committee or any other authoritative source on when we should prefer the ugly syntax of templates over the much more understandable and compact syntax of virtual functions and inheritance?
The syntax isn't ugly if you are used to using it! If we are talking about implementing things with SFINAE, we do have some hard-to-understand rules around template instantiation, especially the often-misunderstood deduced context.
But in C++20 we get concepts, which can replace SFINAE in most contexts, and that is a great thing, I believe. Writing code with concepts instead of SFINAE makes it more readable, easier to maintain, and a lot easier to extend for new types and "rules".
The standard library is full of templates and has a very limited number of virtual functions. Does this mean that we have to avoid virtual functions as much as possible and always prefer templates, even if their syntax for some specific task is much less compact and understandable?
The question suggests you have misunderstood the C++ features. Templates allow us to write generic code, while virtual functions are the C++ tool for implementing runtime polymorphism. Nothing here is 1:1 comparable.
While reading your example code, I would advise you to think again about your coding style!
If you want to write functions specific to different data types, simply use overloads, as you did in your "template" example, but without the unneeded templates!
If you want to implement generic functions that work for different data types with the same code, use templates, and if some exceptional code is needed for specific data types, use template specialization for the selected code parts (see the sketch after this list).
If you need more selective template code that would require SFINAE, you should start implementing with C++20 concepts instead.
If you want to implement polymorphism, decide between run-time and compile-time polymorphism. As already said, virtual functions are the standard C++ tool for the first, and CRTP is one of the standard solutions for the second.
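A small sketch of the overload/specialization advice (the function name is made up):

```cpp
#include <iostream>

// One generic code path shared by all types...
template <typename T>
void describe(const T& value) {
    std::cout << "value: " << value << '\n';
}

// ...plus a full specialization for the one type that needs
// exceptional handling.
template <>
void describe<bool>(const bool& value) {
    std::cout << (value ? "true" : "false") << '\n';
}
```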
And my personal experience with dynamic_cast is: avoid it! Often it is a first hint that something is broken in your design. That is not a general rule, but a checkpoint for thinking again about the design. In rare cases it is the tool that fits. Also, RTTI is not available for all targets, and it has some overhead. On bare-metal devices/embedded systems you sometimes can't use RTTI, nor exceptions. If your code is intended to be used as a "platform" in your domain and you have the mentioned restrictions, don't use RTTI!
EDIT: Answers from the comments
So, for now, with C++ we can build class hierarchies with run-time polymorphism only.
No! CRTP also builds class hierarchies, but for compile-time polymorphism. The solution is quite different, as you don't have a "common" base class; but since everything is resolved at compile time, there is no technical need for one. You should simply start reading about mixins, maybe here: What are Mixins (as a concept), and about CRTP as one of the implementation methods: CRTP article on Wikipedia.
I don't know how to implement something similar to virtual functions without run-time overhead.
See above: CRTP and mixins implement exactly this kind of polymorphism without runtime overhead!
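A minimal mixin sketch (names are illustrative):

```cpp
// Behavior is layered onto a class by inheriting from a template
// parameter; everything resolves at compile time, so there is no
// runtime dispatch overhead.
template <typename Base>
struct WithCounter : Base {
    int count = 0;
    void tick() { ++count; }
};

struct Widget {};                           // some core class

using CountedWidget = WithCounter<Widget>;  // Widget plus the counting mixin
```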
Templates give some possibility to do that.
Templates are only the base C++ tool. Templates are on the same level as loops in C++. It is much too broad to say "templates" in this context.
So, if we need a class hierarchy, does it mean that we have to use it even if it forces us to use fewer compile-time checks?
As said, a class hierarchy is only part of the solution to the task of implementing polymorphism. Think more in terms of the logical things you want to implement, like polymorphism, serializers, databases or whatever, and the implementation solutions like virtual functions, loops, stacks, classes, etc. "Compile-time checks"? In most cases you don't have to write the "checks" yourself. A simple overload is something like a compile-time if/else that "checks" the data type. So simply use it out of the box; no template nor SFINAE is needed.
Or do we have to use templates to implement some sort of compile-time class hierarchy, even if it makes our syntax much less compact and understandable?
Already mentioned: template code can be readable! std::enable_if is much easier to read than some hand-crafted SFINAE construct, even though both use the same C++ template mechanics. And if you get familiar with C++20 concepts, you will see that there is a good chance of writing more readable template code in the upcoming C++ version.

Implement concatenative inheritance in C++

Is it possible to implement concatenative inheritance, or at least mixins, in C++?
It feels like it is impossible to do in C++, but I cannot prove it.
Thank you.
According to this article:
Concatenative inheritance is the process of combining the properties
of one or more source objects into a new destination object.
Are we speaking of class inheritance?
This is the basic way public inheritance works in C++. Thanks to multiple inheritance, you can even combine several base classes.
There may be some constraints, however (e.g. name conflicts between different sources have to be addressed; depending on the use case you might need virtual functions; and there might be a need to explicitly create a combined constructor).
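A tiny sketch of the class-based combination (types are illustrative):

```cpp
// Combining the behavior of several source classes into one destination
// class via multiple public inheritance. Name conflicts between the
// bases would have to be resolved explicitly.
struct Swimmer { void swim() {} };
struct Flyer   { void fly()  {} };

struct FlyingFish : Swimmer, Flyer {};  // offers both swim() and fly()
```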
Or is inheritance from instantiated objects meant?
If it's really about objects and not classes, the story is different. You cannot clone and combine objects of arbitrary type with each other, since C++ is a strongly typed language.
But first, let's correct the misleading wording. It's not really about concatenative inheritance, since inheritance is for classes. It's rather "concatenative prototyping", since you create new objects by taking over the values and behaviors of existing objects.
To realize some kind of "concatenative prototyping" in C++, you therefore need to design for it, based on the principle of composition, using a set of well-defined "concatenable" (i.e. composable) base classes. This can be achieved using the prototype design pattern together with an entity-component-system architecture.
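A hedged sketch of the prototype side of that idea (the class names are made up):

```cpp
#include <memory>

// Objects are copied through a virtual clone(), preserving the concrete
// type at run time, so existing "prototypes" can be combined into new
// composite entities.
struct Component {
    virtual ~Component() = default;
    virtual std::unique_ptr<Component> clone() const = 0;
};

struct Position : Component {
    double x = 0, y = 0;
    std::unique_ptr<Component> clone() const override {
        return std::make_unique<Position>(*this);  // copy this prototype
    }
};
```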
What's the purpose?
You are probably looking for this kind of construct because you used it heavily in a dynamically typed language.
So keep in mind the popular quote (Mark Twain? Maslow?):
If all you have is a hammer, every problem looks like a nail.
So the question is what you are really looking for and what problem you intend to solve. IMHO, it cannot be excluded that other idioms could be more suitable in the C++ world to achieve the same objective.

Why were concepts (generic programming) conceived when we already had classes and interfaces?

Also on programmers.stackexchange.com:
I understand that STL concepts had to exist, and that it would be silly to call them "classes" or "interfaces" when in fact they were only documented (human) concepts and couldn't be translated into C++ code at the time. But when given the opportunity to extend the language to accommodate concepts, why didn't they simply modify the capabilities of classes and/or introduce interfaces?
Isn't a concept very similar to an interface (a 100% abstract class with no data)? From the look of it, interfaces only lack support for axioms, but maybe axioms could be introduced into C++'s interfaces (considering a hypothetical adoption of interfaces in C++ to take over from concepts), couldn't they? I think even auto concepts could easily be added to such a C++ interface (auto interface LessThanComparable, anyone?).
Isn't a concept_map very similar to the Adapter pattern? If all the methods are inline, the adapter essentially doesn't exist beyond compile time; the compiler simply replaces calls to the interface with the inlined versions, calling the target object directly at runtime.
I've heard of something called Static Object-Oriented Programming, which essentially means reusing the concepts of object orientation in generic programming, thus permitting use of most of OOP's power without incurring execution overhead. Why wasn't this idea considered further?
I hope this is clear enough. I can rewrite this if you think I was not; just let me know.
There is a big difference between OOP and generic programming: predestination.
In OOP, when you design the class, you specify up front the interfaces you think will be useful. And that's it.
In generic programming, on the other hand, as long as the class conforms to a given set of requirements (mainly methods, but also inner constants or types), then it fits the bill and may be used. The concepts proposal is about formalizing this, so that a mismatch can be detected directly when checking the method signature, rather than when instantiating the method body. It also makes checking template methods easier, since some can be rejected without any instantiation if the concepts do not match.
The advantage of concepts is that you do not suffer from predestination: you can pick a class from Library1 and a method from Library2, and if they fit, you're golden (if they do not, you may be able to use a concept map). In OO, you are required to write a full-fledged Adapter every time.
You are right that both seem similar. The difference is mainly about the time of binding (and the fact that concepts still use static dispatch instead of the dynamic dispatch you get with interfaces). Concepts are more open, and thus easier to use.
Classes are a form of named conformance. You indicate that class Foo conforms to interface I by inheriting from I.
Concepts are a form of structural conformance: a class Foo does not need to state up front which concepts it satisfies.
The result is that named conformance reduces the ability to reuse classes in places that were not anticipated up front, even though they would be usable there.
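A small sketch of the contrast (all names are illustrative):

```cpp
// Named conformance: Foo would have to name the interface it implements.
struct IPrintable {
    virtual ~IPrintable() = default;
    virtual void print() const = 0;
};

// Structural conformance: any type with a suitable print() satisfies the
// concept, with no up-front declaration.
template <typename T>
concept Printable = requires(const T& t) { t.print(); };

struct Foo { void print() const {} };  // never mentions Printable...
static_assert(Printable<Foo>);         // ...yet conforms structurally
```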
Concepts are in fact not part of C++; they are just concepts! In C++ there is no way to "define a concept". All you have is templates and classes (the STL being all template classes, as the name says: Standard Template Library).
If you mean C++0x and not C++ (in which case I suggest you change the tag), please read here:
http://en.wikipedia.org/wiki/Concepts_(C++)
Some parts I am going to copy-paste for you:
In the pending C++0x revision of the C++ programming language, concepts and the related notion of axioms were a proposed extension to C++'s template system, designed to improve compiler diagnostics and to allow programmers to codify in the program some formal properties of templates that they write. Incorporating these limited formal specifications into the program (in addition to improving code clarity) can guide some compiler optimizations, and can potentially help improve program reliability through the use of formal verification tools to check that the implementation and specification actually match.
In July 2009, the C++0x committee decided to remove concepts from the draft standard, as they are considered "not ready" for C++0x.
The primary motivation of the introduction of concepts is to improve the quality of compiler error messages.
So as you can see, concepts are not there to replace interfaces etc.; they are just there to help the compiler optimize better and produce better errors.
While I agree with all the posted answers, they seem to have missed one point, which is performance. Unlike interfaces, concepts are checked at compile time and therefore don't require virtual function calls.

Why is boost so heavily templated?

There are many places in boost where I see a templated class and can't help but wonder why the person who wrote it used templates.
For example, the mutex class(es). All the mutex concepts are implemented as templates, where one could simply create a few base classes or abstract classes with an interface that matches the concept.
edit after answers: I thought about the cost of virtual functions, but isn't it sometimes worth giving away a very small amount of performance for better understanding? I mean, sometimes (especially with boost) it's really hard to understand templated code and to decrypt the compiler errors that result from misusing templates.
Templates can be highly optimized at compile time, without the need for virtual functions. A lot of template tricks can be thought of as compile-time polymorphism. Since you know at compile time which behaviors you want, why should you pay for a virtual function call every time you use the class? As a bonus, a lot of templated code can easily be inlined to eliminate even the most basic function-call overhead.
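As a sketch of the idea behind the mutex example (scoped_lock_sketch is a made-up name, not the boost API): because the lock's mutex type is a template parameter, lock()/unlock() are statically bound and can be inlined, whereas a virtual Mutex interface would force an indirect call on every use.

```cpp
// Illustrative sketch only: the mutex type is known at compile time.
template <typename Mutex>
class scoped_lock_sketch {
    Mutex& m_;
public:
    explicit scoped_lock_sketch(Mutex& m) : m_(m) { m_.lock(); }  // inlinable
    ~scoped_lock_sketch() { m_.unlock(); }                        // inlinable
    scoped_lock_sketch(const scoped_lock_sketch&) = delete;
    scoped_lock_sketch& operator=(const scoped_lock_sketch&) = delete;
};
```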
In addition, templates in C++ are extremely powerful and flexible; they have been shown to be a Turing-complete language in their own right. There are some things that are easy to do with templates that would require much more work with runtime polymorphism.
Templates allow you to write a generic version of an algorithm, a generic version of a container. You no longer have to worry about types, and what you produce need no longer be tied to a type. Boost is a collection of libraries that tries to address the needs of a wide variety of people using C++ in their day-to-day lives.
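A tiny sketch of that genericity (the function name is made up):

```cpp
#include <string>

// One definition serves every type that provides operator<,
// with no coupling to any concrete type.
template <typename T>
const T& max_of(const T& a, const T& b) {
    return (a < b) ? b : a;
}

int main() {
    max_of(1, 2);                                // works for int
    max_of(std::string("a"), std::string("b")); // and for std::string
}
```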