I was just wondering, since you can only pass random access iterators to std::sort anyway, why not enforce that restriction by defining it only for random access iterators in the first place?
#include <iterator>
#include <type_traits>

template <typename ForwardIterator>
typename std::enable_if<
    std::is_same<
        typename std::iterator_traits<ForwardIterator>::iterator_category,
        std::random_access_iterator_tag>::value,
    void>::type
sort(ForwardIterator begin, ForwardIterator end)
{
    // ...
}
I find a single line error message a lot easier to read than pages and pages of error messages resulting from type errors far down in the implementation.
You could do the same with other algorithms. The standard C++ core language has always been expressive enough for that task, right? So, any particular reason why this was not done?
The core language has always been expressive enough to handle such checks, but when the first standard was being prepared (around 1996/1997), the tricks you can play with SFINAE (which enable_if is based on) were not yet known, and compiler support for advanced template wizardry was limited.
So, the reason why the standard did not mandate it was because the needed techniques were not invented yet.
The reason compiler/library writers did not add it after the fact is probably just plain economics: not enough people asked for the feature, and when people did start asking for better diagnostics, hopes were pinned on the concepts proposal to take care of it. Unfortunately, that proved to be a bit too hard to finalise in time.
My guess is that SFINAE was invented (or discovered) after the standard library implementations had reached a certain maturity. After that, changes to the core of the library had to be very well justified to avoid introducing regressions, and I guess that mere cosmetics are somewhat hard to justify.
That said, GCC, for example, already has a lot of diagnostics for template-related errors, e.g. macros that perform a kind of concept checking. For instance, GCC's libstdc++ contains the following:
// concept requirements
__glibcxx_function_requires(_Mutable_RandomAccessIteratorConcept<
_RandomAccessIterator>)
__glibcxx_function_requires(_LessThanComparableConcept<_ValueType>)
__glibcxx_requires_valid_range(__first, __last);
Actually, when there is only one overload of an algorithm, you will nearly always get better diagnostics by causing a compilation error inside the function, using something like Boost.ConceptCheck or __glibcxx_function_requires. When SFINAE (which is what enable_if uses) leaves you with an empty overload set, most compilers simply tell you "there's no matching function", which tends not to be very helpful.
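For illustration, a minimal sketch of that "fail inside the function" approach using a plain static_assert (sort_checked is a made-up name; the libstdc++ macros above work in a similar spirit):

#include <iterator>
#include <type_traits>

// The overload always participates in overload resolution; the check then
// fails loudly inside the body with a single readable message.
template <typename Iterator>
void sort_checked(Iterator begin, Iterator end)
{
    static_assert(
        std::is_base_of<
            std::random_access_iterator_tag,
            typename std::iterator_traits<Iterator>::iterator_category>::value,
        "sort_checked requires random access iterators");
    // ... the actual sorting would go here ...
    (void)begin;
    (void)end;
}

Calling sort_checked on, say, std::list iterators then produces that one message instead of pages of errors from deep inside the algorithm.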
One of the nice things about templates in C++ is they can have a sort of static 'duck typing'. I can't speak for this particular case, but in many templates, as long as you keep the interface the same, all the hierarchy nonsense doesn't matter. And that's a good thing.
I found out about concepts while reviewing C++20 features. I see that they add validation of template arguments, but apart from that I don't understand what the real-world use cases of C++20 concepts are.
C++ already has things like std::is_integral and they can perform validation very well.
I'm sure I am missing something about C++20 concepts and what it enables.
SFINAE is an accidentally Turing-complete sublanguage that executes at overload-resolution and template-specialization selection time.
Turns out it is used a lot in template code.
Concepts and requires clauses are an attempt to take that accidentally useful language feature and make it suck less.
The original concepts design had three pieces: (a) describe what a given piece of template code requires in a clean way, (b) provide a way to map other types onto those requirements non-intrusively, and (c) check template code so that any type which satisfies the concept is guaranteed to compile.
All attempts at (a) plus (c) sucked, usually taking forever to compile and/or restricting what you can check with (a). (b) was also dropped to make (a) better; you can write such concept-map machinery manually in many cases, but C++ doesn't provide it for you.
So, now what is it good for?
auto sum( Addable auto... values )
This uses the concept Addable to concisely express the interface of a template. The error messages you get when passing a non-addable type highlight that the argument isn't Addable, and which expression doesn't work.
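For concreteness, one way such a concept might be written (Addable is not a standard concept; this definition is an assumption for illustration):

#include <concepts>

// Hypothetical concept: a type is Addable if adding two values yields
// something convertible back to that type.
template <typename T>
concept Addable = requires(T a, T b) {
    { a + b } -> std::convertible_to<T>;
};

// Abbreviated function template constrained by the concept; a fold
// expression sums the pack.
constexpr auto sum(Addable auto... values)
{
    return (values + ...);
}

static_assert(sum(1, 2, 3) == 6);

Passing a type without a usable operator+ is rejected with a diagnostic that names Addable and the requirement that failed.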
template <class T, class A>
struct vector {
    bool operator==(vector<T, A> const& o) requires EquallyComparable<T>;
};
Here, we state that this vector has an == if and only if T does. Doing this before concepts was an annoying undertaking, and even writing the specification for it into the standard was.
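For comparison, a pre-concepts sketch of the same conditionally available operator== (just one possible SFINAE-based approach, not what any standard library actually does):

#include <utility>

template <class T, class A>
struct vector {
    // Member template so that SFINAE quietly drops operator== from the
    // overload set when T itself has no operator==.
    template <class U = T,
              class = decltype(std::declval<const U&>() == std::declval<const U&>())>
    bool operator==(vector const& o) const
    {
        (void)o;   // element-wise comparison elided for brevity
        return true;
    }
};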
This is the Turing tar pit: everything is equivalent, but nothing is easy. All programs could be written with I/O plus a single three-argument instruction (a = a - b; if (a < 0) goto c;), but a richer language makes programs suck less. Concepts takes an esoteric corner of C++, SFINAE, makes it cleaner and simpler (so more people can leverage it), and improves error messages.
I know there is a ContiguousIterator concept in the wording/specification sense, but I wonder whether it can be written using C++20 / Concepts TS syntax.
My problem is that, unlike RandomAccessIterator, ContiguousIterator does not just require that operations like it + 123 compile; it depends on the runtime result of that operation.
No, you cannot, not without a traits class or other helper through which types opt in to being contiguous.
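A minimal sketch of that opt-in idea, assuming C++20 concept syntax (the trait and concept names here are made up):

#include <iterator>
#include <type_traits>

// Contiguity cannot be deduced from the operations alone, so types have to
// declare it themselves, e.g. by specializing a trait.
template <typename It>
struct is_contiguous_iterator : std::false_type {};

// Raw pointers are the obvious opt-in; containers would add specializations
// for their own iterator types.
template <typename T>
struct is_contiguous_iterator<T*> : std::true_type {};

template <typename It>
concept ContiguousIterator =
    std::random_access_iterator<It> && is_contiguous_iterator<It>::value;

static_assert(ContiguousIterator<int*>);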
Your problem is currently unsolvable. The committee is considering what to do about deducing contiguous memory access. The flub is that iterator_category is not a trait (although it resides in iterator_traits); it is an ad-hoc type, and it cannot be subtyped without breaking existing code. (Beginner mistake, eh what?) The committee has recognized the mess. This recent discussion tells all -> How to deduce contiguous memory from iterator
Why is std::pair<A,B> not the same as std::tuple<A,B>? It has always felt strange not to be able to simply substitute one for the other. They are somewhat convertible, but there are limitations.
I know that std::pair<A,B> is required to have the two data members A first and B second, so it can't just be a type alias of std::tuple<A,B>. But my intuition says that we could specialize std::tuple<A,B>, that is, a tuple with exactly two elements, to match the definition of what the standard requires a std::pair to be, and then alias this to std::pair.
I guess this wouldn't be possible, as it is too straightforward not to have been thought of already, yet it wasn't done in g++'s libstdc++, for example (I didn't look at the source code of other libraries). What would the problem with this definition be? Is it "just" that it would break the standard library's binary compatibility?
You've gotta be careful about things like SFINAE and overloading. For example, the code below is currently well-formed but you would make it illegal:
void f(std::pair<int, int>);
void f(std::tuple<int, int>);
Currently, I can disambiguate between pair and tuple through overload resolution, SFINAE, template specialization, etc. These tools would all become incapable of telling them apart if you make them the same thing. This would break existing code.
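For example, a small (hypothetical) trait shows how template specialization currently tells the two apart; merging pair into tuple would make the second static_assert fail:

#include <tuple>
#include <type_traits>
#include <utility>

template <typename T>
struct is_pair : std::false_type {};

template <typename A, typename B>
struct is_pair<std::pair<A, B>> : std::true_type {};

static_assert(is_pair<std::pair<int, int>>::value, "a pair is a pair");
static_assert(!is_pair<std::tuple<int, int>>::value, "a tuple is not a pair");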
There might have been an opportunity to introduce it as part of C++11, but there certainly isn't now.
This is purely historical. std::pair has existed since C++98, whereas tuple came later and was initially not part of the standard.
Backward compatibility is the biggest burden for C++ evolution, preventing some nice things from being done easily!
I've not tried this and don't have the bandwidth right now to do so. You could try making a specialisation of std::tuple derived from a std::pair. Someone please tell me this won't work or is a particularly horrible idea. I suspect you'd run into trouble with accessors.
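For what it's worth, a toy rendering of that idea in a sandbox namespace (specializing std::tuple itself for non-user-defined types isn't permitted, so this is purely illustrative):

#include <utility>

namespace toy {
    template <typename... Ts> struct tuple;   // primary template left undefined here

    // Two-element "tuple" that is, layout-wise, a pair.
    template <typename A, typename B>
    struct tuple<A, B> : std::pair<A, B> {
        using std::pair<A, B>::pair;           // inherit the pair constructors
    };
}

toy::tuple<int, double> t{1, 2.5};             // t.first == 1, t.second == 2.5

Even then, generic code that checks std::tuple_size would not recognise this type as a tuple without further specializations, which hints at the accessor trouble mentioned above.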
From what I understand, standard layout allows three things:
Empty base class optimization
Compatibility with C via certain pointer casts
Use of offsetof
Now, the library includes the is_standard_layout predicate metafunction, but I can't see much use for it in generic code, as the C features I listed above seem to need checking extremely rarely in generic code. The only thing I can think of is using it inside a static_assert, but that only makes code more robust and isn't required.
How is is_standard_layout useful? Are there any things which would be impossible without it, thus requiring it in the standard library?
General response
It is a way of validating assumptions. You wouldn't want to write code that assumes standard layout if that wasn't the case.
C++11 provides a bunch of utilities like this. They are particularly valuable for writing generic code (templates) where you would otherwise have to trust the client code to not make any mistakes.
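For instance, a short example of validating that assumption before relying on it (the Packet struct is hypothetical):

#include <cstddef>
#include <type_traits>

struct Packet {
    int  id;
    char payload[16];
};

// offsetof is only guaranteed for standard-layout types, so guard its use.
static_assert(std::is_standard_layout<Packet>::value,
              "Packet must be standard-layout for offsetof to be reliable");

constexpr std::size_t payload_offset = offsetof(Packet, payload);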
Notes specific to is_standard_layout
It looks to me like the (pseudo code) definition of is_pod would roughly be...
// note: applied recursively to all members
bool is_pod(T) { return is_standard_layout(T) && is_trivial(T); }
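A compilable C++11 rendering of that pseudo code might look like this (just a sketch; my_is_pod is a made-up name, not how the library actually implements it):

#include <type_traits>

template <typename T>
struct my_is_pod
    : std::integral_constant<bool,
          std::is_standard_layout<T>::value && std::is_trivial<T>::value> {};

static_assert(my_is_pod<int>::value, "int is POD");

struct NotPod { virtual ~NotPod() {} };
static_assert(!my_is_pod<NotPod>::value, "a polymorphic type is not POD");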
So, you need to know is_standard_layout in order to implement is_pod. Given that, we might as well expose is_standard_layout as a tool available to library developers. Also of note: if you have a use-case for is_pod, you might want to consider the possibility that is_standard_layout might actually be a better (more accurate) choice in that case, since POD is essentially a subset of standard layout.
I get the feeling that they added every conceivable variant of type evaluation, regardless of any obvious value, just in case someone might encounter a need sometime before the next standard comes out. I doubt if piling on these "extra" type properties adds a significant additional burden to compiler developers.
There is a nice discussion of standard layout here: Why is C++11's POD "standard layout" definition the way it is?
There is also a lot of good detail at cppreference.com: Non-static data members
I find this atrocious:
std::numeric_limits<int>::max()
And really wish I could just write this:
int::max
Yes, there is INT_MAX and friends. But sometimes you are dealing with something like streamsize, which is a synonym for an unspecified built-in, so you don't know whether you should use INT_MAX or LONG_MAX or whatever. Is there a technical limitation that prevents something like int::max from being put into the language? Or is it just that nobody but me is interested in it?
Primitive types are not class types, so they don't have static members, that's it.
If you made them class types, you would be changing the foundations of the language (although, thinking about it, it wouldn't be such a problem for compatibility reasons; it would be more like some headaches for the standards people to figure out exactly what members to add to them).
But more importantly, I think that nobody but you is interested in it :) ; personally I don't find numeric_limits so atrocious (actually, it's quite C++-ish - although many can argue that often what is C++-ish looks atrocious :P ).
All in all, I'd say that this is the usual "every feature starts with minus 100 points" situation; the article talks about C#, but it's even more relevant for C++, which already has tons of language features and subtleties, a complex standard, and many compiler vendors that can put in their vetoes:
One way to do that is through the concept of “minus 100 points”. Every feature starts out in the hole by 100 points, which means that it has to have a significant net positive effect on the overall package for it to make it into the language. Some features are okay features for a language to have, they just aren't quite good enough to make it into the language.
Even if the proposal were carefully prepared by someone else, it would still take time for the standard committee to examine and discuss it, and it would probably be rejected because it would be a duplication of stuff that is already possible without problems.
There are actually multiple issues:
built-in types aren't classes in C++
classes can't be extended with new members in C++
assuming the implementation were required to supply certain "members": which ones? There are lots of other attributes you might want to query for a type, and using traits allows new ones to be added.
That said, if you feel you want shorter notation for this, just create it:
#include <limits>

namespace traits {
    template <typename T> constexpr T max() {
        return std::numeric_limits<T>::max();
    }
}

int m = traits::max<int>();

using namespace traits;
int n = max<int>();
Why don't you use std::numeric_limits<streamsize>::max()? As for why it's a function (max()) instead of a constant (max), I don't know. In my own app I made my own num_traits type that provides the maximum value as a static constant instead of a function (and provides significantly more information than numeric_limits).
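A minimal sketch of that kind of type (num_traits and its members here are assumptions, not the answer's actual implementation, which reportedly carries much more information):

#include <limits>

template <typename T>
struct num_traits {
    static constexpr T   max_value = std::numeric_limits<T>::max();
    static constexpr T   min_value = std::numeric_limits<T>::lowest();
    static constexpr int digits    = std::numeric_limits<T>::digits;
};

// The maximum is now a constant rather than a function call.
static_assert(num_traits<int>::max_value == std::numeric_limits<int>::max(),
              "constant and function agree");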
It would be nice if they had defined some constants and functions on "int" itself, the way C# has int.MaxValue, int.MinValue and int.Parse(string), but that's just not what the C++ committee decided.