What exactly is the Readable concept in Range v3?

I have coded an iterator-like class and for some reason it doesn't pass the Readable concept as defined in Range v3. I don't know why, and I am trying to see exactly how I need to modify the syntax (and semantics) to fulfill the concept.
What are the minimum syntactic requirements for an iterator to be Readable according to Range v3? Can they be written as a set of statements-that-must-compile? (See the example below.)
I have an iterator It with which I can do the basic stuff (what I would call "readable"), yet it doesn't pass the concept check:
#include <range/v3/all.hpp>
...
It i; // ok
typename It::value_type val = *i; // ok
typename It::reference ref = *i; // ok
typename It::value_type val2{ref}; // ok
static_assert( ranges::CommonReference<typename It::reference&&, typename It::value_type&>{} ); // ok
static_assert( ranges::Readable<It>{} ); // error: static assertion failed
What other constructs involving i can I write that will make it obvious that It is not Readable? In other words, what generic code would compile if and only if the iterator is Range-v3-Readable?
In many places it is said that "if it behaves like a pointer then it is Readable", but I cannot find what is wrong with my iterator. I would be able to understand what is wrong if I could see what code needs to compile.
I am trying to debug why my iterator fails to fulfill the concept (and is therefore rejected by Range v3 functions). Note that if It were std::vector<bool>::iterator it would all work.
The Readable concept code in Range v3 https://ericniebler.github.io/range-v3/structranges_1_1v3_1_1concepts_1_1_readable.html is similar to https://en.cppreference.com/w/cpp/experimental/ranges/iterator/Readable
(I am using version 0.5.0 [Fedora30])
template < class In >
concept bool Readable =
    requires {
        typename ranges::value_type_t<In>;
        typename ranges::reference_t<In>;
        typename ranges::rvalue_reference_t<In>;
    } &&
    CommonReference<ranges::reference_t<In>&&, ranges::value_type_t<In>&> &&
    CommonReference<ranges::reference_t<In>&&, ranges::rvalue_reference_t<In>&&> &&
    CommonReference<ranges::rvalue_reference_t<In>&&, const ranges::value_type_t<In>&>;
So it looks like value_type_t<It> must be deducible (it is extracted from It::value_type), as must reference_t<It> (the type of the expression *it).
I don't know how rvalue_reference_t is deduced, or what CommonReference means in terms of constraints on the syntax.
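(For reference, a rough sketch of how these associated types are typically obtained; this is my paraphrase, not the library's exact code:)
// value_type_t<It>       : from It::value_type (or a value_type/readable_traits
//                          specialization, or It::element_type)
// reference_t<It>        : decltype(*declval<It&>())
// rvalue_reference_t<It> : decltype(ranges::iter_move(declval<It&>()))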

For an iterator to model Readable in range-v3, it needs:
to be dereferenceable via a meaningful operator *
to either have a specialization of readable_traits or have a public member type value_type or element_type defining the associated value type.
The simplest Readable user-defined type I can think of is:
#include <range/v3/all.hpp>
template <typename T>
class It
{
public:
    using value_type = T;

private:
    T x;

public:
    T operator *() const { return x; }
};
static_assert( ranges::Readable<It<int>>{} );
which compiles cleanly with range-v3 version 0.3.5 (https://godbolt.org/z/JMkODj).
In version 1 of range-v3 Readable is no longer a type, but a constexpr value convertible to bool, so that the correct assertion in this case would be:
static_assert( ranges::Readable<It<int>> );
value_type and the value returned by operator * need not be the same. However, they must in a sense be inter-convertible for the algorithms to work. That's where the CommonReference concept comes into play. This concept basically requires that two types share a "common reference type" to which both can be converted. It essentially delegates to a common_reference type trait, whose behaviour is described in great detail at cppreference (note, however, that what is described there is for the Ranges TS, which may not be exactly the same as what the range-v3 library does).
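A quick way to get a feel for it is C++20's std::common_reference_t, which is the standardized version of the same machinery (a sketch, not range-v3's own code):
#include <type_traits>

static_assert(std::is_same_v<std::common_reference_t<int&, const int&>, const int&>);
static_assert(std::is_same_v<std::common_reference_t<int&, int>, int>);
// For Readable, reference_t<It> (the type of *it) and value_type_t<It>&
// must share such a common reference type; for It = int* they are both int&.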
In practice, the Readable concept can be satisfied by defining a conversion operator in the type returned by operator *. Here's a simple example that passes the test (https://godbolt.org/z/5KkNpv):
#include <range/v3/all.hpp>
template <typename T>
class It
{
private:
    class Proxy
    {
    private:
        T &x;
    public:
        Proxy(T &x_) : x(x_) {}
        operator T &() const { return x; }
    };

public:
    using value_type = T;

private:
    T x;

public:
    Proxy operator *() { return {x}; }
};
static_assert( ranges::Readable<It<bool>>{} );

Related

What C++20 change to reverse_iterator is breaking this code?

The following code compiles in C++11, C++14, and C++17, but does not compile in C++20. What change to the standard broke this code?
#include <vector>
#include <utility>
template<typename T>
struct bar
{
    typename T::reverse_iterator x;
};

struct foo
{
    bar<std::vector<std::pair<foo, foo>*>> x;
};

int main()
{
    foo f;
}
The error is quite long, but can be summarized as:
template argument must be a complete class
This was always undefined. [res.on.functions]/2.5 says:
In particular, the effects are undefined in the following cases:
[...]
If an incomplete type ([basic.types]) is used as a template argument when instantiating a template component or evaluating a concept, unless specifically allowed for that component.
std::pair does not (and cannot) support incomplete types. You were just relying on order of instantiation to kind of get around that. Something changed in the library that slightly changed the evaluation order, leading to the error. But undefined behavior is undefined - it happened to work before and it happens to not work now.
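To see what "does not support incomplete types" means in isolation (my sketch; widget is a hypothetical type):
#include <utility>

struct widget;                             // hypothetical type, never defined here
// std::pair<widget, widget> p{};          // forces instantiation -> hard error / UB
std::pair<widget, widget>* ptr = nullptr;  // typically compiles: nothing forces
                                           // instantiation yet, which is exactly the
                                           // "relying on instantiation order" trap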
As to why it is specifically C++20 that causes this to fail: in C++20, iterators changed to have this new iterator_concept idea. In order to instantiate that, reverse_iterator needs to determine what the concept should be. In libstdc++ that looks like this:
#if __cplusplus > 201703L && __cpp_lib_concepts
      using iterator_concept
        = conditional_t<random_access_iterator<_Iterator>,
                        random_access_iterator_tag,
                        bidirectional_iterator_tag>;
      using iterator_category
        = __detail::__clamp_iter_cat<typename __traits_type::iterator_category,
                                     random_access_iterator_tag>;
#endif
Now, in the process of checking random_access_iterator, the root of the iterator concept hierarchy is the wonderfully named input_or_output_iterator, specified in [iterator.concept.iterator]:
template<class I>
concept input_or_output_iterator =
    requires(I i) {
        { *i } -> can-reference;
    } &&
    weakly_incrementable<I>;
So, we have to do *i on our iterator type, which in this case is __gnu_cxx::__normal_iterator<std::pair<foo, foo>**, std::vector<std::pair<foo, foo>*>>. Now, *i triggers ADL - because of course it does. And ADL requires instantiation of all the associated types - because those associated types might have injected friends that could be candidates!
This, in turn, requires instantiating pair<foo, foo> - because we have to check. And that ultimately fails in this specific case, because instantiating a type requires instantiating all of the type's special member functions, and the way libstdc++ implements conditional assignment for std::pair uses Eric Fiselier's trick:
_GLIBCXX20_CONSTEXPR pair&
operator=(typename conditional<
            __and_<is_copy_assignable<_T1>,
                   is_copy_assignable<_T2>>::value,
            const pair&, const __nonesuch&>::type __p)
{
    first = __p.first;
    second = __p.second;
    return *this;
}
And is_copy_assignable requires complete types and we don't have one.
But really even if pair used concepts to check in this case, that would still involve instantiating the same type traits, so we'd ultimately end up in the same position.
Moral of the story is, undefined behavior is undefined.
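A rearranged sketch (mine, not from the answer) that illustrates the point: the very same reverse_iterator member is fine once the element type is complete, so the failure really is about completeness and instantiation order:
#include <vector>
#include <utility>

template<typename T>
struct bar
{
    typename T::reverse_iterator x;
};

struct foo
{
    int payload;   // hypothetical member; foo no longer contains bar
};

// std::pair<foo, foo> is only instantiated here, when foo is complete,
// so the C++20 iterator-concept checks can safely complete it.
bar<std::vector<std::pair<foo, foo>*>> b;

int main() {}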

Something like `declval` for concepts

When you are working with templates and with decltype you often need an instance of a certain type even though you do not have one at the moment. In this case, std::declval<T>() is incredibly useful. It creates an imaginary instance of the type T.
Is there something similar for concepts? I.e., a facility which would create an imaginary type for a concept.
Let me give you an example (a bit contrived, but it should serve the purpose):
Let's define a concept Incrementable
template <typename T>
concept Incrementable = requires(T t){
    { ++t } -> T;
};
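(Aside: the snippet above uses the Concepts TS syntax; in final C++20 the return-type-requirement must name a concept, so the rough equivalent would be something like:)
#include <concepts>

template <typename T>
concept Incrementable = requires(T t){
    { ++t } -> std::convertible_to<T>;   // or std::same_as<T&>, depending on the intent
};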
Now I would like to have a concept which tests whether an object has an operator() that can accept an Incrementable argument. In my imaginary syntax I would write something like this:
template <typename F, typename T = declval<Incrementable>>
concept OperatesOnIncrementable = requires(F f, T t){
    { f(t) } -> T;
};
Here the declval in typename T = declval<Incrementable> would create an imaginary type T which is not really a concrete type but which, for all intents and purposes, behaves like a type that satisfies Incrementable.
Is there a mechanism in the upcoming standard to allow for this? I would find this incredibly useful.
Edit: Some time ago I asked a similar question about whether this can be done with boost::hana.
Edit: Why is this useful? For example, if you want to write a function which composes two functions:
template <typename F, typename G>
auto compose(F f, G g) {
    return [f, g](Incrementable auto x) { return f(g(x)); };
}
I want to get an error when I try to compose two functions which cannot be composed. Without constraining the types F and G, I only get an error when I try to call the composed function.
There is no such mechanism.
Nor does this appear to be implementable/useful, since there is an unbounded number of Incrementable types, and F could reject a subset selected using an arbitrarily complex metaprogram. Thus, even if you could magically synthesize some unique type, you still have no guarantee that F operates on all Incrementable types.
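A tiny illustration of that last point (my example, not the answer's): a callable constrained on Incrementable types can still reject some particular Incrementable type.
#include <concepts>

struct F {
    template <typename T>
        requires (!std::same_as<T, long>)   // arbitrary extra restriction
    void operator()(T t) { ++t; }
};
// F{}(1) compiles, F{}(1L) does not, although int and long both increment,
// so no single synthesized type could certify that F "operates on Incrementable".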
Is there something similar for concepts? I.e., a facility which would create an imaginary type for a concept.
The term for this is an archetype. Coming up with archetypes would be a very valuable feature, and is critical for doing things like definition checking. From T.C.'s answer:
Thus, even if you could magically synthesize some unique type, you still have no guarantee that F operates on all Incrementable types.
The way to do that would be to synthesize an archetype that meets the criteria of the concept as minimally as possible. As he says, there is no archetype generation in C++20, and it seems impossible given the current incarnation of concepts.
Coming up with the correct archetype is incredibly difficult. For example, for
template <typename T>
concept Incrementable = requires(T t){
    { ++t } -> T;
};
It is tempting to write:
struct Incrementable_archetype {
    Incrementable_archetype operator++();
};
But that is not "as minimal as possible" - this type is default constructible and copyable (not requirements that Incrementable imposes), and its operator++ returns exactly T, which is also not the requirement. So a really hardcore archetype would look like:
struct X {
    X() = delete;
    X(X const&) = delete;
    X& operator=(X const&) = delete;
    template <typename T> void operator,(T&&) = delete;

    struct Y {
        operator X() &&;
    };
    Y operator++();
};
If your function works for X, then it probably works for all Incrementable types. If your function doesn't work for X, then you probably need either to change the implementation so that it does, or to change the constraints to allow more functionality.
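A sketch of how such an archetype gets used in practice (my illustration, reusing the X above): instantiate the template you want to check against X and see whether it still compiles.
template <typename T>   // intended to require only Incrementable
void bump(T& t)
{
    ++t;                // fine: the concept promises this
    // T copy = t;      // would fail for X: copyability is an undeclared requirement
}

// Force the definition to be checked against the archetype:
template void bump<X>(X&);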
For more, check out the Boost Concept Check Library, which is quite old but whose documentation is very interesting to read.

Is there an elegant way to write a type alias in a polymorphic lambda

Consider the following code:
#include <initializer_list>
#include <vector>
auto cref_lambda = [] (const auto& il){
    using T = typename decltype(il)::value_type;
};

auto cval_lambda = [] (const auto il){
    using T = typename decltype(il)::value_type;
};

int main(){
    std::initializer_list<int> il;
    cref_lambda(il);
    cval_lambda(il);
}
cref_lambda does not compile because we are trying to use :: on a reference type.
I am aware of the workarounds (using std::remove_reference_t, or just using decltype(*il.begin())), but I wonder if there is a better idiom to use here.
The way to resolve your problem at hand is to wrap the decltype in std::decay_t. From cppreference:
Applies lvalue-to-rvalue, array-to-pointer, and function-to-pointer implicit conversions to the type T, removes cv-qualifiers, and defines the resulting type as the member typedef type.
Most importantly, it acts as the identity for a type to which none of the above conversions or qualifier removals apply. Hence it is safe to write
using T = typename std::decay_t<decltype(il)>::value_type;
to get the unqualified value_type, independent of the function signature.
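For completeness, here is the original cref_lambda with that fix applied (a minimal, self-contained sketch):
#include <initializer_list>
#include <type_traits>

auto cref_lambda = [] (const auto& il){
    // decay_t strips the reference (and cv-qualifiers) so ::value_type is reachable
    using T = typename std::decay_t<decltype(il)>::value_type;
    T first = *il.begin();
    (void)first;
};

int main(){
    std::initializer_list<int> il{1, 2, 3};
    cref_lambda(il);
}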
Now to the other part of your question: how to write this shorter. Well, in the case of your example one could say that, since your lambda does not capture anything, it could also be replaced by a free function template.
template < typename T >
void cref(std::initializer_list<T> const &il) {
    /* use T and il */
}
or if it should work for any container
template < typename U >
void cref(U const &il) {
    using T = typename U::value_type;
    /* use T and il */
}
The clear advantage of the first case is that you get access to T = value_type "for free". Another advantage (in my opinion) is that you will get a much clearer compiler error should you accidentally call this function with something that is not a std::initializer_list<T>. You could remedy this shortcoming of the lambda by adding a static_assert, but that would further strain the "shortness" you initially wanted.
Lastly, if you really like the lambda style of writing functions or you have to capture something and cannot use the free function approach, you might want to consider using the GCC extension for template lambdas:
auto cref_lambda = [] <typename U> (U const &il){
    using T = typename U::value_type;
};
That's probably the shortest you can get. (Since C++20, an explicit template parameter list on a lambda is standard rather than a GCC extension.)

Comparing two iterators of a different type in a template function

No C++11 or Boost :(
I have a function with the following signature.
template<class INPUT_ITR, class OUTPUT_ITR>
void DoWork(const INPUT_ITR in_it, const INPUT_ITR in_it_end, OUTPUT_ITR out_it, OUTPUT_ITR out_it_end, CONTEXT_DATA)
Normally some complex processing takes place between the input and output, but sometimes a no-op is required and the data is simply copied. The function supports in-place operations if the input and output data types are the same. So I have this code:
if (NoOp)
{
    if (in_it != out_it)
    {
        copy(in_it, in_it_end, out_it);
    }
}
If an in-place operation has been requested (the iterator check), there is no need to copy any data.
This has worked fine until I call the function with iterators to different data types (int32 to int64 for example). Then it complains about the iterator check because they are incompatible types.
error C2679: binary '!=' : no operator found which takes a right-hand operand of type 'std::_Vector_iterator<std::_Vector_val<std::_Simple_types<unsigned __int64>>>
This has left me a bit stumped. Is there an easy way to perform this check if the data types are the same, but simply perform the copy if they are different types?
Thanks
You can extract the test into a pair of templates; one for matching types, one for non-matching types.
template <class T1, class T2>
bool same(T1 const &, T2 const &) {return false;}
template <class T>
bool same(T const & a, T const & b) {return a == b;}
Beware that this can give confusing results when used with types that you'd expect to be comparable. In C++11 (or with Boost, or a lot of tedious mucking around with templates) you could extend this to compare different types when possible; but that's beyond what you need here.
Also, note that you're relying on formally undefined behaviour, since iterators over different underlying sequences aren't required to be comparable. There is no way to tell from the iterators themselves whether this is the case.
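For reference, this is how it would slot into the original check (a sketch using the question's NoOp and copy as-is):
if (NoOp)
{
    if (!same(in_it, out_it))   // always false when the iterator types differ
    {
        copy(in_it, in_it_end, out_it);
    }
}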
I came up with a workaround. Any better suggestions welcome.
Overload the function to provide an in-place version. It's up to the user to request in-place explicitly now (requesting an in-place operation through the old function will perform the redundant copy in the no-op case).
template<class ITR>
void DoWork(const ITR it, const ITR it_end, CONTEXT_DATA)
{
    if (! NoOp)
    {
        DoWork(it, it_end, it, it_end, sSourceSpec, sDestSpec);
    }
}
You could use std::iterator_traits and a custom is_same type trait, since you don't have C++11:
template<class T, class U>
struct is_same
{
    static const bool value = false;
};

template<class T>
struct is_same<T, T>
{
    static const bool value = true;
};

if(!is_same<typename std::iterator_traits<INPUT_ITR>::value_type,
            typename std::iterator_traits<OUTPUT_ITR>::value_type>::value)
{
    copy(...);
}

Why do I need to use typedef typename in g++ but not VS?

It had been a while since GCC caught me with this one, but it just happened today. I've never understood why GCC requires typedef typename within templates, while VS (and I guess ICC) doesn't. Is the typedef typename thing a "bug", an overly strict reading of the standard, or something that is left up to the compiler writers?
For those who don't know what I mean here is a sample:
template<typename KEY, typename VALUE>
bool find(const std::map<KEY,VALUE>& container, const KEY& key)
{
    std::map<KEY,VALUE>::const_iterator iter = container.find(key);
    return iter != container.end();
}
The above code compiles in VS (and probably in ICC), but fails in GCC because it wants it like this:
template<typename KEY, typename VALUE>
bool find(const std::map<KEY,VALUE>& container, const KEY& key)
{
    typedef typename std::map<KEY,VALUE>::const_iterator iterator; // typedef typename
    iterator iter = container.find(key);
    return iter != container.end();
}
Note: This is not an actual function I'm using, but just something silly that demonstrates the problem.
The typename is required by the standard. Template compilation requires a two-step verification. During the first pass the compiler must verify the template syntax without actually supplying the type substitutions. In this step, the dependent name std::map<KEY,VALUE>::const_iterator is assumed to be a value. If it actually denotes a type, the typename keyword is required.
Why is this necessary? Before substituting the actual KEY and VALUE types, the compiler cannot guarantee that the template is not specialized and that the specialization does not redefine the const_iterator name as something else.
You can check it with this code:
class X {};

template <typename T>
struct Test
{
    typedef T value;
};

template <>
struct Test<X>
{
    static int value;
};
int Test<X>::value = 0;

template <typename T>
void f( T const & )
{
    Test<T>::value; // during the first pass, Test<T>::value is interpreted as a value
}

int main()
{
    f( 5 );       // compilation error
    X x; f( x );  // compiles fine: Test<X>::value is an integer
}
The first call fails with an error indicating that during the first template compilation step of f(), Test<T>::value was interpreted as a value, but instantiation of the Test<> template with the type int yields a type.
Well, GCC doesn't actually require the typedef -- typename is sufficient. This works:
#include <iostream>
#include <map>

template<typename KEY, typename VALUE>
bool find(const std::map<KEY,VALUE>& container, const KEY& key)
{
    typename std::map<KEY,VALUE>::const_iterator iter = container.find(key);
    return iter != container.end();
}

int main() {
    std::map<int, int> m;
    m[5] = 10;
    std::cout << find(m, 5) << std::endl;
    std::cout << find(m, 6) << std::endl;
    return 0;
}
This is an example of a context sensitive parsing problem. What the line in question means is not apparent from the syntax in this function only -- you need to know whether std::map<KEY,VALUE>::const_iterator is a type or not.
Now, I can't seem to think of anything that ...::const_iterator might be, other than a type, that would not also be an error. So I guess the compiler could figure out that it has to be a type, but it might be difficult for the poor compiler (writers).
The standard requires the use of typename here; according to litb, this is section 14.6/3 of the standard.
It looks like VS/ICC supplies the typename keyword wherever it thinks it is required. Note this is a Bad Thing (TM) -- letting the compiler decide what you want. It further complicates the issue by instilling the bad habit of skipping typename where it is required, and it is a portability nightmare. This is definitely not the standard behavior. Try it in strict standard mode or with Comeau.
This is a bug in the Microsoft C++ compiler - in your example, std::map::iterator might not be a type (you could have specialised std::map on KEY,VALUE so that std::map::iterator was a variable for example).
GCC forces you to write correct code (even though what you meant was obvious), whereas the Microsoft compiler correctly guesses what you meant (even though the code you wrote was incorrect).
It should be noted that the value/type kinding issue is not the fundamental problem. The primary issue is parsing. Consider
template<class T>
void f() { (T::x)(1); }
There is no way to tell whether this is a cast or a function call unless the typename keyword is mandatory. Since it is mandatory, the above code contains a function call (T::x without typename must name a value). In general the choice cannot be delayed without forgoing parsing altogether; just consider the fragment
(a)(b)(c)
In case you don't remember, a cast has higher precedence than a function call in C, which is one reason Bjarne wanted function-style casts. It is therefore not possible to tell whether the above means
(a)(b) (c) // a is a typename
or
(a) (b)(c) // a is not a typename , b is
or
(a)(b) (c) // neither a nor b is a typename
where I inserted space to indicate grouping.
Note also "templatename" keyword is required for the same reason as "typename", you can't parse things without knowing their kind in C/C++.