Why are iterator items which are references not cast to a trait object reference?

I'm trying to define a function that should receive an iterator where each item is a reference to a trait object. For example:
use std::fmt::Display;
fn show_items<'a>(items: impl Iterator<Item = &'a Display>) {
    items.for_each(|item| println!("{}", item));
}
When I try to call it on an iterator where each item is a reference to a type implementing Display:
let items: Vec<u32> = (1..10).into_iter().collect();
show_items(items.iter());
I get an error:
error[E0271]: type mismatch resolving `<std::slice::Iter<'_, u32> as std::iter::Iterator>::Item == &dyn std::fmt::Display`
--> src/lib.rs:9:5
|
9 | show_items(items.iter());
| ^^^^^^^^^^ expected u32, found trait std::fmt::Display
|
= note: expected type `&u32`
found type `&dyn std::fmt::Display`
note: required by `show_items`
--> src/lib.rs:3:1
|
3 | fn show_items<'a>(items: impl Iterator<Item = &'a Display>) {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Why is &u32 not considered as &dyn std::fmt::Display?
An explicit cast works fine:
show_items(items.iter().map(|item| item as &Display));
It also works fine for a single item:
fn show_item(item: &Display) {
    println!("{}", item);
}
let item: u32 = 1;
show_item(&item);

The implicit conversion from a type T to dyn Trait for a Trait implemented by T is a so-called unsized coercion, a special kind of coercion. While Rust is somewhat reluctant with implicit type conversions, coercions do happen implicitly at coercion sites, but not in other places.
Function call arguments are coercion sites. This explains why your show_item() function works as desired.
All coercions can also be performed explicitly using the as operator. For this reason, the version using map() works fine.
Your definition of show_items(),
fn show_items<'a>(items: impl Iterator<Item = &'a Display>)
on the other hand is a completely different story. The impl syntax used here is a shorthand for
fn show_items<'a, I>(items: I)
where
    I: Iterator<Item = &'a dyn Display>,
The function is generic over the iterator type, and the compiler verifies that the type you actually pass in satisfies the trait bound Iterator<Item = &'a dyn Display>. The type std::slice::Iter<'_, u32> from your example code simply does not, hence the error. There is no coercion that converts an argument to a different type to make it satisfy some trait bound required by a generic function. It is also entirely unclear what type std::slice::Iter<'_, u32> should be converted to in order to turn it into an iterator over &dyn Display.
Note that your version of the function definition is unnecessarily restrictive by requiring an iterator over trait objects. It would be far more natural and more performant to simply require that the iterator items implement Display instead:
fn show_items<I>(items: I)
where
    I: IntoIterator,
    I::Item: Display,
{
    items.into_iter().for_each(|item| println!("{}", item));
}
(I also changed Iterator to IntoIterator, since this is more general and more convenient.)

Related

Doesn't constraining the "auto" in C++ defeat the purpose of it?

In C++20, we are now able to constrain the auto keyword to only be of a specific type. So if I had some code that looked like the following without any constraints:
auto something() {
    return 1;
}

int main() {
    const auto x = something();
    return x;
}
The variable x here is deduced to be an int. However, with the introduction of C++20, we can now constrain the auto to be a certain type like this:
std::integral auto something() {
    return 0;
}

int main() {
    const auto x = something();
    return x;
}
Doesn't this defeat the purpose of auto here? If I really need a std::integral datatype, couldn't I just omit the auto completely? Am I misunderstanding the use of auto completely?
A constraint on the deduced auto type doesn't mean it needs to be a specific type, it means it needs to be one of a set of types that satisfy the constraint. Note that a constraint and a type are not the same thing, and they're not interchangeable.
e.g. a concept like std::integral constrains the deduced type to be an integral type, such as int or long, but not float, or std::string.
If I really need a std::integral datatype, couldn't I just omit the auto completely?
In principle, I suppose you could, but this would at the minimum lead to parsing difficulties. e.g. in a declaration like
foo f = // ...
is foo a type, or a constraint on the type?
Whereas in the current syntax, we have
foo auto f = // ...
and there's no doubt that foo is a constraint on the type of f.
If I really need a std::integral datatype, couldn't I just omit the auto completely?
No, because std::integral is not a type, it's a concept, a constraint on types (or if you will, a set of types rather than a single type).
Doesn't this defeat the purpose of auto here?
The original purpose of auto in C++11 is telling the compiler: Whatever type you deduce.*
With C++20, auto has an expanded use case - together with a concept, a constraint over types. auto still tells the compiler: Whatever type you deduce - but the deduction must also respect the constraint.
* - ignoring issues like constness, l/rvalue reference etc.
A concept often just moves the error earlier in the compilation and makes code a bit more readable (since the concept name hints to the reader what you require from a type).
Rephrased:
It is rare you will ever use an auto variable in a way that it will work on every type.
For example:
auto fn(auto x) {
    return x++;
}
will not work if you do:
fn(std::string("hello"));
because you cannot increment std::string; the error is something like:
error: cannot increment value of type 'std::basic_string<char>'
    return x++;
If you change the function to:
auto fn(std::integral auto x) {
    return x++;
}
You will get an error like:
note: candidate template ignored: constraints not satisfied
[with x:auto = std::basic_string]
auto fn(std::integral auto x) {
For a small example like this, it does not matter much, but in real code fn would often call fn2, which calls fn3..., and you would get the error deep in the std/boost/... implementation file.
So in this way concepts move the error to the site of the first function call.

Implicit type in lambda capture

I'm starting out with C++ programming, and am curious why this is legal:
auto myFun = [n=0]() mutable {return n++;};
I would have thought this wouldn't work, as C++ is a strongly typed language, but it seems that the compiler infers the integer type?
Lambdas were introduced with C++11, and there were no default initializer values in C++11:
auto myFun = [n]() { /* body */ };
That's all you had in C++11. Whatever the type of n was, that's what you captured, type and value.
Default initialization values were introduced with C++14. I suppose that it would've been possible to change the syntax so that the initialized captured variables used a complete, full-fledged declaration, something like:
auto myFun = [int n=0]() mutable {return n++;};
That might've been possible, but this wasn't really necessary. Even though C++14's default capture values do not explicitly state their types, their types are inferred from their initialization expressions just as strongly as if they were explicitly declared. And the resulting change in syntax is minimal. With:
auto myFun = [n=0]() mutable {return n++;};
the type of n is int, just as "strong" as if it were explicitly declared. It is not a char, and it is not a short. It is an int. End of story.
Also, keep in mind that with:
template<typename Arg> void function(Arg arg)
when this is invoked the type of Arg gets deduced, and it becomes a bona fide, strong type, too. So this is really no different from template parameters: in the end, when instantiated, their types are just as strong as they are in the rest of C++.
From the reference on lambda captures:
A capture with an initializer acts as if it declares and explicitly captures a variable declared with type auto, ...
This means the [n = 0] is basically treated as if it's
auto n = 0;
The placeholder type is deduced by the compiler as int, and the same type inference happens in the lambda capture.
This convenient syntax of not needing to say auto in the initializer of a lambda capture is just that, a convenience. This syntax doesn't result in any changes to the type safety imposed by the language.

Cannot initialize std::variant with various lambda expressions

I'm playing with std::variant, lambdas and std::future, and got super weird results when I tried to compose them together. Here are examples:
using variant_t = std::variant<
    std::function<std::future<void>(int)>,
    std::function<void(int)>
>;
auto f1 = [](int) { return std::async([] { return 1; }); };
auto f2 = [](int) { return std::async([] { }); };
variant_t v1(std::move(f1)); // !!! why DOES this one compile when it SHOULDN'T?
auto idx1 = v1.index(); //equals 1. WHY?
variant_t v2(std::move(f2)); // !!! why DOESN'T this one compile when it SHOULD?
Here is the compilation error:
Error C2665 'std::variant<std::function<std::future<void>
(int)>,std::function<void (int)>>::variant': none of the 2 overloads
could convert all the argument types
OK, let's change the variant's item signatures from returning void to int:
using variant_t = std::variant<
    std::function<std::future<int>(int)>,
    std::function<int(int)>
>;
variant_t v1(std::move(f1)); // COMPILES (like it should)
auto idx1 = v1.index(); // equals 0
variant_t v2(std::move(f2)); // DOESN'T compile (like it should)
What the hell is going on here? Why is std::future<void> so special?
variant's converting constructor template employs overload resolution to determine which type the constructed object should have. In particular, this means that if the conversions to those types are equally good, the constructor doesn't work; in your case, it works iff exactly one of the std::function specializations is constructible from your argument.
So when is function<...> constructible from a given argument? As of C++14, if the argument is callable with the parameter types and yields a type that is convertible to the return type. Note that according to this specification, if the return type is void, anything goes (as any expression can be converted to void with static_cast). If you have a function returning void, the functor you pass in can return anything—that's a feature, not a bug! This is also why function<void(int)> is applicable for f1. On the other hand, future<int> does not convert to future<void>; hence only function<void(int)> is viable, and the variant's index is 1.
However, in the second case, the lambda returns future<void>, which is convertible to both future<void> and void. As mentioned above, this causes both function specializations to be viable, which is why the variant cannot decide which one to construct.
Finally, if you adjust the return type to int, this whole void conversion issue is avoided, so everything works as expected.

restrict(amp) function type

I can create restrict(amp) function as follows:
auto f = [](int& item) restrict(amp) {item += 1;};
And I can use this function in other restrict(amp) functions, for example:
concurrency::parallel_for_each(av.extent,
    [=](concurrency::index<1> idx) restrict(amp)
    {
        f(av[idx]);
    }
);
What type is substituted for "auto" after compilation? I tried to use std::function:
std::function<void (int&) restrict(amp)> f
= [](int& item) restrict(amp) {item += 1;};
but received a compile error.
Thank you for your attention!
The result of a lambda expression is a closure object, and the type of the closure object is unknowable. You can only use auto to declare a variable of its exact type.
However, you can convert a closure object into a suitable instance of an std::function, and if the lambda is non-capturing, you can even convert it to a function pointer. However, this conversion may come at a (significant) cost, so you should prefer using auto as much as possible to handle the actual closure type.
The same goes for bind expressions.
The relevant standard section is 5.1.2(3):
The type of the lambda-expression (which is also the type of the closure object) is a unique, unnamed non-union class type — called the closure type — whose properties are described below. This class type is not an aggregate.
That said, I'm not sure how the special AMP extensions behave in this context, and it's conceivable that AMP-restricted lambdas are not convertible to anything else. I'll try and look this up in the AMP specification.
Update: Sections 2.2.3 and 2.3 of the AMP Specification seem to apply to this question.

How can I get around specifying variables in decltype expressions?

Assume I have the following exemplary function:
template <typename Fn> auto call(Fn fn) -> decltype(fn()) {
    return fn();
}
The important thing about this function is that its return type depends on its template parameter, which can be inferred. So ultimately, the return type depends on how the function is called.
Now, we also have a test class:
struct X {
    int u;

    auto test() -> decltype(call([this]() -> double { this->u = 5; return 7.4; })) {
        return call([this]() -> double { this->u = 5; return 7.4; });
    }
};
as you can see, X::test calls call, returning the same return value. In this case, the return type is trivially given as double, but let's assume for a bit we didn't know what call does and that the lambda has a more complicated return type.
If we try to compile this, the compiler will complain, because we're using this at the top level (not in a scope that would allow an expression):
error: lambda-expression in unevaluated context
error: invalid use of ‘this’ at top level
However, I have to use the capture of the lambda which I pass to call in order to get call's return type right. How would you suggest to get around this, while still leaving the lambda?
Note: Of course I could move the lambda to be an operator() of some helper type, which I instantiate with a copy of the this pointer, but I'd like to avoid that boilerplate.
I think the real error to be concerned about is "lambda-expression in unevaluated context". You can't use a lambda in an unevaluated context because every lambda expression has a unique type. That is, if decltype([]{}) were allowed it would deduce a different type than []{} in some other context. I.e. decltype([]{}) fn = []{}; wouldn't work.
Unless you want to just explicitly write the return type rather than have it deduced, I don't think you have any choice but to create a real type that you can use in the contexts you need, with whatever boilerplate that entails.
Although if changing test to not be a member function is acceptable then you could use the fact that lambda's can deduce their return type by omitting it if the body is only a single return statement:
template <typename Fn> auto call(Fn fn) -> decltype(fn()) {
    return fn();
}

struct X {
    int u;
};

int main() {
    auto test = [](X *x) { return call([x]() -> double { x->u = 5; return 7.4; }); };
    X x;
    test(&x);
}
It would be nice if the trailing return type syntax for functions had the same property. I'm not sure why it doesn't.
It seems to be a made-up (contrived, artificial) question, since:
If you get the lambda from somewhere else, then it's named and there is no problem binding this.
If you're not getting the lambda from somewhere else, then you know the result type.
In short, as the problem is currently stated (as I'm writing this answer) there's no problem except one imposed by your own will.
But if you insist, well, just pass this as an argument instead of binding it via the lambda definition. Then, for the call to call, bind the argument. But, perhaps needless to say, since that only solves a made-up problem, it's a real Rube Goldberg construction, a descent into needless complexity that doesn't solve anything real outside its own intricacies.
What was the original real problem, if any?
You shouldn't copy-and-paste the function body into decltype. The point of introducing the late-specified return type was that you can infer the correct return type from the arguments.
e.g. auto f(T x) -> decltype(g(x)) { return h(), g(x); }, not -> decltype(h(), g(x))
So, in your case, double test() is enough, because we know the behavior of call and we know the return type of the lambda we pass to it.
In more complex cases, we should reduce the code inside decltype by using knowledge about call and the other pieces involved.