This code
#include <iostream>
#include <optional>

struct foo
{
    explicit operator std::optional<int>() {
        return std::optional<int>( 1 );
    }
    explicit operator int() {
        return 2;
    }
};

int main()
{
    foo my_foo;
    std::optional<int> my_opt( my_foo );
    std::cout << "constructor: " << my_opt.value() << std::endl;
    my_opt = static_cast<std::optional<int>>(my_foo);
    std::cout << "static_cast: " << my_opt.value() << std::endl;
}
produces the following output
constructor: 2
static_cast: 2
in Clang 4.0.0 and in MSVC 2017 (15.3). (Let's ignore GCC for now, since its behavior seems to be buggy in this case.)
Why is the output 2? I would expect 1. The constructors of std::optional seem to prefer the conversion to the inner type (int) even though a conversion to the outer type (std::optional<int>) is available. Is this correct according to the C++ standard? If so, is there a reason the standard does not require preferring the conversion to the outer type? I would find this more reasonable, and I could imagine it being implemented using enable_if and is_convertible to disable the constructor if a conversion to the outer type is possible. Otherwise, every conversion operator to std::optional<T> in a user class - even though it is a perfect match - would be ignored on principle whenever there is also one to T. I would find this quite obnoxious.
I posted a somewhat similar question yesterday but probably did not state my problem accurately, since the resulting discussion was more about the GCC bug. That's why I am asking again more explicitly here.
In case Barry's excellent answer still isn't clear, here's my version, hope it helps.
The biggest question is why isn't the user-defined conversion to optional<int> preferred in direct initialization:
std::optional<int> my_opt(my_foo);
After all, there is a constructor optional<int>(optional<int>&&) and a user-defined conversion of my_foo to optional<int>.
The reason is the template<typename U> optional(U&&) constructor template, which is supposed to activate when T (int) is constructible from U and U is neither std::in_place_t nor optional<T>, and direct-initialize T from it. And so it does, stamping out optional(foo&).
The final generated optional<int> looks something like:
class optional<int> {
    . . .
    int value_;
    . . .
    optional(optional&& rhs);
    optional(foo& rhs) : value_(rhs) {}
    . . .
};
optional(optional&&) requires a user-defined conversion whereas optional(foo&) is an exact match for my_foo. So it wins, and direct-initializes int from my_foo. Only at this point is operator int() selected as a better match to initialize an int. The result thus becomes 2.
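Here is a self-contained toy sketch of that mechanism; my_optional and its constraints are simplified stand-ins for the real std::optional, whose converting constructor has a few more conditions (e.g. the explicit(...) specifier):
#include <iostream>
#include <optional>     // for std::in_place_t
#include <type_traits>
#include <utility>

template <class T>
struct my_optional {
    T value_{};  // simplified: always "engaged", just for illustration

    my_optional() = default;
    my_optional(my_optional&&) = default;

    // Roughly the constraints described above: T constructible from U,
    // and U is neither std::in_place_t nor my_optional<T> itself.
    template <class U = T,
              std::enable_if_t<std::is_constructible_v<T, U&&> &&
                               !std::is_same_v<std::decay_t<U>, std::in_place_t> &&
                               !std::is_same_v<std::decay_t<U>, my_optional<T>>,
                               int> = 0>
    my_optional(U&& u) : value_(std::forward<U>(u)) {}  // direct-initializes T from U&&
};

struct foo {
    explicit operator my_optional<int>() { return my_optional<int>(1); }
    explicit operator int() { return 2; }
};

int main() {
    foo my_foo;
    my_optional<int> my_opt(my_foo);          // the constructor template is an exact match and wins
    std::cout << my_opt.value_ << std::endl;  // prints 2, just like std::optional<int>
}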
In the case of my_opt = static_cast<std::optional<int>>(my_foo), although it sounds like "initialize my_opt as if it were std::optional<int>", it actually means "create a temporary std::optional<int> from my_foo and move-assign from that", as described in [expr.static.cast]/4:
If T is a reference type, the effect is the same as performing the
declaration and initialization T t(e); for some invented temporary
variable t ([dcl.init]) and then using the temporary variable as the
result of the conversion. Otherwise, the result object is
direct-initialized from e.
So it becomes:
my_opt = std::optional<int>(my_foo);
And we're back to the previous situation; my_opt is subsequently initialized from a temporary optional, already holding a 2.
The issue of overloading on forwarding references is well known. Scott Meyers, in Item 26 of his book Effective Modern C++, talks extensively about why it is a bad idea to overload on "universal references". Such templates will tirelessly stamp out whatever type you throw at them, which overshadows anything and everything that is not an exact match. So I'm surprised the committee chose this route.
As to the reason why it is like this, in the proposal N3793 and in the standard until Nov 15, 2016 it was indeed
optional(const T& v);
optional(T&& v);
But then as part of LWG defect 2451 it got changed to
template <class U = T> optional(U&& v);
With the following rationale:
Code such as the following is currently ill-formed (thanks to STL for
the compelling example):
optional<string> opt_str = "meow";
This is because it would require two user-defined conversions (from
const char* to string, and from string to optional<string>) where the
language permits only one. This is likely to be a surprise and an
inconvenience for users.
optional<T> should be implicitly convertible from any U that is
implicitly convertible to T. This can be implemented as a non-explicit
constructor template optional(U&&), which is enabled via SFINAE only
if is_convertible_v<U, T> and is_constructible_v<T, U>, plus any
additional conditions needed to avoid ambiguity with other
constructors...
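For reference, the "meow" example from that rationale compiles as-is against a C++17 standard library:
#include <optional>
#include <string>

// Only one user-defined conversion (const char* -> std::string) is needed,
// since the converting constructor template takes care of the optional "layer".
std::optional<std::string> opt_str = "meow";

int main() {}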
In the end I think it's OK that T is ranked higher than optional<T>; after all, it is a rather unusual situation to be choosing between something that may hold a value and the value itself.
Performance-wise it is also beneficial to initialize from T rather than from another optional<T>. An optional is typically implemented as:
template<typename T>
struct optional {
    union
    {
        char dummy;
        T value;
    };
    bool has_value;
};
So initializing it from optional<T>& would look something like
optional<T>::optional(const optional<T>& rhs) {
    has_value = rhs.has_value;
    if (has_value) {
        value = rhs.value;  // simplified; a real implementation constructs value in place
    }
}
Whereas initializing from T& requires fewer steps:
optional<T>::optional(const T& t) {
    value = t;  // again simplified
    has_value = true;
}
A static_cast is valid if there is an implicit conversion sequence from the expression to the desired type, and the resulting object is direct-initialized from the expression. So writing:
my_opt = static_cast<std::optional<int>>(my_foo);
Follows the same steps as doing:
std::optional<int> __tmp(my_foo); // direct-initialize the resulting
// object from the expression
my_opt = std::move(__tmp); // the result of the cast is a prvalue, so move
And once we get to construction, we follow the same steps as my previous answer, enumerating the constructors, which ends up selecting the constructor template, which uses operator int().
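As a side note: if the goal is to have foo's operator std::optional<int>() used, one workaround (a sketch reusing the types from the question) is to invoke the conversion operator explicitly, so overload resolution over optional's constructors never enters the picture:
#include <iostream>
#include <optional>

// The foo from the question, repeated here so the snippet stands alone.
struct foo {
    explicit operator std::optional<int>() { return std::optional<int>(1); }
    explicit operator int() { return 2; }
};

int main() {
    foo my_foo;
    // Calling the conversion operator explicitly uses the "outer" conversion.
    std::optional<int> my_opt = my_foo.operator std::optional<int>();
    std::cout << "explicit call: " << my_opt.value() << std::endl;  // prints 1
}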
std::reference_wrapper cannot be constructed from an rvalue, in order to prevent dangling references.
However, in combination with std::optional, it seems that an rvalue can be bound.
That is, std::is_constructible_v<std::reference_wrapper<const int>, int&&> is false, but std::is_constructible_v<std::optional<std::reference_wrapper<const int>>, std::optional<int>&&> is true.
Here's an example:
#include <functional>
#include <iostream>
#include <optional>

auto make() -> std::optional<int>
{
    return 3;
}

int main()
{
    std::optional<std::reference_wrapper<const int>> opt = make();
    if (opt)
        std::cout << opt->get() << std::endl;
    return 0;
}
I expected this code to be rejected by the compiler, but it compiles fine and opt contains a dangling reference.
Is this a bug in the standard library? Or is it just not possible to prevent the dangling reference here because of some limitation of the C++ language specification?
If it is a bug in the standard library, how can I fix it when I implement my own optional type?
If it's a limitation of the current C++ specification, could you tell me where this problem comes from?
@Jarod42 already pointed out the core reason why this code compiles, but I will elaborate a bit.
The following two constructor templates for std::optional<T> are relevant to this question:
template <class U>
constexpr optional(const optional<U>& other)
    requires std::is_constructible_v<T, const U&>; // 1

template <class U>
constexpr optional(optional<U>&& other)
    requires std::is_constructible_v<T, U>;        // 2
Note that the requires-clauses above are for exposition only. They might not be present in the actual declarations provided by the library implementation. However, the standard requires constructor templates 1 and 2 to only participate in overload resolution when the corresponding std::is_constructible_v constraint is satisfied.
The second overload will not participate in overload resolution because std::reference_wrapper<const int> is not constructible from int (meaning an rvalue of type int), which is the feature that is intended to prevent dangling references. However, the first overload will participate, because std::reference_wrapper<const int> is constructible from const int& (meaning an lvalue of type const int). The problem is that, when U is deduced and the std::optional<int> rvalue is bound to the const optional<U>& constructor argument, its rvalueness is "forgotten" in the process.
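The following compile-time checks (against a C++17 standard library) back up that description:
#include <functional>
#include <optional>
#include <type_traits>

// The rvalue overload (2) is constrained away, but the lvalue overload (1)
// remains available, which is why the optional conversion compiles.
static_assert(!std::is_constructible_v<std::reference_wrapper<const int>, int&&>);
static_assert( std::is_constructible_v<std::reference_wrapper<const int>, const int&>);
static_assert( std::is_constructible_v<std::optional<std::reference_wrapper<const int>>,
                                        std::optional<int>&&>);

int main() {}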
How might this issue be avoided in a user-defined optional template? I think it's possible, but difficult. The basic idea is that you would want a constructor of the form
template <class V>
constexpr optional(V&& other)
    requires (is_derived_from_optional_v<std::remove_cvref_t<V>> && /* see below */); // 3
where the trait is_derived_from_optional detects whether the argument type is a specialization of optional or has an unambiguous base class that is a specialization of optional. Then,
If V is an lvalue reference, constructor 3 has the additional constraint that the constraints of constructor 1 above must be satisfied (where U is the element type of the optional).
If V is not a reference (i.e., the argument is an rvalue), then constructor 3 has the additional constraint that the constraints of constructor 2 above must be satisfied, where the argument is const_cast<std::remove_cv_t<V>&&>(other).
Assuming the constraints of constructor 3 are satisfied, it delegates to constructor 1 or 2 depending on the result of overload resolution. (In general, if the argument is a const rvalue, then constructor 1 will have to be used since you can't move from a const rvalue. However, the constraint above will prevent this from occurring in the dangling reference_wrapper case.) Constructors 1 and 2 would need to be made private and have a parameter with a private tag type, so they wouldn't participate in overload resolution from the user's point of view. And constructor 3 might also need a bunch of additional constraints so that its overload resolution priority relative to the other constructors (not shown) is not higher than that of constructors 1 and 2. Like I said, it's not a simple fix.
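For what it's worth, here is a rough sketch of the detection trait mentioned above; the names are hypothetical, and this simplified version only recognizes direct specializations of std::optional, not classes derived from one:
#include <optional>
#include <type_traits>

// Hypothetical trait: true for (cv-ref-qualified) specializations of std::optional.
// Detecting unambiguous *derived* classes as well would require extra machinery.
// std::remove_cvref_t requires C++20 (the discussion above already assumes it).
template <class T>
struct is_optional_specialization : std::false_type {};

template <class U>
struct is_optional_specialization<std::optional<U>> : std::true_type {};

template <class T>
inline constexpr bool is_derived_from_optional_v =
    is_optional_specialization<std::remove_cvref_t<T>>::value;

static_assert( is_derived_from_optional_v<std::optional<int>&&>);
static_assert( is_derived_from_optional_v<const std::optional<int>&>);
static_assert(!is_derived_from_optional_v<int>);

int main() {}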
The reason your code is allowed is that when the underlying object of the optional cannot be initialized from an rvalue reference, it falls back to passing the underlying object as const&.
In fact even this code is allowed:
auto ref = std::cref(static_cast<const int&>(1));
LWG issue 2067 (https://cplusplus.github.io/LWG/issue2067) provides the following discussion:
Class template packaged_task is a move-only type with the following form of the deleted copy operations:
packaged_task(packaged_task&) = delete;
packaged_task& operator=(packaged_task&) = delete;
Note that the argument types are non-const. This does not look like a typo to me; this form seems to exist since the very first proposing paper, N2276. Using either form of the copy constructor did not make much difference before the introduction of defaulted special member functions, but it now makes an observable difference. This was brought to my attention by a question on a German C++ newsgroup where it was asked why the following code does not compile on a recent gcc:
#include <utility>
#include <future>
#include <iostream>
#include <thread>
int main() {
    std::packaged_task<void()> someTask([]{ std::cout << std::this_thread::get_id() << std::endl; });
    std::thread someThread(std::move(someTask)); // Error here
    // Remainder omitted
}
It turned out that the error was produced by the instantiation of some return type of std::bind which used a defaulted copy-constructor, which leads to a const declaration conflict with [class.copy] p8.
Some aspects of this problem are possibly core-language related, but I consider it more than a service to programmers, if the library would declare the usual form of the copy operations (i.e. those with const first parameter type) as deleted for packaged_task to prevent such problems.
Could anybody explain the meaning of the marked statement? I don't understand how the missing const qualifier affects the compilation process, and how this behavior is explained by the standard.
What is the point of adding const to the parameter of the deleted copy constructor?
Here is a toy example:
#include <utility>

struct problem {
    problem() = default;
    problem(problem&&) = default;
    problem(problem&) = delete;
};

template<class T>
struct bob {
    T t;
    bob() = default;
    bob(bob&&) = default;
    bob(bob const&) = default;
};

int main() {
    problem p;
    problem p2 = std::move(p);
    bob<problem> b;
    bob<problem> b2 = std::move(b);
}
bob<problem> fails to compile because the bob(bob const&)=default errors out when it interacts with problem(problem&)=delete.
Arguably the standard "should" error-out cleanly when it determines that it cannot implement bob(bob const&), and treat the =default as =delete (like it would if we had problem(problem const&)=delete), but the standard wording isn't going to be flawless in this corner case of a corner case. And this corner of a corner case is going to be strange and quirky enough that I'm not certain a general rule that makes it translate =default to =delete would be right!
The fix of declaring problem(problem const&)=delete (well, making the analogous change to packaged_task) is going to be so much cleaner than anything we do to the =default ctor rules.
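For illustration, here is that fix applied to the toy example (a sketch; packaged_task itself would get the analogous change):
#include <utility>

// With a const-qualified deleted copy constructor, the explicitly defaulted
// bob(bob const&) matches the implicitly declared signature. For bob<fixed> it
// is simply defined as deleted, and the moves below still compile.
struct fixed {
    fixed() = default;
    fixed(fixed&&) = default;
    fixed(fixed const&) = delete;
};

template<class T>
struct bob {
    T t;
    bob() = default;
    bob(bob&&) = default;
    bob(bob const&) = default;  // defined as deleted for bob<fixed>, not an error
};

int main() {
    bob<fixed> b;
    bob<fixed> b2 = std::move(b);  // OK: uses bob(bob&&)
}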
Now standard delving:
First, it is obvious that the implicitly declared copy constructor of bob<problem> above is going to have signature bob(bob&) in [class.ctor]. I won't even quote the standard for that, because lazy.
We go and explicitly default bob(bob const&) copy ctor, which differs in signature from the one that would be implicitly declared.
The rules about explicitly defaulted functions and how their signatures may conflict are in 11.4.2.
In Explicitly-defaulted functions [dcl.fct.def.default] 11.4.2/2:
2 The type T1 of an explicitly defaulted function F is allowed to differ from the type T2 it would have had if it were implicitly declared, as follows:
—(2.1) T1 and T2 may have differing ref-qualifiers; and
—(2.2) if T2 has a parameter of type const C&, the corresponding parameter of T1 may be of type C&.
If T1 differs from T2 in any other way, then:
—(2.3) if F is an assignment operator, and the return type of T1 differs from the return type of T2 or T1’s parameter type is not a reference, the program is ill-formed;
—(2.4) otherwise, if F is explicitly defaulted on its first declaration, it is defined as deleted;
—(2.5) otherwise, the program is ill-formed.
The defaulted one is T1, which contains const& not &, so (2.2) doesn't apply.
My reading actually has it getting caught by (2.4); the type of bob(bob const&) differs from the implicitly declared bob(bob&) in an impermissible way, but the first declaration is defaulted, so it should be defined as deleted.
I'm looking at the n4713 draft version; maybe an older version didn't have that clause.
In "The C++ Programming Language" 4th edition page 164:
When we explicitly mention the type of an object we are initializing,
we have two types to consider: the type of the object and the type of
the initializer. For example:
char v1 = 12345; // 12345 is an int
int v2 = 'c'; // 'c' is a char
T v3 = f();
By using the {}-initializer syntax for such definitions, we minimize
the chances for unfortunate conversions:
char v1 {12345}; // error : narrowing
int v2 {'c'}; // fine: implicit char->int conversion
T v3 {f()}; // works if and only if the type of f() can be implicitly converted to a T
I don't quite understand the sentence minimize the chances for unfortunate conversions and the comment for T v3 {f()}; that it works if and only if the type of f() can be implicitly converted to a T. Consider the following two cases:
a) If T has an explicit constructor taking an argument of the type of f().
b) If the type of f() has a conversion operator to some type X and T has a constructor taking an argument of type X.
For both cases, the type of f() can't be implicitly converted to T, but T v3 {f()} is well-formed, so at least the "only if" part of that comment seems inaccurate. (I'm also not sure whether the "if" part is right.)
And for both cases it is T v3 = f(); that is ill-formed, so what does the sentence minimize the chances for unfortunate conversions mean here? It seems that the {}-initializer actually accepts more forms of conversion (whether they are unfortunate or not is another question). (Preventing narrowing is illustrated in the case of v1 and that's clear; I'm confused about v3.)
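A small self-contained illustration of case (b), with hypothetical types F, X and T:
// Case (b): the type of f() has a conversion operator to X, and T has a
// constructor taking X. The type of f() is not implicitly convertible to T,
// yet direct-list-initialization of T from f() is well-formed.
struct X {};

struct F {
    operator X() const { return X{}; }
};

struct T {
    T(X) {}
};

F f() { return F{}; }

int main() {
    T v3 {f()};     // OK: one user-defined conversion (F -> X) feeds T's constructor
    // T v4 = f();  // error: would need two user-defined conversions (F -> X -> T)
}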
The {} syntax handles conversions differently from the default (=) form.
Using one of your examples, char x = 12345; will silently cut the value down to its low 8 bits (on a typical implementation), since that's the size of char. With the {} notation, this raises an error because the conversion is narrowing.
Note that this truncation was sometimes (ab)used for special effects, so it was kept as the default behaviour in newer C++, while the new notation provides safer behaviour with less room for error.
The conversion:
char v1 = 12345;
is unfortunate because it is almost certainly not what the programmer wants:
either we want a type which can represent the value 12345,
or we used the correct type, but a wrong value
For v3 the same applies, but in a more complicated context. The first snippet forces the compiler to consider user-defined conversion sequences. This increases the potential for error; after all, we may have mis-implemented a conversion operator, made a typo, or otherwise managed to fit a round peg into a square hole. Using copy-list-initialization, we exclude user-defined conversion sequences, making the conversion a lot safer.
For an example and detailed explanation of why this is happening, see this question.
The comment for the initialization of v3:
T v3 {f()}; // works if and only if the type of f() can be implicitly converted to a T
is not strictly correct. That initialization works if and only if the type of f() can be explicitly converted to T. This syntax:
T v3 = {f()}; // truly does work if and only if the type of f()
// can be implicitly converted to a T
(copy-list-initialization instead of direct-list-initialization) truly does require the conversion to be implicit. The difference is illustrated by this program:
struct T {
    explicit T(int) {}
};

int f() { return 0; }

int main() {
    T v3 = {f()};
}
in which the initialization will be diagnosed as ill-formed, since it selects an explicit constructor for T (Live demo at Coliru).
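For contrast, the direct-list-initialization form does accept the explicit constructor (same types as above, repeated so the snippet stands alone):
struct T {
    explicit T(int) {}
};

int f() { return 0; }

int main() {
    T v3 {f()};  // OK: direct-list-initialization may use the explicit constructor
}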
I've come across this syntactic construct a few times, and I'm wondering:
What does this do?
What might the design reasoning be?
It tends to look something like this:
struct SubType : public SomeSuperType {
    SubType(int something) : SomeSuperType(something), m_foo(*((FooType *)0))
    {}
private:
    FooType m_foo;
};
To be clear, the code works. But what's the purpose? What would be the status of m_foo without that line?
The purpose of this construct is to emulate a fake unnamed object of type SomeType in situations when you formally need an object, but don't want or can't declare a real one. It has its valid uses and does not necessarily cause undefined behavior.
A classic example would be determining the size of some class member
sizeof (*(SomeClass *) 0).some_member
or a similar application of decltype
decltype((*(SomeClass *) 0).some_member)
Neither of the above examples causes any undefined behavior. In unevaluated contexts, expressions like *(SomeClass *) 0 are perfectly legal and valid.
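A self-contained sketch of those two uses; both appear only in unevaluated operands, so no null pointer is ever dereferenced at run time:
#include <cstddef>
#include <type_traits>

struct SomeClass { double some_member; };

// Both operands below are unevaluated, so *(SomeClass*)0 is never executed.
constexpr std::size_t member_size = sizeof((*(SomeClass*)0).some_member);
using member_type = decltype((*(SomeClass*)0).some_member);

static_assert(member_size == sizeof(double), "size of the member");
static_assert(std::is_same<member_type, double>::value, "type of the member");

int main() {}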
You can also see this technique used for illustrative purposes in the language standard itself, as in 8.3.5/12
A trailing-return-type is most useful for a type that would be more
complicated to specify before the declarator-id:
template <class T, class U> auto add(T t, U u) -> decltype(t + u);
rather than
template <class T, class U> decltype((*(T*)0) + (*(U*)0)) add(T t, U u);
Observe how the (*(T*)0) + (*(U*)0) expression is used under decltype to perform compile-time prediction of the result type of binary + operator between types T and U.
Of course, again, such tricks are only valid when used in non-evaluated contexts, as shown above.
Sometimes it is used as an initializer for "null references" as in
SomeType &r = *(SomeType *) 0;
but this actually crosses the boundary of what's legal and produces undefined behavior.
What you have in your specific example is invalid, since it attempts to access an invalid "null lvalue" in evaluated context.
P.S. In the C language there's also a peculiar part of the specification that says that the operators & and * cancel each other out, meaning that &*(SomeType *) 0 is valid and guaranteed to evaluate to a null pointer. But that does not extend to C++.
What does this do? Undefined behaviour.
What might the design reasoning be? A desire to cause undefined behaviour. There's no other rationale.
I don't think the example is necessarily UB. It depends on the definition of FooType. Suppose Foo is an empty class with a constructor that does something:
#include <iostream>

class Foo {
public:
    Foo() { std::cout << "Hey, world! A new Foo just arrived.\n"; }
    // I think the default copy and assign constructors do nothing
    // with an empty type, but just in case:
    Foo(const Foo&) {}
    Foo& operator=(const Foo&) { return *this; }
};
Now, suppose I need a Foo, for whatever reason, and I don't want to trigger the constructor. Doing this will not cause any actual dereferencing because operator* does not dereference and the copy constructor doesn't use its reference argument:
Foo(*static_cast<Foo*>(0));
This is a follow-on question to
C++0x rvalue references and temporaries
In the previous question, I asked how this code should work:
void f(const std::string &); //less efficient
void f(std::string &&);      //more efficient

void g(const char * arg)
{
    f(arg);
}
It seems that the move overload should probably be called because of the implicit temporary, and this happens in GCC but not MSVC (or the EDG front-end used in MSVC's Intellisense).
What about this code?
void f(std::string &&); //NB: No const string & overload supplied

void g1(const char * arg)
{
    f(arg);
}

void g2(const std::string & arg)
{
    f(arg);
}
It seems that, based on the answers to my previous question that function g1 is legal (and is accepted by GCC 4.3-4.5, but not by MSVC). However, GCC and MSVC both reject g2 because of clause 13.3.3.1.4/3, which prohibits lvalues from binding to rvalue ref arguments. I understand the rationale behind this - it is explained in N2831 "Fixing a safety problem with rvalue references". I also think that GCC is probably implementing this clause as intended by the authors of that paper, because the original patch to GCC was written by one of the authors (Doug Gregor).
However, I don't think this is quite intuitive. To me, (a) a const string & is conceptually closer to a string && than a const char * is, and (b) the compiler could create a temporary string in g2, as if it were written like this:
void g2(const std::string & arg)
{
    f(std::string(arg));
}
Indeed, sometimes the copy constructor is considered to be an implicit conversion operator. Syntactically, this is suggested by the form of a copy constructor, and the standard even mentions this specifically in clause 13.3.3.1.2/4, where the copy constructor for derived-base conversions is given a higher conversion rank than other user-defined conversions:
A conversion of an expression of class type to the same class type is given Exact Match rank, and a conversion
of an expression of class type to a base class of that type is given Conversion rank, in spite of the fact that
a copy/move constructor (i.e., a user-defined conversion function) is called for those cases.
(I assume this is used when passing a derived class to a function like void h(Base), which takes a base class by value.)
Motivation
My motivation for asking this is similar to the question "How to reduce redundant code when adding new c++0x rvalue reference operator overloads".
If you have a function that accepts a number of potentially-moveable arguments, and would move them if it can (e.g. a factory function/constructor: Object create_object(string, vector<string>, string) or the like), and want to move or copy each argument as appropriate, you quickly start writing a lot of code.
If the argument types are movable, then one could just write one version that accepts the arguments by value, as above. But if the arguments are (legacy) non-movable-but-swappable classes a la C++03, and you can't change them, then writing rvalue reference overloads is more efficient.
So if lvalues did bind to rvalues via an implicit copy, then you could write just one overload like create_object(legacy_string &&, legacy_vector<legacy_string> &&, legacy_string &&) and it would more or less work like providing all the combinations of rvalue/lvalue reference overloads - actual arguments that were lvalues would get copied and then bound to the arguments, actual arguments that were rvalues would get directly bound.
Clarification/edit: I realize this is virtually identical to accepting arguments by value for movable types, like C++0x std::string and std::vector (save for the number of times the move constructor is conceptually invoked). However, it is not identical for copyable, but non-movable types, which includes all C++03 classes with explicitly-defined copy constructors. Consider this example:
class legacy_string { legacy_string(const legacy_string &); }; //defined in a header somewhere; not modifiable.
void f(legacy_string s1, legacy_string s2); //A *new* (C++0x) function that wants to move from its arguments where possible, and avoid copying
void g() //A C++0x function as well
{
    legacy_string x(/*initialization*/);
    legacy_string y(/*initialization*/);
    f(std::move(x), std::move(y));
}
If g calls f, then x and y would be copied - I don't see how the compiler can move them. If f were instead declared as taking legacy_string && arguments, it could avoid those copies where the caller explicitly invoked std::move on the arguments. I don't see how these are equivalent.
Questions
My questions are then:
Is this a valid interpretation of the standard? It seems that it's not the conventional or intended one, at any rate.
Does it make intuitive sense?
Is there a problem with this idea that I'm not seeing? It seems like you could get copies being quietly created when that's not exactly expected, but that's the status quo in places in C++03 anyway. Also, it would make some overloads viable when they're currently not, but I don't see that being a problem in practice.
Is this a significant enough improvement that it would be worth making e.g. an experimental patch for GCC?
What about this code?
void f(std::string &&); //NB: No const string & overload supplied
void g2(const std::string & arg)
{
    f(arg);
}
...However, GCC and MSVC both reject g2 because of clause 13.3.3.1.4/3, which prohibits lvalues from binding to rvalue ref arguments. I understand the rationale behind this - it is explained in N2831 "Fixing a safety problem with rvalue references". I also think that GCC is probably implementing this clause as intended by the authors of that paper, because the original patch to GCC was written by one of the authors (Doug Gregor)....
No, that's only half of the reason why both compilers reject your code. The other reason is that you can't initialize a reference to non-const with an expression referring to a const object. So even before N2831 this didn't work. There is simply no need for a conversion because a string is already a string. It seems you want to use string&& like string. Then simply write your function f so that it takes a string by value. If you want the compiler to create a temporary copy of a const string lvalue just so you can invoke a function taking a string&&, there wouldn't be a difference between taking the string by value or by rvalue reference, would there?
N2831 has little to do with this scenario.
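For completeness, a sketch of the pass-by-value alternative suggested above; a single overload handles const char*, lvalue string and rvalue string callers, moving where it can:
#include <iostream>
#include <string>
#include <utility>

void f(std::string s) { std::cout << s << '\n'; }  // one function, takes by value

void g1(const char* arg)        { f(arg); }             // converts, then moves into f's parameter
void g2(const std::string& arg) { f(arg); }             // copies into f's parameter
void g3(std::string&& arg)      { f(std::move(arg)); }  // moves into f's parameter

int main() {
    std::string s = "hello";
    g1("hello");
    g2(s);
    g3(std::move(s));
}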
If you have a function that accepts a number of potentially-moveable arguments, and would move them if it can (e.g. a factory function/constructor: Object create_object(string, vector<string>, string) or the like), and want to move or copy each argument as appropriate, you quickly start writing a lot of code.
Not really. Why would you want to write a lot of code? There is little reason to clutter all your code with const&/&& overloads. You can still use a single function with a mix of pass-by-value and pass-by-ref-to-const -- depending on what you want to do with the parameters. As for factories, the idea is to use perfect forwarding:
template<class T, class... Args>
unique_ptr<T> make_unique(Args&&... args)
{
    T* ptr = new T(std::forward<Args>(args)...);
    return unique_ptr<T>(ptr);
}
...and all is well. A special template argument deduction rule helps differentiate between lvalue and rvalue arguments, and std::forward allows you to create expressions with the same "value-ness" as the actual arguments. So, if you write something like this:
string foo();

int main() {
    auto ups = make_unique<string>(foo());
}
the string that foo returned is automatically moved to the heap.
So if lvalues did bind to rvalues via an implicit copy, then you could write just one overload like create_object(legacy_string &&, legacy_vector<legacy_string> &&, legacy_string &&) and it would more or less work like providing all the combinations of rvalue/lvalue reference overloads - actual arguments that were lvalues would get copied and then bound to the parameters, actual arguments that were rvalues would get bound directly.
Well, and it would be pretty much equivalent to a function taking the parameters by value. No kidding.
Is this a significant enough improvement that it would be worth making e.g. an experimental patch for GCC?
There's no improvement.
I don't quite see your point in this question. If you have a class that is movable, then you just need a T version:
struct A {
    T t;
    A(T t):t(move(t)) { }
};
And if the class is traditional but has an efficient swap, you can write the swap version, or you can fall back to the const T& way:
struct A {
    T t;
    A(T t) { swap(this->t, t); }
};
Regarding the swap version, I would rather go with the const T& way instead of that swap. The main advantages of the swap technique are exception safety and moving the copy closer to the caller, so that copies of temporaries can be optimized away. But what do you have to save if you are just constructing the object anyway? And if the constructor is small, the compiler can look into it and optimize away copies too.
struct A {
    T t;
    A(T const& t):t(t) { }
};
To me, it doesn't seem right to automatically convert a string lvalue to an rvalue copy of itself just to bind to an rvalue reference. An rvalue reference says it binds to rvalues. If you try binding an lvalue of the same type to it, it had better fail. Introducing hidden copies to allow that doesn't sound right to me, because when people see an X&& and you pass an X lvalue, I bet most will expect that there is no copy and that the binding is direct, if it works at all. Better to fail straight away so the user can fix his/her code.