In C++, only one user-defined conversion is allowed in an implicit conversion sequence. Are there any practical reasons (from a language user's point of view) for that limit?
Allowing only one user-defined conversion limits the search scope when attempting to match the source and destination types. Only those two types need to be checked to determine whether they are convertible (non-user-defined conversions aside).
Not having that limit might cause ambiguity in some cases, and it might even require testing infinitely many conversion paths in others. Since the standard cannot require something that may be impossible in a particular case, the simple rule is: only one user-defined conversion.
Consider as a convoluted example:
template <typename T>
struct A {
    operator A<A<T>>() const; // A<int> --> A<A<int>>
};

int x = A<int>();
Now, there could potentially be a specialization of A<A<A<....A<int>...>>> that has a conversion to int, so the compiler would have to recursively instantiate infinitely many versions of A and check whether each one of them is convertible to int.
Conversely, two types that are convertible to and from any other type would cause other issues:
struct A {
    template <typename T>
    A(T);
};

struct B {
    template <typename T>
    operator T() const;
};

struct C {
    C(A);
    operator B() const;
};
If multiple user-defined conversions were allowed, any type T could be converted to any type U through the conversion path T -> A -> C -> B -> U. Just managing the search space would be a hard task for the compiler, and it would most probably confuse users.
IMHO it is a design choice based on the fact that constructors are not explicit by default. That is a practical reason for setting a limit: to keep the following expression from being valid.
objectn o = 5;
5 -> object1(5) -> object2(object1) -> object3(object2) -> ... -> objectn(objectn-1)
Each arrow above is an implicit conversion. One seems to be a reasonable choice. If more were allowed, you could have several implicit conversion paths between an object0 and an objectn, each one possibly leading to a different objectn o. Which one should the compiler choose?
If the allowed number were infinite, you could quite quickly end up with an attempted circular conversion sequence.
It's much easier to say "if you need more than one conversion, you have a design flaw" and use that rationale to emplace a bug-saving limit within the language.
In any case, implicit conversions are generally considered best when used sparingly. One conversion is very useful for the sort of heavy operator overloading used by the standard library and similar code; beyond that, there's not much excuse to ever use implicit conversions. Certainly a language would go in the direction of seeking fewer, not more (e.g. C++11 adding explicit conversion operators, to help you enforce no implicit conversions at all).
If more than one were allowed, then how many in the sequence? If infinitely many, wouldn't it become too complicated for a large number of types in a large project? For example, every time you add an implicit conversion, you would have to think really hard about what else it converts into, and then what else that converts into, and so on down the chain. A very dangerous situation, IMHO.
Implicit conversions (what the language currently allows) are already considered problematic, which is why C++11 added explicit conversion operators, which participate only in explicit casts and in contextual conversions.
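For illustration, here is a minimal sketch (the Handle type is invented) of a C++11 explicit conversion operator: it still works in contexts such as an if condition via contextual conversion to bool, but it no longer fires implicitly.

#include <iostream>

struct Handle {
    void* p = nullptr;
    explicit operator bool() const { return p != nullptr; }  // C++11 explicit conversion operator
};

int main() {
    Handle h;
    if (h) { /* ... */ }              // OK: contextual conversion to bool
    // bool b1 = h;                   // error: no implicit conversion allowed
    bool b2 = static_cast<bool>(h);   // OK: conversion requested explicitly
    std::cout << b2 << '\n';
}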
Related
After learning about std::lerp I tried to use it with strong types, but it fails miserably since it only works for built-in types...
#include <iostream>
#include <cmath>

struct MyFloat {
    float val = 4.7;
    MyFloat operator *(MyFloat other) {
        return MyFloat(other.val * val);
    }
    MyFloat operator +(MyFloat other) {
        return MyFloat(other.val + val);
    }
    MyFloat operator -(MyFloat other) {
        return MyFloat(other.val - val);
    }
};

int main()
{
    MyFloat a{1}, b{10};
    //std::lerp(a, b, MyFloat{0.3}); :(
    std::lerp(a.val, b.val, 0.3f);
}
My question is:
Is there a good reason why C++20 introduced a function/algorithm that is not generic?
It would be impossible for std::lerp to provide its guarantees about numerical behavior for arbitrary types that happen to provide some arithmetic operators. (There’s no way for the library to detect that your example merely forwards them to the builtin float versions.)
While requirements could be imposed on the parameter type to allow a correct implementation, they would need to be exceedingly detailed for MyFloat to be handled with the same performance and results as float. For example, the implementation may need to compare values of the parameter type (which your type doesn’t support!) and can capitalize on the spacing between floating-point values to provide monotonicity guarantees near t=1.
Since those guarantees are the entire point of the function (the naïve formulas are trivial), it’s not provided at all in a generic form.
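For comparison, a naïve generic version is trivial to write, which is exactly why it isn't what the standard specifies; this sketch gives none of std::lerp's guarantees (exactness at the endpoints, monotonicity, boundedness) for an arbitrary type:

// Naive generic lerp: algebraically correct, but it can lose precision,
// overflow, or fail to return exactly b at t == 1.
template <typename T, typename U>
T naive_lerp(T a, T b, U t) {
    return a + (b - a) * t;
}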
Is there a good reason why C++20 introduced a function/algorithm that is not generic?
Implementing it for a small set of types makes it easier to ensure correct results when mixing different argument types and dealing with possible implicit conversions (not to mention ambiguous overloads). As the last overload on cppreference (which is likely implemented as a template) specifies, the types are adjusted so that as little precision as possible is lost.
How can the same be achieved when the list of types is open ended, and a client programmer can inject whatever meaning they want into overloaded operators? I'd say it's pretty much impossible.
And it's nothing new, take std::pow for instance, which had a similar overload added to it in C++11. The standard library utilities that deal with numerical data are always specified only for types the implementation is aware of.
If lerp makes sense for your custom type, then you can add overloads in your custom namespace. ADL will find them, and generic code that is built on top of
using std::lerp;
lerp(arg1, arg2, arg3);
can be made to work for your custom type too.
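A minimal sketch of that pattern, reusing the questioner's MyFloat (the namespace name and the choice to pass the interpolation parameter as a plain float are assumptions made here for illustration):

#include <cmath>

namespace my {
    struct MyFloat { float val; };

    // Found by argument-dependent lookup because MyFloat lives in namespace my.
    MyFloat lerp(MyFloat a, MyFloat b, float t) {
        return MyFloat{std::lerp(a.val, b.val, t)};
    }
}

template <typename T>
T generic_lerp(T a, T b, float t) {
    using std::lerp;       // fallback for the built-in types
    return lerp(a, b, t);  // unqualified call: ADL picks my::lerp for my::MyFloat
}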
N4659, 16.3.3.1 "Implicit conversion sequences", says:
10 If several different sequences of conversions exist that each convert the argument to the parameter type, the implicit conversion sequence associated with the parameter is defined to be the unique conversion sequence designated the ambiguous conversion sequence. For the purpose of ranking implicit conversion sequences as described in 16.3.3.2, the ambiguous conversion sequence is treated as a user-defined conversion sequence that is indistinguishable from any other user-defined conversion sequence [Note: This rule prevents a function from becoming non-viable because of an ambiguous conversion sequence for one of its parameters.] If a function that uses the ambiguous conversion sequence is selected as the best viable function, the call will be ill-formed because the conversion of one of the arguments in the call is ambiguous.
(The corresponding section of the current draft is 12.3.3.1)
What is the intended purpose of this rule and the concept of ambiguous conversion sequence it introduces?
The note supplied in the text states that the purpose of this rule is "to prevent a function from becoming non-viable because of an ambiguous conversion sequence for one of its parameters". Um... What does this actually refer to? The concept of a viable function is defined in the preceding sections of the document. It does not depend on ambiguity of conversions at all (conversions for each argument must exist, but they don't have to be unambiguous). And there seems to be no provision for a viable function to somehow "become non-viable" later (neither because of some ambiguity nor anything else). Viable functions are enumerated, they compete against each other for being "the best" in accordance with certain rules, and if there's a single "winner", the resolution is successful. At no point in this process may a viable function turn (or need to turn) into a non-viable one.
The example provided within the aforementioned paragraph is not very enlightening (i.e. it is not clear what role the above rule plays in that example).
The question originally popped up in connection with this simple example
struct S
{
    operator int() const { return 0; }
    operator long() const { return 0; }
};

void foo(int) {}

int main()
{
    S s;
    foo(s);
}
Let's just mechanically apply the above rule here. foo is a viable function. There are two implicit conversion sequences from argument type S to parameter type int: S -> int and S -> long -> int. This means that per the above rule we have to "pack" them into a single ambiguous conversion sequence. Then we conclude that foo is the best viable function. Then we discover that it uses our ambiguous conversion sequence. Consequently, per the above rule the code is ill-formed.
This seems to make no sense. The natural expectation here is that S -> int conversion should be chosen, since it is ranked higher than S -> long -> int conversion. All compilers I know follow that "natural" overload resolution.
So, what am I misunderstanding?
To be viable, there must be an implicit conversion sequence.
The standard could have permitted multiple implicit conversion sequences, but that might make the wording for determining which overload to select more complex.
So the standard ends up defining one and exactly one implicit conversion sequence for each argument to each parameter. In the case where there is ambiguity, the one it uses is the ambiguous conversion sequence.
Once this is done, it no longer has to deal with the possibility of multiple conversion sequences of one argument to one parameter.
Imagine if we were writing this in C++. We might have a few types:
namespace conversion_sequences {
    struct standard;
    struct user_defined;
    struct ellipsis;
}
where each has plenty of stuff in them (skipped here).
As each conversion sequence is one of the above, we define:
using any_kind = std::variant< standard, user_defined, ellipsis >;
Now, we run into the case of there being more than one conversion sequence for a given argument and parameter. We have two choices at this point.
We could pass around a using any_kinds = std::vector<any_kind> for a given (argument, parameter) pair, and ensure that all of the logic that picks a conversion sequence deals with this vector...
Or we could note that the handling of a vector with more than one entry never looks at the elements in the vector, and it is treated exactly like a kind of user_defined conversion sequence, until the very end, at which point we generate an error.
Storing that extra state, and having the extra logic, is a pain. We know we don't need that state, and we don't need code to handle the vector. So we just define a sub-type of conversion_sequences::user_defined, and after the code finds the preferred overload it checks whether the chosen overload should generate an error.
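A rough sketch of that modelling choice in C++ (all names are invented, and a flag is used rather than an actual derived type purely to keep the example short):

#include <variant>

namespace conversion_sequences {
    struct standard {};
    struct user_defined {
        bool ambiguous = false;  // marks the "ambiguous conversion sequence"
    };
    struct ellipsis {};

    using any_kind = std::variant<standard, user_defined, ellipsis>;

    // When several sequences exist for one (argument, parameter) pair, store this
    // single value instead of a vector of all of them. Ranking code treats it like
    // any other user-defined conversion; only after the best viable function has
    // been chosen is the flag checked and the call rejected as ambiguous.
    inline any_kind make_ambiguous() { return user_defined{true}; }
}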
While the C++ standard is not (always) implemented in C++ and the wording doesn't have to have a 1:1 relationship with its implementation, writing a robust standard document is a kind of coding, and some of the same concerns apply.
Suppose there are two types:
typedef unsigned short Altitude;
typedef double Time;
To detect errors such as passing a time argument in the position of an altitude at compile time, I'd like to prohibit implicit conversion from Altitude to Time and vice versa.
What I tried first was declaring an operator Altitude(Time) without an implementation, but the compiler said that it must be a member function, so I understood that this is not going to work for a typedefed type.
Next I tried turning one of these types into a class, but it turned out that the project extensively uses arithmetic on them, including implicit conversions to double, int, bool, etc., and passes them to and from streams via operator<< and operator>>. So although this approach let me find the errors I was looking for, I didn't even try a full implementation of a compatible class because it would take a lot of code.
So my question is: is there a more elegant way to prevent implicit conversions between two particular typedefed types, and if yes, then how?
A typedef does nothing more than establish another name for an existing type.
Therefore your question boils down to whether you can disable implicit conversions between unsigned short and double, which is not possible in general.
Two ways exist to deal with this problem:
First, you can make Altitude and Time into their own types (read: define classes instead of typedefs) and ensure that they can easily be converted to and from their underlying numeric types, but not to each other.
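A minimal sketch of the first approach (names, members, and the chosen conversions are all just assumptions for illustration):

#include <iostream>

struct Altitude {
    unsigned short value;
    explicit Altitude(unsigned short v) : value(v) {}
    operator unsigned short() const { return value; }  // back to the raw type when needed
};

struct Time {
    double value;
    explicit Time(double v) : value(v) {}
    operator double() const { return value; }
};

void climbTo(Altitude a) { std::cout << a.value << '\n'; }

int main() {
    Altitude a{100};
    Time t{3.5};
    climbTo(a);       // OK
    // climbTo(t);    // error: no conversion from Time to Altitude
    // climbTo(3.5);  // error: Altitude's constructor is explicit
}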
Second, you can ensure that whatever you do is protected by language constructs. E.g., if you have a function f that should take an Altitude a.k.a. unsigned short, you can overload it with another function f that takes a Time a.k.a. double and causes an error. That overload is a better match for a Time argument and thus prevents the implicit conversion in that case.
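A sketch of the second approach for a hypothetical function setAltitude (the name is made up); the typedefs stay exactly as they are, and the deleted overload is the better match whenever a Time is passed by mistake:

typedef unsigned short Altitude;
typedef double Time;

void setAltitude(Altitude a);     // the intended overload
void setAltitude(Time) = delete;  // exact match for a Time argument, so misuse fails to compile

// setAltitude(Altitude{300});    // OK: calls the first overload
// setAltitude(Time{12.5});       // error: the deleted overload is selected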
Where can I find an excellently understandable article on C++ type conversion covering all of its kinds (promotion, implicit/explicit, etc.)?
I've been learning C++ for some time and, for example, the virtual function mechanism seems clearer to me than this topic. My opinion is that this is due to textbook authors complicating things too much (see Stroustrup's book and so on).
(Props to Crazy Eddie for a first answer, but I feel it can be made clearer)
Type Conversion
Why does it happen?
Type conversion can happen for two main reasons. One is because you wrote an explicit expression, such as static_cast<int>(3.5). The other is that you used an expression in a place where the compiler needed another type, so it inserts the conversion for you. E.g. 2.5 + 1 results in an implicit conversion of 1 (an int) to 1.0 (a double).
The explicit forms
There are only a limited number of explicit forms. First off, C++ has 4 named versions: static_cast, dynamic_cast, reinterpret_cast and const_cast. C++ also supports the C-style cast (Type) Expression. Finally, there is a "constructor-style" cast Type(Expression).
The 4 named forms are documented in any good introductory text. The C-style cast expands to a static_cast, const_cast or reinterpret_cast (or a combination of them), and the "constructor-style" cast is shorthand for a static_cast<Type>. However, due to parsing problems, the "constructor-style" cast requires a single identifier for the name of the type; unsigned int(-5) or const float(5) are not legal.
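A few small examples of those forms (the typedef at the end is only there to work around the single-identifier restriction):

double d = 3.7;
int a = static_cast<int>(d);  // named form
int b = (int)d;               // C-style cast; here it behaves like the static_cast
int c = int(d);               // constructor-style cast, shorthand for static_cast<int>(d)

// unsigned int(d);           // ill-formed: "unsigned int" is not a single identifier
typedef unsigned int uint;
unsigned int e = uint(d);     // fine once the type has a one-word name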
The implicit forms
It's much harder to enumerate all the contexts in which an implicit conversion can happen. Since C++ is a typesafe OO language, there are many situations in which you have an object of type A in a context where you need a type B. Examples are the built-in operators, calling a function, or catching an exception by value.
The conversion sequence
In all cases, implicit and explicit, the compiler will try to find a conversion sequence. A conversion sequence is a series of steps that gets you from type A to type B. The exact conversion sequence chosen by the compiler depends on the kind of cast. A dynamic_cast is used to do a checked Base-to-Derived conversion, so the steps are to check whether Derived inherits from Base, and via which intermediate class(es). const_cast can remove both const and volatile. In the case of a static_cast, the possible steps are the most complex. It will convert between the built-in arithmetic types; it will convert Base pointers to Derived pointers and vice versa; it will consider class constructors (of the destination type) and class conversion operators (of the source type); and it will add const and volatile. Obviously, quite a few of these steps are orthogonal: an arithmetic type is never a pointer or class type. Also, the compiler will use each step at most once.
As we noted earlier, some type conversions are explicit and others are implicit. This matters to static_cast because it uses user-defined functions in the conversion sequence. Some of the conversion steps considered by the compiler can be marked as explicit (in C++03, only constructors can be). The compiler will skip (without error) any explicit conversion function when building an implicit conversion sequence. Of course, if there are no alternatives left, the compiler will still give an error.
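A small illustration of that rule (the types and functions are invented): an explicit constructor is skipped when the compiler builds an implicit conversion sequence, but it is still available to an explicit cast.

struct Celsius {
    explicit Celsius(double) {}  // explicit: never used implicitly
};

struct Fahrenheit {
    Fahrenheit(double) {}        // not explicit: usable in implicit conversions
};

void printC(Celsius) {}
void printF(Fahrenheit) {}

void demo() {
    printF(21.5);                        // OK: implicit double -> Fahrenheit
    // printC(21.5);                     // error: Celsius(double) is explicit
    printC(static_cast<Celsius>(21.5));  // OK: requested explicitly
    printC(Celsius(21.5));               // OK as well
}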
The arithmetic conversions
Integer types such as char and short can be converted to "greater" types such as int and long, and smaller floating-point types can similarly be converted into greater types. Signed and unsigned integer types can be converted into each other. Integer and floating-point types can be changed into each other.
Base and Derived conversions
Since C++ is an OO language, there are a number of casts where the relation between Base and Derived matters. Here it is very important to understand the difference between actual objects, pointers, and references (especially if you're coming from .Net or Java). First, the actual objects. They have precisely one type, and you can convert them to any base type (ignoring private base classes for the moment). The conversion creates a new object of base type. We call this "slicing"; the derived parts are sliced off.
Another kind of conversion exists when you have pointers to objects. You can always convert a Derived* to a Base*, because inside every Derived object there is a Base subobject. C++ will automatically apply the correct offset of Base within Derived to your pointer. This conversion gives you a new pointer, but not a new object. The new pointer points to the existing subobject. Therefore, the cast will never slice off the Derived part of your object.
The conversion the other way is trickier. In general, not every Base* will point to a Base subobject inside a Derived object. Base objects may also exist in other places. Therefore, the conversion may fail. C++ gives you two options here. Either you tell the compiler that you're certain you're pointing to a subobject inside a Derived via a static_cast<Derived*>(baseptr), or you ask the compiler to check with dynamic_cast<Derived*>(baseptr). In the latter case, the result will be nullptr if baseptr doesn't actually point to a Derived object.
For references to Base and Derived, the same applies except for dynamic_cast<Derived&>(baseref): it will throw std::bad_cast instead of returning a null pointer. (There is no such thing as a null reference.)
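A compact sketch of those pointer and reference conversions (the class names are arbitrary):

#include <iostream>
#include <typeinfo>

struct Base { virtual ~Base() = default; };
struct Derived : Base { int extra = 42; };

int main() {
    Derived d;
    Base* bp = &d;                             // implicit Derived* -> Base*; no slicing, bp points into d

    Derived* d1 = static_cast<Derived*>(bp);   // "trust me" cast: fine here, undefined if bp didn't point to a Derived
    Derived* d2 = dynamic_cast<Derived*>(bp);  // checked cast: nullptr if it isn't really a Derived
    std::cout << (d2 ? d2->extra : -1) << '\n';

    Base b;
    Base& br = b;
    try {
        Derived& dr = dynamic_cast<Derived&>(br);  // br does not refer to a Derived: throws std::bad_cast
        (void)dr;
    } catch (const std::bad_cast&) {
        std::cout << "bad_cast\n";
    }
    (void)d1;
}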
User-defined conversions
There are two ways to define user conversions: via the source type and via the destination type. The first way involves defining a member operator DestinationType() const in the source type. Note that it doesn't have an explicit return type (it's always DestinationType), and that it's const. Conversions should never change the source object. A class may define several types to which it can be converted, simply by adding multiple operators.
The second kind of conversion, via the destination type, relies on user-defined constructors. A constructor T::T that can be called with one argument of type U can be used to convert a U object into a T object. It doesn't matter if that constructor has additional default arguments, nor does it matter if the U argument is passed by value or by reference. However, as noted before, if T::T(U) is explicit, then it will not be considered in implicit conversion sequences.
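Both routes in one small sketch (the types are invented):

#include <string>

struct Fraction {
    int num, den;

    // Via the source type: Fraction knows how to become a double.
    operator double() const { return static_cast<double>(num) / den; }
};

struct Text {
    std::string value;

    // Via the destination type: Text can be constructed from a const char*.
    Text(const char* s) : value(s) {}  // not explicit, so usable implicitly
};

double half = Fraction{1, 2};  // uses operator double()
Text greeting = "hello";       // uses Text(const char*)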
It is possible that multiple conversion sequences between two types exist as a result of user-defined conversions. Since these are essentially function calls (to user-defined operators or constructors), the conversion sequence is chosen via overload resolution of the different function calls.
Don't know of one, so let's see if it can't be made here... hopefully I get it right.
First off, implicit/explicit:
Explicit "conversion" happens everywhere that you do a cast. More specifically, a static_cast. Other casts either fail to do any conversion or cover a different range of topics/conversions. Implicit conversion happens anywhere that conversion is happening without your specific say-so (no casting). Consider it thusly: Using a cast explicitly states your intent.
Promotion:
Promotion happens when you have two or more types of different sizes interacting in an expression. It is a special case of type "coercion", which I'll go over in a second. Promotion just takes the smaller type and expands it to the larger type. There is no standard set of sizes for numeric types, but generally speaking: char < short < int < long < long long, and float < double < long double.
Coercion:
Coercion happens any time the types in an expression do not match. The compiler will "coerce" a lesser type into a greater type. In some cases, such as converting an integer to a double or a signed type into an unsigned type, information can be lost. Coercion includes promotion, so similar types of different sizes are resolved in that manner. If promotion is not enough, then integral types are converted to floating types and signed types are converted to unsigned types. This happens until all components of an expression are of the same type.
These compiler actions only take place for raw numeric types. Coercion and promotion do not happen to user-defined classes. Generally speaking, explicit casting makes no real difference unless you are reversing promotion/coercion rules. It will, however, get rid of the compiler warnings that coercion often causes.
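A few concrete cases of those rules, as a short sketch:

#include <iostream>

int main() {
    char c = 'A';
    int i = c + 1;                 // promotion: c becomes an int before the addition (i == 66)
    double d = i / 2;              // integer division happens first (33), then conversion to double
    double d2 = i / 2.0;           // i is coerced to double, so the result is 33.0

    unsigned int u = 10;
    int s = -1;
    std::cout << (s < u) << '\n';  // s is converted to unsigned, so this prints 0, not 1
                                   // (and most compilers warn about the comparison)
    (void)d; (void)d2;
}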
User-defined types can be converted, though. This happens during overload resolution. The compiler will find the various entities that match a name you are using and then go through a process to resolve which of them should be used. The "identity" conversion is preferred above all; this means that f(t) will resolve to f(typeof_t) over anything else (see "Function with parameter type that has a copy-constructor with non-const ref chosen?" for some confusion this can generate). If the identity conversion doesn't work, the system then goes through a complex hierarchy of conversion attempts that includes (hopefully in the right order) conversion to a base type (slicing), user-defined constructors, and user-defined conversion functions. There's some funky language about references which will generally be unimportant to you and which I don't fully understand without looking it up anyway.
In the case of user-type conversions, explicit conversion makes a huge difference. The user that defined a type can declare a constructor as "explicit". This means that the constructor will never be considered in a process such as the one described above. In order to call an entity in a way that would use that constructor, you must do so explicitly by casting (note that syntax such as std::string("hello") is not, strictly speaking, a call to the constructor, but instead a "function-style" cast).
Because the compiler will silently look through constructors and type-conversion overloads during name resolution, it is highly recommended that you declare the former as explicit and avoid creating the latter. This is because any time the compiler silently does something, there's room for bugs. People can't keep in mind every detail of the entire code tree, not even what's currently in scope (especially with Koenig lookup added in), so they can easily forget about some detail that causes their code to do something unintentional due to conversions. Requiring explicit language for conversions makes such accidents much more difficult to make.
For integer types, check the book Secure Coding in C and C++ by Seacord, the chapter about integer overflows.
As for implicit type conversions, you will find the books Effective C++ and More Effective C++ to be very, very useful.
In fact, you shouldn't be a C++ developer without reading these.
Why does C++ require that a user-defined conversion operator can only be a non-static member?
Why is it not allowed to use standalone functions as for other unary operators?
Something like this:
operator bool (const std::string& s) { return !s.empty(); }
The one reason I can think of is to prevent implicit conversions being applied to the thing being cast. In your example, if you said:
bool( "foo" );
then "foo" would be implicitly converted to a string, which would then have the explicit bool conversion you provided applied to it.
This is not possible if the bool operator is a member function, as implicit conversions are not applied to *this. This greatly reduces the possibilities for ambiguity - ambiguities normally being seen as a "bad thing".
By keeping the conversion operator within the class, you give the author of the class control over how it can be converted (it prevents users from creating implicit conversions). As an implementer I would consider this an advantage, as implicit conversions do have their issues.
There is a difference between being able to pass one object as another and having to go through a conversion function. The former communicates that the object is of a given type, while the latter shows new readers that there is a difference between the two types and that a conversion is necessary.
Implicit user-defined conversions are frowned upon anyway. Don't use them. Just pretend that they aren't there. Let alone thinking about newer ways to introduce them.
Anyway, I guess they aren't allowed because, as they are, conversions can already do enough unexpected things. Including a new header that introduces such a conversion for a class defined somewhere else might lead to even more confusing errors.
There's a group of operators that have to be overloaded as non-static member functions: assignment, subscripting, function call, class member access, conversion functions.
I guess the standards committee or Stroustrup simply felt it might be too confusing if it were allowed to inject these very special behaviors into classes from outside.
I suppose the best way to get the answer would be to e-mail the author.