I am curious about the following code. Could somebody explain why it calls the bool overload first? Isn't "str" a better match for the string parameter?
#include <iostream>
#include <string>
using namespace std;

void a(bool input)
{
    cout << "I am first" << endl;
    cout << input << endl;
}
void a(const string &input)
{
    cout << "I am second" << endl;
    cout << input << endl;
}
int main()
{
    a("str");         // calls void a(bool input)
    a(string("str")); // calls void a(const string &input)
    return 0;
}
"str" is of type const char[4], which decays immediately to const char *, and the conversion from any pointer type to bool is considered before non-explicit constructors to custom types.
So, I'd say that the answer is "because the standard says so".
The relevant passage should be 13.3.3.2 ¶2:
When comparing the basic forms of implicit conversion sequences (as defined in 13.3.3.1)
a standard conversion sequence (13.3.3.1.1) is a better conversion sequence than a user-defined conversion sequence or an ellipsis conversion sequence [...]
I'd guess it is because when you call a("str"), the argument decays to a const char *. The pointer-to-bool standard conversion is preferred over any user-defined implicit conversion (such as the one to ::std::string).
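A common way to route string literals to the string overload is to add an exact-match const char * overload that forwards to it. A minimal sketch (the forwarding overload is my addition, not part of the original code):

#include <iostream>
#include <string>
using namespace std;

void a(bool input)          { cout << "bool: " << input << endl; }
void a(const string &input) { cout << "string: " << input << endl; }

// Exact match for a decayed string literal; forwards to the string overload.
void a(const char *input) { a(string(input)); }

int main()
{
    a("str"); // now calls a(const char *) -> a(const string &)
    a(true);  // still calls a(bool)
    return 0;
}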
I fixed a bug recently.
In the following code, one of the overloaded functions was const and the other one was not. The issue will be fixed by making both functions const.
My question is: why does the compiler only complain about it when the parameter is 0?
#include <iostream>
#include <string>
class CppSyntaxA
{
public:
void f(int i = 0) const { i++; }
void f(const std::string&){}
};
int main()
{
CppSyntaxA a;
a.f(1); // OK
//a.f(0); //error C2666: 'CppSyntaxA::f': 2 overloads have similar conversions
return 0;
}
0 is special in C++. A null pointer has the value 0, so C++ allows the conversion of the literal 0 to any pointer type. That means when you call
a.f(0);
You could be calling void f(int i = 0) const with an int of value 0, or you could be calling void f(const std::string&) with a const char* initialized to null.
Normally the int version would be better since it is an exact match, but in this case the int version is const, so it requires "converting" a to a const CppSyntaxA. The std::string version does not require that object conversion, but it does require converting 0 to const char* and then to std::string. Each overload therefore wins on one argument and loses on the other, so the call is ambiguous. Making both functions const (or both non-const) fixes the issue, and the int overload is then chosen since it is better.
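One call-site workaround (a sketch of my own, not from the original post) is to make the implicit object argument const yourself; the non-const std::string overload then drops out of the candidate set:

CppSyntaxA a;
const CppSyntaxA &ca = a;
ca.f(0); // OK: f(const std::string&) is not const, so only f(int) const is viable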
My question is: why does the compiler only complain about it when the parameter is 0?
Because 0 is not only an integer literal, but it is also a null pointer literal. 1 is not a null pointer literal, so there is no ambiguity.
The ambiguity arises from the implicit converting constructor of std::string that accepts a pointer to a character as an argument.
Now, the identity conversion from int to int would otherwise be preferred to the conversion from pointer to string, but there is another argument that involves a conversion: the implicit object argument. In one case the conversion is from CppSyntaxA& to CppSyntaxA&, while in the other it is from CppSyntaxA& to const CppSyntaxA&.
So, one overload is preferred because of one argument, and the other overload is preferred because of another argument and thus there is no unambiguously preferred overload.
The issue will be fixed by making both functions const.
If both overloads are const qualified, then the implicit object argument conversion sequence is identical, and thus one of the overloads is unambiguously preferred.
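For completeness, here is a sketch of the fixed class with both overloads const-qualified; under this version a.f(0) resolves to the int overload:

#include <string>

class CppSyntaxA
{
public:
    void f(int i = 0) const { i++; }
    void f(const std::string &) const {} // now const as well
};

int main()
{
    CppSyntaxA a;
    a.f(1); // OK, exact match for int
    a.f(0); // OK now: identity conversion to int beats 0 -> const char* -> std::string
    return 0;
}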
I'm writing an interface to a 3rd party library. It manipulates objects through a C interface which essentially is a void*. Here is the code, simplified:
struct LibIntf
{
LibIntf() : opaquePtr{nullptr} {}
operator void *() /* const */ { return opaquePtr; }
operator void **() { return &opaquePtr; }
void *opaquePtr;
};
int UseMe(void *ptr)
{
if (ptr == (void *)0x100)
return 1;
return 0;
}
void CreateMe(void **ptr)
{
*ptr = (void *)0x100;
}
int main()
{
LibIntf lib;
CreateMe(lib);
return UseMe(lib);
}
Everything works great until I add the const on the operator void *() line. The code then silently falls back to operator void **(), breaking the program.
My question is why?
I'm returning a pointer through a function that doesn't modify the object, so I should be able to mark it const. If that changed the result to a const pointer, the compiler should report an error, because operator void **() shouldn't be a good match for UseMe(), which just wants a void *.
This is what the standard says should happen, but this is far from obvious. For quick readers, jump to the "How to fix it?" at the end.
Understanding why the const matters
Once you add the const qualifier, when you call UseMe with an instance of LibIntf, the compiler has the following two possibilities:
LibIntf →1 LibIntf →2 void** →3 void* (through operator void**())
LibIntf →3 const LibIntf →2 void* →1 void* (through operator void*() const)
1) No conversion needed.
2) User-defined conversion operator.
3) Standard conversions.
Those two conversion paths are legal, so which one to choose?
The C++ standard answers:
[over.match.best]/1
Define ICSi(F) as follows:
[...]
let ICSi(F) denote the implicit conversion sequence that converts the ith argument in the list to the type of the ith parameter of viable function F. [over.best.ics] defines the implicit conversion sequences and [over.ics.rank] defines what it means for one implicit conversion sequence to be a better conversion sequence or worse conversion sequence than another.
Given these definitions, a viable function F1 is defined to be a
better function than another viable function F2 if for all arguments
i, ICSi(F1) is not a worse conversion sequence than ICSi(F2), and then
for some argument j, ICSj(F1) is a better conversion sequence than ICSj(F2), or, if not that,
the context is an initialization by user-defined conversion (see [dcl.init], [over.match.conv], and [over.match.ref]) and the standard conversion sequence from the return type of F1 to the destination type (i.e., the type of the entity being initialized) is a better conversion sequence than the standard conversion sequence from the return type of F2 to the destination type.
(I had to read it a couple times before getting it.)
This all means, in your specific case, that option #1 is better than option #2, because for user-defined conversion operators the conversion of the return type (void** to void* in option #1) is considered after the conversion of the parameter type (LibIntf to const LibIntf in option #2).
In other words, in option #1 there is nothing to convert at this stage (later in the conversion chain there will be, but that is not considered yet), while option #2 needs a conversion from non-const to const. Option #1 is thus deemed better.
How to fix it?
Simply remove the need to consider the non-const to const conversion by explicitly casting the variable to const:
struct LibIntf
{
LibIntf() : opaquePtr{nullptr} {}
operator void *() const { return opaquePtr; }
operator void **() { return &opaquePtr; }
void *opaquePtr;
};
int UseMe(void *ptr)
{
if (ptr == (void *)0x100)
return 1;
return 0;
}
void CreateMe(void **ptr)
{
*ptr = (void *)0x100;
}
int main()
{
LibIntf lib;
CreateMe(lib);
// unfortunately, you cannot const_cast an instance, only refs & ptrs
return UseMe(static_cast<const LibIntf>(lib));
}
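If C++17 is available, std::as_const expresses the same intent without spelling out a cast by hand. A sketch under that assumption; it behaves like the static_cast above:

#include <utility> // std::as_const

int main()
{
    LibIntf lib;
    CreateMe(lib);
    // std::as_const yields a const LibIntf&, so only operator void *() const is viable
    return UseMe(std::as_const(lib));
}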
Why is C++ converting the string literal I pass in to a bool rather than to a string?
#include <iostream>
#include <string>
using namespace std;
class A
{
public:
A(string v)
{
cout << v;
}
A(bool v)
{
cout << v;
}
};
int main()
{
A("hello");
return 0;
}
Output: 1
Is it because the compiler isn't smart enough to make the jump from char * to string, and instead just assumes that bool is the closest thing to a pointer? Is my only option to write an explicit char * constructor that does exactly the same thing as the string constructor?
If you have C++11 you can use a delegating constructor:
A(char const* s) : A(std::string(s)) { }
The reason the boolean converting-constructor is chosen over the one for std::string is that the conversion from char const* to bool is a standard conversion, while the one to std::string is a user-defined conversion. Standard conversions rank better than user-defined conversions.
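Putting it together, a minimal sketch of the class with the delegating constructor (C++11):

#include <iostream>
#include <string>

class A
{
public:
    A(std::string v) { std::cout << v; }
    A(bool v)        { std::cout << v; }
    A(char const *s) : A(std::string(s)) {} // exact match for string literals
};

int main()
{
    A("hello"); // prints "hello": the const char* overload wins as an exact match
    return 0;
}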
With
A(string("hello"));
it will give the expected result.
Why is it so?
It's because of the standard conversions:
"hello" is understood as a const char* pointer
this pointer can be converted to a bool (section 4.12 of the standard: "A prvalue of (...) pointer (...) type can be converted to a prvalue of type bool." )
the conversion from "hello" to string is not considered, because section 12.3 of the standard explains that "Type conversions of class objects can be specified by constructors and by conversion functions. These conversions are called user-defined conversions" and "User-defined conversions are applied only where they are unambiguous". If you didn't have the bool constructor, the std::string conversion would be done implicitly.
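The pointer-to-bool standard conversion is easy to observe in isolation; a minimal sketch:

#include <iostream>

int main()
{
    const char *p = "hello";
    bool b = p;             // standard conversion: non-null pointer -> true
    std::cout << b << '\n'; // prints 1
    return 0;
}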
How to get what you expected?
Just add the missing constructor for string literals:
A(const char* v)
{
cout << v; // or convert v to string if you want to store it in a string member
}
Of course, instead of rewriting a constructor from scratch, you could opt for a delegating constructor as suggested by 0x499602D2 in another answer.
I ran into this problem recently too; let me share another way.
You can change the bool constructor to take unsigned char. Then the decayed string-literal pointer has no standard conversion to the parameter type, so the std::string constructor is chosen instead.
#include <iostream>
#include <string>
using namespace std;

class A
{
public:
A(string v)
{
cout << v;
}
A(unsigned char v)
{
cout << static_cast<bool>(v);
}
};
int main()
{
A("Hello"); // <- Call A(string)
A(false); // <- Call A(unsigned char)
}
This way you don't need to always provide overloads for both std::string and const char*, nor bloat the code by constructing the std::string at every call site.
I don't claim that's better, but it's simpler.
When selecting an overloaded method, this is the order the compiler tries:
Exact match
Promotion
Standard conversion (numeric conversions, pointer conversions such as pointer-to-bool)
User-defined conversion
A pointer doesn't promote to bool, but it does convert to bool, which is a standard conversion. A char* converts to std::string only through a user-defined conversion (std::string's converting constructor). Note that if char* converted to both bool and std::string at the same rank in this list, it would be ambiguous which overload the compiler should choose, and the compiler would throw an "ambiguous overload" error.
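A short sketch of that ranking in action (the names here are illustrative, not from the question):

#include <iostream>

void g(int)    { std::cout << "int\n"; }    // promotion target for short
void g(double) { std::cout << "double\n"; } // would need a standard conversion

int main()
{
    short s = 1;
    g(s); // prints "int": promotion (short -> int) beats conversion (short -> double)
    return 0;
}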
I would throw my weight behind 0x499602D2's solution. If you have C++11, your best bet is to add: A(const char* s) : A(std::string(s)) {}
If you don't have C++11, then I would create an A(const char* s) constructor, move the logic of the A(std::string) constructor into a helper method, and call that method from both constructors.
http://www.learncpp.com/cpp-tutorial/76-function-overloading/
Here is the code:
#include <iostream>
using namespace std;

class A{
public:
int val;
char cval;
A():val(10),cval('a'){ }
operator char() const{ return cval; }
operator int() const{ return val; }
};
int main()
{
A a;
cout << a;
}
I am running the code in VS 2013; the output is 10. If I comment out operator int() const { return val; }, the output becomes a.
My question is: how does the compiler determine which implicit conversion to choose, given that both int and char are possible operands for the << operator?
Yes, this is ambiguous, but the cause of the ambiguity is actually rather surprising. It is not that the compiler cannot distinguish between ostream::operator<<(int) and operator<<(ostream &, char); the latter is actually a template while the former is not, so if the matches are equally good the first one will be selected, and there is no ambiguity between those two. Rather, the ambiguity comes from ostream's other member operator<< overloads.
A minimized repro is
struct A{
operator char() const{ return 'a'; }
operator int() const{ return 10; }
};
struct B {
void operator<< (int) { }
void operator<< (long) { }
};
int main()
{
A a;
B b;
b << a;
}
The problem is that the conversion of a to long can be via either a.operator char() or a.operator int(), both followed by a standard conversion sequence consisting of an integral conversion. The standard says that (§13.3.3.1 [over.best.ics]/p10, footnote omitted):
If several different sequences of conversions exist that each convert
the argument to the parameter type, the implicit conversion sequence
associated with the parameter is defined to be the unique conversion
sequence designated the ambiguous conversion sequence. For the
purpose of ranking implicit conversion sequences as described in
13.3.3.2, the ambiguous conversion sequence is treated as a user-defined sequence that is indistinguishable from any other
user-defined conversion sequence. *
Since the conversion of a to int also involves a user-defined conversion sequence, it is indistinguishable from the ambiguous conversion sequence from a to long, and in this context no other rule in §13.3.3 [over.match.best] applies to distinguish the two overloads either. Hence, the call is ambiguous, and the program is ill-formed.
* The next sentence in the standard says "If a function that uses the ambiguous conversion sequence is selected as the best viable function, the call will be ill-formed because the conversion of one of the arguments in the call is ambiguous.", which doesn't seem necessarily correct, but detailed discussion of this issue is probably better in a separate question.
It shouldn't compile, since the conversion is ambiguous; and indeed it doesn't with my compiler. I've no idea why your compiler accepts it, or how it chooses which conversion to use, but it's wrong.
You can resolve the ambiguity with an explicit cast:
cout << static_cast<char>(a); // uses operator char()
cout << static_cast<int>(a); // uses operator int()
Personally, I'd probably use named conversion functions, rather than operators, if I wanted it to be convertible to more than one type.
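For instance, a sketch of the named-function approach (the names are my own):

#include <iostream>

class A
{
public:
    int val;
    char cval;
    A() : val(10), cval('a') {}

    // Named conversions instead of operators: the call site states its intent.
    char as_char() const { return cval; }
    int  as_int()  const { return val; }
};

int main()
{
    A a;
    std::cout << a.as_int() << ' ' << a.as_char() << '\n'; // prints "10 a"
    return 0;
}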
A debugging session gives the answer: one is the globally defined operator<<, the other is a class member function. You can guess which call resolves to which.
Test.exe!std::operator<<<std::char_traits<char> >(std::basic_ostream<char,std::char_traits<char> > & _Ostr, char _Ch)
msvcp120d.dll!std::basic_ostream<char,std::char_traits<char> >::operator<<(int _Val) Line 292 C++
I am not a language lawyer, but I believe the compiler is giving precedence to the member function.
Why is the following overloaded function call ambiguous? It fails with the compile error:
call of overloaded 'test(long int)' is ambiguous,candidates are: void test(A)|
void test(B)|
The code:
class A
{
public:
A(int){}
A(){}
};
class B: public A
{
public:
B(long){}
B(){}
};
void test(A a)
{
}
void test(B b)
{
}
void main()
{
test(0L);
return;
}
You got an error because overload resolution has to choose from two equally viable functions (both have user-defined conversions). Function overload resolution is a very complicated subject. For more details on the overload resolution see e.g. this recent lecture by Stephan T. Lavavej. It's generally best to make single-argument constructors explicit, and then to call your function with an explicit constructor argument.
test(0L) is not an exact match to any overload because there is no overload test(long). The two overloads you provided both have user-defined conversions on their arguments, but the compiler considers them equally viable. The A overload has to do a standard conversion (long to int) followed by a user-defined conversion (int to A), and the B overload a user-defined conversion (long to B). But both are implicit user-defined conversion sequences.
How are these ranked? The Standard says in 13.3.3.2 Ranking implicit conversion sequences [over.ics.rank]
Standard conversion sequence S1 is a better conversion sequence than
standard conversion sequence S2 if S1 is a proper subsequence of S2
This kind of tie-breaking would apply, e.g., if A were a class derived from B (or vice versa). But here neither conversion sequence is a subsequence of the other, so they are equally viable and the compiler cannot resolve the call.
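To see the proper-subsequence rule fire, here is a minimal sketch of my own (the types are illustrative): the identity sequence for g(Derived&) is a proper subsequence of the derived-to-base sequence for g(Base&), so the call below is unambiguous.

struct Base {};
struct Derived : Base {};

void g(Base&)    {} // needs a derived-to-base conversion
void g(Derived&) {} // exact match

int main()
{
    Derived d;
    g(d); // unambiguously calls g(Derived&)
    return 0;
}

No such relationship holds in the original example, so one fix is to make both constructors explicit and name the target type at the call site: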
class A
{
public:
explicit A(int){}
A(){}
};
class B: public A
{
public:
explicit B(long){}
B(){}
};
void test(A a)
{}
void test(B b)
{}
int main()
{
test(A(0L)); // call first overload
test(B(0L)); // call second overload
return 0;
}
NOTE: it's int main(), not void main().
Overload resolution considers exact argument-type matches and implicit conversions. In your example both alternatives, A(0L) and B(0L), look the same to overload resolution, because each requires an implicit converting-constructor call.
You are calling test with a parameter of type long.
There is no test(long).
The compiler has to choose between test(A) and test(B).
To call test(A) it has a conversion sequence of long -> int -> A.
To call test(B) it has a conversion sequence of long -> B.
Depending on the ranking rules of the standard, it will either pick one (if one is ranked better than the other) or fail with an ambiguity error.
In this specific case the two conversion sequences are ranked equally.
There is a long list of rules in the standard about how the ranking of conversion sequences is calculated, in section 13.3.3 "Best viable function".
Try this:
class A
{
public:
explicit A(int){}
A(){}
};
The explicit keyword stops the compiler from using that constructor for implicit conversions.
The compiler is allowed to apply only one implicit user-defined conversion. Conversions between primitive types do not count toward that limit, which is why test(B) works even though two conversions take place. But the following will not compile, because it would require two user-defined conversions:
class B
{
public:
B(int) {}
};
class A
{
public:
A(const B&) {}
};
void test(const A&) {}
....
test(5);
To prevent the compiler from doing the implicit conversion, declare the constructor explicit.
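A sketch of that fix applied to the example above; with explicit constructors, the caller must spell out each user-defined conversion:

class B
{
public:
    explicit B(int) {}
};

class A
{
public:
    explicit A(const B&) {}
};

void test(const A&) {}

int main()
{
    // test(5);    // still an error: would need two user-defined conversions
    test(A(B(5))); // OK: each conversion is written explicitly
    return 0;
}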