Currently I have a member function defined as such:
template<typename T> bool updateParameter(const std::string& name, const T& data);
With an overload for pointers.
template<typename T> bool updateParameter(const std::string& name, T* data);
I would like to be able to use this function as such:
int test = 20;
updateParameter<int>("name", 0);
updateParameter<int>("Referenced parameter", &test);
This way I can have a parameter object that either owns the data that it represents, or points to a user owned member.
The problem with the current setup is that MSVC implicitly converts the literal 0 in the "name" call to a pointer, so it ends up calling the overload designed for pointers. I can use the explicit keyword, but then I can't get the implicit conversion from const char[] to std::string for the name parameter.
Is there a way of telling the compiler (MSVC and GCC) that a certain parameter should not be implicitly converted, or at least to make it prefer the const T& version over the T* version?
This is a VC++ bug. The first argument's conversion is identical for both overloads (char const[5] => std::string const&).
For the second argument, there are two distinct standard conversion sequences though: For the T const&-overload, the conversion is an identity conversion - §13.3.3.1.4/1:
When a parameter of reference type binds directly (8.5.3) to an
argument expression, the implicit conversion sequence is the identity
conversion, unless the argument expression has a type that is a
derived class of the parameter type […]
However, converting 0 to a pointer type only has Conversion rank. §4.10 says:
A null pointer constant is an integer literal (2.13.2) with value zero
or a prvalue of type std::nullptr_t. A null pointer constant can be
converted to a pointer type; the result is the null pointer value of
that type and is distinguishable from every other value of object
pointer or function pointer type. Such a conversion
is called a null pointer conversion.
And §13.3.3.1.1/3 (Table 12) categorizes these accordingly: the identity conversion has Exact Match rank, while the null pointer conversion only has Conversion rank, so the T const& overload is the better match and should be selected.
The best workaround is to simply upgrade VC++, as recent versions select the correct overload (e.g. compare with rextester's VC++).
Another option is to take data by reference instead for your second overload. Ambiguities would be prevented by §13.3.3.2/3.2.6. Or simply don't overload updateParameter at all and provide a second function template instead.
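A hedged sketch of those last two suggestions (shown as free function templates for brevity; the exact signatures and the name trackParameter are mine, not from the answer):

#include <string>

// (a) Take data by non-const reference in the second overload. A literal 0 cannot
//     bind to T&, so updateParameter<int>("name", 0) can only pick the const T&
//     overload, while an lvalue such as test prefers T& over const T&.
//     The call site then becomes updateParameter<int>("Referenced parameter", test).
template<typename T> bool updateParameter(const std::string& name, const T& data); // parameter owns a copy
template<typename T> bool updateParameter(const std::string& name, T& data);       // parameter refers to user data

// (b) Or don't overload at all: give the referencing variant its own name and keep
//     the pointer interface.
template<typename T> bool trackParameter(const std::string& name, T* data);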
Related
In "C++ Primer", exercise 14.47, there is a question:
Explain the difference between these two conversion
operators:
struct Integral {
    operator const int();
    operator int() const;
};
I don't understand why the answer I found on GitHub says that the first const is meaningless; its reasoning is that a conversion operator does not declare a return type, so this const has no effect and will be ignored by the compiler. But I also found people saying that it means the function will return a const value.
So, I wonder which one is correct, and why?
It will be ignored by the compiler.
This is because of expr#6 which states:
If a prvalue initially has the type cv T, where T is a cv-unqualified non-class, non-array type, the type of the expression is adjusted to T prior to any further analysis.
This means that in your particular example, const int will be adjusted to int before any further analysis, since int is a built-in, non-class type.
which one is right?
Even though the return type of the first conversion function is const int, it will be adjusted to int prior to any further analysis.
The const on the second conversion function means that the this pointer inside that function has type const Integral*. This means that it (the conversion function) can be called on const as well as non-const Integral objects. This is from class.this:
If the member function is declared const, the type of this is const X*, ...
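A small illustration of both points (my own example, not from the book or the GitHub answer):

#include <iostream>

struct Integral {
    operator const int() { return 1; }  // const on the returned prvalue is dropped (some compilers warn about it)
    operator int() const { return 2; }  // const member function: usable on const objects
};

int main() {
    Integral a;
    const Integral b{};
    int x = a;  // calls operator const int(): the non-const member matches a non-const object better
    int y = b;  // only operator int() const is viable for a const object
    std::cout << x << ' ' << y << '\n';  // prints 1 2
}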
Is
char x[10] = "banana";
considered to be an implicit conversion from const char[7] to char[10]?
Since std::is_convertible<const char[7], char[10]>::value evaluates to false, the obvious answer would be that it isn't, but I couldn't find a proper definition of implicit conversion anywhere. Reading cppreference, I'd say that it is, because:
Implicit conversions are performed whenever an expression of some type
T1 is used in context that does not accept that type, but accepts some
other type T2; in particular: ... when initializing a new object of
type T2, including return statement in a function returning T2;
although I'm not sure why they didn't exclude explicit constructors from this case.
Follow-up question (which may be useless):
Are arrays completely excluded from any kind of conversions (meaning array-to-array conversions) ?
Language-lawyerly speaking, initializing a char array from a string literal is an implicit conversion.
[conv.general]:
An expression E can be implicitly converted to a type T if and only if the declaration T t=E; is well-formed, for some invented temporary variable t ([dcl.init]).
Note that the core language only defines implicit conversion from an expression to a type. So the meaning of "implicit conversion from const char[7] to char[10]" is undefined.
is_convertible<From, To>::value is false whenever To is an array type, because it is defined to produce false if To is not a valid return type, which an array is not. (This can be implemented in different ways.)
[meta.rel]/5:
The predicate condition for a template specialization is_convertible<From, To> shall be satisfied if and only if the return expression in the following code would be well-formed, including any implicit conversions to the return type of the function:
To test() {
return declval<From>();
}
Arrays can rarely be the destination of implicit conversions, since a parameter of array type is adjusted to a pointer and a function cannot return an array. But they are not excluded from the temporary materialization conversion.
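A quick check of both points (my own illustration):

#include <type_traits>

// is_convertible is false whenever the destination is an array type, simply because
// a function cannot return an array:
static_assert(!std::is_convertible<const char[7], char[10]>::value, "array destination");
static_assert(!std::is_convertible<char[10], char[10]>::value, "even for the same array type");

// ...while the array-to-pointer direction is an ordinary implicit conversion:
static_assert(std::is_convertible<const char[7], const char*>::value, "array decays to pointer");

int main() {}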
In this declaration
char x[10] = "banana";
there is no conversion. Elements of the string literal are used to initialize elements of the array. In fact it is equivalent to
char x[10] = { 'b', 'a', 'n', 'a', 'n', 'a', '\0' };
If instead of the array you declared a pointer like
const char *x = "banana";
then the string literal having the type const char[7] would be implicitly converted to a pointer to its first element of the type const char *.
The above declaration is equivalent to
const char *x = &"banana"[0];
I don't have a copy of the newest standard, but in [dcl.init.string], there is the paragraph:
If there are fewer initializers than there are array elements, each element not explicitly initialized shall be zero-initialized (8.5).
(C++11 §8.5.2/3)
So it is the specified behavior for initializing a char array of known size with a literal.
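In other words (my own illustration of the quoted rule):

#include <cassert>

int main() {
    char x[10] = "banana";   // 7 initializers: 'b','a','n','a','n','a','\0'
    assert(x[6] == '\0');    // from the literal's terminator
    assert(x[7] == 0 && x[8] == 0 && x[9] == 0);  // the remaining elements are zero-initialized
}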
For the purposes of overload resolution, it is considered to be an implicit conversion, but one that involves no actual conversions (the identity conversion sequence). See [over.ics.list]/4:
Otherwise, if the parameter type is a character array [footnote 125] and the initializer list has a single element that is an appropriately-typed string-literal (9.4.2), the implicit conversion sequence is the identity conversion.
[Footnote 125] Since there are no parameters of array type, this will only occur as the referenced type of a reference parameter.
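The footnote's situation looks like this in practice (a closely related sketch of mine, with a character array as the referenced type of a reference parameter):

#include <iostream>

void f(const char (&arr)[7])   // reference to an array of exactly 7 char
{
    std::cout << sizeof(arr) << '\n';  // 7: the array type, including '\0', is preserved
}

int main() {
    f("banana");   // "banana" has type const char[7]; identity conversion sequence
    // f("grape"); // would not compile: const char[6] does not match const char (&)[7]
}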
The implicit conversion to array type itself (not a reference to array) seems to be pretty restricted: it's apparently only allowed in a variable definition. Something like static_cast<char[10]>("banana") won't compile (at least on GCC and Clang).
I have a header file that defines a class with a boolean_type conversion operator, which is used in main.cpp. I am not able to understand how the boolean_type operator gets called in an if statement.
Header.h
template <typename Type>
class Sptr
{
public:
    struct boolean_struct { int member; };
    typedef int boolean_struct::* boolean_type;

    operator boolean_type() const throw()
    {
        return nullptr != m_value ? &boolean_struct::member : nullptr;
    }

    explicit Sptr(Type value = nullptr) throw() :
        m_value(value)
    {
    }

private:
    Type m_value;
};
main.cpp
#include <iostream>
#include "Header.h"

int main()
{
    int* p = new int;
    Sptr<int*> sPtr(p);
    if (sPtr) //>>>>> How is this calling operator boolean_type() ?
    {
        std::cout << "hellos";
    }
}
C++ is all too happy to make a construct work, so long as it can (within certain constraints) find a valid conversion sequence to a type that works in the given context.
In this case, the compiler sees an if statement, whose condition must be converted to bool. It will therefore attempt to form a bool value from sPtr. A conversion sequence can contain at most one user-defined conversion, but it may also contain standard conversions.
Here, the first conversion in the sequence is the user-defined one to boolean_type. Following that, we can perform the following standard conversion:
[conv.bool] (emphasis mine)
1 A prvalue of arithmetic, unscoped enumeration, pointer, or pointer-to-member type can be converted to a prvalue of type bool. A zero value, null pointer value, or null member pointer value is converted to false; any other value is converted to true.
boolean_type is a pointer-to-member type, and so it can be converted to the bool we need for the if statement to work.
The reason such code may have been written in the past is in the first paragraph of this answer: C++ can be too conversion-happy. Had the user-defined conversion been a plain operator bool, one could end up with silliness such as this:
sPtr * 2; // What does multiplying a pointer even mean!? Why?
foo(sPtr); // But foo expects an int!
The conversion to bool permits using sPtr in contexts that require an integral type, because, as it were, bool is also an integral type in the C++ type system! We don't want that. So the above trick is employed to prevent the compiler from considering our user-defined conversion in more contexts than we want.
Nowadays, such tricks are not required. Since C++11, we have the notion of a type being contextually converted to bool, along with explicit conversion operators. Basically, if we write instead
explicit operator bool()
This operator will only be called in very specific contexts (such as the condition in an if statement), but not implicitly in the problematic examples listed above.
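For comparison, a minimal modern sketch (my own, not the original header) replacing the pointer-to-member dance with an explicit conversion operator:

#include <iostream>

template <typename Type>
class Sptr
{
public:
    explicit Sptr(Type value = nullptr) noexcept : m_value(value) {}

    explicit operator bool() const noexcept { return m_value != nullptr; }

private:
    Type m_value;
};

int main()
{
    int* p = new int;
    Sptr<int*> sPtr(p);
    if (sPtr)              // OK: the condition is contextually converted to bool
    {
        std::cout << "hellos";
    }
    // int n = sPtr;       // error: no implicit conversion to an integral type
    // sPtr * 2;           // error: the silliness above no longer compiles
    delete p;
}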
I was trying to post this code as an answer to this question, by making this pointer wrapper (replacing a raw pointer). The idea is to propagate const to the pointee, so that the filter function can't modify the values.
#include <iostream>
#include <vector>

template <typename T>
class my_pointer
{
    T *ptr_;
public:
    my_pointer(T *ptr = nullptr) : ptr_(ptr) {}

    operator T* &() { return ptr_; }
    operator T const*() const { return ptr_; }
};

std::vector<my_pointer<int>> filter(std::vector<my_pointer<int>> const& vec)
{
    //*vec.front() = 5; // this is supposed to be an error by requirement
    return {};
}

int main()
{
    std::vector<my_pointer<int>> vec = {new int(0)};
    filter(vec);
    delete vec.front(); // ambiguity with g++ and clang++
}
Visual C++ 12 and 14 compile this without an error, but GCC and Clang on Coliru claim that there's an ambiguity. I was expecting them to choose the non-const std::vector::front overload and then my_pointer::operator T* &, but no. Why is that?
[expr.delete]/1:
The operand shall be of pointer to object type or of class type. If of
class type, the operand is contextually implicitly converted (Clause
[conv]) to a pointer to object type.
[conv]/5, emphasis mine:
Certain language constructs require conversion to a value having one
of a specified set of types appropriate to the construct. An
expression e of class type E appearing in such a context is said
to be contextually implicitly converted to a specified type T and
is well-formed if and only if e can be implicitly converted to a type
T that is determined as follows: E is searched for non-explicit
conversion functions whose return type is cv T or reference to cv T
such that T is allowed by the context. There shall be exactly
one such T.
In your code, there are two such Ts (int * and const int *). It is therefore ill-formed, before you even get to overload resolution.
Note that there's a change in this area between C++11 and C++14. C++11 [expr.delete]/1-2 says
The operand shall have a pointer to object type, or a class type
having a single non-explicit conversion function (12.3.2) to a pointer
to object type. [...]
If the operand has a class type, the operand is converted to a pointer type by calling the above-mentioned conversion function, [...]
Which would, if read literally, permit your code and always call operator const int*() const, because int* & is a reference type, not a pointer to object type. In practice, implementations consider conversion functions to "reference to pointer to object" like operator int*&() as well, and then reject the code because it has more than one qualifying non-explicit conversion function.
The delete expression takes a cast-expression as its operand, which can be const or not.
vec.front() is not const, but it must first be converted to a pointer for delete. So both int* and const int* are possible targets; the compiler cannot choose which one you want.
The easiest fix is to use a cast to resolve the choice. For example:
delete (int*)vec.front();
Remark: it works if you use a get() function instead of a conversion, because the rules are different: the choice of the overloaded function is based on the types of the parameters and of the object, not on the return type. Here the non-const version is the best viable function, since vec.front() is not const.
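A hedged sketch of that remark (my own code): with a get() member, the choice depends on the const-ness of the object rather than on the return type, so there is no ambiguity.

template <typename T>
class my_pointer
{
    T *ptr_;
public:
    my_pointer(T *ptr = nullptr) : ptr_(ptr) {}

    T*       get()       { return ptr_; }  // picked for a non-const my_pointer
    const T* get() const { return ptr_; }  // picked for a const my_pointer
};

// usage in the original main():
//     delete vec.front().get();   // unambiguous: vec.front() is not const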
Edit: I have reformatted the post to be clearer.
Why does this work:
struct A {};

struct B {
    B(A) {}
};

void operator+(const B&, const B&) {}

int main()
{
    A a1, a2;
    a1 + a2;
}
and this does not?
struct B {
    B(const char*) {}
};

void operator+(const B&, const B&) {} //error: invalid operands of types 'const char [6]' and 'const char [6]' to binary 'operator+'|

int main()
{
    "Hello" + "world";
}
Essentially, in the first example a1 and a2 both convert to B objects through the converting constructor and are added with operator+(const B&, const B&).
Following from this example, I would have expected "Hello" and "world" to convert to B objects, again through the converting constructor, and to be added with operator+(const B&, const B&). Instead there is an error, which indicates that the C-style strings do not attempt a user-defined conversion to B in order to be added. Why is this? Is there a fundamental property that prevents this?
In your first example, overload resolution is allowed to find your operator+:
[C++14: 13.3.1.2/2]: If either operand has a type that is a class or an enumeration, a user-defined operator function might be declared that implements this operator or a user-defined conversion can be necessary to convert the operand to a type that is appropriate for a built-in operator. In this case, overload resolution is used to determine which operator function or built-in operator is to be invoked to implement the operator. [..]
[C++14: 13.3.2/1]: From the set of candidate functions constructed for a given context (13.3.1), a set of viable functions is chosen, from which the best function will be selected by comparing argument conversion sequences for the best fit (13.3.3). The selection of viable functions considers relationships between arguments and function parameters other than the ranking of conversion sequences.
[C++14: 13.3.2/2]: First, to be a viable function, a candidate function shall have enough parameters to agree in number with the arguments in the list.
If there are m arguments in the list, all candidate functions having exactly m parameters are viable.
[..]
[C++14: 13.3.2/3]: Second, for F to be a viable function, there shall exist for each argument an implicit conversion sequence (13.3.3.1) that converts that argument to the corresponding parameter of F. [..]
(You may examine the wording for "implicit conversion sequence" yourself to see that the operator+ call is permissible; the rules are too verbose to warrant verbatim reproduction here.)
However, in your second example, overload resolution is constrained to a basic arithmetic addition mechanism (one which is not defined for const char[N] or const char*), effectively prohibiting any operator+ function from being considered:
[C++14: 13.3.1.2/1]: If no operand of an operator in an expression has a type that is a class or an enumeration, the operator is assumed to be a built-in operator and interpreted according to Clause 5.
[C++14: 5.7/1]: [..] For addition, either both operands shall have arithmetic or unscoped enumeration type, or one operand shall be a pointer to a completely-defined object type and the other shall have integral or unscoped enumeration type. [..]
[C++14: 5.7/3]: The result of the binary + operator is the sum of the operands.
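To see the difference in one place, a small sketch (my own, reusing the question's B): as soon as one operand has class type, overload resolution considers the user-defined operator+ and converts the other operand.

struct B {
    B(const char*) {}
};

void operator+(const B&, const B&) {}

int main()
{
    B("Hello") + "world";   // OK: the left operand has class type, so the user-defined
                            // operator+ is found and "world" is converted to B
    // "Hello" + "world";   // error: no operand of class or enumeration type, so only
                            // the built-in operator+ is considered
}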
1. Explaining your compiler error:
The reason you can't concatenate two string literals using the '+' operator is that string literals are simply arrays of characters, and you can't concatenate two arrays.
Arrays are implicitly converted to a pointer to their first element.
Or as the standard describes it:
[conv.array]
An lvalue or rvalue of type “array of N T” or “array of unknown bound
of T” can be converted to a prvalue of type “pointer to T”. The result
is a pointer to the first element of the array.
What you are really doing in the example above is trying to add two const char pointers together, and that is not possible.
2. Why the string literals aren't implicitly converted:
Since arrays and pointers are built-in types, you can't give them an implicit conversion operator the way you did in your class example.
The main thing to keep in mind is that std::string knows how to construct itself from a char[], but a char[] does not know how to become a std::string. In your example, B plays the role of std::string: you've given it a converting constructor, so the conversion knowledge lives on the class side.
3. Alternatives:
You can concatenate string literals by leaving out the plus operator.
"stack" "overflow"; //this will work as you indented
Optionally, you could make "stack" a std::string and then use std::string's overloaded '+' operator:
std::string("stack") + "overflow"; //this will work