To prevent "unexpected issues" with format specifiers when the types involved are defined in other modules, I'm looking for a cast operator that will fail to compile on a narrowing conversion: such a conversion represents a fundamental type error that needs to be addressed.
For example, consider a using alias in some external header that has recently been changed from a compatible type to an incompatible one:
namespace y {
using X = uint64_t; // "recent change", was previously int32_t
}
The goal is to get this to fail (an error, not a warning-as-error), as the result is used with "%d":
y::X a; // the question concerns ANY value in the domain of X, not run-time overflow
int32_t r = cast_that_fails_to_compile_if_can_narrow<int32_t>(a);
// If the cast above compiles, "%d" is guaranteed to be a valid format
// specifier (per http://www.cplusplus.com/reference/cstdio/printf/)
// for the provided argument.
printf("%d", r);
(In this case the intractable narrowing issue should be handled by additional code changes.)
Initialization with braces (but not parentheses) disallows narrowing conversions:
int32_t r{a};
// or
int32_t r = {a};
// or
auto r = int32_t{a};
Your compiler may be allowing this anyway, but that is not standard-conforming [1]. E.g., for GCC you need to add the -pedantic-errors flag for it to actually generate a hard error.
Also note that the type for %d should be int. If you use int32_t instead, you are risking a similar issue should a platform use a differently sized int.
You can use it directly in the printf call:
printf("%d", int{a});
[1] The standard only ever requires compilers to print some diagnostic. It does not require hard errors that prevent compilation of the program. GCC, for example, only warns by default, but that is still conforming.
A template function can be made to check for narrowing casts but permit other static casts:
template <typename Tto, typename Tfr>
Tto static_cast_prohibit_narrow(Tfr v)
{
    static_assert(sizeof(Tfr) <= sizeof(Tto), "Narrowing cast prohibited");
    return static_cast<Tto>(v);
}
The compiler produces an error when it encounters a narrowing cast, while permitting other static_casts.
int main()
{
    long long int i1{ 4 };
    int i2{ 5 };
    //i2 = static_cast_prohibit_narrow<int>(i1); // Compiler static_assert error
    i1 = static_cast_prohibit_narrow<int>(i2);
}
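A stricter variant (a sketch, using the hypothetical name static_cast_no_narrow) delegates the check to list-initialization instead of comparing sizes. Unlike the sizeof test, it also rejects same-size signed/unsigned mixes such as int -> unsigned int, which the sizeof comparison would wrongly allow:

#include <cstdint>

template <typename Tto, typename Tfr>
Tto static_cast_no_narrow(Tfr v)
{
    return Tto{v}; // list-initialization rejects narrowing conversions
}

int main()
{
    int64_t wide{4};
    int ok = static_cast_no_narrow<int>(int16_t{4}); // OK: widening conversion
    // int bad = static_cast_no_narrow<int>(wide);   // error: narrowing
    (void)ok; (void)wide;
}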
Related
#include <cmath>
#include <cstdlib>
int main()
{
    short int k = 11;
    switch (std::abs(k)) {
    case 44:
        return 5;
        break;
    }
}
The above code compiles fine in GCC 4.4.7, and again in GCC 7.1 and later.
It gives an error in the releases in between, starting with GCC 4.5.4:
<source>: In function 'int main()':
<source>:7:23: error: switch quantity not an integer
So my question is: why was this breaking change introduced in GCC?
Or were the implementers not aware this was a breaking change? If so, how come - how do they test that they do not break existing code?
The question can be aimed at Clang as well, since it had similar issues with the abs function.
GCC's and Clang's C++ standard libraries (libstdc++ and libc++, respectively) broke backward compatibility in order to comply with the C++ standard.
The trouble is caused by this clause:
Moreover, there shall be additional overloads sufficient to ensure:
- If any arithmetic argument corresponding to a double parameter has type long double, then all arithmetic arguments corresponding to double parameters are effectively cast to long double.
- Otherwise, if any arithmetic argument corresponding to a double parameter has type double or an integer type, then all arithmetic arguments corresponding to double parameters are effectively cast to double.
- Otherwise, all arithmetic arguments corresponding to double parameters have type float.
short int is "an integer type", so the second bullet kicks in and causes generation of a wrapper which calls double abs(double), and this wrapper is a better match than int abs(int).
Notably, the latest drafts of the Standard have an explicit exception added to this rule:
For each set of overloaded functions within <cmath>, with the exception of abs, there shall be additional overloads sufficient to ensure:
This exception was actually introduced for the handling of unsigned types, but it solves your problem as well.
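A minimal workaround (a sketch) is to force the int overload yourself, which keeps the switch condition integral on both old and new library versions:

#include <cstdlib>

int main()
{
    short int k = 11;
    switch (std::abs(static_cast<int>(k))) { // exact match for int abs(int)
    case 44:
        return 5;
    }
    return 0;
}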
Clang (3.9.1) and GCC (7, snapshot) print "2" twice to the console when this code is run - both calls select set(int64_t).
However, MSVC fails to compile this code:
source_file.cpp(15): error C2668: 'Dictionary::set': ambiguous call to overloaded function
source_file.cpp(9): note: could be 'void Dictionary::set(int64_t)'
source_file.cpp(8): note: or 'void Dictionary::set(const char *)'
source_file.cpp(15): note: while trying to match the argument list '(const unsigned int)'
#include <cstdint>
#include <iostream>
static const unsigned ProtocolMajorVersion = 1;
static const unsigned ProtocolMinorVersion = 0;
class Dictionary {
public:
    void set(const char *Str) { std::cout << "1"; }
    void set(int64_t val) { std::cout << "2"; }
};

int main() {
    Dictionary dict;
    dict.set(ProtocolMajorVersion);
    dict.set(ProtocolMinorVersion);
}
I think MSVC is right: the value of ProtocolMinorVersion is 0, which can be treated as NULL or as int64_t(0).
However, clang and GCC do report the same ambiguity when dict.set(ProtocolMinorVersion) is replaced with dict.set(0):
source_file.cpp:15:10: error: call to member function 'set' is ambiguous
dict.set(0);
source_file.cpp:8:10: note: candidate function
void set(const char *Str) { std::cout << "1"; }
source_file.cpp:9:10: note: candidate function
void set(int64_t val) { std::cout << "2"; }
So what's going on here - which compiler is right? It would surprise me if both GCC and Clang accepted incorrect code; or is MSVC just being buggy? Please refer to the standard.
In C++11 and before, any integral constant expression which evaluates to 0 is considered a null pointer constant. This has been restricted in C++14: only integer literals with value 0 are considered. In addition, prvalues of type std::nullptr_t are null pointer constants since C++11. See [conv.ptr] and CWG 903.
Regarding overload resolution, both the integral conversion unsigned -> int64_t and the pointer conversion null pointer constant -> const char* have the same rank: Conversion. See [over.ics.scs] / Table 12.
So if ProtocolMinorVersion is considered a null pointer constant, then the calls are ambiguous. If you just compile the following program:
static const unsigned ProtocolMinorVersion = 0;

int main() {
    const char* p = ProtocolMinorVersion;
}
You will see that clang and gcc reject this conversion, whereas MSVC accepts it.
Since CWG 903 is considered a defect, I'd argue that clang and gcc are right.
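Incidentally, if the goal is simply code that compiles on all three compilers, an explicit conversion removes the ambiguity regardless of which reading of the standard prevails:

dict.set(static_cast<int64_t>(ProtocolMinorVersion)); // unambiguously "2"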
When two compilers agree and one doesn't, it's nearly always the one that doesn't that is wrong.
I would argue that if you declare a value as const unsigned somename = 0;, it is no longer a simple zero: it is a named unsigned constant with the value zero. It should therefore not be considered equivalent to a pointer type, leaving only one plausible candidate.
Having said that, BOTH of the set functions require a conversion (the argument is neither an int64_t nor a const char *), so one could argue that MSVC is right [the compiler shall pick the overload that requires the least conversion; if multiple overloads require an equal amount of conversion, the call is ambiguous] - although I still don't think the compiler should accept a named constant with the value zero as equivalent to a pointer...
Sorry, this is probably more of a "comment" than an answer - I started writing with the intention of saying "gcc/clang are right", but thinking more about it, I came to the conclusion that "although I would be happier with that behaviour, it's not clear that this is the CORRECT behaviour".
How can we tell the C++ compiler that it should avoid implicit conversions when using arithmetic operators such as + and /, i.e.,
size_t st_1, st_2;
int i_1, i_2;
auto st = st_1 + st_2; // should compile
auto i = i_1 + i_2; // should compile
auto error_1 = st_1 + i_2; // should not compile
auto error_2 = i_1 + st_2; // should not compile
// ...
Unfortunately the language specifies what should happen when you add an int to a size_t (see the usual arithmetic conversions), so you can't force a compile-time error.
But you could build your own add function to force the arguments to be the same type:
template <class Y>
Y add(const Y& arg1, const Y& arg2)
{
    return arg1 + arg2;
}
Because both parameters are deduced from the same template parameter Y, the call compiles only when both arguments have exactly the same type: template argument deduction does not consider implicit conversions.
This will always work in your particular case, since size_t must be an unsigned type.
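For example, a self-contained sketch (the exact error wording varies by compiler):

#include <cstddef>

template <class Y>
Y add(const Y& arg1, const Y& arg2)
{
    return arg1 + arg2;
}

int main()
{
    std::size_t st_1 = 1, st_2 = 2;
    int i_1 = 3;
    auto st = add(st_1, st_2);  // OK: Y deduced as std::size_t
    // auto e = add(st_1, i_1); // error: conflicting deductions for 'Y'
    (void)st; (void)i_1;
}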
The best answer I can give you is to use units: have a look at Boost.Units.
Another interesting method is the use of opaque typedefs: have a look at the paper "Toward Opaque Typedefs", as well as a very interesting talk and implementation on the topic.
Hope the material is useful.
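For flavor, a minimal strong-typedef sketch in that spirit (hypothetical names, not the paper's implementation): arithmetic is only defined between identical wrapper types, so mixing Size with a plain int fails to compile.

#include <cstddef>

template <typename T, typename Tag>
struct Strong {
    explicit Strong(T v) : value(v) {}
    T value;
};

// Addition is only defined for two operands of the same wrapper type.
template <typename T, typename Tag>
Strong<T, Tag> operator+(Strong<T, Tag> a, Strong<T, Tag> b)
{
    return Strong<T, Tag>(a.value + b.value);
}

using Size = Strong<std::size_t, struct SizeTag>;

int main()
{
    Size a{1}, b{2};
    Size c = a + b;    // OK: both operands are Size
    // Size d = a + 1; // error: no operator+(Size, int)
    (void)c;
}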
With built-in (non-class) types, it is not possible to prevent unwanted implicit type conversions.
Some compilers can be configured to give warnings for operations involving suspicious conversions, but that does not cover all possible implicit conversions (after all, a conversion from short to long is value-preserving, so not all compilers will report it as suspicious). And some of those compilers may also be configured to give errors where they give warnings.
With C++ class types, it is possible to prevent implicit conversions by making constructors explicit and not defining conversion operators (for example, a class member function named operator int()).
It is also possible for a class type to supply numeric operators (operator+(), etc.) which only accept operands of the required types. The problem is that this doesn't necessarily prevent promotion of built-in types participating in such expressions. For example, a class that provides an operator+(int) const (so some_object = some_other_object + some_int would work) would not stop an expression like some_other_object + some_short from compiling (as some_short can be implicitly promoted to int), as illustrated below.
Which basically means it is possible to prevent implicit conversions to class types, but not to prevent promotions occurring in expressions with numeric operators.
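A minimal illustration of that promotion loophole (hypothetical Meters class, a sketch):

struct Meters {
    explicit Meters(int v) : value(v) {}
    int value;
    Meters operator+(int rhs) const { return Meters(value + rhs); }
};

int main()
{
    Meters m(5);
    short s = 2;
    Meters a = m + 3; // intended use
    Meters b = m + s; // also compiles: short is promoted to int first
    (void)a; (void)b;
}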
C++11 adds this.
int x;
unsigned int y{ x }; // ERROR
Is it possible to enable something like this?
int x;
void f(unsigned int y);
f(x); //ERROR
Compiler: VC++ 2013
Try this:
#include <type_traits>

template <
    typename T,
    typename = typename std::enable_if<std::is_same<T, unsigned int>{}>::type
>
void f(T x) { }
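A self-contained usage sketch (the commented-out call fails to compile):

#include <type_traits>

template <
    typename T,
    typename = typename std::enable_if<std::is_same<T, unsigned int>{}>::type
>
void f(T x) { }

int main()
{
    unsigned int u = 1;
    int i = 2;
    f(u);   // OK: T deduced as unsigned int
    // f(i); // error: no matching function; enable_if removes the overload
    (void)i;
}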
No, there's no compiler switch or other general setting to do that. Implicit conversions are part of the language, and cannot be disabled in the general case for built-in types. The closest you can get is a user-defined wrapper class with only explicit constructors, or applying some template meta-hackery to the function you're trying to call (as in iavr's answer).
The premise of your question conflates implicit conversions in general with the special case of narrowing conversions. The following:
int x = 0;
unsigned int y{ x };
is a narrowing conversion: x is not a constant expression, and unsigned int cannot represent every value of int. The program is therefore ill-formed, although, as noted above, the standard only requires some diagnostic, and GCC by default may emit a warning rather than a hard error.
Whether a braced initialization narrows can depend on the actual value involved: a constant expression whose value fits in the destination type is not a narrowing conversion. And this is a rule about braced initialization specifically, not a general rule about implicit conversions, which is why it does not help with an ordinary function call such as f(x).
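A sketch of the constant-expression exception, which is what makes the check value-specific:

int main()
{
    const int cx = 0;     // constant expression whose value fits
    unsigned int a{ cx }; // OK: not a narrowing conversion

    int x = 0;
    unsigned int b{ x };  // ill-formed: narrowing (x is not a constant expression)
    (void)a; (void)b;
}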
When I try to compile
template<int dim>
struct Foo
{
    Foo(const int (&i)[dim]) {}
};

int main()
{
    Foo<2> f = Foo<2>((int[2]){0}); // line 9
    return 0;
}
I get the compilation error
test.cpp:9:31: error: no matching function for call to ‘Foo<2>::Foo(int [1])’
Apparently, the argument I pass to the constructor is regarded as an int[1]. Why isn't it regarded as an int[2] (to which the const reference expected by the constructor could then bind)? Shouldn't the missing elements be value-initialized according to 8.5.1 (7)?
After all, replacing line 9 with
int arg[2] = {0};
Foo<2> f = Foo<2>(arg);
lets me compile the program. Additionally, when I try to pass (const int [2]){0, 0, 0} to the constructor, I get the error message too many initializers for ‘const int [2]’, so apparently, the compiler is trying to construct a const int[2].
Somebody please shed some light on this unintuitive behavior.
The construct (int[2]){0} is a C99 compound literal, which is not part of C++. How particular compilers interpret it in the context of C++ is anyone's guess (or a matter of examining their source code).
PS. OK, it seems that gcc 4.7/gcc 4.8/clang-3.1 handle it quite sensibly: the type of the compound literal is the same as the C99 standard specifies.
I guess the OP's compiler is a bit older.
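Portable alternatives, as a sketch; the second form relies on C++11 list-initialization binding a reference-to-array to a temporary array, with missing elements value-initialized:

template<int dim>
struct Foo
{
    Foo(const int (&i)[dim]) {}
};

int main()
{
    int arg[2] = {0};  // named array: works in C++98 as well
    Foo<2> f1 = Foo<2>(arg);
    Foo<2> f2({0});    // C++11: temporary const int[2], second element zeroed
    return 0;
}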