We have a big C++ project where we rely on compiler warnings and Flexelint to identify potential programming errors. I was curious about how they would warn us if we accidentally tried to cast an enum value to a narrower integer.
As suggested by Lint, we usually perform static casts from the enum to the integer, because Lint doesn't like implicit conversions. We usually cast to the exact type expected by the method.
I got interesting results, see this experiment:
#include <iostream>
#include <string>
#include <stdint.h>

void foo(uint8_t arg)
{
    std::cout << static_cast<int>(arg) << std::endl;
}

enum Numbers
{
    hundred = 100,
    thousand = 1000,
};

int main()
{
    std::cout << "start" << std::endl;
    foo(static_cast<uint8_t>(hundred));  // 1) no compiler or lint warning
    foo(static_cast<uint8_t>(thousand)); // 2) no compiler or lint warning
    foo(static_cast<int>(thousand));     // 3) compiler and lint warning
    foo(thousand);                       // 4) compiler and lint warning
    std::cout << "end" << std::endl;
}
http://cpp.sh/5hpyz
The first case is not a concern; it is just there to show the good case.
Interestingly, I only got compiler warnings in the latter two cases, where an implicit conversion happens. The explicit cast in case 2) will truncate the value (the output is 232, like in the following two cases) but produces no warning. OK, the compiler probably assumes I know what I'm doing with my explicit cast to uint8_t. Fair enough.
I expected Lint to help me out here. I ran this code through Gimpel's online Lint but didn't get a warning for case 2) either; only the latter two cases again produced this warning:
warning 569: loss of information (call) in implicit conversion from 'int' 1000 (10 bits) to 'uint8_t' (aka 'unsigned char') (8 bits)
Again, the explicit cast to uint8_t in case 2), that truncates my value, doesn't bother Lint at all.
Consider a case where all values in an enum fit into a uint8_t. At some point in the future we add bigger values (or, say, more than 256 values in total), cast them as before, and, without noticing it, the casts truncate them and we get unexpected results.
By default, I always cast to the target variable's size (case 2)). Given this experiment, I wonder if this is a wise approach. Should I instead cast to the widest type and rely on implicit conversions (case 3))?
What's the right approach to get the expected warnings?
You could also write foo(uint8_t{thousand}); instead of a static_cast. With that, you would get a compiler error/warning if thousand is too large for uint8_t. But I don't know what Lint thinks about it.
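A minimal sketch of that suggestion, reusing the enum and foo() from the question; list-initialization rejects a narrowing conversion from a constant whose value does not fit:
foo(uint8_t{hundred});    // fine: 100 fits into uint8_t
// foo(uint8_t{thousand}); // ill-formed: narrowing conversion from 1000 to uint8_t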
This is a problem I also encountered. What I found to work best is to write a function that performs the cast for you and, based on type traits, generates an error in case something is wrong.
#include <type_traits>
#include <limits>
#include <iostream>

template<class TYPE>
TYPE safe_cast(const Numbers& number)
{
    using FROM_TYPE = std::underlying_type_t<Numbers>;
    // Might need additional code here to handle target types wider than the
    // enum's underlying type and signed/unsigned comparison edge cases.
    if ((static_cast<FROM_TYPE>(number) < static_cast<FROM_TYPE>(std::numeric_limits<TYPE>::min())) ||
        (static_cast<FROM_TYPE>(number) > static_cast<FROM_TYPE>(std::numeric_limits<TYPE>::max())))
    {
        // Throw an error or assert.
        std::cout << "Error in safe_cast" << std::endl;
    }
    return static_cast<TYPE>(number);
}
Hope this will help.
P.S. If you can rewrite this to run at compile time with constexpr, you could also use static_assert.
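A possible compile-time variant (a sketch, not part of the original answer): pass the value as a template parameter so the range check can go into a static_assert. It assumes the Numbers enum from the question and a target type no wider than the enum's underlying type.
#include <type_traits>
#include <limits>

template<class TYPE, Numbers NUMBER>
constexpr TYPE safe_cast()
{
    using FROM_TYPE = std::underlying_type_t<Numbers>;
    static_assert(static_cast<FROM_TYPE>(NUMBER) >= static_cast<FROM_TYPE>(std::numeric_limits<TYPE>::min()) &&
                  static_cast<FROM_TYPE>(NUMBER) <= static_cast<FROM_TYPE>(std::numeric_limits<TYPE>::max()),
                  "safe_cast: value does not fit into the target type");
    return static_cast<TYPE>(NUMBER);
}

// foo(safe_cast<uint8_t, hundred>());   // compiles
// foo(safe_cast<uint8_t, thousand>());  // fails to compile with the message above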
Related
I recently discovered a hard-to-find bug in a project I am working on. The problem was that we did a calculation that had a uint32_t result and stored it in a uint64_t variable. We expected the result to be a uint64_t because we knew that the result could be too big for a 32-bit unsigned integer.
My question is: is there a way to make the compiler (or a static analysis tool like clang-tidy) warn me when something like this happens?
An example:
#include <cstdint>
#include <cstdlib>
#include <iostream>

constexpr uint64_t MUL64 { 0x00000000ffffffff };
constexpr uint32_t MUL32 { 0xffffffff };

int main() {
    const uint32_t value { 0xabababab };
    const uint64_t value1 { MUL64 * value }; // the result is a uint64_t because
                                             // MUL64 is a uint64_t
    const uint64_t value2 { MUL32 * value }; // i'd like to have a warning here
    if (value1 == value2) {
        std::cout << "Looks good!\n";
        return EXIT_SUCCESS;
    }
    std::cout << "Whoopsie\n";
    return EXIT_FAILURE;
}
Edit:
The overflow was expected, i.e. we knew that we would need a uint64_t to store the calculated value. We also knew how to fix the problem, and we later changed it to something like:
const uint64_t value2 { static_cast<uint64_t>(MUL32) * value };
That way the upper 32 bits aren't cut off during the calculation. But things like that may still happen from time to time, and I just want to know whether there is a way to detect this kind of mistake.
Thanks in advance!
Greetings,
Sebastian
The multiplication behavior for unsigned integral types is well-defined to wrap around modulo 2 to the power of the width of the integer type. Therefore there isn't anything here that the compiler could be warning about. The behavior is expected and may be intentional. Warning about it would give too many false positives.
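As a small illustration of that wrap-around (a sketch using the values from the question; the exact numbers assume a 32-bit unsigned int):
#include <cstdint>
#include <iostream>

int main()
{
    const std::uint32_t mul32 = 0xffffffffu;
    const std::uint32_t value = 0xababababu;
    // mul32 * value wraps modulo 2^32; since mul32 == 2^32 - 1, the wrapped
    // result equals 2^32 - value.
    const std::uint32_t wrapped = mul32 * value;
    // Widening one operand first performs the multiplication in 64 bits.
    const std::uint64_t widened = std::uint64_t{mul32} * value;
    std::cout << wrapped << " vs " << widened << "\n";
    return 0;
}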
Also, in general the compiler cannot test for overflow at compile-time outside a constant expression evaluation. In this specific case the values are obvious enough that it could do that though.
Warning about any widening conversion after arithmetic would very likely also be very noisy.
I am not aware of any compiler flag that would add warnings for the reasons given above.
Clang-tidy does have a check named bugprone-implicit-widening-of-multiplication-result specifically for this case of performing a multiplication in a narrower type which is then implicitly widened. It seems the check has been present since LLVM 13. I don't think there is an equivalent for addition, though.
This check works here as expected:
<source>:11:29: warning: performing an implicit widening conversion to type 'const uint64_t' (aka 'const unsigned long') of a multiplication performed in type 'unsigned int' [bugprone-implicit-widening-of-multiplication-result]
const uint64_t value2 { MUL32 * value }; // i'd like to have a warning here
^
<source>:11:29: note: make conversion explicit to silence this warning
const uint64_t value2 { MUL32 * value }; // i'd like to have a warning here
^~~~~~~~~~~~~
static_cast<const uint64_t>( )
<source>:11:29: note: perform multiplication in a wider type
const uint64_t value2 { MUL32 * value }; // i'd like to have a warning here
^~~~~
static_cast<const uint64_t>()
Clang's undefined behavior sanitizer also has a check that flags all unsigned integer overflows at runtime, which is not normally included in -fsanitize=undefined. It can be included with -fsanitize=unsigned-integer-overflow. That will very likely require adding suppressions for intended wrap-around behavior. See https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html for details.
It seems however that this check isn't applied here since the arithmetic is performed by the compiler at compile-time. If you remove the const on value2, UBSan does catch it:
/app/example.cpp:11:29: runtime error: unsigned integer overflow: 4294967295 * 2880154539 cannot be represented in type 'unsigned int'
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /app/example.cpp:11:29 in
Whoopsie
GCC does not seem to have an equivalent option.
If you want consistent warnings for overflow in unsigned arithmetic, you need to define your own wrapper classes around the integer types that perform the overflow check and, for example, throw an exception if it fails. Alternatively, you can implement functions for overflow-safe addition/multiplication, which you would then have to use instead of the + and * operators.
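A minimal sketch of such an overflow-safe multiplication function (checked_mul is a made-up name; __builtin_mul_overflow is a GCC/Clang builtin, so a fully portable version would need a manual check against std::numeric_limits instead):
#include <stdexcept>

template <typename T>
T checked_mul(T a, T b)
{
    T result{};
    // __builtin_mul_overflow returns true if a * b does not fit into result.
    if (__builtin_mul_overflow(a, b, &result)) {
        throw std::overflow_error("checked_mul: result does not fit into the operand type");
    }
    return result;
}

// With the values from the question:
//   checked_mul(MUL32, value)            would throw instead of silently wrapping
//   checked_mul<uint64_t>(MUL32, value)  performs the multiplication in 64 bits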
I was analyzing some warnings in a codebase and got puzzled by this one, generated by Clang.
Consider the following C++ code:
#include <cstdint>
#include <iostream>

int main(int, char *[])
{
    uint32_t val1 = 10;
    int32_t val2 = -20;
    int32_t result = val1 + val2;
    std::cout << "Result is " << result << "\n";
    return 0;
}
Clang gives me the following warning when compiling this code with -Wconversion
<source>:9:25: warning: implicit conversion changes signedness:
'unsigned int' to 'int32_t' (aka 'int') [-Wsign-conversion]
int32_t result = val1 + val2;
~~~~~~ ~~~~~^~~~~~
<source>:9:27: warning: implicit conversion changes signedness:
'int32_t' (aka 'int') to 'unsigned int' [-Wsign-conversion]
int32_t result = val1 + val2;
~ ^~~~
2 warnings generated.
GCC also gives me this warning, however I need to provide -Wsign-conversion to trigger it.
The warning says that val2 will be converted to an unsigned int and will therefore lose its sign. So far so good. However, I was expecting the code above to produce incorrect output; to my surprise, it works perfectly fine.
Result is -10
See the program running on both compilers on godbolt.
The cast does not happen in the compiled code, and val2 keeps its original value. The result of the computation is correct. What is the actual danger that this warning is warning me against? How can I trigger this behaviour? Is the warning bogus?
The second conversion is where things become implementation dependent.
The first conversion (in the evaluation of the expression val1 + val2) converts val2 from signed to unsigned, which is standard-compliant and documented. The result of the expression is therefore unsigned.
The second conversion (of the resulting unsigned value back to signed to initialize result) is where potential problems ensue. If the unsigned value is not within the representable range of the target signed type (and in this case, it isn't), implementation-defined behavior ensues, which you cannot assume is portable across the known universe.
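A worked sketch of the two conversions (assuming 32-bit int and a two's-complement implementation, matching the question's platform):
#include <cstdint>
#include <iostream>

int main()
{
    uint32_t val1 = 10;
    int32_t val2 = -20;
    // First conversion: val2 becomes unsigned, 2^32 - 20 = 4294967276.
    // The addition is then done in unsigned arithmetic:
    //   10 + 4294967276 = 4294967286 (modulo 2^32).
    std::cout << val1 + val2 << "\n";                        // prints 4294967286
    // Second conversion: 4294967286 converted back to int32_t gives -10 on
    // two's-complement implementations, which is why the original program
    // appears to work.
    std::cout << static_cast<int32_t>(val1 + val2) << "\n";  // prints -10
    return 0;
}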
What is the actual danger that this warning is warning me against?
The potential danger is that you may have been unaware of the implicit sign conversion and have made it by accident. If it is intentional and behaves as desired, then there is no danger.
How can I trigger this behaviour?
You already have triggered an implicit sign conversion.
That said, if you want to see some output which may be surprising to you, try this instead:
std::cout << val1 + val2;
Is the warning bogus?
Depends on your definition of bogus.
There definitely is an implicit sign conversion in the program, and therefore if you ask the compiler to warn about implicit sign conversions, then it is entirely correct for the compiler to warn about the implicit sign conversion that is in the program.
There is a reason why this warning option is not enabled by default, nor when enabling "all" warnings using -Wall, nor even when enabling "extra" warnings with -Wextra. These sign conversion warnings warn about a program with well defined behaviour, but which may be surprising to someone who is not paying close attention. A program can be meaningful and correct despite this warning.
I am not able to understand the following behavior of this warning (C4800).
case 1:
bool read = (33 & 3); // No warning issued by VS 2013
case 2:
int b = 33;
bool read = (b & 3); // Now the compiler generates the C4800 warning.
Why is the compiler generating a warning in case 2 while it does not issue any warning in case 1?
C4800 is a performance warning - coercing an integer to bool at runtime has a cost.
It has nothing to do with logical correctness.
The most common occurrence of the coercion (and the warning) is when you interface with code that uses integers (BOOL in VC++) for booleans.
The compile-time coercion in your first snippet incurs no runtime overhead, so there is no warning.
To get rid of the warning, get rid of the coercion:
bool read = (b & 3) != 0;
In the first case you create the boolean variable from a constant expression. That is possible:
std::cout << std::is_constructible<bool, decltype(33 & 3)>::value << std::endl; // output: 1
In the second case the expression (b & 3) involves a run-time int variable, and the type of this expression is int:
std::cout << typeid(b & 3).name() << std::endl; // output: i
And, finally, you use an implicit type conversion from int to bool and get the warning.
Concerning the two cases you mention, their difference is that in one case the whole integral value is a compile-time constant (or at least it can easily be reduced to one). Maybe the assumption is that initialization with constant expressions should not trigger this warning? I'd check the bug tracking system of the vendor for further info.
However, in practice I'd ignore or even disable this warning. Consider the case of testing for a single bit: bool negative = byte & 0x80;. This is what I'd call idiomatic code, and it generates a warning. To me, that's proof of why this warning is bad.
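For illustration, a small sketch (the variable values are made up) showing the idiomatic form next to a warning-free equivalent:
unsigned char byte = 0x90;            // hypothetical input value
bool negative1 = byte & 0x80;         // idiomatic, but triggers C4800 on MSVC
bool negative2 = (byte & 0x80) != 0;  // same meaning, no int-to-bool coercion warning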
Apologies if this question has already been answered.
#include <iostream>
#include <cstdlib>
#include <ctime>
using namespace std;

int main () {
    srand( time(NULL) );
    cout << rand();
}
"implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int'"
is the error message I'm getting when I execute the code above. I am using Xcode 4.6.1. When I use a different compiler, such as the one from codepad.org, it executes perfectly fine, generating what seem like random numbers, so I am assuming it is an Xcode issue that I need to work around.
I have JUST started programming, so I am a complete beginner when it comes to this. Is there a problem with my code or is it my compiler?
Any help would be appreciated!
"implicit conversion loses integer precision: 'time_t' (aka 'long') to 'unsigned int'"
You're losing precision implicitly because time() returns a long which is larger than an unsigned int on your target. In order to workaround this problem, you should explicitly cast the result (thus removing the "implicit precision loss"):
srand( static_cast<unsigned int>(time(nullptr)));
Given that it's now 2017, I'm editing this answer to suggest that you consider the features provided by std::chrono::*, defined in <chrono>, as part of C++11. Does your favorite compiler provide C++11? If not, it really should!
To get the current time, you should use:
#include <chrono>

void f() {
    const std::chrono::system_clock::time_point current_time = std::chrono::system_clock::now();
}
Why should I bother with this when time() works?
IMO, just one reason is enough: clear, explicit types. When you deal with large programs among big enough teams, knowing whether the values passed around represent time intervals or "absolute" times, and at what magnitudes, is critical. With std::chrono you can design interfaces and data structures that are portable and skip out on the is-that-timeout-a-deadline-or-milliseconds-from-now-or-wait-was-it-seconds blues.
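As a sketch of how this ties back to the original question (seed_rng is a made-up helper name), <chrono> lets you make both the unit and the narrowing conversion explicit when seeding rand():
#include <chrono>
#include <cstdlib>

void seed_rng() {
    const std::chrono::system_clock::time_point now = std::chrono::system_clock::now();
    const std::chrono::seconds since_epoch =
        std::chrono::duration_cast<std::chrono::seconds>(now.time_since_epoch());
    std::srand(static_cast<unsigned int>(since_epoch.count()));
}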
As mentioned by "nio", a clean workaround would be to explicitly type cast.
Deeper explanation:
The srand() function requires an unsigned int as its parameter (srand(unsigned int)), but time() returns a time_t, which is a long int on this platform, and that is not accepted by srand() without a conversion. So, in order to fix this, the compiler simply typecasts (converts) the "long int" to an "unsigned int".
BUT in your case the compiler warns you about it instead (the designers of the compiler thought you should be made aware of it, that's all).
So a simple
srand( (unsigned int) time(NULL) );
will do the trick!
(Forgive me if I have done something wrong; this is my first answer on Stack Overflow.)
The srand() function takes an unsigned int as its argument, while time_t is a long type. The upper 4 bytes of the long are stripped off, but there's no problem in that.
srand() will seed the rand() algorithm with the 4 lower bytes of time(), so you're supplying more data than is needed.
If you get an error, try to just explicitly cast the time_t type to unsigned int:
srand( static_cast<unsigned int>(time(NULL)) );
Another interesting thing is that if you run your program twice in the same second, you'll get the same random number, which can sometimes be undesirable. That's because if you seed the rand() algorithm with the same data, it will generate the same random sequence. It can also be desirable, though, when you debug some piece of code and need to test the same behaviour again: then you simply use something like srand(123456).
This is not an error. The code is valid and its meaning is well defined; if a compiler refuses to compile it, the compiler does not conform to the language definition. More likely, it's a warning, and it's telling you that the compiler writer thinks that you might have made a mistake. If you insist on eliminating warning messages you could add a cast, as others have suggested. I'm not a big fan of rewriting valid, meaningful code in order to satisfy some compiler writer's notion of good style; I'd turn off the warning. If you do that, though, you might overlook other places where a conversion loses data that you didn't intend.
#include <stdlib.h>   // rand, srand
#include <iostream>
#include <time.h>     // time

// Returns a pseudo-random float in the range [VarMin, VarMax).
// Note: reseeding with time(NULL) on every call means repeated calls within
// the same second return the same value.
float randomizer(int VarMin, int VarMax){
    srand((unsigned)time(NULL));
    int range = (VarMax - VarMin);
    float rnd = VarMin + float(range * (rand() / (RAND_MAX + 1.0)));
    return rnd;
}
Possible Duplicate:
How much is too much with C++0x auto keyword
I find using "auto" near critical points maybe cause some problems.
This is the example code:
#include <iostream>
#include <typeinfo>
#include <limits>

using std::cout;
using std::endl;
using std::numeric_limits;
using std::cerr;

int main() {
    auto i = 2147483647 /* numeric_limits<int>::max() */ ;
    cout << "The type of i is " << typeid(i).name() << endl;
    int count = 0;
    for (auto i = 2147483647;
         i < 2147483657 /* numeric_limits<int>::max() + 10 */ ; ++i) {
        cout << "i = " << i << " " << endl;
        if (count > 30) {
            cerr << "Too many loops." << endl;
            break;
        }
        ++count;
    }
    return 0;
}
The "auto" decides the type of "i" is integer, but the upper limit of integer is 2147483647, that's easily overflow.
That's the outputs on Ideone(gcc-4.5.1) and LWS(gcc-4.7.2). They're different: "i" remains 2147483647 in the loops on Ideone(gcc-4.5.1) and overflows on LWS(gcc-4.7.2). But none of them is the expecting result: 10 cycles, +1 every time.
Should I avoid to use "auto" near critical points? Or How I use "auto" appropriately here?
UPDATE: Someone says "Use auto everywhere you can." in this thread you tell me. I don't think that's quite right. Type "long long int" is more appropriate the type "int" here. I wonder where I can use "auto" safely, where can't.
UPDATE 2: The solution 4(b) of the article by Herb Sutter should have answered the question.
You should only rely on type deduction to work out the type of your variables if it's going to be correct. Here, the compiler makes the deduction that it's an int, which is right as far as the standard is concerned, but your specific problem requires another type with a larger range. When you use auto, you're saying "the compiler knows best", but the compiler doesn't always know everything.
You wouldn't use auto here, just as you wouldn't use int. You could make your literal have a higher rank (stick L or LL after it, although they're not guaranteed to be any larger than your int) and then auto would deduce a larger integral type.
Not to mention that auto really saves you nothing in this case. auto is usually used to avoid typing long, ugly types or types that you don't know. In this case, the type is not long and ugly, and you do know it.
auto is just syntactic sugar. It isn't a type; it infers what type the right-hand side is and gives the variable that type.
If you give it a literal, it will just infer the type the compiler assigns to that literal by default.
You just need to know what the actual type is.
A numeric literal (without a decimal point) is an int by default, unless you explicitly change its type with a suffix (or the value is too large to fit in an int).
int x = 2147483657;   // 2147483657 does not fit into an int, so storing it in
                      // an int converts the value in an implementation-defined
                      // way (typically it wraps).
long x = 2147483657L; // The L suffix tells the compiler to treat it as a long.
                      // Here you will get the correct value, assuming long is
                      // larger than int.
In your case:
for (auto i = 2147483647; i < 2147483657; ++i)    // is not going to work, as i is always
                                                  // an int and the increment overflows
// Try correct types:
for (auto i = 2147483647L; i < 2147483657L; ++i)  // Now it should work correctly.
You are expecting too much out of auto. Your expectation is that auto will automatically deduce the type which is best for the manipulation that you are going to perform on your variable. This is semantic analysis and compilers are not expected to do that (most often, they cannot). They can't look forward into the way you are going to use the variable you declare later on in your program.
The auto keyword only saves you from the burden of explicitly writing on the left the type of the expression appearing on the right, avoiding possible redundancy and all the problems connected with it (what if the type of the expression on the right changes?).
That said, all other answers are correct: if you want your variable i not to overflow, you should assign to it a long long literal (using the LL suffix).
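A minimal sketch of that suggestion, reusing the bounds from the question: with the LL suffix the literal is a long long, auto deduces long long, and the loop runs the expected 10 iterations.
#include <iostream>

int main() {
    for (auto i = 2147483647LL; i < 2147483657LL; ++i) {
        std::cout << "i = " << i << "\n";  // exactly 10 iterations, no overflow
    }
    return 0;
}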