This question already has answers here:
Is there some meaningful statistical data to justify keeping signed integer arithmetic overflow undefined?
(4 answers)
Why is unsigned integer overflow defined behavior but signed integer overflow isn't?
(6 answers)
Closed 3 years ago.
I am trying to understand the rationale behind making signed integer overflow undefined behavior in C and C++. Presumably, this allows optimizations that would not otherwise be possible. It is not clear to me, however, what those optimizations are.
I know there is a C++20 proposal that would make signed integers in C++ better defined. At the time of this writing, however, that proposal still leaves signed integer overflow undefined.
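As a hypothetical illustration of the kind of optimization this enables (a sketch, not taken from any particular compiler; the function names are mine):

```cpp
#include <limits>

// Because signed overflow is undefined, a compiler may assume x + 1 > x
// holds for every int x and fold this function to `return true`,
// removing the comparison entirely.
bool plus_one_is_greater(int x) {
    return x + 1 > x;
}

// The unsigned analogue must keep the comparison: unsigned arithmetic
// wraps, so UINT_MAX + 1 == 0 and the test can actually be false.
bool plus_one_is_greater_u(unsigned x) {
    return x + 1 > x;  // false when x == UINT_MAX
}
```

The same assumption lets compilers simplify loop bounds like `for (int i = 0; i <= n; ++i)`, which would otherwise have to account for `i` wrapping around.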
This question already has answers here:
Order of evaluation in C++ function parameters
(6 answers)
Closed 4 years ago.
As far as I know, the order in which function arguments are evaluated is not defined by the C++ standard.
For example:
f(g(), h());
So the evaluation order here is unspecified.
My question is: why can't the C++ standard define the order of evaluation as left to right?
Because there is no good reason to do so.
The C++ standard generally defines only what is necessary and leaves the rest up to implementers. That freedom is part of why C++ compiles to fast code on so many platforms. (Since C++17, the evaluations of different arguments can no longer interleave, but their relative order is still unspecified.)
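A minimal sketch of the point (names are mine): the relative order of `g()` and `h()` is unspecified, so the trace below may come out as either `"gh"` or `"hg"` depending on the compiler.

```cpp
#include <string>

// Global trace recording which argument expression ran first.
static std::string trace;

int g() { trace += 'g'; return 1; }
int h() { trace += 'h'; return 2; }
int f(int a, int b) { return a + b; }

int call_f() {
    trace.clear();
    return f(g(), h());  // g() and h() may run in either order
}
```

The result of `f` is the same either way; only code whose arguments have side effects, like this one, can observe the difference.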
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 7 years ago.
I need a really short way to extract the last 4 digits of a hex number, so an input of 0x2479c should yield an output of 0x479c. I want to avoid converting to binary and back.
Modulo, which would generally work for decimal digits, does not work directly here:
0x2479c modulo 0xffff = 0x479e
which isn't correct. I'm trying to achieve this in C/C++.
Use a mask, i.e. a bitwise AND with the right constant. In your case:
0x2479c & 0x0ffff
Either use a mask
0x2479c & 0x0ffff
or the modulo operator
0x2479c % (0x10000);
You were off by one in the operand of the modulo: the divisor must be 0x10000, not 0xffff.
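Both answers can be wrapped up as follows (a sketch; the function names are mine):

```cpp
// Keep only the low 16 bits, i.e. the last 4 hex digits.
unsigned low16_mask(unsigned x) { return x & 0xFFFFu;  }  // bitwise AND with a mask
unsigned low16_mod (unsigned x) { return x % 0x10000u; }  // modulo 0x10000, not 0xFFFF
```

For a power-of-two divisor like 0x10000, any reasonable compiler turns the modulo into the same AND instruction, so the two are interchangeable here.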
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
In C++, when comparing two integer values for equality, are the following statements the same? If not, why?
if(a == b)
...do
if(!(a ^ b))
...do
For integer values, yes. The XOR operator returns nonzero if any bits differ between a and b, and ! inverts that, so for integer types the two conditions are equivalent.
For floating-point values the question does not really arise: the built-in ^ operator is not defined for floating-point operands, so !(a ^ b) will not even compile. Exact equality is also the wrong test for floats, because two computations that "should" give the same result may be represented slightly differently; instead, check whether the values agree to within a small margin of error (an "epsilon").
For pointers, the built-in ^ is likewise not defined, so you would first have to cast both operands to uintptr_t — and I have no idea why you would want to do that.
However, there is no reason to write it this way. With optimizations enabled, the two compile to the same code; without them, the first will likely be faster. Why use the less clear !(a ^ b)?
The two comparisons are equivalent: a^b is 0 if and only if a==b, so !(a^b) is true if and only if a and b have the same value.
Whether you can call them "the same" depends on what you mean by two different operations being the "same". They may not be compiled into identical code, and a == b is definitely easier to read.
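The integer equivalence the answers above describe can be sketched as (function names are mine):

```cpp
// Two ways to compare ints for equality, as discussed above.
bool eq_direct(int a, int b) { return a == b;   }
bool eq_xor   (int a, int b) { return !(a ^ b); }  // a ^ b is 0 iff every bit matches
```

For every pair of int inputs the two functions agree; the difference is purely one of readability.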
This question already has answers here:
Undefined, unspecified and implementation-defined behavior
(9 answers)
Closed 9 years ago.
Is there any difference between implementation-defined and undefined behaviour as far as the C/C++ standards are concerned?
Implementation-defined means that a construct may differ from platform to platform, but in a documented, well-specified manner (e.g. the behaviour of the va_arg family of macros in C differs between POSIX and Windows).
Undefined behaviour means that anything (literally) could happen; the standard imposes no requirements at all (e.g. an unsequenced double modification such as i = i++ + ++i, or dereferencing a null pointer).
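A small sketch of the contrast (assuming a typical two's-complement platform for the comment in the first function):

```cpp
// Implementation-defined: the implementation must pick a result and
// document it. Right-shifting a negative signed value is the classic
// example; on common two's-complement targets it is an arithmetic
// shift, so this yields -4 — but another implementation may differ.
int shift_negative() { return -8 >> 1; }

// Undefined: the standard imposes no requirements whatsoever.
// Deliberately left uncalled — executing it would make the whole
// program's behaviour undefined (signed overflow).
int overflow() { int i = 2147483647; return i + 1; }
```

The practical difference: implementation-defined behaviour is portable once you read the compiler's documentation; undefined behaviour is never safe to rely on.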
This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
How do C/C++ compilers handle type casting between types with different value ranges?
What does the compiler do to perform a cast operation in C++? Please explain with some sample C++ code.
Standard sections
5.2.7 through 5.2.11
(dynamic_cast, typeid, static_cast, reinterpret_cast and const_cast) give a good idea of how these casts work, which is what the compiler and/or runtime implement.
reinterpret_cast is the only one whose behavior is largely implementation-defined.
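A brief sketch of the difference in what the compiler emits (function names are mine):

```cpp
#include <cstdint>

// static_cast performs a genuine value conversion: here the compiler
// emits a float-to-int instruction that truncates toward zero.
int to_int(double d) { return static_cast<int>(d); }

// reinterpret_cast typically emits no instructions at all: it merely
// tells the compiler to treat the same bits (here, an address) as
// another type.
std::uintptr_t as_address(const double* p) {
    return reinterpret_cast<std::uintptr_t>(p);
}
```

const_cast likewise generates no code, while dynamic_cast is the one cast that may call into the runtime (it walks RTTI data to check the conversion).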