Integer overflow not occurring: values restart from 0 - C++

I tried some simple code and found that integer variables are not overflowing; instead, it seems as if the latest C++ compilers have introduced new functionality for POD data types - if a variable crosses its maximum value, it restarts from 0:
#include <iostream>
#include <cstdint>
#include <stdexcept>

int main()
{
    try
    {
        for (uint8_t idx = 254; ; idx++)
        {
            std::cout << unsigned(idx) << std::endl;
        }
    }
    catch (const std::overflow_error& e)
    {
        std::cout << "Error" << std::endl;
    }
}
When I run the code, the catch block is never executed - is this the intended behavior?

Almost nothing throws std::overflow_error. Overflow of unsigned values is already defined by the language standard to wrap around; it is not considered an "exceptional" case, so no exception will ever be thrown for plain unsigned integer math. Per the cppreference documentation on integer arithmetic overflow:
Unsigned integer arithmetic is always performed modulo 2^n, where n is the number of bits in that particular integer. E.g. for unsigned int, adding one to UINT_MAX gives 0, and subtracting one from 0 gives UINT_MAX.
Similarly, standard library components rarely use it:
The only standard library components that throw this exception are std::bitset::to_ulong and std::bitset::to_ullong.
The mathematical functions of the standard library do not throw this exception (they report overflow errors as specified in math_errhandling). Third-party libraries, however, do use it. For example, boost.math throws std::overflow_error if boost::math::policies::throw_on_error is enabled (the default setting).
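As an aside, here is a minimal sketch of one of the few standard components that does throw this exception; the 128-bit width is chosen arbitrarily so that the stored value cannot fit in any unsigned long:

#include <bitset>
#include <iostream>
#include <stdexcept>

int main()
{
    std::bitset<128> bits;
    bits.set(100); // bit 100 set: the value 2^100 cannot fit in unsigned long
    try
    {
        unsigned long v = bits.to_ulong(); // throws std::overflow_error
        std::cout << v << std::endl;
    }
    catch (const std::overflow_error& e)
    {
        std::cout << "caught: " << e.what() << std::endl;
    }
}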

That is the behavior for unsigned types, according to the standard:
Standard 6.7.1/4
Unsigned integers shall obey the laws of arithmetic modulo 2^n where n is the number of bits in the value representation of that particular size of integer.
Which inspires the footnote:
This implies that unsigned arithmetic does not overflow because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting unsigned integer type.

it seems as if the latest C++ compilers have introduced new functionality
No. This is how C++ has always been specified.
if a variable crosses its maximum value, it restarts from 0
If an unsigned integer result of a calculation is not representable by the type, then the result will instead be the representable value that is congruent with the mathematical result modulo 2^n, where n is the number of bits in the representation (i.e. modulo one greater than the largest representable value).
In other words, (largest representable unsigned integer + 1) is 0, just like you observed.
Note that this rule does not apply to signed integers. Overflowing a signed integer results in undefined behaviour.
P.S. No operation on a fundamental type is specified to throw an exception. Most of the standard library functions don't throw std::overflow_error either.

unsigned int does not overflow or underflow; that's the core difference between signed and unsigned types in C++. Unsigned types behave according to modulo arithmetic (i.e., they "wrap around"), see [basic.fundamental]/4. If you want to provoke an integer overflow, use a signed integer type. But even then, an integer overflow does not throw an exception; it just leads to undefined behavior…
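A minimal sketch contrasting the two cases (the static_assert checks the guaranteed modulo behavior at compile time; the commented-out signed version is the one you must not rely on):

#include <cstdint>
#include <limits>

int main()
{
    // Unsigned wraparound is guaranteed by the standard: 255 + 1 is 0.
    std::uint8_t u = std::numeric_limits<std::uint8_t>::max();
    ++u; // well-defined: u is now 0
    static_assert(static_cast<std::uint8_t>(255 + 1) == 0, "modulo 2^8");

    // The signed counterpart has no such guarantee:
    // int s = std::numeric_limits<int>::max();
    // ++s; // undefined behavior - do not rely on wrapping
    return u; // exits with status 0
}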

Related

C++ safety of code with implicit conversion between signed and unsigned

According to the rules on implicit conversions between signed and unsigned integer types, discussed here and here, when summing an unsigned int with an int, the signed int is first converted to an unsigned int.
Consider, e.g., the following minimal program
#include <iostream>

int main()
{
    unsigned int n = 2;
    int x = -1;
    std::cout << n + x << std::endl;
    return 0;
}
The output of the program is, nevertheless, 1 as expected: x is first converted to an unsigned int, and the sum with n wraps around, giving the "right" answer.
In a code like the previous one, if I know for sure that n + x is positive, can I assume that the sum of unsigned int n and int x gives the expected value?
In a code like the previous one, if I know for sure that n + x is positive, can I assume that the sum of unsigned int n and int x gives the expected value?
Yes.
First, the signed value is converted to unsigned, using modulo arithmetic:
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type).
Then the two unsigned values are added, also using modulo arithmetic:
Unsigned integers shall obey the laws of arithmetic modulo 2^n where n is the number of bits in the value representation of that particular size of integer.
This means that you'll get the expected answer.
Even if the result would be negative in the mathematical sense, the result in C++ is the value congruent to that negative number modulo 2^n.
Note that I've assumed here that you add two same-sized integers.
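A small sketch of this (assuming 32-bit unsigned int, so the modulus is 2^32):

#include <iostream>

int main()
{
    unsigned int n = 2;
    int x = -1;
    // x converts to UINT_MAX, then the addition wraps: UINT_MAX + 2 == 1 (mod 2^32)
    std::cout << n + x << std::endl; // prints 1

    // A mathematically negative result is also well-defined:
    unsigned int m = 1;
    int y = -3;
    std::cout << m + y << std::endl; // prints 4294967294, congruent to -2 (mod 2^32)
}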
I think you can be sure, and it is not implementation-defined, although this statement requires some interpretation of the standard when it comes to systems that do not use two's complement for representing negative values.
First, let's state what is clear: unsigned integrals do not overflow but take on a value modulo 2^n, where n is the number of bits (cf. this online C++ standard draft):
6.7.1 Fundamental types
(7) Unsigned integers shall obey the laws of arithmetic modulo 2^n where n is the number of bits in the value representation of that particular size of integer.
So it's just a matter of whether a negative value nv is converted correctly into an unsigned integral bit pattern nv(conv) such that x + nv(conv) will always be the same as x - nv. For a system using two's complement, things are clear, since two's complement is designed precisely so that this arithmetic works immediately.
For systems using other representations of negative values, we'll have to read the standard carefully:
7.8 Integral conversions
(2) If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [Note: In a two's complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). — end note]
As the note explicitly says that in a two's complement representation there is no change in the bit pattern, we may assume that on systems other than two's complement a real conversion takes place such that x + nv(conv) == x - nv.
So due to 7.8 (2), I'd say that your assumption is valid.
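These identities can even be checked at compile time; a short sketch (the names mirror the x and nv used above):

#include <climits>

// The least unsigned integer congruent to -1 (mod 2^n) is UINT_MAX:
static_assert(static_cast<unsigned int>(-1) == UINT_MAX, "conversion rule");
// x + nv(conv) behaves like x - nv whenever the mathematical result is non-negative:
static_assert(10u + static_cast<unsigned int>(-4) == 6u, "modular addition");

int main() {}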

Why does Turbo C wrap around on signed integer overflow every time, even though signed integer overflow is undefined?

I have caused signed overflow many times, but each time Turbo C wraps around.
For example:
#include <stdio.h>
#include <conio.h>

int main(void) {
    int i = 100000;
    printf("%d", i);
    getch();
    return 0;
}
The output is -31072, which is the expected output if wraparound is done.
In binary, 100000 (dec) is 11000011010100000, and the last 16 bits are stored, which is 1000011010100000. In two's complement representation, 1000011010100000 is -31072.
Your example doesn't contain any signed overflows, so there is no undefined behavior.
(Assuming INT_MAX is less than 100000.)
The assignment:
int i = 100000;
performs an implicit conversion from type long, which is the type of the integer constant 100000 when int is 16 bits, to type int. The result of this conversion is implementation-defined1 (or an implementation-defined signal is raised).
1 (Quoted from ISO/IEC 9899:201x, 6.3.1.3 Signed and unsigned integers, paragraph 3)
Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
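To reproduce this on a modern compiler, where int is wider than 16 bits, the narrowing can be made explicit with a fixed-width type; a sketch (on the usual two's complement implementations the result is the -31072 computed above):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    long value = 100000L;          // binary 1 1000 0110 1010 0000
    int16_t i = (int16_t)value;    // keeps the low 16 bits, as Turbo C's int did
    printf("%d\n", i);             // prints -31072
    return 0;
}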

C++ function with unsigned int parameter gets a strange result when called with a negative value

I am new to C++ and I am confused by C++'s behavior for the code below:
#include <iostream>

void hello(unsigned int x, unsigned int y) {
    std::cout << x << std::endl;
    std::cout << y << std::endl;
    std::cout << x + y << std::endl;
}

int main() {
    int a = -1;
    int b = 3;
    hello(a, b);
    return 1;
}
The x in the output is a very large integer: 4294967295. I know that a negative integer converted to unsigned will behave like this. But why is x+y in the output 2?
Contrary to the other answers, there is no undefined behavior here, and there is no overflow. Unsigned integers use modulo 2^n arithmetic.
Section 4.7 paragraph 2 of the standard says "If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type)." This dictates that -1 is equal to the largest possible unsigned int (modulo 2^n).
Section 3.9.1 paragraph 4 says "Unsigned integers, declared unsigned, shall obey the laws of arithmetic modulo 2^n where n is the number of bits in the value representation of that particular size of integer." To make it clear what this means, the footnote to this clause says "This implies that unsigned arithmetic does not overflow because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting unsigned integer type."
In other words, converting -1 to 4294967295 is not just defined behavior, it is required behavior (assuming 32 bit integers). Similarly, adding 3 to that value and yielding 2 as a result is also required behavior. In this case, the value of n is irrelevant. The third value printed by hello() must be 2 or the implementation is not compliant with the standard.
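The arithmetic, spelled out (assuming 32-bit unsigned int):

#include <iostream>

int main()
{
    // -1 converts to the least unsigned value congruent to -1: 2^32 - 1 = 4294967295
    unsigned int x = static_cast<unsigned int>(-1);
    unsigned int y = 3;
    // 4294967295 + 3 = 4294967298, which reduced modulo 2^32 is 2
    std::cout << x + y << std::endl; // prints 2
}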
Because unsigned ints wrap around. In other words, a = -1 (signed) converts to the maximum value of an unsigned int, 4294967295.
Then you add 3; the unsigned int wraps past its maximum and starts again at 0, so -1 + 3 = 2.
Passing a negative number to an unsigned int parameter does not give undefined behavior; the value is converted modulo 2^n, as described above. The default int is signed. It has a range of -2,147,483,648 to 2,147,483,647; the unsigned int range is 0 to 4,294,967,295.
This isn't so much about C++ as about how computers represent signed and unsigned numbers.
This is a good source on that. Basically, signed numbers are (usually) represented using two's complement, in which the most significant bit has a value of -2^(n-1). In effect, what this means is that positive numbers are represented the same in two's complement as they are in regular unsigned binary.
-1 is represented as all ones, which when interpreted as an unsigned integer will be the largest integer that can be represented (4294967295, when dealing with 32 bits).
One of the great things about using two's complement to represent signed numbers is that you can perform addition and subtraction in the exact same way as with unsigned numbers and it will work out correctly, so long as the number does not exceed the bounds that can be represented. This isn't as easy with other forms such as sign-magnitude.
So, what this means is that because -1 + 3 = 2, and because 2 is positive, it is interpreted the same as if it were unsigned. Thus, it prints 2.
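A short sketch that makes the bit pattern visible (assuming 32-bit int, as above):

#include <bitset>
#include <iostream>

int main() {
    int a = -1;
    // -1 in two's complement is all ones, so read as unsigned it is the maximum:
    std::cout << std::bitset<32>(static_cast<unsigned int>(a)) << std::endl;
    // prints 11111111111111111111111111111111
    std::cout << static_cast<unsigned int>(a) << std::endl; // prints 4294967295
}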

Curious arithmetic error - 255*256*256*256 = 18446744073692774400

I encountered a strange thing when I was programming in C++. It's about a simple multiplication.
Code:
#include <iostream>

int main() {
    unsigned __int64 a1 = 255*256*256*256;
    unsigned __int64 a2 = 255 << 24; // same as the above
    std::cerr << "a1 is: " << a1 << std::endl;
    std::cerr << "a2 is: " << a2 << std::endl;
}
Interestingly, the result is:
a1 is: 18446744073692774400
a2 is: 18446744073692774400
whereas it should be (a calculator confirms):
4278190080
Can anybody tell me how could it be possible?
255*256*256*256
All operands are int, so you are overflowing int. The overflow of a signed integer is undefined behavior in C and C++.
EDIT:
Note that the expression 255 << 24 in your second declaration also invokes undefined behavior if your int type is 32-bit. 255 × 2^24 is 4278190080, which cannot be represented in a 32-bit int (the maximum value is usually 2147483647 for a 32-bit int in two's complement representation).
C and C++ both say for E1 << E2 that if E1 is of a signed type and positive, and E1 × 2^E2 cannot be represented in the type of E1, the program invokes undefined behavior (here ^ is the mathematical power operator).
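One way to get the intended 4278190080 is to force the arithmetic into a 64-bit unsigned type before any multiplication or shift happens; a sketch using the portable uint64_t rather than the MSVC-specific unsigned __int64:

#include <cstdint>
#include <iostream>

int main()
{
    // The ULL suffix makes the first operand 64-bit, so the others are promoted:
    std::uint64_t a1 = 255ULL * 256 * 256 * 256;
    std::uint64_t a2 = 255ULL << 24; // the shift is performed in 64 bits
    std::cout << "a1 is: " << a1 << std::endl; // prints 4278190080
    std::cout << "a2 is: " << a2 << std::endl; // prints 4278190080
}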
Your literals are int. This means that all the operations are actually performed on int, and promptly overflow. This overflowed value, when converted to an unsigned 64-bit int, is the value you observe.
It is perhaps worth explaining what happened to produce the number 18446744073692774400. Technically speaking, the expressions you wrote trigger "undefined behavior" and so the compiler could have produced anything as the result; however, assuming int is a 32-bit type, which it almost always is nowadays, you'll get the same "wrong" answer if you write
uint64_t x = (int) (255u*256u*256u*256u);
and that expression does not trigger undefined behavior. (The conversion from unsigned int to int involves implementation-defined behavior, but as nobody has produced a ones-complement or sign-and-magnitude CPU in many years, all implementations you are likely to encounter define it exactly the same way.) I have written the cast in C style because everything I'm saying here applies equally to C and C++.
First off, let's look at the multiplication. I'm writing the right hand side in hex because it's easier to see what's going on that way.
255u * 256u = 0x0000FF00u
255u * 256u * 256u = 0x00FF0000u
255u * 256u * 256u * 256u = 0xFF000000u (= 4278190080)
That last result, 0xFF000000u, has the highest bit of a 32-bit number set. Casting that value to a signed 32-bit type therefore causes it to become negative, as if 2^32 had been subtracted from it (that's the implementation-defined operation I mentioned above).
(int) (255u*256u*256u*256u) = 0xFF000000 = -16777216
I write the hexadecimal number there, sans u suffix, to emphasize that the bit pattern of the value does not change when you convert it to a signed type; it is only reinterpreted.
Now, when you assign -16777216 to a uint64_t variable, it is back-converted to unsigned as if by adding 2^64. (Unlike the unsigned-to-signed conversion, this semantic is prescribed by the standard.) This does change the bit pattern, setting all of the high 32 bits of the number to 1 instead of 0 as you had expected:
(uint64_t) (int) (255u*256u*256u*256u) = 0xFFFFFFFFFF000000u
And if you write 0xFFFFFFFFFF000000 in decimal, you get 18446744073692774400.
As a closing piece of advice, whenever you get an "impossible" integer from C or C++, try printing it out in hexadecimal; it's much easier to see the oddities of two's-complement fixed-width arithmetic that way.
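Following that advice for the value at hand (the same cast chain as above, assuming 32-bit int):

#include <cstdint>
#include <iostream>

int main()
{
    std::uint64_t x = (int)(255u * 256u * 256u * 256u);
    // Hexadecimal makes the sign extension obvious: the low 32 bits hold the
    // "right" answer and the high 32 bits are the extended sign bit.
    std::cout << std::hex << x << std::endl; // prints ffffffffff000000
    std::cout << std::dec << x << std::endl; // prints 18446744073692774400
}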
The answer is simple: it overflowed.
The overflow occurred on int, and when you assign the result to an unsigned __int64 it is converted to 18446744073692774400 instead of 4278190080.

C++ underflow and overflow

Hi, I am new here, so please let me know if anything is wrong and I will try to do better next time.
I am trying to understand how underflow and overflow work in C++. My understanding is that if a variable's range is exceeded, it will start from the other end of the range. Thus, if the minimum of short is -32768 and we subtract 1 from it, the new value should be SHRT_MAX (32767).
Here is my code:
#include <iostream>
#include <climits>
#include <conio.h>

int main(void)
{
    int testpositive = INT_MIN;
    short testnegative = SHRT_MIN;
    std::cout << SHRT_MIN << "\n";
    std::cout << testnegative - 1 << "\n";
    std::cout << INT_MIN << "\n";
    std::cout << testpositive - 1 << "\n";
    std::cout << testpositive - 2;
    getch();
    return 0;
}
The exact behavior on overflow/underflow is only specified for unsigned types.
Unsigned integers shall obey the laws of arithmetic modulo 2^n where n is the number of bits in the value representation of that particular size of integer.
Source: Draft N3690 §3.9.1 sentence 4
This implies that unsigned arithmetic does not overflow because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting unsigned integer type.
Source: Draft N3690 Note 47 for §3.9.1
For normal signed integer types, the C++ standard instead simply says that anything can happen:
If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined
Source: Draft N3690 §5 sentence 4
If we're talking about an x86 processor (or most other modern processors), the behavior is indeed exactly what you describe, and for the CPU there is no difference between a signed value and an unsigned value (there are signed and unsigned operations, but the values themselves are just bits).
Note that compilers can assume (and most modern optimizing compilers actually DO assume) that no signed integer overflow can occur in a correct program and for example in code like:
int do_something();
int do_something_else();

void foo() {
    int x = do_something();
    int y = x + 1;
    if (x < y) {
        do_something();
    } else {
        do_something_else();
    }
}
a compiler is free to skip the test and the else branch in the generated code completely, because in a valid program a signed int x is always less than x+1 (signed overflow cannot be considered valid behavior).
If you replace int with unsigned int, however, the compiler must generate code for the test and for the else branch, because for unsigned types it's possible that x > x+1.
For example clang compiles the code for foo to
foo():                                # @foo()
        push    rax
        call    do_something()
        pop     rax
        jmp     do_something()        # TAILCALL
where you can see that the code just calls do_something twice (the push/pop of rax merely keeps the stack aligned for the calls) and no mention of do_something_else is actually present. More or less the same code is generated by gcc.
Signed overflows are undefined behavior in C++.
For example:
INT_MIN - 1
-INT_MIN
are expressions that invoke undefined behavior.
SHRT_MIN - 1 and -SHRT_MIN are not undefined behavior in an environment with 16-bit short and 32-bit int, because with integer promotions the operand is promoted to int first. In an environment with 16-bit short and 16-bit int, these expressions are undefined behavior too.
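A compile-time sketch of the promotion (assuming 16-bit short and 32-bit int, as in the answer):

#include <climits>

// SHRT_MIN is promoted to int before the arithmetic, so the results
// (-32769 and 32768) are representable and the expressions are well-defined:
static_assert(SHRT_MIN - 1 == -32769, "computed in int, no overflow");
static_assert(-SHRT_MIN == 32768, "also fits in a 32-bit int");

int main() {}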
Typically yes. But since this is C++, and C++ is governed by the C++ standard, you must know that signed overflows are undefined behavior.
Although what you stated probably applies on most platforms, it's in no way guaranteed, so don't rely on it.
The new value need not be SHRT_MAX; the behavior is undefined.