int a = 5, b = 2;
double ans1 = a / b; // ans1 = 2
cout << setprecision(4) << fixed << ans1 << endl;
unsigned short int c = USHRT_MAX, d = USHRT_MAX;
unsigned int ans2 = c + d; // ans2 = 131070
cout << ans2;
What happens when (a / b) is evaluated?
1) Is the result first computed as an int (the type of the variables on the R.H.S.) and then converted into a 64-bit double (the type of the variable on the L.H.S.) before being stored in ans1, or are the variables first converted into 64-bit doubles (the type on the L.H.S.) and only then divided?
2) If the expression is first evaluated using the type of the variables on the R.H.S., then how does the second snippet print the correct answer (given that the answer is outside unsigned short int's range)?
In the first example, you first divide the two ints with truncating integer division and then assign the result of that (2) to ans1.
The second case is different in a subtle way: you chose integer types that are smaller than int. Thus, before any arithmetic operator acts on them, they both get converted to int (if int can represent all their values, as it does here; to unsigned int otherwise) while preserving their values, and on int the addition you showed does not overflow. Then you assign the result (which has type int) to the unsigned int, and since it is non-negative, nothing strange happens.
If you tried the second example with unsigned int instead of unsigned short, you could observe the "integer overflow" (which is no real overflow because unsigned arithmetic wraps).
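To make the contrast concrete, here is a minimal sketch of mine (assuming 16-bit unsigned short and 32-bit int/unsigned int, as on common platforms):

#include <climits>
#include <iostream>

int main() {
    unsigned short c = USHRT_MAX, d = USHRT_MAX;
    unsigned int e = UINT_MAX, f = UINT_MAX;

    // c and d are promoted to int, so the sum is computed as int (no overflow)
    // and then converted to unsigned int on assignment.
    unsigned int sum_short = c + d;

    // e and f already have type unsigned int: the addition wraps modulo 2^32.
    unsigned int sum_uint = e + f;

    std::cout << sum_short << '\n'; // 131070
    std::cout << sum_uint << '\n';  // 4294967294 (UINT_MAX - 1), wrapped
}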
Related
In my application, I receive two signed 32-bit ints and I have to store them. I have to create a sort of counter, and I don't know when it will be reset, but I'll receive big values frequently. Because of that, in order to store these values, I decided to use two unsigned 64-bit ints.
The following could be a simple version of the counter.
struct Counter
{
    unsigned int elementNr;
    unsigned __int64 totalLen1;
    unsigned __int64 totalLen2;

    void UpdateCounter(int len1, int len2)
    {
        if(len1 > 0 && len2 > 0)
        {
            ++elementNr;
            totalLen1 += len1;
            totalLen2 += len2;
        }
    }
};
I know that if a smaller type is cast to a bigger one (e.g. int to long) there should be no issues. However, going from a 32-bit representation to a 64-bit representation and from signed to unsigned at the same time is something new for me.
Reading around, I understood that len1 should be widened from 32 bits to 64 bits with sign extension applied. Because unsigned int and signed int have the same rank (Section 4.13), the latter should be converted.
If len1 stores a negative value, converting from signed to unsigned will return a wrong value; this is why I check for positivity at the beginning of the function. However, for positive values there should be no issues, I think.
For clarity, I could rewrite UpdateCounter(int len1, int len2) like this:
void UpdateCounter(int len1, int len2)
{
    if(len1 > 0 && len2 > 0)
    {
        ++elementNr;
        __int64 tmp = len1;
        totalLen1 += static_cast<unsigned __int64>(tmp);
        tmp = len2;
        totalLen2 += static_cast<unsigned __int64>(tmp);
    }
}
Might there be some side effects that I have not considered?
Is there another better and safer way to do that?
A little background, just for reference: binary operators such as arithmetic addition work on operands of the same type (the specific CPU instruction to which the operation is translated depends on the number representation, which must be the same for both instruction operands).
When you write something like this (using fixed width integer types to be explicit):
int32_t a = <some value>;
uint64_t sum = 0;
sum += a;
As you already know, this involves an implicit conversion: more specifically, an integral conversion governed by the usual arithmetic conversions and the integer conversion ranks.
So the expression sum += a; is equivalent to sum += static_cast<uint64_t>(a);, i.e. a, the operand with the lesser rank, is the one that gets converted.
Let's see what happens in this example:
int32_t a = 60;
uint64_t sum = 100;
sum += static_cast<uint64_t>(a);
std::cout << "a=" << static_cast<uint64_t>(a) << " sum=" << sum << '\n';
The output is:
a=60 sum=160
So all is ok, as expected. Let's see what happens when adding a negative number:
int32_t a = -60;
uint64_t sum = 100;
sum += static_cast<uint64_t>(a);
std::cout << "a=" << static_cast<uint64_t>(a) << " sum=" << sum << '\n';
The output is:
a=18446744073709551556 sum=40
The result is 40, as expected: the conversion of the negative value to uint64_t wraps modulo 2^64 (matching its two's complement representation; note that unsigned integer overflow is not undefined behaviour), and all is ok, of course, as long as you ensure that the true sum never becomes negative.
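To see what happens when that condition is violated, here is a small sketch of mine (not part of the original example):

#include <cstdint>
#include <iostream>

int main() {
    int32_t a = -60;
    uint64_t sum = 10;                // smaller than |a|
    sum += static_cast<uint64_t>(a);  // the "true" sum would be -50
    std::cout << sum << '\n';         // 18446744073709551566: wrapped around
}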
Coming back to your question: you won't have any surprises if you always add positive numbers, or at least ensure that the sum will never be negative... until you reach the maximum representable value std::numeric_limits<uint64_t>::max() (2^64-1 = 18446744073709551615 ≈ 1.8E19).
If you continue to add numbers indefinitely, sooner or later you'll reach that limit (this also holds for your counter elementNr).
Adding 2^31-1 (2147483647) every millisecond would overflow the 64-bit unsigned integer after approximately three months, so in this case it may be advisable to check:
#include <limits>
//...
void UpdateCounter(const int32_t len1, const int32_t len2)
{
    if( len1 > 0 )
    {
        if( static_cast<decltype(totalLen1)>(len1) <= std::numeric_limits<decltype(totalLen1)>::max() - totalLen1 )
        {
            totalLen1 += len1;
        }
        else
        {// Would overflow!!
            // Do something
        }
    }
}
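A self-contained sketch of mine showing the same guard outside the struct, with the counter close enough to the limit that the else branch fires:

#include <cstdint>
#include <iostream>
#include <limits>

int main() {
    uint64_t totalLen1 = std::numeric_limits<uint64_t>::max() - 10; // nearly full
    int32_t len1 = 2147483647;

    if (static_cast<uint64_t>(len1) <= std::numeric_limits<uint64_t>::max() - totalLen1) {
        totalLen1 += len1;
    } else {
        std::cout << "would overflow\n"; // this branch is taken here
    }
}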
When I have to accumulate numbers and I don't have particular requirements about accuracy, I often use double, because the maximum representable value is incredibly high (std::numeric_limits<double>::max() ≈ 1.79769E+308): to reach overflow I would need to add 2^32-1 = 4294967295 every picosecond for about 1E+279 years.
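A minimal sketch of what such a double-based counter might look like (my illustration; the name CounterD is hypothetical, not part of the original answer):

#include <iostream>

struct CounterD
{
    unsigned long long elementNr = 0;
    double totalLen1 = 0.0;
    double totalLen2 = 0.0;

    void UpdateCounter(int len1, int len2)
    {
        if (len1 > 0 && len2 > 0)
        {
            ++elementNr;
            // A 32-bit int converts to double exactly; note that once the total
            // exceeds 2^53, the additions start losing precision.
            totalLen1 += len1;
            totalLen2 += len2;
        }
    }
};

int main()
{
    CounterD c;
    c.UpdateCounter(2147483647, 42);
    std::cout << c.totalLen1 << ' ' << c.totalLen2 << '\n'; // 2.14748e+09 42
}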
Why, when you do this:
int a = 1000000, b = 1000000;
long long product = a * b;
cout << product;
does it give some random trash value? Why do both a and b need to be long long in order to calculate it?
You are observing the effects of undefined behavior. This:
a * b
is an (arithmetic) expression of type int, as both operands a and b are of type int. Computing the value 1000000000000 in an expression of type int results in so-called signed integer overflow, which is undefined behavior.
Either cast one of the operands to long long, thus causing the entire expression to become long long, which is sufficiently large to accept the value of 1000000000000:
#include <iostream>

int main()
{
    int a = 1000000, b = 1000000;
    auto product = static_cast<long long>(a) * b;
    std::cout << product;
}
Or define one of the operands as long long:
#include <iostream>

int main()
{
    long long a = 1000000;
    int b = 1000000;
    auto product = a * b;
    std::cout << product;
}
Optionally, use unsigned long long instead (unsigned long is not guaranteed to be wider than 32 bits).
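For example, a sketch of mine using unsigned long long, which the standard guarantees to be at least 64 bits wide:

#include <iostream>

int main()
{
    unsigned long long a = 1000000;
    int b = 1000000;
    auto product = a * b; // b is converted to unsigned long long before the multiplication
    std::cout << product; // 1000000000000
}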
This will also work. When the multiplication of an int by an int would go out of range, you can still store the result in a long long variable by multiplying by 1ll (or 1LL) first. Because 1ll has type long long, the other operands are converted to long long as well, so the product is computed as long long from the start.
int a = 1000000, b = 1000000;
long long product = 1ll * a * b;
cout << product;
From what I've gathered, assigning the result of a division to a double won't work properly unless either the numerator or the denominator is a floating-point number (and by "not working properly" I mean that the decimals get cut off; I know that numbers can't be stored as fractions, of course). However, I've tried type-casting the ints to doubles before assigning them to another double variable, but it still doesn't work. It's not a big deal, since I just had to do a minor workaround, but why is this the case?
I added some code I wrote while testing.
#include <iostream>
using namespace std;

double convert(int v) {
    return v;
}

int main() {
    int a = 5;
    int b = 2;
    double n;

    n = convert(a) / convert(b);
    cout << n << endl; // Decimals are stored

    a = static_cast<double>(a);
    b = static_cast<double>(b);
    n = a / b;
    cout << n << endl; // Decimals are cut off

    a = (double) a;
    b = (double) b;
    n = a / b;
    cout << n << endl; // Decimals are cut off

    double c = a;
    double d = b;
    n = c / d;
    cout << n << endl; // Decimals are stored
    return 0;
}
Output:
2.5
2
2
2.5
Because
a / b;
is integer division (because both operands are int), i.e. the result is an integer; whether that result is then assigned to a double or anything else is irrelevant to how the division itself is computed.
Because of integer division.
n = a / b;
Here a and b are integers, so the result is also an integer; this is a rule of C++, so 5/2 == 2. The integer 2 then gets converted to a double, which then prints as 2.
int a = 5;
a = static_cast<double> (a);
The first line creates an int variable named a and puts the value 5 in it. The second line explicitly converts the value of a to a double, then stores that converted value in a. However, a has type int, so there is an implicit conversion to int. That is, the second line is functionally equivalent to:
a = static_cast<int> ( static_cast<double> (a) );
So by the time you get to the division, you are back to integer arithmetic. To get the conversion to floating point to "stick" through your division, you need to avoid throwing it away. You could either assign the converted value to a new variable, as in
double aa = static_cast<double> (a);
or do the conversion in the same expression as the division
n = static_cast<double>(a) / b;
n = a / static_cast<double>(b);
n = static_cast<double>(a) / static_cast<double>(b);
Any of these three alternatives will trigger floating-point division.
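Put together, a minimal runnable sketch of mine based on the alternatives above:

#include <iostream>

int main() {
    int a = 5;
    int b = 2;

    double aa = static_cast<double>(a);     // conversion kept in a separate double variable
    double n1 = aa / b;                     // 2.5
    double n2 = static_cast<double>(a) / b; // 2.5
    double n3 = a / static_cast<double>(b); // 2.5

    std::cout << n1 << ' ' << n2 << ' ' << n3 << '\n'; // prints: 2.5 2.5 2.5
}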
int main() {
    unsigned i = 5;
    int j = -10;
    double d = i + j;
    long l = i + j;
    int k = i + j;
    std::cout << d << "\n";     // 4.29497e+09
    std::cout << l << "\n";     // 4294967291
    std::cout << k << "\n";     // -5
    std::cout << i + j << "\n"; // 4294967291
}
I believe the signed int is converted to unsigned before the arithmetic operator is applied.
When -10 is converted to unsigned, wraparound (is "underflow" the correct term??) occurs, and after the addition it prints 4294967291.
Why is this not happening in the case of int k, which prints -5?
The process of applying the arithmetic operator involves a conversion to make the two values have the same type. The name for this process is finding the common type, and for the case of int and unsigned int, the conversions are called the usual arithmetic conversions. The term promotion is not used in this particular case.
In the case of i + j, the int is converted to unsigned int by adding UINT_MAX + 1 to it. So the result of i + j is UINT_MAX - 4, which on your system is 4294967291.
You then store this value in various data types; the only output that needs further explanation is k. The value UINT_MAX - 4 cannot fit in an int. This is an out-of-range conversion, and the resulting value is implementation-defined (since C++20 it is defined to wrap). On your system it apparently assigns the int value which has the same representation as the unsigned int value.
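To check the intermediate value mentioned above, a tiny sketch of mine (assuming 32-bit unsigned int):

#include <climits>
#include <iostream>

int main() {
    unsigned i = 5;
    int j = -10;
    std::cout << i + j << '\n';        // 4294967291
    std::cout << UINT_MAX - 4 << '\n'; // 4294967291: the same value
}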
j will be converted to unsigned int before the addition, and this happens in all of your i + j expressions. A quick experiment:
In the case of int k = i + j, on both your implementation and mine, i + j produces 4294967291. Since 4294967291 is larger than std::numeric_limits<int>::max(), the behavior is implementation-defined. Why not try assigning 4294967291 to an int?
#include <iostream>

int main(){
    int k = 4294967291;
    std::cout << k << std::endl;
}
Produces:
-5
This seems very strange to me; I have found a misunderstanding of mine. I use gcc, where char is signed char. I always thought that in comparison expressions (and other expressions) a signed value is converted to unsigned if necessary.
int a = -4;
unsigned int b = a;
std::cout << (b == a) << std::endl; // writes 1, Ok
but the problem is that
char a = -4;
unsigned char b = a;
std::cout << (b == a) << std::endl; // writes 0
What is the magic in the comparison operator if it's not just bitwise?
According to the C++ Standard
6 If both operands are of arithmetic or enumeration type, the usual
arithmetic conversions are performed on both operands; each of the
operators shall yield true if the specified relationship is true and
false if it is false.
So in this expression
b == a
of the example
char a = -4;
unsigned char b = -a;
std::cout << (b == a) << std::endl; // writes 0
both operands are converted to type int. As a result, the signed char propagates its sign bit (it is sign-extended) and the two values become unequal.
To demonstrate the effect, try running this simple example:
{
    char a = -4;
    unsigned char b = -a;
    std::cout << std::hex << "a = " << (int)a << "\tb = " << (int)b << std::endl;
    if ( b > a ) std::cout << "b is greater than a, that is b is positive and a is negative\n";
}
The output is:
a = fffffffc    b = 4
b is greater than a, that is b is positive and a is negative
Edit: Only now have I seen that the definitions of the variables in the question actually look like this:
char a = -4;
unsigned char b = a;
that is, the minus in the definition of b should not be present.
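For what it's worth, a quick check of mine with the corrected definition (assuming 8-bit char, signed by default as in the question); the comparison still prints 0, because after promotion to int the values are 252 and -4:

#include <iostream>

int main() {
    char a = -4;
    unsigned char b = a;           // b holds 252 (0xfc)
    std::cout << (b == a) << '\n'; // 0: compared as int, 252 != -4
    std::cout << (b > a) << '\n';  // 1: 252 > -4 after promotion
}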
Since an (unsigned) int is at least 16 bits wide, let's use that for instructional purposes:
In the first case: a = 0xfffc, and b = (unsigned int) (a) = 0xfffc
Following the arithmetic conversion rules, the comparison is evaluated as:
((unsigned int) b == (unsigned int) a) or (0xfffc == 0xfffc), which is (1)
In the 2nd case: a = 0xfc, and b = (unsigned char) ((int) a) or:
b = (unsigned char) (0xfffc) = 0xfc i.e., sign-extended to (int) and truncated
Since an int can represent the range of both the signed char and unsigned char types, the comparison is evaluated as (zero-extended vs. sign-extended):
((int) b == (int) a) or (0x00fc == 0xfffc), which is (0).
Note: The C and C++ integer conversion rules behave the same way in these cases. Of course, I'm assuming that the char types are 8 bit, which is typical, but only the minimum required.
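A quick sketch of mine (using 32-bit int rather than the 16-bit illustration above) that prints the promoted patterns the comparison actually uses:

#include <iostream>

int main() {
    char a = -4;
    unsigned char b = a;
    std::cout << std::hex
              << static_cast<unsigned int>(static_cast<int>(a)) << '\n' // fffffffc (sign-extended)
              << static_cast<int>(b) << '\n';                           // fc (zero-extended)
}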
The second comparison outputs 0 because there the unsigned value gets converted to a signed type (both operands are promoted to int), not vice versa (as you assumed).