Unsigned long long Fibonacci numbers negative? (C++)

I've written a simple Fibonacci sequence generator that looks like:
#include <iostream>

void print(int c, int r) {
    std::cout << c << "\t\t" << r << std::endl;
}

int main() {
    unsigned long long int a = 0, b = 1, c = 1;
    for (int r = 1; r <= 1e3; r += 1) {
        print(c, r);
        a = b;
        b = c;
        c = a + b;
    }
}
However, when r gets to around 40, strange things begin to happen: c's value oscillates between negative and positive, despite the fact that it's an unsigned integer, and of course the Fibonacci sequence can't do that.
What's going on with unsigned long long integers?
Does c get too large even for a long long integer?

You have a narrowing conversion here: in print(c, r), you defined print to take only ints, yet you pass an unsigned long long. The result is implementation-defined.
Quoting the C++ Standard Draft:
4.7:3: If the destination type is signed, the value is unchanged if it can be represented in the destination type; otherwise, the value is implementation-defined.
What typically happens is this: from the unsigned long long, only as many low-order bits as fit into an int are passed to your function. The truncated value is interpreted as two's complement, so depending on the value of the most significant bit you get that alternation between positive and negative.
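To see this concretely, here is a minimal sketch (assuming a 32-bit int, as on most mainstream platforms; the cast's result is implementation-defined before C++20 and modular since C++20):

#include <cstdint>
#include <iostream>

int main() {
    // F(47) = 2971215073 fits easily in 64 bits, but its 32nd bit is set.
    std::uint64_t f47 = 2971215073ULL;
    // Truncating to a signed 32-bit int flips the two's-complement sign:
    std::int32_t truncated = static_cast<std::int32_t>(f47);
    std::cout << truncated << '\n'; // typically prints -1323752223
}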
Change your function signature to take an unsigned long long:
void print(unsigned long long c, int r) {
    std::cout << c << "\t\t" << r << std::endl;
}
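Note that even with the corrected signature, the loop runs r up to 1000, while unsigned long long can hold Fibonacci numbers only up to F(93) = 12200160415121876738; from F(94) on, the computed values silently wrap modulo 2^64.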

Related

Code to find the value of nCr shows the answer for some values as 0 (e.g. 30 15)

The following is the code:
#include <iostream>
using namespace std;

int factorial(int num){
    unsigned long long int fact = 1;
    for (int i = num; i >= 1; i--)
    {
        fact = fact * i;
    }
    return fact;
}

int main()
{
    unsigned long long int n, r, value;
    cout << "Enter a number whose nCr value is to be calculated (n and r respectively): ";
    cin >> n >> r;
    unsigned long long int a = factorial(n);
    unsigned long long int b = factorial(r);
    unsigned long long int c = factorial(n - r);
    value = a / (b * c);
    cout << "The value of nCr is : " << value;
    return 0;
}
Why do I get the answer for some of the inputs, like (30 15) and (30 12), as 0?
30! = 265252859812191058636308480000000 is a very large 33-digit number, so it overflows the int variable your program is trying to store it in. If you print it out, you'll see that the actual value stored in a is smaller than the value of b*c in the denominator of the final computation, so value = a/(b*c); is truncated to 0 by integer division.
Even if you return the result of factorial as an unsigned long long int, 30! will still overflow, since that type can only store 64 bits (the exact width is implementation-dependent, but 64 bits is the minimum).
#include "stdafx.h"
#include <iostream>
unsigned long long int factorial(int num) {
unsigned long long int fact = 1;
for (int i = num; i >= 1; i--)
{
fact = fact * i;
}
return fact;
}
int main()
{
unsigned long long int n, r, value;
std::cout << "Enter a number whose nCr value is to be calculated (n and r respectively): ";
std::cin >> n >> r;
unsigned long long int a = factorial(n);
std::cout << "n! = " << a << std::endl;
unsigned long long int b = factorial(r);
std::cout << "r! = " << b << std::endl;
unsigned long long int c = factorial(n - r);
std::cout << "(n-r)! = " << c << std::endl;
std::cout << "r!(n-r)! = " << b*c << std::endl;
value = a / (b*c);
std::cout << "The value of nCr is : " << value << std::endl;
system("pause");
return 0;
}
Output:
Enter a number whose nCr value is to be calculated (n and r respectively): 30 12
n! = 9682165104862298112
r! = 479001600
(n-r)! = 6402373705728000
r!(n-r)! = 12940075575627743232
The value of nCr is : 0
Press any key to continue . . .
The main issue with your code is the return type of the factorial method: it should be the same as the type of fact.
The second issue is that it cannot handle huge numbers above the maximum value of unsigned long long int, 18,446,744,073,709,551,615 (https://www.tutorialspoint.com/cplusplus/cpp_data_types.htm).
So change the data types of the variables fact, a, b, c and of the factorial method to long double, which on many compilers occupies 12 or 16 bytes and can represent a far larger range.
The code below is modified for tracing purposes; you can skip the lines you don't need. Be careful with the data types. The code is modified as per your requirement for huge calculations.
You can remove std:: from the code if you add using namespace std;.
#include <iostream>

long double factorial(int num){
    long double fact = 1;
    for (int i = num; i >= 1; i--)
    {
        fact = fact * i;
    }
    return fact;
}

int main()
{
    unsigned long long int n = 0, r = 0;
    long double value = 0;
    std::cout << "Enter a number whose nCr value is to be calculated (n and r respectively): ";
    std::cin >> n >> r;
    std::cout << n;
    std::cout << r;
    long double a = factorial(n);
    long double b = factorial(r);
    long double c = factorial(n - r);
    std::cout << "\na=" << a;
    std::cout << "\nb=" << b;
    std::cout << "\nc=" << c;
    long double d = b * c;
    std::cout << "\nd=" << d;
    value = (unsigned long long int)(a / d);
    std::cout << "\nThe value of nCr is : " << value;
    return 0;
}
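For what it's worth, this kind of overflow can also be sidestepped without floating point by computing nCr multiplicatively, dividing at each step so that every intermediate stays exact. A sketch of my own, not taken from the answers above:

#include <iostream>

// Computes C(n, r) without computing any factorial.
// After step i, result holds C(n - r + i, i) exactly, which is why the
// division by i never truncates. Note that result * (n - r + i) can
// itself still overflow for large enough n.
unsigned long long nCr(unsigned n, unsigned r) {
    if (r > n) return 0;
    if (r > n - r) r = n - r; // use the symmetry C(n, r) == C(n, n - r)
    unsigned long long result = 1;
    for (unsigned i = 1; i <= r; ++i)
        result = result * (n - r + i) / i;
    return result;
}

int main() {
    std::cout << nCr(30, 15) << '\n'; // prints 155117520
}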

Sum signed 32-bit int with unsigned 64-bit int

In my application I receive two signed 32-bit ints and I have to store them. I have to maintain a sort of counter and I don't know when it will be reset, but I'll receive large values frequently. Because of that, in order to store these values, I decided to use two unsigned 64-bit ints.
The following could be a simple version of the counter.
struct Counter
{
    unsigned int elementNr;
    unsigned __int64 totalLen1;
    unsigned __int64 totalLen2;

    void UpdateCounter(int len1, int len2)
    {
        if (len1 > 0 && len2 > 0)
        {
            ++elementNr;
            totalLen1 += len1;
            totalLen2 += len2;
        }
    }
};
I know that if a smaller type is cast to a bigger one (e.g. int to long) there should be no issues. However, going from a 32-bit representation to a 64-bit representation and from signed to unsigned at the same time is something new for me.
Reading around, I understood that len1 should first be widened from 32 bits to 64 bits with sign extension. Because unsigned int and signed int have the same rank (Section 4.13), the latter should be converted.
If len1 stores a negative value, converting from signed to unsigned will produce a wrong value, which is why I check for positivity at the beginning of the function. For positive values, however, there should be no issues, I think.
For clarity, I could rewrite UpdateCounter(int len1, int len2) like this:
void UpdateCounter(int len1, int len2)
{
    if (len1 > 0 && len2 > 0)
    {
        ++elementNr;
        __int64 tmp = len1;
        totalLen1 += static_cast<unsigned __int64>(tmp);
        tmp = len2;
        totalLen2 += static_cast<unsigned __int64>(tmp);
    }
}
Might there be some side effects that I have not considered?
Is there another, better and safer way to do this?
A little background, just for reference: binary operators such as arithmetic addition work on operands of the same type (the specific CPU instruction to which they translate depends on the number representation, which must be the same for both instruction operands).
When you write something like this (using fixed width integer types to be explicit):
int32_t a = <some value>;
uint64_t sum = 0;
sum += a;
As you already know, this involves an implicit conversion, more specifically an integral conversion chosen according to integer conversion rank (the usual arithmetic conversions).
So the expression sum += a; is equivalent to sum += static_cast<uint64_t>(a);: a, having the lesser rank, is the one converted.
Let's see what happens in this example:
int32_t a = 60;
uint64_t sum = 100;
sum += static_cast<uint64_t>(a);
std::cout << "a=" << static_cast<uint64_t>(a) << " sum=" << sum << '\n';
The output is:
a=60 sum=160
So all is ok, as expected. Let's see what happens when adding a negative number:
int32_t a = -60;
uint64_t sum = 100;
sum += static_cast<uint64_t>(a);
std::cout << "a=" << static_cast<uint64_t>(a) << " sum=" << sum << '\n';
The output is:
a=18446744073709551556 sum=40
The result is 40 as expected: this relies on the two's complement integer representation (note: unsigned integer overflow is not undefined behaviour), and all is ok as long as you ensure that the sum never goes negative.
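To spell out the arithmetic: static_cast<uint64_t>(-60) is 2^64 - 60 = 18446744073709551556, and the addition wraps modulo 2^64, so (100 + 18446744073709551556) mod 2^64 = (2^64 + 40) mod 2^64 = 40.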
Coming back to your question: you won't have any surprises if you always add positive numbers, or at least ensure that the sum never goes negative... until you reach the maximum representable value, std::numeric_limits<uint64_t>::max() (2^64 - 1 = 18446744073709551615 ~ 1.8E19).
If you keep adding numbers indefinitely, sooner or later you'll reach that limit (this also holds for your counter elementNr).
You would overflow the 64-bit unsigned integer by adding 2^31 - 1 (2147483647) every millisecond for approximately three months, so if that's a realistic rate it may be advisable to check:
#include <limits>
//...
void UpdateCounter(const int32_t len1, const int32_t len2)
{
    if (len1 > 0)
    {
        if (static_cast<decltype(totalLen1)>(len1) <= std::numeric_limits<decltype(totalLen1)>::max() - totalLen1)
        {
            totalLen1 += len1;
        }
        else
        {   // Would overflow!!
            // Do something
        }
    }
}
When I have to accumulate numbers and I don't have particular accuracy requirements, I often use double, because the maximum representable value is incredibly high (std::numeric_limits<double>::max() is about 1.79769E+308): to reach overflow I would need to add 2^32 - 1 = 4294967295 every picosecond for about 1E+279 years.
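A caveat worth adding to the double approach (my note, not the original answer's): a double represents integers exactly only up to 2^53; beyond that, adding small values can be silently lost to rounding:

#include <iostream>

int main() {
    double sum = 9007199254740992.0; // 2^53: above this, not every integer is representable
    sum += 1.0;                      // rounds back down to 2^53, so the increment is lost
    std::cout << std::fixed << sum << '\n'; // prints 9007199254740992.000000
}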

I ran into a weird bug in C++ where a statement calculating an addition of 2 small integers overflows into a long long value

I recently ran into this weird C++ bug that I could not understand. Here's my code:
#include <bits/stdc++.h>
using namespace std;
typedef vector <int> vi;
typedef pair <int, int> ii;
#define ff first
#define ss second
#define pb push_back
const int N = 2050;
int n, k, sum = 0;
vector <ii> a;
vi pos;
int main (void) {
    cin >> n >> k;
    for (int i = 1; i < n+1; ++i) {
        int val;
        cin >> val;
        a.pb(ii(val, i));
    }
    cout << a.size()-1 << " " << k << " " << a.size()-k-1 << "\n";
}
When I tried out with test:
5 5
1 1 1 1 1
it returned:
4 5 4294967295
but when I changed the declaration from:
int n, k, sum = 0;
to:
long long n, k, sum = 0;
then the program returned the correct value which was:
4 5 -1
I could not figure out why the program behaved like that, since -1 is well within the range of an int. Can anyone explain this to me? I'd really appreciate your help.
Thanks
Obviously, on your machine, size_t is a 32-bit integer, whereas long long is 64 bits. size_t is always an unsigned type, so you get:
cout << a.size() - 1
// ^ unsigned ^ promoted to unsigned
// output as uint32_t
// ^ (!)
a.size() - k - 1
// ^ promoted to long long, as of smaller size!
// -> overall expression is int64_t
// ^ (!)
You would not have seen any difference in the two printed values (both would have been 18446744073709551615) if size_t were 64 bits as well, because then the signed long long k (int64_t) would have been converted to unsigned (uint64_t) in the second expression too.
Be aware that static_cast<UnsignedType>(-1) always evaluates (according to C++ conversion rules) to std::numeric_limits<UnsignedType>::max()!
Side note about size_t: it is defined as an unsigned integral type large enough to hold the maximum size you can allocate on your system for an object, so its size in bits is hardware-dependent and, in the end, correlates with the width of the memory address bus (the first power of two not smaller than it).
vector::size returns size_t (unsigned), so the expression a.size()-k-1 is evaluated in an unsigned type and you end up with wraparound (often loosely called underflow).
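A common fix (my sketch, not from the answers above) is to do the arithmetic in a signed type by casting the size first:

#include <iostream>
#include <vector>

int main() {
    std::vector<int> a(5);
    long long k = 5;
    // Cast before subtracting so the whole expression is signed:
    std::cout << static_cast<long long>(a.size()) - k - 1 << "\n"; // prints -1
}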

Integer promotion unsigned in c++

#include <iostream>

int main() {
    unsigned i = 5;
    int j = -10;
    double d = i + j;
    long l = i + j;
    int k = i + j;
    std::cout << d << "\n";     // 4.29497e+09
    std::cout << l << "\n";     // 4294967291
    std::cout << k << "\n";     // -5
    std::cout << i + j << "\n"; // 4294967291
}
I believe the signed int is converted to unsigned before the arithmetic operator is applied.
When -10 is converted to unsigned, it wraps around (is "unsigned integer underflow" the correct term?), and after the addition it prints 4294967291.
Why is this not happening in the case of int k, which prints -5?
The process of applying an arithmetic operator involves a conversion to make the two values have the same type. The name for this process is finding the common type, and for the case of int and unsigned int the conversions are called the usual arithmetic conversions. The term promotion is not used in this particular case.
In the case of i + j, the int is converted to unsigned int, by adding UINT_MAX + 1 to it. So the result of i + j is UINT_MAX - 4, which on your system is 4294967291.
You then store this value in various data types; the only output that needs further explanation is k. The value UINT_MAX - 4 cannot fit in an int. This is called an out-of-range assignment, and the resulting value is implementation-defined. On your system, it apparently assigns the int value that has the same representation as the unsigned int value.
j will be converted to unsigned int before the addition, and this happens in all of your i + j expressions. A quick experiment:
In the case of int k = i + j: on your implementation and mine, i + j produces 4294967291. Since 4294967291 is larger than std::numeric_limits<int>::max(), the behavior is implementation-defined. Why not try assigning 4294967291 to an int?
#include <iostream>

int main(){
    int k = 4294967291;
    std::cout << k << std::endl;
}
Produces:
-5
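If signed arithmetic is what's intended, a sketch of the usual fix (my illustration, not from the answers above) is to widen to a signed 64-bit type before adding, so no unsigned wraparound occurs:

#include <cstdint>
#include <iostream>

int main() {
    unsigned i = 5;
    int j = -10;
    // Convert to a wider signed type first; j then also converts to int64_t:
    std::int64_t sum = static_cast<std::int64_t>(i) + j;
    std::cout << sum << "\n"; // prints -5
}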

Why can't I use a long long int type? (C++)

I try
long long int l = 42343254325322343224;
but to no avail. Why does it tell me "integer constant is too long"? I am using the long long int type, which should be able to hold more than 19 digits. Am I doing something wrong here, or is there a special secret I do not know of just yet?
Because, on my x86_64 system, it's more than 2^64:
// 42343254325322343224
// maximum for an 8-byte long long int (2^64): 18446744073709551616
// (2^64 - 1 is the maximum unsigned representable)
std::cout << sizeof(long long int); // 8
You shouldn't confuse the number of digits with the number of bits necessary to represent a number.
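For this literal specifically: log2(42343254325322343224) ≈ 65.2, so it needs 66 bits, while unsigned long long offers 64 bits and signed long long only 63 value bits plus the sign.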
Take a look at Boost.Multiprecision.
It defines templates and classes to handle larger numbers.
Here is the example from the Boost tutorial:
#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>
using namespace boost::multiprecision;

int main()
{
    int128_t v = 1;
    // Do some fixed precision arithmetic:
    for(unsigned i = 1; i <= 20; ++i)
        v *= i;
    std::cout << v << std::endl; // prints 20!

    // Repeat at arbitrary precision:
    cpp_int u = 1;
    for(unsigned i = 1; i <= 100; ++i)
        u *= i;
    std::cout << u << std::endl; // prints 100!
}
It seems that the value of the integer literal exceeds the maximum value of type long long int.
Try the following program to determine the maximum values of types long long int and unsigned long long int:
#include <iostream>
#include <limits>

int main()
{
    std::cout << std::numeric_limits<long long int>::max() << std::endl;
    std::cout << std::numeric_limits<unsigned long long int>::max() << std::endl;
    return 0;
}
I have gotten the following results at www.ideone.com
9223372036854775807
18446744073709551615
You can compare it with the value you specified
42343254325322343224
Take into account that in the general case there is no need to specify the ll suffix for a decimal integer literal, even one so big that it can be stored only in type long long int: the compiler itself will determine the most appropriate type (int, long int, or long long int) for an integral decimal literal.
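And if you genuinely need to compute with the original 20-digit value, a sketch using Boost.Multiprecision (mentioned above): cpp_int can be constructed from a decimal string, so the value is not limited by any built-in integer width.

#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>
using namespace boost::multiprecision;

int main()
{
    // The literal is passed as a string, bypassing built-in literal limits:
    cpp_int big("42343254325322343224");
    std::cout << big * 2 << std::endl; // prints 84686508650644686448
}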