Weird output as the numbers get bigger in Fibonacci sequence - C++

I noticed in my Fibonacci sequence that I'm getting negative numbers after a certain point:
267914296 433494437 701408733 1134903170
1836311903 -1323752223 512559680 -811192543 -298632863
Does this have to do with the limited range of "int", or is there something wrong with my code?
Here is the code:
#include <iostream>
using std::cout;

int main()
{
    int n = 50, f1 = 0, f2 = 1, fn = 0, i = 0;
    cout << "0 ";
    for (i = 0; i < n; i++)
    {
        fn = f1 + f2;
        f2 = f1;
        f1 = fn;
        cout << fn << " ";
    }
}

Yes, this has to do with the limited range of int. This is called rollover or overflow, and it works just like the odometer in your car: once the number passes its highest possible value, it rolls over to its lowest possible value (which for int is a negative number). Consider using an unsigned int or a long unsigned int, though the second one is not necessarily longer (it's platform-dependent). A long double can hold even bigger numbers. If you'd like to use an arbitrarily large number (as big as you want), you can find appropriate libraries in answers to this question.
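For illustration, here is a minimal sketch (my own, not from the answer) of the same loop using an unsigned 64-bit type, which defers the wrap-around considerably; an unsigned long long can hold Fibonacci numbers up through roughly the 93rd before it wraps:
#include <iostream>

int main()
{
    // Unsigned types wrap modulo 2^64 on typical platforms instead of
    // producing negative values, so the sequence stays correct much longer.
    unsigned long long f1 = 0, f2 = 1;
    std::cout << "0 ";
    for (int i = 0; i < 50; i++)
    {
        unsigned long long fn = f1 + f2;
        f2 = f1;
        f1 = fn;
        std::cout << fn << " ";
    }
}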

I'll bet it does have something to do with the range of int; you're probably overflowing it.
An integer normally has 32 bits, and one of those bits is the sign, so if you have a number like
01111111111111111111111111111111
which is a little bit over 2 billion, and you add 2 to it, then you get
10000000000000000000000000000001
which is negative (the first bit is the sign bit: 0 means positive and 1 means negative).
If you want to store larger numbers, you can use long ints.

Try using "long int" instead of "int".

Related

How to do 32 digit decimal floating point number multiplication in C++?

I have two numbers which are 32-digit decimal floating point numbers, like 1.2345678901234567890123456789012, and I want to get their product, which is also a 32-digit decimal floating point number. Is there any efficient way to do this?
Just use boost::multiprecision. You can use arbitrary precision, but there is a typedef cpp_bin_float_50 which is a float with 50 decimal digits of precision.
Example of multiplying two big decimal numbers:
#include <iostream>
#include <iomanip>
#include <limits>
#include <boost/multiprecision/cpp_bin_float.hpp>

int main(){
    boost::multiprecision::cpp_bin_float_50 val1("1.2345678901234567890123456789012");
    boost::multiprecision::cpp_bin_float_50 val2("2.2345678901234567890123456789012");
    std::cout << std::setprecision(std::numeric_limits<boost::multiprecision::cpp_bin_float_50>::max_digits10);
    std::cout << val1*val2 << std::endl;
}
Output:
2.7587257654473404640618808351577828416864868162811293
Use the usual grade school algorithm (long multiplication). If you used 3 ints (instead of 4):
A2A1A0 * B2B1B0 = A2*B2 A2*B1 A2*B0
                        A1*B2 A1*B1 A1*B0
                              A0*B2 A0*B1 A0*B0
Every multiplication will have a 2-int result. You have to sum every column on the right side and propagate the carries. In the end, you'll have a 6-int result (if the inputs are 4-int, the result is 8-int), which you can then round. This is how you handle the mantissa part; the exponents are simply added together.
I recommend you divide the problem into two parts:
1. multiplying a long number by a single int
2. adding the result from 1. into the final result
You'll need something like this as a workhorse (note that this code assumes that int is 32-bit, long long is 64-bit):
void wideMul(unsigned int &hi, unsigned int &lo, unsigned int a, unsigned int b) {
    unsigned long long int r = (unsigned long long int)a*b;
    lo = (unsigned int)r;
    hi = (unsigned int)(r>>32);
}
Note that if you had larger numbers, there are faster algorithms.
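For step 1 of that plan, a minimal sketch (my own; mulByWord is a made-up name, and wideMul is repeated from above) of multiplying a number stored as an array of 32-bit words, least significant word first, by a single 32-bit word while propagating the carry:
#include <vector>
#include <cstddef>

// Multiply two 32-bit words into a 64-bit product and split it (as above).
void wideMul(unsigned int &hi, unsigned int &lo, unsigned int a, unsigned int b) {
    unsigned long long int r = (unsigned long long int)a * b;
    lo = (unsigned int)r;
    hi = (unsigned int)(r >> 32);
}

// Sketch: result has one extra word to hold the final carry.
std::vector<unsigned int> mulByWord(const std::vector<unsigned int> &words, unsigned int b) {
    std::vector<unsigned int> result(words.size() + 1, 0);
    unsigned int carry = 0;
    for (std::size_t i = 0; i < words.size(); ++i) {
        unsigned int hi, lo;
        wideMul(hi, lo, words[i], b);
        unsigned long long sum = (unsigned long long)lo + carry;
        result[i] = (unsigned int)sum;
        carry = hi + (unsigned int)(sum >> 32);  // hi <= 2^32 - 2, so this cannot overflow
    }
    result[words.size()] = carry;
    return result;
}
Summing such per-word products, each shifted by one more word, gives the full long multiplication described above.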

Can n %= m ever return negative value for very large nonnegative n and m?

This question is regarding the modulo operator %. We know that in general a % b returns the remainder when a is divided by b, and the remainder is greater than or equal to zero and strictly less than b. But does the above hold when a and b are of magnitude 10^9?
I seem to be getting a negative output for the following code for input:
74 41 28
However, changing the final output statement does the trick and the result becomes correct!
#include<iostream>
using namespace std;
#define m 1000000007

int main(){
    int n,k,d;
    cin>>n>>k>>d;
    if(d>n)
        cout<<0<<endl;
    else
    {
        long long *dp1 = new long long[n+1], *dp2 = new long long[n+1];
        //build dp1:
        dp1[0] = 1;
        dp1[1] = 1;
        for(int r=2;r<=n;r++)
        {
            dp1[r] = (2 * dp1[r-1]) % m;
            if(r>=k+1) dp1[r] -= dp1[r-k-1];
            dp1[r] %= m;
        }
        //build dp2:
        for(int r=0;r<d;r++) dp2[r] = 0;
        dp2[d] = 1;
        for(int r = d+1;r<=n;r++)
        {
            dp2[r] = ((2*dp2[r-1]) - dp2[r-d] + dp1[r-d]) % m;
            if(r>=k+1) dp2[r] -= dp1[r-k-1];
            dp2[r] %= m;
        }
        cout<<dp2[n]<<endl;
    }
}
changing the final output statement to:
if(dp2[n]<0) cout<<dp2[n]+m<<endl;
else cout<<dp2[n]<<endl;
does the trick, but why was it required?
By the way, the code is actually my solution to this question
This is a limit imposed by the range of int.
int can only hold values from -2,147,483,648 to 2,147,483,647.
Consider using long long for your m, n, k, d & r variables. If possible use unsigned long long if your calculations should never have a negative value.
long long can hold values from –9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
while unsigned long long can hold values from 0 to 18,446,744,073,709,551,615 (2^64 - 1).
The range of positive values is approximately halved in signed types compared to unsigned types, because the most significant bit is used for the sign. When you try to assign a positive value greater than the range of the specified data type, the most significant bit is set and the value gets interpreted as a negative one.
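If you want to check the exact ranges on your own platform rather than rely on the figures above, here is a minimal sketch (my own addition) using std::numeric_limits:
#include <iostream>
#include <limits>

int main()
{
    // Print the minimum and maximum of each type as provided by the implementation.
    std::cout << "int:                " << std::numeric_limits<int>::min()
              << " .. " << std::numeric_limits<int>::max() << "\n";
    std::cout << "long long:          " << std::numeric_limits<long long>::min()
              << " .. " << std::numeric_limits<long long>::max() << "\n";
    std::cout << "unsigned long long: " << std::numeric_limits<unsigned long long>::min()
              << " .. " << std::numeric_limits<unsigned long long>::max() << "\n";
}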
Well, no, modulo with positive operands does not produce negative results.
However .....
The int type is only guaranteed by the C standards to support values in the range -32767 to 32767, which means your macro m is not necessarily expanding to a literal of type int. It will fit in a long though (which is guaranteed to have a large enough range).
If that's happening (e.g. a compiler that has a 16-bit int type and a 32-bit long type) the results of your modulo operations will be computed as long, and may have values that exceed what an int can represent. Converting that value to an int (as will be required with statements like dp1[r] %= m since dp1 is a pointer to int) gives undefined behaviour.
Mathematically, there is nothing special about big numbers, but computers only have a limited width to write down numbers in, so when things get too big you get "overflow" errors. A common analogy is the counter of miles traveled on a car dashboard - eventually it will show as all 9s and roll round to 0. Because of the way negative numbers are handled, standard signed integers don't roll round to zero, but to a very large negative number.
You need to switch to larger variable types so that they overflow less quickly - "long int" or "long long int" instead of just "int", the range doubling with each extra bit of width. You can also use unsigned types for a further doubling, since no range is used for negatives.
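For completeness, a minimal sketch (my own, not from the answers) of the normalization the questioner applied at the end: in C++ the result of % takes the sign of the dividend, so after a subtraction the remainder may be negative and has to be pushed back into [0, m):
// Keep a value reduced into [0, m) even after subtractions.
long long mod_floor(long long x, long long m)   // mod_floor is a made-up helper name
{
    long long r = x % m;        // may be negative when x is negative
    return r < 0 ? r + m : r;   // shift back into [0, m)
}

// Example use in the recurrence from the question:
// dp2[r] = mod_floor(2*dp2[r-1] - dp2[r-d] + dp1[r-d], m);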

Why can't I see all the significant digits when displaying vector<long double>?

I have a console app with a function that divides integers of a Fibonacci series, demonstrating how the ratio in any Fibonacci series approaches Φ. I have similar code written in Go and in C++11. In Go (or a scientific calculator), the function returns values of int64 and the results show a precision of up to 16 digits in an Ubuntu terminal session, for example:
1.6180339937902115
In C++11 I can never see more than 5 digits of precision in the results using cout. The results are declared as long double in a function like this:
#include <vector>
#include <cstddef>

typedef unsigned long long int ULInt;
typedef std::vector<ULInt> ULIntV;

std::vector<long double> CalcSequenceRatio( const ULIntV& fib )
{
    std::vector<long double> result;
    for ( std::size_t i = 0; i != fib.size( ); i++ )
    {
        if ( i == ( fib.size( ) - 1 ) )
        {
            result.push_back( 0 );  // last entry has no successor to divide by
            break;
        }
        long double n = fib[i + 1];
        long double n2 = fib[i];
        long double q = n / n2;
        result.push_back( q );
    }
    return result;
}
Although the vector fib passed into CalcSequenceRatio( const ULIntV& fib ) contains over 100 entries, after 16 entries all values in the result set are displayed as
1.61803
The rest of the value is being rounded although in Go (or in a calculator), I can see that the actual values are extended to at least 16 digits of precision.
How can I make CalcSequenceRatio() return more precise values? Is there a problem because going from long long int to long double is a downcast? Do I need to pass the fib series as vector<long double>? What's wrong?
Edit:
This question has been marked as a duplicate, but that is not really correct, because the question does not deal directly with cout: there are other factors that might have made a difference, although the analysis proves that cout is the problem. I posted the correct answer:
The problem is with cout, and here is the solution... as explained in
the other question...
It sounds like you want to use std::numeric_limits<T>::max_digits10 for distinct, 'round-trip' conversions, in conjunction with std::setprecision.
e.g., for float this is typically 9 (a 1.8 format); for double it is typically 17 (a 1.16 format).
A long double is typically implemented as an 80-bit extended precision type on x86, or a 128-bit quad precision type, with 21 (1.20) and 36 (1.35) digits respectively. However, a long double is only required to provide at least as much precision as a double.
There's a good series of notes on related subjects here.
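As an illustration of that suggestion (my own sketch with an assumed vector of values, not the poster's code):
#include <iostream>
#include <iomanip>
#include <limits>
#include <vector>

int main()
{
    // Print long doubles with enough digits for a round trip.
    std::vector<long double> ratios{ 1.6180339887498948482L };
    std::cout << std::setprecision(std::numeric_limits<long double>::max_digits10);
    for (long double r : ratios)
        std::cout << r << "\n";
}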
The problem here is with std::cout.
I fixed it using std::setprecision(50), as explained in How do I print a double value with full precision using cout? That shows me values like this:
1.6180339887498948482072100296669248109537875279784
To make it flexible, I gave the user the option to enter the desired level of precision:
#include <iostream>
#include <iomanip>
#include <vector>

void printGolden( const std::vector<long double>& golden )
{
    std::cout << "Enter desired precision:" << std::endl;
    int precision{};
    std::cin >> precision;
    std::cout << std::setprecision( precision );
    for ( auto i : golden )
    {
        std::cout << i << "; ";
    }
}

Issue with pow and round - answers are not equivalent

I'm having an issue creating a function that checks if a root can be simplified. In this example, I'm trying to simplify the cube root of 108, and the first number that this should work for is 27.
In order to do this, I am calling pow() with the number being the index (in this case, 27), and the power being (1/power), which in this instance is 3. I then compare that to the rounded answer of pow(index,(1/power)), which should also be 3.
Included is a picture of my problem, but basically, I am getting two answers that are equivalent to 3, yet my program is not recognizing them as equal. It seems to be working elsewhere in my program, but will not work here. Any suggestions as to why?
int inside = insideVal;
int currentIndex = index;
int coeff = co;
double insideDbl = pow(index, (1/(double)power));
double indexDbl = round(pow(index, (1/(double)power)));
cout << insideDbl << " " << indexDbl << endl;
//double newPow = (1/(double)power);
vector<int> storedInts = storeNum;
if (insideDbl == indexDbl) {
    if (inside % currentIndex == 0) {
        storedInts.push_back(currentIndex);
        return rootNumerator(inside/currentIndex, currentIndex, coeff, power, storedInts);
    }
    else {
        return rootNumerator(inside, currentIndex + 1, coeff, power, storedInts);
    }
}
else if (currentIndex < inside) {
    return rootNumerator(inside, currentIndex + 1, coeff, power, storedInts);
}
I tried to add a picture, but my reputation apparently wasn't high enough. In my console, I am getting "3 3" for the line that reads cout<<insideDbl<< " " << indexDbl <<endl;
EDIT:
Alright, so if the answers aren't exact, why does the same type of code work elsewhere in my program? Taking the 4th Root of 16 (which should equal 2) works using this segment of code:
else if ( pow(initialNumber, (1/initialPower)) == round(pow(initialNumber, (1/initialPower))) ) {
    int simplifiedNum = pow(initialNumber, (1/initialPower));
    cout << simplifiedNum;
    Value* simplifiedVal = new RationalNumber(simplifiedNum);
    return simplifiedVal;
}
despite the fact that the conditions are exactly the same as the ones that I'm having trouble with.
Well you are a victim of finite precision floating point arithmetic.
What happened?
This if(insideDbl == indexDbl) is very dangerous and misleading. It is in fact a question of whether (note: I made up the exact numbers, but I can give you precise ones) 3.00000000000001255 is the same as 2.999999999999996234. I put 14 0s and 14 9s, so technically the difference only appears beyond the 15 most significant places. This is important.
Now if you write insideDbl == indexDbl, the compiler compares the binary representations of the two, which are clearly different. However, when you simply print them, the default precision is only about 6 significant digits, so they get rounded and appear to be the same.
How to check it?
Try printing them with:
typedef std::numeric_limits< double > dbl_limits;
cout.precision(dbl_limits::max_digits10);
cout << "Does " << insideDbl << " == " << indexDbl << "?\n";
This will set the precision to the number of digits that are necessary to differentiate two numbers. Please note that this is higher than the guaranteed precision of computation! That is the root of the confusion.
I would also encourage reading about numeric_limits, especially digits10 and max_digits10.
Why does it sometimes work?
Because sometimes two algorithms will end up using the same binary representation for the final results, and sometimes they won't.
Also, 2 can be a special case, as I believe it can actually be represented exactly in binary form. I think (but won't bet my head on it) that all powers of 2 (and their sums) can be, like 0.625 = 0.5 + 0.125 = 2^-1 + 2^-3. But please don't take it for granted unless someone else confirms it.
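As a small illustration of that point (my own sketch, not part of the answer): sums of negative powers of two are stored exactly, while most decimal fractions are not:
#include <iostream>

int main()
{
    double a = 0.5 + 0.125;   // both terms are exact powers of two
    double b = 0.625;
    std::cout << (a == b) << "\n";   // prints 1: exactly equal

    double c = 0.1 + 0.2;     // neither term is exactly representable in binary
    double d = 0.3;
    std::cout << (c == d) << "\n";   // prints 0 on typical IEEE-754 doubles
}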
What can you do?
Stick to precise computations, using integers or the like. Or you could assume that everything within 3.0 +/- 10^-10 is actually 3.0 (an epsilon comparison), which is very risky, to say the least, when you do care about precise math.
Tl;dr: You can never compare two floats or doubles for equality, even when mathematically you can prove the mentioned equality, because of the finite precision of computations. That is, unless you are actually interested in the same binary representation of the value, as opposed to the value itself. Sometimes this is the case.
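For completeness, a minimal sketch (my own, with an arbitrarily chosen tolerance) of the epsilon comparison mentioned above, for cases where that risk is acceptable:
#include <cmath>

// Compare two doubles with a tolerance instead of ==.
// The tolerance 1e-9 is an arbitrary choice and must suit the problem at hand.
bool nearlyEqual(double a, double b, double eps = 1e-9)
{
    return std::fabs(a - b) <= eps;
}

// Usage in the question's check (hypothetical):
// if (nearlyEqual(insideDbl, indexDbl)) { ... }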
I suspect that you'll do better by computing the prime factorisation of insideVal and, for each prime, taking outside the root as many copies as its exponent divided by the root's index allows.
For example
108 = 2^2 × 3^3
and hence
∛108 = 3 × ∛(2^2)
and
324 = 2^2 × 3^4
and hence
∛324 = 3 × ∛(2^2 × 3)
You can use trial division to construct the factorisation.
Edit: a C++ implementation
First we need an integer overload for pow
unsigned long
pow(unsigned long x, unsigned long n)
{
    unsigned long p = 1;
    while(n!=0)
    {
        if(n%2!=0) p *= x;
        n /= 2;
        x *= x;
    }
    return p;
}
Note that this is simply the peasant algorithm applied to powers.
Next we need to compute the prime numbers in sequence
unsigned long
next_prime(const std::vector<unsigned long> &primes)
{
    if(primes.empty()) return 2;
    unsigned long p = primes.back();
    unsigned long i;
    do
    {
        ++p;
        i = 0;
        while(i!=primes.size() && primes[i]*primes[i]<=p && p%primes[i]!=0) ++i;
    }
    while(i!=primes.size() && primes[i]*primes[i]<=p);
    return p;
}
Note that primes is expected to contain all of the prime numbers less than the one we're trying to find and that we can quit checking once we reach a prime greater than the square root of the candidate p since that could not possibly be a factor.
Using these functions, we can calculate the factor that we can take outside the root with
unsigned long
factor(unsigned long x, unsigned long n)
{
    unsigned long f = 1;
    std::vector<unsigned long> primes;
    unsigned long p = next_prime(primes);
    while(pow(p, n)<=x)
    {
        unsigned long i = 0;
        while(x%p==0)
        {
            ++i;
            x /= p;
        }
        f *= pow(p, (i/n));
        primes.push_back(p);
        p = next_prime(primes);
    }
    return f;
}
Applying this to your example
std::cout << factor(108, 3) << std::endl; //output: 3
gives the expected result. For another example, try
std::cout << factor(3333960000UL, 4) << std::endl; //output: 30
which you can confirm is correct by noting that
3333960000 = 30^4 × 4116
and checking that 4116 is not divisible by any fourth power.

Is it legal to use char overflow in C++ code

Good day, colleagues!
I need to obtain a cyclic series of successive numbers from 0 to 255. Is it legal to use unsigned char overflow like this:
unsigned char test_char = 0;
while (true) {
    std::cout << test_char++ << " ";
}
Or would it be safer to use this code:
int test_int = 0;
while (true) {
    std::cout << test_int++ % 256 << " ";
}
Of course, in real code there will be a reasonable condition instead of while (true).
3.9.1/4 "Unsigned integers, declared unsigned, shall obey the laws of arithmetic modulo 2n where n is the number of bits in the value representation of that particular size of integer"
"This implies that unsigned arithmetic does not overflow because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting unsigned integer type"
So, yes, it is legal. And the second form is preferred, since it's more readable.
Even though sizeof(char) will always be 1, it is not necessary that a char will be exactly 8 bits. (I am guessing unsigned char will be similar).
So of the two, if given a choice, I would prefer the latter as the former might not even be correct.
By the way, you probably intended unsigned int instead of int for the latter? Modulus with negative numbers can get tricky (after the int overflows, as Jimmy noted). If I recollect correctly, I believe it is compiler-dependent.
unsigned char, like all other unsigned integral types, follows modulo 2^n arithmetic, so basically both your methods are equivalent. Use the first.
There is no such thing as unsigned overflow, per 3.9.1/4 as quoted by Erik. However, as Moron says, it is possible that the modulus of the unsigned char number system is greater than 256.
Note that your expression does not store the result of % 256 back to test_int. The safe way to do this is
test_int = ( test_int + 1 ) % 256;
std::cout << test_int << " ";
The output of the two samples is completely different:
the first one will print characters (a b c d e f g h ...),
the second one will print integers (0 1 2 3 4 ... 255 0 ...).
Anyway, it depends on whether you have a control for an overflow exception (.NET); otherwise, in plain C++ the value is always valid and goes from 0 to 255.
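As a small sketch of that last point (my own addition, not from the answers): to see the numbers 0 to 255 rather than raw characters with the unsigned char version, cast before printing:
#include <iostream>

int main()
{
    unsigned char test_char = 0;
    for (int i = 0; i < 600; ++i) {   // bounded loop instead of while (true)
        // The unsigned char wraps from 255 back to 0 on its own; the cast
        // makes the stream print a number instead of a character.
        std::cout << static_cast<unsigned int>(test_char++) << " ";
    }
    std::cout << "\n";
}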