Finding the Fibonacci number of a large number - C++

I wrote the following program for finding the value of a large Fibonacci number modulo N. It works for large inputs but fails to compute cases like fibo_dynamic(509618737, 460201239, 229176339), where a = 509618737, b = 460201239, and N = 229176339. Please help me make this work.
long long fibo_dynamic(long long x, long long y, long long n, long long a[]) {
    if (a[n] != -1) {
        return a[n];
    } else {
        if (n == 0) {
            a[n] = x;
            return x;
        } else if (n == 1) {
            a[n] = y;
            return y;
        } else {
            a[n] = fibo_dynamic(x, y, n - 1, a) + fibo_dynamic(x, y, n - 2, a);
            return a[n];
        }
    }
}

The values will overflow because Fibonacci numbers grow very rapidly. Even for the original Fibonacci series (where f(0) = 0 and f(1) = 1), f(93) already overflows a signed 64-bit long long (and f(94) does not fit even in unsigned long long), so such values cannot be stored in any primitive data type in C++. You should probably use the modulus operator (since you mentioned it in your question) to keep values within range, like this:
a[n] = (fibo_dynamic(x,y,n-1,a) + fibo_dynamic(x,y,n-2,a)) % MOD;
It is safe to apply the mod at every stage because the modulus operator satisfies the following rule:
if a = b + c, then:
a % n = ((b % n) + (c % n)) % n
Also, you have employed the recursive version to calculate Fibonacci numbers (though you have memoized the results of smaller sub-problems). This means there will be lots of recursive calls, which adds extra overhead. It is better to employ an iterative version if possible.
Next, you are indexing the array with the variable n, so I am assuming that the size of array a is at least n. The value of n mentioned in the question is very large. You probably cannot declare an array of that size on a local machine (with 8-byte long long elements, array a would take roughly 1.8 GB).
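For illustration, here is one possible iterative version (the function name fibo_iterative and the explicit MOD parameter are mine, not part of the original code). It keeps only the last two values, so it needs neither the recursion nor the huge array:
long long fibo_iterative(long long x, long long y, long long n, long long MOD) {
    if (n == 0) return x % MOD;
    long long prev = x % MOD, curr = y % MOD;
    for (long long i = 2; i <= n; ++i) {
        long long next = (prev + curr) % MOD;  // both operands are already < MOD
        prev = curr;
        curr = next;
    }
    return curr;
}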
Finally, the complexity of your program is O(n). There is a technique to calculate the n-th Fibonacci number in O(log(n)) time: solving the recurrence relation using matrix exponentiation. Fibonacci numbers follow this linear recurrence relation:
f(n) = f(n-1) + f(n-2) for n >= 2
Read this to understand the technique.
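As a rough sketch of that technique (not taken from the linked article; the struct and helper names are mine), the n-th term of the generalized sequence f(0) = x, f(1) = y can be read off from the n-th power of the 2x2 step matrix, computed by repeated squaring:
#include <cstdint>

using u64 = std::uint64_t;

struct Mat { u64 a, b, c, d; };   // represents [[a, b], [c, d]]

// Multiply two 2x2 matrices modulo MOD (assumes MOD < 2^31 so sums of products fit in 64 bits).
Mat mul(const Mat& p, const Mat& q, u64 MOD) {
    return { (p.a * q.a + p.b * q.c) % MOD, (p.a * q.b + p.b * q.d) % MOD,
             (p.c * q.a + p.d * q.c) % MOD, (p.c * q.b + p.d * q.d) % MOD };
}

// f(n) mod MOD in O(log n) matrix multiplications, where f(0) = x and f(1) = y.
u64 fibo_matrix(u64 x, u64 y, u64 n, u64 MOD) {
    Mat result = {1, 0, 0, 1};            // identity matrix
    Mat base   = {1, 1, 1, 0};            // Fibonacci step matrix M
    for (u64 e = n; e > 0; e >>= 1) {     // fast exponentiation: result = M^n
        if (e & 1) result = mul(result, base, MOD);
        base = mul(base, base, MOD);
    }
    // [f(n+1), f(n)]^T = M^n * [y, x]^T, so f(n) is the second row applied to (y, x).
    return (result.c * (y % MOD) + result.d * (x % MOD)) % MOD;
}
With the inputs from the question (MOD = 229176339) this takes only a few dozen matrix multiplications instead of hundreds of millions of recursive calls.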

Related

Fastest way to go from a number to 0

So, I have a homework problem like this:
Given two numbers n and k, each of which can be as large as the long long limit, we do the following operations:
assign n = n / k if n is divisible by k
reduce n by 1 if n is not divisible by k
Find the smallest number of operations to go from n to 0.
This is my solution
#define ll long long
ll smallestSteps(ll n, ll k) {
    int steps = 0;
    if (n < k) return n;
    else if (n == k) return 2;
    else {
        while (n != 0) {
            if (n % k == 0) {
                n /= k;
                steps++;
            }
            else {
                n--;
                steps++;
            }
        }
        return (ll)steps;
    }
}
This solution is O(n/k) I think?
But I think that n and k could be extremely big, and thus the program could exceed the time limit of 1s. Is there any better way to do this?
Edit 1: I use ll for it to be shorter
The algorithm can be improved given these observations:
If n < k, then k | (n - m) will never hold for any positive m, so the answer is n steps (n single decrements down to 0).
If k | n does not hold, then the biggest number m with m < n for which it does hold is n - (n % k). So it takes n % k steps until k | m holds again.
So all you need is to keep doing division with remainder, using std::div (or rely on the compiler to optimize the pair of operations), and increase steps by remainder + 1; once n drops below k, the remaining n decrements finish the job:
steps = 0
while n >= k
    mod = n % k
    n = n / k
    steps += mod + 1
steps += n      // n is now less than k, so only single decrements remain
return steps
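For reference, a direct C++ version of that loop might look like this (the function name is mine; it assumes k >= 2):
long long smallestStepsFast(long long n, long long k) {
    long long steps = 0;
    while (n >= k) {
        steps += n % k + 1;   // n % k decrements to make n divisible, then one division
        n /= k;
    }
    return steps + n;         // the remaining n < k is removed by n single decrements
}
This runs in O(log_k(n)) iterations, so it easily fits the 1-second limit.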
This can be done with an even simpler main program.
Convert n to base k. Let d be the number of digits in this number.
To get to 0, you will divide by k (d-1) times.
The number of times you subtract 1 is the digital sum of this number.
For instance, consider n=314, k=3.
314 in base 3 is 102122. This has 6 digits; the digital sum is 8.
You will have 6 - 1 + 8 = 13 steps to 0.
Use your language's conversion facilities to convert to the new base, convert the digits to integers, and sum the array. This pushes all the divide-and-count work into library methods.
Granted this won't work for weird values of k, but you can also reuse available conversion packages instead of writing your own.
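Since the C++ standard library has no ready-made conversion to an arbitrary base k, a hand-rolled sketch of this idea (the function name is mine) could look like:
#include <vector>

long long stepsViaBaseK(long long n, long long k) {
    if (n == 0) return 0;
    std::vector<long long> digits;            // base-k digits, least significant first
    for (long long m = n; m > 0; m /= k)
        digits.push_back(m % k);
    long long digitSum = 0;
    for (long long d : digits) digitSum += d;
    // (number of digits - 1) divisions plus digitSum single decrements
    return (long long)digits.size() - 1 + digitSum;
}
For n = 314, k = 3 this gives 6 - 1 + 8 = 13, matching the example above.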

Why does this recursive algorithm give the wrong answer on input 2,147,483,647?

I'm working on the following question:
Given a positive integer n and you can do operations as follow:
If n is even, replace n with n/2.
If n is odd, you can replace n with either n + 1 or n - 1.
What is the minimum number of replacements needed for n to become 1?
Here's the code I've come up with:
class Solution {
private:
    unordered_map<int, long long> count_num;
public:
    int integerReplacement(int n) {
        count_num[1] = 0; count_num[2] = 1; count_num[3] = 2;
        if (!count_num.count(n)) {
            if (n % 2) {
                count_num[n] = min(integerReplacement((n - 1) / 2), integerReplacement((n + 1) / 2)) + 2;
            } else {
                count_num[n] = integerReplacement(n / 2) + 1;
            }
        }
        return count_num[n];
    }
};
When the input is 2147483647, my code incorrectly outputs 33 instead of the correct answer, 32. Why is my code giving the wrong answer here?
I suspect that this is integer overflow. The number you've listed (2,147,483,647) is the maximum possible value that can fit into an int, assuming you're using a signed 32-bit integer, so if you add one to it, you overflow to INT_MIN, which is −2,147,483,648. From there, it's not surprising that you'd get the wrong answer, since this value isn't what you expected to get.
The overflow specifically occurs when you compute
integerReplacement((n+1)/2)
and so you'll need to fix that case to compute (n+1) / 2 without overflowing. This is a pretty extreme edge case, so I'm not surprised that the code tripped up on it.
One way to do this is to note that if n is odd, then (n + 1) / 2 is equal to (n / 2) + 1 (with integer division). So perhaps you could rewrite this as
integerReplacement((n / 2) + 1)
which computes the same value but doesn't have the overflow.
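Applied to the odd branch of the code in the question, the change would look roughly like this (everything else stays the same):
// (n - 1) / 2 is safe for odd n, and n / 2 + 1 equals (n + 1) / 2 without ever forming n + 1
count_num[n] = min(integerReplacement((n - 1) / 2),
                   integerReplacement(n / 2 + 1)) + 2;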

Given an integer n, return the number of ways it can be represented as a sum of 1s and 2s

For example:
5 = 1+1+1+1+1
5 = 1+1+1+2
5 = 1+1+2+1
5 = 1+2+1+1
5 = 2+1+1+1
5 = 1+2+2
5 = 2+2+1
5 = 2+1+2
Can anyone give a hint or pseudocode for how this can be done, please?
I honestly have no clue how to even start.
Also, this looks like an exponential problem; can it be done in linear time?
Thank you.
In the example you have provided, the order of addends is important (see the last two lines in your example). With this in mind, the answer is related to Fibonacci numbers. Let F(n) be the number of ways n can be written as a sum of 1s and 2s. The last addend is either 1 or 2, so F(n) = F(n-1) + F(n-2). These are the initial values:
F(1) = 1 (1 = 1)
F(2) = 2 (2 = 1 + 1, 2 = 2)
This is actually the (n+1)th Fibonacci number. Here's why:
Let's call f(n) the number of ways to represent n. If you have n, then you can represent it as (n-1)+1 or (n-2)+2, so the number of ways to represent n is f(n-1) + f(n-2). This is the same recurrence as the Fibonacci numbers. Furthermore, if n=1 there is 1 way, and if n=2 there are 2 ways, so your answer is the (n+1)th Fibonacci number. There are algorithms out there to compute enormous Fibonacci numbers very quickly.
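A minimal iterative sketch of that recurrence (the function name is mine; with 64-bit integers it overflows once n exceeds roughly 90):
long long countSums(int n) {
    if (n <= 0) return 0;
    long long prev = 1, curr = 1;       // f(0) = 1 (the empty sum), f(1) = 1
    for (int i = 2; i <= n; ++i) {
        long long next = prev + curr;   // f(i) = f(i-1) + f(i-2)
        prev = curr;
        curr = next;
    }
    return curr;                        // equals the (n+1)th Fibonacci number
}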
Permutations
If we want to know how many possible orderings there are in some set of size n without repetition (i.e., elements selected are removed from the available pool), the factorial of n (or n!) gives the answer:
double factorial(int n)
{
    if (n <= 0)
        return 1;
    else
        return n * factorial(n - 1);
}
Note: This also has an iterative solution and can even be approximated using the gamma function:
std::round(std::tgamma(n + 1)); // where n >= 0
The problem set starts with all 1s. Each time the set changes, two 1s are replaced by one 2. We want to find the number of ways k items (the 2s) can be arranged in a set of size n. We can query the number of possible permutations by computing:
double permutation(int n, int k)
{
    return factorial(n) / factorial(n - k);
}
However, this is not quite the result we want. The problem is, permutations consider ordering, e.g., the sequence 2,2,2 would count as six distinct variations.
Combinations
These are essentially permutations which ignore ordering. Since the order no longer matters, many permutations are redundant. Redundancy per permutation can be found by computing k!. Dividing the number of permutations by this value gives the number of combinations:
Note: This is known as the binomial coefficient and should be read as "n choose k."
double combination(int n, int k)
{
    return permutation(n, k) / factorial(k);
}

int solve(int n)
{
    double result = 0;
    if (n > 0) {
        for (int k = 0; k <= n; k += 1, n -= 1)
            result += combination(n, k);
    }
    return std::round(result);
}
This is a general solution. For example, if the problem were instead to find the number of ways an integer can be represented as a sum of 1s and 3s, we would only need to adjust how the set size shrinks at each iteration (n -= 2 instead of n -= 1, since each 3 replaces three 1s).
Fibonacci numbers
The reason the solution using Fibonacci numbers works, has to do with their relation to the binomial coefficients. The binomial coefficients can be arranged to form Pascal's triangle, which when stored as a lower-triangular matrix, can be accessed using n and k as row/column indices to locate the element equal to combination(n,k).
The pattern of n and k as they change over the lifetime of solve, plot a diagonal when viewed as coordinates on a 2-D grid. The result of summing values along a diagonal of Pascal's triangle is a Fibonacci number. If the pattern changes (e.g., when finding sums of 1s and 3s), this will no longer be the case and this solution will fail.
Interestingly, Fibonacci numbers can be computed in constant time using Binet's closed-form formula (exact only while double precision holds out, up to roughly the 70th Fibonacci number). This means we can solve this problem in constant time simply by finding the (n+1)th Fibonacci number.
int fibonacci(int n)
{
    const double SQRT_5 = std::sqrt(5.0);               // std::sqrt is not constexpr, so use const
    const double GOLDEN_RATIO = (SQRT_5 + 1.0) / 2.0;
    return std::round(std::pow(GOLDEN_RATIO, n) / SQRT_5);
}

int solve(int n)
{
    if (n > 0)
        return fibonacci(n + 1);
    return 0;
}
As a final note, the numbers generated by both the factorial and fibonacci functions can be extremely large. Therefore, a large-maths library may be needed if n will be large.
Here is code using backtracking which solves your problem. At each step, remember the numbers used to reach the sum so far (a vector is used here). Make a copy of the vector, subtract 1 from n, append 1 to the copy, and recur with n-1 and the copy; print when n == 0. Then return and repeat the same with 2. This is essentially backtracking.
#include <stdio.h>
#include <vector>
#include <iostream>
using namespace std;

int n;

void print(vector<int> vect) {
    cout << n << " = ";
    for (int i = 0; i < vect.size(); ++i) {
        if (i > 0)
            cout << "+" << vect[i];
        else
            cout << vect[i];
    }
    cout << endl;
}

void gen(int n, vector<int> vect) {
    if (!n)
        print(vect);                    // the chosen addends sum exactly to the target
    else {
        for (int i = 1; i <= 2; ++i) {  // try appending 1, then 2
            if (n - i >= 0) {
                std::vector<int> vect2(vect);
                vect2.push_back(i);
                gen(n - i, vect2);
            }
        }
    }
}

int main() {
    scanf("%d", &n);
    vector<int> vect;
    gen(n, vect);
}
This problem can be easily visualized as follows:
Consider a frog that is sitting in front of a stairway. It needs to reach the n-th stair, but it can only jump 1 or 2 steps up the stairway at a time. Find the number of ways in which it can reach the n-th stair.
Let T(n) denote the number of ways to reach the n-th stair.
So, T(1) = 1 and T(2) = 2 (two one-step jumps or one two-step jump, so 2 ways).
In order to reach the n-th stair, we already know the number of ways to reach the (n-1)th stair and the (n-2)th stair.
So, one can simply reach the n-th stair by a 1-step jump from the (n-1)th stair or a 2-step jump from the (n-2)th stair...
Hence, T(n) = T(n-1) + T(n-2)
Hope it helps!!!

Calculate this factorial term in C++ with basic datatypes

I am solving a programming problem, and in the end the problem boils down to calculating the following term:
n! / (n1! * n2! * n3! * ... * nm!)
n < 50000
n1 + n2 + n3 + ... + nm < n
I am given that the final answer will fit in 8 bytes. I am using C++. How should I calculate this? I am able to come up with some tricks, but nothing concrete and generalized.
EDIT:
I would not like to use external libraries.
EDIT1 :
Added constraints; the result will definitely fit in a 64-bit int.
If the result is guaranteed to be an integer, work with the factored representation.
By Legendre's formula, you can express all these factorials by the sequence of exponents of the primes in the range (2, n).
By subtracting the exponents of the factorials in the denominator from those in the numerator, you obtain the exponents of the whole quotient. The computation then reduces to a product of primes that will never overflow the 8 bytes.
For example,
25! = 2^22 * 3^10 * 5^6 * 7^3 * 11^2 * 13 * 17 * 19 * 23
15! = 2^11 * 3^6 * 5^3 * 7^2 * 11 * 13
10! = 2^8 * 3^4 * 5^2 * 7
yields
25!/(15! * 10!) = 2^3 * 5 * 11 * 17 * 19 * 23 = 3268760
The exponents of, say, 3 are found by integer division:
25/3 + 25/9 = 8 + 2 = 10
15/3 + 15/9 = 5 + 1 = 6
10/3 + 10/9 = 3 + 1 = 4
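A rough C++ sketch of this approach (all names are mine; as stated in the question, it assumes the final result fits in 64 bits):
#include <cstdint>
#include <vector>

using i64 = std::int64_t;

// Legendre's formula: exponent of the prime p in n!
i64 legendre(i64 n, i64 p) {
    i64 e = 0;
    for (i64 q = n / p; q > 0; q /= p) e += q;
    return e;
}

// n! / (parts[0]! * parts[1]! * ...), built from prime exponents.
i64 multinomial(i64 n, const std::vector<i64>& parts) {
    std::vector<bool> composite(n + 1, false);         // simple sieve over 2..n
    i64 result = 1;
    for (i64 p = 2; p <= n; ++p) {
        if (composite[p]) continue;
        for (i64 q = p * p; q <= n; q += p) composite[q] = true;
        i64 e = legendre(n, p);                        // exponent in the numerator
        for (i64 nk : parts) e -= legendre(nk, p);     // minus the denominator exponents
        for (i64 i = 0; i < e; ++i) result *= p;       // never overflows if the 8-byte claim holds
    }
    return result;
}
For the example above, multinomial(25, {15, 10}) returns 3268760.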
If all the input (not necessarily the output) is made of integers, you could try to count prime factors. You create an array indexed by the possible prime factors, which can be as large as n itself (so size n+1, not sqrt(n)), and fill it with the count of each prime factor over 2, 3, ..., n:
vector<int> v = vector<int>(n + 1, 0);   // a prime factor of a number up to n can be as large as n
int m = 2;
while (m <= n) {
    int i = 2;
    int a = m;
    while (a > 1) {
        while (a % i == 0) {
            v[i]++;
            a /= i;
        }
        i++;
    }
    m++;
}
Then you iterate over the n_k (1 <= k <= m) and decrease the count for each of their prime factors. This is pretty much the same code as above, except that you replace v[i]++ with v[i]--. Of course you need to call it with the vector v obtained previously.
After that, the vector v contains the count of each prime factor in your expression, and you just need to reconstruct the result:
int result = 1;
for (int i = 2; i < v.size(); i++) {     // note: increment i, not v
    for (int e = 0; e < v[i]; e++)
        result *= i;                     // repeated multiplication avoids pow's floating-point rounding
}
return result;
Note: you should use long long int instead of int above, but I stick to int for simplicity.
Edit: As mentioned in another answer, it would be better to use Legendre's formula to fill and empty the vector v faster.
What you can do is to use the properties of the logarithm:
log(AB) = log(A) + log(B)
log(A/B) = log(A) - log(B)
and
X = e^(log(X))
So you can first compute the logarithm of your quantity, then exponentiate back:
log(N!/(n1!n2!...nk!)) = log(1) + ... + log(N) - [log(n1!) + ... + log(nk!)]
then expand each log(ni!) the same way, so you end up writing everything in terms of logarithms of single numbers. Then take the exponential of your result to obtain the value of the original expression.
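In practice, std::lgamma already gives log(n!) as lgamma(n + 1), so a sketch of this method (the function name is mine) could be:
#include <cmath>
#include <cstdint>
#include <vector>

// log(n!) == std::lgamma(n + 1); sum/subtract the logs, then exponentiate and round.
std::int64_t multinomialViaLogs(std::int64_t n, const std::vector<std::int64_t>& parts) {
    double logResult = std::lgamma((double)n + 1.0);        // log(n!)
    for (std::int64_t nk : parts)
        logResult -= std::lgamma((double)nk + 1.0);         // minus log(nk!)
    return (std::int64_t)std::llround(std::exp(logResult)); // exponentiate back
}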
As @T.C. mentioned, this method may not be very accurate, although in typical scenarios many terms will cancel out. Alternatively, you can expand each factorial into a list that stores the terms of its product, e.g. 6! is stored as the list {1,2,3,4,5,6}. Do the same for the denominator terms, then start removing common elements. Finally, you can take gcds and reduce everything to coprime factors, then compute the result.

Finding number of divisors of a big integer using prime/quadratic factorization (C#)

I'm trying to get the number of divisors of a 64 bit integer (larger than 32 bit)
My first method (for small numbers) was to divide the number down until the result was 1, count the multiplicity of each prime, and use the formula (e1 + 1)*(e2 + 1)*...*(em + 1) = number of divisors, where ei is the exponent of the i-th distinct prime.
For example:
24 = 2 * 2 * 2 * 3 = 2^3 * 3^1
==> (3 + 1)*(1 + 1) = 4 * 2 = 8 divisors
public static long[] GetPrimeFactor(long number)
{
    bool Found = false;
    long i = 2;
    List<long> Primes = new List<long>();
    while (number > 1)
    {
        if (number % i == 0)
        {
            number /= i;
            Primes.Add(i);
            i = 1;          // restart trial division from 2 on the next i++
        }
        i++;
    }
    return Primes.ToArray();
}
But for large integers this method takes too many iterations. I found a method called the quadratic sieve that factorizes using square numbers. In my case this could make things much easier because the numbers involved are much smaller.
My question is, how can I implement this Quadratic Sieve?
The quadratic sieve is a method of finding large factors of large numbers; think 10^75, not 2^64. The quadratic sieve is complicated even in simple pseudocode form, and much more complicated if you want it to be efficient. It is very much overkill for 64-bit integers, and will be slower than other methods that are specialized for such small numbers.
If trial division is too slow for you, the next step up in complexity is John Pollard's rho method; for 64-bit integers, you might want to trial divide up to some small limit, maybe the primes less than a thousand, then switch to rho. Here's simple pseudocode to find a single factor of n; call it repeatedly on the composite cofactors to complete the factorization:
function factor(n, c=1)
    if n % 2 == 0 return 2
    h := 1; t := 1
    repeat
        h := (h*h+c) % n
        h := (h*h+c) % n
        t := (t*t+c) % n
        g := gcd(t-h, n)
    while g == 1
    if g is prime return g
    return factor(g, c+1)
There are other ways to factor 64-bit integers, but this will get you started, and is probably sufficient for most purposes; you might search for Richard Brent's variant of the rho algorithm for a modest speedup. If you want to know more, I modestly recommend the essay Programming with Prime Numbers at my blog.
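For reference, here is a hedged C++ sketch of the same idea (all names are mine, not from the blog post): trial division handles 2, the rho loop uses unsigned __int128 (a GCC/Clang extension) so the 64-bit modular multiplications cannot overflow, and the primality of the found factor is checked with a deterministic Miller-Rabin for 64-bit inputs. Factoring n completely this way and multiplying (exponent + 1) over the prime exponents gives the divisor count the question asks for.
#include <cstdint>
#include <initializer_list>
#include <numeric>      // std::gcd

using u64  = std::uint64_t;
using u128 = unsigned __int128;

// Modular multiplication that cannot overflow for 64-bit operands.
u64 mulmod(u64 a, u64 b, u64 m) { return (u128)a * b % m; }

u64 powmod(u64 a, u64 e, u64 m) {
    u64 r = 1;
    for (a %= m; e > 0; e >>= 1, a = mulmod(a, a, m))
        if (e & 1) r = mulmod(r, a, m);
    return r;
}

// Deterministic Miller-Rabin for 64-bit inputs with the usual base set.
bool isPrime(u64 n) {
    if (n < 2) return false;
    for (u64 p : {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37})
        if (n % p == 0) return n == p;
    u64 d = n - 1;
    int s = 0;
    while (d % 2 == 0) { d /= 2; ++s; }
    for (u64 a : {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37}) {
        u64 x = powmod(a, d, n);
        if (x == 1 || x == n - 1) continue;
        bool witness = true;                 // a witnesses that n is composite...
        for (int i = 1; i < s && witness; ++i) {
            x = mulmod(x, x, n);
            if (x == n - 1) witness = false; // ...unless the sequence hits n - 1
        }
        if (witness) return false;
    }
    return true;
}

// Pollard's rho following the pseudocode above; n must be composite and > 1.
u64 rhoFactor(u64 n, u64 c = 1) {
    if (n % 2 == 0) return 2;
    u64 h = 1, t = 1, g = 1;
    do {
        h = (mulmod(h, h, n) + c) % n;   // hare: two steps per turn
        h = (mulmod(h, h, n) + c) % n;
        t = (mulmod(t, t, n) + c) % n;   // tortoise: one step per turn
        g = std::gcd(h > t ? h - t : t - h, n);
    } while (g == 1);
    if (isPrime(g)) return g;
    return rhoFactor(g, c + 1);          // g == n means no split was found; retry with a new c
}
As in the pseudocode, you would call rhoFactor repeatedly on the remaining composite cofactors to obtain the full factorization.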