How to multiply matrices using 2D arrays for the Fibonacci sequence? - C++

The problem I'm having is that I'm unsure how to multiply a matrix by the same matrix over and over. What I'm trying to achieve is to be able to update the matrix as I go. Here is my code:
int fib3(int a, int b, int n) {
    int num[2][2] = { {0,1}, {1,1} };
    const int num2[2][2] = { {0,1}, {1,1} };
    int factArray[2][1] = { {0}, {1} };
    if (n == 0) {
        return a;
    }
    else if (n == 1) {
        return b;
    }
    else {
        for (int i = 0; i <= n; i++) {
            num[0][0] = ((num2[0][0] * 0) + num2[0][1] * 1);
            num[0][1] = ((num2[0][0] * 1) + num2[0][1] * 1);
            num[1][0] = ((num2[1][0] * 0) + num2[1][1] * 1);
            num[1][1] = ((num2[1][0] * 1) + num2[1][1] * 1);
        }
        factArray[0][0] = ((num[0][0] * factArray[0][0]) + num[0][1] * factArray[1][0]);
        factArray[1][0] = ((num[1][0] * factArray[0][0]) + num[1][1] * factArray[1][0]);
        return factArray[0][0];
    }
}
Here I would take the previous matrix and multiply it by a constant matrix, but I am unsure how to update the matrix as I do.
So the matrix is raised to some power.
So for example, I want to find f(5), the 5th Fibonacci number, which should be 5, but I am getting 1 as the result from the program.

The formula in matrix representation is mainly of interest for theoretical analysis. The trick is that you always have two elements of the sequence in the vector, instead of having to refer to earlier elements of the sequence. However, when it comes to implementation, I don't see the benefit compared to using the recursive formula. Consider that
| 1 1 | | a | | a+b |
| 1 0 | * | b | = | a |
Hence the matrix multiplication effectively does exactly the same: add the last two elements, remember the current one (a).
That being said, your code has some problems:
you pass a and b, but you only ever use them for the first and second element of the sequence. You don't need a and b. The initial values are already in the starting value of the matrix.
you have a loop, but in each iteration you calculate the same values and write them into the same array elements.
I cannot really follow the logic of your code. Why is there another multiplication after the loop? The matrix formula says, in a nutshell, "take some starting vector, apply a matrix n times, done". To be honest I cannot find that anywhere in your code ;)
If you insist on using matrix multiplications, I would suggest staying away from C-style arrays. They don't like to be passed around. Use std::array instead. I have a slight aversion against nesting, hence I'd suggest using
constexpr size_t N = 2;
using matrix = std::array<int,N*N>;
using vector = std::array<int,N>;
std::arrays can be returned with no pain:
vector multiply(const matrix& a, const vector& b) {
    vector result;
    auto ma = [&a](size_t row, size_t col) { return a[row*N+col]; };
    result[0] = ma(0,0)*b[0] + ma(0,1)*b[1];
    result[1] = ma(1,0)*b[0] + ma(1,1)*b[1];
    return result;
}
Now it should be straightforward to implement the Fibonacci sequence.
Spoiler Alert
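Here is a minimal sketch of how the pieces could fit together (my own illustration of the approach described above, with a hypothetical fib helper): start from the vector (f(1), f(0)) = (1, 0) and apply the matrix n-1 times.
#include <array>
#include <cstddef>
#include <iostream>

constexpr size_t N = 2;
using matrix = std::array<int, N * N>;
using vector = std::array<int, N>;

vector multiply(const matrix& a, const vector& b) {
    vector result;
    auto ma = [&a](size_t row, size_t col) { return a[row * N + col]; };
    result[0] = ma(0, 0) * b[0] + ma(0, 1) * b[1];
    result[1] = ma(1, 0) * b[0] + ma(1, 1) * b[1];
    return result;
}

// Start from (f(1), f(0)) = (1, 0) and apply the matrix n-1 times;
// each application maps (f(k+1), f(k)) to (f(k+2), f(k+1)).
int fib(int n) {
    if (n == 0) return 0;
    const matrix m = {1, 1,
                      1, 0};
    vector v = {1, 0};
    for (int i = 1; i < n; ++i)
        v = multiply(m, v);
    return v[0];
}

int main() {
    std::cout << fib(5) << '\n';   // prints 5
}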

Related

Normalizing 2D lines in Eigen C++

A line in the 2D plane can be represented with the implicit equation
f(x,y) = a*x + b*y + c = 0
= dot((a,b,c),(x,y,1))
If a^2 + b^2 = 1, then f is considered normalized and f(x,y) gives you the Euclidean (signed) distance to the line.
Say you are given a 3xK matrix (in Eigen) where each column represents a line:
Eigen::Matrix<float,3,Eigen::Dynamic> lines;
and you wish to normalize all K lines. Currently I do this as follows:
for (size_t i = 0; i < K; i++) {                 // for each column
    const float s = lines.block(0,i,2,1).norm(); // s = sqrt(a^2 + b^2)
    lines.col(i) /= s;                           // (a, b, c) /= s
}
I know there must be a more clever and efficient method for this that does not require looping. Any ideas?
EDIT: The following turns out to be slower with optimized code... hmm.
Eigen::VectorXf scales = lines.block(0,0,2,K).colwise().norm().cwiseInverse();
lines *= scales.asDiagonal();
I assume that this has something to do with creating the KxK matrix scales.asDiagonal().
P.S. I could use Eigen::Hyperplane somehow, but the docs seem a little opaque.
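Not an authoritative answer, but one more variant worth trying is row-wise broadcasting on the array interface, which avoids forming the KxK diagonal; evaluating the norms into a temporary first avoids aliasing with the in-place division (an untested sketch):
Eigen::RowVectorXf norms = lines.topRows(2).colwise().norm(); // norms(i) = sqrt(a_i^2 + b_i^2)
lines.array().rowwise() /= norms.array();                     // divides column i of lines by norms(i)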

how to construct block diagonal matrix

In Matlab, the function blkdiag constructs a block diagonal matrix. For example, if I have
a = [ 2, 2;
2, 2]
Then blkdiag(a,a) will return this output
>> blkdiag(a,a)
ans =
2 2 0 0
2 2 0 0
0 0 2 2
0 0 2 2
Is there an alternative in the Eigen library for blkdiag? The size of the big matrix varies, which means classical approaches won't work. I mean I want to directly construct a matrix like the aforementioned output.
A simple function like
MatrixXd blkdiag(const MatrixXd& a, int count)
{
    MatrixXd bdm = MatrixXd::Zero(a.rows() * count, a.cols() * count);
    for (int i = 0; i < count; ++i)
    {
        bdm.block(i * a.rows(), i * a.cols(), a.rows(), a.cols()) = a;
    }
    return bdm;
}
does the job.
If the argument sub-matrix a can be fixed-size or dynamic-size or an expression then the following is a better choice
template <typename Derived>
MatrixXd blkdiag(const MatrixBase<Derived>& a, int count)
{
    MatrixXd bdm = MatrixXd::Zero(a.rows() * count, a.cols() * count);
    for (int i = 0; i < count; ++i)
    {
        bdm.block(i * a.rows(), i * a.cols(), a.rows(), a.cols()) = a;
    }
    return bdm;
}
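A quick usage sketch, assuming the Eigen and iostream headers are included and using the 2x2 matrix from the question:
Eigen::MatrixXd a(2, 2);
a << 2, 2,
     2, 2;
Eigen::MatrixXd big = blkdiag(a, 2); // 4x4 result matching the Matlab blkdiag(a,a) output
std::cout << big << std::endl;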
Your problem is already solved! Just see the Eigen documentation for topLeftCorner and bottomRightCorner at http://eigen.tuxfamily.org/dox/classEigen_1_1DenseBase.html#a6f5fc5fe9d3fb70e62d4a9b1795704a8 and http://eigen.tuxfamily.org/dox/classEigen_1_1DenseBase.html#a2b9618f3c9eb4d4c9813ae8f6a8e70c5 respectively.
All you have to do is assign a matrix to those places, more or less like this:
//Assuming A is the result and has the right size allocated with zeroes, and a is the matrix you have.
A.topLeftCorner(a.rows(),a.cols())=a;
Do the same for the bottom-right corner, unless you want to flip matrix a before copying it there (try the methods .reverse() and .transpose() to get the desired flip effect).
You can also try the .block() function for better handling of the matrices.
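For the two-block case from the question, the corner-assignment idea boils down to something like this (a sketch, with a being the 2x2 matrix from the question):
Eigen::MatrixXd A = Eigen::MatrixXd::Zero(2 * a.rows(), 2 * a.cols());
A.topLeftCorner(a.rows(), a.cols()) = a;
A.bottomRightCorner(a.rows(), a.cols()) = a;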

Cut rectangle in minimum number of squares

I'm trying to solve the following problem:
A rectangular paper sheet of M*N is to be cut down into squares such that:
The paper is cut along a line that is parallel to one of the sides of the paper.
The paper is cut such that the resultant dimensions are always integers.
The process stops when the paper can't be cut any further.
What is the minimum number of paper pieces cut such that all are squares?
Limits: 1 <= N <= 100 and 1 <= M <= 100.
Example: Let N=1 and M=2; then the answer is 2, as the minimum number of squares that can be cut is 2 (the paper is cut horizontally along the smaller side, in the middle).
My code:
cin >> n >> m;
int N = min(n, m);
int M = max(n, m);
int ans = 0;
while (N != M) {
    ans++;
    int x = M - N;
    int y = N;
    M = max(x, y);
    N = min(x, y);
}
if (N == M && M != 0)
    ans++;
But I can't figure out what's wrong with this approach, as it's giving me a wrong answer.
I think both the DP and greedy solutions are not optimal. Here is a counterexample for the DP solution:
Consider a rectangle of size 13 x 11. The DP solution gives 8 as the answer, but the optimal solution has only 6 squares.
This thread has many counterexamples: https://mathoverflow.net/questions/116382/tiling-a-rectangle-with-the-smallest-number-of-squares
Also, have a look at this for a correct solution: http://int-e.eu/~bf3/squares/
I'd write this as a dynamic (recursive) program.
Write a function which tries to split the rectangle at some position. Call the function recursively for both parts. Try all possible splits and take the one with the minimum result.
The base case would be when both sides are equal, i.e. the input is already a square, in which case the result is 1.
function min_squares(m, n):
    // base case:
    if m == n: return 1
    // minimum number of squares if you split vertically:
    min_ver := min { min_squares(m, i) + min_squares(m, n-i) | i ∈ [1, n/2] }
    // minimum number of squares if you split horizontally:
    min_hor := min { min_squares(i, n) + min_squares(m-i, n) | i ∈ [1, m/2] }
    return min { min_hor, min_ver }
To improve performance, you can cache the recursive results:
function min_squares(m, n):
    // base case:
    if m == n: return 1
    // check if we already cached this
    if cache contains (m, n):
        return cache(m, n)
    // minimum number of squares if you split vertically:
    min_ver := min { min_squares(m, i) + min_squares(m, n-i) | i ∈ [1, n/2] }
    // minimum number of squares if you split horizontally:
    min_hor := min { min_squares(i, n) + min_squares(m-i, n) | i ∈ [1, m/2] }
    // put in cache and return
    result := min { min_hor, min_ver }
    cache(m, n) := result
    return result
In a concrete C++ implementation, you could use int cache[100][100] for the cache data structure since your input size is limited. Put it as a static local variable, so it will automatically be initialized with zeroes. Then interpret 0 as "not cached" (as it can't be the result of any inputs).
Possible C++ implementation: http://ideone.com/HbiFOH
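For reference, here is a self-contained sketch along those lines (my own, not necessarily identical to the linked code; the cache is sized 101x101 so that inputs 1..100 can be used as indices directly). Keep in mind, as noted elsewhere in this thread, that this cut-based DP is not optimal for every rectangle size.
#include <algorithm>
#include <iostream>

// Memoized brute force over all guillotine cuts; assumes 1 <= m, n <= 100.
int min_squares(int m, int n) {
    static int cache[101][101] = {};     // 0 means "not computed yet"
    if (m == n) return 1;
    int& memo = cache[m][n];
    if (memo != 0) return memo;
    int best = m * n;                    // upper bound: all 1x1 squares
    for (int i = 1; i <= n / 2; ++i)     // vertical cuts
        best = std::min(best, min_squares(m, i) + min_squares(m, n - i));
    for (int i = 1; i <= m / 2; ++i)     // horizontal cuts
        best = std::min(best, min_squares(i, n) + min_squares(m - i, n));
    return memo = best;
}

int main() {
    // prints 5: e.g. a 6x3 strip cut into two 3x3 plus a 6x2 strip cut into three 2x2
    std::cout << min_squares(6, 5) << '\n';
}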
The greedy algorithm is not optimal. On a 6x5 rectangle, it uses a 5x5 square and 5 1x1 squares. The optimal solution uses 2 3x3 squares and 3 2x2 squares.
To get an optimal solution, use dynamic programming. The brute-force recursive solution tries all possible horizontal and vertical first cuts, recursively cutting the two pieces optimally. By caching (memoizing) the value of the function for each input, we get a polynomial-time dynamic program (O(m n max(m, n))).
This problem can be solved using dynamic programming.
Assume we have a rectangle with width N and height M.
If N == M, it is already a square and nothing needs to be done.
Otherwise, we can divide the rectangle into two smaller ones, (N - x, M) and (x, M), so it can be solved recursively.
Similarly, we can also divide it into (N, M - x) and (N, x).
Pseudo code:
int[][] dp;
boolean[][] check;
int cutNeeded(int n, int m)
    if (n == m)
        return 1;
    if (check[n][m])
        return dp[n][m];
    check[n][m] = true;
    int result = n * m;
    for (int i = 1; i <= n/2; i++)
        int tmp = cutNeeded(n - i, m) + cutNeeded(i, m);
        result = min(tmp, result);
    for (int i = 1; i <= m/2; i++)
        int tmp = cutNeeded(n, m - i) + cutNeeded(n, i);
        result = min(tmp, result);
    return dp[n][m] = result;
Here is a greedy implementation. As @David mentioned, it is not optimal and is completely wrong in some cases, so the dynamic approach (with caching) is the best.
def greedy(m, n):
    if m == n:
        return 1
    if m < n:
        m, n = n, m
    cuts = 0
    while n:
        cuts += m / n
        m, n = n, m % n
    return cuts
print greedy(2, 7)
Here is a DP attempt in Python:
import sys

def cache(f):
    db = {}
    def wrap(*args):
        key = str(args)
        if key not in db:
            db[key] = f(*args)
        return db[key]
    return wrap

@cache
def squares(m, n):
    if m == n:
        return 1
    xcuts = sys.maxint
    ycuts = sys.maxint
    x, y = 1, 1
    while x * 2 <= n:
        xcuts = min(xcuts, squares(m, x) + squares(m, n - x))
        x += 1
    while y * 2 <= m:
        ycuts = min(ycuts, squares(y, n) + squares(m - y, n))
        y += 1
    return min(xcuts, ycuts)
This is essentially the classic integer (0-1) knapsack problem, which can be solved using a greedy or dynamic programming approach. You may refer to: Solving the Integer Knapsack

C++ polynomials: indefinite integrals

I am trying to find the indefinite integral of a polynomial, but neither my maths nor my coding is great. My code compiles, but I believe I have the wrong formula:
Polynomial Polynomial::indefiniteIntegral() const
{
    Polynomial Result;
    Result.fDegree = fDegree + 1;
    for (int i = fDegree; i > 0; i--) {
        Result.fCoeffs[i] = pow(fCoeffs[i], (Result.fDegree)) / (Result.fDegree);
    }
    return Result;
}
Looks like what you want is
for (int i = fDegree; i > 0; --i) {
    Result.fCoeffs[i] = fCoeffs[i-1] / static_cast<float>(i);
}
I don't know the underlying implementation of your class, so I don't know how you're implementing fCoeffs (whether it's doubles or floats) and whether you need to worry about i being out of bounds. If it's a vector, then it definitely needs to be initialized to the right size; if it's a map, then you may not need to.
Try something like
Polynomial Polynomial::indefiniteIntegral() const
{
    Polynomial Result;
    Result.fDegree = fDegree + 1;
    // shift each coefficient up one power and divide by the new exponent
    for (int i = Result.fDegree; i > 0; i--) {
        Result.fCoeffs[i] = fCoeffs[i-1] / i;
    }
    Result.fCoeffs[0] = 0;  // arbitrary constant of integration
    return Result;
}
Each monomial a x^i is stored as the value a in fCoeffs[i]; after integration it should be moved to fCoeffs[i+1] and multiplied by 1/(i+1). The lowest coefficient is set to 0.
And yes, you better make sure there is room for the highest coefficient.
Example: [1 1] is 1 + x and should become C + x + 1/2 x^2 which is represented by [0 1 0.5], keeping in mind that we introduced an arbitrary constant.
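To see the rule in isolation, here is a small self-contained sketch on a plain coefficient vector, independent of the Polynomial class (the integrate name is made up):
#include <vector>
#include <iostream>

// Integrates the polynomial sum a_i x^i given as the coefficient vector {a_0, a_1, ...}.
std::vector<double> integrate(const std::vector<double>& c) {
    std::vector<double> result(c.size() + 1, 0.0);   // room for the new highest power
    result[0] = 0.0;                                 // arbitrary constant C chosen as 0
    for (std::size_t i = 0; i < c.size(); ++i)
        result[i + 1] = c[i] / static_cast<double>(i + 1);
    return result;
}

int main() {
    // 1 + x  ->  C + x + 0.5 x^2, i.e. {0, 1, 0.5}
    for (double a : integrate({1.0, 1.0})) std::cout << a << ' ';
    std::cout << '\n';
}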

How to compute sum of evenly spaced binomial coefficients

How to find the sum of evenly spaced binomial coefficients modulo M?
i.e. (C(n,a) + C(n,a+r) + C(n,a+2r) + C(n,a+3r) + ... + C(n,a+kr)) % M = ?
given: 0 <= a < r, a + kr <= n < a + (k+1)r, n < 10^5, r < 100
My first attempt was:
int res = 0;
int mod = 1000000009;
for (int k = 0; a + r*k <= n; k++) {
    res = (res + mod_nCr(n, a+r*k, mod)) % mod;
}
but this is not efficient. So after reading here
and this paper I found out the above sum is equivalent to:
summation[ ω^(-ja) * (1 + ω^j)^n / r ], for 0 <= j < r; and ω = e^(i2π/r) is a primitive r-th root of unity.
What would code to find this sum in O(r) look like?
Edit:
n can go up to 10^5 and r can go up to 100.
Original problem source: https://www.codechef.com/APRIL14/problems/ANUCBC
Editorial for the problem from the contest: https://discuss.codechef.com/t/anucbc-editorial/5113
After revisiting this post 6 years later, I'm unable to recall how I transformed the original problem statement into my version; nonetheless, I've shared the link to the original solution in case anyone wants to have a look at the correct solution approach.
Binomial coefficients are coefficients of the polynomial (1+x)^n. The sum of the coefficients of x^a, x^(a+r), etc. is the coefficient of x^a in (1+x)^n in the ring of polynomials mod x^r-1. Polynomials mod x^r-1 can be specified by an array of coefficients of length r. You can compute (1+x)^n mod (x^r-1, M) by repeated squaring, reducing mod x^r-1 and mod M at each step. This takes about log_2(n)r^2 steps and O(r) space with naive multiplication. It is faster if you use the Fast Fourier Transform to multiply or exponentiate the polynomials.
For example, suppose n=20 and r=5.
(1+x) = {1,1,0,0,0}
(1+x)^2 = {1,2,1,0,0}
(1+x)^4 = {1,4,6,4,1}
(1+x)^8 = {1,8,28,56,70,56,28,8,1}
{1+56,8+28,28+8,56+1,70}
{57,36,36,57,70}
(1+x)^16 = {3249,4104,5400,9090,13380,9144,8289,7980,4900}
{3249+9144,4104+8289,5400+7980,9090+4900,13380}
{12393,12393,13380,13990,13380}
(1+x)^20 = (1+x)^16 (1+x)^4
= {12393,12393,13380,13990,13380}*{1,4,6,4,1}
{12393,61965,137310,191440,211585,203373,149620,67510,13380}
{215766,211585,204820,204820,211585}
This tells you the sums for the 5 possible values of a. For example, for a=1, 211585 = 20c1+20c6+20c11+20c16 = 20+38760+167960+4845.
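A minimal C++ sketch of this repeated-squaring approach (my own illustration, using naive O(r^2) multiplication; a polynomial mod x^r - 1 is represented by a vector of its r coefficients):
#include <cstdint>
#include <iostream>
#include <vector>

using poly = std::vector<std::uint64_t>;

// Multiply two length-r coefficient vectors modulo (x^r - 1, M): exponents wrap around mod r.
poly mul(const poly& p, const poly& q, std::uint64_t M) {
    const std::size_t r = p.size();
    poly res(r, 0);
    for (std::size_t i = 0; i < r; ++i)
        for (std::size_t j = 0; j < r; ++j)
            res[(i + j) % r] = (res[(i + j) % r] + p[i] * q[j]) % M;
    return res;
}

// Compute (1+x)^n mod (x^r - 1, M) by repeated squaring; entry a of the result
// is the sum of C(n, k) over all k congruent to a mod r, taken modulo M.
poly binom_sums(std::uint64_t n, std::size_t r, std::uint64_t M) {
    poly result(r, 0), base(r, 0);
    result[0] = 1;        // the constant polynomial 1
    base[0] = 1;          // the polynomial 1 + x ...
    base[1 % r] += 1;     // ... (reduces to the constant 2 when r == 1)
    while (n > 0) {
        if (n & 1) result = mul(result, base, M);
        base = mul(base, base, M);
        n >>= 1;
    }
    return result;
}

int main() {
    // Matches the worked example above: n = 20, r = 5 gives 215766 211585 204820 204820 211585.
    for (std::uint64_t c : binom_sums(20, 5, 1000000009)) std::cout << c << ' ';
    std::cout << '\n';
}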
Something like this, but you have to check a, n and r, because I just put in values without regard for the stated conditions:
#include <complex>
#include <cmath>
#include <iostream>
using namespace std;
int main( void )
{
    const int r = 10;
    const int a = 2;
    const int n = 4;
    complex<double> i(0., 1.), res(0., 0.), w;
    for( int j(0); j < r; ++j )
    {
        w = exp( i * 2. * M_PI / (double)r );
        res += pow( w, -j * a ) * pow( 1. + pow( w, j ), n ) / (double)r;
    }
    cout << res << endl; // the real part is the requested sum
    return 0;
}
The mod operation is expensive; try to avoid it as much as possible:
uint64_t res = 0;
int mod = 1000000009;
for (int k = 0; a + r*k <= n; k++) {
    res += mod_nCr(n, a+r*k, mod);
    if (res >= mod)
        res %= mod;
}
I did not test this code
I don't know if you reached something or not in this question, but the key to implementing this formula is to figure out that the w^i are independent and therefore form a ring. In simpler terms, you should think of computing
(1+x)^n % (x^r - 1), i.e. finding (1+x)^n in the ring Z[x]/(x^r - 1).
If that is confusing, here is an easy implementation outline:
Make a vector of size r. O(r) space + O(r) time.
Initialize this vector with zeros everywhere. O(r) space + O(r) time.
Make the first two elements of that vector 1. O(1).
Calculate (x+1)^n using fast exponentiation; each multiplication takes O(r^2) and there are log n multiplications, therefore O(r^2 log(n)).
Return the first element of the vector. O(1).
Complexity
O(r^2 log(n)) time and O(r) space.
This r^2 can be reduced to r log(r) using the Fourier transform.
How is the multiplication done? It is regular polynomial multiplication, with the exponents taken mod r:
vector<long long> p1(r, 0);
vector<long long> p2(r, 0);
p1[0] = p1[1] = 1;
p2[0] = p2[1] = 1;
now we want to do the multiplication
vector<long long> res(r, 0);
for (int i = 0; i < r; i++)
{
    for (int j = 0; j < r; j++)
    {
        res[(i+j) % r] += (p1[i] * p2[j]);
    }
}
return res[0];
I have implemented this part before; if you are still confused about something, let me know. I would prefer that you implement the code yourself, but if you need the code, let me know.