Divide and conquer algorithm: find the minimum of a matrix - c++

I'm starting to learn how to implement divide and conquer algorithms, but I'm having some serious trouble with this exercise.
I have written an algorithm which finds the minimum value in a given vector using the divide and conquer method:
int minimum (int v[], int inf, int sup)
{
    int med, m1, m2;
    if (inf == sup)                  // one element left: it is the minimum
        return v[inf];
    med = (inf + sup) / 2;           // split the range into two halves
    m1 = minimum (v, inf, med);      // minimum of the left half
    m2 = minimum (v, med+1, sup);    // minimum of the right half
    if (m1 < m2)
        return m1;
    else
        return m2;
}
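For example, a quick test driver (illustrative only, not part of the exercise) could be:

#include <iostream>

int minimum (int v[], int inf, int sup);   // the function above

int main()
{
    int v[] = {7, 3, 9, 1, 4, 6, 2, 8};
    std::cout << minimum(v, 0, 7) << std::endl;   // prints 1
    return 0;
}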
And it works. Now, I have to do the same exercise on a matrix, but I'm getting lost. Specifically, I have been told to do the following:
Let n = 2^k. Consider an nxn square matrix. Calculate its minimum value using a recursive function
double minmatrix(Matrix M)
{
    return minmatrix2(M, 0, 0, M.row);
}
(the Matrix type is given, and as you can imagine M.row gives the number of rows and columns of the matrix). Use an auxiliary function
double minmatrix2(Matrix M, int i, int j, int m)
This has to be done using a recursive divide and conquer algorithm.
So... I can't figure out a way of doing it. I have been given the suggestion of splitting the matrix into 4 parts each time (from (i,j) to (i+m/2, j+m/2), from (i+m/2,j) to (i+m,j+m/2), from (i,j+m/2) to (i+m/2,j+m), and from (i+m/2,j+m/2) to (i+m,j+m)) and trying to implement code that works similarly to what I wrote for the array, but I just can't seem to do it. Any suggestions? Even if you don't want to post a complete answer, just give me some hints. I really want to understand this.
EDIT: All right, I've done this. I'm posting the code I have used just in case someone else has the same doubt.
double minmatrix2(Matrix M, int i, int j, int m)
{
    double a1, a2, a3, a4;           // double, not int, or the values would get truncated
    if (m == 1)
        return M[i][j];              // base case: a 1x1 block
    a1 = minmatrix2(M, i,       j,       m/2);   // top-left quadrant
    a2 = minmatrix2(M, i+(m/2), j,       m/2);   // bottom-left quadrant
    a3 = minmatrix2(M, i,       j+(m/2), m/2);   // top-right quadrant
    a4 = minmatrix2(M, i+(m/2), j+(m/2), m/2);   // bottom-right quadrant
    if (min (a1, a2) < min (a3, a4))
        return min (a1, a2);
    else
        return min (a3, a4);
}
(function min defined elsewhere)
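(For reference, the min helper here could simply be std::min<double> from <algorithm>, or a trivial two-argument function like this:

double min (double a, double b)
{
    return (a < b) ? a : b;
}
)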

Consider that a 2D matrix in C or C++ is often implemented as accessor functions on top of a 1D array. You already know how to do this for a 1D array, so the only difference is how you address the cells. If you do this, your performance will intrinsically be optimal because you will address neighboring cells together.
Alternatively, consider that a 2D matrix has two dimensions N and M. Just break it in half along the larger dimension repeatedly until the larger dimension is less than X, some reasonable value to stop and do the actual computation sequentially. This is not entirely optimal because you will have to "skip" over parts of the matrix as you address memory.
A final idea is to divide by the major dimension first, then the minor one. In C this means divide by rows until you have single rows, then run your 1D array algorithm on each row. This produces roughly optimal performance.
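To make that last idea concrete, here is a minimal sketch, assuming a row-major matrix stored as a flat array of doubles (the names data, cols, r0 and r1 are illustrative, not part of the original exercise):

#include <algorithm>   // std::min

// 1D divide and conquer over one row, exactly like the array version above
double minRow (const double* row, int inf, int sup)
{
    if (inf == sup)
        return row[inf];
    int med = (inf + sup) / 2;
    return std::min(minRow(row, inf, med), minRow(row, med + 1, sup));
}

// Split along the row dimension until a single row remains, then reuse minRow
double minMatrixFlat (const double* data, int cols, int r0, int r1)
{
    if (r0 == r1)
        return minRow(data + r0 * cols, 0, cols - 1);
    int mid = (r0 + r1) / 2;
    return std::min(minMatrixFlat(data, cols, r0, mid),
                    minMatrixFlat(data, cols, mid + 1, r1));
}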


A better way to access n-d array element with a 1-d index array in C++?

Recently I've been working with C++ pointers, and I ran into this question when I wanted to access an element of a multi-dimensional array using a 1-dimensional array that contains the indices.
Say I have an array arr, a 4-dimensional array with all elements set to 0 except that arr[1][2][3][4] is 1, and an array idx that contains the index for every dimension of arr. I can access that element with arr[idx[0]][idx[1]][idx[2]][idx[3]], or with *(*(*(*(arr + idx[0]) + idx[1]) + idx[2]) + idx[3]).
The problem is that when n is large this gets unwieldy, so I wonder whether there is a better way to do multi-dimensional access?
#include <bits/stdc++.h>
using namespace std;
#define N 10

int main()
{
    int arr[N][N][N][N] = {0};
    int idx[4] = {1, 2, 3, 4};
    arr[1][2][3][4] = 1;
    cout << "Expected: " << arr[1][2][3][4] << " at " << &arr[1][2][3][4] << endl;
    cout << "Got with ****: ";
    cout << *(*(*(*(arr + idx[0]) + idx[1]) + idx[2]) + idx[3]) << endl;
    return 0;
}
output
Expected: 1 at 0x7fff54c61f28
Got with ****: 1
The way you construct your algorithm for indexing a multi-dimensional array will vary depending on the language of choice; you have tagged this question with both C and C++. I will stick with the latter since my answer pertains to C++. For a little while now I've been working on something similar but different, so this becomes an interesting question, as I was building a multipurpose multidimensional matrix class template.
What I have discovered about higher levels of multi dimensional vectors and matrices is that the order of 3 repetitiously works miracles in understanding the nature of higher dimensions. Think of this in the geometrical perspective before considering the algorithmic software implementation side of it.
Mathematically speaking, let's start at the lowest dimension, 0, whose first shape is a 0-dimensional object. This is any arbitrary point, and such a point can have any number of coordinate properties: p0(0), p1(1), p2(2,2), p3(3,3,3), ... pn(n,n,...,n), where each of these points refers to a specific locale with the given number of dimensional attributes. A point has no linear distance such as length, width, or height, and hence no magnitude in any direction or dimension, so it defines no area, volume, or higher-dimensional volume. A 0-dimensional point also has no notion of direction, which implies there is no angle of rotation that defines magnitude. Note too that any arbitrary point is also the zero vector. Algebraic polynomials help in understanding this: f(x) = mx+b is linear, a one-dimensional equation, shape (in this case a line) or graph; f(x) = x^2 is two-dimensional, f(x) = x^3 is three-dimensional, f(x) = x^4 is four-dimensional, and so on up to f(x) = x^n, which is N-dimensional. Length or magnitude, direction or angle of rotation, area, volume, and the rest cannot be defined until you relate two distinct points, giving at least one line segment or vector with a specified direction. Once you have an implied direction you have slope.
Of the operations in mathematics, the simplest is addition, which is nothing more than a linear translation; once you introduce addition you also introduce all the other operations such as subtraction, multiplication, division, powers, and radicals. Once you have multiplication and division you can define rotation, angles of rotation, area, volume, rates of change, and slope (also the tangent function), which defines geometry and trigonometry and in turn leads into integration and derivatives. Yes, we have all had our math lessons, but I think this is important for understanding how to construct the relationship of one order of magnitude to another, which will then help us work through higher-dimensional orders with ease. Once you understand that even the higher-order operations are nothing more than expansions of addition and subtraction, you begin to see that these operations are still linear in nature; they just expand into multiple dimensions.
Earlier I stated that the order of 3 repeatedly works miracles, so let me explain what I mean. Since we perceive things day to day from a 3D perspective, we can only visualize 3 distinct vectors orthogonal to each other, giving our natural 3 dimensions of space: left & right and forward & backward give the horizontal axes and planes, and up & down gives the vertical axis and planes. We cannot visualize anything higher, so dimensions of order x^4, x^5, x^6, etc. cannot be visualized, yet they do exist. If we look at the graphs of polynomials we see a pattern between odd and even functions: x^4, x^6 and x^8 are nothing more than expansions of x^2, and x^5, x^7 and x^9 are nothing more than expansions of x^3. So I treat the first few dimensions as the normal ones: zero - point, 1st - linear, 2nd - area, 3rd - volume; the 4th and higher dimensions I call volumetric.
So when I say volume it relates directly to the 3rd dimension, whereas volumetric refers to any dimension higher than the 3rd. Now consider a matrix as seen in regular algebra, where the common matrices are defined as MxN. This is a flat 2D matrix with M * N elements, and it also has an area of M * N. Expand to a higher-dimensional matrix such as MxNxO and you have a 3D matrix with M * N * O elements and a volume of M * N * O. To visualize this, think of the MxN 2D part as a page of a book, and the O component as each page of the book, or each slice of a box. The elements of these matrices can be anything from a simple value, to an applied operation, to an equation, a system of equations, sets, or just an arbitrary object as in a storage container. Now, a 4th-order matrix such as MxNxOxP has a 4th-dimensional aspect, but the easiest way to visualize it is as a 1-dimensional array or vector where each of its P elements is a 3D matrix with volume MxNxO. With an MxNxOxPxQ matrix you have a 2D area matrix of PxQ where each element is an MxNxO volume matrix. Again, with MxNxOxPxQxR you have a 6th-dimensional matrix, this time a 3D volume matrix where each of the PxQxR elements is in fact a 3D matrix of MxNxO. As you go higher and higher this pattern repeats and merges again. So the dimensionalities of arbitrary matrices repeat: 1D are linear vectors or matrices, 2D are area or planar matrices, 3D are volume matrices, and anything higher repeats this process, compressing the previous step of volumes, hence the terminology volumetric matrices. Take a look at this table:
// Order of Magnitude And groupings
-----------------------------------
Linear Area Volume
x^1 x^2 x^3
x^4 x^5 x^6
x^7 x^8 x^9
x^10 x^11 x^12
... ... ...
----------------------------------
Now it is just a matter of using a little bit of calculus to know which order of magnitude indexes into which higher level of dimensionality. Once you know a specific dimension, it is simple to take multiple derivatives to give you a linear expression, then traverse the space, then integrate to the same orders as the derivatives to get the results. This should eliminate a good amount of intermediate work by at first ignoring the least significant lower dimensions in a high-dimensional order. If you are working with something that has 12 dimensions, you can assume that the first 3 dimensions defining the first volume are packed tight, being an element of another 3D volumetric matrix, and that that 2nd order of volumetric matrix is itself an element of yet another 3D volumetric matrix. Thus we have a repeating pattern, and now it's just a matter of applying this to construct an algorithm; once you have an algorithm it should be quite easy to implement in any programming language. You may need a 3-case switch to determine which algorithmic approach to use given the overall dimensionality of your matrix or n-d array: one case handles orders of linearity, another handles area, and the last handles volumes, and if the order is 4th or higher the overall process becomes recursive in nature.
I figured out a way to solve this myself.
The idea is to use void * pointers: every memory cell holds a value or the address of another memory cell, so we can directly compute the offset of the target from the base address.
In this case, we use void *p = arr to get the base address of the n-d array, and then loop over the array idx to accumulate the offset.
For arr[10][10][10][10], the offset between arr[0] and arr[1] is 10 * 10 * 10 * sizeof(int): since arr is 4-d, arr[0] and arr[1] are 3-d, so there are 10 * 10 * 10 = 1000 elements between arr[0] and arr[1]. We also have to remember that two adjacent void * addresses are 1 byte apart, so we multiply by sizeof(int) to get the correct offset in bytes. Putting this together gives the exact address of the memory cell we want to access.
Finally, we cast the void * pointer to an int * pointer and dereference it to get the int value. That's it!
With void * (not so good)
#include <bits/stdc++.h>
using namespace std;
#define N 10

int main()
{
    int arr[N][N][N][N] = {0};
    int idx[4] = {1, 2, 3, 4};
    arr[1][2][3][4] = 1;
    cout << "Expected: " << arr[1][2][3][4] << " at " << &arr[1][2][3][4] << endl;
    cout << "Got with ****: ";
    cout << *(*(*(*(arr + idx[0]) + idx[1]) + idx[2]) + idx[3]) << endl;
    void *p = arr;
    for(int i = 0; i < 4; i++)
        p += idx[i] * int(pow(10, 3-i)) * sizeof(int);
    cout << "Got with void *:";
    cout << *((int*)p) << " at " << p << endl;
    return 0;
}
Output
Expected: 1 at 0x7fff5e3a3f18
Got with ****: 1
Got with void *:1 at 0x7fff5e3a3f18
Notice:
There is a warning when compiling it, but I choose to ignore it.
test.cpp: In function 'int main()':
test.cpp:23:53: warning: pointer of type 'void *' used in arithmetic [-Wpointer-arith]
p += idx[i] * int(pow(10, 3-i)) * sizeof(int);
Use char * instead of void * (better)
Since we want to manipulate the pointer byte by byte, it is better to use char * instead of void *.
#include <bits/stdc++.h>
using namespace std;
#define N 10

int main()
{
    int arr[N][N][N][N] = {0};
    int idx[4] = {1, 2, 3, 4};
    arr[1][2][3][4] = 1;
    cout << "Expected: " << arr[1][2][3][4] << " at " << &arr[1][2][3][4] << endl;
    char *p = (char *)arr;
    for(int i = 0; i < 4; i++)
        p += idx[i] * int(pow(10, 3-i)) * sizeof(int);
    cout << "Got with char *:";
    cout << *((int*)p) << " at " << (void *)p << endl;
    return 0;
}
Output
Expected: 1 at 0x7fff4ffd7f18
Got with char *:1 at 0x7fff4ffd7f18
With int * (in this specific case)
I have been told that using void * in arithmetic is not good practice and that it is better to use int *, so I cast arr to an int * pointer and also replaced pow.
#include <bits/stdc++.h>
using namespace std;
#define N 10

int main()
{
    int arr[N][N][N][N] = {0};
    int idx[4] = {1, 2, 3, 4};
    arr[1][2][3][4] = 1;
    cout << "Expected: " << arr[1][2][3][4] << " at " << &arr[1][2][3][4] << endl;
    cout << "Got with ****: ";
    cout << *(*(*(*(arr + idx[0]) + idx[1]) + idx[2]) + idx[3]) << endl;
    int *p = (int *)arr;
    int offset = 1e3;
    for(int i = 0; i < 4; i++)
    {
        p += idx[i] * offset;
        offset /= 10;
    }
    cout << "Got with int *:";
    cout << *p << " at " << p << endl;
    return 0;
}
Output
Expected: 1 at 0x7fff5eaf9f08
Got with ****: 1
Got with int *:1 at 0x7fff5eaf9f08
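As a side note, the hard-coded offset of 1e3 in the int * version can be generalized to any shape by accumulating the offset from the dimension sizes. A sketch (flatIndex and dims are illustrative names, not from the question):

#include <cstddef>

// Row-major flat offset of an n-dimensional index.
// dims[i] is the size of dimension i, idx[i] the index into it.
std::size_t flatIndex(const int* idx, const int* dims, int n)
{
    std::size_t offset = 0;
    for (int i = 0; i < n; i++)
        offset = offset * dims[i] + idx[i];   // Horner-style accumulation
    return offset;
}

// Usage with the arrays from the question:
//   int dims[4] = {N, N, N, N};
//   int value = ((int*)arr)[flatIndex(idx, dims, 4)];   // 1*1000 + 2*100 + 3*10 + 4 = 1234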

Calculating probability using std::binomial_distribution

Is it possible to calculate the probability of obtaining n success in k trials using std::binomial_distribution? How?
Not really, no.
std::binomial_distribution is an adapter for random number generation rather than something that can evaluate the probability mass or cumulative distribution function.
You can implement such a function in a few lines of code yourself, or search for a good mathematics library. (Use a Pascal's-triangle approach rather than calculating large factorials.)
To implement such a PMF, like the binomial
PMF(p, n, k) = n! / (k! * (n-k)!) * p^k * (1-p)^(n-k)
a good trick is to compute its logarithm and then exponentiate it.
For the log of the factorial you use the log of the gamma function.
Something along these lines (not tested!):
#include <cmath>   // std::lgamma, std::log, std::exp

// log of the binomial coefficient C(n, k), via the log-gamma function
double logChoose(int n, int k) {
    return std::lgamma(double(n+1)) - std::lgamma(double(k+1)) - std::lgamma(double(n-k+1));
}

// P(exactly k successes in n trials), computed in log space to avoid overflow
// (note: p must be strictly between 0 and 1, otherwise std::log(0) occurs)
double PMFBinomial(double p, int n, int k) {
    double lgr = logChoose(n, k) + double(k)*std::log(p) + double(n-k)*std::log(1-p);
    return std::exp(lgr);
}
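A quick sanity check of this sketch (the reference value is C(10,3) / 2^10 = 120 / 1024 ≈ 0.1171875):

#include <iostream>

int main()
{
    std::cout << PMFBinomial(0.5, 10, 3) << std::endl;   // prints roughly 0.117188
    return 0;
}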

Sum of product: can we vectorize the following in C++? (using Eigen or other libraries)

UPDATE: the (sparse) three-dimensional matrix v in my question below is symmetric: v(i1,i2,i3) = v(j1,j2,j3) where (j1,j2,j3) is any of the 6 permutations of (i1,i2,i3), i.e.
v(i1,i2,i3) = v(i1,i3,i2) = v(i2,i3,i1) = v(i2,i1,i3) = v(i3,i1,i2) = v(i3,i2,i1).
Moreover, v(i1,i2,i3) != 0 only when i1 != i2 && i1 != i3 && i2 != i3.
E.g. v(i,i,j) = 0, v(i, k, k) = 0, v(k, j, k) = 0, etc...
I thought that with this additional information, I could already get a significant speed-up by doing the following:
Remark: v contains duplicate values (a triplet (i,j,k) has 6 permutations, and the values of v for these 6 are the same).
So I defined a more compact matrix u that contains only the non-duplicate entries of v. The indices of u are (i1,i2,i3) with i1 < i2 < i3. The length of u is the length of v divided by 6.
I computed the sum by iterating over the new value vector and the new index vectors.
With this, I only got a little speed-up. I realized that instead of iterating N times doing a multiplication each time, I iterated N/6 times doing 6 multiplications each time, and that's pretty much the same as before :(
Hope somebody could come up with a better solution.
--- (Original question) ---
In my program I have an expensive operation that is repeated every iteration.
I have three n-dimensional vectors x1, x2 and x3 that are supposed to change every iteration.
I have four N-dimensional vectors I1, I2, I3 and v that are pre-defined and will not change, where:
I1, I2 and I3 contain the indices of respectively x1, x2 and x3 (the elements in I_i are between 0 and n-1)
v is a vector of values.
For example:
We can see v as a (reshaped) sparse three-dimensional matrix, each index k of v corresponds to a triplet (i1,i2,i3) of indices of x1, x2, x3.
I want to compute at each iteration three n-dimensional vectors y1, y2 and y3 defined by:
y1[i1] = sum_{i2,i3} v(i1,i2,i3)*x2(i2)*x3(i3)
y2[i2] = sum_{i1,i3} v(i1,i2,i3)*x1(i1)*x3(i3)
y3[i3] = sum_{i1,i2} v(i1,i2,i3)*x1(i1)*x2(i2)
More precisely what the program does is:
Repeat:
Compute y1 then update x1 = f(y1)
Compute y2 then update x2 = f(y2)
Compute y3 then update x3 = f(y3)
where f is some external function.
I would like to know if there is a C++ library that helps me to do so as fast as possible. Using for loops is just too slow.
Thank you very much for your help!
Update: Looks like it's not easy to get a better solution than the straight-forward for loops. If the vector of indices I1 above is ordered in non-decreasing order, can we compute y1 faster?
For example: I1 = [0 0 0 0 1 1 2 2 2 3 3 3 ... n n].
The simple answer is no, at least not trivially. Your access pattern (e.g. x2(i2)*x3(i3)) does not (at least at compile time) access contiguous memory; it has a layer of indirection. Because of this, SIMD instructions are pretty useless, as they work on contiguous chunks of memory. What you may want to consider is creating a copy of xM sorted according to iM, removing the layer of indirection. This should reduce the number of cache misses that xM(iM) generates, and since it is accessed N times, that may reduce some of the wall time (assuming N is large).
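For reference, the straightforward loop the question calls too slow is just one gather/accumulate pass over the N stored entries. A sketch assuming std::vector inputs (names follow the question):

#include <vector>

// y1[i1] = sum over stored triplets of v(i1,i2,i3) * x2(i2) * x3(i3);
// y2 and y3 are computed analogously by permuting the roles of I1, I2, I3.
std::vector<double> computeY1(const std::vector<int>& I1,
                              const std::vector<int>& I2,
                              const std::vector<int>& I3,
                              const std::vector<double>& v,
                              const std::vector<double>& x2,
                              const std::vector<double>& x3,
                              int n)
{
    std::vector<double> y1(n, 0.0);
    for (std::size_t k = 0; k < v.size(); ++k)
        y1[I1[k]] += v[k] * x2[I2[k]] * x3[I3[k]];
    return y1;
}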
If maximal accuracy is not critical, you may want to consider using an FFT method instead of the direct convolution (at least, that's how I understood your question; feel free to correct me if I'm wrong).
Assuming you are doing a convolution and the vectors (a and b, same size as in your question) are large, the result (c) can be calculated naïvely as
// Direct (naïve) convolution: O(n^2). Assumes c has been resized to
// a.size() + b.size() - 1 and zero-initialized.
for(int i = 0; i < a.size(); i++)
    for(int j = 0; j < b.size(); j++)
        c(i + j) += a(i) * b(j);
Using the convolution theorem, you could take the Fourier transform of both a and b, perform an element-wise multiplication, and then take the inverse Fourier transform of the result to get c (the result will probably differ a little numerically):
// O(n log(n)); note A, B, and C are vectors of complex floating point numbers.
// With Eigen's unsupported FFT module the destination argument comes first.
fft.fwd(A, a);                   // A = FFT(a)
fft.fwd(B, b);                   // B = FFT(b)
C = A.array() * B.array();       // element-wise product in the frequency domain
fft.inv(c, C);                   // c = inverse FFT(C)

Points on the same line

I was doing a practice question that went something like this: we are given N pairs of coordinates (x,y) and also a central point (x0,y0). We are asked to find the maximum number of points lying on a line passing through (x0,y0).
My approach: I tried to maintain a hash map with the slope as the key, intending to take the maximum mapped value as the maximum number of points on the same line. Something like this:
mp[(yi-y0)/(xi-x0)]++; // i from 0 to n
And then iterating over the map and doing something like this:
if (it->second > max)   // it is the iterator
    max = it->second;
and printing max at the end.
Problem with my approach: whenever (xi-x0) is 0 I get a runtime error (division by zero). I also tried atan(slope) so that I would get an angle instead of an undefined value, but it still doesn't work.
What I expect: how to remove this runtime error, and whether my approach is correct for finding the maximum number of points on a line passing through the point (x0,y0).
P.S. My native language is not English, so please excuse any mistakes. I tried my best to make everything clear; if anything is unclear, please tell me.
I'm assuming no other points have the same coordinates as your "origin".
If all your coordinates happen to be integers, you can keep a rational number (i.e. a pair of integers, i.e. a numerator and a denominator) as the slope, instead of a single real number.
The slope is DeltaY / DeltaX, so all you have to do is keep the pair of numbers separate. You just need to take care to divide the pair by their greatest common divisor, normalize the signs so that opposite directions along the same line give the same pair, and handle the case where DeltaX is zero. For example:
pair<int, int> CalcSlope (int x0, int y0, int x1, int y1)
{
    int dx = x1 - x0, dy = y1 - y0;
    int g = GCD(abs(dx), abs(dy));
    dx /= g;
    dy /= g;
    if (dx < 0 || (dx == 0 && dy < 0)) {   // normalize the sign: opposite directions, same line
        dx = -dx;
        dy = -dy;
    }
    return {dy, dx};
}
Now just use the return value of CalcSlope() as your map key.
In case you need it, here's one way to calculate the GCD:
int GCD (int a, int b)
{
    if (0 == b) return a;
    else return GCD(b, a % b);
}
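Hypothetical usage, assuming the points are stored in a vector<pair<int,int>> named points and the centre is (x0, y0):

map<pair<int, int>, int> mp;
int best = 0;
for (const auto& p : points)
{
    if (p.first == x0 && p.second == y0)
        continue;                              // skip the centre itself
    best = max(best, ++mp[CalcSlope(x0, y0, p.first, p.second)]);
}
// best is the maximum number of points on one line through (x0, y0)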
You have already accepted an answer, but I would like to share my approach anyway. This method uses the fact that three points a, b, and c are collinear if and only if
(a.first-c.first)*(b.second-c.second) - (a.second-c.second)*(b.first-c.first) == 0
You can use this property to create a custom comparison object like this
struct comparePoints {
    comparePoints(int x0 = 0, int y0 = 0) : _x0(x0), _y0(y0) {}
    bool operator()(const point& a, const point& b) const {
        return (a.first-_x0)*(b.second-_y0) - (b.first-_x0)*(a.second-_y0) < 0;
    }
private:
    int _x0, _y0;
};
which you can then use as a comparison object of a map according to
comparePoints comparator(x0, y0);
map<pair<int, int>, int, comparePoints> counter(comparator);
You can then add points to this map similar to what you did before:
if (!(x == x0 && y == y0))
counter[{x,y}]++;
By using comparator as the comparison object, two keys a and b in the map are considered equal if !comparator(a, b) && !comparator(b, a), which is true if and only if a, b and {x0,y0} are collinear.
The advantage of this method is that you don't need to divide the coordinates which avoids rounding errors and problems with dividing by zero, or calculate the atan which is a costly operation.
Move everything so that the zero point is at the origin:
(xi, yi) -= (x0, y0)
Then for each point (xi, yi), find the greatest common divisor of xi and yi and divide both numbers by it:
k = GCD(xi, yi)
(xi', yi') = (xi/k, yi/k)
Now points that are on the same ray will map to equal points. If (xi, yi) is on the same ray as (xj, yj) then (xi', yi') = (xj', yj').
Now find the largest set of equal points (don't forget any (xi, yi) = (0, 0)) and you have your answer.
You have a very original approach here!
Nevertheless, a vertical line has an infinite slope, and that is the problem here: dividing by 0 is not allowed.
Alternative built on your solution (slope):
...
int mpvertical = 0;                  // a separate counter for vertical lines
if (xi - x0)
    mp[(yi-y0)/(xi-x0)]++;
else if (yi - y0)
    mpvertical++;
// else the point (xi,yi) is the point (x0,y0): it shall not be counted
This is cumbersome, because you have everything in the map plus this extra counter. But it will be exact. A workaround could be to count the vertical points in mp[std::numeric_limits<double>::max()], but this would be an approximation.
Remark: the case where xi==x0 AND yi==y0 corresponds to your origin point. Such points have to be discarded, as they belong to every line.
Trigonometric alternative (angle):
This uses the general atan2 formula for converting Cartesian coordinates into polar coordinates to get the angle:
if (xi != x0 && yi != y0)   // the other case can be ignored
    mp[ 2*atan((yi-y0) / ((xi-x0) + sqrt(pow(xi-x0,2) + pow(yi-y0,2)))) ]++;
so your key for mp will be an angle between -pi and +pi. No more extra case, but slightly more calculation.
You can hide these details and use the slightly more optimized built-in function:
if (xi != x0 && yi != y0)   // the other case can be ignored
    mp[ atan2(yi-y0, xi-x0) ]++;
You can give this approach a try:
#include <cmath>   // fabs

struct vec2
{
    vec2(float a, float b) : x(a), y(b) {}
    float x, y;
};

// true when a, b and c are (approximately) collinear: the cross product
// of (a-b) and (a-c) is close to zero
bool isColinear(vec2 a, vec2 b, vec2 c)
{
    return fabs((a.y-b.y)*(a.x-c.x) - (a.y-c.y)*(a.x-b.x)) <= 0.000001;
}
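One possible way to use it for the original question, counting for each point how many points lie on the line through the centre and that point (O(N^2); assumes the centre itself is not among the N points):

#include <vector>
#include <algorithm>

int maxOnLineThrough(const std::vector<vec2>& pts, vec2 centre)
{
    int best = 0;
    for (std::size_t i = 0; i < pts.size(); i++)
    {
        int count = 0;
        for (std::size_t j = 0; j < pts.size(); j++)
            if (isColinear(centre, pts[i], pts[j]))
                count++;                         // pts[j] lies on the line through centre and pts[i]
        best = std::max(best, count);
    }
    return best;
}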

Minimal cost sum

I want to calculate the minimum cost sum in a given two-dimensional array.
#include<iostream>
#include<limits.h>
using namespace std;
#define R 3
#define C 3

int Min(int x,int y,int z){
    if(x<y){
        return (x<z)?x:z;
    }
    else
        return (y<z)?y:z;
}

int mincost(int cost[R][C],int m,int n){
    int i,j;
    int t[R][C];
    t[0][0]=cost[0][0];
    for(i=1;i<=m;i++)
        t[i][0]=t[i-1][0]+cost[i][0];
    for(j=1;j<=n;j++)
        t[0][j]=t[0][j-1]+cost[0][j];
    for(i=1;i<=m;i++){
        for(j=1;j<=n;j++){
            t[i][j]=Min(t[i-1][j-1],t[i-1][j],t[i][j-1]+cost[i][j]);
        }
    }
    return t[m][n];
}

int main(){
    int cost[R][C]={{1,2,3},
                    {4,8,2},
                    {1,5,3}};
    cout<<mincost(cost,2,2)<<endl;
    return 0;
}
From the starting point (0,0) to the point (m,n), the minimum for this array should equal 8, but the output shows me 1. Why? What is wrong with this code?
algorithm in words
Given a cost matrix cost[][] and a position (m, n) in cost[][], write a function that returns cost of minimum cost path to reach (m, n) from (0, 0). Each cell of the matrix represents a cost to traverse through that cell. Total cost of a path to reach (m, n) is sum of all the costs on that path (including both source and destination). You can only traverse down, right and diagonally lower cells from a given cell, i.e., from a given cell (i, j), cells (i+1, j), (i, j+1) and (i+1, j+1) can be traversed. You may assume that all costs are positive integers.
I see that this is a dynamic programming solution.
you have a typo here:
t[i][j]=Min(t[i-1][j-1],t[i-1][j],t[i][j-1]+cost[i][j]);
it should be:
t[i][j]=Min(t[i-1][j-1],t[i-1][j],t[i][j-1]) + cost[i][j];
basically it worked like t[i][j] = t[i-1][j-1].
Note: A good way to debug these problems is to print the intermediate matrix (here: t).
Given that t[0][0] = cost[0][0] = 1
then
for(i=1;i<=m;i++){
for(j=1;j<=n;j++){
t[i][j]=Min(t[i-1][j-1],t[i-1][j],t[i][j-1]+cost[i][j]);
}
for i=1, j=1
t[1][1] = Min(t[0][0], t[0][1], t[1][0]+cost[1][1]) = Min(1, ...) = 1
for i=2 j=2
t[2][2] = Min(t[1][1], t[1][2], t[2][1]+cost[2][2]) = Min(1, ...) = 1
Min(t[i-1][j-1],t[i-1][j],t[i][j-1]+cost[i][j])
should probably be
Min(t[i-1][j-1],t[i-1][j],t[i][j-1]) +cost[i][j]
Just guessing, it's hard to read your intent, but it looks like a pathfinding algorithm. Your code wasn't adding the cost when moving diagonally or vertically (only the move from the left included it), and since the cost of the starting cell was one, that was also your result. With the parenthesis fixed, this returns a cost of eight for your sample, which matches your expectation.
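For completeness, the corrected function (only the parenthesis moved; everything else unchanged) returns 8 for the sample matrix:

int mincost(int cost[R][C], int m, int n)
{
    int i, j;
    int t[R][C];
    t[0][0] = cost[0][0];
    for (i = 1; i <= m; i++)              // first column: can only come from above
        t[i][0] = t[i-1][0] + cost[i][0];
    for (j = 1; j <= n; j++)              // first row: can only come from the left
        t[0][j] = t[0][j-1] + cost[0][j];
    for (i = 1; i <= m; i++)
        for (j = 1; j <= n; j++)
            t[i][j] = Min(t[i-1][j-1], t[i-1][j], t[i][j-1]) + cost[i][j];   // fixed line
    return t[m][n];                       // 8 for the sample input
}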