Finding the efficiency of a search algorithm in C++

I've been told to find the efficiency of this code, and my partner and I have spent about an hour trying to figure out what it really does.
We assume it is a search algorithm, but we can't find a way to make it work without getting into an infinite loop:
int busq(int *v, int x, int b, int a){
    int m1, m2;
    int result;
    m1 = (b+a) / 3;
    m2 = 2*m1;
    if (v[m1] == x)
        result = m1;
    else if (v[m2] == x)
        result = m2;
    else if (x < v[m1])
        result = busq(v, x, b, m1-1);
    else if (x > v[m2])
        result = busq(v, x, m2+1, a);
    else
        result = busq(v, x, m1+1, m2-1);
    return result;
}
That's all we are given: no values for the parameters a, b or x, nor the size or contents of *v (the array).
It's supposed to be possible to solve it like this.
At minimum we want to know what this code does, but if you can tell us the efficiency as well, that would be appreciated. (We use big-O notation, e.g. O(1), O(n^2)...)

It's basically a ternary search. v has to be a sorted array, x is the value searched for, b is the beginning of the range and a is the end (inclusive, judging from the m1-1 and m2+1 recursion bounds).
The function attempts to divide the range into three roughly equal partitions at m1 and m2 (which are both calculated wrongly and only work if you search for the first element) and checks whether x sits at those two positions. If not, it recurses into the partition x has to lie in.
The code can be fixed with
m1=b+(a-b)/3;
m2=b+(a-b)*2/3;
Then the efficiency should be O(log n).
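For reference, a minimal sketch of the corrected function with those partition points; busq_fixed is just an illustrative name, and the b > a base case is an addition to handle the not-found case, which neither version above deals with:
int busq_fixed(int *v, int x, int b, int a){
    // Sketch only: assumes v is sorted ascending and [b, a] is an inclusive range.
    // The -1 "not found" return is an assumption; the original has no such case.
    if (b > a)
        return -1;
    int m1 = b + (a - b) / 3;
    int m2 = b + (a - b) * 2 / 3;
    if (v[m1] == x)
        return m1;
    if (v[m2] == x)
        return m2;
    if (x < v[m1])
        return busq_fixed(v, x, b, m1 - 1);
    if (x > v[m2])
        return busq_fixed(v, x, m2 + 1, a);
    return busq_fixed(v, x, m1 + 1, m2 - 1);
}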

Related

Minimum Coin Change Problem (Top-Down Approach)

I have coded a top-down approach to solve the famous minimum coin change problem, as shown in the code below, but it runs into a segmentation fault when the money is close to 44000. I have three coin denominations: {1, 4, 5}. I don't know what is going on; I suspect I am running out of stack memory, but 44000 seems like a small value. So I tested it on an online IDE, and there it seems to work perfectly. I am running my code on NetBeans 8.2 (on a laptop with 8 GB of RAM). Please help me.
Following is the snippet of my function:
//A top-down approach
int change_tpd(int m, vector<int>& coins, vector<int>& dp)
{
    if(dp[m] != -1)
        return dp[m];
    else if(m == 0)
        dp[m] = 0;
    else
    {
        int x = INT_MAX;
        for(int i = 0; i < coins.size(); ++i)
        {
            if(m - coins[i] >= 0)
                x = min(x, change_tpd(m - coins[i], coins, dp));
        }
        dp[m] = 1 + x;
    }
    return dp[m];
}
Maybe you could reduce the depth of the search tree by doing something like this:
Say that you have a set C = {c1, ..., cN} of N coins sorted in decreasing order of value, with respective multiplicities (coin counts) X = {x1, ..., xN}.
We are trying to minimize Sum_{1<=i<=N} xi subject to Sum_{1<=i<=N} ci*xi = V.
Since we are minimizing the sum of the xi, any solution of the form S = ... + ci*xi + ... + cj*xj + ... with i < j and xj >= ci is dominated by the solution S' = ... + ci*(xi + cj) + ... + cj*(xj - ci) + ... (same total value, fewer coins, because cj < ci).
By extension, a dominant solution satisfies xj <= ci - 1 for every 1 <= i < j <= N, and more restrictively xj <= ci/gcd(ci,cj) - 1 for every 1 <= i < j <= N (does anyone have a reference for this, or for anything else below?).
That gives an upper-bound vector U for X, with xi <= ui for every 1 <= i <= N.
In U, u1 is unbounded and all the other values are bounded; as a result we can easily compute the maximum value Z = Sum_{2<=i<=N} ci*ui that a dominant solution can reach without using coin c1.
By extension, the resolution of any V > Z can be reduced to the resolution of V' = V - ip((V - Z + c1 - 1)/c1)*c1, with ip(r) the integer part of r, and then increasing the result by ip((V - Z + c1 - 1)/c1).
In your instance, C = {5, 4, 1}, the bounds are u2 = 4 and u3 = 3, and Z = 4*4 + 1*3 = 19.
V = 44000 > Z, so V' = 44000 - ip((44000 - 19 + 4)/5)*5 = 44000 - 8797*5 = 15, and the result is incremented by 8797.
This also makes the DP array way smaller.
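As a concrete illustration, here is a minimal sketch of how that reduction could be wired in front of the change_tpd() from the question; the wrapper name change_reduced is hypothetical, and the constants c1 = 5 and Z = 19 are specific to the {1, 4, 5} instance computed above:
#include <vector>
#include <climits>
#include <algorithm>
using namespace std;

int change_tpd(int m, vector<int>& coins, vector<int>& dp); // the function from the question

int change_reduced(int V, vector<int>& coins)
{
    const int c1 = 5, Z = 19;            // largest coin, and max value reachable without it
    int bonus = 0;
    if (V > Z) {
        bonus = (V - Z + c1 - 1) / c1;   // ip((V - Z + c1 - 1) / c1)
        V -= bonus * c1;                 // V is now at most Z
    }
    vector<int> dp(V + 1, -1);           // much smaller DP array, shallow recursion
    return bonus + change_tpd(V, coins, dp);
}
For V = 44000 this ends up calling change_tpd(15, ...) and adding 8797, for a total of 8800 coins.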

Backtracking algorithm gets stuck

I have this problem of a matrix (map) where, starting from the top-left corner, I want to find the lightest path to the bottom-right corner, with the condition that I can only move right, down or diagonally right-down.
This is an example:
(matrix example image)
I need to solve the problem with backtracking, but I can't tell if I'm doing it well.
This code is able to solve matrix sizes up to 10x10, but when I try a 20x20 matrix it gets stuck (or at least that's what I think after hours).
/*
 * i, j -> matrix iterators.
 * n, m -> matrix height and width
 * map -> matrix
 * actualPath, bestPath -> vectors for representing the path later
 * actual -> actual path weight
 * best -> best path weight
 */
int backtracking(int i, int j, const int &n, const int &m,
                 const vector<vector<int>> &map,
                 vector<vector<int>> &actualPath,
                 vector<vector<int>> &bestPath,
                 int best) {
    recursiveCalls++;
    int actual = 0;
    //Bottom-right corner
    if(i == (n-1) && j == (m-1)) {
        return map[i][j];
    }
    //Last row, only right
    else if(i == (n-1)) {
        actual = map[i][j] +
                 backtracking(i, (j+1), n, m, map, actualPath, bestPath, best);
    }
    //Last column, only down
    else if(j == (m-1)) {
        actual = map[i][j] +
                 backtracking((i+1), j, n, m, map, actualPath, bestPath, best);
    }
    else {
        int downRight = backtracking((i+1), (j+1), n, m, map, actualPath, bestPath, best);
        int right = backtracking(i, (j+1), n, m, map, actualPath, bestPath, best);
        int down = backtracking((i+1), j, n, m, map, actualPath, bestPath, best);
        actual = map[i][j] + minimo(downRight, right, down);
    }
    if(actual < best) {
        best = actual;
        bestPath = actualPath;
    }
    return best;
}
Is it possible that it gets stuck because I don't use bounds? Or is it badly implemented?
I don't know what I'm doing wrong. I think I understand the algorithm, but I guess I don't know how to apply it to this problem...
Although backtracking will give you the correct answer here, it is not the fastest solution in this case.
You are doing a lot of duplicate work that is not necessary; straightforward backtracking is not useful here. Let's take a look at an example:
suppose the grid size is 10x10.
One traversal tree of the backtracking starts with (0,0) -> (0,1),
another starts with (0,0) -> (1,0),
and another starts with (0,0) -> (1,1).
When the 1st traversal reaches point (5,5) it will keep exploring all possible ways to get to (9,9). When the 2nd traversal reaches (5,5), it will redo exactly the same work the 1st traversal did from that point, and so will the 3rd traversal. These duplicated steps are where your program exhausts itself and why it takes so long to execute. Your code is not stuck, it is just running for a very long time. You can easily memoize the results to cut the time down.
So if you save the value found the first time you reach a point (i,j) as save[i][j], then when another traversal reaches the same point (i,j) it can stop traversing and reuse save[i][j]. This makes the code a lot faster; a sketch of this idea follows at the end of this answer.
At that point it is more dynamic programming than backtracking, and even a grid of size 10000x10000 will take only a few seconds to give you the result.
This answer only describes how to find the value of the minimum path; if you want the path itself, that is also possible with the same DP solution.
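A minimal sketch of that memoization, assuming save is pre-filled with -1 and using minPath as an illustrative name rather than the original function:
#include <vector>
#include <climits>
#include <algorithm>
using namespace std;

// Minimum weight of a path from (i, j) to (n-1, m-1), moving right, down or right-down.
// save[i][j] == -1 means "not computed yet".
int minPath(int i, int j, int n, int m,
            const vector<vector<int>>& map,
            vector<vector<int>>& save)
{
    if (i == n - 1 && j == m - 1) return map[i][j];
    if (save[i][j] != -1) return save[i][j];        // reuse the value found earlier

    int best = INT_MAX;
    if (i + 1 < n && j + 1 < m) best = min(best, minPath(i + 1, j + 1, n, m, map, save));
    if (j + 1 < m)              best = min(best, minPath(i, j + 1, n, m, map, save));
    if (i + 1 < n)              best = min(best, minPath(i + 1, j, n, m, map, save));

    return save[i][j] = map[i][j] + best;
}
Called as minPath(0, 0, n, m, map, save) with save initialized to n rows of m entries of -1, every cell is computed once, so the work is O(n*m) instead of exponential.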

Sum the odd positioned and the even positioned integers in an array

What is the most elegant way to sum 'each number on an odd position' with 'each number on an even position multiplied by 3'? I must abide by this prototype:
int computeCheckSum(const int* d)
My first try was to use this but my idea was flawed. I can't find a way to tell which element is even this way.
int sum = 0;
for_each(d,
         d + 11,
         [&sum](const int& i){ sum += (i % 2 == 1) ? 3*i : i; }
);
Example:
1 2 3 4 5
1 + 2*3 + 3 + 4*3 + 5 = 27
I can't find a way to tell which element is even this way.
If you insist on using for_each (there's no reason to do that here), then you track the index separately:
int computeCheckSum(const int* d, int count)
{
    int sum = 0;
    int pos = 1;
    std::for_each(d, d + count,
        [&sum, &pos](const int& value) { sum += pos++ % 2 ? value : value * 3; });
    return sum;
}
Note I added a count parameter, so the function can work on arrays of any length. If you're feeling really perverse, you can remove that parameter and go back to hardcoding the length so the function only works on arrays with 12 elements. But if you hope to be good at this some day, doing that should make you feel gross.
These things rarely become very "elegant" in C++ (it seems C++ is asymptotically approaching Perl on the "line noise" index) but since accumulate is a left fold, you can pass the index "along the fold":
int sum = std::accumulate(d,                    // requires <numeric>
                          d + 11,
                          std::make_pair(0, 0), // (index, result)
                          [](std::pair<int, int> r, int x) {
                              r.second += r.first % 2 ? 3 * x : x;
                              r.first++;
                              return r;
                          }).second;
You were right. As Mud said, it was just a terrible function design. This is what I needed.
int computeCheckSum(){
    int sum = 0;
    bool multiplyBy3 = false;
    for (auto i : m_digits){
        sum += multiplyBy3 ? 3*i : i;
        multiplyBy3 = !multiplyBy3;
    }
    return sum;
}
Mud's solution is correct given my flawed design. A simple for loop would probably be an even better solution, as everyone said.
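For completeness, a minimal sketch of that plain loop, keyed on index parity; the count parameter is an assumption, as in Mud's version:
int computeCheckSum(const int* d, int count)
{
    int sum = 0;
    for (int i = 0; i < count; ++i)
        sum += (i % 2 == 0) ? d[i] : 3 * d[i];   // 1st, 3rd, ... as-is; 2nd, 4th, ... times 3
    return sum;
}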

Recursive Divide and Conquer Algorithm Modification

So in my textbook there is this block of code to find the maximum element in an array using a recursive divide-and-conquer algorithm:
Item max(Item a[], int l, int r)
{
    if (l == r) return a[l];
    int m = (l+r)/2;
    Item u = max(a, l, m);
    Item v = max(a, m+1, r);
    if (u > v) return u; else return v;
}
One of the questions following the code asks me to modify the program so that it finds the maximum element in an array by dividing an array of size N into one part of size k = 2^((lg N) - 1) and another of size N - k (so that the size of at least one of the parts is a power of 2).
So I'm trying to solve that, and I just realized I can't write an exponent directly in code. How am I supposed to split the array into a part of size k = 2^((lg N) - 1)?
Both logarithms and exponentials can be computed using functions in the standard library.
But a simpler solution is to start at 1 and keep doubling until you reach a number bigger than desired; going back one step then gives you your answer.
(Of course the whole idea is mad: this algorithm is more complex and slower than the obvious linear scan. But I'll assume there is some method in the madness.)
This finds the maximum k that is a power of 2 and less than the number of array items (so the array part is divided into two non-empty parts):
Item max(Item a[], int l, int r)
{
    if (l == r) return a[r];
    int s = r - l, k = 1;
    while (2*k <= s)
        k = 2*k;
    Item u = max(a, l, l+k-1);
    Item v = max(a, l+k, r);
    return u > v ? u : v;
}
However, this is not necessarily the best possible choice. For example, you might want the k closest to half of the array's length (for 10 items that would be k = 4 instead of 8).
Or you may try to partition the array into two parts whose lengths are both powers of 2 (if possible; for 10 items that would be 8 + 2)...
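As a side note, here is a minimal sketch of the standard-library route mentioned at the top of this answer, reading lg N as rounded up to an integer so that it matches what the doubling loop above computes; the helper name is illustrative:
#include <cmath>

// k = 2^(ceil(log2(N)) - 1): the largest power of two strictly less than N, for N > 1.
int powerOfTwoSplit(int N)
{
    return 1 << (static_cast<int>(std::ceil(std::log2(static_cast<double>(N)))) - 1);
}
// powerOfTwoSplit(10) == 8, powerOfTwoSplit(16) == 8, powerOfTwoSplit(17) == 16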

using std::nth_element in eigen and a related interrogation

I'm teaching myself C++ and Eigen in one go, so maybe this is an easy question.
Given n, 0 < m < n, and an n-vector d of floats, to make it concrete:
VectorXf d = VectorXf::Random(n);
I would like an m-vector d_prim of integers that contains the indexes of the entries of d that are smaller than or equal to the m-th smallest entry of d. Efficiency matters. If there are ties in the data, then filling d_prim with the first m entries of d that are not larger than that entry is fine (I really need the indexes of m numbers that are not larger than the m-th smallest entry of d).
I've tried (naively):
float hst(VectorXf& d, int& m){
    // VectorXf d = VectorXf::Random(n);
    std::nth_element(d.data().begin(), d.data().begin()+m, d.data().end());
    return d(m);
}
but there are two problems with it:
it doesn't work (d.data() is a raw float*, so it has no begin()/end());
even if it did work, I would still have to make a pass over (a copy of) d to find the indices of the entries that are smaller than d(m). Is this necessary?
std::nth_element is what you want (contrary to what I said before). It does a partial sort so that no element in the range [first, mth) is greater than any element in the range [mth, last). So after running nth_element all you have to do is copy the first m elements to the new vector.
VectorXf d = VectorXf::Random(n);
VectorXi d_prim(m);
std::nth_element(d.data(), d.data() + m, d.data() + d.size());
std::copy(d.data(), d.data() + m, d_prim.data());
This answer has more info on algorithms to do this.
Putting together David Brown's and Kerrek SB's answers, I got this as "the most efficient proposal":
VectorXi hst(VectorXf& d, int& h){
    VectorXf e = d;
    VectorXi f(h);
    int j = 0;
    std::nth_element(d.data(), d.data()+h, d.data()+d.size());
    for(int i = 0; i < d.size(); i++){
        if(e(i) <= d(h)){
            f(j) = i;
            j++;
            if(j == h) break;
        }
    }
    return f;
}
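As a possible refinement of the "extra pass over a copy of d" concern from the question, one could instead run nth_element over an array of indices with a comparator that looks into d, so d itself stays untouched and no second scan is needed. A sketch of that idea, with hst_by_index as an illustrative name:
#include <algorithm>
#include <numeric>
#include <vector>
#include <Eigen/Dense>
using Eigen::VectorXf;
using Eigen::VectorXi;

// Indices of h entries of d that are not larger than its h-th smallest entry.
VectorXi hst_by_index(const VectorXf& d, int h)
{
    std::vector<int> idx(d.size());
    std::iota(idx.begin(), idx.end(), 0);                 // 0, 1, ..., n-1
    std::nth_element(idx.begin(), idx.begin() + h, idx.end(),
                     [&d](int a, int b) { return d(a) < d(b); });
    VectorXi f(h);
    std::copy(idx.begin(), idx.begin() + h, f.data());    // indices of the h smallest entries
    return f;
}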