So in my textbook there is this block of code that finds the maximum element in an array using a recursive divide-and-conquer algorithm:
Item max(Item a[], int l, int r)
{
    if (l == r) return a[l];
    int m = (l+r)/2;
    Item u = max(a, l, m);
    Item v = max(a, m+1, r);
    if (u > v) return u; else return v;
}
For one of the questions following the code, it asks me to modify that program so that it finds the maximum element in an array by dividing an array of size N into one part of size k = 2^((lg N) - 1) and another of size N - k (so that the size of at least one of the parts is a power of 2).
So I'm trying to solve that, and I just realized there is no exponentiation operator in C. How am I supposed to compute k = 2^((lg N) - 1) in code?
Both logs and exponentials can be computed using functions in the standard library.
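For example (a sketch; the function name is mine):

#include <cmath>

// largest power of two not exceeding N (assumes N >= 1);
// beware floating-point rounding right at exact powers of two
int floorPowerOfTwo(int N)
{
    return 1 << (int)std::floor(std::log2((double)N));
}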
But a simple solution is to start at 1 and keep doubling until you reach a number bigger than desired. Going back one step then gives you your answer.
(Of course the whole idea is mad - this algorithm is much more complex and slower than the obvious linear scan. But I'll assume there is some method in the madness.)
This finds the maximum k that is a power of 2 and less than the number of array items (so the array is divided into two non-empty parts):
Item max(Item a[], int l, int r)
{
    if (l == r) return a[r];
    int s = r-l, k = 1;
    while (2*k <= s)
        k = 2*k;
    Item u = max(a, l, l+k-1);
    Item v = max(a, l+k, r);
    return u > v ? u : v;
}
However this is not necessarily the best possible choice. For example, you might want to seek the k that is closest to half of the array's length (for 10 items that would be k=4 instead of k=8).
Or you may try to partition the array into two parts both with lengths being powers of 2 (if possible, for 10 items it would be 8+2)...
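A sketch of the closest-to-half variant (the helper name is mine; with integer division, ties lean toward the smaller k):

// power of two closest to n/2, kept strictly between 0 and n (assumes n >= 2)
int closestPowerOfTwo(int n)
{
    int k = 1;
    while (2*k <= n/2)
        k = 2*k;                // k is now the largest power of two <= n/2
    if (2*k < n && (2*k - n/2) < (n/2 - k))
        k = 2*k;                // the next power of two is strictly closer to n/2
    return k;
}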
An array is said to have a majority element if more than half of its elements are the same. Is there a divide-and-conquer algorithm for determining if an array has a majority element?
I normally do the following, but it is not using divide-and-conquer. I do not want to use the Boyer-Moore algorithm.
int find(int arr[], int size) {
    int count = 0, i, mElement = -1;
    // first pass: find a candidate by pairing off unequal elements
    for (i = 0; i < size; i++) {
        if (count == 0) mElement = arr[i];
        if (arr[i] == mElement) count++;
        else count--;
    }
    // second pass: verify the candidate really is a majority
    count = 0;
    for (i = 0; i < size; i++) {
        if (arr[i] == mElement) count++;
    }
    if (count > size / 2) return mElement;
    return -1;
}
I can see at least one divide and conquer method.
Start by finding the median, such as with Hoare's Select algorithm. If one value forms a majority of the elements, the median must have that value, so we've just found the value we're looking for.
From there, find (for example) the 25th and 75th percentile items. Again, if there's a majority element, at least one of those would need to have the same value as the median.
Assuming you haven't ruled out there being a majority element yet, you can continue the search. For example, let's assume the 75th percentile was equal to the median, but the 25th percentile wasn't.
We then continue searching for the item halfway between the 25th percentile and the median, as well as the one halfway between the 75th percentile and the end.
Keep finding the median of each remaining partition that must contain an end of the run of elements equal to the median, until you've either confirmed or denied the existence of a majority element.
As an aside: the Boyer-Moore you probably mean here is the Boyer-Moore majority vote algorithm (which is essentially what your two-pass code above implements), not the Boyer-Moore string search algorithm, which finds a substring in a string.
There is, and it does not require the elements to have an order.
To be formal, we're dealing with multisets (also called bags.) In the following, for a multiset S, let:
v(e;S) be the multiplicity of an element e in S, i.e. the number of times it occurs (the multiplicity is zero if e is not a member of S at all.)
#S be the cardinality of S, i.e. the number of elements in S counting multiplicity.
⊕ be the multiset sum: if S = L ⊕ R then S contains all the elements of L and R counting multiplicity, i.e. v(e;S) = v(e;L) + v(e;R) for any element e. (This also shows that the multiplicity can be calculated by 'divide-and-conquer'.)
[x] be the largest integer less than or equal to x.
The majority element m of S, if it exists, is that element such that 2 v(m;S) > #S.
Let's call L and R a splitting of S if L ⊕ R = S, and an even splitting if |#L - #R| ≤ 1. That is, if n=#S is even, L and R have exactly half the elements of S, and if n is odd, then one has cardinality [n/2] and the other has cardinality [n/2]+1.
For an arbitrary split of S into L and R, two observations:
If neither L nor R has a majority element, then S cannot have one: for any element e, 2 v(e;S) = 2 v(e;L) + 2 v(e;R) ≤ #L + #R = #S.
If one of L and R has a majority element m with multiplicity k, then it is the majority element of S only if it has multiplicity r in the other half, with 2(k+r) > #S.
The algorithm majority(S) below returns either a pair (m,k), indicating that m is the majority element with k occurrences, or none:
If S is empty, return none; if S has just one element m, then return (m,1). Otherwise:
Make an even split of S into two halves L and R.
Let (m,k) = majority(L), if not none:
a. Let k' = k + v(m;R).
b. Return (m,k') if 2 k' > n.
Otherwise let (m,k) = majority(R), if not none:
a. Let k' = k + v(m;L).
b. Return (m,k') if 2 k' > n.
Otherwise return none.
Note that the algorithm is still correct even if the split is not an even one. Splitting evenly though is likely to perform better in practice.
Addendum
Made the terminal case explicit in the algorithm description above. Some sample C++ code:
struct majority_t {
    int m;     // majority element
    size_t k;  // multiplicity of m; zero => no majority element
    constexpr majority_t(): m(0), k(0) {}
    constexpr majority_t(int m_, size_t k_): m(m_), k(k_) {}
    explicit operator bool() const { return k>0; }
};

static constexpr majority_t no_majority;

size_t multiplicity(int x, const int *arr, size_t n) {
    if (n==0) return 0;
    else if (n==1) return arr[0]==x ? 1 : 0;
    size_t r = n/2;
    return multiplicity(x, arr, r) + multiplicity(x, arr+r, n-r);
}

majority_t majority(const int *arr, size_t n) {
    if (n==0) return no_majority;
    else if (n==1) return majority_t(arr[0], 1);
    size_t r = n/2;
    majority_t left = majority(arr, r);
    if (left) {
        left.k += multiplicity(left.m, arr+r, n-r);
        if (left.k > r) return left;
    }
    majority_t right = majority(arr+r, n-r);
    if (right) {
        right.k += multiplicity(right.m, arr, r);
        if (right.k > r) return right;
    }
    return no_majority;
}
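A quick usage sketch (the sample array and main are mine, just for illustration):

#include <iostream>

int main() {
    int a[] = {1, 2, 1, 1, 3, 1, 1};
    majority_t m = majority(a, 7);
    if (m)
        std::cout << m.m << " occurs " << m.k << " times\n"; // prints: 1 occurs 5 times
    else
        std::cout << "no majority element\n";
    return 0;
}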
A simpler divide-and-conquer algorithm works for the case where more than half of the elements are the same and there are n = 2^k elements for some integer k.
FindMost(A, startIndex, endIndex)
{ // input array A
    if (startIndex == endIndex) // base case
        return A[startIndex];
    mid = (startIndex + endIndex - 1)/2;
    x = FindMost(A, startIndex, mid);
    y = FindMost(A, mid + 1, endIndex);
    if (x == null && y == null)
        return null;
    if (x == y)
        return x;
    // the halves disagree: verify each surviving candidate by counting,
    // since a majority of the whole range must be a majority of one half
    if (x != null && Count(A, startIndex, endIndex, x) > (endIndex - startIndex + 1)/2)
        return x;
    if (y != null && Count(A, startIndex, endIndex, y) > (endIndex - startIndex + 1)/2)
        return y;
    return null;
}
This algorithm could be modified so that it works for n that is not a power of 2, but the boundary cases must be handled carefully.
Let's say the array is 1, 2, 1, 1, 3, 1, 4, 1, 6, 1.
If an array of even length contains the same element more than half the time, then there must be a position where two consecutive elements are equal. (For odd length this can fail: in 1, 2, 1, 3, 1 the 1s are a majority but never adjacent, so the first element must also be checked as a candidate.)
In the example above, observe that 1 is repeated more than half the time, and indexes 2 and 3 (indexes start from 0) hold the same element.
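A sketch of that idea with the odd-length case patched (the helper names are mine; the worst case is quadratic, so this only illustrates the observation):

static int count_of(const int a[], int n, int x) {
    int c = 0;
    for (int i = 0; i < n; i++)
        if (a[i] == x) c++;
    return c;
}

// returns the majority element, or -1 if there is none
int majority_by_pairs(const int a[], int n) {
    if (n > 0 && count_of(a, n, a[0]) > n / 2) // covers the alternating case
        return a[0];
    for (int i = 0; i + 1 < n; i++)            // values at adjacent equal positions
        if (a[i] == a[i+1] && count_of(a, n, a[i]) > n / 2)
            return a[i];
    return -1;
}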
I've been working on a program that is supposed to test the performance of the quickselect algorithm under different group-size settings. To find the pivot, the algorithm divides the elements into groups of 5, finds the median of each group, and uses the median of those medians as the pivot. I'm having an issue with the smallestKth part. The errors I'm getting are that n is not a constant, so the array cannot be allocated, and that this causes median to have an unknown size. What should I do to correct this?
int smallestKth(int ray[], int l, int r, int k)
{
    if (k > 0 && k <= r - l + 1)
    {
        int n = r-l+1;
        int i, median[(n+4)/5];
        for (i=0; i<n/5; i++)
            median[i] = medianFind(ray+l+i*5, 5);
        if (i*5 < n)
        {
            median[i] = medianFind(ray+l+i*5, n%5);
            i++;
        }
        int medOfMed = (i == 1) ? median[i-1] :
                       smallestKth(median, 0, i-1, i/2);
        int pivotPosition = part(ray, l, r, medOfMed);
        if (pivotPosition-l == k-1)
            return ray[pivotPosition];
        if (pivotPosition-l > k-1)
            return smallestKth(ray, l, pivotPosition-1, k);
        return smallestKth(ray, pivotPosition+1, r, k-pivotPosition+l-1);
    }
    return INT_MAX;
}
int median[(n+4)/5]; is a nonstandard declaration supported by some compilers as an extension. Rather than using a Variable Length Array (VLA), you should use std::vector.
std::vector<int> median((n+4)/5);
You don't need to make a new array to hold the medians. Just use a fifth of the original array.
One way to do that is to stride the array; represent the array as a starting pointer, a number of elements, and a stride, which is the distance between two consecutive elements. For example, once you've finished putting the median of each group of five in the right place in the array [start, n, stride], you can recurse on the array [start+2, (n+2)/5, 5*stride].
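A sketch of that representation (the struct and names are mine):

struct StridedArray {
    int *start;
    int n;        // number of elements in this view
    int stride;   // distance between consecutive elements
    int& at(int i) const { return start[i * stride]; }
};

// once the median of each group of five sits in the middle slot of its group,
// the medians themselves form another strided view of the same storage
StridedArray medians(StridedArray a) {
    return { a.start + 2 * a.stride, (a.n + 2) / 5, a.stride * 5 };
}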
This was solved by allocating the median array dynamically instead:
int n = right-left+1;
int *median = new int[(n+4)/5];
(Just remember to delete[] median before returning.)
I have a question about this problem.
Question
You are given a sequence a[0], a[1], ..., a[N-1], and a set of ranges (l[i], r[i]) (0 <= i <= Q - 1).
Calculate mex(a[l[i]], a[l[i] + 1],..., a[r[i] - 1]) for all (l[i], r[i]).
The function mex returns the minimum excluded value, i.e. the smallest non-negative integer not present.
Wikipedia Page of mex function
You can assume that N <= 100000, Q <= 100000, and a[i] <= 100000.
An O(Q * (r[i] - l[i]) * log(r[i] - l[i])) algorithm (one set per query, as below) is obvious, but it is not efficient.
My Current Approach
#include <bits/stdc++.h>
using namespace std;

int N, Q, a[100009], l, r;

int main() {
    cin >> N >> Q;
    for(int i = 0; i < N; i++) cin >> a[i];
    for(int i = 0; i < Q; i++) {
        cin >> l >> r;
        set<int> s;
        for(int j = l; j < r; j++) s.insert(a[j]);
        int ret = 0;
        while(s.count(ret)) ret++;
        cout << ret << endl;
    }
    return 0;
}
Please tell me how to solve this.
EDIT: O(N^2) is too slow. Please suggest a faster algorithm.
Here's an O((Q + N) log N) solution:
Let's iterate over all positions in the array from left to right and store the last occurrence of each value in a segment tree (the segment tree should store the minimum in each node).
After adding the i-th number, we can answer all queries with the right border equal to i.
The answer is the smallest value x such that last[x] < l. We can find it by descending the segment tree from the root: if the minimum in the left child is smaller than l, we go there; otherwise, we go to the right child.
That's it.
Here is some pseudocode:
tree = new SegmentTree() // A minimum segment tree with -1 in each position
for i = 0 .. n - 1
    tree.put(a[i], i)
    for all queries with r = i
        ans for this query = tree.findFirstSmaller(l)
The find smaller function goes like this:
int findFirstSmaller(node, value)
    if node.isLeaf()
        return node.position()
    if node.leftChild.minimum < value
        return findFirstSmaller(node.leftChild, value)
    return findFirstSmaller(node.rightChild, value)
This solution is rather easy to code (all you need is a point update and the findFirstSmaller function shown above), and I'm sure it's fast enough for the given constraints.
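A C++ sketch of this answer (my code; it processes the queries offline, assumes 0 <= l < r <= N as in the question's code, and caps values at N since the mex of at most N numbers never exceeds N):

#include <bits/stdc++.h>
using namespace std;

int sz;          // number of leaves (N + 1, for values 0..N)
vector<int> mn;  // mn[v] = minimum last-occurrence index over the node's value range, -1 if absent

void put(int v, int lo, int hi, int value, int pos) {
    if (lo == hi) { mn[v] = pos; return; }
    int mid = (lo + hi) / 2;
    if (value <= mid) put(2*v, lo, mid, value, pos);
    else put(2*v+1, mid+1, hi, value, pos);
    mn[v] = min(mn[2*v], mn[2*v+1]);
}

// smallest value whose last occurrence is < bound, i.e. the mex of a[bound..i]
int findFirstSmaller(int v, int lo, int hi, int bound) {
    if (lo == hi) return lo;
    int mid = (lo + hi) / 2;
    if (mn[2*v] < bound) return findFirstSmaller(2*v, lo, mid, bound);
    return findFirstSmaller(2*v+1, mid+1, hi, bound);
}

int main() {
    int N, Q;
    cin >> N >> Q;
    vector<int> a(N);
    for (auto &x : a) cin >> x;
    vector<vector<pair<int,int>>> byRight(N); // (l, query index) grouped by r - 1
    vector<int> ql(Q), qr(Q), ans(Q);
    for (int i = 0; i < Q; i++) {
        cin >> ql[i] >> qr[i];                // half-open range [l, r)
        byRight[qr[i] - 1].push_back({ql[i], i});
    }
    sz = N + 1;
    mn.assign(4 * sz, -1);
    for (int i = 0; i < N; i++) {
        if (a[i] < sz) put(1, 0, sz - 1, a[i], i); // values > N can never be the mex
        for (auto [lq, qi] : byRight[i])
            ans[qi] = findFirstSmaller(1, 0, sz - 1, lq);
    }
    for (int i = 0; i < Q; i++) cout << ans[i] << "\n";
}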
Let's process both our queries and our elements in a left-to-right manner, something like
for (int i = 0; i < N; ++i) {
    // 1. Add a[i] to all internal data structures
    // 2. Calculate answers for all queries q such that r[q] == i
}
Here we have O(N) iterations of this loop and we want to do both update of the data structure and query the answer for suffix of currently processed part in o(N) time.
Let's use an array contains[i][j] which holds 1 if the suffix starting at position i contains the number j, and 0 otherwise. Suppose also that we have calculated prefix sums of each contains[i] separately. In this case we could answer each particular suffix query in O(log N) time using binary search: we just find the first zero in the corresponding contains[l[i]] array, which is exactly the first position where the partial sum equals the index rather than index + 1. Unfortunately, such arrays would take O(N^2) space and need O(N^2) time for each update.
So, we have to optimize. Let's build a 2-dimensional range tree with "sum query" and "assignment" range operations. In such a tree we can query the sum over any sub-rectangle and assign the same value to all elements of any sub-rectangle in O(log^2 N) time, which lets us do each update in O(log^2 N) time and each query in O(log^3 N) time (the binary search does O(log N) steps with one sum query per step), giving a total time complexity of O(N log^2 N + Q log^3 N). The space complexity of O((N + Q) log^2 N) (and the same time for initialization of the arrays) is achieved using lazy initialization.
Update: Let's revise how the query works in range trees with "sum". For a 1-dimensional tree (to not make this answer too long), it's something like this:
class Tree
{
    int l, r;           // begin and end (inclusive) of the interval represented by this vertex
    int sum;            // already calculated sum
    int overriden;      // override value, or the special constant NO_OVERRIDE
    Tree *left, *right; // pointers to children

public:
    int get(int from, int to); // returns sum of the part of this subtree between from and to
};

int Tree::get(int from, int to)
{
    if (from > r || to < l) // no intersection
    {
        return 0;
    }
    if (from <= l && r <= to) // whole subtree lies within the interval
    {
        return sum;
    }
    if (overriden != NO_OVERRIDE) // should push override to children
    {
        left->overriden = right->overriden = overriden;
        left->sum = (left->r - left->l + 1) * overriden;   // child sum = its length * override
        right->sum = (right->r - right->l + 1) * overriden;
        overriden = NO_OVERRIDE;
    }
    return left->get(from, to) + right->get(from, to); // split into 2 queries
}
Given that in our particular case all queries to the tree are prefix-sum queries, from is always equal to 0, so one of the calls to the children always returns a trivial answer (0 or an already computed sum). So, instead of doing O(log N) queries to the 2-dimensional tree in the binary-search algorithm, we can implement an ad-hoc search procedure very similar to this get query. It first gets the value of the left child (which takes O(1) since it's already calculated), then checks whether the node we're looking for is to the left (that sum is less than the number of leaves in the left subtree), and goes left or right based on this information. This further optimizes the query to O(log^2 N) time (since it's one tree operation now), giving a resulting complexity of O((N + Q) log^2 N) in both time and space.
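A 1-dimensional sketch of that descend (the method name is mine and would need a declaration added to the class; it reuses the same push-down as get above):

// returns the position of the first zero in the whole tree,
// assuming 0/1 values and at least one zero in this subtree
int Tree::findFirstZero()
{
    if (left == NULL)                      // leaf
        return l;
    if (overriden != NO_OVERRIDE)          // same push-down as in get()
    {
        left->overriden = right->overriden = overriden;
        left->sum = (left->r - left->l + 1) * overriden;
        right->sum = (right->r - right->l + 1) * overriden;
        overriden = NO_OVERRIDE;
    }
    if (left->sum < left->r - left->l + 1) // left subtree is not all ones
        return left->findFirstZero();
    return right->findFirstZero();
}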
Not sure this solution is fast enough for both Q and N up to 10^5, but it may probably be further optimized.
How do I count the number of ways a positive integer can be written as an ordered sum of 1s and 2s? For example:
5 = 1+1+1+1+1
5 = 1+1+1+2
5 = 1+1+2+1
5 = 1+2+1+1
5 = 2+1+1+1
5 = 1+2+2
5 = 2+2+1
5 = 2+1+2
Can anyone give a hint or pseudocode on how this can be done, please?
I honestly have no clue how to even start.
Also, this looks like an exponential problem; can it be done in linear time?
Thank you.
In the example you have provided, the order of addends is important (see the last two lines of your example). With this in mind, the answer is related to the Fibonacci numbers. Let F(n) be the number of ways n can be written as 1s and 2s. The last addend is either 1 or 2, so F(n) = F(n-1) + F(n-2). These are the initial values:
F(1) = 1 (1 = 1)
F(2) = 2 (2 = 1 + 1, 2 = 2)
This is actually the (n+1)th Fibonacci number. Here's why:
Let's call f(n) the number of ways to represent n. If you have n, then you can represent it as (n-1)+1 or (n-2)+2, so the number of ways to represent n is f(n-1) + f(n-2). This is the same recurrence as the Fibonacci numbers. Furthermore, if n=1 there is 1 way, and if n=2 there are 2 ways. Thus the (n+1)th Fibonacci number is your answer. There are algorithms that compute enormous Fibonacci numbers very quickly.
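For example, a linear-time sketch of that recurrence (the function name is mine):

// number of ordered sums of 1s and 2s: f(1) = 1, f(2) = 2, f(n) = f(n-1) + f(n-2)
long long countWays(int n)
{
    long long prev = 1, cur = 1; // f(0) = 1 (the empty sum), f(1) = 1
    for (int i = 2; i <= n; i++) {
        long long next = prev + cur;
        prev = cur;
        cur = next;
    }
    return cur;
}

countWays(5) returns 8, matching the example at the top.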
Permutations
If we want to know how many possible orderings there are in some set of size n without repetition (i.e., elements selected are removed from the available pool), the factorial of n (or n!) gives the answer:
double factorial(int n)
{
    if (n <= 0)
        return 1;
    else
        return n * factorial(n - 1);
}
Note: This also has an iterative solution and can even be approximated using the gamma function:
std::round(std::tgamma(n + 1)); // where n >= 0
The problem set starts with all 1s. Each time the set changes, two 1s are replaced by one 2. We want to find the number of ways k items (the 2s) can be arranged in a set of size n. We can query the number of possible permutations by computing:
double permutation(int n, int k)
{
    return factorial(n) / factorial(n - k);
}
However, this is not quite the result we want. The problem is, permutations consider ordering, e.g., the sequence 2,2,2 would count as six distinct variations.
Combinations
These are essentially permutations which ignore ordering. Since the order no longer matters, many permutations are redundant. Redundancy per permutation can be found by computing k!. Dividing the number of permutations by this value gives the number of combinations:
Note: This is known as the binomial coefficient and should be read as "n choose k."
double combination(int n, int k)
{
    return permutation(n, k) / factorial(k);
}
int solve(int n)
{
    double result = 0;
    if (n > 0) {
        // walk one diagonal of Pascal's triangle: C(n,0), C(n-1,1), C(n-2,2), ...
        for ( int k = 0; k <= n; k += 1, n -= 1 )
            result += combination(n, k);
    }
    return std::round(result);
}
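For example, solve(5) computes combination(5,0) + combination(4,1) + combination(3,2) = 1 + 4 + 3 = 8, matching the eight decompositions listed in the question.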
This is a general solution. For example, if the problem were instead to find the number of ways an integer can be represented as a sum of 1s and 3s, we would only need to change the decrement of the set size to 2 (n -= 2) at each iteration.
Fibonacci numbers
The reason the solution using Fibonacci numbers works, has to do with their relation to the binomial coefficients. The binomial coefficients can be arranged to form Pascal's triangle, which when stored as a lower-triangular matrix, can be accessed using n and k as row/column indices to locate the element equal to combination(n,k).
The pattern of n and k as they change over the lifetime of solve, plot a diagonal when viewed as coordinates on a 2-D grid. The result of summing values along a diagonal of Pascal's triangle is a Fibonacci number. If the pattern changes (e.g., when finding sums of 1s and 3s), this will no longer be the case and this solution will fail.
Interestingly, Fibonacci numbers can be computed in constant time using Binet's closed-form formula, which means we can solve this problem in constant time simply by finding the (n+1)th Fibonacci number.
int fibonacci(int n)
{
    const double SQRT_5 = std::sqrt(5.0); // std::sqrt and std::pow are not constexpr
    const double GOLDEN_RATIO = (SQRT_5 + 1.0) / 2.0;
    return (int)std::round(std::pow(GOLDEN_RATIO, n) / SQRT_5);
}
int solve(int n)
{
    if (n > 0)
        return fibonacci(n + 1);
    return 0;
}
As a final note, the numbers generated by both the factorial and fibonacci functions can be extremely large, and a double loses exact integer precision above 2^53. Therefore, an arbitrary-precision arithmetic library may be needed if n will be large.
Here is code using backtracking that solves your problem. At each step, remember the numbers used to get the sum so far (in a vector). First make a copy of the vector, append 1 to the copy, and recurse with n-1; then return and do the same for 2, which is essentially backtracking. Print the vector when n == 0.
#include <stdio.h>
#include <vector>
#include <iostream>
using namespace std;

int n;

void print(vector<int> vect){
    cout << n << " = ";
    for (size_t i = 0; i < vect.size(); ++i){
        if (i > 0)
            cout << "+" << vect[i];
        else
            cout << vect[i];
    }
    cout << endl;
}

void gen(int n, vector<int> vect){
    if (!n)                             // sum reached: print one decomposition
        print(vect);
    else {
        for (int i = 1; i <= 2; ++i){   // try appending 1, then 2
            if (n - i >= 0){
                vector<int> vect2(vect);
                vect2.push_back(i);
                gen(n - i, vect2);
            }
        }
    }
}

int main(){
    scanf("%d", &n);
    vector<int> vect;
    gen(n, vect);
}
This problem can be easily visualized as follows:
Consider a frog standing in front of a stairway. It needs to reach the n-th stair, but it can only jump 1 or 2 steps at a time. In how many ways can it reach the n-th stair?
Let T(n) denote the number of ways to reach the n-th stair.
So, T(1) = 1 and T(2) = 2 (two 1-step jumps or one 2-step jump, so 2 ways).
In order to reach the n-th stair, suppose we already know the number of ways to reach the (n-1)th stair and the (n-2)th stair.
Then one can reach the n-th stair either by a 1-step jump from the (n-1)th stair or by a 2-step jump from the (n-2)th stair...
Hence, T(n) = T(n-1) + T(n-2)
Hope it helps!!!
I'm teaching myself C++ and Eigen in one go, so maybe this is an easy question.

Given n and 0 < m < n, and an n-vector d of floats. To make it concrete:

VectorXf d = VectorXf::Random(n)

I would like an m-vector d_prim of integers that contains the indexes of all the entries of d that are less than or equal to the m-th largest entry of d. Efficiency matters. If there are draws in the data, then filling d_prim with the first m entries of d that are smaller than its m-th largest entry is fine (I really need the indexes of m numbers that are not larger than the m-th largest entry of d).
I've tried (naively):
float hst(VectorXf& d, int& m){
    // VectorXf d = VectorXf::Random(n);
    std::nth_element(d.data().begin(), d.data().begin()+m, d.data().end());
    return d(m);
}
but there are two problems with it:

1. it doesn't work
2. even if it did work, I would still have to pass over (a copy of) d once more to find the indexes of the entries that are smaller than d(m). Is this necessary?
std::nth_element is what you want (contrary to what I said before). It does a partial sort so that the elements in the range [first, mth) are less than those in the range [mth, last). So after running nth_element, all you have to do is copy the first m elements to the new vector.
VectorXf d = VectorXf::Random(n);
VectorXf d_prim(m);
std::nth_element(d.data(), d.data() + m, d.data() + d.size());
std::copy(d.data(), d.data() + m, d_prim.data());
This answer has more info on algorithms to do this.
Putting together David Brown's and Kerrek SB's answers, I got this as "the most efficient proposal":
VectorXi hst(VectorXf& d, int& h){
    VectorXf e = d;                 // keep a copy: nth_element reorders d
    VectorXi f(h);
    int j = 0;
    std::nth_element(d.data(), d.data()+h, d.data()+d.size());
    for (int i = 0; i < d.size(); i++){
        if (e(i) <= d(h)){          // entry no larger than the partition value
            f(j) = i;               // record its original index
            j++;
            if (j == h) break;
        }
    }
    return f;
}
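A usage sketch (my example values; note that hst reorders d in place, which is why the copy e is made first):

int n = 20, h = 5;
VectorXf d = VectorXf::Random(n);
VectorXi idx = hst(d, h); // indexes of h entries of the original d no larger than the value nth_element puts at position h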