Divide and Conquer Method for peakFinder

The Divide and Conquer method for finding a peak works roughly like this:

    find_peak(a, low, high):
        mid = (low + high) / 2
        if a[mid-1] <= a[mid] >= a[mid+1]: return mid   // this is a peak
        if a[mid] < a[mid-1]:
            return find_peak(a, low, mid-1)             // a peak must exist in a[low..mid-1]
        if a[mid] < a[mid+1]:
            return find_peak(a, mid+1, high)            // a peak must exist in a[mid+1..high]
So my question is: if we use this algorithm, don't we lose the other half of the array, where a peak may actually exist?
Or are we assuming that the peak we find is one peak and the one in the other half is another, so there might be two peaks in a single array?

I'll use this as the definition of a peak: "An array element is a peak if it is NOT smaller than its neighbors."
This array has 2 peak elements:
[10, 20, 15, 2, 23, 90, 67]
20 and 90
The algorithm posted above would only return one value. Not all peak values, not even the biggest peak value - it simply finds some peak value.
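To make this concrete, here is a minimal C++ sketch of the divide-and-conquer idea above (the function name findPeak and the handling of the array boundaries, where a missing neighbor is treated as smaller, are my own choices, not part of the original pseudocode):

    #include <iostream>
    #include <vector>

    // Returns the index of some peak: an element not smaller than its neighbors.
    // A neighbor that falls outside the array is treated as smaller.
    int findPeak(const std::vector<int>& a, int lo, int hi)
    {
        int mid = lo + (hi - lo) / 2;
        bool leftOk  = (mid == 0)                 || a[mid - 1] <= a[mid];
        bool rightOk = (mid == (int)a.size() - 1) || a[mid] >= a[mid + 1];
        if (leftOk && rightOk) return mid;              // a[mid] is a peak
        if (!leftOk) return findPeak(a, lo, mid - 1);   // a peak must exist in a[lo..mid-1]
        return findPeak(a, mid + 1, hi);                // a peak must exist in a[mid+1..hi]
    }

    int main()
    {
        std::vector<int> a = {10, 20, 15, 2, 23, 90, 67};
        std::cout << findPeak(a, 0, (int)a.size() - 1) << "\n"; // prints 1, the index of the peak 20
    }

It returns index 1 here, but a different split could just as well return index 5 (the peak 90); as the answer says, it finds some peak, not all of them.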


Pick a matrix cell according to its probability

I have a 2D matrix of positive real values, stored as follow:
vector<vector<double>> matrix;
Each cell can have a value greater than or equal to 0, and this value represents the probability of the cell being chosen. For example, a cell with a value of 3 has three times the probability of being chosen compared to a cell with value 1.
I need to select N cells of the matrix (0 <= N <= total number of cells) randomly, but according to their probability to be selected.
How can I do that?
The algorithm should be as fast as possible.
I describe two methods, A and B.
A works in time approximately N * number of cells, and uses space O(log number of cells). It is good when N is small.
B works in time approximately (number of cells + N) * O(log number of cells), and uses space O(number of cells). So it is good when N is large (or even 'medium'), but it uses a lot more memory; in practice it might be slower in some regimes for that reason.
Method A:
The first thing you need to do is normalize the entries. (It's not clear to me whether you assume they are normalized or not.) That means: sum all the entries and divide each entry by the sum. (This part is potentially slow, so it's better if you can assume or require that it has already happened.)
Then you sample like this:
Choose a random [i,j] entry of the matrix (by choosing i,j each uniformly randomly from the range of integers 0 to n-1).
Choose a uniformly random real number p in the range [0, 1].
Check if matrix[i][j] > p. If so, return the pair [i][j]. If not, go back to step 1.
Why does this work? The probability that we end at step 3 with any particular output is equal to the probability that [i][j] was selected (which is the same for every entry), times the probability that the number p was small enough. This is proportional to the value matrix[i][j], so the sampling chooses each entry with the correct proportions.

It's also possible that at step 3 we go back to the start -- does that bias things? Basically, no. Suppose we arbitrarily choose a number k and consider the distribution of the algorithm conditioned on stopping after exactly k rounds. No matter what value k we choose, that conditional distribution has to be exactly right by the above argument: once we eliminate the case that p was too large, the remaining possibilities all keep their correct proportions. Since the distribution is correct for each value of k that we might condition on, and the overall distribution (not conditioned on k) is an average of the distributions for each value of k, the overall distribution is correct as well.
If you want to analyze the number of rounds typically needed in a rigorous way, you can do it by analyzing the probability that we actually stop at step 3 in any particular round. Since the rounds are independent, this probability is the same for every round, which means the number of rounds is geometrically distributed. Its tail decays exponentially, so the running time is well concentrated, and we can determine the mean from that stopping probability.
The probability that we stop at step 3 can be determined by considering the conditional probability that we stop at step 3, given that we chose any particular entry [i][j]. By the law of total probability, you get that
Pr[ stop at step 3 ] = sum_{i,j} ( 1/(n^2) * Matrix[i,j] )
Since we assumed the matrix is normalized, this sum reduces to just 1/n^2. So, the expected number of rounds is about n^2 (that is, n^2 up to a constant factor) no matter what the entries in the matrix are. You can't hope to do a lot better than that I think -- that's about the same amount of time it takes to just read all the entries of the matrix, and it's hard to sample from a distribution that you cannot even read all of.
Note: What I described is a way to correctly sample a single element -- to get N elements from one matrix, you can just repeat it N times.
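For concreteness, here is a minimal C++ sketch of method A (a sketch under my own assumptions: the matrix is square, already normalized so every entry is at most 1, and the function name sample_one is mine):

    #include <random>
    #include <utility>
    #include <vector>

    // Rejection sampling (method A): pick a uniformly random cell, accept it with
    // probability equal to its normalized value, otherwise retry.
    std::pair<int, int> sample_one(const std::vector<std::vector<double>>& m, std::mt19937& gen)
    {
        int n = (int)m.size();
        std::uniform_int_distribution<int> cell(0, n - 1);
        std::uniform_real_distribution<double> real(0.0, 1.0);
        while (true) {
            int i = cell(gen), j = cell(gen);   // step 1: uniform random [i,j]
            double p = real(gen);               // step 2: uniform random p in [0,1)
            if (m[i][j] > p) return {i, j};     // step 3: accept, otherwise go back to step 1
        }
    }

Calling sample_one N times gives the N samples described above.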
Method B:
Basically you just want to compute a cumulative histogram and invert it when sampling, so that you know you get exactly the right distribution. Computing the histogram is expensive, but once you have it, getting samples is cheap and easy.
In C++ it might look like this:
    #include <cstdlib>   // rand, RAND_MAX
    #include <map>
    #include <utility>
    #include <vector>

    // Make histogram: map each running (cumulative) weight to its cell coordinates
    typedef unsigned int uint;
    typedef std::pair<uint, uint> upair;
    typedef std::map<double, upair> histogram_type;

    histogram_type histogram;
    double cumulative = 0.0;
    for (uint i = 0; i < Matrix.size(); ++i) {
        for (uint j = 0; j < Matrix[i].size(); ++j) {
            if (Matrix[i][j] <= 0.0) continue;   // skip zero-weight cells so they can never be drawn
            cumulative += Matrix[i][j];
            histogram[cumulative] = std::make_pair(i, j);
        }
    }

    std::vector<upair> result;
    for (uint k = 0; k < N; ++k) {
        // Do a sample (this should essentially never loop; if lower_bound finds nothing you
        // could also reasonably assert false, since it means the random draw is broken)
        while (true) {
            // For best results use std::mt19937 or boost::mt19937 to draw a real in [0,1] here.
            double p = cumulative * (rand() / (double)RAND_MAX);
            histogram_type::iterator it = histogram.lower_bound(p);
            if (it != histogram.end()) {
                result.push_back(it->second);
                break;
            }
        }
    }
    return result;
Here the time to make the histogram is something like (number of cells) * O(log number of cells), since inserting into the map takes O(log n) time. You need an ordered data structure in order to get cheap lookups, N * O(log number of cells), later when you do the repeated sampling. Possibly you could choose a more specialized data structure to go faster, but I think there's only limited room for improvement.
Edit: As #Bob__ points out in the comments, in method (B) as written there is potentially going to be some error due to floating-point round-off if the matrices are quite large, even using type double, at this line:
cumulative += Matrix[i][j];
The problem is that if cumulative becomes much larger than Matrix[i][j], beyond what the floating-point precision can handle, then each time this statement is executed a small rounding error may occur, and these errors can accumulate into a significant inaccuracy.
As he suggests, if that happens, the most straightforward way to fix it is to sort the values Matrix[i][j] first, so the small ones are added before the running sum gets large. You could even do this in the general implementation to be safe -- sorting them isn't going to take more time asymptotically than what you already spend anyway.
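A sketch of that fix (my own code, not from the comment thread): collect the entries together with their coordinates, sort them by value, and accumulate in ascending order before building the map.

    #include <algorithm>
    #include <map>
    #include <tuple>
    #include <utility>
    #include <vector>

    typedef unsigned int uint;
    typedef std::pair<uint, uint> upair;

    // Build the cumulative histogram after sorting the weights in ascending order,
    // so small values are added before the running sum gets large.
    std::map<double, upair> make_histogram_sorted(const std::vector<std::vector<double>>& Matrix)
    {
        std::vector<std::tuple<double, uint, uint>> entries;
        for (uint i = 0; i < Matrix.size(); ++i)
            for (uint j = 0; j < Matrix[i].size(); ++j)
                if (Matrix[i][j] > 0.0)
                    entries.push_back(std::make_tuple(Matrix[i][j], i, j));
        std::sort(entries.begin(), entries.end());   // ascending by value

        std::map<double, upair> histogram;
        double cumulative = 0.0;
        for (const auto& e : entries) {
            cumulative += std::get<0>(e);
            histogram[cumulative] = std::make_pair(std::get<1>(e), std::get<2>(e));
        }
        return histogram;
    }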

How to check if a sequence of numbers has a increasing/decreasing trend in C++

What's the best way to check if a sequence of numbers has an increasing or decreasing trend?
I know that I could pick the first and last values of the sequence and check their difference, but I'd like a somewhat more robust check. This means that I want to be able to tolerate a minority of increasing values within a mostly decreasing sequence, and vice versa.
More specifically, the numbers are stored as
vector<int> mySequence;
A few more details about the number sequences that I am dealing with:
All the numbers within the sequence have the same order of magnitude. This means that no sequence like the following can appear: [45 38 320 22 12 6].
By descending trend I mean that most or all of the numbers within the sequence are less than the previous one. (The opposite applies for an ascending trend.) As a consequence, the following sequence is to be considered descending: [45 42 38 32 28 34 26 20 12 8 48]
I would accumulate the number of increases vs number of decreases, which should give you an idea of whether there's an overall trend to increase or decrease.
You probably could look into trend estimation and some type of regression like linear regression.
It depends of course on your specific application, but in general it sounds like a fitting problem.
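As a small illustration of the regression idea (my own sketch, with my own function name trendSlope), you can compute the least-squares slope of the values against their index and look at its sign:

    #include <vector>

    // Least-squares slope of the sequence values versus their index i.
    // A clearly positive slope suggests an increasing trend, a negative one a decreasing trend.
    double trendSlope(const std::vector<int>& seq)
    {
        const int n = (int)seq.size();
        if (n < 2) return 0.0;                    // no trend for fewer than two points
        double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
        for (int i = 0; i < n; ++i) {
            sumX  += i;
            sumY  += seq[i];
            sumXY += (double)i * seq[i];
            sumXX += (double)i * i;
        }
        return (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
    }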
I think you can simply calculate the median of your sequence and check if it is greater than the first value.
This is ONE way, not THE way.
Another way, still based on a running average, is to count the number of ascending and descending values in the sequence:
    int trend = 0;
    int avg = mySequence[0];
    int size = mySequence.size();
    for (int i = 0; i < size - 1; ++i) {
        if (i > 0) {
            avg = (avg + mySequence[i]) / 2;   // crude running average of the values seen so far
        }
        if (mySequence[i+1] - avg > 0)
            ++trend;
        else
            --trend;
    }
One possibility would be to count the number of ascending and descending values in the sequence:
    int trend = 0;
    for (int i = 0; i < (int)mySequence.size() - 1; ++i)
    {
        int diff = mySequence[i+1] - mySequence[i];
        if (diff > 0)
        {
            trend++;
        }
        else if (diff < 0)
        {
            trend--;
        }
    }
The example sequence you give would end with trend equal to -6.
I would most probably try to split the sequence into multiple segments, since, as you said, the values do not differ dramatically - see piecewise regression -
and then interpret the segments according to your business needs.
You will need a vector for storing the segments, each segment having a start/end index, some sort of median value, etc. - see also where to split a piecewise regression.
I suggest using methods from mathematical analysis (e.g. integral and differential calculus) applied to discrete integer sequences.
One way is then to compute rolling averages and see if those averages increase or decrease. Natural and easy ;)
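For example, a minimal sketch of the rolling-average idea (the window size of 3 and the function name smoothedTrend are my own choices): smooth the sequence with a moving average, then count how often consecutive averages go up versus down.

    #include <vector>

    // Smooth the sequence with a simple moving average of the given window,
    // then count increases minus decreases between consecutive averages.
    int smoothedTrend(const std::vector<int>& seq, int window = 3)
    {
        std::vector<double> avg;
        for (int i = 0; i + window <= (int)seq.size(); ++i) {
            double s = 0;
            for (int k = 0; k < window; ++k) s += seq[i + k];
            avg.push_back(s / window);
        }
        int trend = 0;
        for (std::size_t i = 0; i + 1 < avg.size(); ++i) {
            if (avg[i+1] > avg[i]) ++trend;
            else if (avg[i+1] < avg[i]) --trend;
        }
        return trend;   // positive => mostly increasing, negative => mostly decreasing
    }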

Fastest way to find median in dynamically growing range

Can anyone suggest any methods, or link to implementations, of fast median finding for dynamic ranges in C++? For example, suppose that over the iterations of my program the range grows, and I want to find the median at each step.
Range
4
3,4
8,3,4
2,8,3,4
7,2,8,3,4
So the above input would ultimately produce 5 median values, one for each line.
The best you can get, without also keeping a sorted copy of your array, is to re-use the old median and update it with a linear-time search for the next-bigger value. This might sound simple; however, there is a problem we have to solve.
Consider the following list (sorted for easier understanding, but you keep them in an arbitrary order):
1, 2, 3, 3, 3, 4, 5
// *
So here, the median is 3 (the middle element, since the list is sorted). Now if you add a number which is greater than the median, this potentially "moves" the median to the right by one half index. I see two problems: How can we advance by one half index? (For an even number of elements, the median is the mean of the two middle values.) And how do we know at which 3 the median was, when we only know that the median was 3?
This can be solved by storing not only the current median but also the position of the median within the numbers of same value, here it has an "index offset" of 1, since it's the second 3. Adding a number greater than or equal to 3 to the list changes the index offset to 1.5. Adding a number less than 3 changes it to 0.5.
When this offset becomes less than zero, the median changes. It also has to change if it goes beyond the count of equal numbers (minus 1), in this case 2, meaning the new median is greater than the last equal number. In both cases, you have to search for the next smaller / next greater number and update the median value. To always know the upper limit for the index offset (in this case 2), you also have to keep track of the count of equal numbers.
This should give you a rough idea of how to implement median updating in linear time.
I think you can use a min-max-median heap. Each time the array is updated, you only need O(log n) time to find the new median value. In a min-max-median heap, the root is the median value, the left subtree is a min-max heap, and the right subtree is a max-min heap. Please refer to the paper "Min-Max Heaps and Generalized Priority Queues" for the details.
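The min-max-median heap is one option; for illustration, here is a sketch of a different but commonly used structure (my own code, not from the paper): keep the lower half of the numbers in a max-heap and the upper half in a min-heap, so inserting costs O(log n) and reading the median costs O(1).

    #include <functional>
    #include <queue>
    #include <vector>

    // Running median with two heaps: lower half in a max-heap, upper half in a min-heap.
    class RunningMedian {
        std::priority_queue<int> lower;                                      // max-heap
        std::priority_queue<int, std::vector<int>, std::greater<int>> upper; // min-heap
    public:
        void insert(int x) {
            if (lower.empty() || x <= lower.top()) lower.push(x);
            else upper.push(x);
            // Rebalance so that lower holds the same number of elements as upper, or one more.
            if (lower.size() > upper.size() + 1) { upper.push(lower.top()); lower.pop(); }
            else if (upper.size() > lower.size()) { lower.push(upper.top()); upper.pop(); }
        }
        double median() const {   // requires at least one inserted value
            if (lower.size() == upper.size()) return (lower.top() + upper.top()) / 2.0;
            return lower.top();
        }
    };

For the question's growing range (4; 3,4; 8,3,4; 2,8,3,4; 7,2,8,3,4) this yields the medians 4, 3.5, 4, 3.5, 4.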
Find some code below; I have reworked this to give the necessary output.
    private void button1_Click(object sender, EventArgs e)
    {
        string range = "7,2,8,3,4";
        decimal median = FindMedian(range);
        MessageBox.Show(median.ToString());
    }

    public decimal FindMedian(string source)
    {
        // Create a copy of the input, and sort the copy
        int[] temp = source.Split(',').Select(m => Convert.ToInt32(m)).ToArray();
        Array.Sort(temp);

        int count = temp.Length;
        if (count == 0)
        {
            throw new InvalidOperationException("Empty collection");
        }
        else if (count % 2 == 0)
        {
            // count is even, average two middle elements
            int a = temp[count / 2 - 1];
            int b = temp[count / 2];
            return (a + b) / 2m;
        }
        else
        {
            // count is odd, return the middle element
            return temp[count / 2];
        }
    }

How can I reserve memory for a very large sieve?

I want to generate primes by sieving up to 100,000,000, but declaring a bool array of that size crashes my program.
This is my code:
    long long i, j, n;
    bool prime[100000000+1];
    prime[1] = prime[0] = false;
    for (i = 2; i <= 100000000; i++) {
        prime[i] = true;
    }
    for (i = 2; i <= 100000000; i++) {
        if (prime[i] == false) {
            continue;
        }
        for (j = i*2; j <= 100000000; j += i) {
            prime[j] = false;
        }
    }
How can I solve this problem?
The size of the array prime is about 100 MB, and declaring such a big array on the stack is not allowed. Place the array at global scope (so it goes into static storage), or alternatively allocate it on the heap using new (in C++) or malloc (in C). Don't forget to free the memory afterwards!
Variables can be stored in three different memory areas: static memory, automatic memory, and dynamic memory. Automatic memory (non-static local variables) has a limited size; you exceeded it, and that crashed the program. The alternatives are to mark your array static, which places it in static storage, or to use dynamic memory.
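For example, a minimal sketch of both options (the constant name LIMIT is mine; the size is the one from the question):

    #include <cstddef>

    const std::size_t LIMIT = 100000000;

    // Option 1: static storage. A global (or static) array is not placed on the
    // stack, so its ~100 MB cannot overflow the stack.
    bool prime_global[LIMIT + 1];

    void sieve_on_heap()
    {
        // Option 2: dynamic memory. Allocate on the heap and free it when done.
        bool* prime = new bool[LIMIT + 1]();   // () value-initializes to false
        // ... run the sieve as in the question ...
        delete[] prime;
    }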
Since this is tagged C++...
Use std::vector which is simple to use and uses dynamic memory.
#include <vector>
//...
//...
    long long i, j, n;
    std::vector<bool> prime(100000000+1, true);
    prime[1] = prime[0] = false;
    for (i = 2; i <= 100000000; i++) {
        if (prime[i] == false) {
            continue;
        }
        for (j = i*2; j <= 100000000; j += i) {
            prime[j] = false;
        }
    }
std::vector<bool> uses a bit-packed representation, which means that the vector here will take about one eighth[1] of the memory of a traditional bool array.
std::bitset is similar, but its size is fixed at compile time, and you would have to mark it static to avoid it taking space in automatic memory.
You haven't asked, but the Sieve of Eratosthenes is not the fastest algorithm for calculating a lot of prime numbers. It seems that the Sieve of Atkin is faster and uses less memory.
[1] When your system has 8-bit bytes.
You should not make a single monolithic sieve of that size. Instead, use a segmented Sieve of Eratosthenes to perform the sieving in successive segments. For the first segment, the smallest multiple of each sieving prime that lies within the segment is calculated, then multiples of the sieving primes are marked composite in the normal way; when all the sieving primes have been used, the remaining unmarked numbers in the segment are prime. Then, for the next segment, the smallest multiple of each sieving prime is the multiple that ended the sieving in the prior segment, and so the sieving continues until finished.
Consider the example of sieving from 100 to 200 in segments of 20; the 5 sieving primes are 3, 5, 7, 11 and 13. In the first segment from 100 to 120, the bitarray has 10 slots, with slot 0 corresponding to 101, slot k corresponding to 100 + 2*k + 1, and slot 9 corresponding to 119. The smallest multiple of 3 in the segment is 105, corresponding to slot 2; slots 2+3=5 and 5+3=8 are also multiples of 3. The smallest multiple of 5 is 105 at slot 2, and slot 2+5=7 is also a multiple of 5. The smallest multiple of 7 is 105 at slot 2, and slot 2+7=9 is also a multiple of 7. And so on.
Function primes takes arguments lo, hi and delta; lo and hi must be even, with lo < hi, and lo must be greater than the square root of hi. The segment size is twice delta. Array ps of length m contains the sieving primes less than the square root of hi, with 2 removed since even numbers are ignored, calculated by the normal Sieve of Eratosthenes. Array qs contains the offset into the sieve bitarray of the smallest multiple in the current segment of the corresponding sieving prime. After each segment, lo advances by twice delta, so the number corresponding to index i of the sieve bitarray is lo + 2*i + 1.
    function primes(lo, hi, delta)
        sieve := makeArray(0..delta-1)
        ps := tail(primes(sqrt(hi)))
        m := length(ps)
        qs := makeArray(0..m-1)
        for i from 0 to m-1
            qs[i] := (-1/2 * (lo + ps[i] + 1)) % ps[i]
        while lo < hi
            for i from 0 to delta-1
                sieve[i] := True
            for i from 0 to m-1
                for j from qs[i] to delta step ps[i]
                    sieve[j] := False
                qs[i] := (qs[i] - delta) % ps[i]
            for i from 0 to delta-1
                t := lo + 2*i + 1
                if sieve[i] and t < hi
                    output t
            lo := lo + 2*delta
For the sample given above, this is called as primes(100, 200, 10). In the example given above, qs is initially [2,2,2,10,8], corresponding to smallest multiples 105, 105, 105, 121 and 117, and is reset for the second segment to [1,2,6,0,11], corresponding to smallest multiples 123, 125, 133, 121 and 143.
The value of delta is critical; for speed, you should make delta as large as possible so long as it fits in cache memory. Use your language's library for the bitarray, so that each sieve location takes only a single bit. If you need a simple Sieve of Eratosthenes to calculate the sieving primes, this is my favorite:
    function primes(n)
        sieve := makeArray(2..n, True)
        for p from 2 to n step 1
            if sieve[p]
                output p
                for i from p * p to n step p
                    sieve[i] := False
You can see more algorithms involving prime numbers at my blog.

C/C++ implementation of an algorithm similar to subset sum

The problem is simpler than knapsack (a variant of it with no values and only positive weights). The problem consists of checking whether a number can be written as a sum of numbers from a list, where each number may be used more than once. The function should return true or false.
For example,
112 and a list with { 17, 100, 101 } should return false, 469 with the same list should return true, 35 should return false, 119 should return true, etc...
Edit: subset sum problem would be more accurate for this than knapsack.
This is a special case of the Subset Sum problem, with sets that contain only one negative number (i.e., express 112 and { 17, 100, 101 } as { -112, 17, 100, 101 }). There are a few algorithms on the Wikipedia page, http://en.wikipedia.org/wiki/Subset_sum_problem.
An observation that will help you: if your list is {a, b, c, ...} and the number you want to test is x, then x can be written as such a sum only if either x can be written using just the sublist {b, c, ...}, or x - a can be written using the full list (since a may be used again). This lets you write a very simple recursive algorithm to solve the problem.
edit: here is some code, taking into account the comments below. Not tested so probably buggy; and not necessarily the fastest. But for a small dataset it will get the job done neatly.
    #include <iterator>
    #include <list>

    // Can x be written as a sum of elements from [start, end), where each element
    // may be used any number of times?
    bool is_subset_sum(int x, std::list<int>::const_iterator start, std::list<int>::const_iterator end)
    {
        // for a 1-element list {a} we just need to test whether a divides x
        if (std::next(start) == end) return (x % *start == 0);
        // if x is smaller than *start we cannot subtract it, so only try the rest of the list
        if (x < *start) return is_subset_sum(x, std::next(start), end);
        // the default case; the short-circuiting of || means the process ends as soon as we get a positive
        return is_subset_sum(x, std::next(start), end) || is_subset_sum(x - *start, start, end);
    }
Note that positive results become denser as the queried number becomes larger. For example, all numbers greater than 100^2 can be generated by { 17, 100, 101 }. So the optimal algorithm may depend on whether the queried number is much greater than the set's members. You might look into the coin problem (Frobenius numbers) here.
At the least, you know the result is always false if the greatest common divisor of the set does not divide the queried number, and that can be checked in negligible time.
If the number to reach is not too large, you can probably generate all the reachable numbers from the set that fall in the range [1,N].
Problem: Reach N using the elements in the list L, where N is small enough that a vector of N+1 flags is not a memory concern.
Algorithm (a C++ sketch follows below):
    Generate a boolean vector V of size N+1, with only position 0 marked as reachable
    For each element l in the list L
        For each position v already marked reachable in V
            mark every position v + k*l <= N in V as reachable
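A minimal C++ sketch of that idea (the function name can_reach and the use of std::vector<char> for the flags are my own choices):

    #include <vector>

    // Mark every value in [0, target] that is reachable as a sum of list elements
    // (with repetition), then answer the query with a single lookup.
    bool can_reach(int target, const std::vector<int>& list)
    {
        std::vector<char> reachable(target + 1, false);
        reachable[0] = true;                       // the empty sum
        for (int l : list) {
            if (l <= 0) continue;                  // only positive weights make sense here
            for (int v = l; v <= target; ++v) {
                if (reachable[v - l]) reachable[v] = true;
            }
        }
        return reachable[target];
    }

This takes O(|L| * target) time and O(target) space, and the reachable vector can be kept around to answer many queries up to the same bound with single lookups.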