Improve searching through an unsorted list - C++

My code spends 40% of its time searching through unsorted vectors. More specifically, the searching function my_search repeatedly receives a single unsorted vector of length N, where N can take any value between 10 and 100,000. The weights associated with each element have relatively little variance (e.g. [0.8, 0.81, 0.85, 0.78, 0.8, 0.7, 0.84, 0.82, ...]).
The algorithm my_search starts by summing all the weights and then samples, on average, N elements (as many as the length of the vector) with replacement. The algorithm is quite similar to
int sum_of_weight = 0;
for(int i=0; i<num_choices; i++) {
    sum_of_weight += choice_weight[i];
}
int rnd = random(sum_of_weight);
for(int i=0; i<num_choices; i++) {
    if(rnd < choice_weight[i])
        return i;
    rnd -= choice_weight[i];
}
from this post.
I could sort the vector before searching, but that takes time on the order of O(N log N) (depending on the sort algorithm used), and I doubt (but might be wrong, as I haven't tried) that I would gain much time, especially as the weights have little variance.
Another solution would be to store information about how much weight there is before a series of points. For example, while summing the vector, every N/10 elements I could store how much weight has been summed so far. Then I could first compare rnd to these 10 breakpoints and search in only a tenth of the total length of the vector (see the sketch after the questions below).
Would this be a good solution?
Is there a name for the process I described?
How can I estimate what is the right number of breakpoints to store as a function of N?
Is there a better solution?

log(N) Solution
{
    std::vector<double> sums;
    double sum_of_weight = 0;
    for(int i=0; i<num_choices; i++) {
        sum_of_weight += choice_weight[i];
        sums.push_back(sum_of_weight);
    }
    std::vector<double>::iterator high = std::upper_bound(sums.begin(), sums.end(), random(sum_of_weight));
    return std::distance(sums.begin(), high);
}
Essentially the same idea you have for a better way to solve it, but rather than store only a 10th of the elements, store all of them and use binary search to find the index of the one closest to your value.
Analysis
Even though this solution is O(logN), you really have to ask yourself if it's worth it. Is it worth it to have to create an extra vector, thus accumulating extra clock cycles to store things in the vector, the time it takes for vectors to resize, the time it takes to call a function to perform binary search, etc?
As I was writing the above, I realised you can use a deque instead and that will almost get rid of the performance hit from having to resize and copy contents of vectors without affecting the O(1) lookup of vectors.
So I guess the question remains, is it worth it to copy over the elements into another container and then only do an O(logN) search?
Conclusion
TBH, I don't think you've gained much from this optimization. In fact I think you gained an overhead of O(logN).

efficiently mask-out exactly 30% of array with 1M entries

My question's title is similar to that of this link; however, that one wasn't answered to my expectations.
I have an array of integers (1,000,000 entries), and need to mask exactly 30% of the elements.
My approach is to loop over the elements and roll a die for each one. Doing it in an uninterrupted manner is good for cache locality.
As soon as I notice that exactly 300,000 elements have been masked, I need to stop. However, I might reach the end of the array and have only 200,000 elements masked, forcing me to loop a second time, maybe even a third, etc.
What's the most efficient way to ensure I won't have to loop a second time, while not being biased towards picking some elements?
Edit:
I need to preserve the order of elements. For instance, I might have:
[12, 14, 1, 24, 5, 8]
Masking away 30% might give me:
[0, 14, 1, 24, 0, 8]
The result of masking must be the original array, with some elements set to zero.
Just do a Fisher-Yates shuffle but stop after only 300,000 iterations. The last 300,000 elements will be the randomly chosen ones.
std::size_t size = 1000000;
for(std::size_t i = 0; i < 300000; ++i)
{
    std::size_t r = std::rand() % size;
    std::swap(array[r], array[size-1]);
    --size;
}
I'm using std::rand for brevity. Obviously you want to use something better.
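For example, here is a sketch of the same partial Fisher-Yates with a <random> engine instead of std::rand (the function name and the seeding are placeholders):

#include <cstddef>
#include <random>
#include <utility>
#include <vector>

// Same partial Fisher-Yates as above, but using <random>.
// After the loop, the last 300,000 positions hold the randomly chosen elements.
void select_last_300000(std::vector<int>& array)
{
    std::mt19937 gen(std::random_device{}());
    std::size_t size = array.size();   // 1,000,000 in the question
    for (std::size_t i = 0; i < 300000; ++i)
    {
        std::uniform_int_distribution<std::size_t> dist(0, size - 1);
        std::size_t r = dist(gen);
        std::swap(array[r], array[size - 1]);
        --size;
    }
}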
The other way is this:
for(std::size_t i = 0; i < 300000;)
{
    std::size_t r = rand() % 1000000;
    if(array[r] != 0)
    {
        array[r] = 0;
        ++i;
    }
}
This has no bias and does not reorder elements, but it is inferior to Fisher-Yates, especially for high percentages.
When I see a massive list, my mind always goes first to divide-and-conquer.
I won't be writing out a fully-fleshed algorithm here, just a skeleton. You seem like you have enough of a clue to take a decent idea and run with it; I think I only need to point you in the right direction. With that said...
We'd need an RNG that can return a suitably-distributed value for how many masked values could potentially be below a given cut point in the list. I'll use the halfway point of the list for said cut. Some statistician can probably set you up with the right RNG function. (Anyone?) I don't want to assume it's just uniformly random [0..mask_count), but it might be.
Given that, you might do something like this:
// the magic RNG your stats homework will provide
int random_split_sub_count_lo( int count, int sub_count, int split_point );

void mask_random_sublist( int *list, int list_count, int sub_count )
{
    if (list_count > SOME_SMALL_THRESHOLD)
    {
        int list_count_lo = list_count / 2; // arbitrary
        int list_count_hi = list_count - list_count_lo;
        int sub_count_lo = random_split_sub_count_lo( list_count, sub_count, list_count_lo );
        int sub_count_hi = sub_count - sub_count_lo;
        mask_random_sublist( list, list_count_lo, sub_count_lo );
        mask_random_sublist( list + list_count_lo, list_count_hi, sub_count_hi );
    }
    else
    {
        // insert here some simple/obvious/naive implementation that
        // would be ludicrous to use on a massive list due to complexity,
        // but which works great on very small lists. I'm assuming you
        // can do this part yourself.
    }
}
Assuming you can find someone more informed on statistical distributions than I to provide you with a lead on the randomizer you need to split the sublist count, this should give you O(n) performance, with 'n' being the number of masked entries. Also, since the recursion is set up to traverse the actual physical array in constantly-ascending-index order, cache usage should be as optimal as it's gonna get.
Caveat: There may be minor distribution issues due to the discrete nature of the list versus the 30% fraction as you recurse down and down to smaller list sizes. In practice, I suspect this may not matter much, but whatever person this solution is meant for may not be satisfied that the random distribution is truly uniform when viewed under the microscope. YMMV, I guess.
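If it helps, here is one possible shape for the base case left open in the skeleton (a sketch only: it assumes "masking" means setting the chosen elements to zero, reuses the partial Fisher-Yates idea from the answer above on an index array, and keeps std::rand for brevity):

#include <algorithm>
#include <cstdlib>
#include <vector>

// Possible naive base case for mask_random_sublist: pick sub_count distinct
// indices with a partial Fisher-Yates over an index vector, then zero them.
// Fine for tiny lists, wasteful for huge ones.
void mask_naive(int *list, int list_count, int sub_count)
{
    std::vector<int> idx(list_count);
    for (int i = 0; i < list_count; ++i) idx[i] = i;
    for (int i = 0; i < sub_count; ++i)
    {
        int r = i + std::rand() % (list_count - i);  // pick from the unpicked tail
        std::swap(idx[i], idx[r]);
        list[idx[i]] = 0;                            // mask the chosen element
    }
}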
Here's one suggestion. One million bits is only about 125KB, which is not an onerous amount.
So create a bit array with all items initialised to zero. Then randomly select 300,000 of them (accounting for duplicates, of course) and mark those bits as one.
Then you can run through the bit array and, for any that are set to one (or zero, if your idea of masking means you want to process the other 700,000), do whatever action you wish to the corresponding entry in the original array.
If you want to ensure there's no possibility of duplicates when randomly selecting them, just trade off space for time by using a Fisher-Yates shuffle.
Construct a collection of all the indices and, for each of the 700,000 you want removed (or 300,000 if, as mentioned, masking means you want to process the other ones):
Pick one at random from the remaining set.
Copy the final element over the one selected.
Reduce the set size.
This will leave you with a random subset of indices that you can use to process the integers in the main array.
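A sketch of that combination (the function name, the use of <random>, and std::vector<bool> standing in for the bit array are my own choices, not part of the original answer):

#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

// Sketch: select 300,000 distinct indices with a partial Fisher-Yates over an
// index vector, mark them in a bit array, then apply the mask in index order.
void mask_30_percent(std::vector<int>& data)
{
    const std::size_t n = data.size();          // 1,000,000 in the question
    const std::size_t k = n * 3 / 10;           // 300,000 to mask

    std::vector<std::size_t> indices(n);
    std::iota(indices.begin(), indices.end(), 0);

    std::mt19937 gen(std::random_device{}());
    std::vector<bool> masked(n, false);         // the "one million bits"
    for (std::size_t i = 0; i < k; ++i)
    {
        std::uniform_int_distribution<std::size_t> dist(i, n - 1);
        std::swap(indices[i], indices[dist(gen)]);
        masked[indices[i]] = true;
    }

    // single ordered pass over the original array, good for cache locality
    for (std::size_t i = 0; i < n; ++i)
        if (masked[i]) data[i] = 0;
}

The final pass touches the data exactly once in ascending order, which keeps the cache-friendliness the question asks about.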
You want reservoir sampling. Sample code courtesy of Wikipedia:
(*
  S has items to sample, R will contain the result
*)
ReservoirSample(S[1..n], R[1..k])
  // fill the reservoir array
  for i = 1 to k
      R[i] := S[i]
  // replace elements with gradually decreasing probability
  for i = k+1 to n
      j := random(1, i) // important: inclusive range
      if j <= k
          R[j] := S[i]
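Translated to the problem at hand, a sketch in C++ (sampling k = 300,000 indices out of the 1,000,000 and zeroing them; the function name and the use of <random> are mine):

#include <cstddef>
#include <random>
#include <vector>

// Reservoir sampling of k indices out of n, then masking those positions.
void mask_via_reservoir(std::vector<int>& data, std::size_t k)
{
    const std::size_t n = data.size();
    std::vector<std::size_t> reservoir(k);

    // fill the reservoir with the first k indices
    for (std::size_t i = 0; i < k; ++i)
        reservoir[i] = i;

    // replace elements with gradually decreasing probability
    std::mt19937 gen(std::random_device{}());
    for (std::size_t i = k; i < n; ++i)
    {
        std::uniform_int_distribution<std::size_t> dist(0, i); // inclusive range
        std::size_t j = dist(gen);
        if (j < k)
            reservoir[j] = i;
    }

    for (std::size_t idx : reservoir)
        data[idx] = 0;   // mask the sampled positions
}

Usage would be something like mask_via_reservoir(data, 300000);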

Merging K Sorted Arrays/Vectors Complexity

While looking into the problem of merging k sorted contiguous arrays/vectors, and how it differs in implementation from merging k sorted linked lists, I found two relatively easy naive solutions for merging k contiguous arrays and a nice optimized method based on pairwise merging that simulates how mergeSort() works. The two naive solutions I implemented seem to have the same complexity, but in a big randomized test I ran it seems one is far more inefficient than the other.
Naive merging
My naive merging method works as follows. We create an output vector<int> and set it to the first of the k vectors we are given. We then merge in the second vector, then the third, and so on. Since a typical merge() method that takes in two vectors and returns one is asymptotically linear in both space and time in the number of elements of both vectors, the total complexity will be O(n + 2n + 3n + ... + kn) where n is the average number of elements in each list. Since we're adding n + 2n + 3n + ... + kn I believe the total complexity is O(n*k^2). Consider the following code:
vector<int> mergeInefficient(const vector<vector<int> >& multiList) {
    vector<int> finalList = multiList[0];
    for (int j = 1; j < multiList.size(); ++j) {
        finalList = mergeLists(multiList[j], finalList);
    }
    return finalList;
}
Naive selection
My second naive solution works as follows:
/**
* The logic behind this algorithm is fairly simple and inefficient.
* Basically we want to start with the first values of each of the k
* vectors, pick the smallest value and push it to our finalList vector.
* We then need to be looking at the next value of the vector we took the
* value from so we don't keep taking the same value. A vector of vector
* iterators is used to hold our position in each vector. While all iterators
* are not at the .end() of their corresponding vector, we maintain a minValue
* variable initialized to INT_MAX, and a minValueIndex variable and iterate over
* each of the k vector iterators and if the current iterator is not an end position
* we check to see if it is smaller than our minValue. If it is, we update our minValue
* and set our minValue index (this is so we later know which iterator to increment after
* we iterate through all of them). We do a check after our iteration to see if minValue
* still equals INT_MAX. If it does, all iterators are at the .end() position, and we have
* exhausted every vector and can stop iterating over all k of them. Regarding the complexity
* of this method, we are iterating over `k` vectors so long as at least one value has not been
* accounted for. Since there are `nk` values where `n` is the average number of elements in each
* list, the time complexity = O(nk^2) like our other naive method.
*/
vector<int> mergeInefficientV2(const vector<vector<int> >& multiList) {
    vector<int> finalList;
    vector<vector<int>::const_iterator> iterators(multiList.size());
    // Set all iterators to the beginning of their corresponding vectors in multiList
    for (int i = 0; i < multiList.size(); ++i) iterators[i] = multiList[i].begin();
    int minValue, minValueIndex;
    while (1) {
        minValue = INT_MAX;
        for (int i = 0; i < iterators.size(); ++i) {
            if (iterators[i] == multiList[i].end()) continue;
            if (*iterators[i] < minValue) {
                minValue = *iterators[i];
                minValueIndex = i;
            }
        }
        // check before advancing: if every vector is exhausted, stop here so we
        // never increment an iterator that is already at .end()
        if (minValue == INT_MAX) break;
        iterators[minValueIndex]++;
        finalList.push_back(minValue);
    }
    return finalList;
}
Random simulation
Long story short, I built a simple randomized simulation that builds a multidimensional vector<vector<int>>. The multidimensional vector starts with 2 vectors each of size 2, and ends up with 600 vectors each of size 600. Each vector is sorted, and the sizes of the outer container and of each child vector increase by two every iteration. I time how long each algorithm takes like this:
clock_t clock_a_start = clock();
finalList = mergeInefficient(multiList);
clock_t clock_a_stop = clock();
clock_t clock_b_start = clock();
finalList = mergeInefficientV2(multiList);
clock_t clock_b_stop = clock();
I then built the following plot:
My calculations say the two naive solutions (merging and selecting) both have the same time complexity, but the above plot shows them as very different. At first I rationalized this by saying there may be more overhead in one vs. the other, but then realized that the overhead should be a constant factor and should not produce a plot like this. What is the explanation for this? I assume my complexity analysis is wrong?
Even if two algorithms have the same complexity (O(nk^2) in your case) they may end up having enormously different running times depending upon your size of input and the 'constant' factors involved.
For example, if an algorithm runs in n/1000 time and another algorithm runs in 1000n time, they both have the same asymptotic complexity but they shall have very different running times for 'reasonable' choices of n.
Moreover, there are effects caused by caching, compiler optimizations etc that may change the running time significantly.
For your case, although your calculation of complexities seems to be correct, in the first case the actual running time will be roughly (nk^2 + nk)/2, whereas in the second case the running time will be nk^2. Notice that the division by 2 may be significant, because as k increases the nk term becomes negligible.
For a third algorithm, you can modify the naive selection by maintaining a heap of k elements containing the first elements of all the k vectors. Then your selection process will take O(log k) time per element, and hence the complexity reduces to O(nk log k).
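A sketch of that heap-based merge with std::priority_queue (it assumes the same vector<vector<int>> input as the code above; the function name is mine):

#include <cstddef>
#include <functional>
#include <queue>
#include <utility>
#include <vector>
using std::vector;

// k-way merge with a min-heap of (current value, which list it came from).
// Each of the n*k output elements costs O(log k) heap work, so O(nk log k) overall.
vector<int> mergeWithHeap(const vector<vector<int> >& multiList) {
    typedef std::pair<int, std::size_t> Entry;   // (value, list index)
    std::priority_queue<Entry, vector<Entry>, std::greater<Entry> > heap;

    vector<std::size_t> pos(multiList.size(), 0);  // next unread position per list
    for (std::size_t i = 0; i < multiList.size(); ++i)
        if (!multiList[i].empty())
            heap.push(Entry(multiList[i][0], i));

    vector<int> finalList;
    while (!heap.empty()) {
        Entry top = heap.top();
        heap.pop();
        finalList.push_back(top.first);
        std::size_t i = top.second;
        if (++pos[i] < multiList[i].size())
            heap.push(Entry(multiList[i][pos[i]], i));   // refill from the same list
    }
    return finalList;
}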

How do I print out vectors in a different order every time

I'm trying to make two vectors, where vector1 (total1) contains some strings and vector2 (total2) contains some random unique numbers (between 0 and total1.size() - 1).
I want to make a program that prints out total1's strings, but in a different order every turn. I don't want to use iterators or anything like that, because I want to improve my problem solving capacity.
Here is the specific function that crashes the program.
for (unsigned i = 0; i < total1.size();)
{
    v1 = rand() % total1.size();
    for (unsigned s = 0; s < total1.size(); ++s)
    {
        if (v1 == total2[s])
            ;
        else
        {
            total2.push_back(v1);
            ++i;
        }
    }
}
I'm very grateful for any help that I can get!
Can I suggest a change of algorithm? Even if your current one is correctly implemented ("s", in your code, must go from 0 to total2.size(), not total1.size(), and if the element is found you must break and generate a new random number), it has the following drawback: assume vectors of 1,000,000 elements and that you are trying the last random number. You have one probability in 1,000,000 of finding a random number not previously used. That is a very small chance. The last but one number has a probability of 2 in 1,000,000, which is also small. In conclusion, your program will loop and expend lots of CPU resources.
Your best alternative is to follow #NathanOliver's suggestion and look at the function std::shuffle. The manual page shows the implementation algorithm, which is what you are looking for.
Another simple algorithm, with some pros and cons, is:
1. Init total2 with the sequence 0, 1, 2, ..., n where n is the size of total1 minus 1.
2. Choose two random numbers, i1 and i2, in the range [0, n].
3. Swap elements i1 and i2 in total2.
4. Repeat from (2) a fixed number of times R.
This method lets you know a priori the number of steps needed and lets you control the level of "randomness" of the final vector (the bigger R is, the more random). However, it is far from good in its randomness quality.
Another method, better in its probabilistic distribution:
1. Fill a list L with the numbers 0, 1, 2, ..., size of total1 - 1.
2. Choose a random number i between 0 and the size of list L minus 1.
3. Store in total2 the i-th element of list L.
4. Remove this element from L.
5. Repeat from (2) until L is empty.
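A sketch of that second method, using a std::vector as the list L for simplicity and std::rand to match the rest of the code here:

#include <cstdlib>
#include <vector>
using std::vector;

// Build total2 by repeatedly pulling a random remaining index out of L.
vector<int> make_random_order(int n)           // n = total1.size()
{
    vector<int> L(n), total2;
    for (int i = 0; i < n; ++i) L[i] = i;      // L = 0, 1, 2, ..., n-1
    while (!L.empty())
    {
        int i = std::rand() % L.size();        // random position in L
        total2.push_back(L[i]);                // store the i-th element of L
        L.erase(L.begin() + i);                // remove it from L
    }
    return total2;
}

Note that erasing from the middle of a vector is O(n) per step, so a real implementation would prefer swapping the chosen element with the last one before popping it, or simply using std::shuffle as mentioned above.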
If you just want to shuffle vector<string> total1, you can do this without using the helper vector<int> total2. Here is an implementation based on the Fisher-Yates shuffle.
int n = total1.size();
for(int i=n-1; i>=1; i--) {
    int j=rand()%(i+1);
    swap(total1[j], total1[i]); // your prof might not allow use of swap:)
}
If you must use vector<int> total2 then shuffle it using above algorithm. Next you can use it to create a new vector<string> result from total1 where result[i]=total1[total2[i]].
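For instance, a small fragment (assuming total1 and a shuffled total2 of the same size are already in scope):

vector<string> result(total1.size());
for (int i = 0; i < (int)total2.size(); ++i)
    result[i] = total1[total2[i]];   // reorder the strings according to total2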

Big O calculation

int maxValue = m[0][0];
for (int i = 0; i < N; i++)
{
    for (int j = 0; j < N; j++)
    {
        if ( m[i][j] > maxValue )
        {
            maxValue = m[i][j];
        }
    }
}
cout << maxValue << endl;
int sum = 0;
for (int i = 0; i < N; i++)
{
    for (int j = 0; j < N; j++)
    {
        sum = sum + m[i][j];
    }
}
cout << sum << endl;
int sum = 0;
for (int i = 0; i < N; i++)
{
for (int j = 0; j < N; j++)
{
sum = sum + m[i][j];
}
}
cout<< sum <<endl;
For the above code I got O(N^2) as the execution time growth.
The way I got it was:
MAX[ O(1), O(N^2), O(1), O(1), O(N^2), O(1) ]
Both O(N^2) terms are for the for loops. Is this calculation correct?
If I change this code to:
int maxValue = m[0][0];
int sum = 0;
for (int i = 0; i < N; i++)
{
    for (int j = 0; j < N; j++)
    {
        if ( m[i][j] > maxValue )
        {
            maxValue = m[i][j];
        }
        sum += m[i][j];
    }
}
cout << maxValue << endl;
cout << sum << endl;
The Big O would still be O(N^2), right?
So does that mean Big O is just an indication of how time will grow according to the input data size, and not of how the algorithm is written?
This feels a bit like a homework question to me, but...
Big-Oh is about the algorithm, and specifically how the number of steps performed (or the amount of memory used) by the algorithm grows as the size of the input data grows.
In your case, you are taking N to be the size of the input, and it's confusing because you have a two-dimensional array, NxN. So really, since your algorithm only makes one or two passes over this data, you could call it O(n), where in this case n is the size of your two-dimensional input.
But to answer the heart of your question, your first code makes two passes over the data, and your second code does the same work in a single pass. However, the idea of Big-Oh is that it should give you the order of growth, which means independent of exactly how fast a particular computer runs. So, it might be that my computer is twice as fast as yours, so I can run your first code in about the same time as you run the second code. So we want to ignore those kinds of differences and say that both algorithms make a fixed number of passes over the data, so for the purposes of "order of growth", one pass, two passes, three passes, it doesn't matter. It's all about the same as one pass.
It's probably easier to think about this without thinking about the NxN input. Just think about a single list of N numbers, and say you want to do something to it, like find the max value, or sort the list. If you have 100 items in your list, you can find the max in 100 steps, and if you have 1000 items, you can do it in 1000 steps. So the order of growth is linear with the size of the input: O(n). On the other hand, if you want to sort it, you might write an algorithm that makes roughly a full pass over the data each time it finds the next item to be inserted, and it has to do that roughly once for each element in the list, so that's making n passes over your list of length n, so that's O(n^2). If you have 100 items in your list, that's roughly 10^4 steps, and if you have 1000 items in your list that's roughly 10^6 steps.
So the idea is that those numbers grow really fast in comparison to the size of your input, so even if I have a much faster computer (e.g., a model 10 years better than yours), I might be able to beat you in the max problem even with a list 2 or 10 or even 100 or 1000 times as long. But for the sorting problem with an O(n^2) algorithm, I won't be able to beat you when I try to take on a list that's 100 or 1000 times as long, even with a computer 10 or 20 years better than yours. That's the idea of Big-Oh: to factor out those "relatively unimportant" speed differences and be able to see what amount of work, in a more general/theoretical sense, a given algorithm does on a given input size.
Of course, in real life, it may make a huge difference to you that one computer is 100 times faster than another. If you are trying to solve a particular problem with a fixed maximum input size, and your code is running at 1/10 the speed that your boss is demanding, and you get a new computer that runs 10 times faster, your problem is solved without needing to write a better algorithm. But the point is that if you ever wanted to handle larger (much larger) data sets, you couldn't just wait for a faster computer.
Big O notation is an upper bound on the maximum amount of time taken to execute the algorithm, based on the input size. So basically two algorithms can have slightly varying maximum running times but the same big O notation.
What you need to understand is that a running time function that is linear in the input size will have big O notation O(n), and a quadratic function will always have big O notation O(n^2).
So if your running time is just n, that is one linear pass, the big O notation stays O(n), and if your running time is 6n + c, that is 6 linear passes plus a constant time c, it is still O(n).
Now in the above case the second code is more optimized, as the number of jumps to memory locations for the loops is smaller, and hence it will execute faster. But both versions of the code still have the asymptotic running time O(n^2).
Yes, it's O(N^2) in both cases. Of course the O() time complexity depends on how you have written your algorithm, but both versions above are O(N^2). However, note that N^2 is actually the size of your input data (it's an N x N matrix), so this would be better characterized as a linear time algorithm, O(n), where n is the size of the input, i.e. n = N x N.

Given a range, I have to calculate the frequency of numbers in an array. Is the given solution efficient?

#include <iostream>
using namespace std;
int main()
{
    int a[10] = {1,2,3,5,2,3,1,5,3,1};
    int i;
    int c[10] = {0};
    for(i = 0; i < 10; i++)
        c[a[i]]++;
    for(i = 0; i < 10; i++)
        cout << i << ": " << c[i] << endl;
    return 0;
}
The running time of the algorithm is O(n), but it's taking extra space of O(n). Can I do better?
Thanks!
Depends on what is important to you - you can create an algorithm taking O(n^2) time but O(1) space (using two loops, see the code below), though you can't improve the time complexity below O(n).
// O(n^2) time, O(1) extra space: for each value, scan the original array a
for(int i = 0; i < 10; i++) {
    int count = 0;
    for(int j = 0; j < 10; j++)
        if (a[j] == i) count++;
    cout << i << ": " << count << endl;
}
Another possibility for O(1) space would be an in-place sort of the array and then traversing it once, which would have time complexity O(n log n) using an in-place merge sort.
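A sketch of that sort-then-traverse idea (note that std::sort is not an in-place merge sort and is not guaranteed O(1) extra space, so treat this purely as an illustration; unlike the original program, it prints only the values that actually occur):

#include <algorithm>
#include <iostream>

int main()
{
    int a[10] = {1, 2, 3, 5, 2, 3, 1, 5, 3, 1};
    const int n = 10;
    std::sort(a, a + n);              // sort in place, O(n log n)

    // count equal runs in a single pass
    for (int i = 0; i < n; )
    {
        int value = a[i], count = 0;
        while (i < n && a[i] == value) { ++count; ++i; }
        std::cout << value << ": " << count << std::endl;
    }
    return 0;
}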
No you can't. That's the best you can do.
What is "efficient"? Show us you performance requirements and performance measurements. Then we can tell you if it's efficient. Until then this is a wide open question with lots of wrong answers and no right answer.
The answers thus far are correct only if the word 'efficient' means 'runs as fast as possible'.
Maybe you have a fast computer with little RAM.
You can always make a piece of code run faster, or use less memory, or less disk space, or less... if it is not 'efficient' enough. I have seen guys hand-craft assembly to make it faster. Usually it's a waste of time and effort. Optimizing code that has not been profiled is a fool's game.
If all the numbers are in the range 1 to N then it can be done in O(N) time complexity and O(1) space complexity.
If there is an index X in array A such that A[X] = Y, then add N to the value present at index Y, so A[Y] becomes original + N. Continuing this, values will be of the form original + K*N where K >= 0. To retrieve the original element we can do (original + K*N) % N, since (x + k*n) % n = x, and the count can be found by (original + K*N) / N.
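A sketch of that encoding trick on the array from the question (assuming, to keep the indexing direct, that every value lies in 0..N-1):

#include <iostream>

// Each time a value Y is seen, add N to A[Y]. Afterwards A[i] % N is the
// original element at i and A[i] / N is how many times the value i occurred.
int main()
{
    const int N = 10;
    int A[N] = {1, 2, 3, 5, 2, 3, 1, 5, 3, 1};

    for (int i = 0; i < N; ++i)
        A[A[i] % N] += N;          // % N recovers the original value even if
                                   // this slot was already bumped

    for (int i = 0; i < N; ++i)
        std::cout << i << ": " << A[i] / N << std::endl;

    for (int i = 0; i < N; ++i)    // optional: restore the original array
        A[i] %= N;
    return 0;
}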