Sort a number's digits using bit-shifting operations - C++

I wanted to ask how it is possible to sort an integer's digits by size using bit-shifting operations.
Here is an example:
Input : 12823745
Output : 87543221
Basically, this sorts the digits from the highest to the lowest.
I heard it is possible without using the bubble sort/quicksort algorithms, by using some bit-shifting operations instead.
Does someone know how that can be achieved?

Quick sort and bubble sort are general-purpose algorithms. As such, they make no assumptions about the data to be sorted. However, whenever we have additional information about the data, we can use it to get something different (I do not say better/faster or anything like that, because it is really hard to beat something as simple and powerful as quicksort/bubble sort, and what you need really depends on the specific situation).
If there is only a limited number of possible elements (only 10 different digits), one could use something like this:
#include <iostream>
#include <vector>
using namespace std;

typedef std::vector<int> ivec;

void sort(std::vector<int>& vec){
    ivec count(10,0);                    // one counter per digit 0..9
    for (int i=0;i<vec.size();++i){ count[vec[i]]++; }
    ivec out;
    for (int i=9;i>-1;--i){              // emit the highest digits first
        for (int j=0;j<count[i];j++){
            out.push_back(i);
        }
    }
    vec = out;
}

void print(const ivec& vec){
    for (int i=0;i<vec.size();++i){ std::cout << vec[i]; }
    std::cout << std::endl;
}

int main() {
    ivec vec {1,2,8,2,3,7,4,5};
    sort(vec);                           // was sort1(vec), a typo
    print(vec);
    return 0;
}
Note that this has complexity O(N). Further, it works whenever the set of possible elements is finite (not only for digits, but not, for example, for floats). Unfortunately it is only practical for really small sets.
Sometimes it is not sufficient to just count the elements: they might have some identity besides the value being sorted on. However, the above can easily be modified to work in that case too, as the sketch below shows (it needs quite a few copies, but is still O(n)).
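A minimal sketch of that modification (my own illustration; the Item type and names are hypothetical): each element carries a payload besides the digit key, so instead of bare counters we keep whole elements in per-key buckets and concatenate.
#include <cstddef>
#include <string>
#include <vector>

struct Item { int key; std::string payload; };  // hypothetical element type

void bucketSort(std::vector<Item>& vec) {
    std::vector<std::vector<Item>> buckets(10); // keys are digits 0..9
    for (std::size_t i = 0; i < vec.size(); ++i)
        buckets[vec[i].key].push_back(vec[i]);  // stable: preserves input order
    vec.clear();
    for (int k = 9; k >= 0; --k)                // highest keys first, as above
        for (std::size_t j = 0; j < buckets[k].size(); ++j)
            vec.push_back(buckets[k][j]);
}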
Actually, I have no idea how your problem could be solved using bit-shift operations. However, I just wanted to point out that there is always a way not to use a general-purpose algorithm when your data has nice properties (and sometimes it can be even more efficient).

Here is a solution: implement bubble sort with loops and bitwise operations.
#include <iostream>
#include <string>

int main() {
    std::string unsorted = "37980965";
    for (int i = 1; i < (int)unsorted.size(); ++i)
        for (int j = 0; j < i; ++j) {
            auto &a = unsorted[i];
            auto &b = unsorted[j];
            // if a < b, the comma-chained XOR triple swaps a and b;
            // otherwise the short-circuiting || skips the swap
            (((a) >= (b)) || (((a) ^= (b)), ((b) ^= (a)), ((a) ^= (b))));
        }
    std::cout << unsorted;
}
Notice that the comparison and swap happen without any explicit branch or arithmetic operation; only a comparison and bitwise operations are used.
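For reference, here is the XOR-swap identity in isolation (a small demo of my own). Note that it relies on a and b never aliasing the same object (a ^ a == 0 would zero both), which the j < i loop structure above guarantees:
#include <cassert>

int main() {
    unsigned a = 0x6, b = 0x3;     // a0 = 0110, b0 = 0011 in binary
    a ^= b;                        // a == a0 ^ b0
    b ^= a;                        // b == b0 ^ (a0 ^ b0) == a0
    a ^= b;                        // a == (a0 ^ b0) ^ a0 == b0
    assert(a == 0x3 && b == 0x6);  // swapped without a temporary
}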

How about this one?
#include <iostream>

int main()
{
    int digit[] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
    unsigned int input = 12823745;
    unsigned int output = 0;
    while(input > 0) {             // count how often each digit occurs
        digit[input % 10]++;
        input /= 10;
    }
    for(int i = 9; i >= 0; --i) {  // rebuild the number, largest digit first
        while(digit[i] > 0) {
            output = output * 10 + i;
            digit[i]--;
        }
    }
    std::cout << output;
}

Related

Trouble sieving primes from a large range

#include <cstdio>
#include <algorithm>
#include <cmath>
using namespace std;

int main() {
    int t,m,n;
    scanf("%d",&t);
    while(t--)
    {
        scanf("%d %d",&m,&n);
        int rootn=sqrt(double(n));
        bool p[10000]; //finding prime numbers from 1 to square_root(n)
        for(int j=0;j<=rootn;j++)
            p[j]=true;
        p[0]=false;
        p[1]=false;
        int i=rootn;
        while(i--)
        {
            if(p[i]==true)
            {
                int c=i;
                do
                {
                    c=c+i;
                    p[c]=false;
                }while(c+p[i]<=rootn);
            }
        };
        i=0;
        bool rangep[10000]; //used for finding prime numbers between m and n by eliminating multiples of primes between 1 and squareroot(n)
        for(int j=0;j<=n-m+1;j++)
            rangep[j]=true;
        i=rootn;
        do
        {
            if(p[i]==true)
            {
                for(int j=m;j<=n;j++)
                {
                    if(j%i==0&&j!=i)
                        rangep[j-m]=false;
                }
            }
        }while(i--);
        i=n-m;
        do
        {
            if(rangep[i]==true)
                printf("%d\n",i+m);
        }while(i--);
        printf("\n");
    }
    return 0;
    system("PAUSE");
}
Hello, I'm trying to use the sieve of Eratosthenes to find prime numbers in a range between m and n, where m >= 1 and n <= 100000000. When I give an input of 1 to 10000 the result is correct, but for a wider range the stack overflows, even if I increase the array sizes.
A simpler and more readable implementation:
#include <cmath>
#include <iostream>
#include <vector>

void Sieve(int n) {
    int sqrtn = (int)sqrt((double)n);
    std::vector<bool> sieve(n + 1, false);   // false = possibly prime
    for (int m = 2; m <= sqrtn; ++m) {
        if (!sieve[m]) {
            std::cout << m << " ";
            for (int k = m * m; k <= n; k += m)
                sieve[k] = true;             // mark all multiples composite
        }
    }
    for (int m = sqrtn + 1; m <= n; ++m)     // +1 avoids printing sqrtn twice
        if (!sieve[m])
            std::cout << m << " ";
}
Reason for the error
You are declaring an enormous array as a local variable. That's why, when the stack frame of main is pushed, it needs so much memory that a stack overflow exception is generated. Visual Studio is smart enough to analyze the code for projected run-time stack usage and generate the exception when needed.
Use the compact implementation below. Moreover, you can declare bs inside the function if you want. Don't make implementations more complex than necessary.
Implementation
#include <bitset>
#include <vector>
using namespace std;

typedef long long ll;
typedef vector<int> vi;

ll _sieve_size;              // this declaration was missing in the original snippet
vi primes;
bitset<100000000> bs;

void sieve(ll upperbound) {
    _sieve_size = upperbound + 1;
    bs.set();                // assume everything is prime at first
    bs[0] = bs[1] = 0;
    for (ll i = 2; i < _sieve_size; i++)
        if (bs[i]) {         // if not marked
            for (ll j = i * i; j < _sieve_size; j += i) // check all the multiples
                bs[j] = 0;   // they are surely not prime :-)
            primes.push_back((int)i); // this is prime
        }
}
Call sieve(10000); from main(). You then have the list of primes in the vector primes.
Note: as mentioned in a comment, a stack overflow is quite an unexpected error here. You are implementing a sieve, and it will be more efficient if you use a bitset instead of bool.
A few more things: if n = 10^8 then sqrt(n) = 10^4, and your bool array is p[10000], so there is a chance of accessing the array out of bounds.
I agree with the other answers,
saying that you should basically just start over. 
Do you even care why your code doesn’t work?  (You didn’t actually ask.)
I’m not sure that the problem in your code
has been identified accurately yet. 
First of all, I’ll add this comment to help set the context:
// For any int aardvark;
// p[aardvark] = false means that aardvark is composite (i.e., not prime).
// p[aardvark] = true means that aardvark might be prime, or maybe we just don’t know yet.
Now let me draw your attention to this code:
int i=rootn;
while(i--)
{
    if(p[i]==true)
    {
        int c=i;
        do
        {
            c=c+i;
            p[c]=false;
        }while(c+p[i]<=rootn);
    }
};
You say that n≤100000000 (although your code doesn’t check that), so,
presumably, rootn≤10000, which is the dimensionality (size) of p[]. 
The above code is saying that, for every integer i
(no matter whether it’s prime or composite),
2×i, 3×i, 4×i, etc., are, by definition, composite. 
So, for c equal to 2×i, 3×i, 4×i, …,
we set p[c]=false because we know that c is composite.
But look closely at the code. 
It sets c=c+i and says p[c]=false
before checking whether c is still in range
to be a valid index into p[]. 
Now, if n≤25000000, then rootn≤5000. 
If i≤ rootn, then i≤5000, and, as long as c≤5000, then c+i≤10000. 
But, if n>25000000, then rootn>5000,†
and the sequence i=rootn;, c=i;, c=c+i;
can set c to a value greater than 10000. 
And then you use that value to index into p[]. 
That’s probably where the stack overflow occurs.
Oh, BTW, you don't need to say if(p[i]==true); if(p[i]) is good enough.
To add insult to injury, there’s a second error in the same block:
while(c+p[i]<=rootn). 
c and i are ints,
and p is an array of bools, so p[i] is a bool —
and yet you are adding c + p[i]. 
We know from the if that p[i] is true,
which is numerically equal to 1 —
so your loop termination condition is while (c+1<=rootn);
i.e., while c≤rootn-1. 
I think you meant to say while(c+i<=rootn).
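For reference, a corrected version of that block might look like this (my sketch of the fix: the bound is tested, with i rather than p[i], before c is ever used as an index; it relies on p[0] and p[1] already being false, as in the original):
int i = rootn;
while (i--)
{
    if (p[i])
    {
        int c = i;
        while (c + i <= rootn)  // test the bound BEFORE writing ...
        {
            c = c + i;
            p[c] = false;       // ... so c is always a valid index here
        }
    }
}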
Oh, also, why do you have executable code
immediately after an unconditional return statement? 
The system("PAUSE"); statement cannot possibly be reached.
(I’m not saying that those are the only errors;
they are just what jumped out at me.)
______________
† OK, splitting hairs, n has to be ≥ 25010001
(i.e., 5001²) before rootn > 5000.

Remove duplicate int numbers in a vector - C++

I'm trying to remove duplicate integer numbers in a vector. My aim is to have only one copy of them. Well, I wrote some simple code, but it doesn't work properly. Can anyone help? Thanks in advance.
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    int a = 10, b = 10 , c = 8, d = 8, e = 10 , f = 6;
    vector<int> vec;
    vec.push_back(a);
    vec.push_back(b);
    vec.push_back(c);
    vec.push_back(d);
    vec.push_back(e);
    vec.push_back(f);
    for (int i=vec.size()-1; i>=0; i--)
    {
        for(int j=vec.size()-1; j>=0; j--)
        {
            if(vec[j] == vec[i-1])
                vec.erase(vec.begin() + j);
        }
    }
    for(int i=0; i<vec.size(); i++)
    {
        cout<< "vec: "<< vec[i]<<endl;
    }
    return 0;
}
Don't use a vector for this. Use a set:
#include <set>
...
set<int> vec;
This will ensure you will have no duplicates by not adding an element if it already exists.
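A small illustration of that (my own example): inserting duplicates into a std::set is a no-op, so the set ends up holding each value exactly once.
#include <iostream>
#include <set>

int main() {
    int vals[] = {10, 10, 8, 8, 10, 6};
    std::set<int> s(vals, vals + 6);   // duplicate inserts are silently dropped
    for (std::set<int>::iterator it = s.begin(); it != s.end(); ++it)
        std::cout << *it << ' ';       // prints: 6 8 10 (sorted, unique)
}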
To remove duplicates it's easier if you sort the array first. The code below uses two different methods for removing the duplicates: one using the built-in C++ algorithms and the other using a loop.
#include <iostream>
#include <vector>
#include <iterator>
#include <algorithm>
using namespace std;

int main() {
    int a = 10, b = 10 , c = 8, d = 8, e = 10 , f = 6;
    vector<int> vec;
    vec.push_back(a);
    vec.push_back(b);
    vec.push_back(c);
    vec.push_back(d);
    vec.push_back(e);
    vec.push_back(f);
    // Sort the vector
    std::sort(vec.begin(), vec.end());
    // Remove duplicates (v1)
    std::vector<int> result;
    std::unique_copy(vec.begin(), vec.end(), std::back_inserter(result));
    // Print results
    std::cout << "Result v1: ";
    std::copy(result.begin(), result.end(), std::ostream_iterator<int>(cout, " "));
    std::cout << std::endl;
    // Remove duplicates (v2)
    std::vector<int> result2;
    for (int i = 0; i < vec.size(); i++) {
        if (i > 0 && vec[i] == vec[i - 1])
            continue;
        result2.push_back(vec[i]);
    }
    // Print results (v2)
    std::cout << "Result v2: ";
    std::copy(result2.begin(), result2.end(), std::ostream_iterator<int>(cout, " "));
    std::cout << std::endl;
    return 0;
}
If you need to preserve the initial order of the numbers, you can write a function that removes duplicates using a helper set<int> structure:
void removeDuplicates( vector<int>& v )
{
    set<int> s;
    vector<int> res;
    for( int i = 0; i < v.size(); i++ ) {
        int x = v[i];
        if( s.find(x) == s.end() ) {
            s.insert(x);
            res.push_back(x);
        }
    }
    swap(v, res);
}
The problem with your code is here:
for(int j=vec.size()-1; j>=0; j--)
{
    if(vec[j] == vec[i-1])
        vec.erase(vec.begin() + j);
}
there's going to be a time when j == i-1, and that's going to kill your algorithm, and there will be a time when i-1 < 0, so you will get an out-of-bounds access.
What you can do is to change your for loop conditions:
for (int i = vec.size() - 1; i > 0; i--){
    for(int j = i - 1; j >= 0; j--){
        //do stuff
    }
}
This way, the two variables you're comparing will never be the same, and your indices will always be at least 0.
Others have already pointed to std::set. This is certainly simple and easy--but it can be fairly slow, quite a bit slower than std::vector, largely because (like a linked list) it consists of individually allocated nodes, linked together via pointers to form a balanced tree¹.
You can (often) improve on that by using an std::unordered_set instead of a std::set. This uses a hash table² instead of a tree to store the data, so it normally uses contiguous storage, and gives O(1) expected access time instead of the O(log N) expected for a tree.
An alternative that's often faster is to collect the data in the vector, then sort the data and use std::unique to eliminate duplicates. This tends to be best when you have two distinct phases of operation: first you collect all the data, then you need duplicates removed. If you frequently alternate between adding/deleting data, and needing a duplicate free set, then something like std::set or std::unordered_set that maintain the set without duplicates at all times may be more useful.
All of these also affect the order of the items. An std::set always maintains the items sorted in a defined order. With std::unique you need to explicitly sort the data first. With std::unordered_set you get the items in an arbitrary order that's neither their original order nor sorted.
If you need to maintain the original order, but without duplicates, you normally end up needing to store the data twice. For example, when you need to add a new item, you attempt to insert it into an std::unordered_set, then, if and only if that succeeds, add it to the vector as well (see the sketch below).
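A short sketch of that two-container approach (my own illustration): the unordered_set answers "seen before?" in O(1) expected time, while the vector preserves insertion order.
#include <unordered_set>
#include <vector>

// insert() returns a pair whose .second is true iff the value was new,
// so the vector only receives first occurrences, in their original order.
void addUnique(std::vector<int>& v, std::unordered_set<int>& seen, int x) {
    if (seen.insert(x).second)
        v.push_back(x);
}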
¹ Technically, implementation as a tree isn't strictly required, but it's about the only possibility of which I'm aware that can meet the requirements, and all the implementations of which I'm aware are based on trees.
² Again, other implementations might be theoretically possible, but all of those I'm aware of use hashing--and in this case, enough of the implementation is exposed that avoiding a hash table would probably be even more difficult.
You can remove duplicates before push_back:
void push(std::vector<int> & arr, int n)
{
    for(int i = 0; i != arr.size(); ++i)
    {
        if(arr[i] == n)
        {
            return;          // n is already present, don't add it again
        }
    }
    arr.push_back(n);
}
... ...
push(vec, a);
push(vec, b);
push(vec, c);
...

Efficient way to copy an array with a mask in C++

I have two arrays. One is "x" times the size of the second one.
I need to copy from the first (bigger) array to the second (smaller) array only every x-th of its elements,
meaning indices 0, x, 2x, and so on.
Each array sits as a block in the memory.
The array is of simple values.
I am currently doing it using a loop.
Is there any faster smarter way to do this?
Maybe with ostream?
Thanks!
You are doing something like this, right?
#include <cstddef>

int main()
{
    const std::size_t N = 20;
    const std::size_t x = 5;
    int input[N*x];
    int output[N];
    for(std::size_t i = 0; i < N; ++i)
        output[i] = input[i*x];
}
Well, I don't know of any function that can do that, so I would use the for loop. It is fast.
EDIT: an even faster solution, avoiding the multiplications (C++03 version):
int* inputit = input;
int* outputit = output;
int* outputend = output + N;
while(outputit != outputend)
{
    *outputit = *inputit;
    ++outputit;
    inputit += x;   // advance by the stride instead of multiplying each time
}
If I get you right, you want to copy every n-th element. The simplest solution would be:
#include <iostream>

int main(int argc, char **argv) {
    const int size[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
    int out[5];
    int *pout = out;
    for (const int *i = &size[0]; i < &size[10]; i += 3) {
        std::cout << *i << ", ";
        *pout++ = *i;
        if (pout > &out[4]) {
            break;
        }
    }
    std::cout << "\n";
    for (const int *i = out; i < pout; i++) {
        std::cout << *i << ", ";
    }
    std::cout << std::endl;
}
You can use copy_if and a lambda in C++11 (note: the destination must be b.begin(), an output position, not b.end()):
copy_if(a.begin(), a.end(), b.begin(), [&] (const int& i) -> bool
    { size_t index = &i - &a[0]; return index % x == 0; });
A test case would be:
#include <iostream>
#include <vector>
#include <algorithm> // std::copy_if
using namespace std;

int main()
{
    std::vector<int> a;
    a.push_back(0);
    a.push_back(1);
    a.push_back(2);
    a.push_back(3);
    a.push_back(4);
    std::vector<int> b(3);
    int x = 2;
    std::copy_if(a.begin(), a.end(), b.begin(), [&] (const int& i) -> bool
        { size_t index = &i - &a[0]; return index % x == 0; });
    for(int i=0; i<b.size(); i++)
    {
        std::cout<<" "<<b[i];
    }
    return 0;
}
Note that you need to use a C++11 compatible compiler (if gcc, with -std=c++11 option).
#include <array>
#include <iterator>

template<typename InIt, typename OutIt>
void copy_step_x(InIt first, InIt last, OutIt result, int x)
{
    // note: assumes the distance from first to last is a multiple of x,
    // otherwise the iterator would step past last
    for(auto it = first; it != last; std::advance(it, x))
        *result++ = *it;
}

int main()
{
    std::array<int, 64> ar0;
    std::array<int, 32> ar1;
    copy_step_x(std::begin(ar0), std::end(ar0), std::begin(ar1), ar0.size() / ar1.size());
}
The proper and clean way of doing this is a loop, as has been said before. A number of good answers here show you how to do that.
I do NOT recommend doing it in the following fashion; it depends on a lot of specifics (the value range of X, the size and value range of the variables, and so on), but for some cases you could do it like this:
for every 4 bytes:
tmp = copy a 32 bit variable from the array, this now contains the 4 new values
real_tmp = bitmask tmp to get the right variable of those 4
add it to the list
This only works if you want values <= 255 and X == 4, but if you want something faster than a loop, this is one way of doing it. The method can be adapted for 16-bit, 32-bit or 64-bit values and for every 2nd/3rd/4th/5th/6th/7th/8th (64-bit) value, but for X > 8 it will not work, nor will it work for values that are not laid out linearly in memory. It won't work for classes either. A sketch of the idea follows below.
For this kind of optimization to be worth the hassle the code need to run often, I assume you've run a profiler to confirm that the old copy is a bottleneck before starting implementing something like this.
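A rough illustration of the idea (my sketch; a little-endian layout is assumed, and the function name is hypothetical): with byte-sized values and X == 4, one 32-bit load brings in four source values at once, and a mask picks out the one to keep.
#include <cstddef>
#include <cstring>   // memcpy
#include <stdint.h>
#include <vector>

std::vector<uint8_t> copyEvery4th(const std::vector<uint8_t>& src)
{
    std::vector<uint8_t> dst;
    dst.reserve(src.size() / 4);
    for (std::size_t i = 0; i + 4 <= src.size(); i += 4) {
        uint32_t tmp;
        std::memcpy(&tmp, &src[i], sizeof tmp);           // one load, 4 values
        dst.push_back(static_cast<uint8_t>(tmp & 0xFFu)); // keep byte 0 (little-endian)
    }
    return dst;
}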
The following is an observation on how most CPU designs are unimaginative when it comes to this sort of thing.
On some OpenVPX systems you have the ability to DMA data from one processor to another. The one that I use has a pretty advanced DMA controller, and it can do this sort of thing for you.
For example, I could ask it to copy your big array to another CPU, but skipping over N elements of the array, just like you're trying to do. As if by magic the destination CPU would have the smaller array in its memory. I could also if I wanted perform matrix transformations, etc.
The nice thing is that it takes no CPU time at all to do this; it's all done by the DMA engine. My CPUs can then concentrate on harder sums instead of being tied down shuffling data around.
I think the Cell processor in the PS3 can do this sort of thing internally (I know it can DMA data around, I don't know if it will do the strip mining at the same time). Some DSP chips can do it too. But x86 doesn't do it, meaning us software programmers have to write ridiculous loops just moving data in simple patterns. Yawn.
I have written a multithreaded memcpy() in the past to do this sort of thing. The only way you're going to beat a for loop is to have several threads doing your for loop in several parallel chunks (see the sketch below).
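A sketch of what that might look like with std::thread (C++11; my illustration, with the chunking kept deliberately simple):
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Each worker copies a disjoint range of the output, reading the input
// with stride x, so no synchronization is needed beyond the final join.
void stridedCopyMT(const int* in, int* out, std::size_t n, std::size_t x,
                   unsigned nthreads = 4)
{
    std::vector<std::thread> workers;
    std::size_t chunk = (n + nthreads - 1) / nthreads;
    for (unsigned t = 0; t < nthreads; ++t)
        workers.push_back(std::thread([=] {
            std::size_t begin = t * chunk;
            std::size_t end = std::min(begin + chunk, n);
            for (std::size_t i = begin; i < end; ++i)
                out[i] = in[i * x];
        }));
    for (std::size_t i = 0; i < workers.size(); ++i)
        workers[i].join();
}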
If you pick the right compiler (eg Intel's ICC or Sun/Oracles Sun Studio) they can be made to automatically parallelise your for loops on your behalf (so your source code doesn't change). That's probably the simplest way to beat your original for loop.

Need advice on improving my code: Search Algorithm

I'm pretty new at C++ and would like some advice on this.
Here is some code I wrote to count the number of times an arbitrary integer x occurs in an array and to output the comparisons made.
However, I've read that by using multi-way branching ("Divide and conquer!") techniques, I could make the algorithm run faster.
Could anyone point me in the right direction on how to go about doing that?
Here is my working code for the other method I did:
#include <iostream>
#include <cstdlib>
#include <vector>
using namespace std;

vector <int> integers;
int function(int vectorsize, int count);
int x;
double input;

int main()
{
    cout<<"Enter 20 integers"<<endl;
    cout<<"Type 0.5 to end"<<endl;
    while(true)
    {
        cin>>input;
        if (input == 0.5)
            break;
        integers.push_back(input);
    }
    cout<<"Enter the integer x"<<endl;
    cin>>x;
    function((integers.size()-1),0);
    system("pause");
}

int function(int vectorsize, int count)
{
    if(vectorsize<0) //termination condition
    {
        cout<<"The number of times "<< x <<" appears is "<<count<<endl;
        return 0;
    }
    if (integers[vectorsize] > x)
    {
        cout<< integers[vectorsize] << " > " << x <<endl;
    }
    if (integers[vectorsize] < x)
    {
        cout<< integers[vectorsize] << " < " << x <<endl;
    }
    if (integers[vectorsize] == x)
    {
        cout<< integers[vectorsize] << " = " << x <<endl;
        count = count+1;
    }
    return (function(vectorsize-1,count));
}
Thanks!
If the array is unsorted, just use a single loop to compare each element to x. Unless there's something you're forgetting to tell us, I don't see any need for anything more complicated.
If the array is sorted, there are algorithms (e.g. binary search) that would have better asymptotic complexity. However, for a 20-element array a simple linear search should still be the preferred strategy.
If your array is a sorted one, you can use a divide-and-conquer strategy:
Efficient way to count occurrences of a key in a sorted array
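In brief, the linked approach looks like this (a sketch of mine, not the linked answer's exact code): on a sorted vector, two binary searches bound the run of x, giving the count in O(log N).
#include <algorithm>
#include <vector>

int countOccurrences(const std::vector<int>& sorted, int x) {
    // equal_range does lower_bound + upper_bound, each a binary search
    std::pair<std::vector<int>::const_iterator,
              std::vector<int>::const_iterator> r =
        std::equal_range(sorted.begin(), sorted.end(), x);
    return (int)(r.second - r.first);
}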
A divide-and-conquer algorithm is only beneficial if you can either eliminate some work with it, or if you can parallelize the divided work parts across several computation units. In your case, the first option is possible with an already-sorted dataset; other answers may have addressed that.
For the second option, the algorithm is called map-reduce: it splits the dataset into several subsets, distributes the subsets to as many threads or processes as available, and gathers the results to "compile" them (the term is actually "reduce") into a meaningful result. In your setting, each thread scans its own slice of the array to count the items, and returns its result to the "reduce" thread, which adds them up to produce the final result. This solution is only interesting for large datasets, though.
There are questions dealing with mapreduce and c++ on SO, but I'll try to give you a sample implementation here:
#include <utility>
#include <thread>
#include <vector>
#include <functional>
#include <boost/thread/barrier.hpp>  // was <boost/barrier>, which doesn't exist
using namespace std;

extern vector<int> integers;  // the data and key, globals as in the question
extern int x;

constexpr int MAP_COUNT = 4;

int mresults[MAP_COUNT];
boost::barrier endmap(MAP_COUNT + 1);

void mfunction(int start, int end, int rank) {
    int count = 0;
    for (int i = start; i < end; i++)
        if (integers[i] == x) count++;
    mresults[rank] = count;   // was "mresult", a typo
    endmap.wait();
}

int rfunction() {
    int count = 0;
    for (int i : mresults) {
        count += i;
    }
    return count;
}

int mapreduce() {
    vector<thread> mthreads;  // a vector of references is not valid C++
    // note: any tail elements beyond MAP_COUNT * range are not counted
    int range = integers.size() / MAP_COUNT;
    for (int i = 0; i < MAP_COUNT; i++)
        mthreads.push_back(thread(bind(mfunction, i * range, (i + 1) * range, i)));
    endmap.wait();            // wait until all map threads have finished
    for (thread& t : mthreads) t.join();
    return rfunction();
}
Once the integers vector has been populated, you call the mapreduce function defined above, which should return the expected result. As you can see, the implementation is very specialized:
the map and reduce functions are specific to your problem,
the number of threads used for map is static,
I followed your style and used global variables,
for convenience, I used a boost::barrier for synchronization
However this should give you an idea of the algorithm, and how you could apply it to similar problems.
caveat: code untested.

Optimized way to find M largest elements in an NxN array using C++

I need a blazing fast way to find the 2D positions and values of the M largest elements in an NxN array.
Right now I'm doing this:
struct SourcePoint {
    Point point;
    float value;
};

SourcePoint* maxValues = new SourcePoint[ M ];

for (int j = 0; j < rows; j++) {
    for (int i = 0; i < cols; i++) {
        float sample = arr[i][j];
        if (sample > maxValues[0].value) {
            int q = 1;
            while ( q < M && sample > maxValues[q].value ) { // test q first to stay in bounds
                maxValues[q-1] = maxValues[q]; // shuffle the values back
                q++;
            }
            maxValues[q-1].value = sample;
            maxValues[q-1].point = Point(i,j);
        }
    }
}
A Point struct is just two ints - x and y.
This code basically does an insertion sort of the values coming in. maxValues[0] always contains the SourcePoint with the lowest value that still keeps it within the top M values encountered so far. This gives us a quick and easy bailout: if sample <= maxValues[0].value, we don't do anything. The issue I'm having is the shuffling every time a new better value is found. It works its way all the way down maxValues until it finds its spot, shuffling all the elements in maxValues to make room for itself.
I'm getting to the point where I'm ready to look into SIMD solutions, or cache optimisations, since it looks like there's a fair bit of cache thrashing happening. Cutting the cost of this operation down will dramatically affect the performance of my overall algorithm since this is called many many times and accounts for 60-80% of my overall cost.
I've tried using a std::vector and make_heap, but I think the overhead for creating the heap outweighed the savings of the heap operations. This is likely because M and N generally aren't large. M is typically 10-20 and N 10-30 (NxN 100 - 900). The issue is this operation is called repeatedly, and it can't be precomputed.
I just had a thought to pre-load the first M elements of maxValues which may provide some small savings. In the current algorithm, the first M elements are guaranteed to shuffle themselves all the way down just to initially fill maxValues.
Any help from optimization gurus would be much appreciated :)
A few ideas you can try. In some quick tests with N=100 and M=15 I was able to get it around 25% faster in VC++ 2010 but test it yourself to see whether any of them help in your case. Some of these changes may have no or even a negative effect depending on the actual usage/data and compiler optimizations.
Don't allocate a new maxValues array each time unless you need to. Using a stack variable instead of dynamic allocation gets me +5%.
Changing g_Source[i][j] to g_Source[j][i] gains you a little bit (not as much as I thought it would).
Using the structure SourcePoint1 listed at the bottom gets me another few percent.
The biggest gain of around +15% was to replace the local variable sample with g_Source[j][i]. The compiler is likely smart enough to optimize out the multiple reads to the array which it can't do if you use a local variable.
Trying a simple binary search netted me a small loss of a few percent. For larger M/Ns you'd likely see a benefit.
If possible try to keep the source data in arr[][] sorted, even if only partially. Ideally you'd want to generate maxValues[] at the same time the source data is created.
Look at how the data is created/stored/organized may give you patterns or information to reduce the amount of time to generate your maxValues[] array. For example, in the best case you could come up with a formula that gives you the top M coordinates without needing to iterate and sort.
Code for above:
struct SourcePoint1 {
int x;
int y;
float value;
int test; //Play with manual/compiler padding if needed
};
If you want to go into micro-optimizations at this point, a simple first step would be to get rid of the Points and just pack both dimensions into a single int. That reduces the amount of data you need to shift around, and gets SourcePoint down to a power of two in size, which simplifies indexing into it.
Also, are you sure that keeping the list sorted is better than simply recomputing which element is the new lowest after each time you shift the old lowest out? (A sketch of that variant follows.)
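To make that alternative concrete, a sketch (my illustration; the struct is a stand-in for the OP's type and all names are hypothetical): keep the array unsorted, remember where the minimum sits, and rescan only when an element is actually replaced.
struct SourcePoint { int x, y; float value; };  // stand-in for the OP's struct

// best[] holds the top-M-so-far in no particular order; minIndex tracks
// the weakest entry. Replacement costs an O(M) rescan, but there is no
// element shifting at all.
void consider(SourcePoint* best, int M, int& minIndex, const SourcePoint& s)
{
    if (s.value <= best[minIndex].value)
        return;                       // quick reject, same as before
    best[minIndex] = s;               // overwrite the old minimum
    minIndex = 0;                     // rescan for the new minimum
    for (int q = 1; q < M; ++q)
        if (best[q].value < best[minIndex].value)
            minIndex = q;
}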
(Updated 22:37 UTC 2011-08-20)
I propose a binary min-heap of fixed size holding the M largest elements (but still in min-heap order!). It probably won't be faster in practice, as I think the OP's insertion sort probably has decent real-world performance (at least when the recommendations of the other posters in this thread are taken into account).
Look-up in the case of failure should be constant time: If the current element is less than the minimum element of the heap (containing the max M elements) we can reject it outright.
If it turns out that we have an element bigger than the current minimum of the heap (the Mth biggest element) we extract (discard) the previous min and insert the new element.
If the elements are needed in sorted order the heap can be sorted afterwards.
First attempt at a minimal C++ implementation:
template<unsigned size, typename T>
class m_heap {
private:
    T nodes[size];

    static unsigned parent(unsigned i) { return (i - 1) / 2; }
    static unsigned left(unsigned i)   { return i * 2 + 1; }  // was i * 2
    static unsigned right(unsigned i)  { return i * 2 + 2; }  // was i * 2 + 1

    void bubble_down(unsigned int i) {
        for (;;) {
            unsigned j = i;
            if (left(i) < size && nodes[left(i)] < nodes[i])
                j = left(i);
            if (right(i) < size && nodes[right(i)] < nodes[j])
                j = right(i);
            if (i != j) {
                swap(nodes[i], nodes[j]);
                i = j;
            } else {
                break;
            }
        }
    }

    void bubble_up(unsigned i) {
        while (i > 0 && nodes[i] < nodes[parent(i)]) {
            swap(nodes[parent(i)], nodes[i]);
            i = parent(i);
        }
    }

public:
    m_heap() {
        for (unsigned i = 0; i < size; i++) {
            // lowest() rather than min() so negative samples work for floats
            nodes[i] = numeric_limits<T>::lowest();
        }
    }

    void add(const T& x) {
        if (x < nodes[0]) {
            // less than the minimum of the M largest: reject outright
            return;
        }
        nodes[0] = x;     // replace the old minimum ...
        bubble_down(0);   // ... and restore the heap property
    }

    T* get() { return nodes; }  // used by the test below
};
Small test/usage case:
#include <iostream>
#include <limits>
#include <algorithm>
#include <vector>
#include <stdlib.h>
#include <assert.h>
#include <math.h>
using namespace std;

// INCLUDE TEMPLATED CLASS FROM ABOVE

typedef vector<float> vf;

bool compare(float a, float b) { return a > b; }

int main()
{
    int N = 2000;
    vf v;
    for (int i = 0; i < N; i++) v.push_back( rand()*1e6 / RAND_MAX );

    static const int M = 50;
    m_heap<M, float> h;
    for (int i = 0; i < N; i++) h.add( v[i] );

    sort(v.begin(), v.end(), compare);
    vf heap(h.get(), h.get() + M); // T* get() { return nodes; } is defined above
    sort(heap.begin(), heap.end(), compare);

    cout << "Real\tFake" << endl;
    for (int i = 0; i < M; i++) {
        cout << v[i] << "\t" << heap[i] << endl;
        if (fabs(v[i] - heap[i]) > 1e-5) abort();
    }
}
You're looking for a priority queue:
template < class T, class Container = vector<T>,
class Compare = less<typename Container::value_type> >
class priority_queue;
You'll need to figure out the best underlying container to use, and probably define a Compare function to deal with your Point type.
If you want to optimize it, you could run a queue on each row of your matrix in its own worker thread, then run an algorithm to pick the largest item of the queue fronts until you have your M elements.
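For instance, a min-heap-ordered priority_queue capped at M might look like this (a sketch of mine; the SourcePoint stand-in and the comparison are assumptions based on the question):
#include <cstddef>
#include <queue>
#include <vector>

struct SourcePoint { int x, y; float value; };  // stand-in for the OP's type

struct ByValueGreater {
    bool operator()(const SourcePoint& a, const SourcePoint& b) const {
        return a.value > b.value;  // "greater" ordering => top() is the minimum
    }
};

typedef std::priority_queue<SourcePoint, std::vector<SourcePoint>,
                            ByValueGreater> TopM;

void consider(TopM& q, const SourcePoint& s, std::size_t M) {
    if (q.size() < M)
        q.push(s);                       // still filling up
    else if (s.value > q.top().value) {  // beats the weakest of the top M
        q.pop();
        q.push(s);
    }
}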
A quick optimization would be to add a sentinel value to your maxValues array. If you have maxValues[M].value equal to std::numeric_limits<float>::max(), then you can eliminate the q < M test in your while loop condition.
One idea would be to use the std::partial_sort algorithm on a plain one-dimensional sequence of references into your NxN array. You could probably also cache this sequence of references for subsequent calls. I don't know how well it performs, but it's worth a try; if it works well enough, you don't need as much "magic". In particular, you don't have to resort to micro-optimizations.
Consider this showcase:
#include <algorithm>
#include <cstring>   // memset
#include <iostream>
#include <vector>
#include <stddef.h>

static const int M = 15;
static const int N = 20;

// Represents a reference to a sample of some two-dimensional array
class Sample
{
public:
    Sample( float *arr, size_t row, size_t col )
        : m_arr( arr ),
          m_row( row ),
          m_col( col )
    {
    }

    inline operator float() const {
        return m_arr[m_row * N + m_col];
    }

    bool operator<( const Sample &rhs ) const {
        // reversed comparison, so the M largest entries sort first
        return (float)rhs < (float)*this;   // was "other", an undeclared name
    }

    int row() const {
        return m_row;
    }

    int col() const {
        return m_col;
    }

private:
    float *m_arr;
    size_t m_row;
    size_t m_col;
};

int main()
{
    // Setup a demo array
    float arr[N][N];
    memset( arr, 0, sizeof( arr ) );

    // Put in some sample values
    arr[2][1] = 5.0;
    arr[9][11] = 2.0;
    arr[5][4] = 4.0;
    arr[15][7] = 3.0;
    arr[12][19] = 1.0;

    // Setup the sequence of references into this array; you could keep
    // a copy of this sequence around to reuse it later, I think.
    std::vector<Sample> samples;
    samples.reserve( N * N );
    for ( size_t row = 0; row < N; ++row ) {
        for ( size_t col = 0; col < N; ++col ) {
            samples.push_back( Sample( (float *)arr, row, col ) );
        }
    }

    // Let partial_sort find the M largest entries
    std::partial_sort( samples.begin(), samples.begin() + M, samples.end() );

    // Print out the row/column of the M largest entries.
    for ( std::vector<Sample>::size_type i = 0; i < M; ++i ) {
        std::cout << "#" << (i + 1) << " is " << (float)samples[i] << " at " << samples[i].row() << "/" << samples[i].col() << std::endl;
    }
}
First of all, you are marching through the array in the wrong order!
You always, always, always want to scan through memory linearly. That means the last index of your array needs to be changing fastest. So instead of this:
for (int j = 0; j < rows; j++) {
    for (int i = 0; i < cols; i++) {
        float sample = arr[i][j];
Try this:
for (int i = 0; i < cols; i++) {
    for (int j = 0; j < rows; j++) {
        float sample = arr[i][j];
I predict this will make a bigger difference than any other single change.
Next, I would use a heap instead of a sorted array. The standard <algorithm> header already has push_heap and pop_heap functions to use a vector as a heap (sketch below). (This will probably not help all that much, though, unless M is fairly large. For small M and a randomized array, you do not wind up doing all that many insertions on average... Something like O(log N), I believe.)
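A sketch of that vector-as-heap variant (my illustration; a min-heap via std::greater, so front() is the weakest of the M best):
#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

void consider(std::vector<float>& heap, std::size_t M, float sample) {
    if (heap.size() < M) {
        heap.push_back(sample);           // still filling up
        std::push_heap(heap.begin(), heap.end(), std::greater<float>());
    } else if (sample > heap.front()) {   // front() is the current minimum
        std::pop_heap(heap.begin(), heap.end(), std::greater<float>());
        heap.back() = sample;             // replace it and re-heapify
        std::push_heap(heap.begin(), heap.end(), std::greater<float>());
    }
}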
Next after that is to use SSE2. But that is peanuts compared to marching through memory in the right order.
You should be able to get nearly linear speedup with parallel processing.
With P CPUs, you can process a band of rows/P rows (and all columns) on each CPU, finding the top M entries in each band, and then do a selection sort to find the overall top M.
You could probably do that with SIMD as well (but here you'd divide up the task by interleaving columns instead of banding the rows). Don't try to make SIMD do your insertion sort faster, make it do more insertion sorts at once, which you combine at the end using a single very fast step.
Naturally you could do both multi-threading and SIMD, but on a problem which is only 30x30, that's not likely to be worthwhile.
I tried replacing float with double, and interestingly that gave me a speed improvement of about 20% (using VC++ 2008). That's a bit counterintuitive, but it seems modern processors or compilers are optimized for double-value processing.
Use a linked list to store the best M values so far. You'll still have to iterate over it to find the right spot, but the insertion itself is O(1). It would probably even be better than binary search plus array insertion: O(N)+O(1) vs. O(log(N))+O(N). A sketch follows below.
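A sketch with std::list (my illustration): the scan for the spot is still O(N), but the insertion shifts nothing.
#include <list>

// 'best' is kept in ascending order; front() is the weakest survivor.
void insertSorted(std::list<float>& best, unsigned M, float v)
{
    std::list<float>::iterator it = best.begin();
    while (it != best.end() && *it < v)
        ++it;                   // O(N) scan to find the spot
    best.insert(it, v);         // O(1) splice-in, no shifting
    if (best.size() > M)
        best.pop_front();       // drop the smallest
}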
Interchange the fors, so you're not accessing every N-th element in memory and thrashing the cache.
LE: Throwing another idea that might work for uniformly distributed values.
Find the min, max in 3/2*O(N^2) comparisons.
Create anywhere from N to N^2 uniformly distributed buckets, preferably closer to N^2 than N.
For every element in the NxN matrix place it in bucket[(int)(value-min)/range], range=max-min.
Finally create a set starting from the highest bucket to the lowest, add elements from other buckets to it while |current set| + |next bucket| <=M.
If you get M elements you're done.
You'll likely get fewer elements than M, let's say P.
Apply your algorithm to the remaining bucket and get the biggest M-P elements out of it.
If the elements are uniform and you use N^2 buckets, its complexity is about 3.5*(N^2), vs. your current solution, which is about O(N^2)*ln(M). A sketch follows below.
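A rough sketch of that bucket selection (my illustration; uniform data assumed, max > min, and the N×N matrix passed as one flattened sequence):
#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

std::vector<float> topM(const std::vector<float>& a, std::size_t M,
                        float min, float max)
{
    std::size_t B = a.size();                       // ~N^2 buckets
    std::vector<std::vector<float> > bucket(B + 1); // +1 so v == max fits
    float range = max - min;                        // assumes max > min
    for (std::size_t i = 0; i < a.size(); ++i)
        bucket[(std::size_t)((a[i] - min) / range * B)].push_back(a[i]);

    std::vector<float> out;
    for (std::size_t b = B + 1; b-- > 0 && out.size() < M; ) {
        std::vector<float>& cur = bucket[b];
        if (out.size() + cur.size() <= M) {         // take the bucket whole
            out.insert(out.end(), cur.begin(), cur.end());
        } else {                                    // boundary bucket: sort it
            std::sort(cur.begin(), cur.end(), std::greater<float>());
            out.insert(out.end(), cur.begin(),
                       cur.begin() + (M - out.size()));
        }
    }
    return out;
}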