Fastest way to search and sort vectors - c++

I'm doing a project in which I need to insert data into vectors, sort it, and search it. I need the fastest possible algorithms for sorting and searching. I've been searching and found out that std::sort is basically quicksort, which is one of the fastest sorts, but I can't figure out which search algorithm is the best. Binary search? Can you help me with it? Thanks. So I've got 3 methods:
void addToVector(Obj o)
{
fvector.push_back(o);
}
void sortVector()
{
sort(fvector.begin(), fvector.end());
}
Obj* search(string& bla)
{
//i would write binary search here
return binarysearch(..);
}

I've been searching and found out that std::sort is basically
quicksort.
Answer: Not quite. Most implementations use a hybrid algorithm like
introsort, which combines quick-sort, heap-sort and insertion sort.
Quick-sort is one of the fastest sorting methods.
Answer: Not quite. In general it holds (i.e., in the average case quick-sort is of O(n log n) complexity). However, quick-sort has quadratic worst-case performance (i.e., O(n^2)). Furthermore, for a small number of inputs (e.g., a std::vector with a small number of elements), sorting with quick-sort tends to perform worse than other sorting algorithms that are considered "slower".
I can't figure out which searching algorithm is the best. Is it binary-search?
Answer: Binary search has the same average and worst case performance (i.e., O(log n)). Also keep in mind that binary search requires the container to be arranged in ascending or descending order. However, whether it is better than other searching methods (e.g., linear search, which has O(n) time complexity) depends on a number of factors. Some of them are:
The number of elements/objects.
The type of elements/objects.
Bottom Line:
Looking for the "fastest" algorithm usually denotes premature optimization, and according to one of the "great ones", "premature optimization is the root of all evil" (Donald Knuth). The "fastest", as I hope has been clearly shown, depends on quite a number of factors.
Use std::sort to sort your std::vector.
After sorting your std::vector use std::binary_search to find out whether a certain element exists in your std::vector or use std::lower_bound or std::upper_bound to find and get an element from your std::vector.
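For illustration, a minimal sketch of that approach, assuming a plain std::vector<int> (your own Obj type would need an operator< or a comparator):
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{5, 1, 4, 2, 3};

    // Sort first; the binary-search family requires an ordered range.
    std::sort(v.begin(), v.end());

    // Does the value exist at all?
    bool found = std::binary_search(v.begin(), v.end(), 4);

    // Where is it (first position not less than 4)?
    auto it = std::lower_bound(v.begin(), v.end(), 4);

    std::cout << std::boolalpha << found << ' '
              << (it - v.begin()) << '\n';  // prints: true 3
}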

For amortised O(1) access times, use a std::unordered_map, maybe with a custom hash for best effect.
Sorting seems to be unnecessary extra work.
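A rough sketch of that idea, assuming (as in the question) that Obj is looked up by a string key; the field names here are made up:
#include <string>
#include <unordered_map>

struct Obj { std::string name; int value; };

int main() {
    std::unordered_map<std::string, Obj> index;

    // Insert instead of push_back + sort.
    index.emplace("foo", Obj{"foo", 42});

    // Amortised O(1) lookup by key; no sorting step needed.
    auto it = index.find("foo");
    if (it != index.end()) {
        // use it->second
    }
}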

Searching and Sorting efficiency is highly dependent on the type of data, the ordering of the raw data, and the quantity of the data.
For example, for small sorted data sets, a linear search may be faster than a binary search, or the time difference between the two may be negligible.
Some sort algorithms will perform horribly on inversely ordered data, such as binary tree sort. Data that does not have much variation may cause a high degree of collisions in hash algorithms.
Perhaps you need to answer the bigger question: Is search or sorting the execution bottleneck in my program? Profile and find out.

If you need the fastest or the best sorting algorithm... there is no such thing. At least it hasn't been found yet. There are algorithms that provide better results for particular kinds of data, and there are algorithms that provide good results for most data. You either need to analyze your data and find the best one for your case, or use a generic algorithm like std::sort and expect it to provide good, but not necessarily the best, results.

If your elements are integers, you should consider the bucket sort algorithm, which runs in O(n) time instead of the O(n log n) average case of quicksort.
http://en.wikipedia.org/wiki/Bucket_sort
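As an illustration of the idea, here is a counting-sort sketch (a common bucket-style variant) for integers known to lie in a small range [0, max_value]; it is not a general replacement for std::sort:
#include <cstddef>
#include <vector>

// O(n + k) bucket-style sort for integers in [0, max_value].
void counting_sort(std::vector<int>& v, int max_value) {
    std::vector<int> count(max_value + 1, 0);
    for (int x : v) ++count[x];      // bucket the values

    std::size_t out = 0;
    for (int value = 0; value <= max_value; ++value)
        for (int i = 0; i < count[value]; ++i)
            v[out++] = value;        // write them back in order
}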

Sorting
In case you want to know about the fastest sorting technique for integer values in a vector, I would suggest you refer to the following link:
https://github.com/fenilgmehta/Fastest-Integer-Sort
It uses radix sort and counting sort for large arrays and merge sort along with insertion sort for small arrays.
According to its benchmarks, this sorting algorithm is considerably faster than C++ std::sort for integral values.
It is 6 times faster than C++ STL std::sort for "int64_t array[10000000]"
Searching
If you want to know whether a particular value is present in the vector or not, use std::binary_search(...).
If you want to know the exact position of an element, use std::lower_bound(...) or std::upper_bound(...).

Related

What are the best sorting algorithms when 'n' is very small?

In the critical path of my program, I need to sort an array (specifically, a C++ std::vector<int64_t>, using the GNU C++ standard library). I am using the standard library provided sorting algorithm (std::sort), which in this case is introsort.
I was curious about how well this algorithm performs, and when doing some research on various sorting algorithms different standard and third party libraries use, almost all of them care about cases where 'n' tends to be the dominant factor.
In my specific case though, 'n' is going to be on the order of 2-20 elements. So the constant factors could actually be dominant. And things like cache effects might be very different when the entire array we are sorting fits into a couple of cache lines.
What are the best sorting algorithms for cases like this where the constant factors likely overwhelm the asymptotic factors? And do there exist any vetted C++ implementations of these algorithms?
Introsort takes your concern into account, and switches to an insertion sort implementation for short sequences.
Since your STL already provides it, you should probably use that.
Insertion sort or selection sort are both typically faster for small arrays (i.e., fewer than 10-20 elements).
Watch https://www.youtube.com/watch?v=FJJTYQYB1JQ
A simple linear insertion sort is really fast. Making a heap first can improve it a bit.
Sadly the talk doesn't compare that against the hardcoded solutions for <= 15 elements.
It's impossible to know the fastest way to do anything without knowing exactly what the "anything" is.
Here is one possible set of assumptions:
We don't have any knowledge of the element structure except that elements are comparable. We have no useful way to group them into bins (for radix sort), we must implement a comparison-based sort, and comparison takes place in an opaque manner.
We have no information about the initial state of the input; any input order is equally likely.
We don't have to care about whether the sort is stable.
The input sequence is a simple array. Accessing elements is constant-time, as is swapping them. Furthermore, we will benchmark the function purely according to the expected number of comparisons - not number of swaps, wall-clock time or anything else.
With that set of assumptions (and possibly some other sets), the best algorithms for small numbers of elements will be hand-crafted sorting networks, tailored to the exact length of the input array. (These always perform the same number of comparisons; it isn't feasible to "short-circuit" these algorithms conditionally because the "conditions" would depend on detecting data that is already partially sorted, which still requires comparisons.)
For a network sorting four elements (in the known-optimal five comparisons), this might look like (I did not test this):
#include <algorithm>  // std::iter_swap

template<class RandomIt, class Compare>
void compare_and_swap(RandomIt first, Compare comp, int x, int y) {
    // Put the smaller of the two elements at index x, the larger at index y.
    if (comp(first[y], first[x])) {
        std::iter_swap(first + x, first + y);
    }
}

// Assume there are exactly four elements available at the `first` iterator.
template<class RandomIt, class Compare>
void network_sort_4(RandomIt first, Compare comp) {
    compare_and_swap(first, comp, 0, 2);
    compare_and_swap(first, comp, 1, 3);
    compare_and_swap(first, comp, 0, 1);
    compare_and_swap(first, comp, 2, 3);
    compare_and_swap(first, comp, 1, 2);
}
In real-world environments, of course, we will have different assumptions. For small numbers of elements, with real data (but still assuming we must do comparison-based sorts) it will be difficult to beat naive implementations of insertion sort (or bubble sort, which is effectively the same thing) that have been compiled with good optimizations. It's really not feasible to reason about these things by hand, considering both the complexity of the hardware level (e.g. the steps it takes to pipeline instructions and then compensate for branch mis-predictions) and the software level (e.g. the relative cost of performing the swap vs. performing the comparison, and the effect that has on the constant-factor analysis of performance).
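For reference, a naive insertion sort of the kind meant here might look like the following sketch:
#include <utility>

// A naive insertion sort. For a handful of elements the inner loop is
// short and friendly to the branch predictor.
template <class RandomIt>
void insertion_sort(RandomIt first, RandomIt last) {
    for (RandomIt i = first; i != last; ++i) {
        auto key = std::move(*i);
        RandomIt j = i;
        // Shift larger elements one slot to the right, then drop `key` in.
        while (j != first && key < *(j - 1)) {
            *j = std::move(*(j - 1));
            --j;
        }
        *j = std::move(key);
    }
}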

Benefit of printing values from an array in ascending order by selecting?

I read the tutorial regarding arranging an array in ascending order and understood the idea: https://www.includehelp.com/cpp-programs/sort-an-array-in-ascending-order.aspx . However, now I'm thinking of another way to perform the operation. I wonder whether the idea below would work.
The method would use a while loop (while the count of remaining numbers in the array is not zero): find the smallest number in the array, print it out, and remove it from the array. Repeat the same process until no numbers remain in the array. That way my numbers will be printed out in ascending order, and the number of elements in the array will decrease in each loop until it reaches zero.
I started learning programming just a few weeks ago and have trouble writing out the code now. However, I'm interested to know whether this method will work. If it cannot, please explain why.
What you've described is a variant of what's normally called a "selection sort". It's pretty well known. It does work, but there are many sorting algorithms that work--and while there are a few sorting algorithms that are generally less efficient, it's still one of the least efficient around.
Selection sort is typically faster than Bubble sort and a few of its variants like Shaker sort. Depending on the precise situation, it can also be faster than insertion sort, though that's pretty unusual. Those three (bubble sort, insertion sort, and selection sort) are the best known of the simple sorting algorithms. Of the three, bubble sort is most often the slowest, and insertion sort most often the fastest. But all three take time proportional to the square of the number of items being sorted, which means they get much slower in a hurry as you try to sort more items. If you have very many items, more advanced algorithms (e.g., Shell-Metzner, Quicksort, heap sort and merge sort) will almost always be substantially faster.
Ignoring execution speed for a moment, selection sort does have one extremely good property: it's easy to understand, easy to code up correctly and easy to prove that it works. If you only need to sort a few items, and need to type in the sorting code yourself (especially if you're in a hurry) it's my experience that it's probably the easiest sorting algorithm to be certain you've implemented correctly.
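As a concrete illustration, the scheme from the question (find the smallest remaining element, print it, remove it) can be written almost literally with the standard library; this sketch uses ints and is not the classic in-place selection sort:
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{4, 1, 3, 2};

    // Repeatedly find the smallest remaining element, print it, and
    // remove it, until the vector is empty.
    while (!v.empty()) {
        auto smallest = std::min_element(v.begin(), v.end());
        std::cout << *smallest << ' ';
        v.erase(smallest);
    }
    std::cout << '\n';  // prints: 1 2 3 4
}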

What is an interesting use-case for C++ nth_element algorithm?

I found many questions and answers about how to use the nth_element algorithm of C++ STL, but so far I did not find any interesting use-case for it. In particular, in what real-life use-case would I use nth_element rather than just sorting the entire vector?
EDIT: Apparently, nth_element can be used to find the median or another percentile of the data, but it seems inefficient as it modifies the vector.
A typical use case is when one wants to avoid sorting the whole set because it is large. If you just need the top n, then nth_element is used to partition the set. The first partition then contains the top n elements.
You want to answer "what is the median?" of the data set. Don't need it sorted, you just want an entry where 50% are greater and 50% are less.
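For example, a sketch of finding the median with nth_element (note that it reorders the vector, as the edit in the question points out):
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<double> data{3.0, 9.0, 1.0, 7.0, 5.0};

    // Partially reorder so the middle element ends up where it would be
    // in a fully sorted vector: everything before it is <= the median,
    // everything after it is >= the median. O(n) on average.
    auto mid = data.begin() + data.size() / 2;
    std::nth_element(data.begin(), mid, data.end());

    std::cout << "median: " << *mid << '\n';  // prints: median: 5
}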

Can we know if a collection is almost sorted without applying a sort algorithm?

In the wikipedia article on sorting algorithms,
http://en.wikipedia.org/wiki/Sorting_algorithm#Summaries_of_popular_sorting_algorithms
under Bubble sort it says: "Bubble sort can also be used efficiently on a list of any length that is nearly sorted (that is, the elements are not significantly out of place)."
So my question is: without sorting the list using a sorting algorithm first, how can one know whether it is nearly sorted or not?
Are you familiar with the general sorting lower bound? You can prove that any comparison-based sorting algorithm must make Ω(n log n) comparisons in the average case. The way you prove this is through an information-theoretic argument. The basic idea is that there are n! possible permutations of the input array, and since the only way you can learn which permutation you got is to make comparisons, you have to make at least lg(n!) comparisons in order to be certain that you know the structure of your input permutation.
I haven't worked out the math on this, but I suspect that you could make similar arguments to show that it's difficult to learn how sorted a particular array is. Essentially, if you don't do a large number of comparisons, then you wouldn't be able to tell apart an array that's mostly sorted from an array that is actually quite far from sorted. As a result, all the algorithms I'm aware of that measure "sortedness" take a decent amount of time to do so.
For example, one measure of the level of "sortedness" in an array is the number of inversions in that array. You can count the number of inversions in an array in time O(n log n) using a divide-and-conquer algorithm based on mergesort, but with that runtime you could just sort the array instead.
Typically, the way that you'd know that your array was mostly sorted was to know something a priori about how it was generated. For example, if you're looking at temperature data gathered from 8AM - 12PM, it's very likely that the data is already mostly sorted (modulo some variance in the quality of the sensor readings). If your data looks at a stock price over time, it's also likely to be mostly sorted unless the company has a really wonky trajectory. Some other algorithms also partially sort arrays; for example, it's not uncommon for quicksort implementations to stop sorting when the size of the array left to sort is small and to follow everything up with a final insertion sort pass, since every element won't be very far from its final position then.
I don't believe there exists any standardized measure of how sorted or random an array is.
You can come up with your own measure - like count the number of adjacent pairs which are out of order (suggested in comment), or count the number of larger numbers which occur before smaller numbers in the array (this is trickier than a simple single pass).
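For example, the first suggested measure (adjacent pairs that are out of order) is a single O(n) pass; a sketch for a vector<int>:
#include <cstddef>
#include <vector>

// Count adjacent pairs that are out of order. 0 means sorted; larger
// values mean messier. This is not the full inversion count, just a
// cheap single-pass estimate.
std::size_t adjacent_disorder(const std::vector<int>& v) {
    std::size_t count = 0;
    for (std::size_t i = 1; i < v.size(); ++i)
        if (v[i] < v[i - 1]) ++count;
    return count;
}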

An efficient sorting algorithm for almost sorted list containing time data?

The name says it all really. I suspect that insertion sort is best, since it's the best sort for mostly-sorted data in general. However, since I know more about the data, there is a chance other sorts are worth looking at. The other relevant pieces of information are:
1) This is time data, which means I presumably could create an effective hash for ordering the data.
2) The data won't all exist at one time. Instead I'll be reading in records which may contain a single vector, or dozens or hundreds of vectors. I want to output all times within a 5 second window. So it's possible that a sort that does the sorting as I insert the data would be a better option.
3) Memory is not a big issue, but CPU speed is, as this may be a bottleneck of the system.
Given these conditions, can anyone suggest an algorithm that may be worth considering in addition to insertion sort? Also, how does one define 'mostly sorted' to decide what is a good sort option? What I mean is: how do I look at my data and decide 'this isn't as sorted as I thought it was, maybe insertion sort is no longer the best option'? Any link to an article that discusses complexity relative to the degree to which data is sorted would be appreciated.
Thanks
Edit:
Thank you everyone for your information. I will be going with an easy insertion or merge sort (whichever I have already pre-written) for now. However, I'll be trying some of the other methods once we're closer to the optimization phase (since they take more effort to implement). I appreciate the help.
You could adopt option (2) you suggested - sort the data while you insert elements.
Use a skip list, sorted ascending by time, to maintain your data.
Once a new entry arrives, check if it is larger than the last element (easy and quick). If it is, simply append it (easy to do in a skip list). The skip list will need to add 2 nodes on average for these cases, and the insertion will be O(1) on average.
If the element is not larger than the last element, add it to the skip list as a standard insert operation, which will be O(log n).
This approach will yield an O(n + k log n) algorithm, where k is the number of elements inserted out of order.
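The standard library has no skip list; as a rough stand-in with the same access pattern, a std::multiset (a balanced tree) inserted with an end() hint gives amortised O(1) appends when the new element is already in order and O(log n) otherwise. A sketch:
#include <set>

// Rough stand-in for the skip-list idea: a balanced ordered container
// with a "probably goes last" insertion hint.
class TimeBuffer {
public:
    void insert(long timestamp) {
        // Amortised O(1) when timestamp belongs at the end, O(log n) otherwise.
        data_.insert(data_.end(), timestamp);
    }
    // data_ can then be traversed in ascending time order.
private:
    std::multiset<long> data_;
};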
I would throw in merge sort: if you implement the natural (run-detecting) version, you get a best case of O(N) and a typical and worst case of O(N log N). With insertion sort you get a worst case of O(N^2) and a best case of O(N).
You can sort a list of size n with k elements out of place in O(n + k lg k) time.
See: http://www.quora.com/How-can-I-quickly-sort-an-array-of-elements-that-is-already-sorted-except-for-a-small-number-of-elements-say-up-to-1-4-of-the-total-whose-positions-are-known/answer/Mark-Gordon-6?share=1
The basic idea is this:
Iterate over the elements of the array, building an increasing subsequence (if the current element is greater than or equal to the last element of the subsequence, append it to the end of the subsequence. Otherwise, discard both the current element and the last element of the subsequence). This takes O(n) time.
You will have discarded no more than 2k elements since k elements are out of place.
Sort the 2k elements that were discarded using an O(k lg k) sorting algorithm like merge sort or heapsort.
You now have two sorted lists. Merge the lists in O(n) time like you would in the merge step of merge sort.
Overall time complexity = O(n + k lg k)
Overall space complexity = O(n)
(this can be modified to run in O(1) space if you can merge in O(1) space, but it's by no means trivial)
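A minimal sketch of that procedure for a std::vector<int> (the function name here is made up):
#include <algorithm>
#include <vector>

// Peel off an increasing subsequence, sort the leftovers, then merge.
std::vector<int> sort_nearly_sorted(const std::vector<int>& input) {
    std::vector<int> runner;     // increasing subsequence
    std::vector<int> discarded;  // at most ~2k elements

    for (int x : input) {
        if (runner.empty() || x >= runner.back()) {
            runner.push_back(x);
        } else {
            // Out of order: discard both the new element and the last
            // element of the subsequence, as described above.
            discarded.push_back(x);
            discarded.push_back(runner.back());
            runner.pop_back();
        }
    }

    std::sort(discarded.begin(), discarded.end());  // O(k lg k)

    std::vector<int> result(input.size());
    std::merge(runner.begin(), runner.end(),
               discarded.begin(), discarded.end(), result.begin());  // O(n)
    return result;
}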
Without fully understanding the problem, Timsort may fit the bill as you're alleging that your data is mostly sorted already.
There are many adaptive sorting algorithms out there that are specifically designed to sort mostly-sorted data. Ignoring the fact that you're storing dates, you might want to look at smoothsort or Cartesian tree sort as algorithms that can sort reasonably sorted data in worst-case O(n log n) time and best-case O(n) time. Smoothsort also has the advantage of requiring only O(1) space, like insertion sort.
Using the fact that everything is a date and therefore can be converted into an integer, you might want to look at binary quicksort (MSD radix sort) using a median-of-three pivot selection. This algorithm has best-case O(n log n) performance, but has a very low constant factor that makes it pretty competitive. Its worst case is O(n log U), where U is the number of bits in each date (probably 64), which isn't too bad.
Hope this helps!
If your OS or C library provides a mergesort function, it is very likely that it already handles the case where the data given is partially ordered (in any direction) running in O(N) time.
Otherwise, you can just copy the mergesort available from your favorite BSD operating system.