Heapsort CPU time - C++

I have implemented heapsort in C++. It does sort the array, but it is giving me higher CPU times than expected. It should take O(n log n) operations, so it should be faster than, at least, bubble sort and insertion sort.
Instead, it is giving me higher CPU times than both bubble sort and insertion sort. For example, for a random array of ints (size 100000), I get the following CPU times (in nanoseconds):
BubbleSort: 1.0957e+11
InsertionSort: 4.46416e+10
MergeSort: 7.2381e+08
HeapSort: 2.04685e+11
This is the code itself:
#include <iostream>
#include <assert.h>
#include <fstream>
#include <vector>
#include <random>
#include <chrono>

using namespace std;

typedef vector<int> intv;
typedef vector<float> flov;
typedef vector<double> douv;

void max_heapify(intv&, int);
void build_max_heap(intv& v);

double hesorti(intv& v)
{
    auto t0 = chrono::high_resolution_clock::now();
    build_max_heap(v);
    int x = 0;
    int i = v.size() - 1;
    while (i > x)
    {
        swap(v[i], v[x]);
        ++x;
        --i;
    }
    auto t1 = chrono::high_resolution_clock::now();
    double T = chrono::duration_cast<chrono::nanoseconds>(t1 - t0).count();
    return T;
}

void max_heapify(intv& v, int i)
{
    int left = i + 1, right = i + 2;
    int largest;
    if (left <= v.size() && v[left] > v[i])
    {
        largest = left;
    }
    else
    {
        largest = i;
    }
    if (right <= v.size() && v[right] > v[largest])
    {
        largest = right;
    }
    if (largest != i)
    {
        swap(v[i], v[largest]);
        max_heapify(v, largest);
    }
}

void build_max_heap(intv& v)
{
    for (int i = v.size() - 2; i >= 0; --i)
    {
        max_heapify(v, i);
    }
}

There's definitely a problem with the implementation of heap sort.
Looking at hesorti, you can see that it is just reversing the elements of the vector after calling build_max_heap. So somehow build_max_heap isn't just making a heap, it's actually reverse sorting the whole array.
max_heapify already has an issue: in the standard array layout of a heap, the children of the node at array index i are not i+1 and i+2, but 2i+1 and 2i+2. build_max_heap calls it on each index, working from the back of the array forwards. What does this actually do?
The first time it is called, on the last two elements (when i=n-2), it simply makes sure the larger comes before the smaller. What happens when it is called after that?
Let's do some mathematical induction. Suppose, for all j>i, after calling max_heapify with index j on an array where the numbers v[j+1] through v[n-1] are already in descending order, that the result is that the numbers v[j] through v[n-1] are sorted in descending order. (We've already seen this is true when i=n-2.)
If v[i] is greater or equal to v[i+1] (and therefore v[i+2] as well), no swaps will occur and when max_heapify returns, we know that the values at i through n-1 are in descending order. What happens in the other case?
Here, largest is set to i+1, and by our assumption, v[i+1] is greater than or equal to v[i+2] (and in fact all v[k] for k>i+1) already, so the test against the 'right' index (i+2) never succeeds. v[i] is swapped with v[i+1], making v[i] the largest of the numbers from v[i] through v[n-1], and then max_heapify is called on the elements from i+1 to the end. By our induction assumption, this will sort those elements in descending order, and so we know that now all the elements from v[i] to v[n-1] are in descending order.
Through the power of induction, then, we've proved that build_max_heap will reverse sort the elements. The way it does this is to percolate each element in turn, working from the back, into its correct position among the reverse-sorted elements that come after it.
Does this look familiar? It's an insertion sort! Except it's sorting in reverse, so when hesorti is called, the sequence of swaps puts it in the correct order.
Insertion sort also has O(n^2) average behaviour, which is why you're getting similar numbers as for bubble sort. It's slower almost certainly because of the convoluted implementation of the insertion step.
TL;DR: Your heap sort is not faster because it isn't actually a heap sort; it's a backwards insertion sort followed by an in-place reversal.
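For reference, here is a minimal sketch of what the intended heap sort could look like, keeping the question's structure but using the standard 2i+1 / 2i+2 child indexing and an explicit heap size (the extraction loop is what replaces the plain reversal in hesorti):

#include <vector>
#include <utility>

// Children of node i live at 2*i + 1 and 2*i + 2; heap_size is passed
// explicitly so the already-sorted tail of the vector is left untouched.
void max_heapify(std::vector<int>& v, int i, int heap_size)
{
    int left = 2 * i + 1, right = 2 * i + 2;
    int largest = i;
    if (left < heap_size && v[left] > v[largest])
        largest = left;
    if (right < heap_size && v[right] > v[largest])
        largest = right;
    if (largest != i)
    {
        std::swap(v[i], v[largest]);
        max_heapify(v, largest, heap_size);
    }
}

void build_max_heap(std::vector<int>& v)
{
    // Leaves are already heaps, so start at the last internal node.
    for (int i = (int)v.size() / 2 - 1; i >= 0; --i)
        max_heapify(v, i, (int)v.size());
}

void heap_sort(std::vector<int>& v)
{
    build_max_heap(v);
    // Repeatedly swap the current maximum to the end, then restore the
    // heap property on the shrunken prefix; this is the step hesorti skips.
    for (int i = (int)v.size() - 1; i > 0; --i)
    {
        std::swap(v[0], v[i]);
        max_heapify(v, 0, i);
    }
}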

Related

Complexity of function with array having even and odds numbers separate

So I have an array which has both even and odd numbers in it.
I have to sort it with the odd numbers first, then the even numbers.
Here is my approach:
int key, val;
int odd = 0;
int index = 0;
for (int i = 0; i < max; i++)
{
    if (arr[i] % 2 != 0)
    {
        int temp = arr[index];
        arr[index] = arr[i];
        arr[i] = temp;
        index++;
        odd++;
    }
}
First I separate the even and odd numbers, then I apply sorting to them.
For sorting I have this code:
for (int i = 1; i < max; i++)
{
    key = arr[i];
    if (i < odd)
    {
        val = 0;
    }
    if (i >= odd)
    {
        val = odd;
    }
    for (int j = i; j > val && key < arr[j - 1]; j--)
    {
        arr[j] = arr[j - 1];
        arr[j - 1] = key;
    }
}
The problem I am facing is that I can't find the complexity of the above sorting code.
Insertion sort is applied to the odd numbers first.
When they are done, I skip that part and start sorting the even numbers.
Here is my approach for sorting if I have a sorted array, e.g.: 3 5 7 9 2 6 10 12
(complexity table image)
How does all this work?
In the first for loop I traverse the array and put all the odd numbers before the even numbers. But it doesn't sort them.
The next for loop contains the insertion sort. What I basically did is first sort only the odd numbers in the array, using the if statement. Then, when i == odd, the nested for loop doesn't go through all the odd numbers; instead it only counts the even numbers and sorts them.
I'm assuming you know the complexity of your partitioning (let's say A) and sorting algorithms (let's call this one B).
You first partition your n-element array, then sort m elements, and finally sort the remaining n - m elements. So the total complexity would be:
A(n) + B(m) + B(n - m)
Depending on what A and B actually are, you should be able to simplify that further (e.g., with an O(n) partition and insertion sort for B, that's O(n) + O(m^2) + O((n-m)^2) = O(n^2) in the worst case).
Edit: Btw, unless the goal of your code is to try and implement partitioning/sorting algorithms, I believe this is much clearer:
#include <algorithm>
#include <iterator>

template <class T>
void partition_and_sort (T & values) {
    // e % 2 != 0 rather than == 1, so negative odd values are detected too
    auto isOdd = [](auto const & e) { return e % 2 != 0; };
    auto middle = std::partition(std::begin(values), std::end(values), isOdd);
    std::sort(std::begin(values), middle);
    std::sort(middle, std::end(values));
}
Complexity in this case is O(n) + 2 * O(n * log(n)) = O(n * log(n)).
Edit 2: I wrongly assumed std::partition keeps the relative order of elements. That's not the case. Fixed the code example.
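For example, a quick usage sketch (the sample values are purely illustrative):

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> values{4, 3, 1, 2, 7, 6};
    partition_and_sort(values);
    for (int v : values)
        std::cout << v << ' ';  // prints: 1 3 7 2 4 6
    std::cout << '\n';
}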

Sort Array By Parity the result is not robust

I am a new programmer and I am trying to sort a vector of integers by their parities - put even numbers in front of odd ones. The order inside of the even or odd numbers themselves doesn't matter. For example, given the input [3,1,2,4], the output can be [2,4,3,1] or [4,2,1,3], etc. Below is my C++ code. Sometimes I get lucky and the vector gets sorted properly, sometimes it doesn't. I inspected the odd and even containers and they look correct, but when I try to combine them together the result is messed up. Can someone please help me debug?
class Solution {
public:
    vector<int> sortArrayByParity(vector<int>& A) {
        unordered_multiset<int> even;
        unordered_multiset<int> odd;
        vector<int> result(A.size());
        for (int C : A)
        {
            if (C % 2 == 0)
                even.insert(C);
            else
                odd.insert(C);
        }
        merge(even.begin(), even.end(), odd.begin(), odd.end(), result.begin());
        return result;
    }
};
If you just need even values before odd ones and not a complete sort, I suggest you use std::partition. You give it two iterators and a predicate; the elements for which the predicate returns true will appear before the others. It works in-place and should be very fast.
Something like this:
std::vector<int> sortArrayByParity(std::vector<int>& A)
{
    std::partition(A.begin(), A.end(), [](int value) { return value % 2 == 0; });
    return A;
}
Because the merge function assumes that the two input ranges are already sorted (it performs the merge step used in merge sort), and your unordered_multisets are not. Instead, you should just use the insert function of vector; note that result must start empty (e.g. declare it without a size and call reserve) so the inserts don't append after A.size() default-constructed zeros:
result.insert(result.end(), even.begin(), even.end());
result.insert(result.end(), odd.begin(), odd.end());
return result;
There is no need to create three separate containers. Since the result vector is allocated with enough space up front, it can directly hold the separated odd and even numbers.
The value of using a vector, which under the covers is an array, is to avoid inserts and moves. Arrays/vectors are fast because they allow immediate access to memory as an offset from the beginning. Take advantage of this!
The code simply keeps an index to the next even slot and the next odd slot, and assigns each element to the correct cell accordingly.
class Solution {
public:
    // As this function does not access any members, it can be made static
    static std::vector<int> sortArrayByParity(std::vector<int>& A) {
        std::vector<int> result(A.size());
        // std::size_t instead of the non-standard uint
        std::size_t even_index = 0;           // next free slot from the front
        std::size_t odd_index = A.size() - 1; // next free slot from the back
        for (int element : A)
        {
            if (element % 2 == 0)
                result[even_index++] = element;
            else
                result[odd_index--] = element;
        }
        return result;
    }
};
Taking advantage of the fact that you don't care about the order among the even or odd numbers themselves, you could use a very simple algorithm to sort the array in-place:
// Assume helper functions is_even() and is_odd() are defined.
void sortArrayByParity(std::vector<int>& A)
{
    int i = 0;             // scanning from beginning
    int j = A.size() - 1;  // scanning from end
    do {
        while (i < j && is_even(A[i])) ++i; // A[i] is an even at the front
        while (i < j && is_odd(A[j])) --j;  // A[j] is an odd at the back
        if (i >= j) break;
        // Now A[i] must be an odd number in front of an even number A[j]
        std::swap(A[i], A[j]);
        ++i;
        --j;
    } while (true);
}
Note that the function above returns void, since the vector is sorted in-place. If you do want to return a sorted copy of input vector, you'd need to define a new vector inside the function, and copy the elements right before every ++i and --j above (and of course do not use std::swap but copy the elements cross-way instead; also, pass A as const std::vector<int>& A).
// Assume helper functions is_even() and is_odd() are defined.
std::vector<int> sortArrayByParity(const std::vector<int>& A)
{
    std::vector<int> B(A.size());
    int i = 0;             // scanning from beginning
    int j = A.size() - 1;  // scanning from end
    do {
        while (i < j && is_even(A[i])) {
            B[i] = A[i];
            ++i;
        }
        while (i < j && is_odd(A[j])) {
            B[j] = A[j];
            --j;
        }
        if (i >= j) {
            if (i == j) B[i] = A[i]; // the element where the scans meet
                                     // still has to be copied over
            break;
        }
        // Now A[i] must be an odd number in front of an even number A[j]
        B[i] = A[j];
        B[j] = A[i];
        ++i;
        --j;
    } while (true);
    return B;
}
In both cases (in-place or out-of-place) above, the function has complexity O(N), N being the number of elements in A, much better than the general O(N log N) for sorting N elements. This is because the problem doesn't actually sort much -- it only separates even from odd. There's therefore no need to invoke a full-fledged sorting algorithm.

Time complexity of using heaps to find Kth largest element

I have some different implementations of the code for finding the Kth largest element in an unsorted array. The three implementations each use a min- or max-heap, but I am having trouble figuring out the runtime complexity for one of them.
Implementation 1:
int findKthLargest(vector<int> vec, int k)
{
    // build min-heap
    make_heap(vec.begin(), vec.end(), greater<int>());
    for (int i = 0; i < k - 1; i++) {
        vec.pop_back();
    }
    return vec.back();
}
Implementation 2:
int findKthLargest(vector<int> vec, int k)
{
    // build max-heap
    make_heap(vec.begin(), vec.end());
    for (int i = 0; i < k - 1; i++) {
        // move max. elem to back (from front)
        pop_heap(vec.begin(), vec.end());
        vec.pop_back();
    }
    return vec.front();
}
Implementation 3:
int findKthLargest(vector<int> vec, int k)
{
    // max-heap prio. q
    priority_queue<int> pq(vec.begin(), vec.end());
    for (int i = 0; i < k - 1; i++) {
        pq.pop();
    }
    return pq.top();
}
From my reading, I am under the assumption that the runtime for the SECOND one is O(n) + O(k log n) = O(n + k log n). This is because building the max-heap is done in O(n), and popping the top element costs O(log n), done k times.
However, here is where I am getting confused. For the FIRST one, with a min-heap, I assume building the heap is O(n). Since it is a min-heap, larger elements are at the back. Then, popping the back element k times will cost k * O(1) = O(k). Hence, the complexity is O(n + k).
And similarly, for the third one, I assume the complexity is also O(n + k log n), with the same reasoning I had for the max-heap.
But some sources still say that this problem cannot be done faster than O(n + k log n) with heaps/pqs! In my FIRST example, though, I think the complexity is O(n + k). Correct me if I'm wrong. Need help, thanks.
Properly implemented, getting the kth largest element from a min-heap is O((n-k) * log(n)). Getting the kth largest element from a max-heap is O(k * log(n)).
Your first implementation is not at all correct. For example, if you wanted to get the largest element from the heap (k == 1), the loop body would never be executed. Your code assumes that the last element in the vector is the largest element on the heap. That is incorrect. For example, consider the heap:
    1
   / \
  3   2
That is a perfectly valid min-heap, which would be represented by the vector [1,3,2]. Your first implementation would not work to get the 1st or 2nd largest element from it.
The second solution looks like it would work.
Your first two solutions end up removing items from vec. Is that what you intended?
The third solution is correct. It takes O(n) to build the heap, and O((k - 1) log n) to remove the (k-1) largest items. And then O(1) to access the largest remaining item.
There is another way to do it, that is potentially faster in practice. The idea is:
build a min-heap of size k from the first k elements in vec
for each following element:
    if the element is larger than the smallest element on the heap:
        remove the smallest element from the heap
        add the new element to the heap
return the element at the top of the heap
This is O(k) to build the initial heap. Then it's O((n-k) log k) in the worst case for the remaining items. The worst case occurs when the initial vector is in ascending order. That doesn't happen very often. In practice, a small percentage of items are added to the heap, so you don't have to do all those removals and insertions.
Some heap implementations have a heap_replace method that combines the two steps of removing the top element and adding the new element. That reduces the complexity by a constant factor. (i.e. rather than an O(log k) removal followed by an O(log k) insertion, you get an constant time replacement of the top element, followed by an O(log k) sifting it down the heap).
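In C++ terms (to match the other snippets here), a minimal sketch of that idea could look like the following; the function name is illustrative, and it assumes 1 <= k <= nums.size():

#include <functional>
#include <queue>
#include <vector>

int kthLargestWithSizeKHeap(const std::vector<int>& nums, int k)
{
    // Min-heap holding the k largest elements seen so far,
    // built from the first k elements in O(k).
    std::priority_queue<int, std::vector<int>, std::greater<int>> heap(
        nums.begin(), nums.begin() + k);
    for (std::size_t i = k; i < nums.size(); ++i)
    {
        if (nums[i] > heap.top()) // larger than the smallest of the k
        {
            heap.pop();           // drop the smallest
            heap.push(nums[i]);   // keep the newcomer
        }
    }
    return heap.top();            // the kth largest overall
}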
This is a heap solution in Java. We remove from the min-heap every element that is less than the kth largest; after that, the kth largest element is at the top of the min-heap.
import java.util.PriorityQueue;

class Solution {
    int kLargest(int[] arr, int k) {
        // natural ordering, i.e. a min-heap
        PriorityQueue<Integer> heap = new PriorityQueue<>((a, b) -> Integer.compare(a, b));
        for (int a : arr) {
            heap.add(a);
            if (heap.size() > k) {
                // remove smallest element in the heap
                heap.poll();
            }
        }
        // return kth largest element
        return heap.poll();
    }
}
The worst case time complexity will be O(N log K), where N is the total number of elements. A single heapify operation is used for each of the initial k insertions; after that, each element costs two operations (one insert and one remove). So the worst case time complexity is O(N log K). With other methods you can bring the average cost of a heap update down to Θ(1).
Quickselect: Θ(N)
If you're looking for a faster solution on average, the quickselect algorithm, which is based on quicksort, is a good option. It provides O(N) average case time complexity and O(1) space complexity. Of course, the worst case time complexity is O(N^2); however, the randomized pivot (used in the following code) makes that scenario very unlikely. The following is quickselect code for finding the kth largest element.
class Solution {
    public int findKthLargest(int[] nums, int k) {
        return quickselect(nums, k);
    }

    private int quickselect(int[] nums, int k) {
        int n = nums.length;
        int start = 0, end = n - 1;
        while (start < end) {
            int ind = partition(nums, start, end);
            if (ind == n - k) {
                return nums[ind];
            } else if (ind < n - k) {
                start = ind + 1;
            } else {
                end = ind - 1;
            }
        }
        return nums[start];
    }

    private int partition(int[] nums, int start, int end) {
        int pivot = start + (int)(Math.random() * (end - start));
        swap(nums, pivot, end);
        int left = start;
        for (int curr = start; curr < end; curr++) {
            if (nums[curr] < nums[end]) {
                swap(nums, left, curr);
                left++;
            }
        }
        swap(nums, left, end);
        return left;
    }

    private void swap(int[] nums, int i, int j) {
        int temp = nums[i];
        nums[i] = nums[j];
        nums[j] = temp;
    }
}

Deleting elements from a vector that meet a condition

I am trying to program the Sieve of Eratosthenes, but I am not sure how to delete elements from the vector I made given a specific condition. Does anyone know how to achieve this? Here is my code:
#include <iostream>
#include <vector>
using namespace std;

int prime(int n);

int prime(int n)
{
    vector<int> primes;
    for (int i = 2; i <= n; i++)
    {
        primes.push_back(i);
        int t = i % (i + 1);
        if (t == 0)
        {
            delete t; // is there a way of deleting the elements from
                      // the primes vector that follow this condition t?
        }
        cout << primes[i] << endl;
    }
}

int main()
{
    int n;
    cout << "Enter a maximum numbers of primes you wish to find: " << endl;
    cin >> n;
    prime(n);
    return 0;
}
int main()
{
int n;
cout << "Enter a maximum numbers of primes you wish to find: " << endl;
cin >> n;
prime(n);
return 0;
}
Your algorithm is wrong: t = i % (i + 1) is just i, which is always != 0 because i is larger than 1.
By the way, if you absolutely want to remove the t-th element, you have to be sure that the vector is not empty, and then you do:
primes.erase(primes.begin() + t);
Even if you fix the algorithm, this approach is inefficient: erasing an element in the middle of a vector means shifting every element after the erased one back by one position.
You don't usually want to delete elements in the middle of a Sieve of Eratosthenes, but when you do want to, you usually want to use the remove/erase idiom:
x.erase(std::remove_if(x.begin(), x.end(), condition), x.end());
std::remove basically just partitions the collection: the elements that don't meet the specified condition come first, followed by objects that may have been used as the source of either a copy or a move, so you can't count on their values, but they are in some stable state, so erasing them works fine.
The condition can be either a function or a functor. It receives (a reference to const) each object, examines it, and determines whether it lives or dies (so to speak).
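For instance, here is a minimal sketch in the sieve's spirit (the function name and condition are illustrative) that erases every proper multiple of a given prime:

#include <algorithm>
#include <vector>

// Erase every element of v that is a multiple of p, keeping p itself.
void erase_multiples(std::vector<int>& v, int p)
{
    v.erase(std::remove_if(v.begin(), v.end(),
                           [p](int x) { return x != p && x % p == 0; }),
            v.end());
}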
Here is the sieve algorithm in C++ (a cleaned-up version of the usual pseudocode). Once you've understood the algorithm you can start working on this:
void primes(std::vector<int>& primes, std::size_t max)
{
    // primesFlag[j] == 1 means "j is still considered prime"
    std::vector<char> primesFlag(max, 1);
    for (std::size_t i = 2; i * i < max; ++i) {
        for (std::size_t j = i * i; j < max; j += i) {
            primesFlag[j] = 0; // j is a multiple of i, hence composite
        }
    }
    primes.clear();
    // optional: primes.reserve(...) with an estimated count
    for (std::size_t j = 2; j < max; ++j) {
        if (primesFlag[j] == 1)
            primes.push_back(j);
    }
}

Trying to understand the Binary Insertion Sort?

Could anyone please tell me how this code sorts the array? I don't get it! And how does this code reduce the complexity of a regular insertion sort?
// Standard binary search helper assumed by the code below: returns the
// index in the sorted range a[low..high] at which 'item' should be inserted.
int binarySearch(int a[], int item, int low, int high)
{
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (item >= a[mid])
            low = mid + 1;
        else
            high = mid - 1;
    }
    return low;
}

// Function to sort an array a[] of size 'n'
void insertionSort(int a[], int n)
{
    int i, loc, j, selected;
    for (i = 1; i < n; ++i)
    {
        j = i - 1;
        selected = a[i];
        // find the location where 'selected' should be inserted
        loc = binarySearch(a, selected, 0, j);
        // move all elements after that location to create space
        while (j >= loc)
        {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = selected;
    }
}
This code uses the fact that the portion of the array from zero, inclusive, to i, exclusive, is already sorted. That's why it can run binarySearch for the insertion location of a[i], rather than searching for it linearly.
This clever trick does not change the asymptotic complexity of the algorithm, because the part where elements from loc to i are moved remains linear. In the worst case (which happens when the array is sorted in reverse) each of the N insertion steps will make i moves, for a total of N(N-1)/2 moves.
The only improvement that this algorithm has over the classic insertion sort is the number of comparisons. If comparisons of objects being sorted are computationally expensive, this algorithm can significantly reduce the constant factor.
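For example, a quick usage sketch (the sample values are purely illustrative):

#include <iostream>

int main()
{
    int a[] = {5, 2, 9, 1, 6};
    insertionSort(a, 5);
    for (int i = 0; i < 5; ++i)
        std::cout << a[i] << ' '; // prints: 1 2 5 6 9
    std::cout << '\n';
}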