Measuring runtime with a counter variable - C++

I am currently working on an assignment which asks us to implement a few different sorts and introduce counter variables to measure the runtime.
My question is that I'm confused about whether certain "operations" should increment my counter. For instance, my textbook says this:
....
So, from what I understand, I should be counting "comparisons" but I do not understand if this applies to if statements, while loops, etc.
For instance, here is my insertion sort.
float insertionSort(int theArray[], int n) {
    float count = 0;
    for (int unsorted = 1; unsorted < n; unsorted++) {
        int nextItem = theArray[unsorted];
        int loc = unsorted;
        while ((loc > 0) && (theArray[loc - 1] > nextItem)) {
            theArray[loc] = theArray[loc - 1]; // shift one element to the right
            loc--;
            count += 4;
        }
        theArray[loc] = nextItem; // insert nextItem once its spot is found
    }
    return count;
}
As you can see, I increment count by 4 for each iteration of the while loop. This really highlights my question, I think.
My reasoning is that we make two comparisons in the conditional of the while loop:
(loc > 0) && (theArray[loc - 1] > nextItem)
Inside the loop body, we then shift one element and decrement loc. From my understanding, this means we have performed 4 "operations" per iteration, so I increment count by 4 for the sake of measuring runtime at the end of execution.
Is this correct? Thank you SO much for any help.

In this case, your number of exchanges is proportional to your number of comparisons. Also, your loc > 0 is what I'd consider an "incidental operation" as stated in that excerpt. So, assuming comparisons and movements are constant-time operations (which they are for integers), you'll get the same trends in your data by simply incrementing your counter once per loop iteration, as in the sketch below.
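For instance, a minimal sketch of that simplification (the same sort as above, counting once per inner-loop iteration):

float insertionSort(int theArray[], int n) {
    float count = 0;
    for (int unsorted = 1; unsorted < n; unsorted++) {
        int nextItem = theArray[unsorted];
        int loc = unsorted;
        while ((loc > 0) && (theArray[loc - 1] > nextItem)) {
            theArray[loc] = theArray[loc - 1];
            loc--;
            count++; // one increment per iteration captures the same growth trend
        }
        theArray[loc] = nextItem;
    }
    return count;
}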

Why does the longest prefix which is also suffix calculation part in the KMP have a time complexity of O(n) and not O(n^2)?

I was going through the code of KMP when I noticed the longest-prefix-which-is-also-suffix (LPS) calculation part of KMP. Here is how it goes:
void computeLPSArray(char* pat, int M, int* lps)
{
    int len = 0;
    lps[0] = 0;
    int i = 1;
    while (i < M) {
        if (pat[i] == pat[len]) {
            len++;
            lps[i] = len;
            i++;
        }
        else {
            if (len != 0) {
                len = lps[len - 1]; // <---- I am referring to this part
            }
            else {
                lps[i] = 0;
                i++;
            }
        }
    }
}
The part where I got confused is the one marked with the comment in the code above. Now, we do know that when code contains a loop like the following
int a[m];
memset(a, 0, sizeof(a));
for (int i = 0; i < m; i++) {
    for (int j = i; j >= 0; j--) {
        a[j] = a[j] * 2; // this inner loop causes the same cells in the
                         // 1-dimensional array to be visited more than once
    }
}
The complexity comes out to be O(m*m).
Similarly if we write the above LPS computation in the following format
while (i < M) {
    if { .... }
    else {
        if (len != 0) {
            // doesn't this part cause the code to go back a few elements
            // in the LPS array, the same way the inner loop in my nested
            // for loop above does? Shouldn't that mean the same cell in
            // the array is visited more than once, and hence the
            // complexity should increase to O(M^2)?
        }
    }
}
It might be that the way I think complexities are calculated is wrong. So please clarify.
The if expressions do not take time that grows with len.
len is an integer; reading it takes O(1) time.
Array indexing is O(1).
Visiting something more than once does not by itself push you into a higher O class. It only matters if the total visit count grows faster than k*n for any constant k.
If you carefully analyze the algorithm for creating the prefix table, you may notice that the total number of rolled-back positions is at most m, so the upper bound on the total number of iterations is 2*m, which yields O(m).
The value of len grows alongside the main iterator i, and whenever there is a mismatch, len drops back toward zero. But this drop cannot exceed the interval passed by the main iterator i since the start of the match.
For example, let's say, the main iterator i started matching with len at position 5 and mismatched at position 20.
So,
LPS[5]=1
LPS[6]=2
...
LPS[19]=15
At the moment of mismatch, len has a value of 15. Hence it may roll back at most 15 positions down to zero, which is exactly the interval passed by i while matching. In other words, on every mismatch, len travels back no more than i has traveled forward since the start of the match.
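As an empirical check (this harness is mine, not part of the answer), you can count the loop iterations of the LPS construction and compare them with the 2*m bound:

#include <cstdio>
#include <cstring>

// Count loop iterations of the LPS construction to check the 2*M bound.
int countLPSIterations(const char* pat, int M, int* lps) {
    int len = 0, i = 1, iterations = 0;
    lps[0] = 0;
    while (i < M) {
        ++iterations;
        if (pat[i] == pat[len]) {
            len++;
            lps[i] = len;
            i++;
        } else if (len != 0) {
            len = lps[len - 1]; // rollback: i does not advance here
        } else {
            lps[i] = 0;
            i++;
        }
    }
    return iterations;
}

int main() {
    const char* pat = "aabaaabaaab";
    int lps[64];
    int M = (int)std::strlen(pat);
    std::printf("M = %d, iterations = %d, bound 2*M = %d\n",
                M, countLPSIterations(pat, M, lps), 2 * M);
}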

Is this Insertion Sort implementation worst case O(n)?

I know that Insertion Sort is supposed to be worst case O(n^2), but I'm wondering why the following implementation isn't O(n).
#include <iostream>
#include <cstdlib>
using namespace std;

int main()
{
    //insertion sort runs from i = 1 to i = n, thus is worst case O(n)
    for (
        int i = 1,
            placeholder = 0,
            A[] = { 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 },
            j = i;
        i <= 10;
        j-- > 0 && A[j - 1] > A[j]
            ? placeholder = A[j], A[j] = A[j - 1], A[j - 1] = placeholder
            : j = ++i
        )
    {
        for (int x = 0; x < 10; x++)
            cout << A[x] << ' ';
        cout << endl;
    }
    system("pause");
}
There is only one for loop involved here and it runs from 1 to n. It seems to me that this would be the definition of O(n). What exactly am I missing here?
Sloppy terminology has led many people to false conclusions. This appears to be an example.
There is only one for loop involved here and it runs from 1 to n.
Yes, there is only one loop, but what is this "it" to which you refer? I really do mean for you to think about it. Should "it" refer to the loop? That would match a fairly common, yet sloppy, use of terminology, but a loop does not evaluate to a value. So a loop cannot actually run from one value to another. The sloppiness can be overlooked in simpler contexts, but not in yours.
Normally, the "it" would really refer to the loop control variable. With a simple loop, like for (int i = 0; i < 10; ++i), there is a one-to-one correspondence between iterations of the loop and values assigned to the control variable (which is i in my example). So there is an equivalence present, allowing one to refer to the loop when one really means the control variable. Saying that a loop runs from x to y really means that the control variable runs from x to y, and that there is one iteration of the loop per value assigned to the control variable. This correspondence fails in your code.
In your loop, the thing that runs from 1 to n is i. However, i is not incremented with each iteration of the loop, so "it runs from 1 to n" is not an accurate assessment of your loop. When i is 1, there are up to 2 iterations. That's not a one-to-one correspondence between iterations and values of i. As i increases, the divergence from one-to-one grows. Each value of i potentially corresponds to i+1 iterations, as j counts down from i to 0. The total number of iterations in the worst case scenario for n entries is the sum of the potential number of iterations for each value of i: 2 + 3 + ⋯ + (n+1) = (n² + 3n)/2. That's O(n²).
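A quick numeric check of that closed form (this snippet is my addition, not part of the answer):

#include <iostream>

int main() {
    // Sum i + 1 iterations for i = 1..n and compare with (n^2 + 3n)/2.
    for (int n : {10, 100, 1000}) {
        long long total = 0;
        for (long long i = 1; i <= n; ++i)
            total += i + 1;
        std::cout << "n = " << n << ": " << total << " vs "
                  << (1LL * n * n + 3LL * n) / 2 << '\n'; // 65 vs 65 for n = 10
    }
}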
Moral of the story: writing compact, cryptic code does not magically change the complexity of the algorithm being implemented. Cryptic code can make the complexity harder to pin down, but the main thing you've accomplished is making your code harder to read.
That's a very odd way to write code. But you do have 2 for loops in the definition. It is not always necessary to have nested loops to get O(n^2); you can get it with recursion as well.
In simple terms, O(n^2) describes how the number of operations performed grows when the input size is n.
The code given is not correct C++ and not even close to pseudocode.
The correct code should be like this:
#include <iostream>
using namespace std;

int main()
{
    int i, j, key;
    int A[] = { 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 };
    //cout << "Array before sorting:" << endl;
    //for (i = 0; i < 10; i++)
    //    cout << A[i] << "\t";
    //cout << endl;
    for (i = 1; i < 10; i++)
    {
        key = A[i];
        for (j = i - 1; j >= 0 && A[j] > key; j--)
        {
            A[j + 1] = A[j];
        }
        A[j + 1] = key;
    }
    //cout << "Array after sorting:" << endl;
    //for (i = 0; i < 10; i++)
    //    cout << A[i] << "\t";
    //cout << endl;
}
See, insertion sort has two loops: the outer loop advances the key variable, and the inner loop compares the elements before the key with the key itself. Therefore the worst-case time complexity is O(n^2) and not O(n), since both loops eventually iterate up to n times in the worst case, i.e. when the array is inverted, as the check below illustrates.
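As a quick check (this snippet is mine, not part of the answer), counting the inner-loop shifts of the code above on the inverted 10-element array prints 45, i.e. n(n-1)/2 for n = 10:

#include <iostream>
using namespace std;

int main()
{
    int A[] = { 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 };
    int shifts = 0;
    for (int i = 1; i < 10; i++)
    {
        int key = A[i];
        int j;
        for (j = i - 1; j >= 0 && A[j] > key; j--)
        {
            A[j + 1] = A[j];
            shifts++; // one shift per inner-loop iteration
        }
        A[j + 1] = key;
    }
    cout << shifts << endl; // 45 = 10 * 9 / 2 for the fully inverted array
}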

What EXACTLY is the "basic operation" in this given algorithm

So I have this algorithm and I am trying to determine the basic operation for an algorithm analysis problem.
Here is the code:
int median(const std::vector<int>& array) {
    int k = (int)array.size();
    int n = k / 2;
    for (int i = 0; i < k; i++) {
        int numsmaller = 0;
        int numequal = 0;
        for (int j = 0; j < k; j++) { // the original's "k++" here was a typo
            if (array[j] < array[i]) {
                numsmaller++;
            } else if (array[j] == array[i]) {
                numequal++;
            }
        } // inner loop
        // test after the counts are complete (the original ran this check
        // inside the inner loop, where the counts are still partial)
        if (numsmaller < n && n <= (numsmaller + numequal)) {
            return array[i];
        }
    } // outer loop
    return -1; // not reached for non-empty input
} // end of function
I am under the impression that the basic operation of this algorithm is the two comparisons in the if/else-if inside the inner loop.
What confuses me is whether the basic operation is the boolean expression itself, which is evaluated on every iteration (checking whether array[j] < array[i] and whether array[j] == array[i]), or the code that executes when either of the if statements is true. Can someone please give me a solid explanation, in terms of algorithm analysis, of what the basic operation of this algorithm would be? Much thanks!
Basic operations may be things like:
Array indexing
Conditionals, e.g. if (x == y)
Assignments, e.g. x = 10
And even basic math operations, e.g. y + 2
Note this is not an exhaustive list by any means. Also note that the worst case scenario of some code requires the maximum number of basic operations to be performed; so in the following code, you'll see three basic operations in the worst case.
if (variable == true) {
    int x = y + 2;
}
...this is because we have just composed several of the list items above. We have to evaluate the first conditional no matter what (one basic op), but the "worst case scenario" is when variable is true, because we then go on to perform an assignment. Of course, to compute the non-obvious value that x takes on via the assignment, we have to perform another basic operation (the arithmetic on y and 2), which gives us a total of three basic operations.
So in your case, the basic operations are the conditionals in the inner loop, the increment (essentially an assignment) of a counter when one of those conditions is met, and the two comparisons plus the addition done in the
if (numsmaller < n && n <= (numsmaller + numequal))
check.
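To make that concrete, here is a hypothetically instrumented copy of the function above (the ops counter and the test inputs are my additions); counting the comparisons as the basic operation shows the roughly k^2 growth of the nested loops:

#include <iostream>
#include <vector>

int median(const std::vector<int>& array, long long& ops) {
    int k = (int)array.size();
    int n = k / 2;
    for (int i = 0; i < k; i++) {
        int numsmaller = 0;
        int numequal = 0;
        for (int j = 0; j < k; j++) {
            ops += 2; // the two element comparisons are the basic operation
            if (array[j] < array[i]) numsmaller++;
            else if (array[j] == array[i]) numequal++;
        }
        if (numsmaller < n && n <= (numsmaller + numequal))
            return array[i];
    }
    return -1; // not reached for non-empty input
}

int main() {
    for (int k : {10, 100, 1000}) {
        std::vector<int> v(k);
        for (int x = 0; x < k; ++x) v[x] = x + 1; // 1, 2, ..., k
        long long ops = 0;
        median(v, ops);
        std::cout << "k = " << k << ", ops = " << ops << '\n'; // grows ~k^2
    }
}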
Hopefully this helps.

Sorting Optimization

I'm currently following an algorithms class and thus decided it would be good practice to implement a few of the sorting algorithms and compare them.
I implemented merge sort and quick sort and then compared their running times, along with std::sort's.
My computer isn't the fastest, but for 1,000,000 elements I get on average over 200 attempts:
std::sort -> 0.620342 seconds
quickSort -> 2.2692 seconds
mergeSort -> 2.19048 seconds
I would like to ask if possible for comments on how to improve and optimize the implementation of my code.
void quickSort(std::vector<int>& nums, int s, int e,
               std::function<bool(int,int)> comparator = defaultComparator) {
    if (s >= e)
        return;
    int pivot;
    int a = s + (rand() % (e - s));
    int b = s + (rand() % (e - s));
    int c = s + (rand() % (e - s));
    // find the median of the 3 random pivot candidates
    int min = std::min(std::min(nums[a], nums[b]), nums[c]);
    int max = std::max(std::max(nums[a], nums[b]), nums[c]);
    if (nums[a] < max && nums[a] > min)
        pivot = a;
    else if (nums[b] < max && nums[b] > min)
        pivot = b;
    else
        pivot = c;
    int temp = nums[s];
    nums[s] = nums[pivot];
    nums[pivot] = temp;
    // partition
    int i = s + 1, j = s + 1;
    for (; j < e; j++) {
        if (comparator(nums[j], nums[s])) {
            temp = nums[i];
            nums[i++] = nums[j];
            nums[j] = temp;
        }
    }
    temp = nums[i - 1];
    nums[i - 1] = nums[s];
    nums[s] = temp;
    // sort left and right of the partition
    quickSort(nums, s, i - 1, comparator);
    quickSort(nums, i, e, comparator);
}
Here s is the index of the first element, e the index of the element after the last. defaultComparator is just the following lambda function:
auto defaultComparator = [](int a, int b){ return a <= b; };
std::vector<int> mergeSort(std::vector<int>& nums, int s, int e,
                           std::function<bool(int,int)> comparator = defaultComparator) {
    std::vector<int> sorted(e - s);
    if (s == e)
        return sorted;
    int mid = (s + e) / 2;
    if (s == mid) {
        sorted[0] = nums[s];
        return sorted;
    }
    std::vector<int> left = mergeSort(nums, s, mid, comparator);  // pass the comparator down
    std::vector<int> right = mergeSort(nums, mid, e, comparator);
    unsigned int i = 0, j = 0;
    unsigned int c = 0;
    while (i < left.size() || j < right.size()) {
        if (i == left.size()) {
            sorted[c++] = right[j++];
        }
        else if (j == right.size()) {
            sorted[c++] = left[i++];
        }
        else {
            if (comparator(left[i], right[j]))
                sorted[c++] = left[i++];
            else
                sorted[c++] = right[j++];
        }
    }
    return sorted;
}
Thank you all
The first thing I see: you're passing a std::function<>, which involves an indirect, type-erased call, one of the most expensive calling strategies. Give it a try with simply a template parameter (which might be a function object) - the result will be direct, inlinable calls.
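A minimal sketch of that suggestion (the pivot choice is simplified to the first element purely to keep the sketch short; the comparator handling is the point):

#include <vector>
#include <utility>

// Same idea as the quickSort above, but the comparator is a template
// parameter: the compiler sees the concrete callable type and can inline
// the comparisons instead of calling through std::function's indirection.
template <typename Compare>
void quickSort(std::vector<int>& nums, int s, int e, Compare comparator) {
    if (s >= e)
        return;
    int i = s + 1;
    for (int j = s + 1; j < e; j++)
        if (comparator(nums[j], nums[s]))   // nums[s] is the pivot
            std::swap(nums[i++], nums[j]);
    std::swap(nums[s], nums[i - 1]);
    quickSort(nums, s, i - 1, comparator);
    quickSort(nums, i, e, comparator);
}

// usage: quickSort(v, 0, (int)v.size(), [](int a, int b) { return a <= b; });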
Second thing: never build the result in a local container (std::vector<int> sorted;) when optimizing and an in-place variant exists. Sort in place. The client should be aware that you are sorting their vector; if they wish, they can make a copy in advance. You take a non-const reference for a reason. [1]
Third, there's a cost associated with rand(), and it's far from negligible. Unless you're sure you need the randomized variant of quicksort (and its benefit of avoiding consistently bad sequences), just use the first element as the pivot. Or the middle one.
Use std::swap() to swap two elements. Chances are it gets translated to xchg (on x86 / x64) or an equivalent, which is hard to beat. Whether the optimizer identifies your intent to swap at these places without being explicit can be verified from the assembly output.
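For instance, the three-statement exchange in the partition loop above could become (a small rewrite of the original code, behavior unchanged):

if (comparator(nums[j], nums[s]))
    std::swap(nums[i++], nums[j]); // swap, then advance the store index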
The way you find the median of three elements is full of conditional moves / branches. It's simply nums[a] + nums[b] + nums[c] - max - min; and fetching nums[...], min and max at the same time could be optimized further.
Avoid i++ when aiming for speed. While most optimizers will usually create good code either way, there's a small chance it's suboptimal. Be explicit when optimizing (++i after the swap), but _only_when_optimizing_.
But the most important one: valgrind/callgrind/kcachegrind. Profile, profile, profile. Only optimize what's really slow.
[1] There's an exception to this rule: const containers that you build from non-const ones. These are usually in-house types shared across multiple threads, so it's better to keep them const and copy when modification is needed. In that case, you'll allocate a new container (const or not) in your function, but you'll probably keep the const one for users' convenience in the API.
For quicksort, use a Hoare-like partition scheme:
http://en.wikipedia.org/wiki/Quicksort#Hoare_partition_scheme
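A minimal sketch of Hoare partitioning, following the linked description (the function name is mine):

#include <vector>
#include <utility>

// Hoare partition: indices walk inward and swap elements that sit on the
// wrong side of the pivot value; returns the split point j.
int hoarePartition(std::vector<int>& nums, int lo, int hi) { // hi is inclusive
    int pivot = nums[lo + (hi - lo) / 2];
    int i = lo - 1;
    int j = hi + 1;
    while (true) {
        do { ++i; } while (nums[i] < pivot);
        do { --j; } while (nums[j] > pivot);
        if (i >= j)
            return j; // caller recurses on [lo, j] and [j + 1, hi]
        std::swap(nums[i], nums[j]);
    }
}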
Median of 3 only needs three if / swap statements (effectively a tiny bubble sort); no need for the min or max checks:
if (nums[a] > nums[b])
    std::swap(nums[a], nums[b]);
if (nums[b] > nums[c])
    std::swap(nums[b], nums[c]);
if (nums[a] > nums[b])
    std::swap(nums[a], nums[b]);
// use nums[b] as the pivot value
For merge sort, use an entry function that creates a working vector once, then passes that vector by reference to the actual merge sort function. For top-down merge sort, the indices determine the start, middle, and end of each sub-vector.
With top-down merge sort, the code can avoid copying data by alternating the direction of the merge depending on the recursion level. This can be done with two mutually recursive functions: in the first, the result ends up in the original vector; in the second, it ends up in the working vector. The first calls the second twice, then merges from the working vector back into the original vector, and vice versa for the second. In the second, if the size is 1, it needs to copy one element from the original vector to the working vector. An alternative to two functions is to pass a boolean selecting the merge direction. A rough sketch of the two-function scheme follows.
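Here is one possible shape for it (my illustration; the function names are assumptions, not from the original post):

#include <vector>

// Merge two sorted runs src[s, mid) and src[mid, e) into dst[s, e).
void mergeRuns(const std::vector<int>& src, std::vector<int>& dst,
               int s, int mid, int e) {
    int i = s, j = mid;
    for (int k = s; k < e; ++k) {
        if (i < mid && (j >= e || src[i] <= src[j]))
            dst[k] = src[i++];
        else
            dst[k] = src[j++];
    }
}

void mergeSortToWork(std::vector<int>& nums, std::vector<int>& work, int s, int e);

// Sorts nums[s, e); the result ends up in nums.
void mergeSortToNums(std::vector<int>& nums, std::vector<int>& work, int s, int e) {
    if (e - s < 2)
        return;                          // 0 or 1 element: already in place
    int mid = s + (e - s) / 2;
    mergeSortToWork(nums, work, s, mid); // both halves end up sorted in work
    mergeSortToWork(nums, work, mid, e);
    mergeRuns(work, nums, s, mid, e);    // merge them back into nums
}

// Sorts nums[s, e); the result ends up in work.
void mergeSortToWork(std::vector<int>& nums, std::vector<int>& work, int s, int e) {
    if (e - s < 2) {
        if (e - s == 1)
            work[s] = nums[s];           // single element: copy it over
        return;
    }
    int mid = s + (e - s) / 2;
    mergeSortToNums(nums, work, s, mid); // both halves end up sorted in nums
    mergeSortToNums(nums, work, mid, e);
    mergeRuns(nums, work, s, mid, e);    // merge them into work
}

// Entry point: the working vector is allocated exactly once.
void mergeSort(std::vector<int>& nums) {
    std::vector<int> work(nums.size());
    mergeSortToNums(nums, work, 0, static_cast<int>(nums.size()));
}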
If using bottom-up merge sort (which will be a bit faster), each pass swaps the roles of the two vectors. The number of passes needed is determined up front; in the case of an odd number of passes, the first pass swaps in place, so that the data ends up in the original vector after all merge passes are done.

How to trace error with counter in do while loop in C++?

I am trying to read an array of numbers, take the smaller number of a pair, store it in a variable, and then compare it with another variable that again comes from two other numbers (like 2, -3).
There is something wrong in the way I implement the do-while loop. I need the counter i to be updated twice per pass, so that I end up with 2 new values from 4 compared numbers. When I hard-code the indices as n-1 and n-2 it works, but with the loop it gets stuck at one value.
int i = 0;
int closestDistance = 0;
int distance = 0;
int nextDistance = 0;
do
{
    distance = std::min(values[n], values[n - i]); //returns the largest
    distance = abs(distance);
    i++;
    nextDistance = std::min(values[n], values[n - i]);
    nextDistance = abs(closestDistance); //make it positive, then compare
    if (distance < nextDistance)
        closestDistance = distance; //+temp;
    else
        closestDistance = nextDistance;
    i++;
} while (i < n);
return closestDistance;
Maybe this:
int i = 0;
int m = values[0]; // running minimum seen so far
do {
    int lMin = std::min(values[i], values[i + 1]);
    i += 2;
    int rMin = std::min(values[i], values[i + 1]);
    m = std::min(m, std::min(lMin, rMin)); // keep the smallest of all groups
    i += 2;
} while (i < n);
return m;
I didn't understand exactly what you meant, but this compares the values in values four at a time to find the minimum. Is that all you needed?
Note that if n is the size of values, this would go out of bounds. n would have to be the size minus 4, leading to odd edge cases.
The issue with your code may be in the call to abs. Are all the values positive? Are you trying to find the smallest absolute value?
Also, note that using i += 2 twice ensures that you do not repeat any values, so you go over 4 unique values per iteration. Your code goes through only 3 in each iteration of the loop.
I hope this clarifies things.
What are you trying to do in the following lines?
nextDistance = std::min(values[n], values[n - i]);
nextDistance = abs(closestDistance); //make it positive, then compare