Calculating Running Time of Binary Search - C++

The following binary search program reports a running time of 0 milliseconds using GetTickCount(), no matter how large the search item in the given list of values is.
Is there any other way to get the running time for comparison?
Here's the code:
#include <iostream>
#include <windows.h>
using namespace std;

int main(int argc, char **argv)
{
    long int i = 1, max = 10000000;
    long int *data = new long int[max];
    long int initial = 1;
    long int final = max, mid, loc = -5;
    for(i = 1; i <= max; i++)
    {
        data[i] = i;
    }
    int range = final - initial + 1;
    long int search_item = 8800000;
    cout << "Search Item :- " << search_item << "\n";
    cout << "-------------------Binary Search-------------------\n";
    long int start = GetTickCount();
    cout << "Start Time : " << start << "\n";
    while(initial <= final)
    {
        mid = (initial + final) / 2;
        if(data[mid] == search_item)
        {
            loc = mid;
            break;
        }
        if(search_item < data[mid])
            final = mid - 1;
        if(search_item > data[mid])
            initial = mid + 1;
    }
    long int end = GetTickCount();
    cout << "End Time : " << end << "\n";
    cout << "time: " << double(end - start) << " milliseconds \n";
    if(loc == -5)
        cout << " Required number not found " << endl;
    else
        cout << " Required number is found at index " << loc << endl;
    return 0;
}

Your code looks like this:
int main()
{
    // Some code...
    while (some_condition)
    {
        // Some more code...
        // Print timing result
        return 0;
    }
}
That's why your code prints zero time: you do only one iteration of the loop and then you exit the program.

Try to use clock_t from the time.h header:
clock_t START, END;
START = clock();
// ** YOUR CODE GOES HERE **
END = clock();
float clocks = END - START;
cout << "running time: " << clocks / CLOCKS_PER_SEC << " seconds" << endl;
CLOCKS_PER_SEC is a macro defined in the same header; dividing a tick count by it converts the count to seconds.

https://msdn.microsoft.com/en-us/library/windows/desktop/ms724408(v=vs.85).aspx
This article says that the result of GetTickCount will wrap to zero if your system runs for 49.7 days.
You can find how to measure time in C++ here: Easily measure elapsed time.
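For example, here is a minimal, self-contained std::chrono sketch (my example, not the poster's code; it uses steady_clock and rewrites the search with 0-based indexing):
#include <chrono>
#include <iostream>

int main()
{
    const long max = 10000000;
    long* data = new long[max];
    for (long i = 0; i < max; i++)
        data[i] = i + 1;                        // data[0..max-1] = 1..max
    long search_item = 8800000, lo = 0, hi = max - 1, loc = -1;

    auto t0 = std::chrono::steady_clock::now();
    while (lo <= hi)
    {
        long mid = lo + (hi - lo) / 2;
        if (data[mid] == search_item) { loc = mid; break; }
        if (search_item < data[mid]) hi = mid - 1;
        else lo = mid + 1;
    }
    auto t1 = std::chrono::steady_clock::now();

    std::cout << "found at index " << loc << ", time: "
              << std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count()
              << " ns\n";
    delete[] data;
    return 0;
}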

You can use the time.h header and do something like this in your code:
clock_t Start, Stop;
double sec;
Start = clock();
// call your BS function
Stop = clock();
sec = ((double)(Stop - Start)) / CLOCKS_PER_SEC;
and print sec!
I hope this helps you!

The complexity of binary search is log2(N); that's about 23 for N = 10000000 (log2(10000000) ≈ 23.25), so the loop body runs at most around 24 times.
I think that's not enough work to measure on a real-time scale, or even with clock().
In this case you should use unsigned long long __rdtsc(), which returns the number of processor ticks since the last reset. Put one call before and one after your binary search, and move the cout << start; line to after you obtain the end time; otherwise the time spent on output would be included. For example:
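A sketch of that, reusing data, initial, final, and search_item from the question (<intrin.h> is MSVC's header for the intrinsic; GCC/Clang provide it in <x86intrin.h>):
#include <intrin.h>   // for __rdtsc() on MSVC; GCC/Clang: <x86intrin.h>

unsigned long long start = __rdtsc();          // ticks before the search
while (initial <= final)
{
    mid = (initial + final) / 2;
    if (data[mid] == search_item) { loc = mid; break; }
    if (search_item < data[mid]) final = mid - 1;
    else initial = mid + 1;
}
unsigned long long end = __rdtsc();            // ticks after the search

cout << "Start Time : " << start << "\n";      // printed only after 'end' is taken,
cout << "ticks elapsed: " << (end - start) << "\n"; // so output is not timed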
There is also memory corruption around the data array: indices in C run from 0 to size - 1, so there is no data[max] element.
And call delete [] data; before return.

Related

optimize c++ query to calculate Nmin

I have run into a problem where I am trying to optimize my query, which is created to calculate Nmin values for increasing values of N and the error approximation.
I am not from a programming background and have just started to take it up.
The calculation below is inefficient, as it keeps evaluating even after finding Nmin.
To reduce the time I made the changes below to cut down on function calls, with no improvement:
#include <iostream>
#include <cmath>
#include <time.h>
#include <iomanip>
using namespace std;

double f(int);

int main(void)
{
    double err;
    double pi = 4.0 * atan(1.0);
    cout << fixed << setprecision(7);
    clock_t start = clock();
    for (int n = 1;; n++)
    {
        if ((f(n) - pi) >= 1e-6)
        {
            cout << "n_min is " << n << "\t" << f(n) - pi << endl;
        }
        else
        {
            break;
        }
    }
    clock_t stop = clock();
    //double elapsed = (double)(stop - start) * 1000.0 / CLOCKS_PER_SEC; // milliseconds
    cout << "time: " << (stop - start) / double(CLOCKS_PER_SEC) * 1000 << endl; // also milliseconds
    return 0;
}

double f(int n)
{
    double sum = 0;
    for (int i = 1; i <= n; i++)
    {
        sum += 1 / (1 + pow((i - 0.5) / n, 2));
    }
    return (4.0 / n) * sum;
}
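(For reference: f(n) is the midpoint-rule approximation, with n subintervals, of the integral of 4/(1+x^2) from 0 to 1, which equals pi; that is why f(n) - pi is the approximation error being driven below 1e-6.)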
Is there any way to reduce the time and make the second query efficient?
Any help would be greatly appreciated.
I do not see any immediate way of optimizing the algorithm itself. You could, however, reduce the time significantly by not writing to the standard output on every iteration. Also, do not calculate f(n) more than once per iteration.
for (int n = 1;; n++)
{
    double val = f(n);
    double diff = val - pi;
    if (diff < 1e-6)
    {
        cout << "n_min is " << n << "\t" << diff << endl;
        break;
    }
}
Note however that this will yield a higher n_min (increased by 1 compared to the result of your version) since we changed the condition to diff < 1e-6.

Counting time in C++/CLI

I have an app where I must measure the execution time of part of a C++ function and of an ASM function. The problem is that the times I get are weird - either 0 or about 15600, with 0 occurring more often. And sometimes, after a run, the times look right, with values different from 0 and ~15600. Does anybody know why this happens, and how to fix it?
Here is the fragment that measures the time of the C++ part of my app:
auto start = chrono::system_clock::now();
for (int i = 0; i < nThreads; i++)
    xThread[i]->Start(i);
for (int i = 0; i < nThreads; i++)
    xThread[i]->Join();
auto elapsed = chrono::system_clock::now() - start;
long long milliseconds = chrono::duration_cast<std::chrono::microseconds>(elapsed).count();
cppTimer = milliseconds;
What you're seeing there is the resolution of your timer. Apparently, chrono::system_clock ticks every 1/64th of a second, or 15,625 microseconds, on your system.
Since you're in C++/CLI and have the .Net library available, I'd switch to using the Stopwatch class. It will generally have a much higher resolution than 1/64th of a second.
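A minimal sketch of that switch, reusing nThreads, xThread, and cppTimer from the question (assumes a C++/CLI /clr build where the System assembly is referenced):
using namespace System::Diagnostics;

Stopwatch^ sw = Stopwatch::StartNew();    // allocates and starts the timer
for (int i = 0; i < nThreads; i++)
    xThread[i]->Start(i);
for (int i = 0; i < nThreads; i++)
    xThread[i]->Join();
sw->Stop();
cppTimer = sw->ElapsedMilliseconds;       // whole milliseconds; sw->Elapsed gives a TimeSpan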
Looks good to me, except for the cast to std::chrono::microseconds while naming the result milliseconds.
The snippet I have used for many months now is:
class benchmark {
private:
    typedef std::chrono::high_resolution_clock clock;
    typedef std::chrono::milliseconds milliseconds;
    clock::time_point start;
public:
    benchmark(bool startCounting = true) {
        if (startCounting)
            start = clock::now();
    }
    void reset() {
        start = clock::now();
    }
    // elapsed time in seconds
    double elapsed() {
        milliseconds ms = std::chrono::duration_cast<milliseconds>(clock::now() - start);
        double elapsed_secs = ms.count() / 1000.0;
        return elapsed_secs;
    }
};

// usage
benchmark b;
...
cout << "took " << b.elapsed() << " seconds" << endl;

Check how long it takes for conhost to print

I have an array of booleans, each representing a number. I am printing each one that is true with a for loop:
for (unsigned long long l = 0; l < numt; l++)
    if (primes[l]) cout << l << endl;
numt is the size of the array and is over 1000000. The console window takes 30 seconds to print out all the values, but a timer I put in my program says 37 ms. How do I wait in my program for all the values to finish printing on the screen, so I can include that in my time?
Try this:
#include <windows.h>
...

int main() {
    // init code
    double startTime = GetTickCount();
    // your loop
    double timeNeededinSec = (GetTickCount() - startTime) / 1000.0;
}
Just in defense of ctime, because it gives the same result as GetTickCount:
#include <ctime>

int main()
{
    ...
    clock_t start = clock();
    ...
    clock_t end = clock();
    double timeNeededinSec = static_cast<double>(end - start) / CLOCKS_PER_SEC;
    ...
}
Update:
And here is the one with time(), but in this case we can lose some precision (~1 sec) because the result is in whole seconds.
#include <ctime>

int main()
{
    time_t start;
    time_t end;
    ...
    time(&start);
    ...
    time(&end);
    int timeNeededinSec = static_cast<int>(end - start);
}
Combining both of them in a simple example will show you the difference in the results. In my tests I only saw a difference in the digits after the decimal point.
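For instance, here is a minimal sketch of such a side-by-side comparison (my example; Windows-only because of GetTickCount(), and the primes array is filled with a dummy pattern rather than a real sieve):
#include <windows.h>
#include <ctime>
#include <iostream>
using namespace std;

int main()
{
    const unsigned long long numt = 1000000;
    bool* primes = new bool[numt];
    for (unsigned long long l = 0; l < numt; l++)
        primes[l] = (l % 2 == 1);              // dummy data, not a real sieve

    DWORD tick1 = GetTickCount();              // ~10-16 ms resolution
    clock_t clk1 = clock();                    // CLOCKS_PER_SEC resolution
    time_t t1 = time(NULL);                    // 1 s resolution

    for (unsigned long long l = 0; l < numt; l++)
        if (primes[l]) cout << l << '\n';
    cout.flush();                              // hand all output to the console

    DWORD tick2 = GetTickCount();
    clock_t clk2 = clock();
    time_t t2 = time(NULL);

    cerr << "GetTickCount: " << (tick2 - tick1) / 1000.0 << " s\n";
    cerr << "clock:        " << double(clk2 - clk1) / CLOCKS_PER_SEC << " s\n";
    cerr << "time:         " << (long long)(t2 - t1) << " s\n";

    delete[] primes;
    return 0;
}
Note that even the flush only guarantees the program has handed the text over; it cannot wait for the console window to finish painting it.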

How to record total timing for a repetitive iterative C++ function?

I have this function prototype code for factorial calculation by iteration.
How do I include a timer to produce the total time spent looping 100 times over the function?
for (unsigned long i = number; i >= 1; i--) result *= i;
My C++ knowledge is barely basic, so I am not sure if "loop" is correctly mentioned here.
However, I was hinted to use .
Please advise, thank you.
Here's a proper C++11 version of some timing logic:
using namespace std;
using namespace chrono;

auto start_time = system_clock::now();

// your loop goes here:
for (unsigned long i = number; i >= 1; i--) result *= i;

auto end_time = system_clock::now();
auto durationInMicroSeconds = duration_cast<microseconds>(end_time - start_time).count();
cout << "Looping " << number << " times took " << durationInMicroSeconds << " microseconds" << endl;
Just for sport, here's a simple RAII-based variation:
class Timer {
public:
    explicit Timer(const string& name)
        : name_(name)
        , start_time_(system_clock::now()) {
    }
    ~Timer() {
        auto end_time = system_clock::now();
        auto durationInMicroSeconds = duration_cast<microseconds>(end_time - start_time_).count();
        cout << "Timer: " << name_ << " took " << durationInMicroSeconds << " microseconds" << endl;
    }
private:
    string name_;
    system_clock::time_point start_time_;
};
Sure, it's a bit more code, but once you have that, you can reuse it fairly efficiently:
{
    Timer timer("loops");
    // your loop goes here:
    for (unsigned long i = number; i >= 1; i--) result *= i;
}
If you are looking for the time spent executing a number of looping statements in your program code, try making use of gettimeofday() as below:
#include <sys/time.h>

struct timeval tv1, tv2;
gettimeofday(&tv1, NULL);

/* Your loop code to execute here */

gettimeofday(&tv2, NULL);
printf("Time taken in execution = %f seconds\n",
       (double)(tv2.tv_usec - tv1.tv_usec) / 1000000 +
       (double)(tv2.tv_sec - tv1.tv_sec));
This solution leans more towards C, but it can be employed in your case to calculate the time spent.
This is a perfect situation for a lambda. Honestly I don't remember the exact syntax, but in C++11 it should be something like this:
template <typename F>
system_clock::duration timer(F f) {
    auto start = system_clock::now();
    f();
    return system_clock::now() - start;
}
To use it, you wrap your code in a lambda and pass it to the timer. The effect is very similar to #Martin J.'s code.
auto code_time = timer([] () {
    // put any code that you want to time here
});
auto loop_time = timer([&] () {    // capture number and result by reference
    for (unsigned long i = number; i >= 1; i--) {
        result *= i;
    }
});

Not Finding Times of Prime Generation / Limited Generation

This C++ program finds primes using the sieve of Eratosthenes. It is then supposed to store the time this takes, re-run the calculation 100 times, and store the time of each run. There are two things that I need help with in this program:
Firstly, I can only test numbers up to 480 million; I would like to get higher than that.
Secondly, when I time the program it only gets the first timing and then prints zeros as the time. This is not correct, and I don't know what the problem with the clock is. Thanks for the help.
Here is my code.
#include <iostream>
#include <ctime>
#include <algorithm>   // for std::fill_n
using namespace std;

int main ()
{
    int long MAX_NUM = 1000000;
    int long MAX_NUM_ARRAY = MAX_NUM + 1;
    int long sieve_prime = 2;
    int time_store = 0;
    while (time_store <= 100)
    {
        int long sieve_prime_constant = 0;
        int *Num_Array = new int[MAX_NUM_ARRAY];
        std::fill_n(Num_Array, MAX_NUM_ARRAY, 3);
        Num_Array[0] = 1;
        Num_Array[1] = 1;
        clock_t time1, time2;
        time1 = clock();
        while (sieve_prime_constant <= MAX_NUM_ARRAY)
        {
            if (Num_Array[sieve_prime_constant] == 1)
            {
                sieve_prime_constant++;
            }
            else
            {
                Num_Array[sieve_prime_constant] = 0;
                sieve_prime = sieve_prime_constant;
                while (sieve_prime <= MAX_NUM_ARRAY - sieve_prime_constant)
                {
                    sieve_prime = sieve_prime + sieve_prime_constant;
                    Num_Array[sieve_prime] = 1;
                }
                if (sieve_prime_constant <= MAX_NUM_ARRAY)
                {
                    sieve_prime_constant++;
                    sieve_prime = sieve_prime_constant;
                }
            }
        }
        time2 = clock();
        delete[] Num_Array;
        cout << "It took " << (float(time2 - time1) / (CLOCKS_PER_SEC)) << " seconds to execute this loop." << endl;
        cout << "This loop has already been executed " << time_store << " times." << endl;
        float Time_Array[100];
        Time_Array[time_store] = (float(time2 - time1) / (CLOCKS_PER_SEC));
        time_store++;
    }
    return 0;
}
I think the problem is that you don't reset the starting prime:
int long sieve_prime = 2;
Currently that is outside your loop. On second thoughts... That's not the problem. Has this code been edited to incorporate the suggestions in Mats Petersson's answer? I just corrected the bad indentation.
Anyway, for the other part of your question, I suggest you use char instead of int for Num_Array. There is no point using an int to store a boolean. By using char you should be able to store about 4 times as many values in the same amount of memory (assuming your int is 32-bit, which it probably is).
That means you could handle numbers up to almost 2 billion. Since you are using signed long as your type instead of unsigned long, that is approaching the numeric limits for your calculation anyway.
If you want to use even less memory, you could use std::bitset, but be aware that performance could be significantly impaired.
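For what it's worth, here is a minimal sketch of the one-bit-per-number idea using std::vector<bool> (the dynamically sized analogue; std::bitset itself needs its size at compile time):
#include <iostream>
#include <vector>

int main()
{
    const long long N = 1000000000;            // 1 billion flags in ~125 MB
    std::vector<bool> composite(N, false);     // one bit per number
    long long count = 0;
    for (long long i = 2; i < N; i++)
    {
        if (composite[i]) continue;
        count++;                               // i is prime
        for (long long x = 2 * i; x < N; x += i)
            composite[x] = true;
    }
    std::cout << count << " primes below " << N << "\n";
    return 0;
}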
By the way, you should declare your timing array at the top of main:
float Time_Array[100];
Putting it inside the loop just before it is used is a bit whack.
Oh, and just in case you're interested, here is my own implementation of the sieve which, personally, I find easier to read than yours...
std::vector<char> isPrime(N, 1);
for (int i = 2; i < N; i++)
{
    if (!isPrime[i]) continue;
    for (int x = i * 2; x < N; x += i) isPrime[x] = 0;
}
This section of code is supposed to go inside your loop:
int *Num_Array = new int[MAX_NUM_ARRAY];
std::fill_n(Num_Array, MAX_NUM_ARRAY, 3);
Num_Array[0] = 1;
Num_Array[1] = 1;
Edit: and this one needs to be in the loop too:
int long sieve_prime_constant = 0;
When I run this on my machine, it takes 0.2 s per loop. If I add two zeros to MAX_NUM_ARRAY, it takes 4.6 seconds per iteration (up to the 20th loop; I got bored waiting longer than 1.5 minutes).
Agree with the earlier comments. If you really want to juice things up, don't store an array of all possible values (as int or char); keep only the primes you have found. Then test each subsequent number for divisibility by all primes found so far. Now you are limited only by the number of primes you can store. Of course, that's not really the algorithm you wanted to implement any more... but since it uses integer division, it's quite fast. Something like this:
const int MAX_PRIME = 1000;   // how many primes to collect
int myPrimes[MAX_PRIME];
int pCount, ii, jj;
ii = 3;
myPrimes[0] = 2;
for (pCount = 1; pCount < MAX_PRIME; pCount++) {
    for (jj = 1; jj < pCount; jj++) {
        if (ii % myPrimes[jj] == 0) {
            // not a prime
            ii += 2;   // never test even numbers...
            jj = 0;    // restart the loop (jj++ brings it back to 1)
        }
    }
    myPrimes[pCount] = ii;
}
Not really what you were asking for, but maybe it is useful.