Converting seconds (clocks) to decimal format - C++

I'm making a stopwatch, and I need to output the seconds like so: "9.743 seconds".
I have the start time, the end time, and the difference measured in clocks, and was planning on getting the decimal by dividing the difference by 1000. However, no matter what I try, it always outputs a whole number. It's probably something small I'm overlooking, but I haven't a clue what.
Here's my code:
#include "Stopwatch.h"
#include <iostream>
#include <iomanip>
using namespace std;
Stopwatch::Stopwatch(){
clock_t startTime = 0;
clock_t endTime = 0;
clock_t elapsedTime = 0;
long miliseconds = 0;
}
void Stopwatch::Start(){
startTime = clock();
}
void Stopwatch::Stop(){
endTime = clock();
}
void Stopwatch::DisplayTimerInfo(){
long formattedSeconds;
setprecision(4);
seconds = (endTime - startTime) / CLOCKS_PER_SEC;
miliseconds = (endTime - startTime) / (CLOCKS_PER_SEC / 1000);
formattedSeconds = miliseconds / 1000;
cout << formattedSeconds << endl;
system("pause");
}
Like I said, the output is an integer. Say it timed 5892 clocks: the output would be "5".

Division between integers yields an integer. Cast one of the operands to a floating-point type (double or float) and assign the result to a variable of floating-point type.
double elapsedSeconds = (endTime - startTime) / (double)(CLOCKS_PER_SEC);
cout << elapsedSeconds << endl;

formattedSeconds = (double) miliseconds / 1000;
This will give you real-number output, provided formattedSeconds is itself declared as a double rather than a long.
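Putting the two suggestions together, a minimal corrected DisplayTimerInfo might look like this (a sketch; it assumes startTime and endTime are clock_t members as in the question, and note that setprecision only takes effect when inserted into a stream, which the original code misses):
void Stopwatch::DisplayTimerInfo(){
    // Cast one operand so the division happens in floating point.
    double elapsedSeconds = (endTime - startTime) / (double)CLOCKS_PER_SEC;
    // setprecision must be streamed to have any effect.
    cout << fixed << setprecision(3) << elapsedSeconds << " seconds" << endl;
}
With 5892 clocks and CLOCKS_PER_SEC == 1000, this prints "5.892 seconds".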

Related

8 seconds for regex_match("", regex(".{40000}"));?

Update 2: Actually it's the regex(".{40000}") construction; that alone already takes that much time. Why?
regex_match("", regex(".{40000}")); takes almost 8 seconds on my PC. Why? Am I doing something wrong? I'm using gcc 4.9.3 from MinGW on Windows 10 on an i7-6700.
Here's a full test program:
#include <iostream>
#include <regex>
#include <ctime>
using namespace std;

int main() {
    clock_t t = clock();
    regex_match("", regex(".{40000}"));
    cout << double(clock() - t) / CLOCKS_PER_SEC << endl;
}
How I compile and run it:
C:\Users\ ... \coding>g++ -std=c++11 test.cpp
C:\Users\ ... \coding>a.exe
7.643
Update: Looks like the time is quadratic in the given number. Doubling it roughly quadruples the time:
10000 0.520 seconds (factor 1.000)
20000 1.922 seconds (factor 3.696)
40000 7.810 seconds (factor 4.063)
80000 31.457 seconds (factor 4.028)
160000 128.904 seconds (factor 4.098)
320000 536.358 seconds (factor 4.161)
The code:
#include <cstdio>
#include <regex>
#include <string>
#include <ctime>
using namespace std;

int main() {
    double prev = 0;
    for (int i = 10000; ; i *= 2) {
        clock_t t0 = clock();
        regex_match("", regex(".{" + to_string(i) + "}"));
        double t = double(clock() - t0) / CLOCKS_PER_SEC;
        printf("%7d %7.3f seconds (factor %.3f)\n", i, t, prev ? t / prev : 1);
        prev = t;
    }
}
Still no idea why. It's a very simple regex and the empty string (though it's the same with short non-empty strings). It should fail instantly. Is the regex engine just weird and bad?
Because it wants to be fast...
It is very possible that it transforms the regex into another representation (a state machine or something else) that is easier and faster to run; C# even allows generating runtime code that represents a regex.
In your case you have probably hit a bug in that transformation that has O(n^2) complexity.
Measuring the construction and matching separately:
clock_t t1 = clock();
regex r(".{40000}");
clock_t t2 = clock();
regex_match("", r);
clock_t t3 = clock();
cout << double(t2 - t1) / CLOCKS_PER_SEC << '\n'
<< double(t3 - t2) / CLOCKS_PER_SEC << endl;
I see:
0.077336
0.000613
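So nearly all the time goes into constructing the regex object, not into matching. If the pattern is fixed, a practical workaround is to construct it once and reuse it. A sketch (the inputs vector here is hypothetical):
#include <regex>
#include <string>
#include <vector>
using namespace std;

int main() {
    // Pay the construction cost once, outside any loop.
    static const regex r(".{40}");
    vector<string> inputs = { "", "abc", string(40, 'x') };
    for (const string& s : inputs)
        regex_match(s, r);   // matching itself is comparatively cheap
}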

Calculating Running Time of Binary Search

The following Binary Search program reports a running time of 0 milliseconds using GetTickCount(), no matter how large the search item in the given list of values is.
Is there any other way to get the running time for comparison?
Here's the code:
#include <iostream>
#include <windows.h>
using namespace std;

int main(int argc, char **argv)
{
    long int i = 1, max = 10000000;
    long int *data = new long int[max];
    long int initial = 1;
    long int final = max, mid, loc = -5;
    for(i = 1; i <= max; i++)
    {
        data[i] = i;
    }
    int range = final - initial + 1;
    long int search_item = 8800000;
    cout << "Search Item :- " << search_item << "\n";
    cout << "-------------------Binary Search-------------------\n";
    long int start = GetTickCount();
    cout << "Start Time : " << start << "\n";
    while(initial <= final)
    {
        mid = (initial + final) / 2;
        if(data[mid] == search_item)
        {
            loc = mid;
            break;
        }
        if(search_item < data[mid])
            final = mid - 1;
        if(search_item > data[mid])
            initial = mid + 1;
    }
    long int end = GetTickCount();
    cout << "End Time : " << end << "\n";
    cout << "time: " << double(end - start) << " milliseconds \n";
    if(loc == -5)
        cout << " Required number not found " << endl;
    else
        cout << " Required number is found at index " << loc << endl;
    return 0;
}
Your code looks like this:
int main()
{
    // Some code...
    while (some_condition)
    {
        // Some more code...
        // Print timing result
        return 0;
    }
}
That's why your code prints zero time: you only do one iteration of the loop and then you exit the program.
Try using clock_t from the time.h header:
clock_t START, END;
START = clock();
// YOUR CODE GOES HERE
END = clock();
float clocks = END - START;
cout << "running time : " << clocks / CLOCKS_PER_SEC << " seconds" << endl;
CLOCKS_PER_SEC is a macro defined for converting clock ticks to seconds.
https://msdn.microsoft.com/en-us/library/windows/desktop/ms724408(v=vs.85).aspx
This article says that the result of GetTickCount will wrap to zero if your system runs for 49.7 days.
You can find out how to measure time in C++ here: Easily measure elapsed time.
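For completeness, a minimal std::chrono sketch in the spirit of that link (steady_clock is the usual choice for intervals because it never jumps backwards):
#include <chrono>
#include <iostream>

int main() {
    auto start = std::chrono::steady_clock::now();
    // ... run the binary search here ...
    auto end = std::chrono::steady_clock::now();
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
    std::cout << "time: " << us.count() << " microseconds\n";
}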
You can use the time.h header
and do something like this in your code:
clock_t Start, Stop;
double sec;
Start = clock();
// call your BS function
Stop = clock();
sec = ((double)(Stop - Start)) / CLOCKS_PER_SEC;
and print sec!
I hope this helps you!
The complexity of binary search is log2(N), which is about 23 for N = 10000000, so only around 23 comparisons are executed.
I think that is not enough to measure on a real-time scale, or even with clock.
In this case you should use unsigned long long __rdtsc(), which returns the number of processor ticks since the last reset. Put it before and after your binary search, and place cout << start; after obtaining the end time; otherwise the time spent on output would be included.
There is also memory corruption around the data array. Indexes in C run from 0 to size - 1, so there is no data[max] element.
And call delete[] data; before return.
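A sketch of that approach (assuming MSVC or MinGW; __rdtsc comes from <intrin.h> on MSVC and <x86intrin.h> with gcc, and tick counts depend on CPU frequency, so they are only useful for relative comparisons):
#include <intrin.h>   // use <x86intrin.h> with gcc
#include <iostream>

int main() {
    unsigned long long start = __rdtsc();
    // ... run the binary search here ...
    unsigned long long end = __rdtsc();
    // Print only after both samples are taken, so output cost is not measured.
    std::cout << "elapsed ticks: " << (end - start) << "\n";
}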

Moving from C# to C++, QueryPerformanceCounter vs clock produce confusing results

In C# I am using the Stopwatch class. I can get the ticks, and milliseconds with no problems.
Now that I am testing code while learning C++, I try to get measurements, but I don't know which results correspond to the C# Stopwatch equivalent. I tried to search, but the information is too broad and I couldn't find a definitive solution.
double PCFreq = 0.0;
__int64 CounterStart = 0;
void StartCounter()
{
    LARGE_INTEGER li;
    if(!QueryPerformanceFrequency(&li))
        std::cout << "QueryPerformanceFrequency failed!\n";
    PCFreq = double(li.QuadPart) / 1000.0;
    QueryPerformanceCounter(&li);
    CounterStart = li.QuadPart;
}

double GetCounter()
{
    LARGE_INTEGER li;
    QueryPerformanceCounter(&li);
    return double(li.QuadPart - CounterStart) / PCFreq;
}
As that gives me two different results, I tend to believe the clock. :)
start = StartCounter()
//some function or for loop
end = GetCounter()
marginPc = end - start;
start = clock();
// ...same
end= clock();
marginClck = end - start;
std::cout<< "Res Pc: " << marginPc << "\r\nRes Clck: " marginClck<< std::endl;
With the clock version I tried both unsigned int and double but the results were still different.
What is the proper method equivalent to the C# Stopwatch?
On Windows, clock() gives you the number of milliseconds since the program started (CLOCKS_PER_SEC is 1000 there). For example, the following program will print a number close to 500:
#include <windows.h>
#include <ctime>
#include <iostream>
using namespace std;

int main()
{
    Sleep(500);
    cout << clock() << endl;
    /*
    POSIX version:
    std::cout << clock() * 1000.0 / CLOCKS_PER_SEC << std::endl;
    CLOCKS_PER_SEC is 1000 in Windows
    */
    return 0;
}
QueryPerformanceCounter is sort of similar to GetTickCount64: both count from the time when the computer started. When you do Stopwatch-type subtraction, the results are very close, but QueryPerformanceCounter is more accurate. The chrono method from #BoPersson's link is also based on QueryPerformanceCounter.
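For example, timing the same Sleep with both APIs shows how close the results are, and that QPC has sub-millisecond resolution (a sketch):
#include <windows.h>
#include <iostream>

int main() {
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    ULONGLONG g0 = GetTickCount64();
    QueryPerformanceCounter(&t0);
    Sleep(500);
    QueryPerformanceCounter(&t1);
    ULONGLONG g1 = GetTickCount64();

    std::cout << "GetTickCount64: " << (g1 - g0) << " ms\n";
    std::cout << "QPC:            "
              << 1000.0 * (t1.QuadPart - t0.QuadPart) / freq.QuadPart << " ms\n";
}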
MSDN recommends using QueryPerformanceCounter (QPC) for high resolution stamps:
Acquiring high-resolution time stamps
The same QPC function is used in managed code:
For managed code, the System.Diagnostics.Stopwatch class uses
QPC as its precise time basis
This function should have reasonable accuracy:
long long getmicroseconds()
{
    LARGE_INTEGER fq, t;
    QueryPerformanceFrequency(&fq);
    QueryPerformanceCounter(&t);
    return 1000000 * t.QuadPart / fq.QuadPart;
}
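Usage is just a subtraction of two samples (the work in the middle is a placeholder):
long long t0 = getmicroseconds();
// ... code to measure ...
long long t1 = getmicroseconds();
std::cout << (t1 - t0) << " microseconds" << std::endl;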
The computer clock is usually accurate to +/-1 second per day.
From the above link:
Duration        Uncertainty
1 microsecond   ± 10 picoseconds (10^-12)
1 millisecond   ± 10 nanoseconds (10^-9)
1 second        ± 10 microseconds
1 hour          ± 60 microseconds
1 day           ± 0.86 seconds
1 week          ± 6.08 seconds
To simplify your other function, you can avoid double results. QuadPart is long long, so use that throughout the functions:
long long PCFreq = 0;
long long CounterStart = 0;

void StartCounter()
{
    LARGE_INTEGER li;
    QueryPerformanceFrequency(&li);
    PCFreq = li.QuadPart;
    QueryPerformanceCounter(&li);
    CounterStart = li.QuadPart;
}

long long GetCounter()
{
    if (PCFreq < 1) return 0;
    LARGE_INTEGER li;
    QueryPerformanceCounter(&li);
    // for milliseconds: 1,000
    return 1000 * (li.QuadPart - CounterStart) / PCFreq;
    // for microseconds: 1,000,000
    // return 1000000 * (li.QuadPart - CounterStart) / PCFreq;
}
Your bug is this: StartCounter stores the raw counter value (CounterStart = li.QuadPart;), but GetCounter returns double(li.QuadPart - CounterStart)/PCFreq.
That is, one value is divided by PCFreq and the other is not, so it's not valid to subtract one from the other.
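With these helpers there is nothing left for you to subtract: StartCounter() records the baseline internally, and GetCounter() already returns the elapsed, scaled time. A sketch of the intended usage:
StartCounter();                    // records CounterStart; returns nothing
// ... some function or for loop ...
double marginPc = GetCounter();    // already the elapsed milliseconds
std::cout << "Res Pc: " << marginPc << std::endl;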

Counting time in C++/CLI

I have an app where I must measure the execution time of part of a C++ function and of an ASM function. I have a problem: the times I get are weird, either 0 or about 15600, and 0 occurs more often. Sometimes, after a run, the times look good, with values different from 0 and ~15600. Does anybody know why this occurs, and how to fix it?
Fragment of the timing code for the C++ part of my app:
auto start = chrono::system_clock::now();
for (int i = 0; i < nThreads; i++)
    xThread[i]->Start(i);
for (int i = 0; i < nThreads; i++)
    xThread[i]->Join();
auto elapsed = chrono::system_clock::now() - start;
long long milliseconds = chrono::duration_cast<std::chrono::microseconds>(elapsed).count();
cppTimer = milliseconds;
What you're seeing there is the resolution of your timer. Apparently, chrono::system_clock ticks every 1/64th of a second, or 15,625 microseconds, on your system.
Since you're in C++/CLI and have the .Net library available, I'd switch to using the Stopwatch class. It will generally have a much higher resolution than 1/64th of a second.
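A minimal C++/CLI sketch of that suggestion, reusing the question's nThreads, xThread, and cppTimer:
using namespace System::Diagnostics;

Stopwatch^ sw = Stopwatch::StartNew();
for (int i = 0; i < nThreads; i++)
    xThread[i]->Start(i);
for (int i = 0; i < nThreads; i++)
    xThread[i]->Join();
sw->Stop();
// ElapsedMilliseconds is integral; Elapsed.TotalMilliseconds gives fractions.
cppTimer = sw->ElapsedMilliseconds;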
Looks good to me, except for casting to std::chrono::microseconds and then naming the result milliseconds.
The snippet I have used for many months now is:
class benchmark {
private:
    typedef std::chrono::high_resolution_clock clock;
    typedef std::chrono::milliseconds milliseconds;
    clock::time_point start;
public:
    benchmark(bool startCounting = true) {
        if(startCounting)
            start = clock::now();
    }
    void reset() {
        start = clock::now();
    }
    // returns elapsed time in seconds
    double elapsed() {
        milliseconds ms = std::chrono::duration_cast<milliseconds>(clock::now() - start);
        double elapsed_secs = ms.count() / 1000.0;
        return elapsed_secs;
    }
};

// usage
benchmark b;
...
cout << "took " << b.elapsed() << " seconds" << endl;

Check how long it takes for conhost to print

I have an array of booleans, each representing a number. I am printing each one that is true with a for loop: for(unsigned long long l = 0; l < numt; l++) if(primes[l]) cout << l << endl; where numt is the size of the array and is over 1000000. The console window takes 30 seconds to print out all the values, but a timer I put in my program says 37 ms. How do I wait for all the values to finish printing on the screen in my program so I can include that in my time?
Try this:
#include <windows.h>
...

int main() {
    // init code
    double startTime = GetTickCount();
    // your loop
    double timeNeededinSec = (GetTickCount() - startTime) / 1000.0;
}
Just in defense of ctime, because it gives the same result as GetTickCount:
#include <ctime>

int main()
{
    ...
    clock_t start = clock();
    ...
    clock_t end = clock();
    double timeNeededinSec = static_cast<double>(end - start) / CLOCKS_PER_SEC;
    ...
}
Update:
And here is one with time(), but in this case we can lose some precision (~1 sec) because the result is in seconds.
#include <ctime>

int main()
{
    time_t start;
    time_t end;
    ...
    time(&start);
    ...
    time(&end);
    int timeNeededinSec = static_cast<int>(end - start);
}
Combining both of them in a simple example will show you the difference in the results. In my tests I saw a difference only in the digits after the decimal point.
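A combined sketch, timing the same (placeholder) print loop with all three at once:
#include <windows.h>
#include <ctime>
#include <iostream>

int main() {
    DWORD g0 = GetTickCount();
    clock_t c0 = clock();
    time_t t0; time(&t0);

    // ... the printing loop to measure ...

    time_t t1; time(&t1);
    clock_t c1 = clock();
    DWORD g1 = GetTickCount();

    std::cout << "GetTickCount: " << (g1 - g0) / 1000.0 << " s\n";
    std::cout << "clock:        " << static_cast<double>(c1 - c0) / CLOCKS_PER_SEC << " s\n";
    std::cout << "time:         " << static_cast<int>(t1 - t0) << " s\n";
}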