C++ timing events console bug?

I am trying to write a class that can time events using QueryPerformanceCounter in C++.
The idea is that you create a timer object, give a function a time as a double, and it counts until that time has passed and then does something. The class will ideally be used for timing things in a game (for example, a timer that fires 60 times per second). When I compile this code, though, it just prints 0s to the console, seemingly forever.

I also noticed a bug I can't explain. If I click on the scroll bar of the console window and hold it, the timer actually counts properly. If I enter 5.0, for example, then quickly click and hold the scroll bar for 5 seconds or longer, the program prints 'Done!!!' when I let go. So why doesn't it count properly when I just let it print the elapsed time to the console? Is there a glitch with printing to the console, or is there something wrong with my timing code? Below is the code:
#include <iostream>
#include <iomanip>
#include "windows.h"
using namespace std;
int main()
{
    setprecision(10); // I tried to see if precision in the stream was the problem, but I don't think it is
    cout << "hello! lets time something..." << endl;

    bool timing = 0;        // a switch to turn the timer on and off
    LARGE_INTEGER T1, T2;   // the time stamps to compare
    LARGE_INTEGER freq;     // the ticks per second, used to convert the difference between the stamp values
    QueryPerformanceFrequency(&freq); // gets the frequency from the computer
    // mil.QuadPart = freq.QuadPart / 1000; // not used

    double ellapsedtime = 0, desiredtime; // enter a value to count up to, in seconds
    // if you entered 4.5 for example, then it should wait for 4.5 seconds
    cout << "enter the amount of time you would like to wait for in seconds (in double format.)!!" << endl;
    cin >> desiredtime;

    QueryPerformanceCounter(&T1); // gets the first stamp value
    timing = 1; // switches the timer on
    while (timing)
    {
        QueryPerformanceCounter(&T2); // gets another stamp value
        ellapsedtime += (T2.QuadPart - T1.QuadPart) / freq.QuadPart; // measures the difference between the two stamp
        // values and divides by the frequency to get how many seconds have elapsed
        cout << ellapsedtime << endl;
        T1.QuadPart = T2.QuadPart; // assigns the value of the second stamp to the first one, so that we can measure the
        // difference between them again and again
        if (ellapsedtime >= desiredtime) // checks if the elapsed time is bigger than or equal to the desired time,
        // and if it is, prints done and turns the timer off
        {
            cout << "done!!!" << endl;
            timing = 0; // breaks the loop
        }
    }
    return 0;
}

You should store in ellapsedtime the number of microseconds elapsed since the first call to QueryPerformanceCounter, and you should not overwrite the first time stamp. The reason your version prints 0 forever is that (T2.QuadPart - T1.QuadPart) / freq.QuadPart is integer division: because you reset T1 on every iteration, the tick difference is almost always far smaller than the frequency, so the quotient truncates to 0. Holding the console scroll bar blocks the cout call, so when it returns the gap between the next two stamps is several seconds, the division finally yields a value of 1 or more, and the loop terminates.
Working code:
// gets another stamp value
QueryPerformanceCounter(&T2);
// measures the difference between the two stamps, in ticks;
// note the plain '=': since T1 is never overwritten, the difference
// from T1 is already the total elapsed time
ellapsedtime = static_cast<double>(T2.QuadPart - T1.QuadPart);
cout << "number of ticks " << ellapsedtime << endl;
// convert ticks to microseconds
ellapsedtime *= 1000000.;
ellapsedtime /= freq.QuadPart;
cout << "number of microseconds " << ellapsedtime << endl;
// checks if the elapsed time is bigger than or equal to the desired time
if (ellapsedtime / 1000000. >= desiredtime) {
    cout << "done!!!" << endl;
    timing = 0; // breaks the loop
}
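Putting it together, here is a minimal corrected sketch of the whole program (my rewrite of the code in the question, converting to seconds in floating point and never overwriting the first stamp), just to show the shape of the fix:

#include <iostream>
#include <windows.h>
using namespace std;

int main()
{
    LARGE_INTEGER freq, T1, T2;
    QueryPerformanceFrequency(&freq); // ticks per second

    cout << "enter the amount of time you would like to wait for in seconds" << endl;
    double desiredtime;
    cin >> desiredtime;

    QueryPerformanceCounter(&T1); // first stamp, taken once and kept

    bool timing = true;
    while (timing)
    {
        QueryPerformanceCounter(&T2);
        // floating-point division, so sub-second intervals don't truncate to 0
        double elapsedseconds = static_cast<double>(T2.QuadPart - T1.QuadPart) / freq.QuadPart;
        if (elapsedseconds >= desiredtime)
        {
            cout << "done!!! elapsed: " << elapsedseconds << " seconds" << endl;
            timing = false;
        }
    }
    return 0;
}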
Be sure to read: Acquiring high-resolution time stamps.

Related

Why is there a big difference in measured elapsed time depending on where it is measured?

My question is about how the elapsed time differs depending on where it is measured.
To find which part of my code accounts for the largest portion of the total elapsed time, I used a clock.
Source: calculating time elapsed in C++
First, I put the clock calls at the start and end of the main function.
(There are actually some variable declarations as well, but I removed them for readability.) I expected this to measure the total elapsed time.
int main(){
    using clock = std::chrono::system_clock;
    using sec = std::chrono::duration<double>;
    const auto before = clock::now();
    ...
    std::cin >> a >> b;
    lgstCommSubStr findingLCSS(a, b, numberofHT, cardi, SubsA);
    const sec duration = clock::now() - before;
    std::cout << "It took " << duration.count() << "s in main function" << std::endl;
    return 0;
}
Second, I put the clock calls in the class findingLCSS. This class finds the longest common substring between two strings; it is the class that actually runs my algorithm, and the work is done in its constructor. Therefore, constructing the object produces the longest-common-substring information. I think this elapsed time is the actual running time of the algorithm.
public:
    lgstCommSubStr(string a, string b, int numHT, int m, vector<int>** SA):
        strA(a), strB(b), hashTsize(numHT), SubstringsA(SA),
        primeNs(numHT), xs(numHT),
        A_hashValues(numHT), B_hashValues(numHT),
        av(numHT), bv(numHT), cardi(m)
    {
        using clock = std::chrono::system_clock;
        using sec = std::chrono::duration<double>;
        const auto before = clock::now();
        ...
        answer ans = binarySearch(a, b, numHT);
        std::cout << ans.i << " " << ans.j << " " << ans.length << "\n";
        const sec duration = clock::now() - before;
        std::cout << "It took " << duration.count() << "s in the class" << std::endl;
    }
The output is as below.
tool coolbox
1 1 3
It took 0.002992s in inner class
It took 4.13945s in main function
It means 'tool' and 'coolbox' have the common substring 'ool'.
But I am confused by the big difference between the two times.
Since the time measured in main is the total time and the time measured in the class is the algorithm's running time, I would have to conclude that the difference is the time spent declaring variables.
That seems strange, because declaring variables should take very little time.
Is there a mistake in how I measure the elapsed time?
Please give me a hint for troubleshooting. Thank you for reading!
Taking a snapshot of the time before std::cin >> a >> b; leads to an inaccurate measurement as you're likely starting the clock before you type in the values for a and b. Generally you want to put your timing as close as possible to the thing you're actually measuring.
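As a sketch of what that looks like in main (reusing the aliases and variable names from the question), start the clock only once the input is in hand:

std::cin >> a >> b;                                    // waiting for keyboard input is no longer timed
const auto before = clock::now();                      // start the clock right before the work being measured
lgstCommSubStr findingLCSS(a, b, numberofHT, cardi, SubsA);
const sec duration = clock::now() - before;
std::cout << "It took " << duration.count() << "s in main function" << std::endl;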

timestamp of time(0) at multiple places in a C++ program

Why is the timestamp of time(0) at multiple places in a C++ program the same value?
Ex:
#include <ctime>
#include <iostream>
using namespace std;

int main(){
    cout << time(0) << endl;
    cout << time(0) << endl;
    cout << time(0) << endl;
    cout << time(0) << endl;
}
All of the values above are the same. Is this because the program is executed at such a fast speed that the time values in the above example are all the same?
Could someone help me out? Thanks!
The resolution of the time() function is one second, which isn't fine-grained enough to produce a different value for each call: the CPU executes all four calls well within the same second.
You might try inserting std::this_thread::sleep_for calls to check what timing resolution fits your needs on the hardware and OS you have at hand.
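For instance, a minimal sketch (my example, not the original poster's code) that sleeps between calls so time(0) has a chance to advance:

#include <chrono>
#include <ctime>
#include <iostream>
#include <thread>

int main() {
    for (int i = 0; i < 4; ++i) {
        std::cout << std::time(0) << std::endl;
        // sleep long enough for the one-second resolution of time() to tick over
        std::this_thread::sleep_for(std::chrono::milliseconds(1500));
    }
}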
The time(0) function returns the current time in seconds.
The code you posted runs entirely within one second, so every call prints the same value.
The following code prints the current time continuously for 3 seconds.
If you run it, you will see a lot of numbers, but if you look closely you can see the value change three times.
#include <iostream>
#include <ctime>

int main()
{
    time_t s = std::time(0); // time_t is int64 on 64-bit Windows 10
    time_t n;
    do {
        n = std::time(0);
        std::cout << n << " ";
    } while ((s + 3) > n); // repeat until 3 seconds have passed
    return 0;
}

Conditional Statement is never triggered within Chrono Program

Abstract:
I wrote a short program dealing with the Chrono library in C++ for experimentation purposes. I want the CPU to count as high as it can within one second, display what it counted to, then repeat the process within an infinite loop.
Current Code:
#include <iostream>
#include <chrono>

int counter()
{
    int num = 0;
    auto startTime = std::chrono::system_clock::now();
    while (true)
    {
        num++;
        auto currentTime = std::chrono::system_clock::now();
        if (std::chrono::duration_cast<std::chrono::seconds>(currentTime - startTime).count() == 1)
            return num;
    }
}

int main()
{
    while (true)
        std::cout << "You've counted to " << counter() << "in one second!";
    return 0;
}
Problem:
The conditional statement in my program:
if (std::chrono::duration_cast<std::chrono::seconds>(currentTime - startTime).count() == 1)
isn't being triggered, because the cast value of currentTime - startTime never seems to equal or rise above one. This can be demonstrated by replacing '==' with '<', which then outputs an incorrect result instead of outputting nothing at all. I don't understand why the condition isn't being met: if this program takes the time from the system clock at one point and then repeatedly compares it to the current time, shouldn't the integer value of the difference equal one at some point?
You're hitting a cout issue, not a chrono issue. The problem is that you're printing with cout, which only flushes its buffer when it feels like it, so nothing appears even though counter() returns.
cerr is effectively unbuffered, so its output shows up immediately. Switch to cerr and add a '\n' and you'll get what you expect.
std::cerr << "You've counted to " << counter() << " in one second!\n";
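If you would rather keep cout, flushing it explicitly works just as well; a small variant of the same fix (my addition, not from the original answer):

while (true)
    std::cout << "You've counted to " << counter() << " in one second!" << std::endl; // std::endl writes '\n' and flushes the buffer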

c++ code execution timer returning 0, need output in milliseconds

I'm trying to figure out how to time the execution of part of my program, but when I use the following code, all I ever get back is 0. I know that can't be right. The code I'm timing recursively implements mergesort of a large array of ints. How do I get the time it takes to execute the program in milliseconds?
//opening input file and storing contents into array
index = inputFileFunction(inputArray);
clock_t time = clock();//start the clock
//this is what needs to be timed
newRecursive.mergeSort(inputArray, 0, index - 1);
//getting the difference
time = clock() - time;
double ms = double(time) / CLOCKS_PER_SEC * 1000;
std::cout << "\nTime took to execute: " << std::setprecision(9) << ms << std::endl;
You can use the chrono library in C++11. Here's how you can modify your code:
#include <chrono>
//...
auto start = std::chrono::steady_clock::now();
// do whatever you're timing
auto end = std::chrono::steady_clock::now();
auto durationMS = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
std::cout << "\nTime took " << durationMS.count() << " ms" << std::endl;
If you're developing on OSX, this blog post from Apple may be useful. It contains code snippets that should give you the timing resolution you need.
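One caveat about the chrono snippet above: duration_cast to whole milliseconds truncates, so a fast mergesort can still print 0. If that matters, a floating-point duration keeps the fractional part; a small sketch assuming the same start and end time points as above:

std::chrono::duration<double, std::milli> elapsed = end - start; // fractional milliseconds, no truncation
std::cout << "\nTime took to execute: " << elapsed.count() << " ms" << std::endl;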

Working with timers

I am trying to create a timer that starts when one value arrives and stops at another value, like this:
int pktctr = (unsigned char)unpkt[0];
if (pktctr == 2)
{
    cout << "timer-begin" << endl;
    // start timer here
}
if (pktctr == 255)
{
    cout << "timer-end" << endl;
    // stop timer here
    // timer displays total time, then resets
}
cout << "displays total time it took from 1 to 255 here" << endl;
Any idea on how to achieve this?
void WINAPI MyUCPackets(char* unpkt, int packetlen, int iR, int arg)
{
    int pktctr = (unsigned char)unpkt[0];
    if (pktctr == 2)
    {
        cout << "timer-begin" << endl;
    }
    if (pktctr == 255)
    {
        cout << "timer-end" << endl;
    }
    return MyUC2Packets(unpkt, packetlen, iR, arg);
}
Every time this function is called, unpkt[0] starts at 2, climbs to a maximum of 255, and then goes back to 1. I want to compute how long each revolution takes.
This happens a lot of times, and I just want to check how many seconds each pass takes, because it won't be the same every time.
Note: This is done with MS Detours 3.0...
I'll assume you're using Windows (from the WINAPI in the code) in which case you can use GetTickCount:
/* or you could have this elsewhere, e.g. as a class member or
 * in global scope (yuck!) As it stands, this isn't thread safe!
 */
static DWORD dwStartTicks = 0;

int pktctr = (unsigned char)unpkt[0];

if (pktctr == 2)
{
    cout << "timer-begin" << endl;
    dwStartTicks = GetTickCount();
}

if (pktctr == 255)
{
    cout << "timer-end" << endl;
    DWORD dwDuration = GetTickCount() - dwStartTicks;
    /* use dwDuration - it's in milliseconds, so divide by 1000 to get
     * seconds if you so desire.
     */
}
Things to watch out for: GetTickCount can overflow (it wraps back to 0 roughly every 49.7 days), so if you start your timer close to the rollover it may finish after the rollover. You can solve this in two ways: either use GetTickCount64, or notice when dwStartTicks > GetTickCount() and, in that case, add the milliseconds from dwStartTicks up to the rollover to the milliseconds from 0 up to the current GetTickCount() result (bonus points if you can do this in a more clever way).
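For illustration, a minimal sketch of the GetTickCount64 route (my addition; it requires Windows Vista or later), which sidesteps the 32-bit wrap entirely:

static ULONGLONG ullStartTicks = 0; // 64-bit tick count, so wrap-around is not a practical concern

if (pktctr == 2)
{
    ullStartTicks = GetTickCount64();
}

if (pktctr == 255)
{
    ULONGLONG ullDuration = GetTickCount64() - ullStartTicks; // milliseconds
    cout << "took " << ullDuration / 1000.0 << " seconds" << endl;
}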
Alternatively, you can use the clock function. You can find out more on that, including an example of how to use it at http://msdn.microsoft.com/en-us/library/4e2ess30(v=vs.71).aspx and it should be fairly easy to adapt and integrate into your code.
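A rough sketch of the clock() variant (my adaptation, not code from that page; it needs <ctime>, and note that on Windows clock() measures elapsed wall-clock time since the process started):

static clock_t startClock = 0;

if (pktctr == 2)
{
    startClock = clock();
}

if (pktctr == 255)
{
    double seconds = double(clock() - startClock) / CLOCKS_PER_SEC; // convert clock ticks to seconds
    cout << "took " << seconds << " seconds" << endl;
}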
Finally, if you're interested in a more "standard" solution, you can use the <chrono> stuff from the C++ standard library. Check out http://en.cppreference.com/w/cpp/chrono for an example.
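For example, a sketch of the <chrono> approach dropped into the same function (my code, not from that page; it needs #include <chrono>, and the static again assumes a single thread):

// steady_clock is monotonic, which makes it the right clock for measuring intervals
static std::chrono::steady_clock::time_point start;

if (pktctr == 2)
{
    start = std::chrono::steady_clock::now();
}

if (pktctr == 255)
{
    std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;
    cout << "revolution took " << elapsed.count() << " seconds" << endl;
}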
If you want to use the Windows API, you can use GetSystemTime(). Declare a SYSTEMTIME struct and pass it to GetSystemTime(), which fills it in:
#include <Windows.h>
...
SYSTEMTIME sysTime;
GetSystemTime(&sysTime);
// use sysTime and create differences
Look here for GetSystemTime(); there is a link to SYSTEMTIME there, too.
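If you go the SYSTEMTIME route, the "create differences" step is easiest via FILETIME, which is a 64-bit count of 100-nanosecond intervals; a sketch with hypothetical sysTimeStart/sysTimeEnd snapshots (my addition):

FILETIME ftStart, ftEnd;
SystemTimeToFileTime(&sysTimeStart, &ftStart); // convert both snapshots to FILETIME
SystemTimeToFileTime(&sysTimeEnd, &ftEnd);

ULARGE_INTEGER s, e;
s.LowPart = ftStart.dwLowDateTime;  s.HighPart = ftStart.dwHighDateTime;
e.LowPart = ftEnd.dwLowDateTime;    e.HighPart = ftEnd.dwHighDateTime;

double seconds = (e.QuadPart - s.QuadPart) / 10000000.0; // FILETIME ticks are 100 ns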
I think boost timer is the best solution for you.
You can check the elapsed time like this:
#include <boost/timer.hpp>

int main() {
    boost::timer t; // start timing
    ...
    double elapsed_time = t.elapsed(); // seconds since the timer was started
    ...
}