I have a program that should help me, but I can't handle the timing.
Here is the code:
#include <iostream>
using namespace std;
#include <time.h>
#include <Windows.h>
double diffclock(clock_t clock1) {
    clock_t clock2 = clock();
    double diffticks = clock2 - clock1; // elapsed ticks since clock1
    double diffms = diffticks / (CLOCKS_PER_SEC / 1000);
    return diffms;
}

int main()
{
    int wait = 134;
    clock_t fullbetween = clock();
    for (int i = 0; i < 5; i++) {
        Sleep(wait / 5);
        cout << wait / 5 << " ";
    }
    cout << endl << "finish in " << diffclock(fullbetween) << " ms" << endl;
    return 0;
}
A std::chrono version gives the same result:
#include <iostream>
#include <chrono>
#include <ctime>
#include <thread>
#include <Windows.h>

int main()
{
    int wait = 134;
    auto start = std::chrono::system_clock::now();
    for (int i = 0; i < 5; i++) {
        std::this_thread::sleep_for(std::chrono::milliseconds(wait / 5));
    }
    auto end = std::chrono::system_clock::now();
    auto int_ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
    std::cout << std::endl << "finish in " << int_ms.count() << " ms" << std::endl;
    return 0;
}
134 / 5 = 26, which is fine. But the last cout shows that all the iterations together took about ~170 ms, not ~130 ms as expected. Why is this happening?
Sorry about my English.
The documentation for the Sleep function at https://learn.microsoft.com/en-gb/windows/win32/api/synchapi/nf-synchapi-sleep says
Suspends the execution of the current thread until the time-out interval elapses.
The system clock "ticks" at a constant rate. If dwMilliseconds is less than the resolution of the system clock, the thread may sleep for less than the specified length of time. If dwMilliseconds is greater than one tick but less than two, the wait can be anywhere between one and two ticks, and so on.
Ticks are typically 15.6 ms on Windows systems (64 ticks per second), so a 26 ms request rounds up to 31.2 ms (two ticks).
This is the time after which it is possible for the suspended thread to become active again; there is no guarantee that it will start executing immediately. So your five sleeps become 156 ms plus a little overhead.
The documentation continues with mitigations for this behaviour, and warnings that the mitigations will affect system power usage and so on.
To increase the accuracy of the sleep interval, call the timeGetDevCaps function to determine the supported minimum timer resolution and the timeBeginPeriod function to set the timer resolution to its minimum.
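For illustration, a minimal sketch of that mitigation (assuming MSVC's #pragma comment for linking winmm.lib; error handling omitted):

#include <Windows.h>
#pragma comment(lib, "winmm.lib") // timeGetDevCaps / timeBeginPeriod live here

int main()
{
    TIMECAPS tc;
    timeGetDevCaps(&tc, sizeof(tc)); // query supported timer resolutions
    timeBeginPeriod(tc.wPeriodMin);  // raise the timer resolution, often to 1 ms
    Sleep(26);                       // now rounds to ~1 ms ticks instead of 15.6 ms
    timeEndPeriod(tc.wPeriodMin);    // always pair with timeBeginPeriod
    return 0;
}

Keep in mind the documentation's warning: raising the timer resolution affects the whole system's power usage, so undo it as soon as you are done.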
The documentation for std::this_thread::sleep_for states that the function blocks the execution of the current thread for at least the specified sleep_duration. It may block for longer than sleep_duration due to scheduling or resource contention delays.
So your code will take at least 130 ms (five sleeps of 26 ms each) to execute.
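As an aside (not part of the original answers), one way to keep the rounding from compounding across iterations is to sleep against absolute deadlines with sleep_until instead of relative intervals with sleep_for; a sketch:

#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    using clock = std::chrono::steady_clock;
    const auto step = std::chrono::milliseconds(134 / 5);

    auto start = clock::now();
    auto deadline = start;
    for (int i = 0; i < 5; i++) {
        deadline += step;                        // absolute target for this iteration
        std::this_thread::sleep_until(deadline); // oversleep here shortens the next wait
    }
    auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(clock::now() - start);
    std::cout << "finish in " << elapsed.count() << " ms" << std::endl;
    return 0;
}

Each individual wait can still be late, but lateness in one iteration is absorbed by the next rather than accumulating five times over.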
While I realize this is probably one of many identical questions, I can't seem to figure out how to properly use std::chrono. This is the solution I cobbled together.
#include <stdlib.h>
#include <iostream>
#include <chrono>

typedef std::chrono::high_resolution_clock Time;
typedef std::chrono::milliseconds ms;

float startTime;
float getCurrentTime();

int main () {
    startTime = getCurrentTime();
    std::cout << "Start Time: " << startTime << "\n";
    while(true) {
        std::cout << getCurrentTime() - startTime << "\n";
    }
    return EXIT_SUCCESS;
}

float getCurrentTime() {
    auto now = Time::now();
    return std::chrono::duration_cast<ms>(now.time_since_epoch()).count() / 1000;
}
For some reason, this only ever returns integer values as the difference, which increment upwards at a rate of 1 per second, starting from an arbitrary, often negative, value.
What am I doing wrong? Is there a better way of doing this?
Don't escape the chrono type system until you absolutely have to. That means don't use .count() except for I/O or interacting with legacy API.
This translates to: Don't use float as time_point.
Don't bother with high_resolution_clock. This is always a typedef to either system_clock or steady_clock. For more portable code, choose one of the latter.
#include <iostream>
#include <chrono>
using Time = std::chrono::steady_clock;
using ms = std::chrono::milliseconds;
To start, you're going to need a duration with a representation of float and the units of seconds. This is how you do that:
using float_sec = std::chrono::duration<float>;
Next you need a time_point which uses Time as the clock, and float_sec as its duration:
using float_time_point = std::chrono::time_point<Time, float_sec>;
Now your getCurrentTime() can just return Time::now(). No fuss, no muss:
float_time_point getCurrentTime() {
    return Time::now();
}
Your main, because it has to do the I/O, is responsible for unpacking the chrono types into scalars so that it can print them:
int main () {
    auto startTime = getCurrentTime();
    std::cout << "Start Time: " << startTime.time_since_epoch().count() << "\n";
    while(true) {
        std::cout << (getCurrentTime() - startTime).count() << "\n";
    }
}
This program does a similar thing. Hopefully it shows some of the capabilities (and methodology) of std::chrono:
#include <iostream>
#include <chrono>
#include <thread>

int main()
{
    using namespace std::literals;
    namespace chrono = std::chrono;
    using clock_type = chrono::high_resolution_clock;

    auto start = clock_type::now();

    for(;;) {
        auto first = clock_type::now();

        // note use of literal - this is c++14
        std::this_thread::sleep_for(500ms);

        // c++11 would be this:
        // std::this_thread::sleep_for(chrono::milliseconds(500));

        auto last = clock_type::now();
        auto interval = last - first;
        auto total = last - start;

        // integer cast
        std::cout << "we just slept for " << chrono::duration_cast<chrono::milliseconds>(interval).count() << "ms\n";

        // another integer cast
        std::cout << "also known as " << chrono::duration_cast<chrono::nanoseconds>(interval).count() << "ns\n";

        // floating point cast
        using seconds_fp = chrono::duration<double, chrono::seconds::period>;
        std::cout << "which is " << chrono::duration_cast<seconds_fp>(interval).count() << " seconds\n";

        std::cout << " total time wasted: " << chrono::duration_cast<chrono::milliseconds>(total).count() << "ms\n";
        std::cout << "        in seconds: " << chrono::duration_cast<seconds_fp>(total).count() << "s\n";
        std::cout << std::endl;
    }
    return 0;
}
example output:
we just slept for 503ms
also known as 503144616ns
which is 0.503145 seconds
total time wasted: 503ms
in seconds: 0.503145s
we just slept for 500ms
also known as 500799185ns
which is 0.500799 seconds
total time wasted: 1004ms
in seconds: 1.00405s
we just slept for 505ms
also known as 505114589ns
which is 0.505115 seconds
total time wasted: 1509ms
in seconds: 1.50923s
we just slept for 502ms
also known as 502478275ns
which is 0.502478 seconds
total time wasted: 2011ms
in seconds: 2.01183s
I want to be able to measure time elapsed (for frame time) with my Clock class. (Problem described below the code.)
Clock.h
#pragma once
#include <chrono>  // high_resolution_clock
#include <cstdint> // uint64_t

typedef std::chrono::high_resolution_clock::time_point timePt;

class Clock
{
    timePt currentTime;
    timePt lastTime;
public:
    Clock();
    void update();
    uint64_t deltaTime();
};
Clock.cpp
#include "Clock.h"
using namespace std::chrono;
Clock::Clock()
{
currentTime = high_resolution_clock::now();
lastTime = currentTime;
}
void Clock::update()
{
lastTime = currentTime;
currentTime = high_resolution_clock::now();
}
uint64_t Clock::deltaTime()
{
microseconds delta = duration_cast<microseconds>(currentTime - lastTime);
return delta.count();
}
When I try to use Clock like so
Clock clock;
while(1) {
    clock.update();
    uint64_t dt = clock.deltaTime();
    for (int i = 0; i < 10000; i++)
    {
        // do something to waste time between updates
        int k = i * dt;
    }
    cout << dt << endl; // time elapsed since last update in microseconds
}
For me it prints "0" about 30 times until it finally prints a number, which is always very close to something like "15625" microseconds (15.625 milliseconds).
My question is, why isn't there anything between? I'm wondering whether my implementation is wrong or the precision on high_resolution_clock is acting strange. Any ideas?
EDIT: I am using Code::Blocks with the mingw32 compiler on a Windows 8 computer.
EDIT2:
I tried running the following code that should display high_resolution_clock precision:
template <class Clock>
void display_precision()
{
    typedef std::chrono::duration<double, std::nano> NS;
    NS ns = typename Clock::duration(1);
    std::cout << ns.count() << " ns\n";
}

int main()
{
    display_precision<std::chrono::high_resolution_clock>();
}
For me it prints "1000 ns". So I guess high_resolution_clock has a precision of 1 microsecond, right? Yet in my tests it seems to have a precision of 16 milliseconds?
What system are you using? (I guess it's Windows? Visual Studio was known to have this problem, now fixed in VS 2015; see the bug report.) On some systems high_resolution_clock is defined as just an alias to system_clock, which can have a really low resolution, like the 16 ms you are seeing.
See for example this question.
I have the same problem with msys2 on Windows 10: the delta returned is 0 for most of my subfunctions tested and suddenly returns 15xxx or 24xxx microseconds. I thought there was a problem in my code as all the tutorials do not mention any problem.
Same thing for difftime(finish, start) in time.h, which often returns 0.
I finally replaced all my high_resolution_clock uses with steady_clock, and I get plausible times:
auto t_start = std::chrono::steady_clock::now();
_cvTracker->track(image); // my function to test
std::cout << "Time taken = " << std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::steady_clock::now() - t_start).count() << " microseconds" << std::endl;
// returns the proper value (or at least a plausible value)
whereas this returns mostly 0:
auto t_start = std::chrono::high_resolution_clock::now();
_cvTracker->track(image); // my function to test
std::cout << "Time taken = " << std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::high_resolution_clock::now() - t_start).count() << " microseconds" << std::endl;
// returns 0 most of the time
difftime does not seem to work either (not surprising, since time() only has one-second resolution):
time_t start, finish;
time(&start);
_cvTracker->track(image);
time(&finish);
std::cout << "Time taken= " << difftime(finish, start) << std::endl;
// returns 0 most of the time
I have the following, which stops execution of the program after a certain time.
#include <iostream>
#include <ctime>
using namespace std;

int main( )
{
    time_t timer1;
    time(&timer1);
    time_t timer2;
    double second;
    while(1)
    {
        time(&timer2);
        second = difftime(timer2, timer1);
        // check if the time difference has crossed 3 seconds
        if(second > 3)
        {
            return 0;
        }
    }
    return 0;
}
Would the above program work if the time goes from 23:59 to 00:01?
Is there a better way to do this?
Provided you have C++11, you can have a look at this example:
#include <thread>
#include <chrono>

int main() {
    std::this_thread::sleep_for(std::chrono::seconds(3));
    return 0;
}
Alternatively, I'd go with a threading library of your choice and use its thread sleep function. In most cases it is better to send your thread to sleep than to busy-wait.
time() returns the time since the Epoch (00:00:00 UTC, January 1, 1970), measured in seconds. Thus, the time of day does not matter.
You can use std::chrono::steady_clock in C++11. Check the example in the documentation for the now static method:
using namespace std::chrono;

steady_clock::time_point clock_begin = steady_clock::now();

std::cout << "printing out 1000 stars...\n";
for (int i = 0; i < 1000; ++i) std::cout << "*";
std::cout << std::endl;

steady_clock::time_point clock_end = steady_clock::now();
steady_clock::duration time_span = clock_end - clock_begin;

double nseconds = double(time_span.count()) * steady_clock::period::num / steady_clock::period::den;
std::cout << "It took me " << nseconds << " seconds.";
std::cout << std::endl;
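As a side note, the manual period::num / period::den arithmetic above can be replaced with a floating-point duration; this sketch is equivalent, assuming C++11:

std::chrono::duration<double> time_span = clock_end - clock_begin; // seconds, double rep
std::cout << "It took me " << time_span.count() << " seconds." << std::endl;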
How do I call clock() in C++?
For example, I want to test how much time a linear search takes to find a given element in an array.
#include <iostream>
#include <cstdio>
#include <ctime>

int main() {
    std::clock_t start;
    double duration;

    start = std::clock();

    /* Your algorithm here */

    duration = (std::clock() - start) / (double) CLOCKS_PER_SEC;
    std::cout << "printf: " << duration << '\n';
}
An alternative solution, which is portable and with higher precision, available since C++11, is to use std::chrono.
Here is an example:
#include <iostream>
#include <chrono>

typedef std::chrono::high_resolution_clock Clock;

int main()
{
    auto t1 = Clock::now();
    auto t2 = Clock::now();
    std::cout << "Delta t2-t1: "
              << std::chrono::duration_cast<std::chrono::nanoseconds>(t2 - t1).count()
              << " nanoseconds" << std::endl;
}
Running this on ideone.com gave me:
Delta t2-t1: 282 nanoseconds
clock() returns the number of clock ticks since your program started. There is a related constant, CLOCKS_PER_SEC, which tells you how many clock ticks occur in one second. Thus, you can test any operation like this:
clock_t startTime = clock();
doSomeOperation();
clock_t endTime = clock();
clock_t clockTicksTaken = endTime - startTime;
double timeInSeconds = clockTicksTaken / (double) CLOCKS_PER_SEC;
On Windows at least, the only practically accurate measurement mechanism is QueryPerformanceCounter (QPC). std::chrono is implemented using it (since VS2015, if you use that), but it is not accurate to the same degree as using QueryPerformanceCounter directly. In particular, its claim to report at 1 nanosecond granularity is absolutely not correct. So if you're measuring something that takes a very short amount of time (and your case might just be such a case), then you should use QPC, or the equivalent for your OS. I came up against this when measuring cache latencies, and I jotted down some notes that you might find useful, here:
https://github.com/jarlostensen/notesandcomments/blob/master/stdchronovsqcp.md
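For illustration, a minimal QPC sketch (the operation under test is a placeholder; error handling omitted):

#include <Windows.h>
#include <iostream>

int main()
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq); // counts per second, fixed at boot
    QueryPerformanceCounter(&t0);
    // ... operation under test goes here ...
    QueryPerformanceCounter(&t1);
    double us = double(t1.QuadPart - t0.QuadPart) * 1e6 / double(freq.QuadPart);
    std::cout << us << " microseconds\n";
    return 0;
}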
#include <iostream>
#include <ctime>
#include <cstdlib> // _sleep() --- just a function that waits a certain amount of milliseconds
using namespace std;

int main()
{
    clock_t cl;        // initializing a clock type
    cl = clock();      // starting time of clock
    _sleep(5167);      // insert code here
    cl = clock() - cl; // end point of clock
    _sleep(1000);      // testing to see if it actually stops at the end point
    cout << cl / (double)CLOCKS_PER_SEC << endl; // prints the elapsed time in seconds
    return 0;
}
// outputs "5.17"
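Note that this output implies a Windows C runtime: on POSIX systems, clock() measures CPU time rather than wall time, so time spent sleeping would barely register at all.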
You can measure how long your program works. The following functions help measure the CPU time since the start of the program:
C++: (double)clock() / CLOCKS_PER_SEC, with <ctime> included.
Python: time.clock() returns a floating-point value in seconds.
Java: System.nanoTime() returns a long value in nanoseconds.
My reference: algorithms toolbox week 1 course part of data structures and algorithms specialization by University of California San Diego & National Research University Higher School of Economics
So you can add this line of code after your algorithm:
cout << (double)clock() / CLOCKS_PER_SEC;
Expected output: the CPU time consumed by your program so far, in seconds.
You might be interested in a timer like this:
H : M : S . msec
The code, for Linux:
#include <iostream>
#include <unistd.h>
using namespace std;

void newline();

int main() {
    int msec = 0;
    int sec = 0;
    int min = 0;
    int hr = 0;

    //cout << "Press any key to start:";
    //char start = _gtech();

    for (;;)
    {
        newline();
        if(msec == 1000)
        {
            ++sec;
            msec = 0;
        }
        if(sec == 60)
        {
            ++min;
            sec = 0;
        }
        if(min == 60)
        {
            ++hr;
            min = 0;
        }
        cout << hr << " : " << min << " : " << sec << " . " << msec << endl;
        ++msec;
        usleep(1000); // sleep ~1 ms per iteration (drifts; fine for a rough timer)
    }
    return 0;
}

void newline()
{
    cout << "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n";
}
I want to measure the speed of a function within a loop. But why does my way of doing it always print "0" instead of a high-resolution timing with 9 decimal digits of precision (i.e. in nano/microseconds)?
What's the correct way to do it?
#include <iomanip>
#include <iostream>
#include <time.h>
using namespace std;

int main() {
    for (int i = 0; i < 100; i++) {
        std::clock_t startTime = std::clock();
        // a very fast function in the middle
        cout << "Time: " << setprecision(9) << (clock() - startTime + 0.00) / CLOCKS_PER_SEC << endl;
    }
    return 0;
}
Related Questions:
How to overcome clock()'s low resolution
High Resolution Timer with C++ and linux
Equivalent of Windows’ QueryPerformanceCounter on OSX
Move your time calculation functions outside the for () { ... } statement, then divide the total execution time by the number of operations in your testing loop.
#include <iostream>
#include <ctime>
using namespace std;

#define NUMBER 10000 // the number of operations

// get the difference between start and end time and divide by
// the number of operations
double diffclock(clock_t clock1, clock_t clock2)
{
    double diffticks = clock1 - clock2;
    // total elapsed milliseconds, averaged over NUMBER operations
    double diffms = diffticks * 1000.0 / CLOCKS_PER_SEC / NUMBER;
    return diffms;
}

int main() {
    // start a timer here
    clock_t begin = clock();

    // execute your functions several times (at least 10'000)
    for (int i = 0; i < NUMBER; i++) {
        // a very fast function in the middle
        func();
    }

    // stop timer here
    clock_t end = clock();

    // display results here
    cout << "Execution time: " << diffclock(end, begin) << " ms." << endl;
    return 0;
}
Note: std::clock() lacks sufficient precision for profiling. Reference.
A few pointers:
I would be careful with the optimizer; it might throw away all your code if it decides the code doesn't do anything (see the sketch after this list).
You might want to run the loop 100000 times.
Before doing the total time calculation, store the current time in a variable.
Run your program several times.
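A minimal sketch of one way to keep the optimizer from deleting the timed work: write each result to a volatile sink so the loop has an observable side effect. (The loop body here is a stand-in for the function under test.)

#include <ctime>
#include <iostream>

volatile int sink; // writes to a volatile cannot be optimized away

int main() {
    std::clock_t begin = std::clock();
    for (int i = 0; i < 100000; i++)
        sink = i * i; // stands in for the function under test
    std::clock_t end = std::clock();
    std::cout << 1000.0 * (end - begin) / CLOCKS_PER_SEC << " ms total\n";
    return 0;
}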
If you need higher resolution, the only way to go is platform dependent.
On Windows, check out the QueryPerformanceCounter/QueryPerformanceFrequency APIs.
On Linux, look up clock_gettime(); a minimal sketch follows.
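Assuming POSIX, CLOCK_MONOTONIC gives a steady, high-resolution timestamp suitable for measuring intervals (the operation under test is a placeholder):

#include <time.h>
#include <iostream>

int main() {
    timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    // ... operation under test goes here ...
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    std::cout << ns << " ns\n";
    return 0;
}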
See a question I asked about the same thing: apparently clock()'s resolution is not guaranteed to be so high.
C++ obtaining milliseconds time on Linux -- clock() doesn't seem to work properly
Try the gettimeofday function, or Boost.
If you need platform independence you need to use something like ACE_High_Res_Timer (http://www.dre.vanderbilt.edu/Doxygen/5.6.8/html/ace/a00244.html)
You might want to look into using OpenMP:
#include <omp.h>

int main(int argc, char* argv[])
{
    double start = omp_get_wtime(); // wall-clock time in seconds

    // code to be checked

    double end = omp_get_wtime();
    double result = end - start; // elapsed seconds
    return 0;
}
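Note that omp_get_wtime() requires compiling with OpenMP enabled, typically -fopenmp for GCC/Clang or /openmp for MSVC.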