Am I doing this correctly? At times my program prints 2000+ for the chrono solution, while it always prints 1000 for CLOCKS_PER_SEC.
What is the value I'm actually calculating? Is it clocks per second?
#include <iostream>
#include <chrono>
#include <thread>
#include <ctime>
#include <cstdint>

std::chrono::time_point<std::chrono::high_resolution_clock> SystemTime()
{
    return std::chrono::high_resolution_clock::now();
}

std::uint32_t TimeDuration(std::chrono::time_point<std::chrono::high_resolution_clock> Time)
{
    return std::chrono::duration_cast<std::chrono::nanoseconds>(SystemTime() - Time).count();
}

int main()
{
    auto Begin = std::chrono::high_resolution_clock::now();
    std::this_thread::sleep_for(std::chrono::milliseconds(1));
    std::cout << (TimeDuration(Begin) / 1000.0) << std::endl;
    std::cout << CLOCKS_PER_SEC;
    return 0;
}
In order to get the correct ticks per second on Linux, you need to use the return value of ::sysconf(_SC_CLK_TCK) (declared in the header unistd.h), rather than the macro CLOCKS_PER_SEC.
The latter is a constant defined in the POSIX standard – it is unrelated to the actual ticks per second of your CPU clock. For example, see the man page for clock:
C89, C99, POSIX.1-2001. POSIX requires that CLOCKS_PER_SEC equals 1000000 independent of the actual resolution.
However, note that even when using the correct ticks-per-second constant, you still won't get the number of actual CPU cycles per second. A "clock tick" is a bookkeeping unit kept by the system clock; there is no standardized definition of how it relates to actual CPU cycles.
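For illustration, here is a minimal sketch (assuming a POSIX/Linux system) that prints both values side by side:

// POSIX sketch: compare the clock() unit with the scheduler tick from sysconf.
#include <ctime>      // CLOCKS_PER_SEC
#include <unistd.h>   // sysconf, _SC_CLK_TCK
#include <iostream>
int main()
{
    std::cout << "CLOCKS_PER_SEC:       " << CLOCKS_PER_SEC << '\n';          // 1000000 on POSIX
    std::cout << "sysconf(_SC_CLK_TCK): " << ::sysconf(_SC_CLK_TCK) << '\n';  // typically 100 on Linux
    return 0;
}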
Boost's library provides a timer class that uses CLOCKS_PER_SEC to calculate the maximum time the timer can measure. Its documentation notes that CLOCKS_PER_SEC is 1000 on Windows and 1000000 on Mac OS X and Linux, so the resolution is finer on the latter systems.
I'm in the midst of writing some timing code for a part of a program that has a low latency requirement.
Looking at what's available in the std::chrono library, I'm finding it a bit difficult to write timing code that is portable.
std::chrono::high_resolution_clock
std::chrono::steady_clock
std::chrono::system_clock
The system_clock is useless because it's not steady; the remaining two clocks are problematic:
The high_resolution_clock isn't necessarily steady on all platforms.
The steady_clock does not necessarily support fine-grained time periods (e.g. nanoseconds).
For my purposes having a steady clock is the most important requirement and I can sort of get by with microsecond granularity.
My question is if one wanted to time code that could be running on different h/w architectures and OSes - what would be the best option?
Use steady_clock. On all implementations its precision is nanoseconds. You can check this yourself for your platform by printing out steady_clock::period::num and steady_clock::period::den.
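For instance, this tiny check (just the inspection described above) prints the tick period as a fraction of a second:

#include <chrono>
#include <iostream>
int main()
{
    using clock = std::chrono::steady_clock;
    // period is a std::ratio; num/den is the length of one tick in seconds.
    std::cout << clock::period::num << '/' << clock::period::den << " seconds per tick\n";
}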
Now that doesn't mean that it will actually measure nanosecond precision. But platforms do their best. For me, two consecutive calls to steady_clock::now() (with optimizations enabled) will report times on the order of 100ns apart.
#include "chrono_io.h"
#include <chrono>
#include <iostream>
int
main()
{
using namespace std::chrono;
using namespace date;
auto t0 = steady_clock::now();
auto t1 = steady_clock::now();
auto t2 = steady_clock::now();
auto t3 = steady_clock::now();
std::cout << t1-t0 << '\n';
std::cout << t2-t1 << '\n';
std::cout << t3-t2 << '\n';
}
The above example uses this free, open-source, header-only library only for convenience of formatting the duration. You can format things yourself (I'm lazy). For me this just output:
287ns
116ns
75ns
YMMV.
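If you'd rather not pull in chrono_io.h, a plain hand-formatted variant along the same lines would be:

#include <chrono>
#include <iostream>
int main()
{
    using namespace std::chrono;
    auto t0 = steady_clock::now();
    auto t1 = steady_clock::now();
    // Convert the difference to a nanosecond count and append the unit by hand.
    std::cout << duration_cast<nanoseconds>(t1 - t0).count() << "ns\n";
}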
I have a subroutine that should be executed once every millisecond, and I wanted to check that this is indeed what's happening. But I get different execution times from different functions. I've been trying to understand the differences between these functions (there are several SO questions on the subject), but I cannot get my head around the results I got. Please forgive the global variables etc.; this is legacy code, written in C and ported to C++, which I'm trying to improve, so it is messy.
< header stuff>

std::chrono::high_resolution_clock::time_point tchrono;
int64_t tgettime;
float tclock;

void myfunction(){
    <all kinds of calculations>

    using ms = std::chrono::duration<double, std::milli>;
    std::chrono::high_resolution_clock::time_point tmpchrono = std::chrono::high_resolution_clock::now();
    printf("chrono %f (ms): \n", std::chrono::duration_cast<ms>(tmpchrono - tchrono).count());
    tchrono = tmpchrono;

    struct timeval tv;
    gettimeofday(&tv, NULL);
    int64_t tmpgettime = (int64_t) tv.tv_sec * 1000000 + tv.tv_usec;
    printf("gettimeofday: %lld\n", tmpgettime - tgettime);
    tgettime = tmpgettime;

    float tmpclock = 1000.0f * ((float)clock()) / CLOCKS_PER_SEC;
    printf("clock %f (ms)\n", tmpclock - tclock);
    tclock = tmpclock;

    <more stuff>
}
and the output is:
chrono 0.998352 (ms):
gettimeofday: 999
clock 0.544922 (ms)
Why the difference? I'd expect clock to be at least as large as the others, or not?
std::chrono::high_resolution_clock::now() is not even working.
std::chrono::milliseconds represents milliseconds as integers, so when you convert to that representation, times of higher granularity are truncated to whole milliseconds. Then you assign the result to a duration that has a double representation and a seconds ratio, and finally you pass the duration object, instead of a double, to printf. All of those steps are wrong.
To get the milliseconds as a floating point, do this:
using ms = std::chrono::duration<double, std::milli>;
std::chrono::duration_cast<ms>(tmpchrono-tchrono).count();
clock() returns the processor time the process has used. That will depend on how much time the OS scheduler has given to your process. Unless the process is the only one on the system, this will be different from the elapsed wall clock time.
gettimeofday() returns the wall clock time.
What's the difference between using high_resolution_clock::now() and gettimeofday()?
Both measure the wall clock time. The internal representation of both is implementation defined. The granularity of both is implementation defined as well.
gettimeofday is part of the POSIX standard and therefore available in all operating systems that comply with that standard (POSIX.1-2001). gettimeofday is not monotonic, i.e. it's affected by things like setting the time (by ntpd or by an administrator) and changes in daylight saving time.
high_resolution_clock represents the clock with the smallest tick period provided by the implementation. It may be an alias of std::chrono::system_clock or std::chrono::steady_clock, or a third, independent clock.
high_resolution_clock is part of the C++ standard library and therefore available with all compilers that comply with that standard (C++11). high_resolution_clock may or may not be monotonic; this can be tested with high_resolution_clock::is_steady.
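As a quick illustrative check, you can print is_steady and also see whether high_resolution_clock is merely an alias of one of the other clocks:

#include <chrono>
#include <iostream>
#include <type_traits>
int main()
{
    using namespace std::chrono;
    std::cout << std::boolalpha
              << "high_resolution_clock::is_steady: " << high_resolution_clock::is_steady << '\n'
              << "alias of system_clock: " << std::is_same<high_resolution_clock, system_clock>::value << '\n'
              << "alias of steady_clock: " << std::is_same<high_resolution_clock, steady_clock>::value << '\n';
}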
The simplest way to use std::chrono to measure execution time is this:
#include <chrono>
#include <iostream>
int main()
{
    using namespace std::chrono;
    auto start = high_resolution_clock::now();
    /*
     * multiple iterations of the code you want to benchmark -
     * make sure the optimizer doesn't eliminate the whole code
     */
    auto end = high_resolution_clock::now();
    std::cout << "Execution time (us): " << duration_cast<microseconds>(end - start).count() << std::endl;
}
What is the most accurate way to calculate the elapsed time in C++? I used clock() to calculate this, but I have a feeling it's wrong, as I get 0 ms 90% of the time and 15 ms the rest of the time, which makes little sense to me.
Even if it is really small and very close to 0 ms, is there a more accurate method that will give me the exact value rather than a rounded-down 0 ms?
clock_t tic = clock();
/*
main programme body
*/
clock_t toc = clock();
double time = (double)(toc-tic);
cout << "\nTime taken: " << (1000*(time/CLOCKS_PER_SEC)) << " (ms)";
Thanks
With C++11, I'd use
#include <chrono>
auto t0 = std::chrono::high_resolution_clock::now();
...
auto t1 = std::chrono::high_resolution_clock::now();
auto dt = 1.e-9*std::chrono::duration_cast<std::chrono::nanoseconds>(t1-t0).count();
for the elapsed time in seconds.
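Equivalently, a small variation lets chrono produce the double directly instead of scaling by 1.e-9:

#include <chrono>
#include <iostream>
int main()
{
    auto t0 = std::chrono::high_resolution_clock::now();
    // ... code being timed ...
    auto t1 = std::chrono::high_resolution_clock::now();
    // duration<double> counts seconds in a double, so no manual scaling is needed.
    double dt = std::chrono::duration<double>(t1 - t0).count();
    std::cout << dt << " s\n";
}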
For pre-2011 C++, you can use QueryPerformanceCounter() on Windows or gettimeofday() on Linux/OS X. For example (this is actually C, not C++):
#include <sys/time.h>   // gettimeofday

timeval oldCount, newCount;
gettimeofday(&oldCount, NULL);
...
gettimeofday(&newCount, NULL);
double t = double(newCount.tv_sec - oldCount.tv_sec)
         + double(newCount.tv_usec - oldCount.tv_usec) * 1.e-6;
for the elapsed time in seconds.
std::chrono::high_resolution_clock is as portable a solution as you can get; however, it may not actually be higher resolution than what you already saw.
Pretty much any function which returns system time is going to jump forward whenever the system time is updated by the timer interrupt handler, and 10ms is a typical interval for that on modern OSes.
For better-precision timing, you need to access either a CPU cycle counter or a high precision event timer (HPET). Library vendors ought to use these for high_resolution_clock, but not all do, so you may need OS-specific APIs.
(Note: Visual C++ in particular has implemented high_resolution_clock with the low-resolution system clock, and there are likely other implementations that do the same.)
On Win32, for example, the QueryPerformanceFrequency() and QueryPerformanceCounter() functions are a good choice. For a wrapper that conforms to the C++11 timer interface and uses these functions, see
Mateusz's answer to "Difference between std::system_clock and std::steady_clock?"
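For reference, a bare-bones sketch of the QueryPerformanceCounter approach (an illustration of the idea, not the wrapper referenced above):

// Win32 only: elapsed time via the performance counter.
#include <windows.h>
#include <iostream>
int main()
{
    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);   // counts per second
    QueryPerformanceCounter(&start);
    // ... code being timed ...
    QueryPerformanceCounter(&stop);
    double seconds = double(stop.QuadPart - start.QuadPart) / double(freq.QuadPart);
    std::cout << seconds << " s\n";
}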
If you have C++11 available, use the chrono library.
Also, different platforms provide access to high precision clocks.
For example, on Linux use clock_gettime; on Windows, use the high-performance counter API.
Example:
C++11:
auto start=high_resolution_clock::now();
... // do stuff
auto diff=duration_cast<milliseconds>(high_resolution_clock::now()-start);
clog << diff.count() << "ms elapsed" << endl;
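For completeness, a minimal clock_gettime sketch for Linux (assuming a POSIX system; CLOCK_MONOTONIC is the steady option):

// Linux/POSIX: monotonic elapsed time with clock_gettime.
#include <ctime>
#include <iostream>
int main()
{
    struct timespec start, stop;
    clock_gettime(CLOCK_MONOTONIC, &start);
    // ... code being timed ...
    clock_gettime(CLOCK_MONOTONIC, &stop);
    double ms = (stop.tv_sec - start.tv_sec) * 1e3 + (stop.tv_nsec - start.tv_nsec) * 1e-6;
    std::cout << ms << " ms elapsed" << std::endl;
}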
Is there a cross-platform solution to get seconds since the epoch? On Windows I use
long long NativesGetTimeInSeconds()
{
return time (NULL);
}
But how do I get it on Linux?
You're already using it: std::time(0) (don't forget to #include <ctime>). However, whether std::time actually returns the time since epoch isn't specified in the standard (C11, referenced by the C++ standard):
7.27.2.4 The time function
Synopsis
#include <time.h>
time_t time(time_t *timer);
Description
The time function determines the current calendar time. The encoding of the value is unspecified. [emphasis mine]
For C++, C++11 and later provide time_since_epoch. However, before C++20 the epoch of std::chrono::system_clock was unspecified and therefore possibly non-portable in previous standards.
Still, on Linux the std::chrono::system_clock will usually use Unix Time even in C++11, C++14 and C++17, so you can use the following code:
#include <chrono>

// make the decltype slightly easier to the eye
using seconds_t = std::chrono::seconds;

// return the same type as seconds.count() below does.
// note: C++14 makes this a lot easier.
decltype(seconds_t().count()) get_seconds_since_epoch()
{
    // get the current time
    const auto now = std::chrono::system_clock::now();
    // transform the time into a duration since the epoch
    const auto epoch = now.time_since_epoch();
    // cast the duration into seconds
    const auto seconds = std::chrono::duration_cast<std::chrono::seconds>(epoch);
    // return the number of seconds
    return seconds.count();
}
In C:
time(NULL);
In C++:
std::time(0);
Note that the return value of time is time_t, not long long.
The native Linux function for getting the time is gettimeofday() (there are some other flavours too), but that gets you the time in seconds and microseconds, which is more than you need, so I would suggest that you continue to use time(). (Of course, time() is implemented by calling gettimeofday() somewhere down the line, but I don't see the benefit of having two different pieces of code that do exactly the same thing; and if you wanted that on Windows, you'd be using GetSystemTime() or some such. I'm not sure that's the right name; it's been a while since I programmed on Windows.)
The Simple, Portable, and Proper Approach
#include <ctime>

long CurrentTimeInSeconds()
{
    return (long)std::time(0); // returns UTC in seconds
}
I am trying to add a timed delay to a C++ program and was wondering if anyone has suggestions on what I can try or information I can look at.
I wish I had more details on how I am implementing this timed delay, but until I know more about how to add one, I am not sure how I should even attempt to implement it.
An updated answer for C++11:
Use the sleep_for and sleep_until functions:
#include <chrono>
#include <thread>

int main() {
    using namespace std::this_thread; // sleep_for, sleep_until
    using namespace std::chrono;      // nanoseconds, system_clock, seconds

    sleep_for(nanoseconds(10));
    sleep_until(system_clock::now() + seconds(1));
}
With these functions there's no longer a need to continually add new functions for better resolution: sleep, usleep, nanosleep, etc. sleep_for and sleep_until are template functions that can accept values of any resolution via chrono types: hours, seconds, femtoseconds, etc.
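For instance (purely illustrative), you can pass a fractional-millisecond duration and let the implementation round it as it must:

#include <chrono>
#include <thread>
int main() {
    // A duration counted in double milliseconds; sleep_for converts it internally.
    std::this_thread::sleep_for(std::chrono::duration<double, std::milli>(2.5));
}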
In C++14 you can further simplify the code with the literal suffixes for nanoseconds and seconds:
#include <chrono>
#include <thread>

int main() {
    using namespace std::this_thread;     // sleep_for, sleep_until
    using namespace std::chrono_literals; // ns, us, ms, s, h, etc.
    using std::chrono::system_clock;

    sleep_for(10ns);
    sleep_until(system_clock::now() + 1s);
}
Note that the actual duration of a sleep depends on the implementation: You can ask to sleep for 10 nanoseconds, but an implementation might end up sleeping for a millisecond instead, if that's the shortest it can do.
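A quick way to see this on your own machine (purely illustrative) is to time the sleep with steady_clock:

#include <chrono>
#include <iostream>
#include <thread>
int main() {
    using namespace std::chrono;
    auto t0 = steady_clock::now();
    std::this_thread::sleep_for(nanoseconds(10));   // request 10 ns
    auto t1 = steady_clock::now();
    // The measured value is typically far larger than 10 ns.
    std::cout << duration_cast<nanoseconds>(t1 - t0).count() << " ns actually slept\n";
}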
In Win32:
#include <windows.h>
Sleep(milliseconds);
In Unix:
#include <unistd.h>
unsigned int microsecond = 1000000;
usleep(3 * microsecond); // sleeps for 3 seconds
sleep() only takes a whole number of seconds, which is often too long.
#include <unistd.h>
usleep(3000000);
This will also sleep for three seconds. You can refine the numbers a little more though.
Do you want something as simple as:
#include <unistd.h>
sleep(3); // sleeps for 3 seconds
Note that this does not guarantee that the amount of time the thread sleeps will be anywhere close to the sleep period, it only guarantees that the amount of time before the thread continues execution will be at least the desired amount. The actual delay will vary depending on circumstances (especially load on the machine in question) and may be orders of magnitude higher than the desired sleep time.
Also, you don't list why you need to sleep but you should generally avoid using delays as a method of synchronization.
You can try this code snippet:
#include <chrono>
#include <thread>

int main(){
    std::this_thread::sleep_for(std::chrono::nanoseconds(10));
    std::this_thread::sleep_until(std::chrono::system_clock::now() + std::chrono::seconds(1));
}
You can also use select(2) if you want microsecond precision (this works on platforms that don't have usleep(3)).
The following code will wait for 1.5 second:
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>

int main() {
    struct timeval t;
    t.tv_sec = 1;
    t.tv_usec = 500000;
    select(0, NULL, NULL, NULL, &t);
}
I found that "_sleep(milliseconds);" (without the quotes) works well for Win32 if you include the chrono library
E.g.:
#include <iostream>
using namespace std;

int main()
{
    cout << "text" << endl;
    _sleep(10000); // pauses for 10 seconds; _sleep is non-standard (MSVC CRT)
}
Make sure you include the underscore before sleep.
Yes, sleep is probably the function of choice here. Note that the time passed into the function is the smallest amount of time the calling thread will be inactive. So for example if you call sleep with 5 seconds, you're guaranteed your thread will be sleeping for at least 5 seconds. Could be 6, or 8 or 50, depending on what the OS is doing. (During optimal OS execution, this will be very close to 5.) Another useful feature of the sleep function is to pass in 0. This will force a context switch from your thread.
Some additional information:
http://www.opengroup.org/onlinepubs/000095399/functions/sleep.html
The top answer here seems to be an OS-dependent answer; for a more portable solution you can write up a quick sleep function using the ctime header file (although this may be a poor implementation on my part).
#include <iostream>
#include <ctime>
using namespace std;

void sleep(float seconds){
    clock_t startClock = clock();
    float secondsAhead = seconds * CLOCKS_PER_SEC;
    // do nothing until the elapsed time has passed.
    while(clock() < startClock + secondsAhead);
    return;
}

int main(){
    cout << "Next string coming up in one second!" << endl;
    sleep(1.0);
    cout << "Hey, what did I miss?" << endl;
    return 0;
}
To delay output in C++ for a fixed time, you can use the Sleep() function by including the windows.h header file. The syntax for the Sleep() function is Sleep(time_in_ms), as in:
cout<<"Apple\n";
Sleep(3000);
cout<<"Mango";
Output: the above code will print Apple, wait for 3 seconds, and then print Mango.
Syntax:
void sleep(unsigned seconds);
sleep() suspends execution for an interval (seconds).
With a call to sleep, the current program is suspended from execution for the number of seconds specified by the argument seconds. The interval is accurate only to the nearest hundredth of a second or to the accuracy of the operating system clock, whichever is less accurate.
Many others have provided good info for sleeping. I agree with Wedge that a sleep is seldom the most appropriate solution.
If you are sleeping as you wait for something, then you are better off actually waiting for that thing/event. Look at Condition Variables for this.
I don't know what OS you are trying to do this on, but for threading and synchronisation you could look to the Boost Threading libraries (Boost Condition Variable).
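As a sketch of the idea with the standard library (std::condition_variable rather than the Boost class mentioned above), waiting for an event instead of sleeping looks roughly like this:

#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool ready = false;

int main()
{
    std::thread producer([] {
        std::this_thread::sleep_for(std::chrono::milliseconds(100)); // simulate work
        {
            std::lock_guard<std::mutex> lock(m);
            ready = true;
        }
        cv.notify_one();
    });

    // Block until the event occurs instead of polling with sleep().
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return ready; });
    std::cout << "event received\n";

    producer.join();
}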
Moving now to the other extreme, if you are trying to wait for exceptionally short periods then there are a couple of hack-style options. If you are working on some sort of embedded platform where a 'sleep' is not implemented, then you can try a simple loop (for/while etc.) with an empty body (be careful the compiler does not optimise it away). Of course, the wait time is dependent on the specific hardware in this case.
For really short 'waits' you can try an assembly "nop". I highly doubt these are what you are after but without knowing why you need to wait it's hard to be more specific.
On Windows you can include the windows library and use "Sleep(0);" to sleep the program. It takes a value that represents milliseconds.