I need to calculate the execution time of a function.
Currently, I use time.h
At the beginning of the function:
time_t tbegin,tend;
double texec=0.000;
time(&tbegin);
Before the return:
time(&tend);
texec = difftime(tend,tbegin);
It works fine, but gives me the result in texec as an integer.
How can I have my execution time in milliseconds ?
Most simple programs have computation times measured in milliseconds, so I suppose you will find this useful.
#include <time.h>
#include <stdio.h>
int main() {
clock_t start = clock();
// Executable code
clock_t stop = clock();
double elapsed = (double)(stop - start) * 1000.0 / CLOCKS_PER_SEC;
printf("Time elapsed in ms: %f\n", elapsed);
}
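One caveat worth noting (my addition): per the C standard, clock() measures processor time used by the program rather than wall-clock time, so the result can differ from the elapsed real time when the process sleeps or shares the CPU.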
If you want to compute the run-time of the entire program and you are on a Unix system, run your program using the time command, like this: time ./a.out
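Its output looks something like this (the numbers here are made up, and the exact format varies by shell):
real    0m1.520s
user    0m1.482s
sys     0m0.031s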
You can use a lambda with auto parameters in C++14 to time your other functions. You can pass the parameters of the timed function to your lambda. I'd do it like this:
// Timing in C++14 with auto lambda parameters
#include <iostream>
#include <chrono>
#include <utility> // for std::forward
// need C++14 for auto lambda parameters
auto timing = [](auto && F, auto && ... params)
{
auto start = std::chrono::steady_clock::now();
std::forward<decltype(F)>(F)
(std::forward<decltype(params)>(params)...); // execute the function
return std::chrono::duration_cast<std::chrono::milliseconds>(
std::chrono::steady_clock::now() - start).count();
};
void f(std::size_t numsteps) // we'll measure how long this function runs
{
// need volatile, otherwise the compiler optimizes the loop
for (volatile std::size_t i = 0; i < numsteps; ++i);
}
int main()
{
auto taken = timing(f, 500'000'000); // measure the time taken to run f()
std::cout << "Took " << taken << " milliseconds" << std::endl;
taken = timing(f, 100'000'000); // measure again
std::cout << "Took " << taken << " milliseconds" << std::endl;
}
The advantage is that you can pass any callable object to the timing lambda.
But if you want to keep it simple, you can just do:
auto start = std::chrono::steady_clock::now();
your_function_call_here();
auto end = std::chrono::steady_clock::now();
auto taken = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
std::cout << taken << " milliseconds";
If you know you're not going to change the system time during the run, you can use a std::chrono::high_resolution_clock instead, which may be more precise. std::chrono::steady_clock, however, is insensitive to system time changes during the run.
PS: In case you need to time a member function, you can do:
// time member functions
template<class Return, class Object, class... Params1, class... Params2>
auto timing(Return (Object::*fp)(Params1...), Params2... params)
{
auto start = std::chrono::steady_clock::now();
(Object{}.*fp)(std::forward<decltype(params)>(params)...);
return std::chrono::duration_cast<std::chrono::milliseconds>(
std::chrono::steady_clock::now() - start).count();
}
then use it as
// measure the time taken to run X::f()
auto taken = timing(&X::f, 500'000'000);
std::cout << "Took " << taken << " milliseconds" << std::endl;
to time e.g. the X::f() member function.
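Note that this sketch default-constructs a temporary Object (the Object{} expression) for every measurement; if your class isn't default-constructible, or construction is expensive, you'd want to adapt it to take an existing instance instead.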
You can create a function like this:
#include <sys/time.h>
typedef unsigned long long timestamp_t;
static timestamp_t
get_timestamp ()
{
struct timeval now;
gettimeofday (&now, NULL);
/* note: this returns microseconds, not milliseconds; divide by 1000 for milliseconds */
return now.tv_usec + (timestamp_t)now.tv_sec * 1000000;
}
Then you can use this to get the time difference.
timestamp_t time1 = get_timestamp();
// Your function
timestamp_t time2 = get_timestamp();
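Since the helper returns microseconds (see the comment in it), convert the difference to milliseconds yourself, e.g.:
double elapsed_ms = (time2 - time1) / 1000.0;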
For Windows you can use this function:
#ifdef WIN32
#include <Windows.h>
#else
#include <sys/time.h>
#include <ctime>
#endif
typedef long long int64;
typedef unsigned long long uint64;
/* Returns the amount of milliseconds elapsed since the UNIX epoch. Works on both
* windows and linux. */
int64 GetTimeMs64()
{
#ifdef WIN32
/* Windows */
FILETIME ft;
LARGE_INTEGER li;
/* Get the amount of 100 nano seconds intervals elapsed since January 1, 1601 (UTC) and copy it
* to a LARGE_INTEGER structure. */
GetSystemTimeAsFileTime(&ft);
li.LowPart = ft.dwLowDateTime;
li.HighPart = ft.dwHighDateTime;
uint64 ret = li.QuadPart;
ret -= 116444736000000000LL; /* Convert from file time to UNIX epoch time. */
ret /= 10000; /* From 100 nano seconds (10^-7) to 1 millisecond (10^-3) intervals */
return ret;
#else
/* Linux */
struct timeval tv;
gettimeofday(&tv, NULL);
uint64 ret = tv.tv_usec;
/* Convert from micro seconds (10^-6) to milliseconds (10^-3) */
ret /= 1000;
/* Adds the seconds (10^0) after converting them to milliseconds (10^-3) */
ret += (tv.tv_sec * 1000);
return ret;
#endif
}
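A minimal usage sketch (my addition; do_work() is a hypothetical function being measured, and printf needs <stdio.h>):
int64 t0 = GetTimeMs64();
do_work(); /* hypothetical code to measure */
int64 t1 = GetTimeMs64();
printf("took %lld ms\n", (long long)(t1 - t0));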
In the header <chrono> there is a class std::chrono::high_resolution_clock that does what you want. It's a bit involved to use, though:
#include <chrono>
using namespace std;
using namespace chrono;
auto t1 = high_resolution_clock::now();
// do calculation here
auto t2 = high_resolution_clock::now();
auto diff = duration_cast<duration<double>>(t2 - t1);
// now elapsed time, in seconds, as a double can be found in diff.count()
long ms = (long)(1000*diff.count());
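If you'd rather let chrono do the unit conversion instead of multiplying by 1000, here is a small variation (my addition) on the same snippet:
duration<double, std::milli> diff_ms = t2 - t1;
// diff_ms.count() is the elapsed time in milliseconds, as a double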
I am implementing some data structure in which I need to invalidate some entries after some time, so for each entry I need to maintain its insertion timestamp. When I get an entry I need to get a timestamp again and calculate the elapsed time from the insertion (if it's too old, I can't use it).
This data structure is highly contended by many threads, so I must get this timestamp (on insert and find) in the most efficient way possible. Efficiency is extremely important here.
If it matters, I am working on a linux machine, developing in C++.
What is the most efficient way to retrieve a timestamp?
BTW, in some old project I worked on, I remember seeing an assembly instruction that reads a timestamp directly from the CPU (I can't remember the instruction's name).
I have created the following benchmark to test several methods of retrieving a timestamp. The benchmark was compiled with GCC with -O2 and tested on my Mac. I measured the time it takes to get 1M timestamps with each method, and from the results it looks like rdtsc is faster than the others.
EDIT: The benchmark was modified to support multiple threads.
Benchmark code:
#include <iostream>
#include <chrono>
#include <sys/time.h>
#include <unistd.h>
#include <vector>
#include <thread>
#include <atomic>
#define NUM_SAMPLES 1000000
#define NUM_THREADS 4
static inline unsigned long long getticks(void)
{
unsigned int lo, hi;
// RDTSC copies contents of 64-bit TSC into EDX:EAX
asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
return (unsigned long long)hi << 32 | lo;
}
std::atomic<bool> g_start(false);
std::atomic<unsigned int> totalTime(0);
template<typename Method>
void measureFunc(Method method)
{
// warmup
for (unsigned int i = 0; i < NUM_SAMPLES; i++)
{
method();
}
auto start = std::chrono::system_clock::now();
for (unsigned int i = 0; i < NUM_SAMPLES; i++)
{
method();
}
auto end = std::chrono::system_clock::now();
totalTime += std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
}
template<typename Method>
void measureThread(Method method)
{
while(!g_start.load());
measureFunc(method);
}
template<typename Method>
void measure(const std::string& methodName, Method method)
{
std::vector<std::thread> threads;
totalTime.store(0);
g_start.store(false);
for (unsigned int i = 0; i < NUM_THREADS; i++)
{
threads.push_back(std::thread(measureThread<Method>, method));
}
g_start.store(true);
for (std::thread& th : threads)
{
th.join();
}
double timePerThread = (double)totalTime / (double)NUM_THREADS;
std::cout << methodName << ": " << timePerThread << "ms per thread" << std::endl;
}
int main(int argc, char** argv)
{
measure("gettimeofday", [](){ timeval tv; return gettimeofday(&tv, 0); });
measure("time", [](){ return time(NULL); });
measure("std chrono system_clock", [](){ return std::chrono::system_clock::now(); });
measure("std chrono steady_clock", [](){ return std::chrono::steady_clock::now(); });
measure("clock_gettime monotonic", [](){ timespec tp; return clock_gettime(CLOCK_MONOTONIC, &tp); });
measure("clock_gettime cpu time", [](){ timespec tp; return clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &tp); });
measure("rdtsc", [](){ return getticks(); });
return 0;
}
Results (in milliseconds) for a single thread:
gettimeofday: 54ms per thread
time: 260ms per thread
std chrono system_clock: 62ms per thread
std chrono steady_clock: 60ms per thread
clock_gettime monotonic: 102ms per thread
clock_gettime cpu time: 493ms per thread
rdtsc: 8ms per thread
With 4 threads:
gettimeofday: 55.25ms per thread
time: 292.5ms per thread
std chrono system_clock: 69.25ms per thread
std chrono steady_clock: 68.5ms per thread
clock_gettime monotonic: 118.25ms per thread
clock_gettime cpu time: 2975.75ms per thread
rdtsc: 10.25ms per thread
From the results, it looks like std::chrono has some small overhead when called from multiple threads, while the gettimeofday method stays stable as the number of threads increases.
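A caveat worth adding (my note, not part of the original benchmark): rdtsc returns raw CPU cycles rather than time units, so you have to calibrate it against a known clock to convert to milliseconds, and on CPUs without an invariant TSC the tick rate can vary with frequency scaling. If you'd rather avoid inline assembly, GCC and Clang expose the same instruction as an intrinsic:
#include <x86intrin.h> // __rdtsc (GCC/Clang, x86)
#include <cstdint>
static inline uint64_t getticks_intrinsic()
{
return __rdtsc(); // same TSC read as the inline-asm version above
}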
EDIT: I am using VS2013 on Windows 7.
With the below code I'd expect to be able to have a time difference of at least one microsecond, however, when executed it builds it up to at least 1000 microseconds (one millisecond). What is the reasoning I'm not able to get a time lower then one millisecond? Is there any way around this?
// SleepTesting.cpp : Defines the entry point for the console application.
//
#include <chrono>
#include "windows.h"
#include <iostream>
int _tmain(int argc, _TCHAR* argv[])
{
FILETIME startFileTime, endFileTime;
uint64_t ullStartTime, ullEndTime;
bool sleep = true;
auto start = std::chrono::system_clock::now();
auto now = std::chrono::system_clock::now();
auto elapsedTime = std::chrono::duration_cast<std::chrono::microseconds>(now - start);
GetSystemTimeAsFileTime(&startFileTime);
ullStartTime = static_cast<uint64_t>(startFileTime.dwHighDateTime) << 32 | startFileTime.dwLowDateTime;
while (sleep)
{
now = std::chrono::system_clock::now();
elapsedTime = std::chrono::duration_cast<std::chrono::microseconds>(now - start);
if (elapsedTime.count() > 0)
{
sleep = false;
}
}
GetSystemTimeAsFileTime(&endFileTime);
ullEndTime = static_cast<uint64_t>(endFileTime.dwHighDateTime) << 32 | endFileTime.dwLowDateTime;
uint64_t timeDifferenceHundredsOfNano = ullEndTime - ullStartTime;
std::cout << "Elapsed time with Chrono library: " << elapsedTime.count() << " micro-seconds" << std::endl;
std::cout << "Elapsed time with Windows.h FILETIME: " << timeDifferenceHundredsOfNano << " hundreds of nanoseconds" << std::endl;
return 0;
}
Since you're using system_clock, I think you can't get microsecond resolution on Windows 7 (at least from what I've seen).
Try high_resolution_clock, but even that won't always work, since Windows doesn't guarantee that the time elapsed between two consecutive operations is less than one millisecond, even without sleeping.
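As a side note (my addition, not part of the original answer), you can print the tick period each clock advertises to see what resolution your standard library claims (the actual achievable resolution may still be coarser):
#include <chrono>
#include <iostream>
int main()
{
using namespace std::chrono;
// period is a std::ratio giving seconds per tick of each clock
std::cout << "system_clock tick: " << system_clock::period::num
          << "/" << system_clock::period::den << " s\n";
std::cout << "high_resolution_clock tick: " << high_resolution_clock::period::num
          << "/" << high_resolution_clock::period::den << " s\n";
return 0;
}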
IIRC, in VS2013 the system_clock (and high_resolution_clock) tick in milliseconds. If you need higher resolution you can go all Windows and take a look at QueryPerformanceCounter.
LARGE_INTEGER startCount;
LARGE_INTEGER endCount;
LARGE_INTEGER frequency;
QueryPerformanceFrequency(&frequency);
QueryPerformanceCounter(&startCount);
{...}
QueryPerformanceCounter(&endCount);
double startTimeInMicroSec = startCount.QuadPart * (1000000.0 / frequency.QuadPart);
double endTimeInMicroSec = endCount.QuadPart * (1000000.0 / frequency.QuadPart);
// endTimeInMicroSec - startTimeInMicroSec
Disclaimer: ocular compilation only, i.e. compiled by eye, not tested.
my .h file
#ifndef ITime_H
#define ITime_H
#include <QDebug>
#include <iostream>
#include <QtCore>
#include <windows.h>
class ITime
{
public:
ITime();
~ITime();
void start();
quint64 milli();
quint64 elapsed();
public:
QTime oStartTime;
QTime oEndTime;
LARGE_INTEGER ntime1,ntime2;
LARGE_INTEGER freq;
};
#endif // ITime_H
my cpp file
#include <QTime>
#include <QtCore>
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>
ITime::ITime()
{
}
ITime::~ITime()
{
}
void ITime::start()
{
oStartTime = QTime::currentTime();
QueryPerformanceFrequency(&freq);
QueryPerformanceCounter(&ntime1);
}
quint64 ITime::milli()
{
quint64 milli = oStartTime.msecsTo(oEndTime);
return milli;
}
quint64 ITime::elapsed()
{
quint64 ntime = 0;
QueryPerformanceCounter(&ntime2);
oEndTime = QTime::currentTime();
ntime = (ntime2.QuadPart - ntime1.QuadPart) / (freq.QuadPart / 1000000.0); // elapsed microseconds
double elapsedMilliseconds = (ntime2.QuadPart - ntime1.QuadPart) / (freq.QuadPart / 1000.0);
qDebug() << "milliseconds by counter:" << elapsedMilliseconds;
return ntime;
}
my main file
#include "ITime.h"
#include <iostream>
int main()
{
ITime time;
time.start();
qDebug() << "Start time" << time.oStartTime ;
qDebug() << "differnce time in micro by counter" << time.elapsed() ;
qDebug() << "differnce time in milli " << time.milli() ;
qDebug() << "End time" << time.oEndTime ;
}
My output is like this:
Start time QTime("17:57:46")
milliseconds by counter: 1.20633
difference time in micro by counter: 1206
difference time in milli by using QTime: 0
End time QTime("17:57:46")
Considering the output: the counter gives 1206 microseconds, which is almost 1 millisecond, but QTime reports 0 milliseconds. How can I get the same difference in both milliseconds and microseconds?
Actually, I want to know why the difference arises between QTime and the counter.
QueryPerformanceFrequency provides the counter frequency in counts per second. That means that whatever QueryPerformanceCounter returns is represented against the divisor that QueryPerformanceFrequency provided. In other words (assuming you can target a system that does arithmetic on QuadPart, which is extremely likely)...
LARGE_INTEGER freq;
QueryPerformanceFrequency(&freq);
LARGE_INTEGER startTicks;
QueryPerformanceCounter(&startTicks);
//Do work here
LARGE_INTEGER endTicks;
QueryPerformanceCounter(&endTicks);
LARGE_INTEGER elapsedTicks;
elapsedTicks.QuadPart = endTicks.QuadPart - startTicks.QuadPart;
double elapsedMicroseconds = elapsedTicks.QuadPart / (freq.QuadPart / 1000000.0);
double elapsedMilliseconds = elapsedTicks.QuadPart / (freq.QuadPart / 1000.0);
double elapsedSeconds = elapsedTicks.QuadPart / (double)freq.QuadPart;
... should answer your question. You can break that up any way you see fit, including truncating into a form such as SS.MMMMM (seconds and milliseconds).
Also note that you should only call QueryPerformanceFrequency once and save the result: the frequency is fixed at system boot, so calling it again is just redundant.
In Java, we can use System.currentTimeMillis() to get the current timestamp in Milliseconds since epoch time which is -
the difference, measured in milliseconds, between the current time and
midnight, January 1, 1970 UTC.
How do I get the same thing in C++?
Currently I am using this to get the current timestamp -
struct timeval tp;
gettimeofday(&tp, NULL);
long int ms = tp.tv_sec * 1000 + tp.tv_usec / 1000; //get current timestamp in milliseconds
cout << ms << endl;
Does this look right?
If you have access to the C++ 11 libraries, check out the std::chrono library. You can use it to get the milliseconds since the Unix Epoch like this:
#include <chrono>
// ...
using namespace std::chrono;
milliseconds ms = duration_cast< milliseconds >(
system_clock::now().time_since_epoch()
);
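To turn that duration into a plain integer for printing or storage, call .count() on it:
std::cout << ms.count() << std::endl; // needs <iostream>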
use <sys/time.h>
struct timeval tp;
gettimeofday(&tp, NULL);
long int ms = tp.tv_sec * 1000 + tp.tv_usec / 1000;
Since C++11 you can use std::chrono:
get current system time: std::chrono::system_clock::now()
get time since epoch: .time_since_epoch()
translate the underlying unit to milliseconds: duration_cast<milliseconds>(d)
translate std::chrono::milliseconds to integer (uint64_t to avoid overflow)
#include <chrono>
#include <cstdint>
#include <iostream>
uint64_t timeSinceEpochMillisec() {
using namespace std::chrono;
return duration_cast<milliseconds>(system_clock::now().time_since_epoch()).count();
}
int main() {
std::cout << timeSinceEpochMillisec() << std::endl;
return 0;
}
This answer is pretty similar to Oz.'s, using <chrono> for C++ -- I didn't grab it from Oz. though...
I picked up the original snippet at the bottom of this page and slightly modified it to be a complete console app. I love using this lil' ol' thing. It's fantastic if you do a lot of scripting and need a reliable tool in Windows to get the epoch in actual milliseconds without resorting to VB or some less modern, less reader-friendly code.
#include <chrono>
#include <iostream>
int main() {
unsigned __int64 now = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now().time_since_epoch()).count();
std::cout << now << std::endl;
return 0;
}
If you use gettimeofday, you have to cast to long long, otherwise you will get overflows and thus not the real number of milliseconds since the epoch:
long int msint = tp.tv_sec * 1000 + tp.tv_usec / 1000;
will give you a number like 767990892, which is around 8 days after the epoch ;-).
#include <sys/time.h>
#include <iostream>
int main(int argc, char* argv[])
{
struct timeval tp;
gettimeofday(&tp, NULL);
long long mslong = (long long) tp.tv_sec * 1000L + tp.tv_usec / 1000; //get current timestamp in milliseconds
std::cout << mslong << std::endl;
}
I am trying a write a stopwatch which is used to keep track of the program's running time. The code showing the private members is as follows:-
#include <sys/time.h>
class stopwatch
{
private:
struct timeval *startTime;
int elaspedTime;
timezone *Tzp;
public:
//some code here
};
The problem is that while compiling the program, I get an error that ISO C++ forbids declaration of 'timezone' with no type. I think this might be due to the library I am using, but I have not been able to find my mistake. The only thing I found on the internet about <sys/time.h> is that it is very obsolete now; no alternatives were suggested. Can you please help me?
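(A side note, my addition rather than one of the answers below: the compile error itself usually goes away if you spell out the struct tag, since a POSIX global named timezone can shadow the type name in C++:
struct timezone *Tzp; // instead of: timezone *Tzp;
That said, the answers below suggest more modern alternatives.)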
You can just use chrono:
#include <chrono>
#include <iostream>
int main(int argc, char* argv[])
{
auto beg = std::chrono::high_resolution_clock::now();
// Do stuff here
auto end = std::chrono::high_resolution_clock::now();
std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(end - beg).count() << std::endl;
std::cin.get();
return 0;
}
As seen here
#include <iostream> /* cout */
#include <time.h> /* time_t, struct tm, difftime, time, mktime */
int main ()
{
time_t timer;
struct tm y2k;
double seconds;
y2k.tm_hour = 0; y2k.tm_min = 0; y2k.tm_sec = 0;
y2k.tm_year = 100; y2k.tm_mon = 0; y2k.tm_mday = 1;
time(&timer); /* get current time; same as: timer = time(NULL) */
seconds = difftime(timer,mktime(&y2k));
std::cout << seconds << " seconds since January 1, 2000 in the current timezone" << std::endl;
return 0;
}
You can modify the names as you want. Also, here's a timer based on <sys/time.h>.
If you're developing in a Windows environment, you can call unsigned int startTime = timeGetTime() once when the program starts and unsigned int endTime = timeGetTime() when it ends. Subtract startTime from endTime and you have the number of milliseconds that passed while the program ran. If you're looking for more accuracy, check out the QueryPerformanceCounter functions.
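A minimal sketch of that approach (my addition; note that timeGetTime() actually returns a DWORD, and you must link against winmm.lib):
#include <windows.h>
#include <mmsystem.h> // timeGetTime; link with winmm.lib
#include <iostream>
int main()
{
DWORD startTime = timeGetTime();
// ... work to measure ...
DWORD endTime = timeGetTime();
std::cout << (endTime - startTime) << " ms" << std::endl;
return 0;
}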