How can I count the milliseconds a certain function (called repeatedly) takes?
I thought of:
CTime::GetCurrentTime() before,
CTime::GetCurrentTime() after,
and then compute CTimeSpan diff = after - before.
Finally, store that diff in a global member that sums all the diffs, since I want to know the total time this function spent.
But that will give the answer in seconds, not milliseconds.
MFC is C++, right?
If so, you can just use clock().
#include <ctime>
clock_t time1 = clock();
// do something heavy
clock_t time2 = clock();
clock_t timediff = time2 - time1;
float timediff_sec = ((float)timediff) / CLOCKS_PER_SEC;
This will usually give you millisecond precision.
If you are using MFC, the nice way is to use the Win32 API. And since you only need to calculate a time difference, the function below might suit you perfectly.
GetTickCount64()
It directly returns the number of milliseconds that have elapsed since the system was started.
If you don't plan to keep your system up long (precisely, more than 49.7 days), there is a slightly faster version - the GetTickCount() function.
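For the original goal of summing the time across repeated calls, a minimal sketch could look like this (DoHeavyWork and the global name are placeholders, not from the question):

#include <windows.h>

ULONGLONG g_totalMs = 0;                     // running total across all calls

void TimedCall()
{
    ULONGLONG before = GetTickCount64();     // milliseconds since system start
    DoHeavyWork();                           // the function being measured (placeholder)
    g_totalMs += GetTickCount64() - before;  // add this call's duration to the total
}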
COleDateTime is known to work with sub-second resolution internally, because it stores its timestamp in its m_dt member, which is of the DATE type (a double counting days), so it has enough resolution for the intended purpose.
I can suggest you base your timing on
DATE now = (DATE) COleDateTime::GetCurrentTime();
and then do the respective calculations.
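As a sketch of that calculation (assuming the MFC/ATL headers that declare COleDateTime are already included; the variable names are mine):

DATE before = (DATE) COleDateTime::GetCurrentTime();
// ... call the function being measured ...
DATE after = (DATE) COleDateTime::GetCurrentTime();

// DATE counts days as a double, so scale the difference to milliseconds.
double elapsedMs = (after - before) * 24.0 * 60.0 * 60.0 * 1000.0;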
Related
Let's say I want to measure the total time of a particular function. This function calls other functions (f1 and f2), so I also want to measure the total time of f1 and f2.
What I was expecting was: f total time = f1 total time + f2 total time
#include <time.h>

double f_total_time, f1_total_time, f2_total_time;  // elapsed seconds

void f(){
    struct timespec total_start, total_end;
    struct timespec f1_start, f1_end;
    struct timespec f2_start, f2_end;

    clock_gettime(CLOCK_MONOTONIC, &total_start);

    clock_gettime(CLOCK_MONOTONIC, &f1_start);
    f1();
    clock_gettime(CLOCK_MONOTONIC, &f1_end);

    clock_gettime(CLOCK_MONOTONIC, &f2_start);
    f2();
    clock_gettime(CLOCK_MONOTONIC, &f2_end);

    clock_gettime(CLOCK_MONOTONIC, &total_end);

    f_total_time  = (total_end.tv_sec - total_start.tv_sec) + (total_end.tv_nsec - total_start.tv_nsec)/1e9;
    f1_total_time = (f1_end.tv_sec - f1_start.tv_sec) + (f1_end.tv_nsec - f1_start.tv_nsec)/1e9;
    f2_total_time = (f2_end.tv_sec - f2_start.tv_sec) + (f2_end.tv_nsec - f2_start.tv_nsec)/1e9;
}
My question is: is this a correct way to measure the time of functions inside a function?
Problem: the total time of f1 and f2 does not add up to the total time of f. That is, f total time != f1 total time + f2 total time; what actually happens is f total time > f1 total time + f2 total time.
Am I doing something wrong?
Answer -
Yes. IMHO it appears to be a valid duration measurement technique
of a function within a function.
The Posix clock_gettime() reports sec/nanoseconds from a fixed
time, so each access is independent of any other.
From "man clock_gettime" :
All implementations support the system-wide real-time clock,
which is identified by CLOCK_REALTIME. Its time represents
seconds and nanoseconds since the Epoch. When its time is
changed, timers for a relative interval are unaffected, but
timers for an absolute point in time are affected.
I see nothing wrong with your approach.
Perhaps you need to know more about the relative duration of your
code vs duration of the clock read mechanisms you are using.
On my Ubuntu 15.10, on an older Dell, using g++ 5.2.1, the Posix
call
clock_gettime(CLOCK_REALTIME, ...)
uses > 1,500 ns (avg over 3 seconds) (i.e. ~1.5 us)
To achieve some measure of repeatability, the duration you are
trying to measure (f1() and f2() and f1()+f2()) must be more than
this, probably by a factor of 10.
Your system will be different (than mine), so you must test it to
know how long these clock reads take.
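As a rough sketch of such a test (N and the variable names are arbitrary), time a burst of back-to-back reads and divide by the count:

#include <stdio.h>
#include <time.h>

enum { N = 1000000 };
struct timespec t0, t1, scratch;

clock_gettime(CLOCK_MONOTONIC, &t0);
for (int i = 0; i < N; ++i)
    clock_gettime(CLOCK_REALTIME, &scratch);    /* the call being measured */
clock_gettime(CLOCK_MONOTONIC, &t1);

double total_ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
printf("~%.0f ns per clock_gettime(CLOCK_REALTIME) call\n", total_ns / N);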
There is also the interesting idea of knowing how fast
CLOCK_REALTIME increments. Even though the API indicates
nanoseconds, it might not be that fast.
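One way to check is clock_getres(), which reports the clock's advertised resolution (note this is what the kernel reports, not necessarily the granularity you will observe between back-to-back reads):

#include <stdio.h>
#include <time.h>

struct timespec res;
clock_getres(CLOCK_REALTIME, &res);
printf("CLOCK_REALTIME resolution: %ld s %ld ns\n", (long)res.tv_sec, res.tv_nsec);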
An alternative I use is std::time(nullptr) with a cost of ~5 ns (on my system), 3 orders of magnitude less. FYI: ::time(0) measures the same.
A loop controlled by this API's return value simply kicks out at the
end of a second, when the value returned has changed from the previous
value. I usually accumulate 3 seconds of loops (i.e. a fixed
test time) and compute the average event duration.
Example measurement output:
751.1412070 M 'std::time(nullptr) duration' invocations in 3.999,788 sec (3999788 us)
187.7952549 M 'std::time(nullptr) duration' events per second
5.324948176 n seconds per 'std::time(nullptr) duration' event
If using this clock access, you can simply subtract 5.3 ns (on my
system) from each invocation when calculating the seconds per event for
your functions.
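A sketch of that kind of calibration loop (simplified; the 3-second window and the names are just for illustration):

#include <ctime>
#include <iostream>

int main() {
    // Wait for a second boundary so the test window starts cleanly.
    std::time_t start = std::time(nullptr);
    while (std::time(nullptr) == start) { /* spin */ }

    // Accumulate ~3 seconds' worth of std::time(nullptr) calls.
    std::time_t t0 = std::time(nullptr);
    unsigned long long count = 0;
    while (std::time(nullptr) < t0 + 3)
        ++count;

    // Approximate cost of one call (loop overhead included), in nanoseconds.
    std::cout << 3.0e9 / count << " ns per std::time(nullptr) call\n";
    return 0;
}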
Note: Any Posix API is an interface to a system-provided
function, not the function itself.
Being part of an API is not conclusive evidence about the
function's implementation ... which may be in any language,
even assembly for peak performance.
To time a C++ application, note the initial time in a variable and declare a duration (in seconds):
#include <ctime>
clock_t t (clock ());
size_t duration (0);
During execution, duration is updated this way:
duration = (clock() - t) / CLOCKS_PER_SEC;
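If whole seconds are too coarse, the same idea can be scaled; a sketch (how fine the result really is still depends on the platform's clock() implementation):

// milliseconds: scale the ticks before dividing so the fraction is not lost
size_t duration_ms = (clock() - t) * 1000 / CLOCKS_PER_SEC;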
In my calculator-like program, the user selects what and how much to compute (e.g. how many digits of pi, how many prime numbers, etc.). I use time(0) to check the computation time elapsed in order to trigger a timeout condition. If the computation completes without a timeout, I also print the computation time taken, the value of which is stored in a double, the return type of difftime().
I just found out that the time values calculated are in seconds only. I don't want user inputs of 100 and 10000 to both print a computation duration of 0e0 seconds. I want them to print, for example, durations of 1.23e-6 and 4.56e-3 seconds respectively (as accurate as the machine can measure - I am more acquainted with the accuracy provided in Java and with the accuracies of scientific measurements, so it's a personal preference).
I have seen the answers to other questions, but they don't help because 1) I will not be multi-threading (not preferred in my work environment). 2) I cannot use C++11 or later.
How can I obtain time duration values more accurate than seconds as integral values given the stated constraints?
Edit: Platform & machine-independent solutions preferred, otherwise Windows will do, thanks!
Edit 2: My notebook is also not connected to the Internet, so no downloading of external libraries like Boost (is that what Boost is?). I'll have to code everything myself.
You can use QueryPerformanceCounter (QPC) which is part of the Windows API to do high-resolution time measurements.
LARGE_INTEGER StartingTime, EndingTime, ElapsedMicroseconds;
LARGE_INTEGER Frequency;
QueryPerformanceFrequency(&Frequency);
QueryPerformanceCounter(&StartingTime);
// Activity to be timed
QueryPerformanceCounter(&EndingTime);
ElapsedMicroseconds.QuadPart = EndingTime.QuadPart - StartingTime.QuadPart;
//
// We now have the elapsed number of ticks, along with the
// number of ticks-per-second. We use these values
// to convert to the number of elapsed microseconds.
// To guard against loss-of-precision, we convert
// to microseconds *before* dividing by ticks-per-second.
//
ElapsedMicroseconds.QuadPart *= 1000000;
ElapsedMicroseconds.QuadPart /= Frequency.QuadPart;
On Windows, the simplest solution is to use GetTickCount, which returns the number of milliseconds since the computer was started.
#include <windows.h>
...
DWORD before = GetTickCount();
...
DWORD duration = GetTickCount() - before;
std::cout<<"It took "<<duration<<"ms\n";
Caveats:
it works only on Windows;
the resolution (milliseconds) is not stellar;
given that the result is a 32 bit integer, it wraps around after about 49.7 days; thus, you cannot measure stuff longer than that; a possible solution is to use GetTickCount64, which however is available only from Vista onwards;
since systems with an uptime of more than a few weeks are actually quite common, you may indeed have to deal with results bigger than 2^31; thus, make sure to always keep such values in a DWORD (or a uint32_t), without casting them to int, or you risk signed integer overflow. Another option is to just store them in a 64 bit signed integer (or a double) and forget the difficulties of dealing with unsigned integers. See the sketch below.
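A small sketch of the last point: as long as the measured interval itself is shorter than the 49.7-day wrap period, unsigned DWORD arithmetic gives the right difference even if the counter wraps between the two reads (the values below are made up to show a wrap):

DWORD before   = 0xFFFFFF00u;      // shortly before the 32-bit counter wraps
DWORD after    = 0x00000100u;      // shortly after the wrap
DWORD duration = after - before;   // 0x200 = 512 ms, thanks to modular unsigned arithmetic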
I realize the compiler you're using doesn't support it, but for reference purposes the C++11 solution is simple...
#include <chrono>

auto start = std::chrono::high_resolution_clock::now();
// ... code to be timed ...
auto end = std::chrono::high_resolution_clock::now();
auto ts = std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count();  // elapsed nanoseconds
The time command reports the time elapsed during the execution of a command.
If I put a gettimeofday() call before the command call (made via system()) and one after it, and take the difference, it doesn't come out the same. (It's not a very small difference either.)
Can anybody explain what the exact difference is between the two usages, and which is the best way to time the execution of a call?
Thanks.
The Unix time command measures the whole program execution time, including the time it takes for the system to load your binary and all its libraries, and the time it takes to clean up everything once your program is finished.
On the other hand, gettimeofday can only work inside your program, that is after it has finished loading (for the initial measurement), and before it is cleaned up (for the final measurement).
Which one is best? Depends on what you want to measure... ;)
It all depends on what you are timing. If you are trying to time something in seconds, then time() is probably your best bet. If you need higher resolution than that, then I would consider gettimeofday(), which gives up to microsecond resolution (1/1000000th of a second).
If you need even higher resolution than that, consider using clock() and CLOCKS_PER_SEC, just note that clock() measures the CPU time used by your process rather than the wall-clock time taken.
time() returns time since epoch in seconds.
gettimeofday() fills in a:
struct timeval {
time_t tv_sec; /* seconds */
suseconds_t tv_usec; /* microseconds */
};
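A minimal sketch of using it for an elapsed-time measurement (variable names are mine):

#include <sys/time.h>

struct timeval start, end;
gettimeofday(&start, NULL);
/* do stuff */
gettimeofday(&end, NULL);

/* elapsed time in microseconds */
long long elapsed_us = (end.tv_sec - start.tv_sec) * 1000000LL
                     + (end.tv_usec - start.tv_usec);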
Each time function has different precision. In C++11 you would use std::chrono:
using namespace std::chrono;
auto start = high_resolution_clock::now();
/* do stuff*/
auto end = high_resolution_clock::now();
float elapsedSeconds = duration_cast<duration<float>>(end-start).count();
I need a function or way to get the UNIX epoch in seconds, much like how I can in PHP using the time function.
I can't find any method except time() in ctime, which seems to only output a formatted date, or the clock() function, which has seconds but seems to always be a multiple of 1 million, with nothing at any finer resolution.
I wish to measure execution time in a program, I just wanted to calculate the diff between start and end; how would a C++ programmer do this?
EDIT: time() and difftime only allow resolution in seconds, not ms or anything finer, btw.
time() should work fine; use difftime for time-difference calculations. In case you need better resolution, use gettimeofday.
Also, duplicate of: Calculating elapsed time in a C program in milliseconds
If you want to profile, I'd recommend using getrusage. This will allow you to track CPU time instead of wall-clock time:
#include <sys/resource.h>

struct rusage ru;
getrusage(RUSAGE_SELF, &ru);

ru.ru_utime.tv_sec;  // seconds of user CPU time
ru.ru_utime.tv_usec; // microseconds of user CPU time
ru.ru_stime.tv_sec;  // seconds of system CPU time
ru.ru_stime.tv_usec; // microseconds of system CPU time
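For example, to fold those fields into a single CPU-time figure in seconds (a sketch; the helper name is made up):

// total CPU time (user + system) in seconds
double cpu_seconds(const struct rusage &ru)
{
    return (ru.ru_utime.tv_sec + ru.ru_stime.tv_sec)
         + (ru.ru_utime.tv_usec + ru.ru_stime.tv_usec) / 1e6;
}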
This code should work for you.
time_t epoch = time(0);
#include <iostream>
#include <ctime>
using namespace std;
int main() {
time_t now = time(0); // don't name the variable "time" - it would shadow the function
cout<<now<<endl;
system("pause");
return 0;
}
If you have any questions, feel free to comment below.
I have found some code on measuring execution time here
http://www.dreamincode.net/forums/index.php?showtopic=24685
However, it does not seem to work for calls to system(). I imagine this is because the execution jumps out of the current process.
clock_t begin=clock();
system(something);
clock_t end=clock();
cout<<"Execution time: "<<diffclock(end,begin)<<" s."<<endl;
Then
double diffclock(clock_t clock1, clock_t clock2)
{
    double diffticks = clock1 - clock2;
    double diffsec = diffticks / CLOCKS_PER_SEC;  // clock ticks converted to seconds
    return diffsec;
}
However this always returns 0 seconds... Is there another method that will work?
Also, this is in Linux.
Edit: Also, just to add, the execution time is in the order of hours. So accuracy is not really an issue.
Thanks!
Have you considered using gettimeofday?
struct timeval tv;
struct timeval start_tv;
gettimeofday(&start_tv, NULL);
system(something);
double elapsed = 0.0;
gettimeofday(&tv, NULL);
elapsed = (tv.tv_sec - start_tv.tv_sec) +
(tv.tv_usec - start_tv.tv_usec) / 1000000.0;
Unfortunately clock() measures the CPU time consumed by your own process, not wall-clock time (and in units of 1/CLOCKS_PER_SEC, i.e. microseconds), so a program that spends its time blocked inside system() accumulates almost nothing.
Many people use gettimeofday() for benchmarking, but that measures elapsed time - not time used by this process/thread - so it isn't ideal. Obviously if your system is more or less idle and your tests are quite long then you can average the results. Normally less of a problem, but still worth knowing about, is that the time returned by gettimeofday() is non-monotonic - it can jump around a bit, e.g. when your system first connects to an NTP time server.
The best thing to use for benchmarking is clock_gettime() with whichever option is most suitable for your task.
CLOCK_THREAD_CPUTIME_ID - Thread-specific CPU-time clock.
CLOCK_PROCESS_CPUTIME_ID - High-resolution per-process timer from the CPU.
CLOCK_MONOTONIC - Represents monotonic time since some unspecified starting point.
CLOCK_REALTIME - System-wide realtime clock.
NOTE though, that not all options are supported on all Linux platforms - except clock_gettime(CLOCK_REALTIME) which is equivalent to gettimeofday().
Useful link: Profiling Code Using clock_gettime
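A minimal sketch using CLOCK_MONOTONIC to time a block of code, here wrapping the system() call from the question (the command string is just a placeholder):

#include <stdlib.h>
#include <time.h>

struct timespec start, end;
clock_gettime(CLOCK_MONOTONIC, &start);
system("some-long-running-command");   /* placeholder */
clock_gettime(CLOCK_MONOTONIC, &end);

double elapsed = (end.tv_sec - start.tv_sec)
               + (end.tv_nsec - start.tv_nsec) / 1e9;   /* seconds */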
Tuomas Pelkonen already presented the gettimeofday method, which gives times with a resolution down to the microsecond.
In his example he goes on to convert to double. I personally have wrapped the timeval struct in a class of my own that keeps the counts of seconds and microseconds as integers and handles the add and subtract operations correctly.
I prefer to keep integers (with exact maths) rather than move to floating-point numbers and all their woes when I can.
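The wrapper itself is not shown in the answer, but a minimal sketch of the idea (class and member names are my own) could look like this:

#include <sys/time.h>

// Integer-only wrapper around struct timeval with normalized add/subtract.
class TimeVal {
public:
    TimeVal(long sec = 0, long usec = 0) : sec_(sec), usec_(usec) { normalize(); }

    static TimeVal now() {
        struct timeval tv;
        gettimeofday(&tv, 0);
        return TimeVal(tv.tv_sec, tv.tv_usec);
    }

    TimeVal operator-(const TimeVal& rhs) const {
        return TimeVal(sec_ - rhs.sec_, usec_ - rhs.usec_);
    }
    TimeVal operator+(const TimeVal& rhs) const {
        return TimeVal(sec_ + rhs.sec_, usec_ + rhs.usec_);
    }

    long seconds() const      { return sec_; }
    long microseconds() const { return usec_; }

private:
    void normalize() {
        // Keep 0 <= usec_ < 1000000, borrowing from / carrying into sec_.
        sec_ += usec_ / 1000000;
        usec_ %= 1000000;
        if (usec_ < 0) { usec_ += 1000000; --sec_; }
    }
    long sec_;
    long usec_;
};

Usage is then just: TimeVal before = TimeVal::now(); ... TimeVal elapsed = TimeVal::now() - before;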