I want to measure the time of a child process
#include <time.h>
int main() {
...
time_t begin, end, diff;
...
//fork etc in here
time(&begin);
...
//some things
...
time(&end);
return 0;
}
I have two time stamps now. Is there a way to format the difference as the child process's run time in hours:minutes:seconds?
I have tried
diff = end - begin;
But I get a huge number then.
(Sorry for posting only part of the code, but it's on another PC.)
You can compute the difference with difftime:
double diff_in_seconds = difftime(end, begin);
or, for better precision, use one of the C++11 <chrono> monotonic clocks, such as std::chrono::steady_clock:
auto start = std::chrono::steady_clock::now();
// some things
auto end = std::chrono::steady_clock::now();
double time_in_seconds = std::chrono::duration_cast<std::chrono::duration<double>>(end - start).count();
See also this answer for details why you should use a monotonic clock.
You should probably compute the difference using difftime instead of subtraction, in case your system uses some other format for time_t besides "integer number of seconds".
difftime returns the number of seconds between the two times, as a double. It's then a simple matter of arithmetic to convert to hours, minutes and seconds.
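For instance, here is a minimal sketch of that arithmetic (the fork/wait part is elided, as in the question):
#include <cstdio>
#include <ctime>

int main() {
    std::time_t begin, end;
    std::time(&begin);
    // ... fork and wait for the child process here ...
    std::time(&end);

    // difftime returns seconds as a double; split it into hours, minutes and seconds
    int total = static_cast<int>(std::difftime(end, begin));
    std::printf("%02d:%02d:%02d\n", total / 3600, (total % 3600) / 60, total % 60);
}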
The attempt in the question is the C way, not C++. In C++11 (assuming you have a C++11 compiler), you can take two time points and then cast the difference between them to the units you need, as in the example here: http://en.cppreference.com/w/cpp/chrono/duration/duration_cast
Nearly copying the code:
auto t1 = std::chrono::high_resolution_clock::now();
// Call your child process here
auto t2 = std::chrono::high_resolution_clock::now();
std::cout << "Child process took "
<< std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count()
<< " milliseconds\n";
Related
Here is a clear question:
can you provide a simple example of chrono being called from main (or its own class) and used in another class. Or a link to an example.
and below is me fumbling around trying to explain my problem in more detail:
I have been working on this all day and keep ending up in the same place.
I am writing a program that is supposed to output the elapsed time after certain processes finish.
The problem I am having is that these processes are happening in different classes and I cannot get the clock to work properly.
I keep reverting to having the clock in main, but I am really struggling to make everything mesh together. So maybe this is a simple question about working with classes, but there is something I am not understanding and I don't know what it is.
Below are the four lines of this timer that I keep reverting to and placing in my main function. It prints the clock the way I want, in the format x.xxxxxx:
auto clock_start = chrono::system_clock::now();
auto clock_now = chrono::system_clock::now();
float currentTime = float(chrono::duration_cast <chrono::microseconds> (clock_now - clock_start).count());
cout << "Elapsed Time: " << currentTime /1000000 << " S \n";
Eventually,
I have a queue of structs that I'm popping in a loop and then manipulating. They need a time stamp when printed at the end of each loop iteration.
I just can't for the life of me get the timer to give the elapsed time (or even work) while inside the loop.
Is this possible? I have read many threads on chrono, and something is just not clicking for me when I try to use the timer in multiple classes/functions across my program.
EDIT***
So here is my current class in my meta.h:
These are private members inside class Meta
typedef std::chrono::system_clock timer;
timer::time_point currentTime;
timer::time_point startTime;
timer::time_point clock_wait;
timer::time_point clock_check;
timer::time_point elapsed_time; // this is my issue
And then I start the timer in meta.cpp:
void Meta::startTimer()
{
startTime = timer::now();
}
And here is the loop with some pieces missing so we can focus on the timer:
void Meta::displaySim()
{
    //auto clock_start = chrono::system_clock::now(); THIS IS WHAT I WAS DOING
    queue<sData> newFile;
    while (!MetaQ.empty())
    {
        temp = MetaQ.front();
        bool wait = true;
        float waitTime = float(temp.ncycle) / 1000;
        while (wait)
        {
            clock_wait = timer::now();
            clock_check = timer::now();
            elapsed_time = timer::duration_cast<chrono::milliseconds>(clock_check - clock_wait);
            if (elapsed_time.count() > waitTime)
                wait = false;
        }
        cout << "****" << waitTime << "*****" << endl;
        end_time = timer::now();
        //Below is the line that is giving me trouble now. I get an error when casting. I don't know how to make duration_cast part of the timer declared in meta.h
        float EndTime = float(timer::duration_cast <chrono::milliseconds>(end_time - startTime).count());
        cout << fixed << EndTime / 1000000 << " - (" << temp.desc << ')' << temp.cycle << " - " << temp.ncycle << " ms\n";
        newFile.push(temp);
        MetaQ.pop();
    }
    MetaQ = newFile;
}
timer::time_point elapsed_time; // this is my issue
Just from the name elapsed_time, this doesn't sound like a time_point. It sounds like a duration. Do this to fix that problem:
timer::duration elapsed_time;
This looks suspicious:
float waitTime = float(temp.ncycle)/1000;
Typically a time duration should have type std::chrono::duration<some representation, some period>. And you don't want to apply conversion factors like 1/1000 manually. Let <chrono> handle all conversions.
elapsed_time = timer::duration_cast<chrono::milliseconds>(clock_check - clock_wait);
duration_cast is not a static member function of system_clock. duration_cast is a namespace scope function. Use it like this:
elapsed_time = chrono::duration_cast<chrono::milliseconds>(clock_check - clock_wait);
If waitTime had a duration type, the .count() would be unnecessary here:
if (elapsed_time.count() > waitTime)
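A minimal sketch of those two points together; I am guessing here that temp.ncycle is meant as a count of milliseconds, which the question does not actually say:
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    using namespace std::chrono;

    int ncycle = 250;               // stand-in for temp.ncycle (its real units are unknown)
    milliseconds waitTime{ncycle};  // a real duration: no manual /1000 conversion

    auto clock_wait = system_clock::now();
    std::this_thread::sleep_for(milliseconds(300)); // simulate the work being waited on
    auto clock_check = system_clock::now();

    auto elapsed_time = duration_cast<milliseconds>(clock_check - clock_wait);
    if (elapsed_time > waitTime)    // durations compare directly, no .count() needed
        std::cout << "wait is over\n";
}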
// Below is the line that is giving me trouble now. I get an error when casting.
// I don't know how to make duration_cast part of the timer declared in meta.h
float EndTime = float(timer::duration_cast <chrono::milliseconds>(end_time - startTime).count());
Best practice is to stay within the <chrono> type system instead of escaping to scalars such as float. You can get integral milliseconds with this:
auto EndTime = chrono::duration_cast<chrono::milliseconds>(end_time - startTime);
If you really want EndTime to be float-based milliseconds, that is easy too:
using fmilliseconds = chrono::duration<float, std::milli>;
fmilliseconds EndTime = end_time - startTime;
For more details, here is a video tutorial for the <chrono> library: https://www.youtube.com/watch?v=P32hvk8b13M
If this answer doesn't address your question, distill your problem down into a complete minimal program that others can copy/paste into their compiler and try out. For example I could not give you concrete advice on waitTime because I have no idea what temp.ncycle is.
Finally, and this is optional, if you would like an easier way to stream out durations for debugging purposes, consider using my free, open source, header-only date/time library. It can be used like this:
#include "date/date.h"
#include <iostream>
#include <thread>
using timer = std::chrono::system_clock;
timer::time_point clock_wait;
timer::time_point clock_check;
timer::duration elapsed_time;
int
main()
{
using namespace std::chrono_literals;
clock_wait = timer::now();
std::this_thread::sleep_for(25ms); // simulate work
clock_check = timer::now();
elapsed_time = clock_check - clock_wait;
using date::operator<<; // Needed to find the correct operator<<
std::cout << elapsed_time << '\n'; // then just stream it
}
which just output for me:
25729µs
The compile-time units of the duration are automatically appended to the run-time value to make it easier to see what you have. This prevents you from accidentally appending the wrong units to your output.
I basically have a school project testing how long different sort algorithms take to sort n numbers, and I need to record those times. So I decided to use the Boost library with C++ to record the time. I am at the point where I am not sure how to do it; I have googled it and have found people using different ways, for example:
auto start = boost::chrono::high_resolution_clock::now();
auto end = boost::chrono::high_resolution_clock::now();
auto time = (end-start).count();
or
boost::chrono::system_clock::now();
or
boost::chrono::steady_clock::now()
or even using something like this
boost::timer::cpu_timer and boost::timer::auto_cpu_timer
or
boost::posix_time::ptime start = boost::posix_time::microsec_clock::local_time( );
So I want to be sure I am doing it right. Right now this is what I have:
typedef boost::chrono::duration<double, boost::nano> boost_nano;
auto start_t = boost::chrono::high_resolution_clock::now();
// call function
auto end_t = boost::chrono::high_resolution_clock::now();
boost_nano time = (end_t - start_t);
cout << time.count();
so am I on the right track?
You likely want the high resolution timer.
You can use either that of boost::chrono or std::chrono.
Boost Chrono has some support for IO builtin, so it makes it easier to report times in a human friendly way.
I usually use a wrapper similar to this:
template <typename Caption, typename F>
auto timed(Caption const& task, F&& f) {
    using namespace boost::chrono;

    // RAII helper: records the start time and prints the elapsed time
    // when it goes out of scope, even if f() throws.
    struct _ {
        high_resolution_clock::time_point s;
        Caption const& task;
        ~_() {
            std::cout << " -- (" << task << " completed in "
                      << duration_cast<milliseconds>(high_resolution_clock::now() - s)
                      << ")\n";
        }
    } timing { high_resolution_clock::now(), task };

    return f();
}
Which reports time taken in milliseconds.
The good part here is that you can time construction and similar:
std::vector<int> large = timed("generate data", [] {
return generate_uniform_random_data(); });
But also, general code blocks:
timed("do_step2", [] {
// step two is foo and bar:
foo();
bar();
});
And it works just fine even if, e.g., foo() throws.
DEMO
Live On Coliru
int main() {
return timed("demo task", [] {
sleep(1);
return 42;
});
}
Prints
-- (demo task completed in 1000 milliseconds)
42
I typically use time(0) to control the duration of a loop. A call to time(0) is a single, very cheap time measurement, so it has minimal impact on everything else going on (and you can even run a do-nothing loop to capture how much overhead to subtract from any other loop measurement).
So: in a loop running for 3 (or 10) seconds, how many times can the loop invoke the thing you are trying to measure?
Here is an example of how my older code measures the duration of 'getpid()'
uint32_t spinPidTillTime0SecChange(volatile int& pid)
{
    uint32_t spinCount = 1; // getpid() invocation count

    // no measurement, just spinning
    ::time_t tStart = ::time(nullptr);
    ::time_t tEnd = tStart;
    while (0 == (tEnd - tStart)) // (tStart == tEnd)
    {
        pid = ::getpid();
        tEnd = ::time(nullptr);
        spinCount += 1;
    }
    return spinCount;
}
Invoke this 3 (or 10) times, adding the return values together. To make it easy, discard the first measurement (because it probably will be a partial second).
Yes, I am sure there is a C++11 way of accessing what time(0) accesses.
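For reference, here is a rough C++11 sketch of the same spin-count idea using std::chrono::steady_clock instead of time(0); this is my own illustration, not the code above:
#include <chrono>
#include <cstdint>
#include <unistd.h>

// Spin for roughly one second and count how many getpid() calls fit into it.
uint64_t spinPidForOneSecond(volatile int& pid)
{
    using clock = std::chrono::steady_clock;
    uint64_t spinCount = 0;
    auto start = clock::now();
    while (clock::now() - start < std::chrono::seconds(1))
    {
        pid = ::getpid();
        ++spinCount;
    }
    return spinCount;
}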
Use std::chrono::steady_clock or std::chrono::high_resolution_clock (if it is steady - see below) and not std::chrono::system_clock for measuring run time in C++11 (or use its boost equivalent). The reason is (quoting system_clock's documentation):
on most systems, the system time can be adjusted at any moment
while steady_clock is monotonic and is better suited for measuring intervals:
Class std::chrono::steady_clock represents a monotonic clock. The time
points of this clock cannot decrease as physical time moves forward.
This clock is not related to wall clock time, and is best suitable for
measuring intervals.
Here's an example:
auto start = std::chrono::steady_clock::now();
// do something
auto finish = std::chrono::steady_clock::now();
double elapsed_seconds = std::chrono::duration_cast<
std::chrono::duration<double> >(finish - start).count();
A small practical tip: if you are measuring run time and want to report seconds std::chrono::duration_cast<std::chrono::seconds> is rarely what you need because it gives you whole number of seconds. To get the time in seconds as a double use the example above.
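To make the difference concrete, here is a small sketch (the 1500 ms sleep just stands in for real work):
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    auto start = std::chrono::steady_clock::now();
    std::this_thread::sleep_for(std::chrono::milliseconds(1500));
    auto finish = std::chrono::steady_clock::now();

    auto whole   = std::chrono::duration_cast<std::chrono::seconds>(finish - start);
    auto precise = std::chrono::duration_cast<std::chrono::duration<double>>(finish - start);

    std::cout << whole.count()   << " s (truncated to whole seconds, prints 1)\n";
    std::cout << precise.count() << " s (fractional seconds, prints about 1.5)\n";
}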
As suggested by Gregor McGregor, you can use a high_resolution_clock which may sometimes provide higher resolution (although it can be an alias of steady_clock), but beware that it may also be an alias of system_clock, so you might want to check is_steady.
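One way to do that check at compile time is to pick the clock through a small alias; bench_clock is my own name for it, not a standard one:
#include <chrono>
#include <iostream>
#include <type_traits>

// Use high_resolution_clock only when it is actually monotonic;
// otherwise fall back to steady_clock.
using bench_clock = std::conditional<
    std::chrono::high_resolution_clock::is_steady,
    std::chrono::high_resolution_clock,
    std::chrono::steady_clock>::type;

int main() {
    auto t0 = bench_clock::now();
    // ... code to measure ...
    auto t1 = bench_clock::now();
    std::cout << std::chrono::duration<double>(t1 - t0).count() << " s\n";
}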
I have a simple piece of code and I used clock() and other suggested methods to measure the running time of the program. The problem is that I get different values from run to run.
Is there any way to measure the real execution time of the program?
Thanks in advance
One way of doing it uses #include <ctime>:
clock_t t = clock(); // take a start time
// ... do something
clock_t dt = clock() - t; // take elapsed time
cout << ((double)dt / CLOCKS_PER_SEC) * 1000; // duration in MILLIseconds.
The other approach uses the high_resolution_clock of #include <chrono>:
chrono::high_resolution_clock::time_point t = chrono::high_resolution_clock::now();
//... do something
chrono::high_resolution_clock::time_point t2 = chrono::high_resolution_clock::now();
cout << chrono::duration_cast<chrono::duration<double>>(t2 - t).count();
// or if you prefer duration_cast<milliseconds>(t2 - t).count();
In any case, it's normal to see small variations. The first reason is the other programs running on your PC. The second reason is the clock accuracy (for example, the famous 15 milliseconds on Windows).
I am executing code to calculate the time taken by a matrix multiplication routine.
I have created four threads and called the Calculate method like this:
std::thread t1( Calculate );
std::thread t2( Calculate );
std::thread t3( Calculate );
std::thread t4( Calculate );
t1.join();
t2.join();
t3.join();
t4.join();
This is the code where I am doing the matrix multiplication
void Calculate()
{
clock_t starttime = clock();
// some Code
clock_t endtime = clock();
cout << "Time Taken:"<<diffclock(endtime, starttime)<<"sec."<<endl;
}
This is the method to calculate time difference:
double diffclock(clock_t clock1,clock_t clock2)
{
double diffticks=clock1-clock2;
double diffms=(diffticks)/CLOCKS_PER_SEC;
return diffms;
}
After execution, the total time taken is displayed incorrectly. The operation takes around 22 seconds, but the code reports nearly 32 seconds. I have checked it with a stopwatch, and the output of this code is incorrect.
As per the documentation of clock
In order to measure the time spent in a program, clock() should be called
at the start of the program and its return value subtracted from the value
returned by subsequent calls. The value returned by clock() is defined for
compatibility across systems that have clocks with different resolutions.
To determine the time in seconds, the value returned by clock() should be
divided by the value of the macro CLOCKS_PER_SEC. CLOCKS_PER_SEC is defined
to be one million in <time.h>.
However, the time returned by this timing code contradicts the time reported by the IDE. I am using Code::Blocks here.
Am I missing something?
Since you're on C++11, you can use std::chrono:
std::chrono::time_point<std::chrono::system_clock> start, end;
start = std::chrono::system_clock::now();
// calculations here...
end = std::chrono::system_clock::now();
std::chrono::duration<double> elapsed = end-start;
std::cout << "Elapsed time: " << elapsed.count() << "s\n";
Note that you should also use a std::mutex when accessing cout from multiple threads. Your timing code looks fine, but the cout output is probably getting interleaved.
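For the cout interleaving point, here is a minimal sketch of guarding the output with a mutex; the name print_mutex is mine, and the matrix work is elided:
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex print_mutex; // shared by all threads

void Calculate()
{
    auto start = std::chrono::system_clock::now();
    // ... matrix multiplication here ...
    auto end = std::chrono::system_clock::now();
    std::chrono::duration<double> elapsed = end - start;

    std::lock_guard<std::mutex> lock(print_mutex); // one thread prints at a time
    std::cout << "Time Taken: " << elapsed.count() << " sec.\n";
}

int main()
{
    std::thread t1(Calculate), t2(Calculate), t3(Calculate), t4(Calculate);
    t1.join(); t2.join(); t3.join(); t4.join();
}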
I'm using time.h in C++ to measure the timing of a function.
clock_t t = clock();
someFunction();
printf("\nTime taken: %.4fs\n", (float)(clock() - t)/CLOCKS_PER_SEC);
However, I'm always getting the time taken as 0.0000. clock() and t, when printed separately, have the same value. I would like to know if there is a way to measure the time precisely (maybe on the order of nanoseconds) in C++. I'm using VS2010.
C++11 introduced the <chrono> API, which you can use to get nanoseconds:
auto begin = std::chrono::high_resolution_clock::now();
// code to benchmark
auto end = std::chrono::high_resolution_clock::now();
std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(end-begin).count() << "ns" << std::endl;
For a more meaningful value, it is good to run the function several times and compute the average:
auto begin = std::chrono::high_resolution_clock::now();
uint32_t iterations = 10000;
for(uint32_t i = 0; i < iterations; ++i)
{
// code to benchmark
}
auto end = std::chrono::high_resolution_clock::now();
auto duration = std::chrono::duration_cast<std::chrono::nanoseconds>(end-begin).count();
std::cout << duration << "ns total, average : " << duration / iterations << "ns." << std::endl;
But remember that the for loop and the assignments to the begin and end variables use some CPU time too.
I usually use the QueryPerformanceCounter function.
example:
LARGE_INTEGER frequency; // ticks per second
LARGE_INTEGER t1, t2; // ticks
double elapsedTime;
// get ticks per second
QueryPerformanceFrequency(&frequency);
// start timer
QueryPerformanceCounter(&t1);
// do something
...
// stop timer
QueryPerformanceCounter(&t2);
// compute and print the elapsed time in millisec
elapsedTime = (t2.QuadPart - t1.QuadPart) * 1000.0 / frequency.QuadPart;
The following text, which I completely agree with, is quoted from Optimizing software in C++ (good reading for any C++ programmer):
The time measurements may require a very high resolution if time
intervals are short. In Windows, you can use the
GetTickCount or
QueryPerformanceCounter functions for millisecond resolution. A much
higher resolution can be obtained with the time stamp counter in the
CPU, which counts at the CPU clock frequency.
There is a problem that "the clock frequency may vary dynamically and that
measurements are unstable due to interrupts and task switches."
In C or C++ I usually do it like below. If it still fails, you may consider using the rdtsc intrinsics.
struct timeval time;
gettimeofday(&time, NULL); // Start Time
long totalTime = (time.tv_sec * 1000) + (time.tv_usec / 1000);
//........ call your functions here
gettimeofday(&time, NULL); //END-TIME
totalTime = (((time.tv_sec * 1000) + (time.tv_usec / 1000)) - totalTime);