What is the best way to measure computation time, with either the C++ standard library or Qt?
I know of ctime, but I have an idea Qt could be of use here.
Thanks!
There's the QTime class, which can measure time: you start it via start() and retrieve the elapsed milliseconds via the elapsed() method.
If you want something more advanced, you can go for Boost.Chrono for serious time wrangling. It gets real hairy real quick though, and the docs are a bit sparse (as always with Boost), but it's really one of the cleanest and best libraries if you need something of that caliber.
It all depends on what you want to do though, because "measuring time of computation" is a very broad description. Do you actually want to profile your application? Then maybe a profiler tool might be more suitable.
Also, if you just want to get the raw time it takes to execute the program, there's the time command in Linux.
Personally, I would use QElapsedTimer:
http://doc.qt.io/qt-4.8/qelapsedtimer.html
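A minimal sketch of the suggested approach (this assumes a Qt project; doCalculation() is a hypothetical stand-in for the code being measured):

```cpp
#include <QElapsedTimer>
#include <QDebug>

void timeIt()
{
    QElapsedTimer timer;
    timer.start();

    doCalculation();  // hypothetical: replace with your workload

    qDebug() << "Computation took" << timer.elapsed() << "ms";
}
```

Unlike QTime, QElapsedTimer uses a monotonic clock where the platform provides one, so the measurement is not affected by changes to the system time.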
If you develop for Windows, you can use this from WINAPI:
DWORD start = ::GetTickCount();
calculation();
DWORD result = ::GetTickCount() - start;
The DWORD result will contain the elapsed time in milliseconds.
Note that this way of measuring is not super precise: the resolution varies between 10 and 16 ms. But if you just want to display something like "It took 5.37 seconds to calculate the meaning of life", it will suffice.
Related
I am currently implementing a PID controller for a project I am doing, but I realized I don't know how to ensure a fixed interval for each iteration. I want the PID controller to run at a frequency of 10 Hz, but I don't want to use any sleep functions or anything else that would slow down the thread it's running in. I've looked around, but I cannot for the life of me find any good topics/functions that simply give me an accurate measurement of milliseconds. Those that I have found use time_t or clock_t, but time_t only seems to give seconds(?), and clock_t varies greatly depending on different factors.
Is there any clean and good way to simply check whether >= 100 milliseconds have passed since a given point in time in C++? I'm using the Qt5 framework and the OpenCV library, and the program is running on an ODROID X-2, if that's of any help to anyone.
Thank you for reading, Christian.
I don't know much about the ODROID X-2 platform, but if it's at all Unix-like you may have access to gettimeofday or clock_gettime, either of which would provide a higher-resolution clock if available on your hardware.
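For instance, clock_gettime with CLOCK_MONOTONIC is a good fit for the ">= 100 ms since a given point" check, because the monotonic clock never jumps backwards. A sketch, assuming a POSIX system (the helper names are mine):

```cpp
#include <ctime>

// Milliseconds between two timespec readings from clock_gettime().
double diff_ms(const timespec& a, const timespec& b)
{
    return (b.tv_sec - a.tv_sec) * 1000.0 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

// True once at least 'interval_ms' milliseconds have passed since 'last'.
bool interval_elapsed(const timespec& last, double interval_ms)
{
    timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return diff_ms(last, now) >= interval_ms;
}
```

For the 10 Hz loop, take one reading per iteration, run the controller only when interval_elapsed(last, 100.0) becomes true, and then advance last by 100 ms so errors don't accumulate.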
As the title suggests, I'm interested in obtaining the CPU clock cycles used by a process in kernel mode only. I know there is an API called QueryProcessCycleTime which returns the CPU clock cycles used by the threads of the process, but this value includes cycles spent in both user mode and kernel mode. How can I obtain the cycles spent in kernel mode only? Do I need to get this using performance counters? If yes, which one should I use?
Thanks in advance for your answers.
I've just found an interesting article that describes almost what you ask for. It's on MSDN Internals.
They write there that if you were using C# or C++/CLI, you could easily get that information from an instance of the System.Diagnostics.Process class, pointed at the right PID. But it would give you a TimeSpan from PrivilegedProcessorTime, so a "pretty time" instead of cycles.
However, they also point out that all that .NET code is actually a thin wrapper over unmanaged APIs, so you should be able to easily get it from native C++ too. They ILDASM'ed that class to show what it calls, but the image is missing. I've just done the same, and it uses GetProcessTimes from kernel32.dll.
So, again, checking MSDN: it returns LPFILETIME structures. So "pretty time", not cycles, again.
The description of that function points out that if you want clock cycles, you should use the QueryProcessCycleTime function. This actually returns the number of clock cycles... but user and kernel mode counted together.
Now, summing up:
you can read userTIME
you can read kernelTIME
you can read (user+kernel)CYCLES
So you have almost everything needed. By some simple math:
u_cycles = all_cycles * u_time / (u_time + k_time)
k_cycles = all_cycles * k_time / (u_time + k_time)
Of course this will be some approximation due to rounding etc.
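The arithmetic itself is trivial; here is a sketch of just the split (the names are mine, and total_cycles and the two times would come from QueryProcessCycleTime and GetProcessTimes as described above):

```cpp
#include <cstdint>

// Approximate share of 'total_cycles' spent in one mode, given that
// mode's time and the combined user+kernel time. Both times must use
// the same unit (e.g. 100-ns FILETIME ticks); integer division rounds down.
uint64_t split_cycles(uint64_t total_cycles, uint64_t mode_time, uint64_t total_time)
{
    return total_time == 0 ? 0 : total_cycles * mode_time / total_time;
}

// k_cycles = split_cycles(all_cycles, k_time, u_time + k_time)
// u_cycles = split_cycles(all_cycles, u_time, u_time + k_time)
```

Note that the multiplication can overflow uint64_t for a long-running process with large cycle counts; if that's a concern, do the math in long double instead.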
Also, this has a gotcha: you have to invoke two functions (GetProcessTimes, QueryProcessCycleTime) to get all the information, so there will be a slight delay between their readings, and therefore your calculation will probably slip a little, since the target process is still running and burning time in between.
If you cannot tolerate this (small?) noise in the measurement, I think you can circumvent it by temporarily suspending the process:
suspend the target
wait a little and ensure it is suspended
read the time stats (GetProcessTimes)
read the cycle stats (QueryProcessCycleTime)
then resume the process and calculate values
I think this will ensure the two readings are consistent, but in turn each such reading will impact the overall performance of the measured process, so things like "wall time" will no longer be measurable, unless you apply some correction for the time spent in suspension.
There may be a better way to get the separate clock cycles, but I have not found one, sorry. You could try looking inside QueryProcessCycleTime to see what source it reads the data from: maybe you are lucky, and it reads A and B and returns A+B, and you could peek at those sources separately. I have not checked it.
Take a look at GetProcessTimes. It'll give you the amount of kernel and user time your process has used.
Suppose I want to measure the time that a certain piece of code takes. For that I would normally do something like this
clock_t startTime = clock();
//do stuff
//do stuff
//do stuff
//do stuff
float secsElapsed = (float)(clock() - startTime)/CLOCKS_PER_SEC;
What if the program is multithreaded and context switches occur within the part which I want to measure? How would I measure the time that my code takes to execute excluding time spent on other threads? Even if there are tools that do it, I would very much like to know how they're doing it.
There are different ways to measure how long code takes to execute.
If you are interested in the relative performance of certain functions, a profiler is the only way to go. Note that this will de-emphasise the impact of blocking I/O due to the computation overheads it induces.
If you want the clock-based time of certain functions, there are loads of options.
Personally I would say gettimeofday is sufficient.
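For example (POSIX only; gettimeofday gives microsecond resolution at best and follows the wall clock, so it can jump if the system time changes):

```cpp
#include <sys/time.h>

// Seconds elapsed between two gettimeofday() readings.
double seconds_between(const timeval& start, const timeval& end)
{
    return (end.tv_sec - start.tv_sec) + (end.tv_usec - start.tv_usec) / 1e6;
}

// Usage:
//   timeval t0, t1;
//   gettimeofday(&t0, nullptr);
//   do_work();
//   gettimeofday(&t1, nullptr);
//   double secs = seconds_between(t0, t1);
```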
If you want to get precise, use RDTSC.
If you want to get really precise, you'll want something like this (x86 only; __rdtsc() is the compiler intrinsic from <x86intrin.h> on GCC/Clang, and uint64_t comes from <cstdint>):
uint64_t t1 = __rdtsc();
uint64_t t2 = __rdtsc();  // t2 - t1 measures the overhead of reading the counter itself
my_code();
uint64_t t3 = __rdtsc();
uint64_t my_code_time = (t3 - t2) - (t2 - t1);
You will need to repeat this block to account for thread scheduling discrepancies, and also pay attention to caching effects.
This is why code benchmarking basically sucks: you can't know how long it takes. Things like being pre-empted by the OS are unpredictable at best. Use a professional profiler, as they may have code in them that can deal with these problems; otherwise don't bother. Writing clock()-style things is largely meaningless.
From the Linux terminal use 'time path_to_app'
This will return everything you want to know.
I have prepared two very simple classes. The first one, ProfileHelper, populates the start time in the constructor and the end time in the destructor. The second class, ProfileHelperStatistic, is a container with extra statistical capability (a std::multimap plus a few methods to return the average, standard deviation and other fun stuff).
I have used this idea often for profiling. I guess you could make it work even in a multi-threaded environment. It would require a bit of work, but I don't think it would be too difficult.
Have a look at this question for more information C++ Benchmark tool.
I am curious whether there is a built-in function in C++ for measuring execution time.
I am using Windows at the moment. In Linux it's pretty easy...
The best way on Windows, as far as I know, is to use QueryPerformanceCounter and QueryPerformanceFrequency.
QueryPerformanceCounter(LARGE_INTEGER*) places the performance counter's value into the LARGE_INTEGER passed.
QueryPerformanceFrequency(LARGE_INTEGER*) places the frequency the performance counter is incremented into the LARGE_INTEGER passed.
You can then find the execution time by recording the counter as execution starts, and then recording the counter when execution finishes. Subtract the start from the end to get the counter's change, then divide by the frequency to get the time in seconds.
LARGE_INTEGER start, finish, freq;
QueryPerformanceFrequency(&freq);
QueryPerformanceCounter(&start);
// Do something
QueryPerformanceCounter(&finish);
std::cout << "Execution took "
          << ((finish.QuadPart - start.QuadPart) / (double)freq.QuadPart)
          << " seconds" << std::endl;
It's pretty easy under Windows too; in fact, it's the same function on both: std::clock, defined in <ctime>.
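A sketch of that portable pattern (note one caveat: on POSIX systems std::clock reports CPU time consumed by the process, while on Windows it approximates wall-clock time):

```cpp
#include <ctime>

// Seconds represented by the difference of two std::clock() readings.
double clock_seconds(std::clock_t start, std::clock_t end)
{
    return static_cast<double>(end - start) / CLOCKS_PER_SEC;
}

// Usage:
//   std::clock_t t0 = std::clock();
//   do_work();
//   double secs = clock_seconds(t0, std::clock());
```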
You can use the Windows API Function GetTickCount() and compare the values at start and end. Resolution is in the 16 ms ballpark. If for some reason you need more fine-grained timings, you'll need to look at QueryPerformanceCounter.
C++ has no built-in functions for high-granularity measuring code execution time, you have to resort to platform-specific code. For Windows try QueryPerformanceCounter: http://msdn.microsoft.com/en-us/library/ms644904(VS.85).aspx
The functions you should use depend on the timer resolution you need. Some of them give 10 ms resolution; those are easier to use. Others require more work but give much higher resolution (and might cause you headaches in some environments; your dev machine might work fine, though).
http://www.geisswerks.com/ryan/FAQS/timing.html
This article mentions:
timeGetTime
RDTSC (a processor feature, not an OS feature)
QueryPerformanceCounter
C++ works on many platforms. Why not use something that also works on many platforms, such as the Boost libraries?
Look at the documentation for the Boost Timer Library
I believe it is a header-only library, which means it is simple to set up and use...
This is really annoying me as I have done it before, about a year ago and I cannot for the life of me remember what library it was.
Basically, the problem is that I want to be able to call a method a certain number of times or for a certain period of time at a specified interval.
One example would be I would like to call a method "x" starting from now, 10 times, once every 0.5 seconds. Alternatively, call method "x" starting from now, 10 times, until 5 seconds have passed.
Now I thought I used a boost library for this functionality but I can't seem to find it now and feeling a bit annoyed. Unfortunately I can't look at the code again as I'm not in possession of it any more.
Alternatively, I could have dreamt this all up and it could have been proprietary code. Assuming there is nothing out there that does what I would like, what is currently the best way of producing this behaviour? It would need to be high-resolution, accurate to a millisecond.
It doesn't matter if it blocks the thread that it is executed from or not.
Thanks!
Maybe you are talking about boost::asio. It is mainly used for networking, but it can also be used for scheduling timers.
It can be used in conjunction with boost::threads.
A combination of boost::this_thread::sleep and the time durations found in Boost.Date_Time?
It's probably bad practice to answer your own question, but I wanted to add something to what Nikko suggested, as I have now implemented the functionality with the two suggested libraries. Someone might find this useful at some point.
void SleepingExampleTest::sleepInterval(int frequency, int cycles, boost::function<void()> method) {
    boost::posix_time::time_duration interval(boost::posix_time::microseconds(1000000 / frequency));
    boost::posix_time::ptime timer = boost::posix_time::microsec_clock::local_time() + interval;
    boost::this_thread::sleep(timer - boost::posix_time::microsec_clock::local_time());

    while (cycles--) {
        method();
        timer = timer + interval;
        boost::this_thread::sleep(timer - boost::posix_time::microsec_clock::local_time());
    }
}
Hopefully people can understand this simple example that I have knocked up. It uses a bound function just to allow flexibility.
It appears to work with about 50 microseconds of accuracy on my machine. Before taking into account the skew from the time it takes to execute the called method, the accuracy was a couple of hundred microseconds, so compensating for it was definitely worth it.
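For anyone reading this without Boost: since C++11 the same drift-free pattern can be written with just the standard library. A sketch, same idea as above: advance a fixed deadline each cycle instead of sleeping a fixed amount, so scheduling jitter does not accumulate.

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Call 'method' 'cycles' times at roughly 'frequency' Hz.
// The deadline advances by a fixed interval each cycle, so
// errors from scheduling jitter do not accumulate across iterations.
void sleepInterval(int frequency, int cycles, const std::function<void()>& method)
{
    const auto interval = std::chrono::microseconds(1000000 / frequency);
    auto deadline = std::chrono::steady_clock::now() + interval;

    while (cycles-- > 0) {
        method();
        std::this_thread::sleep_until(deadline);
        deadline += interval;
    }
}
```

std::chrono::steady_clock is monotonic, which makes it a better choice here than the wall clock used by boost::posix_time::microsec_clock::local_time().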