Measure std::system's real execution time in C++

Is it possible to measure std::system(...)'s execution time?
Or does the function return immediately, making that impossible? In that case, is there any other way to measure the execution time of a forked program?
Thanks for any help.

Unless you are targeting a system that is neither POSIX (with an sh-like shell) nor Windows, std::system is synchronous: it only returns once the command has finished. You can use the standard high-resolution clock to measure wall time:
#include <chrono>
#include <cstdlib>
#include <iostream>

int main()
{
    auto before = std::chrono::high_resolution_clock::now();
    std::system("sleep 3");
    auto after = std::chrono::high_resolution_clock::now();
    auto duration = std::chrono::duration_cast<std::chrono::microseconds>(
        after - before);
    std::cout << "It took " << duration.count() << " microseconds\n";
}
If you are instead interested in the amount of CPU time that the process used, I don't think C++ has a standard, cross-platform way of offering that to you.

Try this code (for Linux & POSIX); it uses times() to read the CPU time consumed by child processes:
#include <sys/times.h>   // struct tms, times()
#include <unistd.h>      // sysconf()
#include <iostream>
#include <cstdlib>

int main()
{
    struct tms st_time;
    struct tms ed_time;
    times(&st_time);
    std::system("your call");
    times(&ed_time);
    // times() reports in clock ticks of sysconf(_SC_CLK_TCK) per second
    // (not CLOCKS_PER_SEC); tms_cutime/tms_cstime cover waited-for children.
    const long ticks_per_sec = sysconf(_SC_CLK_TCK);
    std::cout << "Total child process time = "
              << double((ed_time.tms_cutime - st_time.tms_cutime)
                      + (ed_time.tms_cstime - st_time.tms_cstime)) / ticks_per_sec
              << " seconds\n";
}

It is implementation specific (since, AFAIU, the C++ standard does not say much about the command processor used by std::system; that command processor might not even run any external process).
But let's focus on Linux (or at least on other POSIX-like systems). There you could use the lower-level system calls fork(2), execve(2), and wait4(2), and read the struct rusage (see getrusage(2) for details) filled in by a successful wait4 call, notably to get the CPU time. If you want just the elapsed real time, use the <chrono> C++ facilities (or lower-level time(7) interfaces such as clock_gettime(2)...).
Notice that the standard C clock function reports processor time in the current process, so it won't measure what a child process forked by std::system consumes.
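To illustrate, here is a minimal sketch of that fork/exec/wait4 approach (Linux/POSIX only; "sleep 3" is just a placeholder command, execlp is used instead of execve for brevity, and error handling is reduced to the bare minimum):
#include <sys/resource.h>   // struct rusage
#include <sys/wait.h>       // wait4()
#include <unistd.h>         // fork(), execlp(), _exit()
#include <cstdio>

int main()
{
    pid_t pid = fork();
    if (pid == 0) {
        // Child: replace this process image with the command to measure.
        execlp("sleep", "sleep", "3", (char *)nullptr);
        _exit(127);   // only reached if exec failed
    }
    int status = 0;
    struct rusage ru;
    if (wait4(pid, &status, 0, &ru) == pid) {
        // ru_utime/ru_stime hold the child's user and system CPU time.
        std::printf("user CPU: %ld.%06ld s, system CPU: %ld.%06ld s\n",
                    (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec,
                    (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
    }
}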

Related

I created basic high-precision multitasking C++ code; what is this algorithm/implementation called?

So I always wanted to implement basic multitasking code, specifically asynchronous code (not concurrent code), without using interrupts, Boost, complex threading, or complex multitasking implementations or algorithms.
I did some programming on MCUs such as the ATmega328. In most cases, to get the most efficiency and use out of an MCU, multitasking is required, in which functions run at the same time ("perceived" as running at the same time) without halting the MCU to run other functions.
For example, "function_a" requires a delay, but it should not halt the MCU during that delay, so that other functions like "function_b" can also run asynchronously.
To do this on microcontrollers with only one CPU/thread, an algorithm that uses timers and keeps track of the time is used to implement multitasking. It's really simple and always works. I have taken the concept from MCUs and applied it to desktop PCs in C++ using high-precision timers; the code is given below.
I am really surprised that no one seems to use this form of asynchronous algorithm in C++, and I haven't seen any examples of it on the internet.
My question now is: what exactly is this algorithm and implementation called in computer science or computer engineering? I read that this implementation is called a "state machine", but when I googled that I did not see any code similar to mine that works purely with timers in C++.
The code below does the following: it runs function 1, but at the same time also runs function 2, without needing to halt the application.
Both functions also need to execute such that they do not run blatantly continuously; instead, each runs at a specified interval (function_1 runs every 1 sec and function_2 every 3 secs).
Finding a similar implementation for the requirements above on the internet for C++ is really hard. The code below is simple in nature and works as intended:
// Asynchronous state machine using one CPU, C++ example:
// Tested working multitasking code:
#include <iostream>
#include <ctime>
#include <ratio>
#include <chrono>

using namespace std::chrono;

// At the first execution of the program, capture the time as zero reference
// and store it in "t2" (and in "t3" for the second task).
auto t2 = high_resolution_clock::now();
auto t3 = high_resolution_clock::now();

int main()
{
    while (1)
    {
        // Always update the time reference variable "t1" to the current time:
        auto t1 = high_resolution_clock::now();
        // Check the difference between the zero reference time and the current
        // time, and see whether it exceeds the time specified in the "if":
        duration<double> time_span_1 = duration_cast<duration<double>>(t1 - t2);
        duration<double> time_span_2 = duration_cast<duration<double>>(t1 - t3);
        if (time_span_1.count() >= 1)
        {
            printf("This is function_1:\n\n");
            std::cout << time_span_1.count() << " Secs (t1-t2)\n\n";
            // Set t2 to capture the current time again as zero reference.
            t2 = high_resolution_clock::now();
            std::cout << "------------------------------------------\n\n";
        }
        else if (time_span_2.count() >= 3)
        {
            printf("This is function_2:\n\n");
            std::cout << time_span_2.count() << " Secs (t1-t3)\n\n";
            // Set t3 to capture the current time again as zero reference.
            t3 = high_resolution_clock::now();
            std::cout << "------------------------------------------\n\n";
        }
    }
    return 0;
}
What is the algorithm...called?
Some people call it "super loop." I usually write it like this:
while (1) {
    if ( itsTimeToPerformTheHighestPriorityTask() ) {
        performTheHighestPriorityTask();
        continue;
    }
    if ( itsTimeToPerformTheNextHighestPriorityTask() ) {
        performTheNextHighestPriorityTask();
        continue;
    }
    ...
    if ( itsTimeToPerformTheLowestPriorityTask() ) {
        performTheLowestPriorityTask();
        continue;
    }
    waitForInterrupt();
}
The waitForInterrupt() call at the bottom is optional. Most processors have an op-code that puts the processor into a low-power state (basically, it halts the processor for some definition of "halt") until an interrupt occurs.
Halting the CPU when there's no work to be done can greatly improve battery life if the device is battery powered, and it can help with thermal management if that's an issue. But the price you pay for using it is that your timers and all of your I/O must be interrupt-driven.
I would describe the posted code as "microcontroller code": it assumes it is the only program running on the CPU and that it can therefore burn as many CPU cycles as it wants without adverse consequences. That assumption is often valid on microcontrollers (since usually a microcontroller has no OS or other programs installed), but "spinning the CPU" is not generally considered acceptable behavior in the context of a modern PC/desktop OS, where programs are expected to be efficient and share the computer's resources with each other.
In particular, "spinning" the CPU on a modern PC (or Mac) introduces the following problems:
It uses up 100% of the cycles on a CPU core, which means those cycles are unavailable to any other programs that might otherwise be able to make productive use of them.
It prevents the CPU from ever going to sleep, which wastes power -- that's bad on a desktop or server because it generates unwanted/unnecessary heat, and it's worse on a laptop because it quickly drains the battery.
Modern OS schedulers keep track of how much CPU time each program uses, and if the scheduler notices that a program is continuously spinning the CPU, it will likely respond by drastically reducing that program's scheduling-priority, in order to allow other, less CPU-hungry programs to remain responsive. Having a reduced CPU priority means that the program is less likely to be scheduled to run at the moment when it wants to do something useful, making its timing less accurate than it otherwise might be.
Users who run system-monitoring utilities like Task Manager (in Windows) or Activity Monitor (under MacOS/X) or top (in Linux) will see the program continuously taking 100% of a CPU core and will likely assume the program is buggy and kill it. (and unless the program actually needs 100% of a CPU core to do its job, they'll be correct!)
In any case, it's not difficult to rewrite the program to use almost no CPU cycles instead. Here's a version of the posted program that uses approximately 0% of a CPU core but still calls the desired functions at the desired intervals. It also prints out how close it came to the ideal timing, which is usually within a few milliseconds on my machine; if you need better timing accuracy than that, you can get it by running the program at higher/real-time priority instead of as a normal-priority task:
#include <iostream>
#include <ctime>
#include <chrono>
#include <thread>
#include <algorithm>   // for std::min

using namespace std::chrono;

int main(int argc, char ** argv)
{
    // These variables will contain the times at which we next want to execute each task.
    // Initialize them to the current time so that each task will run immediately on startup.
    auto nextT1Time = high_resolution_clock::now();
    auto nextT3Time = high_resolution_clock::now();

    while (1)
    {
        // Compute the next time at which we need to wake up and execute one of our tasks
        auto nextWakeupTime = std::min(nextT1Time, nextT3Time);

        // Sleep until the desired time
        std::this_thread::sleep_until(nextWakeupTime);

        bool t1Executed = false, t3Executed = false;
        high_resolution_clock::duration t1LateBy, t3LateBy;

        auto now = high_resolution_clock::now();
        if (now >= nextT1Time)
        {
            t1Executed = true;
            t1LateBy = now - nextT1Time;
            // schedule our next execution to be 1 second later
            nextT1Time = nextT1Time + seconds(1);
        }
        if (now >= nextT3Time)
        {
            t3Executed = true;
            t3LateBy = now - nextT3Time;
            // schedule our next execution to be 3 seconds later
            nextT3Time = nextT3Time + seconds(3);
        }

        // Since the calls to std::cout can be slow, we execute them down here, after the
        // functions have been called but before (nextWakeupTime) is recalculated on the
        // next iteration of the loop. That way the time spent printing to stdout during
        // the T1 task won't hold off execution of the T3 task.
        if (t1Executed) std::cout << "function T1 was called (it executed " << duration_cast<microseconds>(t1LateBy).count() << " microseconds after the expected time)" << std::endl;
        if (t3Executed) std::cout << "function T3 was called (it executed " << duration_cast<microseconds>(t3LateBy).count() << " microseconds after the expected time)" << std::endl;
    }
    return 0;
}

Execution time in C++

Trying to find the execution time of my code using this:
#include <iostream>
#include <cstdlib>   // for system()
#include <ctime>

using namespace std;

int main()
{
    clock_t t1, t2;
    t1 = clock();
    // code goes here
    t2 = clock();
    float diff = ((float)t2 - (float)t1);
    cout << "Execution Time = " << diff / CLOCKS_PER_SEC << endl;
    system("pause");
    return 0;
}
but it returns a different time every time it is executed with the same code. Is the code correct?
I want to check the execution time of my code in different scenarios, but shouldn't it display the same time when I execute the same code twice?
As mentioned here, clock ticks are units of time of a constant but system-specific length, as returned by the function clock. With that in mind, there are a couple of facts to consider when using this method to measure the execution time of a piece of code:
1) The length of time a tick represents depends on the OS. Moreover, there are OS-internal counters for clock ticks; please see this SuperUser question.
2) Resources need to be allocated for any process to run on the system. If the processor is busy with another, more important process, or has run out of resources, your process is put in a queue and runs with lower priority. But since the clock ticks are stored in an internal counter (as mentioned above), the counter keeps incrementing even while other processes are using the processor.
Conclusion
Your method of finding the execution time based on clock ticks will not yield exact results; it will only give you an idea of the execution times.

How to time event in C++?

I'd like to be able to get the number of nanoseconds it takes to do something in my C++ program: object creation, the time for a function to do its thing, etc.
In Java, we'd do something along the lines of:
long now = System.currentTimeMillis();
// stuff
long diff = (System.currentTimeMillis() - now);
How would you do the same in C++?
The <chrono> library in standard C++ provides the best way to do this. It offers a standard, type-safe, generic API for clocks.
#include <chrono>
#include <iostream>

int main() {
    using std::chrono::duration_cast;
    using std::chrono::nanoseconds;
    typedef std::chrono::high_resolution_clock clock;

    auto start = clock::now();
    // stuff
    auto end = clock::now();
    std::cout << duration_cast<nanoseconds>(end - start).count() << "ns\n";
}
The actual resolution of the clock will vary between implementations, but this code will always show results in nanoseconds, as accurately as possible given the implementation's tick period.
In C++11 you can do it using the chrono library, where:
Class template std::chrono::duration represents a time interval.
It consists of a count of ticks of type Rep and a tick period, where the tick period is a compile-time rational constant representing the number of seconds from one tick to the next.
Currently implemented in GCC 4.5.1 (not yet in VC++). See the sample code from cppreference.com, runnable on Ideone.com: execution time of a function call.
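For instance, a minimal sketch (my own example, not from the answer above) of a duration with a custom tick period:
#include <chrono>
#include <ratio>
#include <iostream>

int main()
{
    // A duration whose tick is 1/60 of a second (e.g. one video frame).
    using frames = std::chrono::duration<long, std::ratio<1, 60>>;

    frames f(90);   // 90 frames = 1.5 seconds
    auto s = std::chrono::duration_cast<std::chrono::seconds>(f);
    std::cout << f.count() << " frames = " << s.count() << " s (truncated)\n";
}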
Take a look at clock and clock_t. For the resolution you're talking about, I don't think there's native support in C++. To get meaningful values, you'll have to time multiple calls or constructions, or use a profiler (preferred).
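A minimal sketch of the "time multiple calls" idea; someWork() is a hypothetical placeholder for whatever you want to measure:
#include <ctime>
#include <iostream>

// Hypothetical stand-in for the code being measured.
void someWork()
{
    volatile int x = 0;
    for (int i = 0; i < 1000; ++i) x = x + i;
}

int main()
{
    const int N = 100000;   // repeat enough times to be measurable
    std::clock_t t0 = std::clock();
    for (int i = 0; i < N; ++i) someWork();
    std::clock_t t1 = std::clock();
    double perCall = double(t1 - t0) / CLOCKS_PER_SEC / N;
    std::cout << "approx. " << perCall * 1e9 << " ns per call\n";
}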
I asked this exact question earlier today. The best solution I have at the moment is to use SDL and call:
Uint32 a_time = SDL_GetTicks(); // returns a Uint32 count of milliseconds since SDL_Init was called
Although this is probably going to give you some overhead, even if you init SDL with just the timer functionality (SDL_Init(SDL_INIT_TIMER)).
Hope this helps you; I settled on this as a solution because it is portable.
Asked and answered many times.
How do I do High Resolution Timing in C++ on Windows?
C++ obtaining milliseconds time on Linux — clock() doesn't seem to work properly
High Resolution Timing Part of Your Code
High resolution timer with C++ and Linux?
If you're using C++11 you can consider chrono.

Linux C++ time measurement library, fast printing library

I just started programming C++ on Linux. Can anyone recommend a good way to measure elapsed time in code, ideally to nanosecond precision, though millisecond precision will do as well?
I'd also like a fast printing method; I am using std::cout at the moment, but it feels kind of slow.
Thanks.
You could use gettimeofday, or clock_gettime.
To get a time in nanoseconds, use clock_gettime(). To measure elapsed time taken by code, the CLOCK_MONOTONIC_RAW clock type should be used; the other clock types are not really a solution because they are subject to NTP adjustments.
As for the printing part: define "slow". A "general" routine that converts built-in data types into ASCII strings is always going to be slow, and there is also buffering going on (which is good in most cases). If you can make some good assumptions about your data, you can write your own conversion to ASCII that beats a general-purpose solution, and make printing faster.
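As an illustration of that idea, a minimal sketch (my own example) of a specialized integer-to-ASCII conversion that skips locale handling and error checking:
#include <cstdio>

// Convert a non-negative integer to ASCII by hand (no locale, no checks).
int toAscii(unsigned v, char *buf)
{
    char tmp[10];
    int n = 0;
    do { tmp[n++] = char('0' + v % 10); v /= 10; } while (v != 0);
    for (int i = 0; i < n; ++i) buf[i] = tmp[n - 1 - i];   // reverse the digits
    buf[n] = '\0';
    return n;
}

int main()
{
    char buf[16];
    toAscii(123456u, buf);
    std::puts(buf);
}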
EDIT:
See also an example of using clock_gettime() function and OS X specific mach_absolute_time() functions here:
stopwatch.h
stopwatch.c
stopwatch_example.c
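And a minimal sketch of the CLOCK_MONOTONIC_RAW approach described above (Linux-specific; on other POSIX systems you would fall back to CLOCK_MONOTONIC):
#include <ctime>
#include <cstdio>

int main()
{
    timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC_RAW, &t0);
    // ... code to measure ...
    clock_gettime(CLOCK_MONOTONIC_RAW, &t1);
    long long ns = (t1.tv_sec - t0.tv_sec) * 1000000000LL
                 + (t1.tv_nsec - t0.tv_nsec);
    std::printf("elapsed: %lld ns\n", ns);
}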
For timing you can use the <chrono> standard library:
#include <chrono>
#include <iostream>
#include <thread>   // for std::this_thread::sleep_for

int main() {
    using Clock = std::chrono::high_resolution_clock;
    using std::chrono::milliseconds;
    using std::chrono::nanoseconds;
    using std::chrono::duration_cast;

    auto start = Clock::now();
    // code to time
    std::this_thread::sleep_for(milliseconds(500));
    auto end = Clock::now();
    std::cout << duration_cast<nanoseconds>(end - start).count() << " ns\n";
}
The actual clock resolution depends on the implementation, but this will always output the correct units.
The performance of std::cout depends on the implementation as well. In my experience, as long as you don't use std::endl everywhere, its performance compares quite well with printf on Linux or OS X. Microsoft's implementation in VC++ seems to be much slower.
Printing is normally slow because of the terminal you're watching it in, rather than because of the printing itself. You can redirect output to a file; then you might see a significant speedup if you're printing a lot to the console.
I think you probably also want to have a look at the time command [0], which reports the time taken by a specific program to complete execution.
[0] http://linux.about.com/library/cmd/blcmdl1_time.htm
Time measurement:
Boost.Chrono: http://www.boost.org/doc/libs/release/doc/html/chrono.html
// note that if you have a modern C++11 (used to be C++0x) compiler you already have this out of the box, since "Boost.Chrono aims to implement the new time facilities in C++0x, as proposed in N2661 - A Foundation to Sleep On."
Boost.Timer: http://www.boost.org/doc/libs/release/libs/timer/
Posix Time from Boost.Date_Time: http://www.boost.org/doc/libs/release/doc/html/date_time/posix_time.html
Fast printing:
FastFormat: http://www.fastformat.org/
Benchmarks: http://www.fastformat.org/performance.html
Regarding the performance of C++ streams, remember std::ios_base::sync_with_stdio; see:
http://en.cppreference.com/w/cpp/io/ios_base/sync_with_stdio
http://www.cplusplus.com/reference/iostream/ios_base/sync_with_stdio/
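A minimal usage sketch: turning the synchronization off lets the C++ streams use their own buffering, which often speeds up heavy std::cout output (at the cost of no longer being able to safely interleave printf and std::cout):
#include <iostream>

int main()
{
    std::ios_base::sync_with_stdio(false);   // decouple C++ streams from C stdio
    for (int i = 0; i < 1000000; ++i)
        std::cout << i << '\n';   // '\n' instead of std::endl avoids a flush per line
}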

What is the best, most accurate timer in C++?

What is the best, most accurate timer in C++?
In C++11 you can portably get to the highest resolution timer with:
#include <iostream>
#include <chrono>
#include "chrono_io"

int main()
{
    typedef std::chrono::high_resolution_clock Clock;
    auto t1 = Clock::now();
    auto t2 = Clock::now();
    std::cout << t2 - t1 << '\n';
}
Example output:
74 nanoseconds
"chrono_io" is an extension to ease I/O issues with these new types and is freely available here.
There is also an implementation of <chrono> available in Boost (it might still be on tip-of-trunk; I'm not sure it has been released).
The answer to this is platform-specific. The operating system is responsible for keeping track of timing, and consequently the C++ language itself provides no language constructs or built-in functions for doing this.
However, here are some resources for platform-dependent timers:
Windows API - SetTimer: http://msdn.microsoft.com/en-us/library/ms644906(v=vs.85).aspx
Unix - setitimer: http://linux.die.net/man/2/setitimer
A cross-platform solution might be boost::asio::deadline_timer.
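For the Boost.Asio option, a minimal sketch along the lines of the library's own timer tutorial (a blocking one-second wait; the asynchronous variant takes a completion handler instead):
#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io;
    // A timer that expires 1 second from now; wait() blocks until expiry.
    boost::asio::deadline_timer timer(io, boost::posix_time::seconds(1));
    timer.wait();
    std::cout << "timer expired\n";
}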
Under Windows it would be QueryPerformanceCounter, though, since you didn't specify any conditions, it is also possible to use an external ultra-high-resolution timer that has a C++ interface for its driver.
The C++ standard doesn't say a whole lot about time. There are a few features inherited from C via the <ctime> header.
The function clock is the only way to get sub-second precision, but the precision may be as low as one second (the tick length is determined by the macro CLOCKS_PER_SEC). Also, it does not measure real time at all, but processor time.
The function time measures real time, but (usually) only to the nearest second.
To measure real time with subsecond precision, you need a nonstandard library.
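To make the distinction concrete, a minimal sketch (my own example) contrasting the two <ctime> facilities: clock() counts processor time, time() counts wall-clock seconds:
#include <ctime>
#include <iostream>

int main()
{
    std::time_t wall0 = std::time(nullptr);   // wall clock, 1-second resolution
    std::clock_t cpu0 = std::clock();         // processor time, CLOCKS_PER_SEC ticks

    volatile double x = 0;
    for (long i = 0; i < 100000000L; ++i) x = x + i;   // burn some CPU

    std::cout << "wall: " << std::difftime(std::time(nullptr), wall0) << " s, "
              << "cpu: " << double(std::clock() - cpu0) / CLOCKS_PER_SEC << " s\n";
}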