Best way to implement a high resolution timer - c++

What is the best way in C++11 to implement a high-resolution timer that continuously checks for time in a loop, and executes some code after it passes a certain point in time? e.g. check what time it is in a loop from 9am onwards and execute some code exactly at 11am. I require the timing to be precise (i.e. no more than 1 microsecond after 9am).
I will be implementing this program on Linux CentOS 7.3, and have no issues with dedicating CPU resources to execute this task.

Instead of implementing this manually, you could use e.g. a systemd.timer. Make sure to specify the desired accuracy via the AccuracySec= option, which can apparently be set as low as 1µs.
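For illustration, a minimal sketch of such a timer unit, assuming a hypothetical mytask.service that performs the actual work:

# mytask.timer (unit and service names are hypothetical)
[Unit]
Description=Run mytask at 11:00 with tight accuracy

[Timer]
OnCalendar=*-*-* 11:00:00
AccuracySec=1us

[Install]
WantedBy=timers.target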

a high-resolution timer that continuously checks for time in a loop,
First of all, you do not want to continuously check the time in a loop; that's extremely inefficient and simply unnecessary.
...executes some code after it passes a certain point in time?
Ok so you want to run some code at a given time in the future, as accurately as possible.
The simplest way is to start a background thread, compute how long until the target time (in the desired resolution), and then put the thread to sleep for that period. When the thread wakes up, it executes the actual task. This should be accurate enough for the vast majority of needs.
The std::chrono library provides calls which make this easy:
System clock in std::chrono
High resolution clock in std::chrono
Here's a snippet of code which does what you want using the system clock (which makes it easier to set a wall clock time):
// c++ --std=c++11 ans.cpp -o ans
#include <chrono>
#include <ctime>
#include <iostream>
#include <thread>

// do some busy work
int work(int count)
{
    int sum = 0;
    for (int i = 0; i < count; i++)
    {
        sum += i;
    }
    return sum;
}

std::chrono::system_clock::time_point make_scheduled_time(int yyyy, int mm, int dd, int HH, int MM, int SS)
{
    tm datetime = tm{};
    datetime.tm_year = yyyy - 1900; // Year since 1900
    datetime.tm_mon = mm - 1;       // Month since January
    datetime.tm_mday = dd;          // Day of the month [1-31]
    datetime.tm_hour = HH;          // Hour of the day [00-23]
    datetime.tm_min = MM;
    datetime.tm_sec = SS;
    time_t ttime_t = mktime(&datetime);
    std::chrono::system_clock::time_point scheduled = std::chrono::system_clock::from_time_t(ttime_t);
    return scheduled;
}

void do_work_at_scheduled_time()
{
    using period = std::chrono::system_clock::period;
    auto sched_start = make_scheduled_time(2019, 9, 17, // date
                                           00, 14, 00); // time

    // Wait until the scheduled time to actually do the work
    std::this_thread::sleep_until(sched_start);

    // Figure out how close to the scheduled time we actually awoke
    auto actual_start = std::chrono::system_clock::now();
    auto start_delta = actual_start - sched_start;
    float delta_ms = float(start_delta.count()) * period::num / period::den * 1e3f;
    std::cout << "worker: awoken within " << delta_ms << " ms" << std::endl;

    // Now do some actual work!
    int sum = work(12345);
    std::cout << "worker: sum = " << sum << std::endl;
}

int main()
{
    std::thread worker(do_work_at_scheduled_time);
    worker.join();
    return 0;
}
On my laptop, the typical latency is about 2-3ms. If you use the high_resolution_clock you should be able to get even better results.
There are other APIs you could use too, such as Boost.Asio, which provides high-resolution waitable timers.
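For illustration, a minimal sketch of an asynchronous wait using Boost.Asio's high_resolution_timer (requires a reasonably recent Boost; older versions spell io_context as io_service, and the two-second deadline here is just a placeholder):

#include <boost/asio.hpp>
#include <chrono>
#include <iostream>

int main()
{
    boost::asio::io_context io;
    // Fire two seconds from now; an absolute time_point works as well.
    boost::asio::high_resolution_timer timer(io,
        std::chrono::high_resolution_clock::now() + std::chrono::seconds(2));
    timer.async_wait([](const boost::system::error_code& ec) {
        if (!ec)
            std::cout << "timer fired\n";
    });
    io.run(); // blocks until all pending handlers have run
}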
I require the timing to be precise (i.e. no more than 1 microsecond after 9am).
Do you really need it to be accurate to the microsecond? Consider that at this resolution, you will also need to take into account all sorts of other factors, including system load, latency, clock jitter, and so on. Your code can start to execute at close to that time, but that's only part of the problem.

My suggestion would be to use timer_create(). This allows you to get notified by a signal at a given time. You can then implement your action in the signal handler.
In any case you should be aware that the accuracy of course depends on the system clock accuracy.
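A minimal sketch of that approach on Linux (may need -lrt on older glibc; note that only async-signal-safe functions may be called from the handler):

#include <signal.h>
#include <time.h>
#include <unistd.h>

static void on_timer(int)
{
    // only async-signal-safe calls here; write() is safe, printf() is not
    const char msg[] = "timer fired\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
}

int main()
{
    struct sigaction sa = {};
    sa.sa_handler = on_timer;
    sigaction(SIGRTMIN, &sa, nullptr);

    struct sigevent sev = {};
    sev.sigev_notify = SIGEV_SIGNAL;
    sev.sigev_signo = SIGRTMIN;
    timer_t timerid;
    timer_create(CLOCK_REALTIME, &sev, &timerid);

    struct itimerspec its = {};
    its.it_value.tv_sec = time(nullptr) + 5; // absolute time: 5 seconds from now
    timer_settime(timerid, TIMER_ABSTIME, &its, nullptr);

    pause(); // wait for the signal to arrive
}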

Related

What is the most efficient way to call a function every n seconds in C++?

So I'm trying to call a function every n seconds. The code below is a simple representation of what I'm trying to achieve. I wanted to know if this method is the only way to achieve it; I would love it if the "if" condition could be avoided.
#include <stdio.h>
#include <time.h>

void print_hello(int i) {
    printf("hello\n");
    printf("%d\n", i);
}

int main() {
    time_t start_t, end_t;
    double diff_t;
    time(&start_t);
    int i = 0;
    while (1) {
        time(&end_t);
        // printf("here in main");
        i = i + 1;
        diff_t = difftime(end_t, start_t);
        if (diff_t == 5) {
            // printf("Execution time = %f\n", diff_t);
            print_hello(i);
            time(&start_t);
        }
    }
    return 0;
}
The usage of time in the OP's program can be reduced to something like:

// get tStart;
// set tEnd = tStart + x;
do {
    // get t;
} while (t < tEnd);

This is what is called busy-waiting.
It might be used to write code with the most precise timing, as well as in other special cases. The drawback is that the waiting consumes full CPU load. (You might even be able to hear this, as the fan noise rises.)
In general, however, spinning is considered an anti-pattern and should be avoided, as processor time that could be used to execute a different task is instead wasted on useless activity.
Another option is to delegate the wake-up to the system, which reduces the load of process/thread to minimum while waiting:
#include <chrono>
#include <iostream>
#include <thread>

void print_hello(int i)
{
    std::cout << "hello\n"
              << i << '\n';
}

int main()
{
    using namespace std::chrono_literals; // to support e.g. 5s for 5 seconds
    auto tStart = std::chrono::system_clock::now();
    for (int i = 1; i <= 3; ++i) {
        auto tEnd = tStart + 2s;
        std::this_thread::sleep_until(tEnd);
        print_hello(i);
        tStart = tEnd;
    }
}
Output:
hello
1
hello
2
hello
3
Live Demo on coliru
(I had to reduce the number of iterations and the waiting times to avoid a time-limit-exceeded error in the online compiler.)
std::this_thread::sleep_until
Blocks the execution of the current thread until specified sleep_time has been reached.
The clock tied to sleep_time is used, which means that adjustments of the clock are taken into account. Thus, the duration of the block might, but might not, be less or more than sleep_time - Clock::now() at the time of the call, depending on the direction of the adjustment. The function also may block for longer than until after sleep_time has been reached due to scheduling or resource contention delays.
The last sentence mentions the drawback of this solution: the OS may decide to wake the thread/process up later than requested. That may happen e.g. if the OS is under high load. In the "normal" case, the latency shouldn't be more than a few milliseconds, so it is usually tolerable.
Please note how tEnd and tStart are updated in the loop. The actual wake-up time is deliberately not used, to prevent latencies from accumulating.

Running code every x seconds, no matter how long execution within loop takes

I'm trying to make an LED blink to the beat of a certain song. The song has exactly 125 bpm.
The code that I wrote seems to work at first, but the longer it runs the bigger the difference in time between the LED flashes and the next beat starts. The LED seems to blink a tiny bit too slow.
I think that happens because lastBlink depends on the blink that happened right before it to stay in sync, instead of syncing to one fixed initial value...
unsigned int bpm = 125;
int flashDuration = 10;
unsigned int lastBlink = 0;

for (;;) {
    if (getTickCount() >= lastBlink + 1000/(bpm/60)) {
        lastBlink = getTickCount();
        printf("Blink!\r\n");
        RS232_SendByte(cport_nr, 4); // LED ON
        delay(flashDuration);
        RS232_SendByte(cport_nr, 0); // LED OFF
    }
}
Add the interval to lastBlink instead of re-reading getTickCount(), because the tick count might have advanced past the exact beat you wanted to wait for:

lastBlink += 1000/(bpm/60);
Busy-waiting is bad, it spins the CPU for no good reason, and under most OS's it will lead to your process being punished -- the OS will notice that it is using up lots of CPU time and dynamically lower its priority so that other, less-greedy programs get first dibs on CPU time. It's much better to sleep until the appointed time(s) instead.
The trick is to dynamically calculate the amount of time to sleep until the next time to blink, based on the current system-clock time. (Simply delaying by a fixed amount of time means you will inevitably drift, since each iteration of your loop takes a non-zero and somewhat indeterminate time to execute).
Example code (tested under MacOS/X, probably also compiles under Linux, but can be adapted for just about any OS with some changes) follows:
#include <stdio.h>
#include <time.h>      // clock_gettime()
#include <unistd.h>
#include <sys/times.h>

// unit conversion code, just to make the conversion more obvious and self-documenting
static unsigned long long SecondsToMillis(unsigned long secs) {return secs*1000;}
static unsigned long long MillisToMicros(unsigned long ms)    {return ms*1000;}
static unsigned long long NanosToMillis(unsigned long nanos)  {return nanos/1000000;}

// Returns the current absolute time, in milliseconds, based on the appropriate high-resolution clock
static unsigned long long getCurrentTimeMillis()
{
#if defined(USE_POSIX_MONOTONIC_CLOCK)
    // Nicer new-style version using clock_gettime() and the monotonic clock
    struct timespec ts;
    return (clock_gettime(CLOCK_MONOTONIC, &ts) == 0) ? (SecondsToMillis(ts.tv_sec)+NanosToMillis(ts.tv_nsec)) : 0;
#else
    // old-school POSIX version using times()
    static clock_t _ticksPerSecond = 0;
    if (_ticksPerSecond <= 0) _ticksPerSecond = sysconf(_SC_CLK_TCK);

    struct tms junk; clock_t newTicks = (clock_t) times(&junk);
    return (_ticksPerSecond > 0) ? (SecondsToMillis((unsigned long long)newTicks)/_ticksPerSecond) : 0;
#endif
}

int main(int, char **)
{
    const unsigned int bpm = 125;
    const unsigned int flashDurationMillis = 10;
    const unsigned int millisBetweenBlinks = SecondsToMillis(60)/bpm;
    printf("Milliseconds between blinks: %u\n", millisBetweenBlinks);

    unsigned long long nextBlinkTimeMillis = getCurrentTimeMillis();
    for (;;) {
        long long millisToSleepFor = nextBlinkTimeMillis - getCurrentTimeMillis();
        if (millisToSleepFor > 0) usleep(MillisToMicros(millisToSleepFor));

        printf("Blink!\r\n");
        //RS232_SendByte(cport_nr, 4); //LED ON
        usleep(MillisToMicros(flashDurationMillis));
        //RS232_SendByte(cport_nr, 0); //LED OFF

        nextBlinkTimeMillis += millisBetweenBlinks;
    }
}
I think the drift problem may be rooted in your using relative time delays by sleeping for a fixed duration rather than sleeping until an absolute point in time. The problem is threads don't always wake up precisely on time due to scheduling issues.
Something like this solution may work for you:
// for readability
using clock = std::chrono::steady_clock;

unsigned int bpm = 125;
int flashDuration = 10;

// time for entire cycle
clock::duration total_wait = std::chrono::milliseconds(1000 * 60 / bpm);

// time for LED off part of cycle
clock::duration off_wait = std::chrono::milliseconds(1000 - flashDuration);

// time for LED on part of cycle
clock::duration on_wait = total_wait - off_wait;

// when is next change ready?
clock::time_point ready = clock::now();

for (;;)
{
    // wait for time to turn light on
    std::this_thread::sleep_until(ready);
    RS232_SendByte(cport_nr, 4); // LED ON

    // reset timer for off
    ready += on_wait;

    // wait for time to turn light off
    std::this_thread::sleep_until(ready);
    RS232_SendByte(cport_nr, 0); // LED OFF

    // reset timer for on
    ready += off_wait;
}
If your problem is drifting out of sync rather than latency, I would suggest measuring time from a fixed start instead of from the last blink.

start = now()
blinks = 0
period = 60 / bpm
while true
    if 0 < ((now() - start) - blinks * period)
        ledon()
        sleep(blinklength)
        ledoff()
        blinks++
Since you didn't specify C++98/03, I'm assuming at least C++11, and thus <chrono> is available. So far this is consistent with Galik's answer. However, I would set it up to use <chrono>'s conversion abilities more precisely, without manually entering conversion factors, except to describe "beats/minute" (or actually, in this answer, the inverse: "minutes/beat").
using namespace std;
using namespace std::chrono;

using mpb = duration<int, ratio_divide<minutes::period, ratio<125>>>;
constexpr auto flashDuration = 10ms;

auto beginBlink = steady_clock::now() + mpb{0};
while (true)
{
    RS232_SendByte(cport_nr, 4); //LED ON
    this_thread::sleep_until(beginBlink + flashDuration);
    RS232_SendByte(cport_nr, 0); //LED OFF
    beginBlink += mpb{1};
    this_thread::sleep_until(beginBlink);
}
The first thing to do is specify the duration of a beat, which is "minutes/125". This is what mpb does. I've used minutes::period as a stand in for 60, just in an attempt to improve readability and reduce the number of magic numbers.
Assuming C++14, I can give flashDuration real units (milliseconds). In C++11 this would need to be spelled with this more verbose syntax:
constexpr auto flashDuration = milliseconds{10};
And then the loop: This is very similar in design to Galik's answer, but here I only increment the time to start the blink once per iteration, and each time, by precisely 60/125 seconds.
By delaying until a specified time_point, as opposed to a specific duration, one ensures that there is no round off accumulation as time progresses. And by working in units which exactly describe your required duration interval, there is also no round off error in terms of computing the start time of the next interval.
No need to traffic in milliseconds. And no need to compute how long one needs to delay. Only the need to symbolically compute the start time of each iteration.
Um...
Sorry to pick on Galik's answer, which I believe is the second best answer next to mine, but it exhibits a bug which my answer not only doesn't have, but is designed to prevent. I didn't notice it until I dug into it with a calculator, and it is subtle enough that testing might miss it.
In Galik's answer:
total_wait = 480ms; // this is exactly correct
off_wait = 990ms;   // likely a design flaw
on_wait = -510ms;   // certainly a mistake

And the total time that an iteration takes is on_wait + off_wait, which sums to exactly 480ms, the same as total_wait. So the overall period looks correct even though the on/off split is badly wrong, making debugging very challenging.
In contrast, my answer increments ready (beginBlink) only once per iteration, and by exactly 480ms.
My answer is more likely to be right for the simple reason that it delegates more of its computation to the <chrono> library. And in this particular case, that probability paid off.
Avoid manual conversions. Instead let the <chrono> library do them for you. Manual conversions introduce the possibility for error.
You should count the time spent in the process and subtract it from the flashDuration value.
The most obvious issue is that you're losing precision in the integer division bpm/60, which always yields an integer (2) instead of 2.08333333...
Calling getTickCount() twice could also lead to some drift.
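A small sketch of the arithmetic fix: keep the math in integers, but reorder it so nothing is truncated:

unsigned int bpm = 125;
unsigned int intervalMs = 1000u * 60u / bpm; // 480 ms, exact for 125 bpm
// versus 1000/(bpm/60): bpm/60 truncates 2.083... to 2, giving 500 ms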

Boost:Take seconds/milli/micro/nano of how long a function runs

I basically have a school project testing the time it takes different sort algorithms to run, recording how long they take with n amount of numbers to sort. So I decided to use the Boost library with C++ to record the time. I am at the point where I am not sure how to do it; I have googled it and found people using different ways, for example:
auto start = boost::chrono::high_resolution_clock::now();
auto end = boost::chrono::high_resolution_clock::now();
auto time = (end-start).count();
or
boost::chrono::system_clock::now();
or
boost::chrono::steady_clock::now()
or even using something like this
boost::timer::cpu_timer and boost::timer::auto_cpu_time
or
boost::posix_time::ptime start = boost::posix_time::microsec_clock::local_time( );
So I want to be sure I'm doing it right. This is what I have now:

typedef boost::chrono::duration<double, boost::nano> boost_nano;

auto start_t = boost::chrono::high_resolution_clock::now();
// call function
auto end_t = boost::chrono::high_resolution_clock::now();

boost_nano time = (end_t - start_t);
cout << time.count();

So am I on the right track?
You likely want the high resolution timer.
You can use either that of boost::chrono or std::chrono.
Boost Chrono has some support for IO built in, so it makes it easier to report times in a human-friendly way.
I usually use a wrapper similar to this:
template <typename Caption, typename F>
auto timed(Caption const& task, F&& f) {
    using namespace boost::chrono;
    struct _ {
        high_resolution_clock::time_point s;
        Caption const& task;
        ~_() {
            std::cout << " -- (" << task << " completed in "
                      << duration_cast<milliseconds>(high_resolution_clock::now() - s) << ")\n";
        }
    } timing { high_resolution_clock::now(), task };

    return f();
}
Which reports time taken in milliseconds.
The good part here is that you can time construction and similar:
std::vector<int> large = timed("generate data", [] {
    return generate_uniform_random_data(); });
But also, general code blocks:
timed("do_step2", [] {
    // step two is foo and bar:
    foo();
    bar();
});
And it works if e.g. foo() throws, just fine.
DEMO
Live On Coliru
int main() {
    return timed("demo task", [] {
        sleep(1);
        return 42;
    });
}
Prints
-- (demo task completed in 1000 milliseconds)
42
I typically use time(0) to control the duration of a loop. time(0) is simply one time measurement that, because of its own short duration, has the least impact on everything else going on (and you can even run a do-nothing loop to capture how much to subtract from any other loop measurement effort).
So in a loop running for 3 (or 10) seconds, how many times can the loop invoke the thing you are trying to measure?
Here is an example of how my older code measures the duration of getpid():
#include <time.h>
#include <unistd.h> // getpid()
#include <cstdint>

uint32_t spinPidTillTime0SecChange(volatile int& pid)
{
    uint32_t spinCount = 1; // getpid() invocation count

    // no measurement, just spinning
    ::time_t tStart = ::time(nullptr);
    ::time_t tEnd = tStart;
    while (0 == (tEnd - tStart)) // (tStart == tEnd)
    {
        pid = ::getpid();
        tEnd = ::time(nullptr);
        spinCount += 1;
    }
    return spinCount;
}
Invoke this 3 (or 10) times, adding the return values together. To make it easy, discard the first measurement (because it probably will be a partial second).
Yes, I am sure there is a c++11 version of accessing what time(0) accesses.
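For what it's worth, a minimal sketch of a C++11 equivalent of that spin (the function name is made up), using system_clock truncated to whole seconds:

#include <chrono>
#include <cstdint>
#include <unistd.h> // getpid()

uint32_t spinPidTillSecChange(volatile int& pid)
{
    using namespace std::chrono;
    uint32_t spinCount = 1;
    auto tStart = time_point_cast<seconds>(system_clock::now());
    auto tEnd = tStart;
    while (tEnd == tStart) // spin until the second ticks over
    {
        pid = ::getpid();
        tEnd = time_point_cast<seconds>(system_clock::now());
        spinCount += 1;
    }
    return spinCount;
}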
Use std::chrono::steady_clock or std::chrono::high_resolution_clock (if it is steady - see below) and not std::chrono::system_clock for measuring run time in C++11 (or use its boost equivalent). The reason is (quoting system_clock's documentation):
on most systems, the system time can be adjusted at any moment
while steady_clock is monotonic and is better suited for measuring intervals:
Class std::chrono::steady_clock represents a monotonic clock. The time
points of this clock cannot decrease as physical time moves forward.
This clock is not related to wall clock time, and is best suitable for
measuring intervals.
Here's an example:
auto start = std::chrono::steady_clock::now();
// do something
auto finish = std::chrono::steady_clock::now();
double elapsed_seconds = std::chrono::duration_cast<
    std::chrono::duration<double>>(finish - start).count();
A small practical tip: if you are measuring run time and want to report seconds std::chrono::duration_cast<std::chrono::seconds> is rarely what you need because it gives you whole number of seconds. To get the time in seconds as a double use the example above.
As suggested by Gregor McGregor, you can use a high_resolution_clock which may sometimes provide higher resolution (although it can be an alias of steady_clock), but beware that it may also be an alias of system_clock, so you might want to check is_steady.
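One way to do that check at compile time, as a sketch (the timing_clock alias name is made up):

#include <chrono>
#include <type_traits>

// Prefer high_resolution_clock only when it is steady; otherwise fall back.
using timing_clock = std::conditional<
    std::chrono::high_resolution_clock::is_steady,
    std::chrono::high_resolution_clock,
    std::chrono::steady_clock>::type;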

c++ get milliseconds since some date

I need some way in c++ to keep track of the number of milliseconds since program execution. And I need the precision to be in milliseconds. (In my googling, I've found lots of folks that said to include time.h and then multiply the output of time() by 1000 ... this won't work.)
clock has been suggested a number of times. This has two problems. First of all, it often doesn't have a resolution even close to a millisecond (10-20 ms is probably more common). Second, some implementations of it (e.g., Unix and similar) return CPU time, while others (E.g., Windows) return wall time.
You haven't really said whether you want wall time or CPU time, which makes it hard to give a really good answer. On Windows, you could use GetProcessTimes. That will give you the kernel and user CPU times directly. It will also tell you when the process was created, so if you want milliseconds of wall time since process creation, you can subtract the process creation time from the current time (GetSystemTime). QueryPerformanceCounter has also been mentioned. This has a few oddities of its own; for example, in some implementations it retrieves time from the CPU's cycle counter, so its frequency varies when/if the CPU speed changes. Other implementations read from the motherboard's 1.024 MHz timer, which does not vary with the CPU speed (and the conditions under which each are used aren't entirely obvious).
On Unix, you can use gettimeofday() to just get the wall time with (at least the possibility of) relatively high precision. If you want time for a process, you can use times() or getrusage() (the latter is newer and gives more complete information that may also be more precise).
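As a sketch of the getrusage() route mentioned above (POSIX; prints user and system CPU time in milliseconds):

#include <sys/resource.h>
#include <cstdio>

int main()
{
    // ... run the code to be measured ...
    rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0) {
        long user_ms = ru.ru_utime.tv_sec * 1000L + ru.ru_utime.tv_usec / 1000L;
        long sys_ms  = ru.ru_stime.tv_sec * 1000L + ru.ru_stime.tv_usec / 1000L;
        std::printf("user: %ld ms, system: %ld ms\n", user_ms, sys_ms);
    }
}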
Bottom line: as I said in my comment, there's no way to get what you want portably. Since you haven't said whether you want CPU time or wall time, even for a specific system, there's not one right answer. The one you've "accepted" (clock()) has the virtue of being available on essentially any system, but what it returns also varies just about the most widely.
See std::clock()
Include time.h, and then use the clock() function. It returns the number of clock ticks elapsed since the program was launched. Just divide it by "CLOCKS_PER_SEC" to obtain the number of seconds, you can then multiply by 1000 to obtain the number of milliseconds.
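A minimal sketch of that recipe (keeping in mind, as noted above, that clock() may report CPU time rather than wall time):

#include <ctime>
#include <iostream>

int main()
{
    std::clock_t start = std::clock();
    // ... code to measure ...
    double ms = 1000.0 * (std::clock() - start) / CLOCKS_PER_SEC;
    std::cout << "Elapsed: " << ms << " ms\n";
}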
Some cross-platform solution. This code was used for some kind of benchmarking:

#ifdef WIN32
#include <windows.h>
LARGE_INTEGER g_llFrequency = {0};
BOOL g_bQueryResult = QueryPerformanceFrequency(&g_llFrequency);
#else
#include <sys/time.h>
#endif

//...

// returns a timestamp in microseconds
long long osQueryPerformance()
{
#ifdef WIN32
    LARGE_INTEGER llPerf = {0};
    QueryPerformanceCounter(&llPerf);
    // approximately QuadPart * 1,000,000 / frequency (microseconds),
    // split like this to reduce the risk of 64-bit overflow
    return llPerf.QuadPart * 1000ll / (g_llFrequency.QuadPart / 1000ll);
#else
    struct timeval stTimeVal;
    gettimeofday(&stTimeVal, NULL);
    return stTimeVal.tv_sec * 1000000ll + stTimeVal.tv_usec;
#endif
}
The most portable way is using the clock function. It usually reports the time that your program has been using the processor, or an approximation thereof. Note, however, the following:
The resolution is not very good for GNU systems. That's really a pity.
Take care of casting everything to double before doing divisions and assignments.
The counter is held as a 32-bit number on 32-bit GNU systems, which can be pretty annoying for long-running programs.
There are alternatives using "wall time" which give better resolution, both in Windows and Linux. But as the libc manual states: If you're trying to optimize your program or measure its efficiency, it's very useful to know how much processor time it uses. For that, calendar time and elapsed times are useless because a process may spend time waiting for I/O or for other processes to use the CPU.
Here is a C++0x solution and an example why clock() might not do what you think it does.
#include <chrono>
#include <iostream>
#include <cstdlib>
#include <ctime>
#include <unistd.h> // sleep()

int main()
{
    // note: std::chrono::monotonic_clock from the C++0x drafts was
    // renamed std::chrono::steady_clock in C++11
    auto start1 = std::chrono::steady_clock::now();
    auto start2 = std::clock();

    sleep(1);
    for (int i = 0; i < 100000000; ++i);

    auto end1 = std::chrono::steady_clock::now();
    auto end2 = std::clock();

    auto delta1 = end1 - start1;
    auto delta2 = end2 - start2;
    std::cout << "chrono: " << std::chrono::duration_cast<std::chrono::duration<float>>(delta1).count() << std::endl;
    std::cout << "clock: " << static_cast<float>(delta2) / CLOCKS_PER_SEC << std::endl;
}
On my system this outputs:
chrono: 1.36839
clock: 0.36
You'll notice the clock() method is missing a second. An astute observer might also notice that clock() looks to have less resolution. On my system it's ticking by in 12 millisecond increments, terrible resolution.
If you are unable or unwilling to use C++0x, take a look at Boost.DateTime's ptime microsec_clock::universal_time().
This isn't C++ specific (nor portable), but you can do:
SYSTEMTIME systemDT;
In Windows.
From there, you can access each member of the systemDT struct.
You can record the time when the program started and compare the current time to the recorded time (systemDT versus systemDTtemp, for instance).
To refresh, you can call GetLocalTime(&systemDT);
To access each member, you would do systemDT.wHour, systemDT.wMinute, systemDT.wMilliseconds.
See the SYSTEMTIME documentation for more information.
Do you want wall clock time, CPU time, or some other measurement? Also, what platform is this? There is no universally portable way to get more precision than time() and clock() give you, but...
on most Unix systems, you can use gettimeofday() and/or clock_gettime(), which give at least microsecond precision and access to a variety of timers (see the sketch after this list);
I'm not nearly as familiar with Windows, but one of these functions probably does what you want.
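For instance, a sketch of the clock_gettime() option (CLOCK_MONOTONIC is usually the right choice for measuring intervals):

#include <time.h>
#include <stdio.h>

int main()
{
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts); // nanosecond-resolution timestamp
    long long micros = ts.tv_sec * 1000000LL + ts.tv_nsec / 1000;
    printf("monotonic time: %lld us\n", micros);
}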
You can try this code (taken from the Stockfish chess engine source code (GPL)):

#include <iostream>
#include <cstdio>
#include <cstdint>

#if !defined(_WIN32) && !defined(_WIN64) // Linux - Unix
#include <sys/time.h>
typedef timeval sys_time_t;
inline void system_time(sys_time_t* t) {
    gettimeofday(t, NULL);
}
inline long long time_to_msec(const sys_time_t& t) {
    return t.tv_sec * 1000LL + t.tv_usec / 1000;
}
#else // Windows and MinGW
#include <sys/timeb.h>
typedef _timeb sys_time_t;
inline void system_time(sys_time_t* t) { _ftime(t); }
inline long long time_to_msec(const sys_time_t& t) {
    return t.time * 1000LL + t.millitm;
}
#endif

struct Time {
    void restart() { system_time(&t); }
    uint64_t msec() const { return time_to_msec(t); }
    long long elapsed() const {
        return (long long)(current_time().msec() - time_to_msec(t));
    }
    static Time current_time() { Time t; t.restart(); return t; }
private:
    sys_time_t t;
};

int main() {
    sys_time_t t;
    system_time(&t);
    long long currentTimeMs = time_to_msec(t);
    std::cout << "currentTimeMs:" << currentTimeMs << std::endl;

    Time time = Time::current_time();
    for (int i = 0; i < 1000000; i++) {
        // Do something
    }
    long long e = time.elapsed();
    std::cout << "time elapsed:" << e << std::endl;

    getchar(); // wait for keyboard input
}

How to get system time in C++?

In fact I am trying to calculate the time a function takes to complete in my program.
So I am using the logic of getting the system time when I call the function, and the time when the function returns a value; then by subtracting the values I get the time it took to complete.
So if anyone can tell me a better approach, or just how to get the system time at an instant, it would be quite a help.
The approach I use when timing my code is the time() function. It returns a single numeric value representing the number of seconds since the epoch, which makes the subtraction part easier for calculation.
Relevant code:
#include <time.h>
#include <iostream>

int main(int argc, char *argv[]) {
    int startTime, endTime, totalTime;

    startTime = time(NULL);
    /* relevant code to benchmark in here */
    endTime = time(NULL);

    totalTime = endTime - startTime;
    std::cout << "Runtime: " << totalTime << " seconds.";
    return 0;
}
Keep in mind this is wall-clock time. For CPU time, see Ben's reply.
Your question is totally dependent on WHICH system you are using. Each system has its own functions for getting the current time. For finding out how long the system has been running, you'd want to access one of the "high resolution performance counters". If you don't use a performance counter, you are usually limited to millisecond accuracy (or worse), which is almost useless in profiling the speed of a function.
In Windows, you can access the counter via the 'QueryPerformanceCounter()' function. This returns an arbitrary number that is different on each processor. To find out how many ticks in the counter == 1 second, call 'QueryPerformanceFrequency()'.
If you're coding under a platform other than windows, just google performance counter and the system you are coding under, and it should tell you how you can access the counter.
Edit (clarification)
This is C++; just include windows.h and link against Kernel32.lib (see the documentation at: http://msdn.microsoft.com/en-us/library/ms644904.aspx). For C#, you can use the "System.Diagnostics.PerformanceCounter" class.
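For concreteness, a minimal sketch of timing a block with those two calls:

#include <windows.h>
#include <iostream>

int main()
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq); // ticks per second
    QueryPerformanceCounter(&start);

    // ... code to time ...

    QueryPerformanceCounter(&end);
    double seconds = double(end.QuadPart - start.QuadPart) / double(freq.QuadPart);
    std::cout << "Elapsed: " << seconds * 1000.0 << " ms\n";
}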
You can use time_t
Under Linux, try gettimeofday() for microsecond resolution, or clock_gettime() for nanosecond resolution.
(Of course the actual clock may have a coarser resolution.)
On some systems you don't have access to the <chrono> facilities. In that case you can use the following snippet, which relies only on time.h, to find out how long your program takes to run, with an accuracy of seconds.
void function()
{
    time_t currentTime;
    time(&currentTime);
    int startTime = currentTime;

    /* Your program starts from here */

    time(&currentTime);
    int timeElapsed = currentTime - startTime;

    std::cout << "It took " << timeElapsed << " seconds to run the program" << std::endl;
}
You can use the solution with std::chrono described here: Getting an accurate execution time in C++ (micro seconds). You will get much better accuracy in your measurement. Usually we measure code execution on the order of milliseconds (ms) or even microseconds (us).
#include <chrono>
#include <iostream>
...
[YOUR METHOD/FUNCTION STARTING HERE]
auto start = std::chrono::high_resolution_clock::now();

[YOUR TEST CODE HERE]

auto elapsed = std::chrono::high_resolution_clock::now() - start;
long long microseconds = std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count();
std::cout << "Elapsed time: " << microseconds << " us\n";