Getting the System tick count with basic C++?

I essentially want to reconstruct the GetTickCount() Windows function so I can use it in basic C++ without any non-standard libraries or even the STL. (So it complies with the libraries supplied with the Android NDK.)
I have looked at
clock()
localtime
time
But I'm still unsure whether it is possible to replicate the GetTickCount Windows function with the time library.
Can anyone point me in the right direction as to how to do this, or even whether it's possible?
An overview of what I want to do:
I want to be able to calculate how long an application has been "doing" a certain function.
So for example I want to be able to calculate how long the application has been trying to register with a server
I am trying to port it from Windows to run on Linux-based Android; here is the Windows code:
int TimeoutTimer::GetSpentTime() const
{
    if (m_On)
    {
        if (m_Freq > 1)
        {
            unsigned int now;
            QueryPerformanceCounter((int*)&now);
            return (int)((1000 * (now - m_Start)) / m_Freq);
        }
        else
        {
            return (GetTickCount() - (int)m_Start);
        }
    }
    return -1;
}

On Android NDK you can use the POSIX clock_gettime() call, which is part of libc. This function is where various Android timer calls end up.
For example, java.lang.System.nanoTime() is implemented with:
struct timespec now;
clock_gettime(CLOCK_MONOTONIC, &now);
return (u8)now.tv_sec * 1000000000LL + now.tv_nsec;  // u8 is AOSP's 64-bit unsigned typedef
This example uses the monotonic clock, which is what you want when computing durations. Unlike the wall clock (available through gettimeofday()), it won't skip forward or backward when the device's clock is changed by the network provider.
The Linux man page for clock_gettime() describes the other clocks that may be available, such as the per-thread elapsed CPU time.
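Building on that, here is a minimal sketch of how the question's TimeoutTimer::GetSpentTime() might be ported to clock_gettime(). The Start() method and the member layout are assumptions for illustration; the original class is not shown in full.

```cpp
#include <ctime>  // clock_gettime, CLOCK_MONOTONIC (POSIX)

// Hypothetical port of the question's timer to POSIX. Returns milliseconds
// since Start() was called, or -1 if the timer is not running, mirroring
// the GetTickCount()-based branch of the Windows code.
class TimeoutTimer {
public:
    void Start() {
        clock_gettime(CLOCK_MONOTONIC, &m_Start);
        m_On = true;
    }
    int GetSpentTime() const {
        if (!m_On)
            return -1;
        timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        // Millisecond difference; CLOCK_MONOTONIC never jumps backwards,
        // so this is safe even if the wall clock is adjusted meanwhile.
        return static_cast<int>((now.tv_sec - m_Start.tv_sec) * 1000
                              + (now.tv_nsec - m_Start.tv_nsec) / 1000000);
    }
private:
    timespec m_Start{};
    bool m_On = false;
};
```

As in the example above, the monotonic clock is used so that the measured duration is immune to network-provider clock updates.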

clock() works very similarly to Windows's GetTickCount(). The units may be different: GetTickCount() returns milliseconds, while clock() returns CLOCKS_PER_SEC ticks per second. Both have a maximum value that will roll over (for Windows, that's about 49.7 days).
GetTickCount() starts at zero when the OS starts. From the docs, it looks like clock() starts when the process does. Thus you can compare times between processes with GetTickCount(), but you probably can't do that with clock().
If you're trying to compute how long something has been happening, within a single process, and you're not worried about rollover:
const clock_t start = clock();
// do stuff here
clock_t now = clock();
clock_t delta = now - start;
double seconds_elapsed = static_cast<double>(delta) / CLOCKS_PER_SEC;
Clarification: There seems to be uncertainty about whether clock() returns elapsed wall time or processor time. The first several references I checked say wall time. For example:
Returns the number of clock ticks elapsed since the program was launched.
which admittedly is a little vague. MSDN is more explicit:
The elapsed wall-clock time since the start of the process....
User darron convinced me to dig deeper, so I found a draft copy of the C standard (ISO/IEC 9899:TC2), and it says:
... returns the implementation’s best approximation to the processor time used ...
I believe every implementation I've ever used gives wall-clock time (which I suppose is an approximation to the processor time used).
Conclusion: If you're trying to time some code so you can benchmark various optimizations, then my answer is appropriate. If you're trying to implement a timeout based on actual wall-clock time, then you have to check your local implementation of clock() or use another function that is documented to give elapsed wall-clock time.
Update: With C++11, there is also the <chrono> portion of the standard library, which provides a variety of clocks and types to capture times and durations. While standardized and widely available, it's not clear whether the Android NDK fully supports it yet.
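For completeness, here is a hedged sketch of the same elapsed-time measurement using <chrono>, assuming a C++11 toolchain; the helper name elapsed_ms is made up for illustration.

```cpp
#include <chrono>

// Elapsed milliseconds since 'start', using steady_clock, which the
// standard guarantees to be monotonic: it never jumps when the wall
// clock is adjusted, unlike clock()'s implementation-defined meaning.
long long elapsed_ms(std::chrono::steady_clock::time_point start) {
    auto now = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(now - start).count();
}
```

Unlike clock(), steady_clock is specified to measure elapsed time, so a sketch like this is suitable for wall-clock timeouts.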

This is platform dependent so you just have to write a wrapper and implement the specifics for each platform.

It's not possible. The C++ standard and, as a consequence, the standard library know nothing about processors or 'ticks'. This may or may not change in C++0x with the threading support, but at least for now, it's not possible.

Do you have access to a vblank interrupt function (or hblank) on the Android? If so, increment a global, volatile var there for a timer.

Related

Accurate C/C++ clock on a multi-core processor with auto-overclock?

I have looked into several topics to try to get some ideas on how to make a reliable clock with C or C++. However, I also saw that some functions used the processor's ticks and ticks-per-second to calculate the end result, which I think could be a problem on a CPU with auto-overclock like the one I have. I also saw that one of them resets after a while, so it is not really reliable.
The idea is to make a (preferably cross-platform) clock like an in-game one, with a precision better than a second in order to be able to add the elapsed time in the "current session" with the saved time at the end of the program. This would be to count the time spent on a console game that does not have an in-game clock, and on the long run to perhaps integrate it to actual PC games.
It should be able to run without taking too much or all of the CPU's time (or a single core's time for multi-core CPUs) as it would be quite bad to use all these resources just for the clock, and also on systems with auto-overclock (which could otherwise cause inaccurate results).
The program I would like to implement this feature into currently looks like this, but I might re-code it in C (since I have to get back to learning how to code in C++):
#include <iostream>
#include <cstdlib>
using namespace std;

int main()
{
    cout << "In game" << endl;
    system("PAUSE");
    return 0;
}
On a side-note, I still need to get rid of the PAUSE feature which is Windows-specific, but I think that can be taken care of with a simple "while (char != '\n')" loop.
What I have already skimmed through:
Using clock() to measure execution time
Calculating elapsed time in a C program in milliseconds
Time stamp in the C programming language
Execution time of C program
C: using clock() to measure time in multi-threaded programs
Is gettimeofday() guaranteed to be of microsecond resolution?
How to measure time in milliseconds using ANSI C?
C++ Cross-Platform High-Resolution Timer
Timer function to provide time in nano seconds using C++
How to measure cpu time and wall clock time?
How can I measure CPU time and wall clock time on both Linux/Windows?
how to measure time?
resolution of std::chrono::high_resolution_clock doesn't correspond to measurements
C++ How to make timer accurate in Linux
http://gameprogrammingpatterns.com/game-loop.html
clock() accuracy
std::chrono doesn't seem to be giving accurate clock resolution/frequency
clock function in C++ with threads
(Edit: Extra research, in particular for a C implementation:
Cross platform C++ High Precision Event Timer implementation (no real answer)
Calculating Function time in nanoseconds in C code (Windows)
How to print time difference in accuracy of milliseconds and nanoseconds? (could be the best answer for a C implementation)
How to get duration, as int milli's and float seconds from <chrono>? (C++ again) )
The problem is that it is not clear whether some of the mentioned methods, like Boost or SDL2, behave properly with auto-overclock in particular.
TL;DR : What cross-platform function should I use to make an accurate, sub-second precise counter in C/C++ that could work on multi-core and/or auto-overclocking processors please?
Thanks in advance.
The std::chrono::high_resolution_clock seems to be what you are looking for. On most modern implementations it is a steady, monotonically increasing clock that is not affected by overclocking of the CPU.
Just keep in mind that it can't be used to tell time. It is only good for measuring time intervals, which is an important difference. For example:
using clock = std::chrono::high_resolution_clock;
auto start = clock::now();
perform_operation();
auto end = clock::now();
auto us = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
std::cout << "Operation took " << us << " microseconds.\n";
If checking the clock is itself a performance-sensitive operation, you will have to resort to platform-specific tricks, of which the most popular is reading the CPU tick counter directly (RDTSC on the Intel family). This is a very fast and, on modern CPUs, very accurate way of measuring time intervals.

In C++11, what is the fastest way to get system ticks/time?

My program frequently calls WINAPI function timeGetTime(), which should be replaced with usage of <chrono> (standard library). What is the fastest standardized way to get system time - in float or int, for my case?
I do not need to track the date or the time of day; I only need a precise relative ms/seconds value which always increments. Is there any?
For benchmarking, you likely want std::chrono::high_resolution_clock. It may not be steady - in the sense that it "always increments". The only clock that guarantees steadiness is std::chrono::steady_clock.
The best, steady clock would then be:
using ClockType = std::conditional<
    std::chrono::high_resolution_clock::is_steady,
    std::chrono::high_resolution_clock,
    std::chrono::steady_clock>::type;
Note that high_resolution_clock could itself simply be an alias for steady_clock.
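To make the alias concrete, here is a compilable sketch of the same idea; the ms_since helper is an assumption added for illustration, not part of the answer above.

```cpp
#include <chrono>
#include <type_traits>

// Pick high_resolution_clock when it is steady, otherwise fall back to
// steady_clock, so "now() never decreases" is guaranteed either way.
using ClockType = std::conditional<
    std::chrono::high_resolution_clock::is_steady,
    std::chrono::high_resolution_clock,
    std::chrono::steady_clock>::type;

// Milliseconds elapsed since an earlier ClockType::now() sample.
long long ms_since(ClockType::time_point start) {
    return std::chrono::duration_cast<std::chrono::milliseconds>(
        ClockType::now() - start).count();
}
```

Either branch of the conditional yields a steady clock, which is exactly the "always increments" property the question asks for.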
If you need precise relative milliseconds, you're looking for something system-specific, not wall-clock time. You tagged the question with winapi, so I assume this is Windows-specific. For Windows that is the High Resolution Timer, which allows you to precisely calculate relative times (how much time is spent in a certain function call, for example).

C++: Timing in Linux (using clock()) is out of sync (due to OpenMP?)

At the top and end of my program I use clock() to figure out how long my program takes to finish. Unfortunately, it appears to take half as long as it's reporting. I double checked this with the "time" command.
My program reports:
Completed in 45.86s
Time command reports:
real 0m22.837s
user 0m45.735s
sys 0m0.152s
Using my cellphone to time it, it completed in 23s (aka: the "real" time). "User" time is the sum of all threads, which would make sense since I'm using OpenMP. (You can read about it here: What do 'real', 'user' and 'sys' mean in the output of time(1)?)
So, why is clock() reporting in "user" time rather than "real" time? Is there a different function I should be using to calculate how long my program has been running?
As a side note, Windows' clock() works as expected and reports in "real" time.
user 0m45.735s
clock() measures CPU time the process used (as good as it can) per 7.27.2.1
The clock function returns the implementation’s best approximation to the processor time used by the program since the beginning of an implementation-defined era related only to the program invocation.
and not wall clock time. Thus clock() reporting a time close to the user time that time reports is normal and standard-conforming.
To measure elapsed time, if you can assume POSIX, using clock_gettime is probably the best option; the standard function time() can also be used for that, but it is not very fine-grained.
I would suggest clock_gettime using CLOCK_MONOTONIC for the clock.
Depending on your specific system, that should give near-microsecond or better resolution, and it will not do funny things if (e.g.) someone sets the system time while your program is running.
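As a quick demonstration of the difference, here is a sketch assuming a POSIX system (measure_sleep is a made-up helper): sleeping for 200 ms advances the monotonic wall clock by about 0.2 s, while clock(), which counts processor time, barely moves.

```cpp
#include <ctime>  // clock, clock_gettime, nanosleep (POSIX)

// Sleep ~200 ms and report wall-clock and CPU seconds. On a conforming
// implementation the wall figure is about 0.2 while the CPU figure is
// near zero, because clock() counts processor time, not elapsed time.
void measure_sleep(double& wall, double& cpu) {
    timespec t0, t1;
    clock_t c0 = clock();
    clock_gettime(CLOCK_MONOTONIC, &t0);
    timespec req{0, 200 * 1000 * 1000};  // 200 ms
    nanosleep(&req, nullptr);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    clock_t c1 = clock();
    wall = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    cpu = static_cast<double>(c1 - c0) / CLOCKS_PER_SEC;
}
```

This mirrors the questioner's observation: the program was mostly busy on two OpenMP threads, so CPU time ran at roughly twice wall time; during a sleep the relationship inverts.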
I would suggest that for benchmarking inside OpenMP applications you use the portable OpenMP timing function omp_get_wtime(), which returns a double value with the seconds since some unspecified point in the past. Call it twice and subtract the return values to obtain the elapsed time. You can find out how precise time measurements are by calling omp_get_wtick(). It returns a double value of the timer resolution - values closer to 0.0 indicate more precise timers.

timespec on windows compilers

On POSIX it is possible to use timespec to calculate accurate time lengths (like seconds and milliseconds). Unfortunately I need to migrate to Windows with the Visual Studio compiler. The VS time.h header doesn't declare timespec, so I'm looking for other options. As far as I could find, it is possible to use clock and time_t, although I couldn't check how precisely clock counts milliseconds.
What do you use for calculating the elapsed time of an operation (if possible using the standard C++ library)?
The function GetTickCount is usually used for that.
Also a similar thread: C++ timing, milliseconds since last whole second
Depends on what sort of accuracy you want, my understanding is that clock and time_t are not accurate to the millisecond level. Similarly GetTickCount() is commonly used (MS docs say accurate to 10-15ms) but not sufficiently accurate for many purposes.
I use QueryPerformanceFrequency and QueryPerformanceCounter for accurate timing measurements for performance.
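On compilers with C++11 support there is now a portable middle ground that avoids both timespec and the Windows-specific calls. The time_us helper below is an illustrative sketch, not a library function.

```cpp
#include <chrono>
#include <utility>

// Time a callable in microseconds with steady_clock, which is available
// in Visual Studio 2012+ as well as on POSIX systems, so the same code
// runs unchanged on both platforms.
template <typename F>
long long time_us(F&& op) {
    auto start = std::chrono::steady_clock::now();
    std::forward<F>(op)();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
}
```

For sub-millisecond accuracy this is generally preferable to clock and time_t, whose granularity is implementation-defined.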

find c++ execution time

I am curious whether there is a built-in function in C++ for measuring execution time.
I am using Windows at the moment. In Linux it's pretty easy...
The best way on Windows, as far as I know, is to use QueryPerformanceCounter and QueryPerformanceFrequency.
QueryPerformanceCounter(LARGE_INTEGER*) places the performance counter's value into the LARGE_INTEGER passed.
QueryPerformanceFrequency(LARGE_INTEGER*) places the frequency the performance counter is incremented into the LARGE_INTEGER passed.
You can then find the execution time by recording the counter as execution starts, and then recording the counter when execution finishes. Subtract the start from the end to get the counter's change, then divide by the frequency to get the time in seconds.
LARGE_INTEGER start, finish, freq;
QueryPerformanceFrequency(&freq);
QueryPerformanceCounter(&start);
// Do something
QueryPerformanceCounter(&finish);
std::cout << "Execution took "
          << ((finish.QuadPart - start.QuadPart) / (double)freq.QuadPart)
          << " seconds" << std::endl;
It's pretty easy under Windows too - in fact it's the same function on both: std::clock, defined in <ctime>.
You can use the Windows API Function GetTickCount() and compare the values at start and end. Resolution is in the 16 ms ballpark. If for some reason you need more fine-grained timings, you'll need to look at QueryPerformanceCounter.
C++ has no built-in functions for high-granularity measuring code execution time, you have to resort to platform-specific code. For Windows try QueryPerformanceCounter: http://msdn.microsoft.com/en-us/library/ms644904(VS.85).aspx
The functions you should use depend on the resolution of timer you need. Some of them give 10 ms resolution; those functions are easier to use. Others require more work but give much higher resolution (and might cause you some headaches in some environments; your dev machine might work fine, though).
http://www.geisswerks.com/ryan/FAQS/timing.html
This articles mentions:
timeGetTime
RDTSC (a processor feature, not an OS feature)
QueryPerformanceCounter
C++ works on many platforms. Why not use something that also works on many platforms, such as the Boost libraries?
Look at the documentation for the Boost Timer Library
I believe that it is a header-only library, which means that it is simple to set up and use...