How to get system time in C++?

In fact, I am trying to calculate the time a function takes to complete in my program.
My approach is to get the system time when the function is called and again when it returns, then subtract the two values to get the time it took to complete.
So if anyone can tell me a better approach, or simply how to get the system time at a given instant, it would be a great help.

The approach I use when timing my code is the time() function. It returns a single numeric value representing the number of seconds since the epoch, which makes the subtraction step of the calculation easy.
Relevant code:
#include <time.h>
#include <iostream>

int main(int argc, char *argv[]) {
    time_t startTime, endTime, totalTime;
    startTime = time(NULL);
    /* relevant code to benchmark in here */
    endTime = time(NULL);
    totalTime = endTime - startTime;
    std::cout << "Runtime: " << totalTime << " seconds." << std::endl;
    return 0;
}
Keep in mind this measures wall-clock time, in whole seconds. For CPU time, see Ben's reply.

Your question is totally dependent on WHICH system you are using. Each system has its own functions for getting the current time. To time code precisely, you'd want to access one of the "high resolution performance counters", which count ticks since the system started running. If you don't use a performance counter, you are usually limited to millisecond accuracy (or worse), which is almost useless when profiling the speed of a function.
In Windows, you can access the counter via the 'QueryPerformanceCounter()' function. This returns an arbitrary number that is different on each processor. To find out how many ticks in the counter == 1 second, call 'QueryPerformanceFrequency()'.
If you're coding under a platform other than windows, just google performance counter and the system you are coding under, and it should tell you how you can access the counter.
Edit (clarification)
This is C++; just include windows.h and link against Kernel32.lib (my hyperlink seems to have been removed; see the documentation at http://msdn.microsoft.com/en-us/library/ms644904.aspx). For C#, you can use the "System.Diagnostics.PerformanceCounter" class.
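For illustration, here is a minimal sketch of the QueryPerformanceCounter()/QueryPerformanceFrequency() approach; the code being benchmarked is just a placeholder comment:
#include <windows.h>
#include <iostream>

int main()
{
    LARGE_INTEGER frequency, startTicks, endTicks;
    QueryPerformanceFrequency(&frequency);   // ticks per second
    QueryPerformanceCounter(&startTicks);

    /* code to benchmark goes here */

    QueryPerformanceCounter(&endTicks);
    double seconds = static_cast<double>(endTicks.QuadPart - startTicks.QuadPart)
                     / static_cast<double>(frequency.QuadPart);
    std::cout << "Elapsed: " << seconds * 1e6 << " microseconds\n";
    return 0;
}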

You can use time_t

Under Linux, try gettimeofday() for microsecond resolution, or clock_gettime() for nanosecond resolution.
(Of course the actual clock may have a coarser resolution.)
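As a minimal sketch of the clock_gettime() route (Linux; older glibc versions may need -lrt at link time), with the code being timed left as a placeholder:
#include <time.h>    // clock_gettime, CLOCK_MONOTONIC
#include <iostream>

int main()
{
    timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    /* code to benchmark goes here */

    clock_gettime(CLOCK_MONOTONIC, &end);
    long long nanoseconds = (end.tv_sec - start.tv_sec) * 1000000000LL
                          + (end.tv_nsec - start.tv_nsec);
    std::cout << "Elapsed: " << nanoseconds << " ns\n";
    return 0;
}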

On some systems you don't have access to anything beyond the time.h header. In that case, you can use the following code snippet to find out how long your program takes to run, with an accuracy of whole seconds.
#include <ctime>
#include <iostream>
using namespace std;

void function()
{
    time_t currentTime;
    time(&currentTime);            // record the start time
    time_t startTime = currentTime;
    /* Your program starts from here */
    time(&currentTime);            // record the end time
    int timeElapsed = static_cast<int>(currentTime - startTime);
    cout << "It took " << timeElapsed << " seconds to run the program" << endl;
}

You can use the std::chrono solution described here: Getting an accurate execution time in C++ (micro seconds). You will get much better accuracy in your measurement; code execution is usually measured on the order of milliseconds (ms) or even microseconds (us).
#include <chrono>
#include <iostream>
...
[YOUR METHOD/FUNCTION STARTING HERE]
auto start = std::chrono::high_resolution_clock::now();
[YOUR TEST CODE HERE]
auto elapsed = std::chrono::high_resolution_clock::now() - start;
long long microseconds = std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count();
std::cout << "Elapsed time: " << microseconds << " ms;

Related

How to obtain the execution time of a function in C/C++ [duplicate]

This question already has answers here:
How can I benchmark C code easily?
I have a function which can generate 10000 random numbers and write them in a file.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

void generator(char filename[])
{
    int i;
    int n;
    FILE* fp;
    if ((fp = fopen(filename, "w+")) == NULL)
    {
        printf("Failed to create file!\n");
        return;
    }
    srand((unsigned)time(NULL));
    for (i = 0; i < 10000; i++)
    {
        n = rand() % 10000;
        fprintf(fp, "%d ", n);
    }
    fclose(fp);
}
How can I get the execution time of this function using C/C++ ?
Code profiling is not a particularly easy task (or, as we often say in programming, it's "non-trivial"). The issue is that "execution time" measured in seconds isn't particularly accurate or useful.
What you're wanting to do is to measure the number of CPU cycles. This can be done using an external tool such as callgrind (one of Valgrind's tools). There's a 99% chance that's all you want.
If you REALLY want to do that yourself in code, you're undertaking a rather difficult task. I know first hand - I wrote a comparative benchmarking library in C++ for on-the-fly performance testing.
If you really want to go down that road, you can research benchmarking on Intel processors (that mostly carries over to AMD), or whatever processor you're using. However, as I said, that topic is large and in-depth, and far beyond the scope of a StackOverflow answer.
You can use the chrono library:
#include <chrono>
//*****//
auto start = std::chrono::steady_clock::now();
generator("file.txt");
auto end = std::chrono::steady_clock::now();
std::cout << "generator() took "
          << std::chrono::duration_cast<std::chrono::microseconds>(end - start).count() << " us.\n";
You have already some nice C answers that also work with C++.
Here is a native C++ solution using <chrono>:
auto tbegin = std::chrono::high_resolution_clock::now();
...
auto tend = std::chrono::high_resolution_clock::now();
auto tduration = std::chrono::duration_cast<std::chrono::microseconds>(tend - tbegin).count();
The advantage is that you can switch from microsecond to millisecond, seconds or any other time measurement units very easily.
Note that the OS may limit the clock accuracy (typically around 15 milliseconds in a Windows environment), so this gives meaningful results only if you're well above that limit.
void generator(char filename[])
{
clock_t tStart = clock();
/* your code here */
printf("Time taken: %.2fs\n", (double)(clock() - tStart)/CLOCKS_PER_SEC);
}
Update: also add #include <ctime>.
Try this:
#include <sys/time.h>
#include <cstdio>   // for printf
struct timeval tpstart,tpend;
double timeuse;
//get time before generator starts
gettimeofday(&tpstart,NULL);
//call generator function
generator(filename);
//get time after generator ends
gettimeofday(&tpend,NULL);
//calculate the used time
timeuse=1000000*(tpend.tv_sec-tpstart.tv_sec)+tpend.tv_usec-tpstart.tv_usec;
timeuse/=1000000;
printf("Used Time:%fsec\n",timeuse);
#include <ctime>
.....
clock_t start = clock();
...//the code you want to get the execution time for
double elapsed_time = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;
std::cout << elapsed_time << std::endl; // elapsed_time now contains the execution time (in seconds) of the code in between
This will give you an approximate (not exact) execution time of the code between the first and second clock() calls.
Temporarily make the limit 10000000
Time it with a stopwatch. Divide the time by 1000

<time.h> / <ctime> are not counting ticks

EDIT: It appears to be functioning now. The code has been updated to show my revisions. Thank you all for your help.
I imagine I'm just stupid, but I'm attempting to use ctime to count CPU ticks through my entire program. I'm writing an encryption algorithm for a school project and I'm trying to include a timer so that I can add noise processes, equalizing the amount of time among different key/plaintext combinations.
Here is a little test for ctime:
#include <iostream>
#include <string>
#include <ctime>
int main (int arc, char **argv)
{
double elapsedTime;
const clock_t start = clock ();
int uselessInt = 0;
for (int i = 0; i <= 200; i++)
{
uselessInt = uselessInt * 2 / 3 + i;
std::cout << uselessInt << std::endl;
}
clock_t end = clock();
elapsedTime = static_cast<double>(end - start);
std::cout << elapsedTime << " CPU ticks have elapsed since this application's initiation." << std::endl;
return (0);
}
which prints:
0
1
2
4
/* ... long list of numbers ... */
591
594
0 CPU ticks have elapsed since this application's initiation.
[smalltock#localhost Desktop]$
I am using GCC (G++) and it appears that ctime/time.h simply isn't counting ticks like I want it to. Can anybody identify the problem? I'm a relative amateur in this language.
My two cents: when you do cin.get(), it waits for you to input something on the console. Did you type anything, or simply press Enter?
I ran your code without typing any text, simply pressing Enter, and it gave the following output:
Test Text
It's a stone, Luigi... you didn't make it.
0 CPU ticks have elapsed since this application's initiation.
Real 0m0.700s
User 0m0.000s
Sys 0m0.061s
It may be because the tick size of clock() is "big" compared to the CPU time actually used by your program.
Also, there is a syntax error in the duration line: you either missed another ) or should delete the first (.
BTW:
Real is wall clock time - time from start to finish of the call.
User is the amount of CPU time spent in user-mode code (outside the kernel) within the process. This is only actual CPU time used in executing the process.
Sys is the amount of CPU time spent in the kernel within the process.
So you basically have 0 CPU time since you are keep waiting for I/O, no CPU computation.
elapsedTime in your program is a measure of time in seconds, not a count of clock ticks. If you want ticks, use duration.
Since your program (presumably) spends the vast majority of its time blocked on I/O, not very many seconds are going to have gone by.

Calculating time of execution with time() function

I was given the following homework assignment:
Write a program to test on your computer how long it takes to do
n log n, n^2, n^5, 2^n, and n! additions for n = 5, 10, 15, 20.
I have written a piece of code but all the time I am getting the execution time as 0. Can anyone help me out with it? Thanks
#include <iostream>
#include <cmath>
#include <ctime>
using namespace std;
int main()
{
float n=20;
time_t start, end, diff;
start = time (NULL);
cout<<(n*log(n))*(n*n)*(pow(n,5))*(pow(2,n))<<endl;
end= time(NULL);
diff = difftime (end,start);
cout <<diff<<endl;
return 0;
}
Better than time() with its one-second precision is to use millisecond precision.
A portable way is, for example:
#include <ctime>

int main() {
    clock_t start, end;
    double msecs;

    start = clock();
    /* any stuff here ... */
    end = clock();

    msecs = ((double) (end - start)) * 1000 / CLOCKS_PER_SEC;
    return 0;
}
Execute each calculation thousands of times, in a loop, so that you can overcome the low resolution of time and obtain meaningful results. Remember to divide by the number of iterations when reporting results.
This is not particularly accurate but that probably does not matter for this assignment.
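As a rough sketch of that looping idea (the work inside the loop is a stand-in for whichever of the assignment's additions you are measuring):
#include <ctime>
#include <cmath>
#include <iostream>

int main()
{
    const int iterations = 1000000;   // repeat enough times to be measurable
    volatile double sink = 0;         // volatile so the work is not optimized away

    clock_t start = clock();
    for (int i = 0; i < iterations; ++i)
        sink += std::pow(2.0, 20.0);  // placeholder for one addition's worth of work

    double totalSeconds = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;
    std::cout << "Per-iteration time: " << (totalSeconds / iterations) * 1e9 << " ns\n";
    return 0;
}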
At least on Unix-like systems, time() only gives you 1-second granularity, so it's not useful for timing things that take a very short amount of time (unless you execute them many times in a loop). Take a look at the gettimeofday() function, which gives you the current time with microsecond resolution. Or consider using clock(), which measures CPU time rather than wall-clock time.
Your code executes too fast to be measured by the time() function, which returns the number of seconds elapsed since 00:00 hours, Jan 1, 1970 UTC.
Try to use this piece of code:
inline long getCurrentTime() {
timeb timebstr;
ftime( &timebstr );
return (long)(timebstr.time)*1000 + timebstr.millitm;
}
To use it you have to include sys/timeb.h.
Actually the better practice is to repeat your calculations in the loop to get more precise results.
You will probably have to find a more precise platform-specific timer such as the Windows High Performance Timer. You may also (very likely) find that your compiler optimizes or removes almost all of your code.

c++ get milliseconds since some date

I need some way in c++ to keep track of the number of milliseconds since program execution. And I need the precision to be in milliseconds. (In my googling, I've found lots of folks that said to include time.h and then multiply the output of time() by 1000 ... this won't work.)
clock has been suggested a number of times. This has two problems. First of all, it often doesn't have a resolution even close to a millisecond (10-20 ms is probably more common). Second, some implementations of it (e.g., Unix and similar) return CPU time, while others (e.g., Windows) return wall time.
You haven't really said whether you want wall time or CPU time, which makes it hard to give a really good answer. On Windows, you could use GetProcessTimes. That will give you the kernel and user CPU times directly. It will also tell you when the process was created, so if you want milliseconds of wall time since process creation, you can subtract the process creation time from the current time (GetSystemTime). QueryPerformanceCounter has also been mentioned. This has a few oddities of its own -- for example, in some implementations it retrieves time from the CPU's cycle counter, so its frequency varies when/if the CPU speed changes. Other implementations read from the motherboard's 1.024 MHz timer, which does not vary with the CPU speed (and the conditions under which each is used aren't entirely obvious).
On Unix, you can use gettimeofday() to get the wall time with (at least the possibility of) relatively high precision. If you want time for a process, you can use times() or getrusage() (the latter is newer and gives more complete information that may also be more precise).
Bottom line: as I said in my comment, there's no way to get what you want portably. Since you haven't said whether you want CPU time or wall time, even for a specific system, there's not one right answer. The one you've "accepted" (clock()) has the virtue of being available on essentially any system, but what it returns also varies just about the most widely.
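As an illustration (not from the answer above), a minimal sketch of querying the current process's CPU times with GetProcessTimes(); the FILETIME-to-milliseconds helper is my own:
#include <windows.h>
#include <iostream>

// FILETIME counts 100-nanosecond intervals; convert to milliseconds.
static unsigned long long toMilliseconds(const FILETIME& ft)
{
    ULARGE_INTEGER u;
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart / 10000ULL;
}

int main()
{
    FILETIME creationTime, exitTime, kernelTime, userTime;
    if (GetProcessTimes(GetCurrentProcess(), &creationTime, &exitTime, &kernelTime, &userTime))
    {
        std::cout << "Kernel CPU time: " << toMilliseconds(kernelTime) << " ms\n";
        std::cout << "User CPU time:   " << toMilliseconds(userTime)   << " ms\n";
    }
    return 0;
}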
See std::clock()
Include time.h, and then use the clock() function. It returns the number of clock ticks elapsed since the program was launched. Just divide it by CLOCKS_PER_SEC to obtain the number of seconds; you can then multiply by 1000 to obtain the number of milliseconds.
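A minimal sketch of that calculation, with the code being timed left as a placeholder:
#include <time.h>
#include <iostream>

int main()
{
    clock_t start = clock();

    /* code to time goes here */

    clock_t end = clock();
    double milliseconds = 1000.0 * (end - start) / CLOCKS_PER_SEC;
    std::cout << "Elapsed: " << milliseconds << " ms\n";
    return 0;
}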
Some cross-platform solution. This code was used for some kind of benchmarking:
#ifdef WIN32
#include <windows.h>
LARGE_INTEGER g_llFrequency = {0};
BOOL g_bQueryResult = QueryPerformanceFrequency(&g_llFrequency);
#else
#include <sys/time.h>
#endif
//...
// Returns a timestamp in microseconds.
long long osQueryPerfomance()
{
#ifdef WIN32
    LARGE_INTEGER llPerf = {0};
    QueryPerformanceCounter(&llPerf);
    return llPerf.QuadPart * 1000ll / (g_llFrequency.QuadPart / 1000ll);
#else
    struct timeval stTimeVal;
    gettimeofday(&stTimeVal, NULL);
    return stTimeVal.tv_sec * 1000000ll + stTimeVal.tv_usec;
#endif
}
The most portable way is to use the clock() function. It usually reports the time that your program has been using the processor, or an approximation thereof. Note, however, the following:
The resolution is not very good on GNU systems. That's really a pity.
Take care to cast everything to double before doing divisions and assignments.
The counter is held as a 32-bit number on 32-bit GNU systems, which can be pretty annoying for long-running programs.
There are alternatives using "wall time" which give better resolution, both on Windows and Linux. But, as the libc manual states: if you're trying to optimize your program or measure its efficiency, it's very useful to know how much processor time it uses. For that, calendar time and elapsed times are useless because a process may spend time waiting for I/O or for other processes to use the CPU.
Here is a C++0x solution and an example of why clock() might not do what you think it does. (C++0x's monotonic_clock was renamed steady_clock in C++11, so the code below uses the latter name.)
#include <chrono>
#include <iostream>
#include <cstdlib>
#include <ctime>
#include <unistd.h> // for sleep() on POSIX systems

int main()
{
    auto start1 = std::chrono::steady_clock::now();
    auto start2 = std::clock();

    sleep(1);                             // wall-clock time passes, but almost no CPU time is used
    for (int i = 0; i < 100000000; ++i);  // burn some CPU time

    auto end1 = std::chrono::steady_clock::now();
    auto end2 = std::clock();

    auto delta1 = end1 - start1;
    auto delta2 = end2 - start2;
    std::cout << "chrono: " << std::chrono::duration_cast<std::chrono::duration<float>>(delta1).count() << std::endl;
    std::cout << "clock: " << static_cast<float>(delta2) / CLOCKS_PER_SEC << std::endl;
}
On my system this outputs:
chrono: 1.36839
clock: 0.36
You'll notice the clock() method is missing a second. An astute observer might also notice that clock() looks to have less resolution. On my system it's ticking by in 12 millisecond increments, terrible resolution.
If you are unable or unwilling to use C++0x, take a look at Boost.DateTime's ptime microsec_clock::universal_time().
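For reference, a small sketch of the Boost.DateTime route, assuming the Boost headers are available:
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    using boost::posix_time::ptime;
    using boost::posix_time::microsec_clock;

    ptime start = microsec_clock::universal_time();

    /* code to time goes here */

    ptime end = microsec_clock::universal_time();
    long long elapsedMs = (end - start).total_milliseconds();
    std::cout << "Elapsed: " << elapsedMs << " ms\n";
    return 0;
}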
This isn't C++ specific (nor portable), but you can do:
SYSTEMTIME systemDT;
In Windows.
From there, you can access each member of the systemDT struct.
You can record the time when the program started and compare the current time to the recorded time (systemDT versus systemDTtemp, for instance).
To refresh, you can call GetLocalTime(&systemDT);
To access each member, you would do systemDT.wHour, systemDT.wMinute, systemDT.wMilliseconds.
See the SYSTEMTIME documentation on MSDN for more information.
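A minimal sketch of reading those SYSTEMTIME members via GetLocalTime() (Windows only):
#include <windows.h>
#include <iostream>

int main()
{
    SYSTEMTIME systemDT;
    GetLocalTime(&systemDT);   // fill the struct with the current local time

    std::cout << "Hour: " << systemDT.wHour
              << " Minute: " << systemDT.wMinute
              << " Second: " << systemDT.wSecond
              << " Milliseconds: " << systemDT.wMilliseconds << "\n";
    return 0;
}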
Do you want wall clock time, CPU time, or some other measurement? Also, what platform is this? There is no universally portable way to get more precision than time() and clock() give you, but...
on most Unix systems, you can use gettimeofday() and/or clock_gettime(), which give at least microsecond precision and access to a variety of timers;
I'm not nearly as familiar with Windows, but one of these functions probably does what you want.
You can try this code (taken from the Stockfish chess engine source code (GPL)):
#include <iostream>
#include <cstdint>   // for uint64_t
#if !defined(_WIN32) && !defined(_WIN64) // Linux - Unix
# include <sys/time.h>
typedef timeval sys_time_t;
inline void system_time(sys_time_t* t) {
gettimeofday(t, NULL);
}
inline long long time_to_msec(const sys_time_t& t) {
return t.tv_sec * 1000LL + t.tv_usec / 1000;
}
#else // Windows and MinGW
# include <sys/timeb.h>
typedef _timeb sys_time_t;
inline void system_time(sys_time_t* t) { _ftime(t); }
inline long long time_to_msec(const sys_time_t& t) {
return t.time * 1000LL + t.millitm;
}
#endif
struct Time {
void restart() { system_time(&t); }
uint64_t msec() const { return time_to_msec(t); }
long long elapsed() const {
return static_cast<long long>(current_time().msec() - time_to_msec(t));
}
static Time current_time() { Time t; t.restart(); return t; }
private:
sys_time_t t;
};
int main() {
sys_time_t t;
system_time(&t);
long long currentTimeMs = time_to_msec(t);
std::cout << "currentTimeMs:" << currentTimeMs << std::endl;
Time time = Time::current_time();
for (int i = 0; i < 1000000; i++) {
//Do something
}
long long e = time.elapsed();
std::cout << "time elapsed:" << e << std::endl;
getchar(); // wait for keyboard input
}