I need a function or way to get the UNIX epoch in seconds, much like how I can in PHP using the time function.
I can't find any method except time() in ctime, which seems to only output a formatted date, or the clock() function, which has seconds but seems to always be a multiple of 1 million; nothing with any finer resolution.
I wish to measure execution time in a program; I just want to calculate the diff between start and end. How would a C++ programmer do this?
EDIT: time() and difftime() only allow resolution by seconds, not ms or anything finer, by the way.
time() should work fine; use difftime() for time difference calculations. In case you need better resolution, use gettimeofday().
Also, duplicate of: Calculating elapsed time in a C program in milliseconds
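For instance, a minimal sketch using time() and difftime() (second resolution only):
#include <ctime>
#include <iostream>

int main()
{
    std::time_t start = std::time(0); // seconds since the UNIX epoch
    // ... work to be timed ...
    std::time_t end = std::time(0);
    std::cout << "elapsed: " << std::difftime(end, start) << " s\n";
}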
If you want to profile, I'd recommend using getrusage(). This will allow you to track CPU time instead of wall-clock time:
#include <sys/resource.h>

struct rusage ru;
getrusage(RUSAGE_SELF, &ru);
ru.ru_utime.tv_sec;  // seconds of user CPU time
ru.ru_utime.tv_usec; // microseconds of user CPU time
ru.ru_stime.tv_sec;  // seconds of system CPU time
ru.ru_stime.tv_usec; // microseconds of system CPU time
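For example, a small sketch (assuming a POSIX system) that folds those fields into total CPU seconds:
#include <sys/resource.h>
#include <cstdio>

int main()
{
    volatile double x = 0; // burn some CPU so there is something to measure
    for (long i = 0; i < 100000000; ++i) x += i;
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    double user = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
    double sys = ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
    std::printf("user: %.3f s, system: %.3f s\n", user, sys);
}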
This code should work for you.
time_t epoch = time(0);
A complete example:
#include <iostream>
#include <ctime>
using namespace std;

int main() {
    time_t epoch = time(0); // naming this variable "time" would shadow the function and not compile
    cout << epoch << endl;
    system("pause"); // Windows-only; remove on other platforms
    return 0;
}
If you have any questions, feel free to comment below.
Related
Is there any way to measure elapsed time in linux/unix without using system clock?
The problem is that system clock changes in some situations and elapsed time measured by time or gettimeofday or anything else like that gives incorrect result.
I'm thinking of creating a separate thread that performs a loop with sleep(100) inside and counts the number of repetitions.
Any better solutions?
std::chrono::steady_clock can be used to measure time; it is monotonic, so it is not affected by changes to the system clock.
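A minimal sketch (C++11), with sleep_for standing in for the work being timed:
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    auto start = std::chrono::steady_clock::now();
    std::this_thread::sleep_for(std::chrono::milliseconds(250)); // stand-in for real work
    auto end = std::chrono::steady_clock::now();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
    std::cout << "elapsed: " << ms << " ms\n"; // unaffected by system clock adjustments
}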
Use monotonic time, which represents time since some point: http://linux.die.net/man/3/clock_gettime
#include <stdint.h>
#include <time.h>

// Returns a monotonic timestamp in microseconds.
int64_t get_monotonic_timestamp(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000000 + ts.tv_nsec / 1000;
}
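Hypothetical usage (do_work() is a placeholder; printf requires <stdio.h>):
int64_t t0 = get_monotonic_timestamp();
do_work(); // placeholder for the code under test
int64_t t1 = get_monotonic_timestamp();
printf("elapsed: %lld us\n", (long long)(t1 - t0));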
Measuring elapsed time with sleep (or variants) is a bad idea. Your thread can wake up at any time after the elapsed sleep time, so this is sure to be inaccurate.
For a time delay, you can use e.g. select.
I'm using clock(), and I'm wondering whether it ever resets or maxes out. All I'm using it for is to subtract it from a previous function call and find the difference.
Thanks for the help so far, but I'm not really able to get the chrono thing working in VS '12. That's fine, because I think it's a little more than I need anyway. I was thinking about using ctime's time(), but I have no idea how to convert the time_t into an int that contains just the current seconds 0-60. Any help?
As far as the standard is concerned,
The range and precision of times representable in clock_t and time_t are implementation-defined.
(C99, §7.23.1 ¶4)
so there are no guarantees of range; the definition of clock() does not say anything about wrapping around, although it says that
If the processor time used is not available or its value cannot be represented, the function returns the value (clock_t)(-1)
So we may say that exceeding the range of clock_t may be seen as "its value cannot be represented"; on the other hand, this interpretation would mean that, after some time, clock() becomes completely useless.
In fact, if we get down to a specific implementation (glibc), we see:
matteo@teokubuntu:~$ man 3 clock
Note that the time can wrap around. On a 32-bit system where
CLOCKS_PER_SEC equals 1000000 this function will return the same value
approximately every 72 minutes.
Depends on what system you are on. It may use a 32-bit or a 64-bit clock_t. It will definitely roll over, but if it's 64-bit, it will be OK for quite some time before it rolls over - 2^64 microseconds is still an awful long time (approx 2^44 seconds, and there are around 2^16 seconds per day, so 2^28 days - which is about 2^20, or a million, years... ;)
Of course, on a 32-bit system, we have about 2^12 = 4096 seconds at microsecond resolution. An hour being 3600 s, that's about 1h10m.
However, another problem, in some systems, is that clock() returns CPU time used, so if you sleep, it won't count as time in clock().
And of course, even though CLOCKS_PER_SEC may be 1000000, it doesn't mean that you get microsecond resolution - on many systems, it "jumps" 10000 units at a time.
In summary, "probably a bad idea".
If you have C++11 on the system, use std::chrono, which has several options for timekeeping that are sufficiently good for most purposes (but do study the std::chrono docs)
Example code:
#include <iostream>
#include <chrono>
#include <unistd.h> // replace with "windows.h" if needed.

int main()
{
    std::chrono::time_point<std::chrono::system_clock> start, end;
    start = std::chrono::system_clock::now();
    sleep(10); // 10 seconds on a unix system; Sleep(10000) on Windows will be the same thing
    end = std::chrono::system_clock::now();
    int elapsed_seconds = std::chrono::duration_cast<std::chrono::seconds>(end - start).count();
    std::cout << "elapsed time: " << elapsed_seconds << "s\n";
}
The simple answer is that if you're just using it to time a function, it will probably not wrap around. Its resolution may also be too coarse, and chances are you might see a function duration of zero. If you want accurate timing for a function that executes fast, you're probably better using an OS-level call like this one on Windows.
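The Windows call behind that link isn't reproduced here; one common choice is QueryPerformanceCounter (my assumption, not necessarily what the link points to). A minimal sketch:
#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq); // ticks per second
    QueryPerformanceCounter(&t0);
    // ... fast function under test ...
    QueryPerformanceCounter(&t1);
    double us = (t1.QuadPart - t0.QuadPart) * 1e6 / freq.QuadPart;
    std::printf("elapsed: %.3f us\n", us);
}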
The time command returns the time elapsed in execution of a command.
If I put a gettimeofday() at the start of the command call (using system()), and one at the end of the call, and take the difference, it doesn't come out the same. (It's not a very small difference either.)
Can anybody explain what is the exact difference between the two usages and which is the best way to time the execution of a call?
Thanks.
The Unix time command measures the whole program execution time, including the time it takes for the system to load your binary and all its libraries, and the time it takes to clean up everything once your program is finished.
On the other hand, gettimeofday can only work inside your program, that is after it has finished loading (for the initial measurement), and before it is cleaned up (for the final measurement).
Which one is best? Depends on what you want to measure... ;)
It's all dependent on what you are timing. If you are trying to time something in seconds, then time() is probably your best bet. If you need higher resolution than that, then I would consider gettimeofday(), which gives up to microsecond resolution (1 / 1000000th of a second).
If you need even higher resolution than that, consider using clock() and CLOCKS_PER_SEC; just note that clock() rarely reflects wall-clock time accurately, since it measures the processor time your process has used.
time() returns time since epoch in seconds.
gettimeofday() fills in a:
struct timeval {
    time_t tv_sec;       /* seconds */
    suseconds_t tv_usec; /* microseconds */
};
Each time function has different precision. In C++11 you would use std::chrono:
#include <chrono>

using namespace std::chrono;
auto start = high_resolution_clock::now();
/* do stuff */
auto end = high_resolution_clock::now();
float elapsedSeconds = duration_cast<duration<float>>(end - start).count();
How can I count the milliseconds a certain function (called repeatedly) takes?
I thought of:
CTime::GetCurrentTime() before,
CTime::GetCurrentTime() after,
and then store the result in CTimeSpan diff = after - before.
Finally, store that diff to a global member that sums all diffs, since I want to know the total time this function spent.
But it will give the answer in seconds, not milliseconds.
MFC is C++, right?
If so, you can just use clock().
#include <ctime>
clock_t time1 = clock();
// do something heavy
clock_t time2 = clock();
clock_t timediff = time2 - time1;
float timediff_sec = ((float)timediff) / CLOCKS_PER_SEC;
This will usually give you millisecond precision.
If you are using MFC, the nice way is to use the Win API. And since you only want to calculate a time difference, the function below might suit you perfectly.
GetTickCount64()
directly returns the number of milliseconds that have elapsed since the system was started.
If you don't plan to keep your system up for long (precisely, more than 49.7 days), the slightly faster GetTickCount() function will also do.
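A minimal sketch (SomeFunction is a placeholder for the code under test):
#include <windows.h>

ULONGLONG t0 = GetTickCount64();  // milliseconds since system start
SomeFunction();                   // placeholder: the function being timed
ULONGLONG elapsedMs = GetTickCount64() - t0;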
COleDateTime is known to work internally based on milliseconds, because it stores its timestamp in its m_dt member, which is of the DATE type, so it has sufficient resolution for the intended purpose.
I can suggest you base your time on
DATE now = (DATE)COleDateTime::GetCurrentTime();
and then do the respective calculations.
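For instance, a sketch along those lines (DATE counts days, so the difference must be scaled; the function under test is a placeholder):
DATE before = (DATE)COleDateTime::GetCurrentTime();
// ... function under test ...
DATE after = (DATE)COleDateTime::GetCurrentTime();
double elapsedMs = (after - before) * 24.0 * 60.0 * 60.0 * 1000.0; // DATE is measured in days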
I have found some code on measuring execution time here
http://www.dreamincode.net/forums/index.php?showtopic=24685
However, it does not seem to work for calls to system(). I imagine this is because the execution jumps out of the current process.
clock_t begin=clock();
system(something);
clock_t end=clock();
cout<<"Execution time: "<<diffclock(end,begin)<<" s."<<endl;
Then
double diffclock(clock_t clock1, clock_t clock2)
{
    double diffticks = clock1 - clock2;
    double diffms = diffticks / CLOCKS_PER_SEC; // note: despite the name, this is seconds, not ms
    return diffms;
}
However this always returns 0 seconds... Is there another method that will work?
Also, this is in Linux.
Edit: Also, just to add, the execution time is in the order of hours. So accuracy is not really an issue.
Thanks!
Have you considered using gettimeofday?
#include <sys/time.h>

struct timeval start_tv, tv;
gettimeofday(&start_tv, NULL);
system(something);
gettimeofday(&tv, NULL);
double elapsed = (tv.tv_sec - start_tv.tv_sec) +
                 (tv.tv_usec - start_tv.tv_usec) / 1000000.0;
Unfortunately clock() only has one second resolution on Linux (even though it returns the time in units of microseconds).
Many people use gettimeofday() for benchmarking, but that measures elapsed time - not time used by this process/thread - so it isn't ideal. Obviously, if your system is more or less idle and your tests are quite long, then you can average the results. Normally less of a problem, but still worth knowing about, is that the time returned by gettimeofday() is non-monotonic - it can jump around a bit, e.g. when your system first connects to an NTP time server.
The best thing to use for benchmarking is clock_gettime() with whichever option is most suitable for your task.
CLOCK_THREAD_CPUTIME_ID - Thread-specific CPU-time clock.
CLOCK_PROCESS_CPUTIME_ID - High-resolution per-process timer from the CPU.
CLOCK_MONOTONIC - Represents monotonic time since some unspecified starting point.
CLOCK_REALTIME - System-wide realtime clock.
NOTE, though, that not all options are supported on all Linux platforms - except clock_gettime(CLOCK_REALTIME), which is equivalent to gettimeofday().
Useful link: Profiling Code Using clock_gettime
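A minimal sketch using the per-process CPU clock (assuming your platform supports it; older glibc may need -lrt at link time):
#include <time.h>
#include <stdio.h>

int main()
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t0);
    // ... code under test ...
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t1);
    double elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("CPU time: %.6f s\n", elapsed);
}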
Tuomas Pelkonen already presented the gettimeofday method that allows you to get times with a resolution of one microsecond.
In his example he goes on to convert to double. I personally have wrapped the timeval struct into a class of my own that keeps the counts of seconds and microseconds as integers and handles the add and subtract operations correctly.
I prefer to keep integers (with exact maths) rather than go to floating-point numbers and all their woes when I can.
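A rough sketch of that kind of wrapper (the names are illustrative, not taken from his code):
#include <sys/time.h>

struct IntTimeVal {
    long sec;  // whole seconds
    long usec; // microseconds, kept in [0, 1000000)

    static IntTimeVal now() {
        timeval tv;
        gettimeofday(&tv, 0);
        return { (long)tv.tv_sec, (long)tv.tv_usec };
    }

    IntTimeVal operator-(const IntTimeVal& rhs) const {
        IntTimeVal r = { sec - rhs.sec, usec - rhs.usec };
        if (r.usec < 0) { r.usec += 1000000; --r.sec; } // borrow from seconds
        return r;
    }

    IntTimeVal operator+(const IntTimeVal& rhs) const {
        IntTimeVal r = { sec + rhs.sec, usec + rhs.usec };
        if (r.usec >= 1000000) { r.usec -= 1000000; ++r.sec; } // carry into seconds
        return r;
    }
};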