How do I get the system uptime, i.e. the time since the system was started? All I found was time since the epoch and nothing else.
For example, something like time() in the ctime library, but that only gives me the seconds since the epoch. I want something like time(), but measured from system start.
It is OS-dependent and already answered for several systems on Stack Overflow.
#include <chrono> // for all examples :)
Windows ...
... using GetTickCount64() (resolution usually 10-16 milliseconds)
#include <windows.h>
// ...
auto uptime = std::chrono::milliseconds(GetTickCount64());
Linux ...
... using /proc/uptime
#include <fstream>
// ...
std::chrono::milliseconds uptime(0u);
double uptime_seconds;
if (std::ifstream("/proc/uptime", std::ios::in) >> uptime_seconds)
{
    uptime = std::chrono::milliseconds(
        static_cast<unsigned long long>(uptime_seconds*1000.0)
    );
}
... using sysinfo (resolution 1 second)
#include <sys/sysinfo.h>
// ...
std::chrono::milliseconds uptime(0u);
struct sysinfo x;
if (sysinfo(&x) == 0)
{
    uptime = std::chrono::milliseconds(
        static_cast<unsigned long long>(x.uptime)*1000ULL
    );
}
OS X ...
... using sysctl
#include <time.h>
#include <errno.h>
#include <sys/sysctl.h>
// ...
std::chrono::milliseconds uptime(0u);
struct timeval ts;
std::size_t len = sizeof(ts);
int mib[2] = { CTL_KERN, KERN_BOOTTIME };
if (sysctl(mib, 2, &ts, &len, NULL, 0) == 0)
{
    uptime = std::chrono::milliseconds(
        static_cast<unsigned long long>(ts.tv_sec)*1000ULL +
        static_cast<unsigned long long>(ts.tv_usec)/1000ULL
    );
}
BSD-like systems (or systems supporting CLOCK_UPTIME or CLOCK_UPTIME_PRECISE respectively) ...
... using clock_gettime (resolution see clock_getres)
#include <time.h>
// ...
std::chrono::milliseconds uptime(0u);
struct timespec ts;
if (clock_gettime(CLOCK_UPTIME_PRECISE, &ts) == 0)
{
    uptime = std::chrono::milliseconds(
        static_cast<unsigned long long>(ts.tv_sec)*1000ULL +
        static_cast<unsigned long long>(ts.tv_nsec)/1000000ULL
    );
}
+1 to the accepted answer. Nice survey. But the OS X answer is incorrect and I wanted to show the correction here.
The sysctl function with an input of { CTL_KERN, KERN_BOOTTIME } on OS X returns the Unix Time the system was booted, not the time since boot. And on this system (and every other system too), std::chrono::system_clock also measures Unix Time. So one simply has to subtract these two time_points to get the time-since-boot. Here is how you modify the accepted answer's OS X solution to do this:
std::chrono::milliseconds
uptime()
{
    using namespace std::chrono;
    timeval ts;
    auto ts_len = sizeof(ts);
    int mib[2] = { CTL_KERN, KERN_BOOTTIME };
    auto constexpr mib_len = sizeof(mib)/sizeof(mib[0]);
    if (sysctl(mib, mib_len, &ts, &ts_len, nullptr, 0) == 0)
    {
        system_clock::time_point boot{seconds{ts.tv_sec} + microseconds{ts.tv_usec}};
        return duration_cast<milliseconds>(system_clock::now() - boot);
    }
    return 0ms;
}
Notes:
It is best to have chrono do your units conversions for you. If your code has 1000 in it (e.g. to convert seconds to milliseconds), rewrite it to have chrono do the conversion.
You can rely on implicit chrono duration unit conversions to be correct if they compile. If they don't compile, that means you're asking for truncation, and you can explicitly ask for truncation with duration_cast.
It's ok to use a using directive locally in a function if it makes the code more readable.
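For example, here is a minimal, self-contained sketch of the first two notes in action (nothing here is specific to this question; it just shows the conversions):

#include <chrono>

int main()
{
    using namespace std::chrono;

    seconds s{42};
    milliseconds ms = s;                        // implicit: exact, so it compiles
    // seconds back = ms;                       // does not compile: would truncate
    seconds back = duration_cast<seconds>(ms);  // explicit truncation via duration_cast
    return static_cast<int>(back.count());
}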
There is a Boost example on how to customize logging messages.
In it, the author implements a simple function unsigned int get_uptime() to get the system uptime for different platforms, including Windows, OS X, Linux, and BSD.
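For reference, here is a rough, untested sketch of such a function, stitched together from the snippets above (the name and the unsigned-seconds return follow the Boost example; error handling is reduced to returning 0):

#include <chrono>
#include <cstddef>
#if defined(_WIN32)
# include <windows.h>
#elif defined(__APPLE__)
# include <sys/sysctl.h>
#elif defined(__linux__)
# include <sys/sysinfo.h>
#endif

unsigned int get_uptime()
{
#if defined(_WIN32)
    return static_cast<unsigned int>(GetTickCount64() / 1000); // ms -> s
#elif defined(__APPLE__)
    timeval ts;
    std::size_t len = sizeof(ts);
    int mib[2] = { CTL_KERN, KERN_BOOTTIME };
    if (sysctl(mib, 2, &ts, &len, nullptr, 0) != 0)
        return 0;
    // KERN_BOOTTIME is the boot instant in Unix Time; subtract it from now
    auto boot = std::chrono::system_clock::from_time_t(ts.tv_sec);
    auto up   = std::chrono::system_clock::now() - boot;
    return static_cast<unsigned int>(
        std::chrono::duration_cast<std::chrono::seconds>(up).count());
#elif defined(__linux__)
    struct sysinfo x;
    return sysinfo(&x) == 0 ? static_cast<unsigned int>(x.uptime) : 0;
#else
    return 0;
#endif
}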
Related
I'm looking to implement a simple timer mechanism in C++. The code should work in Windows and Linux. The resolution should be as precise as possible (at least millisecond accuracy). This will be used to simply track the passage of time, not to implement any kind of event-driven design. What is the best tool to accomplish this?
Updated answer for an old question:
In C++11 you can portably get to the highest resolution timer with:
#include <iostream>
#include <chrono>
#include "chrono_io"
int main()
{
    typedef std::chrono::high_resolution_clock Clock;
    auto t1 = Clock::now();
    auto t2 = Clock::now();
    std::cout << t2-t1 << '\n';
}
Example output:
74 nanoseconds
"chrono_io" is an extension to ease I/O issues with these new types and is freely available here.
There is also an implementation of <chrono> available in boost (might still be on tip-of-trunk, not sure it has been released).
Update
This is in response to Ben's comment below that subsequent calls to std::chrono::high_resolution_clock take several milliseconds in VS11. Below is a <chrono>-compatible workaround. However, it only works on Intel hardware; you need to dip into inline assembly (the syntax for that varies with the compiler), and you have to hardwire the machine's clock speed into the clock:
#include <chrono>
#include <cassert>       // for the run-time invariant check
#include <type_traits>   // for the compile-time invariant checks
#include <sys/sysctl.h>  // for sysctl on OS X (used by get_clock_speed)

struct clock
{
    typedef unsigned long long rep;
    typedef std::ratio<1, 2800000000> period; // My machine is 2.8 GHz
    typedef std::chrono::duration<rep, period> duration;
    typedef std::chrono::time_point<clock> time_point;
    static const bool is_steady = true;

    static time_point now() noexcept
    {
        unsigned lo, hi;
        asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
        return time_point(duration(static_cast<rep>(hi) << 32 | lo));
    }

private:
    static
    unsigned
    get_clock_speed()
    {
        int mib[] = {CTL_HW, HW_CPU_FREQ};
        const std::size_t namelen = sizeof(mib)/sizeof(mib[0]);
        unsigned freq;
        std::size_t freq_len = sizeof(freq);
        if (sysctl(mib, namelen, &freq, &freq_len, nullptr, 0) != 0)
            return 0;
        return freq;
    }

    static
    bool
    check_invariants()
    {
        static_assert(1 == period::num, "period must be 1/freq");
        assert(get_clock_speed() == period::den);
        static_assert(std::is_same<rep, duration::rep>::value,
                      "rep and duration::rep must be the same type");
        static_assert(std::is_same<period, duration::period>::value,
                      "period and duration::period must be the same type");
        static_assert(std::is_same<duration, time_point::duration>::value,
                      "duration and time_point::duration must be the same type");
        return true;
    }

    static const bool invariants;
};

const bool clock::invariants = clock::check_invariants();
So it isn't portable. But if you want to experiment with a high resolution clock on your own intel hardware, it doesn't get finer than this. Though be forewarned, today's clock speeds can dynamically change (they aren't really a compile-time constant). And with a multiprocessor machine you can even get time stamps from different processors. But still, experiments on my hardware work fairly well. If you're stuck with millisecond resolution, this could be a workaround.
This clock has a duration in terms of your cpu's clock speed (as you reported it). I.e. for me this clock ticks once every 1/2,800,000,000 of a second. If you want to, you can convert this to nanoseconds (for example) with:
using std::chrono::nanoseconds;
using std::chrono::duration_cast;
auto t0 = clock::now();
auto t1 = clock::now();
nanoseconds ns = duration_cast<nanoseconds>(t1-t0);
The conversion will truncate fractions of a cpu cycle to form the nanosecond. Other rounding modes are possible, but that's a different topic.
For me this will return a duration as low as 18 clock ticks, which truncates to 6 nanoseconds.
I've added some "invariant checking" to the above clock, the most important of which is checking that the clock::period is correct for the machine. Again, this is not portable code, but if you're using this clock, you've already committed to that. The private get_clock_speed() function shown here gets the maximum cpu frequency on OS X, and that should be the same number as the constant denominator of clock::period.
Adding this will save you a little debugging time when you port this code to your new machine and forget to update the clock::period to the speed of your new machine. All of the checking is done either at compile-time or at program startup time. So it won't impact the performance of clock::now() in the least.
For C++03:
Boost.Timer might work, but it depends on the C function clock and so may not have good enough resolution for you.
Boost.Date_Time includes a ptime class that's been recommended on Stack Overflow before. See its docs on microsec_clock::local_time and microsec_clock::universal_time, but note its caveat that "Win32 systems often do not achieve microsecond resolution via this API."
STLSoft provides, among other things, thin cross-platform (Windows and Linux/Unix) C++ wrappers around OS-specific APIs. Its performance library has several classes that would do what you need. (To make it cross-platform, pick a class like performance_counter that exists in both the winstl and unixstl namespaces, then use whichever namespace matches your platform.)
For C++11 and above:
The std::chrono library has this functionality built in. See this answer by @HowardHinnant for details.
Matthew Wilson's STLSoft libraries provide several timer types with congruent interfaces, so you can plug-and-play. Amongst the offerings are timers that are low-cost but low-resolution, and ones that are high-resolution but have high cost. There are also ones for measuring per-thread times and per-process times, all of which measure elapsed times.
There's an exhaustive article covering it in Dr. Dobb's from some years ago, although it only covers the Windows ones, those defined in the WinSTL sub-project. STLSoft also provides for UNIX timers in the UNIXSTL sub-project, and you can use the "PlatformSTL" one, which includes the UNIX or Windows one as appropriate, as in:
#include <platformstl/performance/performance_counter.hpp>
#include <iostream>
int main()
{
    platformstl::performance_counter c;
    c.start();
    for(int i = 0; i < 1000000000; ++i);
    c.stop();
    std::cout << "time (s): " << c.get_seconds() << std::endl;
    std::cout << "time (ms): " << c.get_milliseconds() << std::endl;
    std::cout << "time (us): " << c.get_microseconds() << std::endl;
}
HTH
The STLSoft open source library provides a quite good timer on both Windows and Linux platforms. If you want to implement it on your own, just have a look at their sources.
The ACE library has portable high resolution timers also.
Doxygen for high res timer:
http://www.dre.vanderbilt.edu/Doxygen/5.7.2/html/ace/a00244.html
I have seen this implemented a few times as closed-source in-house solutions... which all resorted to #ifdef solutions around native Windows hi-res timers on the one hand and Linux kernel timers using struct timeval (see man timeradd) on the other hand.
You can abstract this and a few Open Source projects have done it -- the last one I looked at was the CoinOR class CoinTimer but there are surely more of them.
I highly recommend the boost::posix_time library for that. It supports timers at various resolutions, down to microseconds, I believe.
SDL2 has an excellent cross-platform high-resolution timer. If however you need sub-millisecond accuracy, I wrote a very small cross-platform timer library here.
It is compatible with both C++03 and C++11/higher versions of C++.
I found this, which looks promising and is extremely straightforward; not sure if there are any drawbacks:
https://gist.github.com/ForeverZer0/0a4f80fc02b96e19380ebb7a3debbee5
/* ----------------------------------------------------------------------- */
/*
Easy embeddable cross-platform high resolution timer function. For each
platform we select the high resolution timer. You can call the 'ns()'
function in your file after embedding this.
*/
#include <stdint.h>
#if defined(__linux)
# define HAVE_POSIX_TIMER
# include <time.h>
# ifdef CLOCK_MONOTONIC
# define CLOCKID CLOCK_MONOTONIC
# else
# define CLOCKID CLOCK_REALTIME
# endif
#elif defined(__APPLE__)
# define HAVE_MACH_TIMER
# include <mach/mach_time.h>
#elif defined(_WIN32)
# define WIN32_LEAN_AND_MEAN
# include <windows.h>
#endif
static uint64_t ns() {
    static uint64_t is_init = 0;
#if defined(__APPLE__)
    static mach_timebase_info_data_t info;
    if (0 == is_init) {
        mach_timebase_info(&info);
        is_init = 1;
    }
    uint64_t now;
    now = mach_absolute_time();
    now *= info.numer;
    now /= info.denom;
    return now;
#elif defined(__linux)
    static struct timespec linux_rate; // queried once, but not otherwise used here
    if (0 == is_init) {
        clock_getres(CLOCKID, &linux_rate);
        is_init = 1;
    }
    uint64_t now;
    struct timespec spec;
    clock_gettime(CLOCKID, &spec);
    // integer arithmetic avoids the rounding error of the original double math
    now = (uint64_t)spec.tv_sec * 1000000000ULL + (uint64_t)spec.tv_nsec;
    return now;
#elif defined(_WIN32)
    static LARGE_INTEGER win_frequency;
    if (0 == is_init) {
        QueryPerformanceFrequency(&win_frequency);
        is_init = 1;
    }
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    return (uint64_t) ((1e9 * now.QuadPart) / win_frequency.QuadPart);
#endif
}
/* ----------------------------------------------------------------------- */
The first answer to C++ library questions is generally BOOST: http://www.boost.org/doc/libs/1_40_0/libs/timer/timer.htm. Does this do what you want? Probably not but it's a start.
The problem is that you want portability, and timer functions are not universal across OSes.
STLSoft have a Performance Library, which includes a set of timer classes, some that work for both UNIX and Windows.
I am not sure about your requirements; if you want to calculate a time interval, please see the thread below:
Calculating elapsed time in a C program in milliseconds
Late to the party here, but I'm working in a legacy codebase that can't be upgraded to c++11 yet. Nobody on our team is very skilled in c++, so adding a library like STL is proving difficult (on top of potential concerns others have raised about deployment issues). I really needed an extremely simple cross platform timer that could live by itself without anything beyond bare-bones standard system libraries. Here's what I found:
http://www.songho.ca/misc/timer/timer.html
Reposting the entire source here just so it doesn't get lost if the site ever dies:
//////////////////////////////////////////////////////////////////////////////
// Timer.cpp
// =========
// High Resolution Timer.
// This timer is able to measure the elapsed time with 1 microsecond accuracy
// on Windows, Linux, and Unix systems
//
// AUTHOR: Song Ho Ahn (song.ahn#gmail.com) - http://www.songho.ca/misc/timer/timer.html
// CREATED: 2003-01-13
// UPDATED: 2017-03-30
//
// Copyright (c) 2003 Song Ho Ahn
//////////////////////////////////////////////////////////////////////////////
#include "Timer.h"
#include <stdlib.h>
///////////////////////////////////////////////////////////////////////////////
// constructor
///////////////////////////////////////////////////////////////////////////////
Timer::Timer()
{
#if defined(WIN32) || defined(_WIN32)
    QueryPerformanceFrequency(&frequency);
    startCount.QuadPart = 0;
    endCount.QuadPart = 0;
#else
    startCount.tv_sec = startCount.tv_usec = 0;
    endCount.tv_sec = endCount.tv_usec = 0;
#endif

    stopped = 0;
    startTimeInMicroSec = 0;
    endTimeInMicroSec = 0;
}
///////////////////////////////////////////////////////////////////////////////
// destructor
///////////////////////////////////////////////////////////////////////////////
Timer::~Timer()
{
}
///////////////////////////////////////////////////////////////////////////////
// start timer.
// startCount will be set at this point.
///////////////////////////////////////////////////////////////////////////////
void Timer::start()
{
    stopped = 0; // reset stop flag
#if defined(WIN32) || defined(_WIN32)
    QueryPerformanceCounter(&startCount);
#else
    gettimeofday(&startCount, NULL);
#endif
}
///////////////////////////////////////////////////////////////////////////////
// stop the timer.
// endCount will be set at this point.
///////////////////////////////////////////////////////////////////////////////
void Timer::stop()
{
    stopped = 1; // set timer stopped flag
#if defined(WIN32) || defined(_WIN32)
    QueryPerformanceCounter(&endCount);
#else
    gettimeofday(&endCount, NULL);
#endif
}
///////////////////////////////////////////////////////////////////////////////
// compute elapsed time in micro-second resolution.
// the other getElapsedTime* functions call this first, then convert to the corresponding resolution.
///////////////////////////////////////////////////////////////////////////////
double Timer::getElapsedTimeInMicroSec()
{
#if defined(WIN32) || defined(_WIN32)
    if(!stopped)
        QueryPerformanceCounter(&endCount);

    startTimeInMicroSec = startCount.QuadPart * (1000000.0 / frequency.QuadPart);
    endTimeInMicroSec = endCount.QuadPart * (1000000.0 / frequency.QuadPart);
#else
    if(!stopped)
        gettimeofday(&endCount, NULL);

    startTimeInMicroSec = (startCount.tv_sec * 1000000.0) + startCount.tv_usec;
    endTimeInMicroSec = (endCount.tv_sec * 1000000.0) + endCount.tv_usec;
#endif

    return endTimeInMicroSec - startTimeInMicroSec;
}
///////////////////////////////////////////////////////////////////////////////
// divide elapsedTimeInMicroSec by 1000
///////////////////////////////////////////////////////////////////////////////
double Timer::getElapsedTimeInMilliSec()
{
    return this->getElapsedTimeInMicroSec() * 0.001;
}
///////////////////////////////////////////////////////////////////////////////
// divide elapsedTimeInMicroSec by 1000000
///////////////////////////////////////////////////////////////////////////////
double Timer::getElapsedTimeInSec()
{
    return this->getElapsedTimeInMicroSec() * 0.000001;
}
///////////////////////////////////////////////////////////////////////////////
// same as getElapsedTimeInSec()
///////////////////////////////////////////////////////////////////////////////
double Timer::getElapsedTime()
{
    return this->getElapsedTimeInSec();
}
and the header file:
//////////////////////////////////////////////////////////////////////////////
// Timer.h
// =======
// High Resolution Timer.
// This timer is able to measure the elapsed time with 1 microsecond accuracy
// on Windows, Linux, and Unix systems
//
// AUTHOR: Song Ho Ahn (song.ahn#gmail.com) - http://www.songho.ca/misc/timer/timer.html
// CREATED: 2003-01-13
// UPDATED: 2017-03-30
//
// Copyright (c) 2003 Song Ho Ahn
//////////////////////////////////////////////////////////////////////////////
#ifndef TIMER_H_DEF
#define TIMER_H_DEF
#if defined(WIN32) || defined(_WIN32) // Windows system specific
#include <windows.h>
#else // Unix based system specific
#include <sys/time.h>
#endif
class Timer
{
public:
    Timer();                            // default constructor
    ~Timer();                           // default destructor

    void   start();                     // start timer
    void   stop();                      // stop the timer
    double getElapsedTime();            // get elapsed time in seconds
    double getElapsedTimeInSec();       // get elapsed time in seconds (same as getElapsedTime)
    double getElapsedTimeInMilliSec();  // get elapsed time in milliseconds
    double getElapsedTimeInMicroSec();  // get elapsed time in microseconds

protected:

private:
    double startTimeInMicroSec;         // starting time in microseconds
    double endTimeInMicroSec;           // ending time in microseconds
    int    stopped;                     // stop flag
#if defined(WIN32) || defined(_WIN32)
    LARGE_INTEGER frequency;            // ticks per second
    LARGE_INTEGER startCount;
    LARGE_INTEGER endCount;
#else
    timeval startCount;
    timeval endCount;
#endif
};
#endif // TIMER_H_DEF
If one is using the Qt framework in the project, the best solution is probably to use QElapsedTimer.
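For completeness, a minimal sketch of what that looks like (assuming Qt 5 or later, where QThread::msleep is public):

#include <QElapsedTimer>
#include <QThread>
#include <QDebug>

int main()
{
    QElapsedTimer timer;
    timer.start();                             // start measuring

    QThread::msleep(100);                      // stand-in for the work being timed

    qDebug() << timer.elapsed()      << "ms";  // millisecond resolution
    qDebug() << timer.nsecsElapsed() << "ns";  // nanosecond resolution
    return 0;
}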
What's the best way to calculate a time difference in C++? I'm timing the execution speed of a program, so I'm interested in milliseconds. Better yet, seconds.milliseconds.
The accepted answer works, but needs to include ctime or time.h as noted in the comments.
See the std::clock() function.
const clock_t begin_time = clock();
// do something
std::cout << float(clock() - begin_time) / CLOCKS_PER_SEC;
If you want to calculate execution time for yourself (not for the user), it is better to do this in clock ticks (not seconds).
EDIT:
the responsible header files are <ctime> or <time.h>
I added this answer to clarify that the accepted answer shows CPU time, which may not be the time you want. According to the reference, there is CPU time and there is wall clock time. Wall clock time is the actual elapsed time, regardless of other conditions like the CPU being shared by other processes. For example, when I used multiple processors to do a certain task, the CPU time was 18 s where it actually took 2 s of wall clock time.
To get the actual time you do,
#include <chrono>
auto t_start = std::chrono::high_resolution_clock::now();
// the work...
auto t_end = std::chrono::high_resolution_clock::now();
double elapsed_time_ms = std::chrono::duration<double, std::milli>(t_end-t_start).count();
If you are using C++11, here is a simple wrapper (see this gist):
#include <iostream>
#include <chrono>
class Timer
{
public:
    Timer() : beg_(clock_::now()) {}
    void reset() { beg_ = clock_::now(); }
    double elapsed() const {
        return std::chrono::duration_cast<second_>
            (clock_::now() - beg_).count(); }

private:
    typedef std::chrono::high_resolution_clock clock_;
    typedef std::chrono::duration<double, std::ratio<1> > second_;
    std::chrono::time_point<clock_> beg_;
};
Or for C++03 on *nix:
#include <iostream>
#include <ctime>
class Timer
{
public:
    Timer() { clock_gettime(CLOCK_REALTIME, &beg_); }

    double elapsed() {
        clock_gettime(CLOCK_REALTIME, &end_);
        return end_.tv_sec - beg_.tv_sec +
            (end_.tv_nsec - beg_.tv_nsec) / 1000000000.;
    }

    void reset() { clock_gettime(CLOCK_REALTIME, &beg_); }

private:
    timespec beg_, end_;
};
Example of usage:
int main()
{
    Timer tmr;
    double t = tmr.elapsed();
    std::cout << t << std::endl;

    tmr.reset();
    t = tmr.elapsed();
    std::cout << t << std::endl;

    return 0;
}
I would seriously consider the use of Boost, particularly boost::posix_time::ptime and boost::posix_time::time_duration (at http://www.boost.org/doc/libs/1_38_0/doc/html/date_time/posix_time.html).
It's cross-platform, easy to use, and in my experience provides the highest level of time resolution an operating system provides. Possibly also very important: it provides some very nice IO operators.
To use it to calculate the difference in program execution (to microseconds; probably overkill), it would look something like this [browser written, not tested]:
ptime time_start(microsec_clock::local_time());
//... execution goes here ...
ptime time_end(microsec_clock::local_time());
time_duration duration(time_end - time_start);
cout << duration << '\n';
Boost 1.46.0 and up includes the Chrono library:
thread_clock class provides access to the real thread wall-clock, i.e.
the real CPU-time clock of the calling thread. The thread relative
current time can be obtained by calling thread_clock::now()
#include <boost/chrono/thread_clock.hpp>
{
    // ...
    using namespace boost::chrono;
    thread_clock::time_point start = thread_clock::now();
    // ...
    thread_clock::time_point stop = thread_clock::now();
    std::cout << "duration: " << duration_cast<milliseconds>(stop - start).count() << " ms\n";
}
In Windows: use GetTickCount
// GetTickCount definition
#include <windows.h>
#include <iostream>

int main()
{
    DWORD dw1 = GetTickCount();
    // Do something
    DWORD dw2 = GetTickCount();
    std::cout << "Time difference is " << (dw2 - dw1) << " milliseconds" << std::endl;
}
You can also use clock_gettime. It can be used to measure:
The system-wide real-time clock
The system-wide monotonic clock
Per-process CPU time
Per-thread CPU time
Code is as follows:
#include <time.h>
#include <iostream>

int main() {
    timespec ts_beg, ts_end;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts_beg);
    // ... work to be measured ...
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts_end);
    std::cout << (ts_end.tv_sec - ts_beg.tv_sec) + (ts_end.tv_nsec - ts_beg.tv_nsec) / 1e9 << " sec";
}
Just in case you are on Unix, you can use time to get the execution time:
$ g++ myprog.cpp -o myprog
$ time ./myprog
For me, the easiest way is:
#include <boost/timer.hpp>
boost::timer t;
double duration;
t.restart();
/* DO SOMETHING HERE... */
duration = t.elapsed();
t.restart();
/* DO OTHER STUFF HERE... */
duration = t.elapsed();
Using this piece of code, you don't have to do the classic end - start.
Enjoy your favorite approach.
Just a side note: if you're running on Windows, and you really really need precision, you can use QueryPerformanceCounter. It gives you time in (potentially) nanoseconds.
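A minimal sketch of that API, in case it helps (Windows only; the counter frequency is fixed at boot, so it only needs to be queried once):

#include <windows.h>
#include <iostream>

int main()
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);   // counts per second
    QueryPerformanceCounter(&t0);

    // ... work to be timed ...

    QueryPerformanceCounter(&t1);
    double elapsed_us = double(t1.QuadPart - t0.QuadPart) * 1e6 / freq.QuadPart;
    std::cout << "elapsed: " << elapsed_us << " us\n";
    return 0;
}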
Get the system time in milliseconds at the beginning, and again at the end, and subtract.
To get the number of milliseconds since 1970 in POSIX you would write:
struct timeval tv;
gettimeofday(&tv, NULL);
return ((((unsigned long long)tv.tv_sec) * 1000) +
(((unsigned long long)tv.tv_usec) / 1000));
To get the number of milliseconds since 1601 on Windows you would write:
SYSTEMTIME systime;
FILETIME filetime;
GetSystemTime(&systime);
if (!SystemTimeToFileTime(&systime, &filetime))
return 0;
unsigned long long ns_since_1601;
ULARGE_INTEGER* ptr = (ULARGE_INTEGER*)&ns_since_1601;
// copy the result into the ULARGE_INTEGER; this is actually
// copying the result into the ns_since_1601 unsigned long long.
ptr->u.LowPart = filetime.dwLowDateTime;
ptr->u.HighPart = filetime.dwHighDateTime;
// Compute the number of milliseconds since 1601; we have to
// divide by 10,000, since the current value is the number of 100ns
// intervals since 1601, not ms.
return (ns_since_1601 / 10000);
If you cared to normalize the Windows answer so that it also returned the number of milliseconds since 1970, then you would have to adjust your answer by 11644473600000 milliseconds. But that isn't necessary if all you care about is the elapsed time.
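For illustration, that adjustment would be a one-liner (variable names here are hypothetical):

// 1601-01-01 to 1970-01-01 is 11644473600 seconds
unsigned long long ms_since_1970 = ms_since_1601 - 11644473600000ULL;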
If you are using:
tstart = clock();
// ...do something...
tend = clock();
Then you will need the following to get time in seconds:
time = (tend - tstart) / (double) CLOCKS_PER_SEC;
This seems to work fine on an Intel Mac running 10.7:
#include <time.h>
time_t start = time(NULL);
//Do your work
time_t end = time(NULL);
std::cout<<"Execution Time: "<< (double)(end-start)<<" Seconds"<<std::endl;
I am in the middle of developing a cross platform application that changes the system date and time to a specified value. I have completed the part for Windows.
How can I set the system date and time from a C++ program in Linux? I am looking for a function similar to SetSystemTime(SYSTEMTIME &x).
As far as I understood, settimeofday() does nothing with the date, and I am not sure about the usage of the function stime(). I hope mktime() has nothing to do with my need.
Can anybody help me?
You understand wrongly: settimeofday(2) sets the Epoch time, which is both date and time. Read time(7).
So if you start from a string expressing a date, convert that string with strptime(3) to a struct tm, then convert that to a Unix time with mktime(3), then feed that to settimeofday (i.e. the tv_sec field), as sketched below.
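A minimal sketch of that chain, assuming the date arrives as a "YYYY-MM-DD HH:MM:SS" string (the helper's name is mine, and the call only succeeds with root privilege):

#include <time.h>      // strptime, mktime
#include <sys/time.h>  // settimeofday
#include <string.h>    // memset

int set_system_time(const char* datetime)  // hypothetical helper
{
    struct tm tm;
    memset(&tm, 0, sizeof tm);
    if (strptime(datetime, "%Y-%m-%d %H:%M:%S", &tm) == NULL)
        return -1;                  // parse failure
    tm.tm_isdst = -1;               // let mktime determine DST

    struct timeval tv;
    tv.tv_sec  = mktime(&tm);       // local broken-down time -> Epoch seconds
    tv.tv_usec = 0;
    return settimeofday(&tv, NULL); // requires root / CAP_SYS_TIME
}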
However, settimeofday requires root privilege, and I believe you usually should avoid calling it (at least on usual, Internet-connected computers). Better to set up some NTP client service on your Linux PC (e.g. run ntpd or chrony, and more generally read the sysadmin chapter on keeping time...). See also adjtimex(2).
BTW, abruptly changing the system time on a multi-tasking system (like Linux or Windows) is a very dangerous operation, since it will upset and disturb a lot of system tasks depending on or using the time. There are few good reasons to do that (it is a very bad idea in general). If you do it, do it with very few programs & services running (e.g. single-user mode Linux). You should not do that in ordinary application code.
I wrote this piece of code to set the date and time under Linux.
#include <time.h>
struct tm time = { 0 };

// Year, Month, Day, Hour, Minute, Second are supplied by the caller
time.tm_year = Year - 1900;
time.tm_mon  = Month - 1;
time.tm_mday = Day;
time.tm_hour = Hour;
time.tm_min  = Minute;
time.tm_sec  = Second;

if (time.tm_year < 0)
{
    time.tm_year = 0;
}

time_t t = mktime(&time);

if (t != (time_t) -1)
{
    stime(&t);
}
Note that stime requires root privileges.
Example using clock_settime instead of stime since, as Mehmet Fide pointed out, stime is now deprecated. I like the reference code from Converting between timespec & std::chrono for this:
#include <time.h>
#include <chrono>
#include <iostream>

using namespace std::chrono; // for example brevity

constexpr timespec timepointToTimespec(
    time_point<system_clock, nanoseconds> tp)
{
    auto secs = time_point_cast<seconds>(tp);
    auto ns = time_point_cast<nanoseconds>(tp) -
              time_point_cast<nanoseconds>(secs);
    return timespec{secs.time_since_epoch().count(), ns.count()};
}

const char* timePointToChar(
    const time_point<system_clock, nanoseconds>& tp) {
    time_t ttp = system_clock::to_time_t(tp);
    return ctime(&ttp);
}

int main()
{
    const auto system_time = system_clock::now();
    std::cout << "System time = " << timePointToChar(system_time) << std::endl;
    const timespec ts = timepointToTimespec(system_time);
    clock_settime(CLOCK_REALTIME, &ts);
}
I am working on a project using Visual C++ /CLR in console mode.
How can I get the system clock in microseconds ?
I want to display hours:minutes:seconds:microseconds
The following program works well but is not compatible with other platforms:
#include <stdio.h>
#include <time.h>     // localtime, struct tm
#include <sys/time.h>

int main()
{
    struct timeval tv;
    struct timezone tz;
    struct tm *tm;
    gettimeofday(&tv, &tz);
    tm = localtime(&tv.tv_sec);
    printf(" %d:%02d:%02d %ld \n", tm->tm_hour, tm->tm_min, tm->tm_sec, tv.tv_usec);
    return 0;
}
You could use ptime microsec_clock::local_time() from Boost.
The documentation is available here.
After that, you can use std::string to_iso_extended_string(ptime) to display the returned time as a string or you can use the members of ptime directly to format the output by yourself.
Anyway it is worth noting that:
Win32 systems often do not achieve microsecond resolution via this API. If higher resolution is critical to your application test your platform to see the achieved resolution.
So I guess it depends on how precise you require your "clock" to be.
Thank you, Mr. ereOn.
I followed your instructions and wrote this code; it works 100%:
#include <iostream>
#include "boost/date_time/posix_time/posix_time.hpp"

typedef boost::posix_time::ptime Time;

int main()
{
    Time t1;
    for (int i = 0; i < 1000; i++)
    {
        t1 = boost::posix_time::microsec_clock::local_time();
        std::cout << to_iso_extended_string(t1) << "\n";
    }
    return 0;
}
I have tried clock_gettime(CLOCK_REALTIME) and gettimeofday() without luck, and even the most basic clock(), which just returns 0 for me (?).
But none of them counts the time spent in sleep. I don't need a high-resolution timer, but I need something to get the elapsed time in ms.
EDIT: Final program:
#include <iostream>
#include <string>
#include <time.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <unistd.h> // for sleep()

using namespace std;

// Non-system sleep (wasting cpu)
void wait ( int seconds )
{
    clock_t endwait;
    endwait = clock () + seconds * CLOCKS_PER_SEC ;
    while (clock() < endwait) {}
}

void show_time() {
    timeval tv;
    gettimeofday(&tv, 0);
    time_t t = tv.tv_sec;
    long sub_sec = tv.tv_usec;

    cout << "t value: " << t << endl;
    cout << "sub_sec value: " << sub_sec << endl;
}

int main() {
    show_time();
    sleep(2);
    show_time();
    wait(2);
    show_time();
}
You need to try gettimeofday() again; it certainly counts the wall clock time, so it counts while the process sleeps as well.
#include <sys/time.h>

long long getmsofday()
{
    struct timeval tv;
    gettimeofday(&tv, NULL); // the second (timezone) argument may be NULL
    return (long long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

...

long long start = getmsofday();
do_something();
long long end = getmsofday();
printf("do_something took %lld ms\n", end - start);
Your problem probably relates to integer division. You need to cast one of the division operands to float/double to avoid truncation of fractions of a second.
clock_t start = clock();
// do stuff

// Subtract the starting count, then cast either operand of the division
// to a double. I chose the right-hand operand, CLOCKS_PER_SEC.
double time_passed = (clock() - start) / static_cast<double>(CLOCKS_PER_SEC);
[Edit] As pointed out, clock() measures CPU time (clock ticks/cycles) and is not well-suited for wall-clock timer tests. If you want a portable solution for that, see Boost.Timer as a possible solution.
You actually want clock_gettime(CLOCK_MONOTONIC, ...).
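Something along these lines should do (an untested sketch; unlike clock(), CLOCK_MONOTONIC keeps ticking while the process sleeps, and link with -lrt on older glibc):

#include <time.h>
#include <stdio.h>
#include <unistd.h>

int main()
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    sleep(2); // the sleep is counted, unlike with clock()

    clock_gettime(CLOCK_MONOTONIC, &t1);
    long long ms = (t1.tv_sec - t0.tv_sec) * 1000LL +
                   (t1.tv_nsec - t0.tv_nsec) / 1000000LL;
    printf("elapsed: %lld ms\n", ms);
    return 0;
}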