Consider this example of code:
#include <chrono>
#include <iostream>
int main()
{
    using namespace std::chrono;
    system_clock::time_point s = system_clock::now();
    for (int i = 0; i < 1000000; ++i)
        std::cout << duration_cast<duration<double>>(system_clock::now() - s).count() << "\n";
}
I expect this to print the elapsed time in seconds, but it actually prints the time in thousands of seconds (the expected result multiplied by 0.001). Am I doing something wrong?
Edit
Since seconds is equivalent to duration<some-integer-type>, duration_cast<seconds> gives the same (incorrect) result.
I used gcc-4.7.3-r1
Your program works as expected using both gcc 4.8.2 and VS2013. I think it might be a compiler bug in your old gcc 4.7.
maverik guessed this right: the problem was binary incompatibility inside std::chrono - the gcc-4.8 and gcc-4.7 versions of libstdc++ did not agree on the internal units of time.
You could use duration_cast<seconds> instead of duration_cast<duration<double>>.
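A minimal sketch of that suggestion applied to the program above (note that it prints whole seconds, dropping the fractional part):

#include <chrono>
#include <iostream>

int main()
{
    using namespace std::chrono;
    system_clock::time_point s = system_clock::now();
    for (int i = 0; i < 1000000; ++i)
        std::cout << duration_cast<seconds>(system_clock::now() - s).count() << "\n";
}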
I have a problem with Visual Studio 2017.
I'm trying to get the current time and date with millisecond resolution. I tried the following code in a few compilers:
#define _CRT_SECURE_NO_WARNINGS
#include <iostream>
#include <ctime>
#include <chrono>

using namespace std;
using namespace chrono;

int main()
{
    high_resolution_clock::time_point p = high_resolution_clock::now();
    milliseconds ms = duration_cast<milliseconds>(p.time_since_epoch());
    seconds s = duration_cast<seconds>(ms);
    time_t t = s.count();
    cout << ctime(&t) << "\n";
    cin.ignore(1);
}
Every compiler except Visual Studio 2017 prints the correct time. The output of Visual Studio is:
Tue Jan 6 07:28:21 1970
The MinGW output:
Sun Feb 03 18:01:38 2019
Is there any way to fix the code so that it works in all compilers correctly? I need high_resolution_clock to have access to milliseconds.
high_resolution_clock is an alias for the clock with the highest resolution available:
Class std::chrono::high_resolution_clock represents the clock with the smallest tick period provided by the implementation. It may be an alias of std::chrono::system_clock or std::chrono::steady_clock, or a third, independent clock.
This could explain the different times you get on different compilers.
steady_clock does not guarantee to give a time that makes sense as a wall-clock time, but it is good for keeping track of elapsed time:
This clock is not related to wall clock time (for example, it can be time since last reboot), and is most suitable for measuring intervals.
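For instance, a minimal interval-measurement sketch with steady_clock (the work being timed is a placeholder):

#include <chrono>
#include <iostream>

int main()
{
    using namespace std::chrono;
    auto start = steady_clock::now();
    // ... code to be timed goes here ...
    auto stop = steady_clock::now();
    std::cout << duration_cast<milliseconds>(stop - start).count() << " ms\n";
}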
system_clock represents the clock of your OS:
Class std::chrono::system_clock represents the system-wide real time wall clock.
It may not be monotonic: on most systems, the system time can be adjusted at any moment. It is the only C++ clock that has the ability to map its time points to C-style time, and, therefore, to be displayed (until C++20).
If you need the milliseconds of a date or time point, use std::chrono::system_clock; but if you just need to keep track of elapsed time, use std::chrono::high_resolution_clock.
To get the number of milliseconds since the epoch of system_clock:
auto timePoint = std::chrono::system_clock::now();
auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
    timePoint.time_since_epoch());
std::cout << "since epoch: " << ms.count() << " ms";
The above snippet should work across most operating systems and compilers. Note, however, that time_since_epoch is not guaranteed to return the time since 1970, only the time since the clock's epoch; in most cases that is the desired behaviour.
The code assumes that time_since_epoch() returns the number of seconds since January 1, 1970, so that the value can be assigned to a time_t variable.
That assumption is wrong: time_since_epoch() can return any unit. In fact, high_resolution_clock is not designed to retrieve an absolute time and date; it's meant for performance measurements in the microsecond and nanosecond range.
In order to retrieve an absolute time / date, use system_clock. The class has a static method to create a time_t value:
#include <iostream>
#include <chrono>
#include <ctime>

using namespace std;
using namespace chrono;

int main()
{
    time_point<system_clock> now = system_clock::now();
    time_t now_time = system_clock::to_time_t(now);
    cout << ctime(&now_time) << "\n";
}
Update
To get the milliseconds since Jan 1, 1970:
#include <iostream>
#include <chrono>
#include <ctime>

using namespace std;
using namespace chrono;

int main()
{
    system_clock::time_point epochStart = system_clock::from_time_t(0);
    long long epochStartMs = duration_cast<milliseconds>(epochStart.time_since_epoch()).count();
    system_clock::time_point timePoint = system_clock::now();
    long long timePointMs = duration_cast<milliseconds>(timePoint.time_since_epoch()).count();
    long long durMs = timePointMs - epochStartMs;
    cout << "Since 1st Jan 1970: " << durMs << " ms" << "\n";
}
For most systems, epochStartMs will probably be 0. But I think the pre-C++20 standard doesn't guarantee that system_clock has its epoch on Jan 1, 1970 (C++20 later pinned it to Unix time).
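A quick way to verify this on a given implementation (my own illustrative check) is to see whether from_time_t(0) coincides with the clock's epoch:

#include <chrono>
#include <iostream>

int main()
{
    using namespace std::chrono;
    // Prints 0 if Jan 1, 1970 (from_time_t(0)) is exactly the clock's epoch.
    auto offset = system_clock::from_time_t(0).time_since_epoch();
    std::cout << duration_cast<milliseconds>(offset).count() << " ms\n";
}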
Does mt19937_64 have a higher throughput (bit/s) than the 32 bit version, mt19937, assuming a 64 bit architecture?
What about after vectorization?
As @byjoe points out, this obviously depends on the compiler.
In this case, it seems to be considerably more dependent on the compiler than is typical though. For example, the Boost test linked in the comments uses the compiler from VC++ 2010, and shows only a fairly slight increase in random bits per second from using mt19937_64.
To get more up-to-date information, I whipped up a simple test:
#include <random>
#include <chrono>
#include <iostream>
#include <iomanip>

template <class T, class U>
U test(char const *label, U count) {
    using namespace std::chrono;
    T gen(100);
    U result = 0;
    auto start = high_resolution_clock::now();
    for (U i = 0; i < count; i++)
        result ^= gen();
    auto stop = high_resolution_clock::now();
    std::cout << "Time for " << std::left << std::setw(12) << label
              << duration_cast<milliseconds>(stop - start).count() << "\n";
    return result;
}

int main(int argc, char **argv) {
    unsigned long long limit = 1000000000;
    auto result1 = test<std::mt19937>("mt19937: ", limit);
    auto result2 = test<std::mt19937_64>("mt19937_64: ", limit);
    std::cout << "Ignore: " << result1 << ", " << result2 << "\n";
}
With VC++ 2015 update 3 (with /O2b2 /GL, though it probably doesn't matter), I got results like these:
Time for mt19937: 4339
Time for mt19937_64: 4215
Ignore: 2598366015, 13977046647333287932
This shows mt19937_64 as being slightly faster per call, so over twice as fast per bit as mt19937. With MinGW (using -O3), the results were much more like those linked from the Boost site:
Time for mt19937: 2211
Time for mt19937_64: 4183
Ignore: 2598366015, 13977046647333287932
In this case, mt19937_64 takes just a little less than twice the time per call, so it's only slightly faster per bit. The highest overall speed seems to be from g++ with mt19937_64, but the difference between g++ and VC++ (on these runs) is less than 1%, so I'm not sure it's reproducible.
For what it's worth, the difference in speed (per call) between mt19937 and mt19937_64 with VC++ is also pretty small, but it does seem to be reproducible--it happened quite consistently in my testing. I did wonder whether that might be (at least partially) a matter of clock management--that when the code first started, the CPU was idle and the clock had been slowed down, so the first part of the first run executed at a lower clock speed. To check, I reversed the order and tested mt19937_64 first. I think my hypothesis was at least partially correct--when I reversed the order, mt19937_64 slowed down relative to mt19937, so the two were nearly identical on a per-call basis with VC++.
It clearly depends on your compiler and its implementation. I just tested, and the 64-bit version takes about 60% longer call-for-call, which makes the 64-bit version about 25% faster bit-for-bit. I tested with an i7 CPU.
If you need max speed, you may want to consider using something else. Especially if the numbers don't need to be very high quality.
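The answer doesn't name a specific alternative, but purely as an illustration, a splitmix64-style generator is one common choice when raw speed matters more than statistical quality:

#include <cstdint>
#include <iostream>

// Illustrative splitmix64-style generator: very fast, statistically decent,
// but lower quality than mt19937_64 and not suitable for cryptography.
struct splitmix64 {
    std::uint64_t state;
    explicit splitmix64(std::uint64_t seed) : state(seed) {}
    std::uint64_t operator()() {
        std::uint64_t z = (state += 0x9E3779B97F4A7C15ULL);
        z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
        z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
        return z ^ (z >> 31);
    }
};

int main() {
    splitmix64 gen(100);
    std::cout << gen() << "\n";
}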
I have tried this and other code I found online, but it did not work. My IDE is Xcode.
Edit: When I tried the code in the link, the long long microseconds variable always returned 0.
I would like to print the timestamp in this manner: (hour:minute:microseconds). For example, 15:17:09:134613464312.
The method you are using is correct.
If the code between the two now() calls lasts less than a microsecond, microseconds will always be zero, since the count is an integer rather than a floating-point number. If you always get zero, that means you need higher resolution (try nanoseconds).
By the way, if you simply want a timestamp, you don't want to use this code, since it is meant to compute the elapsed time between two time points. You can try something like this:
auto microseconds = std::chrono::duration_cast<std::chrono::microseconds>
(std::chrono::high_resolution_clock::now().time_since_epoch()).count();
This is really a timestamp and not a duration.
EDIT: To have the current microseconds count, you can do something like this:
#include <iostream>
#include <chrono>
#include <ctime>

using namespace std;
using namespace std::chrono;

using days = duration<int, ratio_multiply<hours::period, ratio<24>>::type>;

int main() {
    system_clock::time_point now = system_clock::now();
    system_clock::duration tp = now.time_since_epoch();
    days d = duration_cast<days>(tp);
    tp -= d;
    hours h = duration_cast<hours>(tp);
    tp -= h;
    minutes m = duration_cast<minutes>(tp);
    tp -= m;
    seconds s = duration_cast<seconds>(tp);
    tp -= s;
    microseconds us = duration_cast<microseconds>(tp); // sub-second remainder
    cout << us.count() << "\n";
    return 0;
}
This will print the microsecond part of the current time.
Getting Hours:Minutes:Seconds from the h, m, and s values computed above is pretty easy; see the sketch below.
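A self-contained sketch of the requested hour:minute:second:microseconds format, using the same decomposition (this prints UTC; no time-zone handling is attempted):

#include <chrono>
#include <cstdio>

int main() {
    using namespace std::chrono;
    auto tp = system_clock::now().time_since_epoch();
    auto h  = duration_cast<hours>(tp) % 24;             // hour of day (UTC)
    auto m  = duration_cast<minutes>(tp) % 60;
    auto s  = duration_cast<seconds>(tp) % 60;
    auto us = duration_cast<microseconds>(tp) % 1000000; // sub-second part
    std::printf("%02lld:%02lld:%02lld:%06lld\n",
                static_cast<long long>(h.count()), static_cast<long long>(m.count()),
                static_cast<long long>(s.count()), static_cast<long long>(us.count()));
}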
I've got a problem getting the actual system time with milliseconds. The only good method I found is in Windows.h, but I can't use it. I'm supposed to use std::chrono. How can I do this?
I spent a lot of time trying to google it, but I found only second-precision examples.
I'm trying to get string like this:
[2014-11-25 22:15:38:449]
Using code from this answer:
#include <chrono>
#include <cstdio>
#include <ctime>
#include <iostream>

template <typename Duration>
void print_time(tm t, Duration fraction) {
    using namespace std::chrono;
    std::printf("[%04u-%02u-%02u %02u:%02u:%02u.%03u]\n", t.tm_year + 1900,
                t.tm_mon + 1, t.tm_mday, t.tm_hour, t.tm_min, t.tm_sec,
                static_cast<unsigned>(fraction / milliseconds(1)));
    // VS2013's library has a bug which may require you to replace
    // "fraction / milliseconds(1)" with
    // "duration_cast<milliseconds>(fraction).count()"
}

int main() {
    using namespace std;
    using namespace std::chrono;
    system_clock::time_point now = system_clock::now();
    system_clock::duration tp = now.time_since_epoch();
    tp -= duration_cast<seconds>(tp);
    time_t tt = system_clock::to_time_t(now);
    print_time(*gmtime(&tt), tp);
    print_time(*localtime(&tt), tp);
}
One thing to keep in mind is that the timer returning values in sub-millisecond denominations does not necessarily indicate that the timer has sub-millisecond resolution. I think the Windows implementation in VS2015 may finally be fixed, but the timer they've been using to back their chrono implementation so far has been sensitive to the OS timeBeginPeriod() setting and displays varying resolution; the default setting is, I think, 16 milliseconds.
Also, the above code assumes that neither UTC nor your local time zone is offset from the epoch of std::chrono::system_clock by a fractional-second value.
Example of using Howard's date functions to avoid ctime: http://coliru.stacked-crooked.com/a/98db840b238d3ce7
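A minimal sketch of that approach, assuming Howard Hinnant's date library is available as "date/date.h" (the exact header path may differ in your setup):

#include "date/date.h" // Howard Hinnant's date library
#include <chrono>
#include <iostream>

int main() {
    using namespace std::chrono;
    // Truncate to milliseconds, then print date and time with the fractional
    // part included; no ctime, gmtime, or localtime involved. Output is UTC.
    auto now = date::floor<milliseconds>(system_clock::now());
    std::cout << date::format("[%F %T]", now) << "\n"; // e.g. [2014-11-25 22:15:38.449]
}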
This answer still uses a bit of the C API, but it's only used inside the function, so you can forget about it:
#include <chrono>
#include <ctime>
#include <iostream>

template <typename T>
void print_time(std::chrono::time_point<T> time) {
    using namespace std;
    using namespace std::chrono;
    time_t curr_time = T::to_time_t(time);
    char sRep[100];
    strftime(sRep, sizeof(sRep), "%Y-%m-%d %H:%M:%S", localtime(&curr_time));
    typename T::duration since_epoch = time.time_since_epoch();
    seconds s = duration_cast<seconds>(since_epoch);
    since_epoch -= s;
    milliseconds milli = duration_cast<milliseconds>(since_epoch);
    // Note: milli.count() is not zero-padded; pad it if you need "050" rather than "50".
    cout << '[' << sRep << ":" << milli.count() << "]\n";
}
This is merely a rewrite of the code from bames53's answer, but using strftime to shorten it a bit.
std::chrono gives you utilities to represent a point in time and the elapsed duration between two points in time. It allows you to get information about these time intervals.
It does not provide any calendar information. Unfortunately, at this time there are no tools in the C++ standard for this (calendar support only arrived in C++20). boost::date_time may be helpful here.
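For instance, a small sketch with boost::date_time (assuming Boost is installed; the output is Boost's "simple string" form, e.g. 2014-Nov-25 22:15:38.449123):

#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main() {
    // microsec_clock provides a calendar date and time of day
    // with sub-second precision.
    boost::posix_time::ptime t = boost::posix_time::microsec_clock::local_time();
    std::cout << boost::posix_time::to_simple_string(t) << "\n";
}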
Did anybody notice that to_time_t rounds the seconds instead of truncating them?
auto now = system_clock::now();
time_t secs = system_clock::to_time_t(now);
now {_MyDur={_MyRep=15107091978759765 } }
secs = 1510709198
so when you tack on the milliseconds
auto tse = now.time_since_epoch();
auto now_ms = duration_cast<milliseconds>(tse);
auto now_s = duration_cast<seconds>(tse);
auto jst_ms = now_ms - now_s;
DWORD msecs = jst_ms.count();
msecs = 875
secs should be 1510709197, but look at now_s, it's right
now_s {_MyRep=1510709197 }
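If you want truncation rather than rounding, one option (my own sketch) is to derive the seconds from the same time_since_epoch value used for the milliseconds, since duration_cast truncates toward zero:

#include <chrono>
#include <ctime>
#include <iostream>

int main() {
    using namespace std::chrono;
    auto now = system_clock::now();
    auto tse = now.time_since_epoch();
    // duration_cast truncates, so secs and msecs stay consistent.
    time_t secs = duration_cast<seconds>(tse).count();
    auto msecs = (duration_cast<milliseconds>(tse) - duration_cast<seconds>(tse)).count();
    // C++17 alternative: time_t secs = system_clock::to_time_t(floor<seconds>(now));
    std::cout << secs << "." << msecs << "\n"; // msecs is not zero-padded here
}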
I used gcc-4.8.1 (configured with ./configure --prefix=/usr/local) to compile the following code on Ubuntu 12.04, but when I ran it, it didn't work: it didn't block waiting for the mutex. try_lock_for returned false immediately, and the program output "hello world" right away.
command: g++ -std=c++11 main.cpp -omain -pthread
When I used gcc-4.6 (installed via apt-get install g++) to compile it, it worked well: the program waited about ten seconds and then output "hello world".
#include <thread>
#include <iostream>
#include <chrono>
#include <mutex>

std::timed_mutex test_mutex;

void f()
{
    test_mutex.try_lock_for(std::chrono::seconds(10));
    std::cout << "hello world\n";
}

int main()
{
    std::lock_guard<std::timed_mutex> l(test_mutex);
    std::thread t(f);
    t.join();
    return 0;
}
If I am not mistaken, that is GCC Bug 54562 - mutex and condition variable timers.
The reason for the bug is also mentioned:
This is because it uses the CLOCK_MONOTONIC clock (if available on the platform) to calculate the absolute time when it needs to return, which is incorrect as the POSIX pthread_mutex_timedlock() call uses the CLOCK_REALTIME clock, and on my platform the monotonic clock is way behind the real time clock.
However, this doesn't explain why you see the correct behavior on gcc-4.6. Perhaps _GLIBCXX_USE_CLOCK_MONOTONIC is not enabled there?
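To check whether that configuration macro is enabled in a given libstdc++ build, a quick sketch like this should do (any standard header pulls in the internal config macros):

#include <iostream>

int main() {
#ifdef _GLIBCXX_USE_CLOCK_MONOTONIC
    std::cout << "_GLIBCXX_USE_CLOCK_MONOTONIC is defined\n";
#else
    std::cout << "_GLIBCXX_USE_CLOCK_MONOTONIC is not defined\n";
#endif
}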
A possible workaround:
const int WAIT_PRECISION_MS = 10; // Change it to whatever you like
int TIME_TO_WAIT_MS = 2000;       // Change it to whatever you like
int ms_waited = 0;
bool got_lock = false;

while (ms_waited < TIME_TO_WAIT_MS) {
    std::this_thread::sleep_for(
        std::chrono::milliseconds(WAIT_PRECISION_MS));
    ms_waited += WAIT_PRECISION_MS;
    got_lock = YOUR_MUTEX.try_lock();
    if (got_lock) {
        break;
    }
}
WAIT_PRECISION_MS tells the while loop how often to "wake up" and try to get the lock. It also determines how accurate your deadline will be: unless the precision is a factor of the deadline, you will wait slightly longer than the deadline (see the sketch after the examples below).
For example:
deadline = 20, precision = 3: 3 is not a factor of 20 - the last iteration of the while loop will be when ms_waited is 18. It means that you are going to wait a total of 21ms and not 20ms.
deadline = 20, precision = 4: 4 is a factor of 20 - the last iteration of the while loop will be when ms_waited is 16. It means that you are going to wait exactly 20ms, as your deadline is defined.
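To make the overshoot rule concrete: the loop's total wait is the smallest multiple of the precision that is greater than or equal to the deadline. A small sketch of that arithmetic (the helper name actual_wait_ms is made up for illustration):

#include <iostream>

// Total time the polling loop above actually waits: the smallest
// multiple of precision_ms that is >= deadline_ms.
int actual_wait_ms(int deadline_ms, int precision_ms) {
    return ((deadline_ms + precision_ms - 1) / precision_ms) * precision_ms;
}

int main() {
    std::cout << actual_wait_ms(20, 3) << "\n"; // prints 21
    std::cout << actual_wait_ms(20, 4) << "\n"; // prints 20
}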