I'm trying to get the time elapsed between two points in time, in milliseconds as an integer or in seconds as a double.
I'm trying to apply a constant acceleration of 4 m/s² to something. This is what I have so far:
int main() {
    double accel = 4, velocity = 0;
    auto start = chrono::system_clock::now();
    sleep(3);
    auto ende = chrono::system_clock::now();
    chrono::duration<double> elapsed_seconds = ende - start;
    velocity += accel * elapsed_seconds; // This is where I don't know what to put instead of "elapsed_seconds"
    cout << "Velocity after " << elapsed_seconds << "s is " << velocity << "m/s" << endl;
    return 0;
}
But as you can see, it doesn't work. I've already found something like
chrono::duration_cast<ms>(elapsed_time);
but I can't get it to work. Do you have any ideas?
It may seem a little strange to say you "count" a double, but elapsed_seconds.count() will return the underlying value.
To get the seconds as a double:
auto seconds = chrono::duration<double>(ende - start);
auto val = seconds.count();
To get milliseconds:
auto ms = chrono::duration_cast<chrono::milliseconds>(ende - start);
auto val = ms.count();
Be careful when using duration_cast, you can lose precision.
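Putting that back into your program, a minimal sketch of the fixed version (keeping your variable names, adding the includes and a using namespace std that your snippet implies, and assuming the POSIX sleep() you were already calling):
#include <chrono>
#include <iostream>
#include <unistd.h> // for sleep()
using namespace std;

int main() {
    double accel = 4, velocity = 0;
    auto start = chrono::system_clock::now();
    sleep(3);
    auto ende = chrono::system_clock::now();
    chrono::duration<double> elapsed_seconds = ende - start;
    // count() yields the elapsed time as a plain double (in seconds),
    // which can be used in ordinary arithmetic and printed directly
    velocity += accel * elapsed_seconds.count();
    cout << "Velocity after " << elapsed_seconds.count() << "s is "
         << velocity << "m/s" << endl;
    return 0;
}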
Related
Is there an easy way to get the time elapsed during std::future<T>::wait_for if no time-out occurred? I want to achieve something like this:
std::future<void> futureRet = std::async(std::launch::async, &Someone::doSomething, this);
futureRet.wait_for(std::chrono::seconds(30));
cout << "doSomething returned after <" << futureRet.getElapsedTime() << "> seconds.";
Is there a kind of "getElapsedTime()" function or do I have to calculate the elapsed time myself?
There is an easy way using <chrono>:
auto start = std::chrono::steady_clock::now();
std::future<void> futureRet = std::async(std::launch::async, &Someone::doSomething, this);
futureRet.wait_for(std::chrono::seconds(30));
auto end = std::chrono::steady_clock::now();
std::chrono::duration<double> elapsed_seconds = end - start;
cout << "doSomething returned after <" << elapsed_seconds.count() << "> seconds.";
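If you also want to know whether the wait actually timed out, you can check the status that wait_for returns; a small sketch using std::future_status from the standard library, with start defined as above:
auto status = futureRet.wait_for(std::chrono::seconds(30));
auto end = std::chrono::steady_clock::now();
std::chrono::duration<double> elapsed_seconds = end - start;
if (status == std::future_status::ready)
    std::cout << "doSomething returned after <" << elapsed_seconds.count() << "> seconds.";
else
    std::cout << "doSomething timed out after <" << elapsed_seconds.count() << "> seconds.";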
I'm trying to figure out how to time the execution of part of my program, but when I use the following code, all I ever get back is 0. I know that can't be right. The code I'm timing recursively implements mergesort of a large array of ints. How do I get the time it takes to execute the program in milliseconds?
//opening input file and storing contents into array
index = inputFileFunction(inputArray);
clock_t time = clock();//start the clock
//this is what needs to be timed
newRecursive.mergeSort(inputArray, 0, index - 1);
//getting the difference
time = clock() - time;
double ms = double(time) / CLOCKS_PER_SEC * 1000;
std::cout << "\nTime took to execute: " << std::setprecision(9) << ms << std::endl;
You can use the chrono library in C++11. Here's how you can modify your code:
#include <chrono>
//...
auto start = std::chrono::steady_clock::now();
// do whatever you're timing
auto end = std::chrono::steady_clock::now();
auto durationMS = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
std::cout << "\nTime took " << durationMS.count() << " ms" << std::endl;
If you're developing on OSX, this blog post from Apple may be useful. It contains code snippets that should give you the timing resolution you need.
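Applied to the mergesort snippet from the question, a sketch might look like this (inputArray, index, inputFileFunction and newRecursive are the names from your own code):
#include <chrono>
#include <iostream>
// ...
// opening input file and storing contents into array
index = inputFileFunction(inputArray);

auto start = std::chrono::steady_clock::now();
// this is what needs to be timed
newRecursive.mergeSort(inputArray, 0, index - 1);
auto end = std::chrono::steady_clock::now();

auto durationMS = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
std::cout << "\nTime took to execute: " << durationMS.count() << " ms" << std::endl;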
I have the following C code:
uint64_t combine(uint32_t const sec, uint32_t const usec){
    return (uint64_t) sec << 32 | usec;
}

uint64_t now3(){
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return combine((uint32_t) tv.tv_sec, (uint32_t) tv.tv_usec);
}
What this does is combine a 32-bit timestamp and 32 bits of "something", probably micro- or nanoseconds, into a single 64-bit integer.
I'm having a really hard time rewriting it with C++11 chrono.
This is what I have so far, but I think it's the wrong way to do it.
auto tse = std::chrono::system_clock::now().time_since_epoch();
auto dur = std::chrono::duration_cast<std::chrono::nanoseconds>( tse ).count();
uint64_t time = static_cast<uint64_t>( dur );
Important note: I only care about the first 32 bits being a "valid" timestamp.
The second 32-bit "part" can be anything, nano- or microseconds; anything is fine as long as two sequential calls of this function give me a different second "part".
I want seconds in one int, milliseconds in another.
Here is code to do that:
#include <chrono>
#include <iostream>

int
main()
{
    auto now = std::chrono::system_clock::now().time_since_epoch();
    std::cout << now.count() << '\n';
    auto s = std::chrono::duration_cast<std::chrono::seconds>(now);
    now -= s;
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(now);
    int si = s.count();
    int msi = ms.count();
    std::cout << si << '\n';
    std::cout << msi << '\n';
}
This just output for me:
1447109182307707
1447109182
307
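If you still want to pack the two parts into a single 64-bit value with the same layout as your original combine() (seconds in the high 32 bits, a sub-second part in the low 32), here is a minimal sketch building on the code above; the function name now_chrono is just for illustration, and it assumes the seconds value fits in 32 bits:
#include <chrono>
#include <cstdint>

uint64_t now_chrono()
{
    auto now = std::chrono::system_clock::now().time_since_epoch();
    auto s  = std::chrono::duration_cast<std::chrono::seconds>(now);
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(now - s);
    // seconds in the high 32 bits, microseconds (0..999999) in the low 32 bits,
    // matching the layout of the original combine()
    return (uint64_t)(uint32_t)s.count() << 32 | (uint32_t)us.count();
}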
The C++11 chrono types use only one number to represent a time since a given Epoch, unlike the timeval (or timespec) structure which uses two numbers to precisely represent a time. So with C++11 chrono you don't need the combine() method.
The content of the timestamp returned by now() depends on the clock you use; there are three clocks, described at http://en.cppreference.com/w/cpp/chrono :
system_clock: wall clock time from the system-wide realtime clock
steady_clock: monotonic clock that will never be adjusted
high_resolution_clock: the clock with the shortest tick period available
If you want successive timestamps to be always different, use the steady clock:
auto t1 = std::chrono::steady_clock::now();
...
auto t2 = std::chrono::steady_clock::now();
assert (t2 > t1);
Edit: answer to comment
#include <iostream>
#include <chrono>
#include <cstdint>

int main()
{
    typedef std::chrono::duration< uint32_t, std::ratio<1> > s32_t;
    typedef std::chrono::duration< uint32_t, std::milli > ms32_t;

    s32_t first_part;
    ms32_t second_part;

    auto t1 = std::chrono::nanoseconds( 2500000000 ); // 2.5 secs
    first_part = std::chrono::duration_cast<s32_t>(t1);
    second_part = std::chrono::duration_cast<ms32_t>(t1 - first_part);
    std::cout << "first part = " << first_part.count() << " s\n"
              << "seconds part = " << second_part.count() << " ms" << std::endl;

    auto t2 = std::chrono::nanoseconds( 2800000000 ); // 2.8 secs
    first_part = std::chrono::duration_cast<s32_t>(t2);
    second_part = std::chrono::duration_cast<ms32_t>(t2 - first_part);
    std::cout << "first part = " << first_part.count() << " s\n"
              << "seconds part = " << second_part.count() << " ms" << std::endl;
}
Output:
first part = 2 s
seconds part = 500 ms
first part = 2 s
seconds part = 800 ms
I want to be able to measure time elapsed (for frame time) with my Clock class. (Problem described below the code.)
Clock.h
#include <chrono>
#include <cstdint>

typedef std::chrono::high_resolution_clock::time_point timePt;

class Clock
{
    timePt currentTime;
    timePt lastTime;
public:
    Clock();
    void update();
    uint64_t deltaTime();
};
Clock.cpp
#include "Clock.h"
using namespace std::chrono;
Clock::Clock()
{
currentTime = high_resolution_clock::now();
lastTime = currentTime;
}
void Clock::update()
{
lastTime = currentTime;
currentTime = high_resolution_clock::now();
}
uint64_t Clock::deltaTime()
{
microseconds delta = duration_cast<microseconds>(currentTime - lastTime);
return delta.count();
}
When I try to use Clock like so
Clock clock;
while (1) {
    clock.update();
    uint64_t dt = clock.deltaTime();
    for (int i = 0; i < 10000; i++)
    {
        // do something to waste time between updates
        int k = i * dt;
    }
    cout << dt << endl; // time elapsed since last update in microseconds
}
For me it prints "0" about 30 times until it finally prints a number, which is always very close to something like "15625" microseconds (15.625 milliseconds).
My question is, why isn't there anything between? I'm wondering whether my implementation is wrong or the precision on high_resolution_clock is acting strange. Any ideas?
EDIT: I am using Code::Blocks with the mingw32 compiler on a Windows 8 computer.
EDIT2:
I tried running the following code that should display high_resolution_clock precision:
template <class Clock>
void display_precision()
{
    typedef std::chrono::duration<double, std::nano> NS;
    NS ns = typename Clock::duration(1);
    std::cout << ns.count() << " ns\n";
}

int main()
{
    display_precision<std::chrono::high_resolution_clock>();
}
For me it prints "1000 ns", so I guess high_resolution_clock has a precision of 1 microsecond, right? Yet in my tests it seems to have a precision of 16 milliseconds.
What system are you using? (I guess it's Windows? Visual Studio is known to have had this problem, now fixed in VS 2015; see the bug report.) On some systems high_resolution_clock is defined as just an alias of system_clock, which can have a really low resolution, like the 16 ms you are seeing.
See for example this question.
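If your toolchain is affected, one workaround (the same one the next answer arrives at with MinGW) is to base your Clock on std::chrono::steady_clock instead. A minimal sketch with only the clock swapped, assuming the rest of your code stays the same:
#include <chrono>
#include <cstdint>

// same interface as before, but built on steady_clock, which is monotonic
typedef std::chrono::steady_clock::time_point timePt;

class Clock
{
    timePt currentTime;
    timePt lastTime;
public:
    Clock()
        : currentTime(std::chrono::steady_clock::now()), lastTime(currentTime)
    {}
    void update()
    {
        lastTime = currentTime;
        currentTime = std::chrono::steady_clock::now();
    }
    uint64_t deltaTime() // microseconds since the previous update()
    {
        using namespace std::chrono;
        return duration_cast<microseconds>(currentTime - lastTime).count();
    }
};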
I have the same problem with msys2 on Windows 10: the delta returned is 0 for most of the subfunctions I tested, then it suddenly returns 15xxx or 24xxx microseconds. I thought there was a problem in my code, since none of the tutorials mention any problem.
The same goes for difftime(finish, start) from time.h, which often returns 0.
I finally replaced all my high_resolution_clock uses with steady_clock, and now I get the proper times:
auto t_start = std::chrono::steady_clock::now();
_cvTracker->track(image); // my function to test
std::cout << "Time taken = " << std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::steady_clock ::now() - t_start).count() << " microseconds" << std::endl;
// returns the proper value (or at least a plausible value)
whereas this returns mostly 0:
auto t_start = std::chrono::high_resolution_clock::now();
_cvTracker->track(image); // my function to test
std::cout << "Time taken = " << std::chrono::duration_cast<std::chrono::microseconds>(std::chrono::high_resolution_clock::now() - t_start).count() << " microseconds" << std::endl;
// returns 0 most of the time
difftime does not seem to work either:
time_t start, finish;
time(&start);
_cvTracker->track(image);
time(&finish);
std::cout << "Time taken= " << difftime(finish, start) << std::endl;
// returns 0 most of the time
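To see what your particular toolchain actually gives you, a small diagnostic sketch using only standard type traits and clock members may help; it reports whether high_resolution_clock is an alias of another clock, whether it is steady, and its nominal tick period:
#include <chrono>
#include <iostream>
#include <type_traits>

int main()
{
    using hrc = std::chrono::high_resolution_clock;
    std::cout << std::boolalpha
              << "alias of system_clock: "
              << std::is_same<hrc, std::chrono::system_clock>::value << '\n'
              << "alias of steady_clock: "
              << std::is_same<hrc, std::chrono::steady_clock>::value << '\n'
              << "is_steady: " << hrc::is_steady << '\n'
              << "tick period (ns): "
              << std::chrono::duration<double, std::nano>(hrc::duration(1)).count()
              << '\n';
}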
I am using boost::posix_time::ptime to measure my simulation run-time and for something else.
Assuming
boost::posix_time::ptime start, stop;
boost::posix_time::time_duration diff;
start = boost::posix_time::microsec_clock::local_time();
sleep(5);
stop = boost::posix_time::microsec_clock::local_time();
diff = stop - start;
now
std::cout << to_simple_string( diff ) << std::endl;
returns the time in the hh:mm:ss.ssssss format, and I would like to have the time in ss.ssssss as well.
To do this, I tried
boost::posix_time::time_duration::sec_type x = diff.total_seconds();
but that gave me the answer in the format ss, and the documentation for seconds() says it returns the normalized number of seconds (0..60).
My question: how can I get my simulation time in seconds in the format ss.ssssss?
EDIT
I was able to do:
std::cout << diff.total_seconds() << "." << diff.fractional_seconds() << std::endl;
Is there something more elegant that could print ss.ssssss?
total_seconds() returns a long value which is not normalized to 0..60s.
So just do this:
#include <iostream>
#include <unistd.h>
#include <boost/format.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

namespace bpt = boost::posix_time;

int main(int, char**)
{
    bpt::ptime start, stop;
    start = bpt::microsec_clock::local_time();
    sleep(62);
    stop = bpt::microsec_clock::local_time();
    bpt::time_duration dur = stop - start;

    long milliseconds = dur.total_milliseconds();
    std::cout << milliseconds << std::endl;   // 62000

    // format output with boost::format
    boost::format output("%.2f");
    output % (milliseconds / 1000.0);
    std::cout << output << std::endl;         // 62.00
}
// whatever time you have (here 1second)
boost::posix_time::ptime pt = boost::posix_time::from_time_t( 1 );
// subtract 0 == cast to duration
boost::posix_time::time_duration dur = pt - boost::posix_time::from_time_t(0);
// result in ms
uint64_t ms = dur.total_milliseconds();
// result in usec
uint64_t us = dur.total_microseconds();
// result in sec
uint64_t s = dur.total_seconds();
std::cout << "s = " << s << ", ms = " << ms << ", us = " << us << std::endl;
s = 1, ms = 1000, us = 1000000
The most straightforward way I see is something like this for the output, with the rest of the time computation along the lines of nabulke's post:
#include <iomanip>
double dseconds = dur.total_milliseconds() / 1000. ;
std::cout << std::setiosflags(std::ios::fixed) << std::setprecision(3);
std::cout << dseconds << std::endl;
You want to express time in terms of a floating point number, so it's probably best to actually use one and apply the standard stream formatting manipulators.
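If you want the full six fractional digits (ss.ssssss), the same idea works with total_microseconds(); a small sketch along the same lines, where dur is the boost::posix_time::time_duration computed above:
#include <iomanip>
#include <iostream>

// dur is the time_duration from the earlier snippet
double dseconds = dur.total_microseconds() / 1000000.0;
std::cout << std::fixed << std::setprecision(6) << dseconds << std::endl;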