Program supporting real-time and non-real-time modes - C++

I am attempting to transition an existing program from its homegrown time classes to the new time facilities in C++11. For real-time processing it is clear how to map the C++11 functionality onto the homegrown time classes. It is less clear how the C++11 chrono time facilities can be used to support a non-real-time mode (e.g., a "run as fast as you can" batch mode, a "run at quarter speed" demonstration mode, etc.), which the homegrown classes support. Is this accomplished by defining special clocks that map wall time to the "playback" speed properly? Any help is appreciated, and an example would be fantastic.
For example, the code I will be transitioning has constructs such as
MessageQueue::poll( Seconds( 1 ) );
or
sleep( Minutes( 2 ) );
where the Seconds or Minutes object is aware of the speed at which the program is being run, to avoid having to use a multiplier or conversion function all over the place like
MessageQueue::poll( PlaybackSpeed * Seconds( 1 ) );
or
MessageQueue::poll( PlaybackSpeed( Seconds( 1 ) ) );
I was hoping it would be possible to obtain the same sort of behavior with std::chrono::duration and std::chrono::time_point by providing a custom clock.

Whether or not making your own clock will be sufficient depends on how you use the time durations you create. For example, if you wanted to run at half speed but somewhere called:
std::this_thread::sleep_for(std::chrono::minutes(2));
The duration would not be adjusted. Instead you'd need to use sleep_until and provide a time point that uses your 'slow' clock. But making a clock that runs slow is pretty easy:
#include <chrono>

template <typename Clock, int slowness>
struct slow_clock {
    using rep = typename Clock::rep;
    using period = typename Clock::period;
    using duration = typename Clock::duration;
    using time_point = std::chrono::time_point<slow_clock>;
    constexpr static bool is_steady = Clock::is_steady;

    // Advance at 1/slowness the rate of the underlying Clock,
    // anchored at the moment start_time was initialized.
    static time_point now() {
        return time_point(start_time.time_since_epoch()
                          + (Clock::now() - start_time) / slowness);
    }

    static const typename Clock::time_point start_time;
};

template <typename Clock, int slowness>
const typename Clock::time_point
    slow_clock<Clock, slowness>::start_time = Clock::now();
The time_points returned from now() will appear to advance at a slower rate relative to the clock you give it. For example, here's a program so you can watch nanoseconds slowly tick by:
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    using Clock = slow_clock<std::chrono::high_resolution_clock, 500000000>;
    for (int i = 0; i < 10; ++i) {
        std::this_thread::sleep_until(Clock::now()
                                      + std::chrono::nanoseconds(1));
        std::cout << "tick\n";
    }
}
All of the functions you implement, like MessageQueue::poll(), will probably need to be implemented in terms of a global clock typedef.
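For example, here is a minimal sketch of such a typedef; the app_clock name and the SLOW_MODE flag are illustrative, not part of any existing API:
#ifdef SLOW_MODE
using app_clock = slow_clock<std::chrono::steady_clock, 4>; // quarter-speed demonstration mode
#else
using app_clock = std::chrono::steady_clock;                // real-time mode
#endif

// MessageQueue::poll() would then compute deadlines via app_clock::now()
// and wait with sleep_until rather than sleep_for.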
Of course none of this has anything to do with how fast the program actually runs, except insofar as you're slowing down the program based on these clocks. Functions that time out will take longer, sleep_until will take longer, but operations that don't wait for some time point in the future will simply appear to be faster.
// appears to run a million times faster than normal according to (finish - start)
auto start = slow_clock<std::chrono::steady_clock, 1000000>::now();
do_slow_operation();
auto finish = slow_clock<std::chrono::steady_clock, 1000000>::now();

For this case:
MessageQueue::poll( Seconds( 1 ) );
You could easily use the standard time classes if you just make your MessageQueue understand what "speed" it's supposed to run at. Just call something like MessageQueue::setPlaybackSpeed(0.5) if you want to run at half-speed, and have the queue use that factor from then on when someone gives it an amount of time.
As for this:
sleep( Minutes( 2 ) );
What was your old code doing? I guess whatever object Minutes() created had an implicit conversion operator to int that returned the number of seconds? That seems too magical to me; better to just make a sleep() method on your MessageQueue or some other class, and then you can use the same solution as above, as in the sketch below.
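A minimal sketch combining both ideas; the member names are illustrative, not from your code:
#include <chrono>
#include <thread>

class MessageQueue {
public:
    void setPlaybackSpeed(double speed) { mPlaybackSpeed = speed; }

    // Scale the caller's "playback" duration into wall time exactly once,
    // at the API boundary: at speed 0.5, a 1 s request waits 2 s.
    template <typename Rep, typename Period>
    void poll(std::chrono::duration<Rep, Period> timeout) {
        const auto wall = std::chrono::duration<double>(timeout) / mPlaybackSpeed;
        // ... wait up to 'wall' for a message (elided) ...
    }

    template <typename Rep, typename Period>
    void sleep(std::chrono::duration<Rep, Period> d) {
        std::this_thread::sleep_for(std::chrono::duration<double>(d) / mPlaybackSpeed);
    }

private:
    double mPlaybackSpeed{1.0};
};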

Related

sleep_until() and steady_clock loop drifting from real time in macOS

Good evening everyone,
I'm trying to learn concurrency using the C++ Concurrency book by Anthony Williams. Having read the first two chapters, I thought about coding a simple metronome that works in its own thread:
#include <iostream>
#include <thread>
#include <chrono>
#include <vector>

class Metro
{
public:
    // beats per minute
    Metro(int bpm_in);
    void start();

private:
    // In milliseconds
    int mPeriod;
    std::vector<std::thread> mThreads;

private:
    void loop();
};

Metro::Metro(int bpm_in) :
    mPeriod(60000 / bpm_in)
{}

void Metro::start()
{
    mThreads.push_back(std::thread(&Metro::loop, this));
    mThreads.back().detach();
}

void Metro::loop()
{
    auto x = std::chrono::steady_clock::now();
    while (true)
    {
        x += std::chrono::milliseconds(mPeriod);
        std::cout << "\a" << std::flush;
        std::this_thread::sleep_until(x);
    }
}
Now, this code seems to work properly, except for the time interval: the period (assuming bpm = 60 => mPeriod = 1000 ms) is more than 1100 ms. I read that sleep_until is not guaranteed to wake the process up exactly at the correct time (cppreference.com), but the lack of precision should not change the average period, only delay a single "tic" inside the time grid; am I understanding that correctly? I assumed that storing the steady_clock::now() time only the first time and then using only increments would be the correct way to avoid adding drift at every cycle. Nevertheless, I also tried changing the update of x in the while loop to
x = std::chrono::steady_clock::now() + std::chrono::milliseconds(mPeriod);
but the period increases even more. I also tried using std::chrono::system_clock and high_resolution_clock, but the period didn't improve. Also, I think the properties I'm interested in for this application are monotonicity and steadiness, which steady_clock has. My question is: is there anything completely wrong in my code? Am I missing something about how to use std::chrono clocks and sleep_until? Or is this kind of method inherently imprecise?
I've started analyzing the period by simply comparing it against some known metronomes (Logic Pro, Ableton Live, some mobile apps) and then recorded the output sound to get a better measurement. Maybe the sound buffer has some delay of its own, but the same problem happens when the program outputs a char. Also, the problem I'm concerned about is the drift, not a single tic being a bit out of time.
I'm compiling from the macOS 10.15 terminal with g++ --std=c++11 -pthread and running on an Intel i7-4770HQ.

Clock timing changes on different computers

I'm working on an implementation of the DMG-01 (a.k.a. the 1989 Game Boy); the code is on my GitHub.
I've already implemented both the APU and the PPU, with (almost) perfect timing on my PC (and the PCs of my friends).
However, when I run the emulator on one friend's PC, it runs twice as fast as it does on mine or on everyone else's.
The code for synchronizing the clock (between the Game Boy and the PC it's running on) is as follows:
Clock.h Header File:
class Clock
{
    // ...
public:
    void SyncClock();

private:
    /* API::LR35902_HZ_CLOCK is 4'194'304 */
    using lr35902_clock_period = std::chrono::duration<int64_t, std::ratio<1, API::LR35902_HZ_CLOCK>>;
    static constexpr lr35902_clock_period one_clock_period{1};
    using clock = std::chrono::high_resolution_clock;

private:
    decltype(clock::now()) _last_tick{std::chrono::time_point_cast<clock::duration>(clock::now() + one_clock_period)};
};
Clock.cpp file
void Clock::SyncClock()
{
    // Sleep until one tick has passed.
    std::this_thread::sleep_until(this->_last_tick);

    // Use time_point_cast to convert (via truncation towards zero) back to
    // the "native" duration of high_resolution_clock.
    this->_last_tick = std::chrono::time_point_cast<clock::duration>(this->_last_tick + one_clock_period);
}
Which gets called in main.cpp like this:
int main()
{
    // ...
    while (true)
    {
        // processor.Clock() returns the number of clocks it took for the processor to run the
        // current instruction. We need to sleep this thread for each clock passed.
        for (std::size_t current_clock = processor.Clock(); current_clock > 0; --current_clock)
        {
            clock.SyncClock();
        }
    }
    // ...
}
Is there a reason why chrono would behave differently on different computers? Time is absolute; I could understand the emulator running slower on some PC, but why faster?
I checked the type of my clock (high_resolution_clock), but I don't see why this would be the case.
Thanks!
I think you may be running into overflow under the hood of <chrono>.
The expression:
clock::now() + one_clock_period
is problematic. clock is high_resolution_clock, and it is common for this to have nanoseconds resolution. one_clock_period has units of 1/4'194'304. The resultant expression will be a time_point with a period of 1/8'192'000'000'000.
Using signed 64-bit integral types, the max() at such a precision is slightly over 13 days. So if clock::now() returns a .time_since_epoch() greater than 13 days, _last_tick is going to overflow, and may sometimes be negative (depending on how far clock::now() is beyond 13 days).
To correct this, try casting one_clock_period to the precision of clock immediately:
static constexpr clock::duration one_clock_period{
    std::chrono::duration_cast<clock::duration>(lr35902_clock_period{1})};
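If you want to see the overflow threshold for yourself, here is a small sketch, not part of the fix itself, that prints the precision of the common type and its maximum representable span, assuming 64-bit reps:
#include <chrono>
#include <cstdint>
#include <iostream>
#include <type_traits>

int main() {
    using lr35902_clock_period = std::chrono::duration<std::int64_t, std::ratio<1, 4'194'304>>;
    using common = std::common_type_t<std::chrono::nanoseconds, lr35902_clock_period>;
    std::cout << "ticks per second: " << common::period::den << '\n'; // 8'192'000'000'000
    const auto max_hours = std::chrono::duration_cast<std::chrono::hours>(common::max());
    std::cout << "max span: about " << max_hours.count() / 24 << " days\n"; // about 13 days
}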

Keeping Track of Timeout Using std::chrono::duration

I have a function that takes in the number of microseconds before a timeout occurs as a long. This timeout is the timeout for the function to complete its work, even though the function may take longer than the timeout due to things like scheduling and other overhead.
The function does the following:
Performs some setup and launches several threads with std::future and std::async.
Keeps track of the threads using std::future::wait_for() in a loop. Basically, I time each call to wait_for() and subtract the time it took from the timeout. This new timeout is then used when checking the next thread. My goal here is to ensure that all the threads I launch complete their work before the timeout (i.e., the timeout parameter passed to the function) expires.
Pseudo-code below:
void myFunctionWithTimeout(/*some other inputs*/ const long timeout_us) {
    auto start_time = std::chrono::steady_clock::now();
    double time_remaining_us = std::chrono::microseconds(timeout_us).count();

    // Launch threads here using std::future and std::async...

    auto end_time = std::chrono::steady_clock::now();
    const auto setup_time_us =
        std::chrono::duration<double, std::micro>(end_time - start_time);
    time_remaining_us -= setup_time_us.count();

    for (auto& worker : workers) {
        auto start_time = std::chrono::steady_clock::now();
        const auto status =
            worker.wait_for(std::chrono::duration<double, std::micro>(time_remaining_us));
        auto end_time = std::chrono::steady_clock::now();

        // Check status and do the appropriate actions.
        // Note that it is OK that this time isn't part of the timeout.

        const auto wait_time_us =
            std::chrono::duration<double, std::micro>(end_time - start_time);
        time_remaining_us -= wait_time_us.count();
    }
}
My questions:
1. Is there an easier way to do what I am proposing? My goal is to store the time remaining as a double so that in the various computations I can account for fractions of a microsecond. Note that I know wait_for() won't wait for exactly the duration I specify, due to scheduling and what-not, but, at the very least, I don't want to add any round-off error in my computations.
2. Related to #1: Do I need to get the count each time, or is there a clean way to update a std::chrono::duration? I'd like to store the time remaining as a duration and then subtract the setup time or wait time from it.
3. What happens when time_remaining_us becomes negative? How does this affect the constructor for std::chrono::duration? What happens when a negative duration is passed to std::future::wait_for()? I haven't found these details in the documentation and am wondering if the behavior here is well defined.
=====================================================================
Edited to add:
Per Howard's answer, I looked into using wait_until(), but I don't think it will work for me due to the following issue I found in my research (excerpt from: https://en.cppreference.com/w/cpp/thread/future/wait_until):
The clock tied to timeout_time is used, which is not required to be a monotonic clock. There are no guarantees regarding the behavior of this function if the clock is adjusted discontinuously, but the existing implementations convert timeout_time from Clock to std::chrono::system_clock and delegate to POSIX pthread_cond_timedwait so that the wait honors adjustments to the system clock, but not to the user-provided Clock. In any case, the function also may wait for longer than until after timeout_time has been reached due to scheduling or resource contention delays.
The way I read that is that even if I use steady_clock for my ending time, it will be converted to system_clock, which means that if the clock is adjusted (say, rolled back an hour) I could end up with a timeout much, much longer than I expected.
That said, I did take the concept of computing the ending time and it simplified my code. Here's some pseudo-code with where I am at currently:
void myFunctionWithTimeout(/*some other inputs*/ const long timeout_us) {
    const auto start_time = std::chrono::steady_clock::now();
    const auto end_time =
        start_time + std::chrono::duration<double, std::micro>(timeout_us);

    // Launch threads here using std::future and std::async...

    for (auto& worker : workers) {
        const auto current_timeout_us =
            std::chrono::duration<double, std::micro>(end_time - std::chrono::steady_clock::now());
        if (current_timeout_us.count() <= 0) { // Is this needed?
            // Handle timeout...
        }
        const auto status = worker.wait_for(current_timeout_us);
        // Check status and do the appropriate actions...
    }
}
I'm still unsure whether I can pass a negative duration to wait_for(), so I manually check first. If anyone knows whether wait_for() can accept a negative duration, please let me know. Also, if my understanding of the documentation for wait_until() is incorrect, please let me know as well.
Just use wait_until instead of wait_for. Compute the time_point you want to wait until just once, and keep using it. If that time_point starts falling into the past, wait_until will return immediately.
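A minimal sketch of that suggestion, reusing the illustrative names from the pseudo-code above:
void myFunctionWithTimeout(/*some other inputs*/ const long timeout_us) {
    const auto deadline = std::chrono::steady_clock::now()
                        + std::chrono::microseconds(timeout_us);

    // Launch threads here using std::future and std::async...

    for (auto& worker : workers) {
        // Every wait shares the same deadline; once the deadline is in the
        // past, wait_until returns immediately with the current status.
        const auto status = worker.wait_until(deadline);
        // Check status and do the appropriate actions...
    }
}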
Huge thanks to Howard for putting me on the right track. In my testing, wait_for() does indeed return immediately when passed a negative duration.
Here is the code I ended up with:
void myFunctionWithTimeout(/*some other inputs*/ const long timeout_us) {
    const auto start_time = std::chrono::steady_clock::now();
    const auto end_time =
        start_time + std::chrono::duration<double, std::micro>(timeout_us);

    // Launch threads here using std::future and std::async...

    for (auto& worker : workers) {
        const auto current_timeout_us =
            std::chrono::duration<double, std::micro>(end_time - std::chrono::steady_clock::now());
        const auto status = worker.wait_for(current_timeout_us);
        // Check status and do the appropriate actions...
    }
}
Note that wait_until() is certainly a viable alternative, but I am just a bit too paranoid regarding system_clock changes and therefore am using a monotonic clock.

calculating time elapsed in C++

I need to calculate the elapsed time of my function. Right now I am using std::clock, and from what I understand it measures CPU time, which can differ from real time.
std::clock_t start;
double duration;
start = std::clock();
someFunctionToMeasure();
duration = (std::clock() - start) / (double)CLOCKS_PER_SEC;
So there are two things I'd like to know:
How does std::clock actually work? Is it just measuring CPU time while it's computing that function?
Is there a better way to measure the elapsed time of my function?
Using <chrono>, the code you need could look like this:
using clock = std::chrono::system_clock;
using sec = std::chrono::duration<double>;
// for milliseconds, use using ms = std::chrono::duration<double, std::milli>;
const auto before = clock::now();
someFunctionToMeasure();
const sec duration = clock::now() - before;
std::cout << "It took " << duration.count() << "s" << std::endl;
NB: Thanks to Howard for his helpful comments for the above.
If you need this snippet multiple times, and start/end are approximately the entry and exit points of the scope in which you invoke someFunctionToMeasure(), it might make sense to wrap it in a utility class that makes the two calls to now() in its constructor and destructor, as sketched below.
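Here is a minimal sketch of such a wrapper; the ScopeTimer name is illustrative:
#include <chrono>
#include <iostream>

struct ScopeTimer {
    using clock = std::chrono::steady_clock;
    clock::time_point start{clock::now()};

    ~ScopeTimer() {
        const std::chrono::duration<double> elapsed = clock::now() - start;
        std::cout << "scope took " << elapsed.count() << "s\n";
    }
};

// Usage:
// {
//     ScopeTimer t;
//     someFunctionToMeasure();
// } // elapsed time is printed when t goes out of scope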
Just want to throw in the modern approach to timing any callable, using <chrono> and the handy std::invoke from C++17. Works on member functions, lambdas, free functions, or any other callable.
#include <chrono>
#include <functional>
#include <utility>

// Just for convenience
using Seconds = std::chrono::duration<double>;

// Measure how much time the given callable takes to execute.
// Pass the callable first, then all relevant arguments, including the object
// as the first argument if it's a member function.
template <typename Function, typename... Args>
Seconds measure(Function&& toTime, Args&&... a)
{
    const auto start{std::chrono::steady_clock::now()};                    // Start timer
    std::invoke(std::forward<Function>(toTime), std::forward<Args>(a)...); // Forward and call
    const auto stop{std::chrono::steady_clock::now()};                     // Stop timer
    return (stop - start);
}
This will return the time the callable took to execute. If you also need the return value, you could make a std::pair of the Seconds and the return value, since std::invoke will correctly return what the callable returns; see the sketch after the usage examples below.
Then you can use it like this:
auto t1 = measure(normalFunction);
auto t2 = measure(&X::memberFunction, obj, 4);
auto t3 = measure(lambda, 2, 3);
On a free function, member function and lambda respectively.
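If you also want the callable's result, here is a sketch of the std::pair variant mentioned above; measureWithResult is an illustrative name, and it assumes a non-void return type:
template <typename Function, typename... Args>
auto measureWithResult(Function&& toTime, Args&&... a)
{
    const auto start = std::chrono::steady_clock::now();
    auto result = std::invoke(std::forward<Function>(toTime), std::forward<Args>(a)...);
    const Seconds elapsed = std::chrono::steady_clock::now() - start;
    return std::pair{elapsed, std::move(result)}; // C++17 class template argument deduction
}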
Source:
http://en.cppreference.com/w/cpp/chrono/c/clock
std::clock only keeps track of the time that has passed while the CPU is executing your process. So your sample code is keeping track of how much CPU time it took for your function to execute. This is notably different from keeping track of real time, because your process could be preempted, and the CPU could execute other code for some time while your function is waiting to finish.
To answer your second question, it may help to clarify what you mean by "better". It sounds like you wanted to track the amount of time that your function executed for, and from my understanding this code accomplishes that task. If you wanted to track the amount of real time elapsed, the other answers give examples of that.

c++ get elapsed time platform independent

For a game I want to measure the time that has passed since the last frame.
I used glutGet(GLUT_ELAPSED_TIME) to do that. But after including GLEW the compiler can't find the glutGet function anymore (strange). So I need an alternative.
Most sites I found so far suggest using clock from <ctime>, but that function only measures the CPU time of the program, not the real time! The time function in <ctime> is only accurate to seconds; I need at least millisecond accuracy.
I can use C++11.
I don't think there was a high-resolution clock built into C++ before C++11. If you are unable to use C++11, you have to either fix your error with GLUT and GLEW or use platform-dependent timer functions.
#include <chrono>

class Timer {
public:
    Timer() {
        reset();
    }
    void reset() {
        m_timestamp = std::chrono::high_resolution_clock::now();
    }
    float diff() {
        std::chrono::duration<float> fs = std::chrono::high_resolution_clock::now() - m_timestamp;
        return fs.count();
    }
private:
    std::chrono::high_resolution_clock::time_point m_timestamp;
};
Boost provides std::chrono-like clocks: boost::chrono.
You should consider using std::chrono::steady_clock (or the Boost equivalent) as opposed to std::chrono::high_resolution_clock, or at least ensure that std::chrono::high_resolution_clock::is_steady is true (it is a static data member, not a function), if you want to use the clock to calculate durations: the time returned by a non-steady clock might even decrease as physical time moves forward.
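For example, you can check steadiness at compile time and substitute the clock into the Timer above; this is a minimal sketch, and the GameClock alias is purely illustrative:
#include <chrono>

// steady_clock is steady by definition, so this assertion always holds;
// the same check on high_resolution_clock may fail on some platforms.
static_assert(std::chrono::steady_clock::is_steady, "steady_clock is always steady");

using GameClock = std::chrono::steady_clock; // e.g., swap into the Timer class above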