Sleep a thread for milliseconds instead of seconds in C++

I have a boost condition variable which I am using to sleep a thread.
boost::condition_variable m_cond;
Currently I am using it like this, passing the lock and the number of seconds it has to sleep. Here it will sleep for 10 seconds:
if(!m_cond.timed_wait(lock, boost::posix_time::seconds(10))){
}
Is there any way to do the same thing in milliseconds? Instead of passing seconds, can I pass the number of milliseconds it has to wait? For example, if I need to wait for 2 seconds, I would like to pass 2000 ms as the value. This doesn't work:
long ms = 2000;
if(!m_cond.timed_wait(lock, ms)){
}
Is there any other way of doing it?

if(!m_cond.timed_wait(lock, boost::posix_time::milliseconds(2000)))
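That is, pass boost::posix_time::milliseconds instead of boost::posix_time::seconds. A minimal sketch of how this might look in context (the includes, mutex and function names are assumptions for illustration, not from the original answer):

#include <boost/thread/condition_variable.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

boost::mutex m_mutex;
boost::condition_variable m_cond;

void wait_example()
{
    boost::unique_lock<boost::mutex> lock(m_mutex);
    // Wait for up to 2000 ms; timed_wait returns false if the timeout expires.
    if(!m_cond.timed_wait(lock, boost::posix_time::milliseconds(2000)))
    {
        // timed out
    }
    else
    {
        // notified before the timeout
    }
}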


C++ How to make precise frame rate limit?

I'm trying to create a game in C++ and I want to set an fps limit, but I always get more or fewer fps than I want. When I look at games that have an fps limit, the framerate is always precise. I tried Sleep() and std::this_thread::sleep_for()/sleep_until(). For example, Sleep(0.01 - deltaTime) to get 100 fps, but I ended up with roughly 90 fps.
How do these games handle fps so precisely when sleeping isn't precise?
I know I can use an infinite loop that just checks whether enough time has passed, but that uses the CPU at full power. I want this limit to reduce CPU usage, without VSync.
Yes, sleep is usually inaccurate. That is why you sleep for less than the actual time it takes to finish the frame. For example, if you need 5 more milliseconds to finish the frame, sleep for 4 milliseconds. After the sleep, simply spin-wait for the rest of the frame. Something like:
float TimeRemaining = NextFrameTime - GetCurrentTime();
Sleep(ConvertToMilliseconds(TimeRemaining) - 1);
while (GetCurrentTime() < NextFrameTime) {};
Edit: as stated in another answer, timeBeginPeriod() should be called to increase the accuracy of Sleep(). Also, from what I've read, Windows will automatically call timeEndPeriod() when your process exits if you don't before then.
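A minimal sketch of this sleep-then-spin idea with std::chrono (the function name and the 1 ms safety margin are illustrative, not from the original answer):

#include <chrono>
#include <thread>

void wait_for_next_frame(std::chrono::steady_clock::time_point next_frame_time)
{
    using namespace std::chrono;
    // Sleep for slightly less than the remaining time to absorb scheduler jitter.
    auto remaining = next_frame_time - steady_clock::now();
    if (remaining > milliseconds(1))
        std::this_thread::sleep_for(remaining - milliseconds(1));
    // Spin for the last stretch to hit the target time precisely.
    while (steady_clock::now() < next_frame_time) { /* busy-wait */ }
}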
You could record the time point when you start, add a fixed duration to it and sleep until the calculated time point occurs at the end (or beginning) of every loop. Example:
#include <chrono>
#include <cstdint>
#include <iostream>
#include <ratio>
#include <thread>

template<std::intmax_t FPS>
class frame_rater {
public:
    frame_rater() :                  // initialize the object keeping the pace
        time_between_frames{1},      // std::ratio<1, FPS> seconds
        tp{std::chrono::steady_clock::now()}
    {}

    void sleep() {
        // add to time point
        tp += time_between_frames;

        // and sleep until that time point
        std::this_thread::sleep_until(tp);
    }

private:
    // a duration with a length of 1/FPS seconds
    std::chrono::duration<double, std::ratio<1, FPS>> time_between_frames;

    // the time point we'll add to in every loop
    std::chrono::time_point<std::chrono::steady_clock, decltype(time_between_frames)> tp;
};

// this should print ~10 times per second pretty accurately
int main() {
    frame_rater<10> fr;   // 10 FPS
    while(true) {
        std::cout << "Hello world\n";
        fr.sleep();       // let it sleep any time remaining
    }
}
The accepted answer sounds really bad. It would not be accurate and it would burn the CPU!
Sleep() is not accurate unless you tell it to be: by default it is only accurate to about 15 ms, which means that if you ask it to sleep for 1 ms it may actually sleep for up to 15 ms.
You can fix this with the Win32 API calls timeBeginPeriod and timeEndPeriod.
Check MSDN for more details -> https://learn.microsoft.com/en-us/windows/win32/api/timeapi/nf-timeapi-timebeginperiod
(I would have commented on the accepted answer, but I don't have 50 reputation yet.)
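A minimal sketch of raising the timer resolution around a Sleep() call (Windows-only; the 1 ms period and the helper name are illustrative, and winmm.lib must be linked):

#include <windows.h>
#pragma comment(lib, "winmm.lib")   // MSVC: link the multimedia timer library

void precise_sleep_ms(DWORD ms)
{
    timeBeginPeriod(1);   // request 1 ms scheduler granularity
    Sleep(ms);            // now typically accurate to ~1 ms instead of ~15 ms
    timeEndPeriod(1);     // restore the previous resolution
}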
Be very careful when implementing any wait that is based on scheduler sleep.
Most OS schedulers have higher latency turn-around for a wait with no well-defined interval or signal to bring the thread back into the ready-to-run state.
Sleeping isn't inaccurate per se; you're just approaching the problem the wrong way. If you have access to something like DXGI's waitable swapchain, you synchronize to the DWM's present queue and get really reliable low-latency timing.
You don't need to busy-wait to get accurate timing; a waitable timer will give you a sync object to reschedule your thread.
Whatever you do, do not use the currently accepted answer in production code. There is an edge case here you want to avoid, where Sleep(0) does not yield CPU time to higher-priority threads. I've seen many game devs try Sleep(0), and it will cause you major problems.
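A minimal sketch of the waitable-timer approach (Windows-only; error handling is omitted and the helper name is made up):

#include <windows.h>

// Block the calling thread for roughly `ms` milliseconds using a waitable timer
// instead of Sleep(); the kernel reschedules the thread when the timer fires.
void wait_with_timer(DWORD ms)
{
    HANDLE timer = CreateWaitableTimer(NULL, TRUE, NULL);  // manual-reset timer
    LARGE_INTEGER due;
    due.QuadPart = -(LONGLONG)ms * 10000;  // negative = relative time, in 100 ns units
    SetWaitableTimer(timer, &due, 0, NULL, NULL, FALSE);
    WaitForSingleObject(timer, INFINITE);
    CloseHandle(timer);
}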
Use a timer.
Some OSes provide special functions. For example, on Windows you can use SetTimer and handle its WM_TIMER messages.
Then calculate the frequency of the timer. 100 fps means that the timer must fire an event each 0.01 seconds.
At the event handler for this timer-event you can do your rendering.
If the rendering is slower than the desired frequency, use a synchronization flag (for example an OpenGL sync object) and discard the timer event if the previous rendering has not completed.
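A minimal sketch of driving rendering from WM_TIMER at roughly 100 fps, using a thread timer with a callback and a plain message loop (the identifiers are illustrative, and USER timers have coarse resolution, so the interval is approximate):

#include <windows.h>

VOID CALLBACK render_tick(HWND, UINT, UINT_PTR, DWORD)
{
    // do your rendering here; skip the frame if the previous one hasn't finished
}

int main()
{
    SetTimer(NULL, 0, 10, render_tick);   // fire roughly every 10 ms (~100 fps)
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);            // dispatches WM_TIMER to render_tick
    }
    return 0;
}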
You can set a const fps variable to your desired frame rate, then update your game only when the elapsed time since the last update is equal to or greater than 1.0 / fps.
This will probably work.
Example:
const /*or constexpr*/ int fps{60};

// then in the update loop:
while(running)
{
    // update the game timer
    timer->update();

    // update and render only when enough time has elapsed
    if(timer->ElapsedTime() >= 1.0 / fps)   // note 1.0, not 1: integer division would yield 0
    {
        // do your updates and THEN render
    }
}

Restart the timer in select() of socket programming

I want to use select() to receive updates from another server and also send out periodic messages. Consider the following setup:
while(1){
    select(... timeout = 5 seconds);
    // some other code
}
If I receive an update at t = 2 seconds, select() will return and the corresponding code will be executed. When the next loop iteration begins, the timeout is set to 5 seconds again. However, it should be 5 - 2 = 3 seconds. Is there a way to update the timer with the right value?
I thought about manually starting a timer right before select(), but that timer might not be synchronized with the one used inside select(), and that could cause other problems.
According to the select man page:
On Linux, select() modifies timeout to reflect the amount of time not slept; most other implementations do not do this. (POSIX.1-2001 permits either behaviour.)
So you can simply reuse the timeout variable, resetting its value only when you actually time out.
As the man page warns, relying on this behavior is a portability problem, so if you do rely on it, document it so that the right thing is done when the code is ported.
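A minimal sketch of relying on that Linux-specific behaviour ('sock' stands in for your descriptor; on other systems you would recompute the timeout yourself, as the next answer describes):

struct timeval tv;
tv.tv_sec = 5;
tv.tv_usec = 0;
while (1) {
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(sock, &rfds);
    int n = select(sock + 1, &rfds, NULL, NULL, &tv);
    if (n == 0) {          // timed out: send the periodic message...
        tv.tv_sec = 5;     // ...and only now reset the full 5-second timeout
        tv.tv_usec = 0;
    } else if (n > 0) {
        // handle the update; on Linux, tv already holds the time left, so leave it alone
    }
}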
Just store time() in a variable before you call select(), take another time() when select() returns, and in the next while(1) iteration use 5 - difference_between_times as the timeout value instead of 5.
Perhaps you'd want to use new_timeout = 5 - difference_between_times % 5, so that if the work you do after select() returns takes longer than 5 seconds, you still keep the timeouts on a 5-second schedule.
You should probably use a more granular time unit than seconds. Also think about whether the modulo behaviour above is really what you want; maybe when difference_between_times > 5 you should just wait the full 5 seconds. Do as you wish, but you get the idea.
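A sketch of that idea with sub-second granularity, using gettimeofday() and an absolute deadline (the function and variable names are illustrative; timersub() is a BSD/glibc convenience macro):

#include <sys/select.h>
#include <sys/time.h>

void periodic_loop(int sock)                            // 'sock' is the descriptor being watched
{
    struct timeval deadline;
    gettimeofday(&deadline, NULL);
    deadline.tv_sec += 5;                               // first periodic message due in 5 s
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(sock, &rfds);

        struct timeval now, tv;
        gettimeofday(&now, NULL);
        timersub(&deadline, &now, &tv);                 // time left until the deadline
        if (tv.tv_sec < 0) { tv.tv_sec = 0; tv.tv_usec = 0; }

        int n = select(sock + 1, &rfds, NULL, NULL, &tv);
        if (n == 0) {                                   // deadline hit: send the message
            gettimeofday(&deadline, NULL);
            deadline.tv_sec += 5;                       // and schedule the next one
        }
        // if n > 0, handle the update; the next timeout is recomputed automatically
    }
}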
When your app gets a little more complicated, you may have multiple timers with different timeout intervals. We do. Here is how we handle it.
Each timer has a timer object with a time_t of when the timer expires. We store all the timers in a heap data structure, so the soonest timer to expire is at the root of the heap. Before doing a select() we fetch the root of the heap, and subtract the current time from the timer's expiration time and use that delta as the timeout to the select() call.
Timer * t = heap->Root();                        // soonest-expiring timer
time_t now = time(0);
timeval tv;
tv.tv_sec = (t->when > now) ? t->when - now : 0; // clamp to zero if it is already due
tv.tv_usec = 0;
select( ... & tv );

C++ boost thread delay

I want to wait 1.5 seconds in a boost thread. Using boost::xtime I can wait an integer number of seconds:
// Block on the queue / wait for data for up to two seconds.
boost::xtime_get(&xt, boost::TIME_UTC);
xt.sec++;
xt.sec++;
....
_condition.timed_wait(_mutex, xt)
How can I wait 1.5 seconds instead?
Would the following not work? Using the seconds and nanoseconds fields, increase nsec by 0.5 billion nanoseconds and sec by one second, which adds up to 1.5 seconds:
xt.sec++;
xt.nsec += 500000000;
_condition.timed_wait(_mutex, xt);
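One caveat: if xt.nsec is already at half a second or more, the addition pushes it past one billion, so the fields may need normalizing. Depending on your Boost version, you may also be able to pass a relative posix_time duration instead of an xtime. A sketch of both (untested against your particular Boost version):

// Normalize so that nsec stays below one billion:
xt.sec += 1;
xt.nsec += 500000000;
if (xt.nsec >= 1000000000) {
    xt.sec += 1;
    xt.nsec -= 1000000000;
}
_condition.timed_wait(_mutex, xt);

// Or, with a newer Boost.Thread, pass a relative duration instead of an xtime:
_condition.timed_wait(_mutex, boost::posix_time::milliseconds(1500));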

Scheduler using Timer Queues

I am working on an application where I need to schedule tasks based on times set by the user. The user may add, modify, or delete schedules. To implement this I am considering Timer Queues. Initially I thought of using waitable timers, which suit my purpose very well, but I can't put my thread into an alertable sleep to complete the APC.
Now, with the Timer Queue, I am not sure how to set the timer to signal at a given system time. I tried the following code, but the callback function is never called:
SYSTEMTIME st, lt;
GetSystemTime(&st);

FILETIME ft;
SystemTimeToFileTime(&st, &ft);

// Copy the time into a quadword (FILETIME is in 100-nanosecond units).
ULONGLONG qwResult = (((ULONGLONG) ft.dwHighDateTime) << 32) + ft.dwLowDateTime;

// Add 20 seconds (_SECOND is the number of 100-ns intervals in one second).
qwResult += 20 * _SECOND;

HANDLE hTimerQueue = CreateTimerQueue();
HANDLE hTimer;

// Set a timer to call the timer routine in 20 seconds.
if (!CreateTimerQueueTimer( &hTimer, hTimerQueue ,(WAITORTIMERCALLBACK)TimerAPCProc, NULL , qwResult, 0, 0))
{
    printf("CreateTimerQueueTimer failed (%d)\n", GetLastError());
    return 3;
}
The callback routine will be called after qwResult milliseconds, while FILETIME gives you an absolute time in 100-nanosecond units. You do the math. GetSystemTimeAsFileTime will give you a FILETIME directly, if that is the path you want to take.
Personally, I would keep a list of structures holding the times at which the routines should be called and pointers to those routines, iterate through the list every once in a while, and when a task's execution time is due, call its function (or create a thread). That way your users can always review the scheduled tasks and change them.
It needs to be backed by WaitForSingleObject, or by putting the thread into an alertable wait state (using SleepEx, for example).
You're passing in an absolute time, but the docs say you need to pass the number of milliseconds from the current time.
If you want the timer to go off in 20 seconds, pass 20000 instead of qwResult.
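A sketch of the corrected call with a relative due time in milliseconds (the callback name and surrounding scaffolding are illustrative; note that a timer-queue callback uses the WAITORTIMERCALLBACK signature, not an APC signature):

#include <windows.h>
#include <stdio.h>

VOID CALLBACK TimerRoutine(PVOID lpParam, BOOLEAN TimerOrWaitFired)
{
    printf("Timer fired\n");
}

int run_timer()
{
    HANDLE hTimerQueue = CreateTimerQueue();
    HANDLE hTimer;
    // Due time and period are in milliseconds relative to now: fire once, 20 s from now.
    if (!CreateTimerQueueTimer(&hTimer, hTimerQueue, TimerRoutine, NULL, 20000, 0, 0))
    {
        printf("CreateTimerQueueTimer failed (%d)\n", GetLastError());
        return 3;
    }
    // The callback runs on a thread-pool thread, so keep the process alive long enough.
    Sleep(25000);
    DeleteTimerQueue(hTimerQueue);
    return 0;
}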

sleep for many days with resolution of microseconds

Is there a way to put a thread to sleep for many days with a resolution of microseconds? usleep can only sleep for up to 1,000,000 microseconds, and sleep works in whole-second steps. Is there a way to combine sleep and usleep to achieve this?
While it is not yet time to wake up:
check the current time, then
sleep for a bit less than the remaining time until you want to wake up.
This way you periodically re-check the clock, with increasingly short and precise sleeps as you approach the wake-up time (see the sketch below).
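A minimal sketch of that approach with POSIX clock_gettime(), sleep() and usleep() (the thresholds are illustrative):

#include <time.h>
#include <unistd.h>

// Sleep until an absolute CLOCK_MONOTONIC deadline, re-checking the clock and
// sleeping progressively shorter intervals as the deadline approaches.
void sleep_until_deadline(const struct timespec &deadline)
{
    for (;;) {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        double remaining = (deadline.tv_sec - now.tv_sec)
                         + (deadline.tv_nsec - now.tv_nsec) / 1e9;
        if (remaining <= 0)
            break;
        if (remaining > 2.0)
            sleep((unsigned int)(remaining - 1.0));     // coarse, whole seconds
        else
            usleep((useconds_t)(remaining * 1e6 / 2));  // finer as we get close
    }
}

On Linux, clock_nanosleep() with the TIMER_ABSTIME flag can also perform this kind of absolute-deadline sleep in a single call.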
Just divide the large sleep into several smaller sleep periods:
int64_t time_to_sleep_us = ...;        // total time to sleep, in microseconds
const int64_t period_us = 500000;      // sleep in chunks of at most half a second
while( time_to_sleep_us > 0 )
{
    int64_t chunk = (time_to_sleep_us < period_us) ? time_to_sleep_us : period_us;
    usleep( chunk );
    time_to_sleep_us -= chunk;
}