C++ Timer Problem

I've written a timer class. After starting the timer, I would like to know whether 20 seconds have passed, and if so, call a function or run a block of code. The class doesn't work, but I don't know why.
EDIT: By "it doesn't work" I mean that isTimeout(seconds) always returns true. I just want to check whether a few seconds have passed, and act based on that.
#include <ctime>

class timer {
private:
    unsigned long begTime;
public:
    void start() {
        begTime = clock();
    }
    unsigned long elapsedTime() {
        return ((unsigned long) clock() - begTime) / CLOCKS_PER_SEC;
    }
    bool isTimeout(unsigned long seconds) {
        return seconds >= elapsedTime();
    }
};

clock() measures CPU time, not wall time. Try using time() along with difftime() instead.
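A minimal sketch of that suggestion (the class name and the double-based interface are my own choices, not from the question):

```cpp
#include <ctime>

// Wall-clock timer built on time()/difftime(), as suggested above.
// Resolution is whole seconds, which is enough for a 20-second timeout.
class WallTimer {
    time_t begTime;
public:
    void start() { begTime = time(nullptr); }
    double elapsedSeconds() const { return difftime(time(nullptr), begTime); }
    bool isTimeout(double seconds) const {
        // true once the deadline has passed: elapsed >= limit
        return elapsedSeconds() >= seconds;
    }
};
```

Note the comparison direction: the timeout fires when the elapsed time reaches the limit, which is the opposite of what the question's class does.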

Since you're on Windows, where clock() does in fact measure wall time, you can stick with it.
The error is here:
return seconds >= elapsedTime();
it should be:
return seconds <= elapsedTime();
What you have right now returns true while fewer than 20 seconds have elapsed. Flipping the comparison fixes it.
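For reference, the class with the comparison flipped; this sketch swaps clock() for std::chrono::steady_clock (my substitution, not part of the original answer) so it measures wall time on every platform, not just Windows:

```cpp
#include <chrono>

// Same interface as the question's class, with the comparison fixed.
class Timer {
    std::chrono::steady_clock::time_point begTime;
public:
    void start() { begTime = std::chrono::steady_clock::now(); }
    unsigned long elapsedTime() const {
        using namespace std::chrono;
        return duration_cast<seconds>(steady_clock::now() - begTime).count();
    }
    bool isTimeout(unsigned long seconds) const {
        return elapsedTime() >= seconds;  // flipped: true once `seconds` have passed
    }
};
```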

Try using time() and difftime() as suggested above. I've had this problem before too :)

Related

How to call a function at a certain frequency in C++

I am a beginner to C++, trying to improve my skills by working on a project.
I am trying to have my program call a certain function 100 times a second for 30 seconds.
I thought that this would be a common, well-documented problem, but so far I have not managed to find a solution.
Could anyone provide me with an implementation example or point me towards one?
Notes: my program is intended to be single-threaded and to use only the standard library.
There are two reasons you couldn't find a trivial answer:
This statement "I am trying to have my program call a certain function 100 times a second for 30 seconds" is not well-defined.
Timing and scheduling is a genuinely complicated problem.
In a practical sense, if you just want something to run approximately 100 times a second for 30 seconds, and the function doesn't take long to run, you can write something like:
for (int i = 0; i < 3000; i++) {
    do_something();
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
This is an approximate solution.
Problems with this solution:
If do_something() takes any appreciable fraction of the 10 ms period, your timing will eventually be way off, because the error accumulates across iterations.
Most operating systems do not have very accurate sleep timing. There is no guarantee that asking to sleep for 10 milliseconds will wait for exactly 10 milliseconds. It will usually be approximately accurate.
You can use std::this_thread::sleep_until and calculate the end time of the sleep according to desired frequency:
#include <chrono>
#include <iostream>
#include <thread>
#include <type_traits>

void f()
{
    static int counter = 0;
    std::cout << counter << '\n';
    ++counter;
}

int main()
{
    using namespace std::chrono_literals;
    using Clock = std::chrono::steady_clock;
    // conversion to ms needed to prevent truncation in integral division
    constexpr auto period = std::chrono::duration_cast<std::chrono::milliseconds>(1s) / 100;
    constexpr auto repetitions = 30s / period;
    auto const start = Clock::now();
    for (std::remove_const_t<decltype(repetitions)> i = 1; i <= repetitions; ++i)
    {
        f();
        std::this_thread::sleep_until(start + period * i);
    }
}
Note that this code will fall behind schedule if f() takes more than 10 ms to complete.
Note: The exact duration of the sleep_until calls may be off, but the fact that the sleep duration is calculated based on the current time by sleep_until should result in any errors being kept to a minimum.
You can't time it perfectly, but you can try like this:
#include <chrono>
#include <thread>

using std::chrono::steady_clock;
using namespace std::this_thread;

auto running{ true };
auto frameTime{ std::chrono::duration_cast<steady_clock::duration>(
    std::chrono::duration<float>{ 1.0F / 100.0F }) };
auto delta{ steady_clock::duration::zero() };
while (running) {
    auto t0{ steady_clock::now() };
    while (delta >= frameTime) {
        call_your_function(frameTime);
        delta -= frameTime;
    }
    if (const auto dt{ delta + steady_clock::now() - t0 }; dt < frameTime) {
        sleep_for(frameTime - dt);
        delta += steady_clock::now() - t0;
    }
    else {
        delta += dt;
    }
}

Can I expect anything from the Windows API's Sleep function?

I was trying to make a simple graphics program for Windows (my machine has Windows 10) in C++, and I'm struggling to lock the frame rate.
Here's a simple illustration of my problem:
inline LARGE_INTEGER
get_wall_clock()
{
    LARGE_INTEGER result;
    QueryPerformanceCounter(&result);
    return result;
}

static LARGE_INTEGER frequency;

inline float
get_seconds_elapsed(LARGE_INTEGER begin, LARGE_INTEGER end)
{
    return (float)(end.QuadPart - begin.QuadPart) / (float)frequency.QuadPart;
}

int main()
{
    bool can_sleep = (timeBeginPeriod(1) == TIMERR_NOERROR);
    QueryPerformanceFrequency(&frequency);
    int target_HZ = 60;
    float target_seconds_per_frame = 1.0f / (float)target_HZ;
    LARGE_INTEGER last_counter = get_wall_clock();
    while (true)
    {
        do_something();
        LARGE_INTEGER end_frame = get_wall_clock();
        float seconds_of_work = get_seconds_elapsed(last_counter, end_frame);
        float seconds_of_frame = seconds_of_work;
        if (seconds_of_work < target_seconds_per_frame)
        {
            if (can_sleep)
            {
                int ms_to_sleep = (int)(1000.0f * (target_seconds_per_frame - seconds_of_work));
                if (ms_to_sleep)
                {
                    Sleep(ms_to_sleep);
                }
            }
            float frame_duration = get_seconds_elapsed(last_counter, get_wall_clock());
            Assert(frame_duration < target_seconds_per_frame);
            while (seconds_of_frame < target_seconds_per_frame)
                seconds_of_frame = get_seconds_elapsed(last_counter, get_wall_clock());
        }
        last_counter = get_wall_clock();
    }
    timeEndPeriod(1);
}
My problem is that the assertion on:
Assert(frame_duration < target_seconds_per_frame);
is almost always firing.
I tried making some adjustments to the number of milliseconds to sleep, even making it sleep for only 90% of the milliseconds needed, but it didn't seem to help.
But the really weird thing is the following... when I tried to measure the time the Sleep function actually slept for, like this:
LARGE_INTEGER sleep_start = get_wall_clock();
Sleep(ms_to_sleep);
float seconds_slept = get_seconds_elapsed(sleep_start, get_wall_clock());
I found out that it sometimes sleeps for way more than the milliseconds requested. It's not uncommon to see a difference of 10-20 milliseconds (I once had ms_to_sleep at 12 and it actually slept for over 30 milliseconds).
Is there anything obvious that I'm missing here?
I know that according to the documentation Sleep is not guaranteed to sleep for exactly the time requested, but I thought that timeBeginPeriod plus the flooring of ms_to_sleep would have covered that...
Is there any other way to wait reliably for the frame to end (other than just looping)?
Thanks in advance guys...
Is there anything obvious that I'm missing here?
Frankly, yes. You didn't read the manual. From the docs:
After the sleep interval has passed, the thread is ready to run. If you specify 0 milliseconds, the thread will relinquish the remainder of its time slice but remain ready. Note that a ready thread is not guaranteed to run immediately. Consequently, the thread may not run until some time after the sleep interval elapses. For more information, see Scheduling Priorities.
Sleep isn't made to sleep for exactly the time you asked for.
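That lower-bound contract is easy to observe portably. This sketch uses std::this_thread::sleep_for rather than the WinAPI Sleep (my substitution, not from the answer), but the guarantee has the same shape: at least the requested time, possibly much more.

```cpp
#include <chrono>
#include <thread>

// Request a sleep and measure what we actually got. The overshoot
// (actual - requested) is the scheduler-dependent jitter the asker saw.
std::chrono::milliseconds measure_sleep(std::chrono::milliseconds requested) {
    auto t0 = std::chrono::steady_clock::now();
    std::this_thread::sleep_for(requested);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0);
}
```

The return value is always at least the requested duration; how much more depends entirely on the OS scheduler and timer resolution.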

C++ Setting Speed of While Loop per Second

I am relatively new to C++, so I don't have a huge amount of experience. I have learned Python, and I am trying to make an improved version of a Python code I wrote in C++. However, I want it to work in real time, so I need to set the speed of a While loop. I'm sure there is an answer, but I couldn't find it. I want a comparable code to this:
rate(timeModifier * (1/dt))
This was the code I used in Python. I can set a variable dt to make calculations more precise, and timeModifier to double or triple the speed (1 sets it to realtime). This means that the program will go through the loop 1/dt times per second. I understand I can include <ctime> in the header, but I guess I am too new to C++ to understand how to apply this to my needs.
You could write your own timer class:
#include <ctime>

class Timer {
private:
    unsigned long startTime;
public:
    void start() {
        startTime = clock();
    }
    unsigned long elapsedTime() {
        return ((unsigned long) clock() - startTime) / CLOCKS_PER_SEC;
    }
    bool isTimeout(unsigned long seconds) {
        return elapsedTime() >= seconds;
    }
};

int main()
{
    unsigned long dt = 10; // in seconds
    Timer t;
    t.start();
    while (true)
    {
        if (t.elapsedTime() < dt)
        {
            // do something to pass time, as a busy-wait or sleep
        }
        else
        {
            // do something else
            t.start(); // reset the timer
        }
    }
}
Note that busy-waits are discouraged, since they hog the CPU. If you don't need to do anything, use Sleep (Windows) or usleep (Linux). For more information on making timers in C++, see this link.
You can't do it the same way in C++. You need to manually call some kind of sleep function in the calculation loop: Sleep on Windows or usleep on *NIX.
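Since C++11 there is also a portable option, so the Windows/*NIX split can be avoided entirely. A sketch (run_at_rate and the step callback are illustrative names of mine, not from the answer):

```cpp
#include <chrono>
#include <thread>

// Run `step` roughly 1/dt times per second for a fixed number of
// iterations, using the portable std::this_thread::sleep_for in place
// of Sleep()/usleep().
template <typename F>
void run_at_rate(double dt_seconds, int iterations, F step) {
    auto period = std::chrono::duration<double>(dt_seconds);
    for (int i = 0; i < iterations; ++i) {
        step();
        std::this_thread::sleep_for(period);
    }
}
```

For example, run_at_rate(0.01, 3000, work) approximates "100 times a second for 30 seconds", with the same drift caveats as any relative-sleep loop.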
It's been a while since I've done something like this, but something like this will work:
#include <time.h>

time_t t2, t1 = time(NULL);
while (CONDITIONS)
{
    t2 = time(NULL);
    if (difftime(t2, t1) > timeModifier)
    {
        // Do the stuff!
        t1 = time(NULL);
    }
}
I should note, however, that this method only measures whole seconds.
If you need something more precise, use the clock() function, which returns the number of processor ticks since the program started; divide by CLOCKS_PER_SEC to get seconds (its actual resolution is platform-dependent, often around 10 milliseconds).
Perhaps something like this:
#include <time.h>

clock_t t2, t1 = clock();
while (CONDITIONS)
{
    t2 = clock();
    if ((t2 - t1) > someTimeElapsed * timeModifier)
    {
        // Do the stuff!
        t1 = clock();
    }
}
Update:
You can even yield the CPU to other threads and processes by adding this after the end of the if statement:
else
{
usleep(10000); //sleep for ten milliseconds (chosen because of precision on clock())
}
Depending on the accuracy you need and your platform, you could use usleep. This allows you to set the pause time down to microseconds:
#include <unistd.h>
int usleep(useconds_t useconds);
Remember that your loop will always take longer than this because of the inherent processing time of the rest of the loop, but it's a start. For anything more accurate, you'd probably need to look at timer-based callbacks.
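Before reaching for timer-based callbacks, it's worth noting that the drift can be removed from the loop itself by sleeping until an absolute deadline rather than for a relative duration. A C++11 sketch (the helper name is mine):

```cpp
#include <chrono>
#include <thread>

// Each wake-up target is derived from the previous deadline, not from
// "now", so per-iteration sleep error does not accumulate over the run.
template <typename F>
void fixed_rate_loop(std::chrono::microseconds period, int iterations, F body) {
    auto next = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) {
        body();
        next += period;                       // absolute deadline
        std::this_thread::sleep_until(next);  // late iterations self-correct
    }
}
```

If one iteration runs long, the next sleep_until simply returns sooner, pulling the loop back onto schedule instead of letting the error compound.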
You should really create a new thread and have it do the timing so that it remains unaffected by the processing work done in the loop.
WARNING: Pseudo code... just to give you an idea of how to start.
Thread* tThread = CreateTimerThread(1000);
tThread->run();
while( conditionNotMet() )
{
tThread->waitForTimer();
doWork();
}
CreateTimerThread() should return the thread object you want, and run would be something like:
run()
{
    while (false == shutdownLatch())
    {
        Sleep(timeout);
        pulseTimerEvent();
    }
}

waitForTimer()
{
    WaitForSingleObject(m_handle);
    return;
}
Under Windows you can use QueryPerformanceCounter; while polling the time (e.g. within another while loop), call Sleep(0) to allow other threads to continue operation.
Remember Sleep is highly inaccurate. For full control just run a loop without operations, but you'll use 100% of a CPU core. To relax the strain on the CPU you can call Sleep(10), etc.

Uniformly Regulating Program Execution Rate [Windows C++]

First off, I found a lot of information on this topic, but no solutions that solved the issue unfortunately.
I'm simply trying to regulate my C++ program to run at 60 iterations per second. I've tried everything from GetClockTicks() to GetLocalTime() to help in the regulation but every single time I run the program on my Windows Server 2008 machine, it runs slower than on my local machine and I have no clue why!
I understand that "clock"-based function calls return CPU time spent on execution, so I went to GetLocalTime, tried to take the difference between the start time and the stop time, and then called Sleep((1000 / FPS) - millisecondExecutionTime).
My local machine is quite a bit faster than the server's CPU, so obviously the thought was that it was going off CPU ticks, but that doesn't explain why GetLocalTime doesn't work either. I've been basing this method off http://www.lazyfoo.net/SDL_tutorials/lesson14/index.php, swapping get_ticks() for every time-returning function I could find on the web.
For example take this code:
#include <Windows.h>
#include <time.h>
#include <string>
#include <iostream>
using namespace std;
int main() {
    int tFps = 60;
    int counter = 0;
    SYSTEMTIME gStart, gEnd, start_time, end_time;
    GetLocalTime(&gStart);
    bool done = false;
    while (!done) {
        GetLocalTime(&start_time);
        Sleep(10);
        counter++;
        GetLocalTime(&end_time);
        int startTimeMilli = (start_time.wSecond * 1000 + start_time.wMilliseconds);
        int endTimeMilli = (end_time.wSecond * 1000 + end_time.wMilliseconds);
        int time_to_sleep = (1000 / tFps) - (endTimeMilli - startTimeMilli);
        if (counter > 240)
            done = true;
        if (time_to_sleep > 0)
            Sleep(time_to_sleep);
    }
    GetLocalTime(&gEnd);
    cout << "Total Time: " << (gEnd.wSecond * 1000 + gEnd.wMilliseconds) - (gStart.wSecond * 1000 + gStart.wMilliseconds) << endl;
    cin.get();
}
For this code snippet, run on my computer (3.06 GHz) I get a total time (ms) of 3856, whereas on my server (2.53 GHz) I get 6256. So it could be the speed of the processor, though the ratio 2.53/3.06 is only about 0.83, versus 3856/6256 ≈ 0.62.
I can't tell whether the Sleep function is doing something drastically different than expected (though I don't see why it would), or whether it's my method of getting the time, even though it should be wall-clock time (ms), not CPU time. Any help would be greatly appreciated, thanks.
For one thing, Sleep's default resolution is the computer's quota length - usually either 10ms or 15ms, depending on the Windows edition. To get a resolution of, say, 1ms, you have to issue a timeBeginPeriod(1), which reprograms the timer hardware to fire (roughly) once every millisecond.
In your main loop you can
int main()
{
    // Timers
    LONGLONG curTime = NULL;
    LONGLONG nextTime = NULL;
    int loops = 0; // frameskip counter (never incremented in this example)
    Timers::SWGameClock::GetInstance()->GetTime(&nextTime);
    while (true) {
        Timers::SWGameClock::GetInstance()->GetTime(&curTime);
        if (curTime > nextTime && loops <= MAX_FRAMESKIP) {
            nextTime += Timers::SWGameClock::GetInstance()->timeCount;
            // Business logic goes here, running at the specified framerate
        }
    }
}
using this time library
#include "stdafx.h"
LONGLONG cacheTime;
Timers::SWGameClock* Timers::SWGameClock::pInstance = NULL;

Timers::SWGameClock* Timers::SWGameClock::GetInstance()
{
    if (pInstance == NULL) {
        pInstance = new SWGameClock();
    }
    return pInstance;
}

Timers::SWGameClock::SWGameClock(void) {
    this->Initialize();
}

void Timers::SWGameClock::GetTime(LONGLONG* t) {
    // Use timeGetTime() if QueryPerformanceCounter is not supported
    if (!QueryPerformanceCounter((LARGE_INTEGER*) t)) {
        *t = timeGetTime();
    }
    cacheTime = *t;
}

LONGLONG Timers::SWGameClock::GetTimeElapsed(void) {
    LONGLONG t;
    // Use timeGetTime() if QueryPerformanceCounter is not supported
    if (!QueryPerformanceCounter((LARGE_INTEGER*) &t)) {
        t = timeGetTime();
    }
    return (t - cacheTime);
}

void Timers::SWGameClock::Initialize(void) {
    if (!QueryPerformanceFrequency((LARGE_INTEGER*) &this->frequency)) {
        this->frequency = 1000; // 1000 ms to one second
    }
    this->timeCount = DWORD(this->frequency / TICKS_PER_SECOND);
}

Timers::SWGameClock::~SWGameClock(void)
{
}
with a header file that contains the following:
// Required for rendering stuff on time
#pragma once

#define TICKS_PER_SECOND 60
#define MAX_FRAMESKIP 5

namespace Timers {
    class SWGameClock
    {
    public:
        static SWGameClock* GetInstance();
        void Initialize(void);
        DWORD timeCount;
        void GetTime(LONGLONG* t);
        LONGLONG GetTimeElapsed(void);
        LONGLONG frequency;
        ~SWGameClock(void);
    protected:
        SWGameClock(void);
    private:
        static SWGameClock* pInstance;
    }; // SWGameClock
} // Timers
This will ensure that your code runs at 60 FPS (or whatever you put in), though you can probably drop MAX_FRAMESKIP, as it isn't truly implemented in this example!
You could try a WinMain function and use the SetTimer function with a regular message loop (you can also take advantage of the filter mechanism of GetMessage(...)): test for the WM_TIMER message at the requested interval and, when your counter reaches the limit, call PostQuitMessage(0) to terminate the message loop.
For a duty cycle that fast, you can use a high-accuracy timer (like QueryPerformanceCounter) and a busy-wait loop.
If you had a much lower duty cycle, but still wanted precision, then you could Sleep for part of the time and then eat up the leftover time with a busy-wait loop.
Another option is to use something like DirectX to sync yourself to the VSync interrupt (which is almost always 60 Hz). This can make a lot of sense if you're coding a game or a/v presentation.
Windows is not a real-time OS, so there will never be a perfect way to do something like this, as there's no guarantee your thread will be scheduled to run exactly when you need it to.
Note that in the remarks for Sleep, the actual amount of time will be at least one "tick" and possible one whole "tick" longer than the delay you requested before the thread is scheduled to run again (and then we have to assume the thread is scheduled). The "tick" can vary a lot depending on hardware and the version of Windows. It is commonly in the 10-15 ms range, and I've seen it as bad as 19 ms. For 60 Hz, you need 16.666 ms per iteration, so this is obviously not nearly precise enough to give you what you need.
What about rendering (iterating) based on the time elapsed between frames? Consider creating a void render(double timePassed) function and rendering depending on the timePassed parameter instead of putting the program to sleep.
Imagine, for example, you want to render a ball falling or bouncing. You would know its speed, acceleration, and all the other physics you need. Calculate the position of the ball based on timePassed and all the other physics parameters (speed, acceleration, etc.).
Or, if you prefer, you could just skip the render() call when the time passed is too small, instead of putting the program to sleep.
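A sketch of that idea for the falling-ball example (the struct and constants are illustrative, not from the post): each frame advances the simulation by however much real time actually passed, so the result is frame-rate independent.

```cpp
// Advance a falling ball by the measured inter-frame interval instead of
// sleeping to force a fixed frame length.
struct Ball {
    double y = 0.0;  // position (m), positive downward
    double v = 0.0;  // velocity (m/s)
};

void update(Ball& b, double timePassed) {
    const double g = 9.81;  // gravitational acceleration (m/s^2)
    b.y += b.v * timePassed + 0.5 * g * timePassed * timePassed;
    b.v += g * timePassed;
}
```

After update(b, 1.0) from rest, b.v is 9.81 m/s and b.y is 4.905 m, matching the closed-form kinematics; the renderer just draws whatever state the physics produced.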

pthread sleep function, cpu consumption

Apologies in advance for my far-from-perfect English.
I recently wrote myself a daemon for Linux (to be exact, an OpenWRT router) in C++, and I ran into a problem.
There are a few threads: one for each open TCP connection, a main thread waiting for new TCP connections, and, as I call it, a commander thread to check status.
Everything works fine, but my CPU is always at 100%. I know that it's because of the commander code:
void *CommanderThread(void* arg)
{
    Commander* commander = (Commander*)arg;
    pthread_detach(pthread_self());
    clock_t endwait;
    while (true)
    {
        uint8_t temp;
        endwait = clock() + (int)(1 * CLOCKS_PER_SEC);
        for (int i = 0; i < commander->GetCount(); i++)
        {
            ptrRelayBoard rb = commander->GetBoard(i);
            if (rb != NULL)
                rb->Get(0x01, &temp);
        }
        while (clock() < endwait); // busy-wait: this is what pins the CPU
    }
    return NULL;
}
As you can see, the program does its work every 1 s. Timing is not critical here. I know the CPU is pinned because it is constantly checking whether the time has passed. I tried to do something like this:
while (clock() < endwait)
    usleep(200);
But the functions usleep (and sleep as well) seem to freeze the clock increment: clock() always returns a constant value after the usleep.
Is there any solution, a ready-made function (something like pthread_sleep(20ms)), or a workaround for my problem? Maybe I should access the main clock somehow?
Here it's not so critical: I can measure how long the status check took (latch clock() before, compare after) and pass the computed value as an argument to usleep. But in another thread I would like to use this form.
Does usleep put the whole process to sleep?
I'm currently debugging it on Cygwin, but I don't think the problem lies there.
Thanks for any answers and suggestions its much appreciated.
J.L.
If it doesn't need to be exactly 1 s, then just usleep for a second. usleep and sleep put the current thread into an efficient wait state lasting at least the amount of time you requested (after which the thread becomes eligible for scheduling again).
If you aren't trying to get near exact time there's no need to check clock().
I have resolved it another way:
#include <sys/time.h>
#include <unistd.h>

#define CLOCK_US_IN_SECOND 1000000

static long myclock()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (tv.tv_sec * CLOCK_US_IN_SECOND) + tv.tv_usec;
}

void *MainThread(void* arg)
{
    Commander* commander = (Commander*)arg;
    pthread_detach(pthread_self());
    long endwait;
    while (true)
    {
        uint8_t temp;
        endwait = myclock() + (int)(1 * CLOCK_US_IN_SECOND);
        for (int i = 0; i < commander->GetCount(); i++)
        {
            ptrRelayBoard rb = commander->GetBoard(i);
            if (rb != NULL)
                rb->Get(0x01, &temp);
        }
        while (myclock() < endwait)
            usleep((int)(0.05 * CLOCK_US_IN_SECOND)); // parentheses matter: (int)0.05 would truncate to 0
    }
    return NULL;
}
Bear in mind that this code is vulnerable to changes of the system time during execution, since gettimeofday follows wall-clock adjustments. I don't have an idea how to avoid that, but in my case it's not really important.
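One way to close that hole (my suggestion, not part of the original answer) is to base myclock() on a monotonic clock, which wall-clock adjustments cannot move:

```cpp
#include <chrono>

// Drop-in replacement for the gettimeofday()-based myclock():
// steady_clock never jumps backwards, so NTP corrections or manual
// time changes cannot break the endwait deadline. Returning long long
// also avoids the 32-bit overflow risk of tv_sec * 1000000.
static long long myclock_monotonic()
{
    using namespace std::chrono;
    return duration_cast<microseconds>(
        steady_clock::now().time_since_epoch()).count();
}
```

The C-level equivalent is clock_gettime(CLOCK_MONOTONIC, ...), which is available on Linux and OpenWRT.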