timeGetTime() start variable is bigger than end variable - c++

I am using timeGetTime() to limit the framerate to 60 frames per second. The way I intend to do that is to measure the time it takes to render 60 frames and then use Sleep to wait out the remainder of the second. But for some reason timeGetTime() returns a much bigger number the first time I call it than when I call it after the 60 frames have been rendered.
Here is the code:
Header
#ifndef __TesteMapa_h_
#define __TesteMapa_h_

#include "BaseApplication.h"
#include "Mundo.h"

class TesteMapa : public BaseApplication{
public:
    TesteMapa();
    virtual ~TesteMapa();
protected:
    virtual void createScene();
    virtual bool frameRenderingQueued(const Ogre::FrameEvent& evt);
    virtual bool frameEnded(const Ogre::FrameEvent& evt);
    virtual bool keyPressed(const OIS::KeyEvent &evt);
    virtual bool keyReleased(const OIS::KeyEvent &evt);
private:
    Mundo mundo = Mundo(3,3,3);
    short altura, largura, passos, balanca, framesNoSegundo=0;
    Ogre::SceneNode *noSol, *noSolFilho, *noCamera;
    DWORD inicioSegundo = 0, finala; // inicioSegundo is the start variable and finala the ending variable
};

#endif
CPP relevant function
bool TesteMapa::frameEnded(const Ogre::FrameEvent& evt){
    framesNoSegundo++;
    if (inicioSegundo == 0)
        inicioSegundo = timeGetTime();
    else{
        if (framesNoSegundo == 60){
            finala = timeGetTime(); // getting this just to see the value being returned
            Sleep(1000UL - (timeGetTime() - inicioSegundo));
            inicioSegundo = 0;
            framesNoSegundo = 0;
        }
    }
    return true;
}
I am using timeBeginPeriod(1) and timeEndPeriod(1) in the main function.

Without even reading the complete question, the following:
using timeGetTime()
to limit the framerate to 60 frames per second
...
Sleep for the remainder of the second
can be answered with a firm "You are doing it wrong". In other words, stop here, and take a different approach.
Neither does timeGetTime have the necessary precision (not even if you use timeBeginPeriod(1)), nor does Sleep have the required precision, nor does Sleep give any guarantees about the maximum duration, nor are the semantics of Sleep even remotely close to what you expect, nor is sleeping to limit the frame rate a correct approach.
Also, calculating the remainder of the second will inevitably introduce a systematic error that will accumulate over time.
The one and only correct approach to limit frame rate is to use vertical sync.
If you need to otherwise limit a simulation to a particular rate, using a waitable timer is the correct approach. That will still be subject to the scheduler's precision, but it will avoid accumulating systematic errors, and priority boost will at least give a de-facto soft realtime guarantee.
In order to understand why what you are doing is (aside from precision and accumulating errors) conceptually wrong to begin with, consider two things:
Different timers, even if they run at apparently the same frequency, will diverge (thus, using any timer other than the vsync interrupt is wrong to limit frame rate). Watch cars at a red traffic light for a real-life analogy. Their blinkers will always be out of sync.
Sleep makes the current thread "not ready" to run, and eventually, some time after the specified time has passed, makes the thread "ready" again. That doesn't mean that the thread will run at that time again. Indeed, it doesn't necessarily mean that the thread will run at all in any finite amount of time.
Resolution is commonly around 16ms (1ms if you adjust the scheduler's granularity, which is an antipattern -- some recent architectures support 0.5ms by using the undocumented Nt API), which is way too coarse for something on the 1/60 second scale.

If you're using Visual Studio 2013 or older, std::chrono uses the 64 Hz ticker (15.625 ms per tick), which is too coarse. VS 2015 is supposed to fix this. You can use QueryPerformanceCounter instead. Here is example code that runs at a fixed frequency with no drift, since delays are based off an original reading of the counter. dwLateStep is a debugging aid that gets incremented if one or more steps took too long. The code is Windows XP compatible, where Sleep(1) can take up to 2 ms, which is why the code only sleeps if there are 2 ms or more of delay time remaining.
typedef unsigned long long UI64; /* unsigned 64 bit int */
#define FREQ 60                  /* frequency */

DWORD dwLateStep;                /* late step count */
LARGE_INTEGER liPerfFreq;        /* 64 bit frequency */
LARGE_INTEGER liPerfTemp;        /* used for query */
UI64 uFreq = FREQ;               /* thread frequency */
UI64 uOrig;                      /* original tick */
UI64 uWait;                      /* tick rate / freq */
UI64 uRem = 0;                   /* tick rate % freq */
UI64 uPrev;                      /* previous tick based on original tick */
UI64 uDelta;                     /* current tick - previous */
UI64 u2ms;                       /* 2 ms of ticks */
UI64 i;

/* ... */                        /* wait for some event to start thread */
QueryPerformanceFrequency(&liPerfFreq);
u2ms = ((UI64)(liPerfFreq.QuadPart)+499) / ((UI64)500);
timeBeginPeriod(1);              /* set period to 1 ms */
Sleep(128);                      /* wait for it to stabilize */
QueryPerformanceCounter(&liPerfTemp);
uOrig = uPrev = liPerfTemp.QuadPart;
for(i = 0; i < (uFreq*30); i++){
    /* update uWait and uRem based on uRem */
    uWait = ((UI64)(liPerfFreq.QuadPart) + uRem) / uFreq;
    uRem  = ((UI64)(liPerfFreq.QuadPart) + uRem) % uFreq;
    /* wait for uWait ticks */
    while(1){
        QueryPerformanceCounter((PLARGE_INTEGER)&liPerfTemp);
        uDelta = (UI64)(liPerfTemp.QuadPart - uPrev);
        if(uDelta >= uWait)
            break;
        if((uWait - uDelta) > u2ms)
            Sleep(1);
    }
    if(uDelta >= (uWait*2))
        dwLateStep += 1;
    uPrev += uWait;
    /* fixed frequency code goes here */
    /* along with some type of break when done */
}
timeEndPeriod(1);                /* restore period */

Related

Frequency measurement with 8051 microcontroller

I simply want to continuously calculate the frequency of a sine signal using a comparator input (on the falling edges). The target frequency is about ~122 Hz, and my implementation works most of the time, but sometimes it calculates a wrong frequency of about ~61 Hz (which cannot be correct; I verified this with an oscilloscope).
It seems my implementation has a weakness, perhaps in form of a race condition or misuse of the timer, since it uses concurrent interrupt service routines and manually starts and stops the timer.
I also think the bug correlates with the measured frequency of about ~122 Hz, because the frequency corresponding to one timer overflow is almost exactly that:
One timer overflow = 1 / ((1/8 MHz) * 2^16) = 122.0703125 Hz
I am using an 8051 microcontroller (Silicon Labs C8051F121) with the following code:
// defines
#define PERIOD_TIMER_FREQ 8000000.0 // Timer 3 runs at 8MHz
#define TMR3_PAGE 0x01 /* TIMER 3 */
#define CP1F_VECTOR 12 /* comparator 1 falling edge */
#define TF3_VECTOR 14 /* timer3 reload timer */
sfr TMR3CN = 0xC8; /* TIMER 3 CONTROL */
sfr TMR3L = 0xCC; /* TIMER 3 LOW BYTE */
sfr TMR3H = 0xCD; /* TIMER 3 HIGH BYTE */
// global variables
volatile unsigned int xdata timer3_overflow_tmp; // temporary counter for the current period
volatile unsigned int xdata timer3_lastValue; // snapshot of the last timer value
volatile unsigned int xdata timer3_overflow; // current overflow counter, used in the main routine
volatile unsigned int xdata tempVar; // temporary variable
volatile unsigned long int xdata period; // the calculated period
volatile float xdata period_in_SI; // calculated period in seconds
volatile float xdata frequency; // calculated frequency in Hertz
// Comparator 1 ISR has priority "high": EIP1 = 0x40
void comp1_falling_isr (void) interrupt CP1F_VECTOR
{
    SFRPAGE = TMR3_PAGE;
    TMR3CN &= 0xfb;             // stop timer 3
    timer3_lastValue = (unsigned int) TMR3H;
    timer3_lastValue <<= 8;
    timer3_lastValue |= (unsigned int) TMR3L;
    // check if a timer 3 overflow is pending
    if (TMR3CN & 0x80)
    {
        timer3_overflow_tmp++;  // increment overflow counter
        TMR3CN &= 0x7f;         // Clear overflow flag. This also clears a pending interrupt request.
    }
    timer3_overflow = timer3_overflow_tmp;
    // Reset all the timer 3 values to zero
    TMR3H = 0;
    TMR3L = 0;
    timer3_overflow_tmp = 0;
    TMR3CN |= 0x04;             // restart timer 3
}
// Timer 3 ISR has priority "low", which means it can be interrupted by the
// comparator ISR: EIP2 = 0x00
// Timer 3 runs at 8MHz in 16 bit auto-reload mode
void timer3_isr(void) interrupt TF3_VECTOR using 2
{
    SFRPAGE = TMR3_PAGE;
    timer3_overflow_tmp++;
    TMR3CN &= 0x7f;             // Clear overflow flag. This also clears a pending interrupt request.
}
void main(void)
{
    for(;;)
    {
        ...
calcFrequencyLabel: // this goto label is a kind of synchronization mechanism,
        // used to prevent race conditions caused by the ISRs,
        // which would invalidate the currently copied timer values
        tempVar = timer3_lastValue;
        period = (unsigned long int)timer3_overflow;
        period <<= 16;
        period |= (unsigned long int)timer3_lastValue;
        // If both values are not equal, a race condition has occurred.
        // Therefore the current timer values are invalid and need to be dropped.
        if (tempVar != timer3_lastValue)
            goto calcFrequencyLabel;
        // Calculate the period in seconds
        period_in_SI = (float) period / PERIOD_TIMER_FREQ;
        // Calculate the frequency in Hertz
        frequency = 1 / period_in_SI; // Should always be stable at about ~122 Hz
        ...
    }
}
Can someone please help me to find the bug in my implementation?
I can't pin-point the particular bug, but you have some problems in this code.
The main problem is that the 8051 is not a PC; it is the most horrible 8-bit MCU to ever become mainstream. This means that you should desperately avoid things like 32-bit integers and floating point. If you disassemble this code you'll see what I mean.
There is absolutely no reason why you need to use floating point here. And 32 bit variables could probably be avoided too. You should use uint8_t whenever possible and avoid unsigned int too. Your C code shouldn't need to know the time in seconds or the frequency in Hz, but just care about the number of timer cycles.
You have multiple race condition bugs. Your goto hack in main is a dirty solution - instead you should prevent the race condition from happening in the first place. And you have another race condition between the ISRs with timer3_overflow_tmp.
Every variable shared between an ISR and main, or between two ISRs with different priorities, must be protected against race conditions. This means that you must either ensure atomic access or use some manner of guard mechanism. In this case, you could probably just use a "poor man's mutex" bool flag. The other alternative is to change to an 8-bit variable and write the code accessing it in inline assembler. Generally, you cannot have atomic access to an unsigned int on an 8-bit core.
With a slow edge, as you would have for a low-frequency sine, and insufficient hysteresis in the input (the default being none), it would only take a little noise for a rising edge to look like a falling edge and result in half the frequency.
The code fragment does not include the setting of CPT1CN, where the hysteresis is set. For your signal you probably need to max it out, and ensure that the peak-to-peak noise on your signal is less than 30 mV.

How to create a Timer in C of fixed Duration [closed]

Closed 7 years ago. This question needs details or clarity and is not currently accepting answers.
How do I create a timer of 200 µs in C/C++?
I want to perform some task within that time, then have the timer reset, and keep performing the same task every 200 µs.
C++ has std::chrono::high_resolution_clock, which may have nanosecond precision.
...represents the clock with the smallest tick period provided by the
implementation.
Standard time functions in C aren't very precise; expect around 15 ms of error.
For waiting: Unix implementations provide usleep and nanosleep. On Windows you can use CreateWaitableTimer (example).
For current time: Unix provides clock_gettime, Windows QueryPerformanceCounter (more).
Implementing a timer class which uses these time functions shouldn't be much work but if you use a good framework probably a high resolution timer is already available.
Example windows code for a thread. Note that cpu usage will be 100% for the core running this thread.
/* code for a thread to run at fixed frequency */
typedef unsigned long long UI64; /* unsigned 64 bit int */
#define FREQ 5000                /* frequency */

LARGE_INTEGER liPerfFreq;        /* 64 bit frequency */
LARGE_INTEGER liPerfTemp;        /* used for query */
UI64 uFreq = FREQ;               /* process frequency */
UI64 uOrig;                      /* original tick */
UI64 uWait;                      /* tick rate / freq */
UI64 uRem = 0;                   /* tick rate % freq */
UI64 uPrev;                      /* previous tick based on original tick */
UI64 uDelta;                     /* current tick - previous */

/* ... */                        /* wait for some event to start thread */
QueryPerformanceFrequency(&liPerfFreq);
QueryPerformanceCounter((PLARGE_INTEGER)&liPerfTemp);
uOrig = uPrev = liPerfTemp.QuadPart;
while(1){
    /* update uWait and uRem based on uRem */
    uWait = ((UI64)(liPerfFreq.QuadPart) + uRem) / uFreq;
    uRem  = ((UI64)(liPerfFreq.QuadPart) + uRem) % uFreq;
    /* wait for uWait ticks */
    while(1){
        QueryPerformanceCounter((PLARGE_INTEGER)&liPerfTemp);
        uDelta = (UI64)(liPerfTemp.QuadPart - uPrev);
        if(uDelta >= uWait)
            break;
    }
    uPrev += uWait;
    /* fixed frequency code goes here */
    /* along with some type of break when done */
}

How to coordinate threads properly based on a fixed cycle frequency?

I want to create a gateway to pass messages between a CAN bus and LCM, and vice versa. If a message on LCM is sent at a specific frequency, its copy on the CAN bus should be sent at exactly the same frequency.
How would you solve this?
My idea was to use two threads, one for each direction, each converting messages from one system to the other in a loop. The two threads are coordinated by a timer set to a frequency much lower than the maximum possible message-passing frequency. The timer sends a signal to the threads after each cycle. The threads wait for that event at the end of each loop, i.e. they sleep and free resources until the event occurs.
I implemented this idea already but the resulting frequencies are not constant. Am I doing something conceptually wrong?
A solution should run on Windows, using either the native Windows API or, for example, Boost threads. The gateway should be real-time capable.
For windows, timeSetEvent can be used to set an event at a regular interval, although MSDN lists it as an obsolete function. The replacement for timeSetEvent uses a callback function, so you'd have to set an event in the callback function.
You can increase the tick rate from its default 64 Hz (15.625 ms per tick) down to 1 ms using timeBeginPeriod.
Some games have threads that run at fixed frequencies, and poll a high frequency counter and Sleep when there's enough delay time remaining in the current cycle. To prevent drift, the delay is based off an original reading of the high frequency counter. Example code that is Windows XP compatible, where a Sleep(1) can take up to 2 milliseconds. dwLateStep is a diagnostic aid and incremented if the code exceeds a cycle period.
/* code for a thread to run at fixed frequency */
typedef unsigned long long UI64; /* unsigned 64 bit int */
#define FREQ 400                 /* frequency */

DWORD dwLateStep;                /* late step count */
LARGE_INTEGER liPerfFreq;        /* 64 bit frequency */
LARGE_INTEGER liPerfTemp;        /* used for query */
UI64 uFreq = FREQ;               /* process frequency */
UI64 uOrig;                      /* original tick */
UI64 uWait;                      /* tick rate / freq */
UI64 uRem = 0;                   /* tick rate % freq */
UI64 uPrev;                      /* previous tick based on original tick */
UI64 uDelta;                     /* current tick - previous */
UI64 u2ms;                       /* 2 ms of ticks */
UI64 i;

/* ... */                        /* wait for some event to start thread */
QueryPerformanceFrequency(&liPerfFreq);
u2ms = ((UI64)(liPerfFreq.QuadPart)+499) / ((UI64)500);
timeBeginPeriod(1);              /* set period to 1 ms */
Sleep(128);                      /* wait for it to stabilize */
QueryPerformanceCounter((PLARGE_INTEGER)&liPerfTemp);
uOrig = uPrev = liPerfTemp.QuadPart;
for(i = 0; i < (uFreq*30); i++){
    /* update uWait and uRem based on uRem */
    uWait = ((UI64)(liPerfFreq.QuadPart) + uRem) / uFreq;
    uRem  = ((UI64)(liPerfFreq.QuadPart) + uRem) % uFreq;
    /* wait for uWait ticks */
    while(1){
        QueryPerformanceCounter((PLARGE_INTEGER)&liPerfTemp);
        uDelta = (UI64)(liPerfTemp.QuadPart - uPrev);
        if(uDelta >= uWait)
            break;
        if((uWait - uDelta) > u2ms)
            Sleep(1);
    }
    if(uDelta >= (uWait*2))
        dwLateStep += 1;
    uPrev += uWait;
    /* fixed frequency code goes here */
    /* along with some type of break when done */
}
timeEndPeriod(1);                /* restore period */

Performance of Windows timer functions

I am trying to call a watchdog function every 500 ms using timeSetEvent.
Normally the watchdog is called without any problems. However, when a DVD is inserted into a drive, I don't get a callback for up to 8 seconds as the system is busy reading the disk. Using a Windows timer queue, things aren't quite so bad, but I still get a 4 second delay.
My last attempt was to set the thread priority to time critical and then sleep for 500 ms, but again the callback didn't occur for 8 seconds.
I am still running XP, but I'm not aware of changes made in later operating systems which would make this better.
I am using the DVD insertion as an example. I'd like the code to be robust to other conditions if possible.
Any suggestions greatly appreciated.
The best way we've found of getting decent timing performance from Windows is to avoid the timer functions altogether and roll your own; just sit in a tight loop, eating CPU, until the time for the next timer event comes up. QueryPerformanceCounter seems to be the best way of getting fast, high-resolution time for this purpose. Do whatever you can to give the thread that is running this loop its own CPU core that it will stick to and won't have anything else taking CPU time from.
But be warned:
Timing is still far from perfect. This method leaves the vast bulk of your cycles with Fairly Reasonable Timing (TM), but there will be occasional glitches, mostly in the 100s of ms but very occasionally in the range of a few seconds.
This, of course, sets something of a lower bound on your CPU utilisation.
I still don't know how it'll perform with your DVD driver.
In general, if you need accurate timing, Windows is not the place to do it.
Have you looked into the RegisterWaitForSingleObject function (http://msdn.microsoft.com/en-us/library/windows/desktop/ms685061%28v=vs.85%29.aspx)?
You could try this example code, meant to be used in a separate thread to run at fixed frequency, up to 400hz on Windows XP (where Sleep(1) can take up to about 2 ms). There seems to be a system wide delay when a new volume becomes ready, in this case a DVD, but a similar delay can happen if an external USB hard drive is turned on. Seems Windows XP starts scanning the new volume, locking up quite a bit of the system, and even this code example may get impacted, since it's using Sleep(1) to reduce cpu overhead. You could remove the Sleep(1) code, which would keep one of the cpu cores running at 100%, and it's still possible that inserting a DVD would even mess this up.
/* code for a thread to run at fixed frequency */
typedef unsigned long long UI64; /* unsigned 64 bit int */
#define FREQ 2                   /* frequency (500 ms) */

DWORD dwLateStep;                /* late step count */
LARGE_INTEGER liPerfFreq;        /* 64 bit frequency */
LARGE_INTEGER liPerfTemp;        /* used for query */
UI64 uFreq = FREQ;               /* process frequency */
UI64 uOrig;                      /* original tick */
UI64 uWait;                      /* tick rate / freq */
UI64 uRem = 0;                   /* tick rate % freq */
UI64 uPrev;                      /* previous tick based on original tick */
UI64 uDelta;                     /* current tick - previous */
UI64 u2ms;                       /* 2 ms of ticks */
UI64 i;

/* ... */                        /* wait for some event to start thread */
QueryPerformanceFrequency(&liPerfFreq);
u2ms = ((UI64)(liPerfFreq.QuadPart)+499) / ((UI64)500);
timeBeginPeriod(1);              /* set period to 1 ms */
Sleep(128);                      /* wait for it to stabilize */
QueryPerformanceCounter((PLARGE_INTEGER)&liPerfTemp);
uOrig = uPrev = liPerfTemp.QuadPart;
for(i = 0; i < (uFreq*30); i++){
    /* update uWait and uRem based on uRem */
    uWait = ((UI64)(liPerfFreq.QuadPart) + uRem) / uFreq;
    uRem  = ((UI64)(liPerfFreq.QuadPart) + uRem) % uFreq;
    /* wait for uWait ticks */
    while(1){
        QueryPerformanceCounter((PLARGE_INTEGER)&liPerfTemp);
        uDelta = (UI64)(liPerfTemp.QuadPart - uPrev);
        if(uDelta >= uWait)
            break;
        if((uWait - uDelta) > u2ms)
            Sleep(1);
    }
    if(uDelta >= (uWait*2))
        dwLateStep += 1;
    uPrev += uWait;
    /* fixed frequency code goes here */
    /* along with some type of break when done */
}
timeEndPeriod(1);                /* restore period */

Sleep(1) and SDL_Delay(1) takes 15 ms

I am writing a C++/SDL/OpenGL application, and I have had the most peculiar bug. The game seemed to be working fine with a simple variable timestep. But then the FPS started behaving strangely. I figured out that both Sleep(1) and SDL_Delay(1) take 15 ms to complete.
Any argument to those functions between 0 and 15 takes 15 ms to complete, locking the FPS at about 64. If I set it to 16, it takes 30 ms!
My loop looks like this:
while (1){
    GLuint t = SDL_GetTicks();
    Sleep(1); // or SDL_Delay(1)
    cout << SDL_GetTicks() - t << endl; // outputs 15
}
It will very rarely take 1ms as it is supposed to, but the majority of the time it takes 15ms.
My OS is Windows 8.1, my CPU is an Intel i7, and I am using SDL2.
The ticker defaults to 64 Hz, or 15.625 ms per tick. You need to change this to 1000 Hz (1 ms per tick) with timeBeginPeriod(1). MSDN article:
http://msdn.microsoft.com/en-us/library/windows/desktop/dd757624(v=vs.85).aspx
If the goal here is to get a fixed frequency sequence, you should use a higher resolution timer, but unfortunately these can only be polled, so a combination of polling and sleep to reduce cpu overhead is needed. Example code, which assumes that a Sleep(1) could take up to almost 2 ms (which does happen with Windows XP, but not with later versions of Windows).
/* code for a thread to run at fixed frequency */
#define FREQ 400 /* frequency */
typedef unsigned long long UI64; /* unsigned 64 bit int */
LARGE_INTEGER liPerfFreq; /* used for frequency */
LARGE_INTEGER liPerfTemp; /* used for query */
UI64 uFreq = FREQ; /* process frequency */
UI64 uOrig; /* original tick */
UI64 uWait; /* tick rate / freq */
UI64 uRem = 0; /* tick rate % freq */
UI64 uPrev; /* previous tick based on original tick */
UI64 uDelta; /* current tick - previous */
UI64 u2ms; /* 2ms of ticks */
#if 0 /* for optional error check */
static DWORD dwLateStep = 0;
#endif
/* get frequency */
QueryPerformanceFrequency(&liPerfFreq);
u2ms = ((UI64)(liPerfFreq.QuadPart)+499) / ((UI64)500);
/* wait for some event to start this thread code */
timeBeginPeriod(1); /* set period to 1ms */
Sleep(128); /* wait for it to stabilize */
QueryPerformanceCounter((PLARGE_INTEGER)&liPerfTemp);
uOrig = uPrev = liPerfTemp.QuadPart;
while(1){
/* update uWait and uRem based on uRem */
uWait = ((UI64)(liPerfFreq.QuadPart) + uRem) / uFreq;
uRem = ((UI64)(liPerfFreq.QuadPart) + uRem) % uFreq;
/* wait for uWait ticks */
while(1){
QueryPerformanceCounter((PLARGE_INTEGER)&liPerfTemp);
uDelta = (UI64)(liPerfTemp.QuadPart - uPrev);
if(uDelta >= uWait)
break;
if((uWait - uDelta) > u2ms)
Sleep(1);
}
#if 0 /* optional error check */
if(uDelta >= (uWait*2))
dwLateStep += 1;
#endif
uPrev += uWait;
/* fixed frequency code goes here */
/* along with some type of break when done */
}
timeEndPeriod(1); /* restore period */
Looks like 15 ms is the smallest slice the OS will deliver to you. I'm not sure about your specific framework, but sleep usually guarantees a minimum sleep time (i.e. it will sleep for at least 1 ms).
SDL_Delay()/Sleep() cannot be used reliably with times below 10-15 milliseconds. CPU ticks don't register fast enough to detect a 1 ms difference.
See the SDL docs here.