What's the simplest way of adjusting frame rate in C++?

I have a while loop which displays things in a window using OpenGL, but the animation runs too fast compared to how it runs on other computers, so I need something in the loop that waits until 1/40 of a second has passed since the previous display. How do I do that? (I'm a C++ noob.)

You need to check the time at the beginning of your loop, check the time again at the end of the loop after you've finished all of your rendering and update logic, and then Sleep() for the difference between the elapsed time and the target frame time (25 ms for 40 fps).
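For example, a minimal sketch with std::chrono and std::this_thread (render() here is a placeholder for your own OpenGL drawing, and the portable sleep_for stands in for the Windows-specific Sleep()):

#include <chrono>
#include <thread>

void render() { /* your OpenGL drawing goes here */ }

int main()
{
    using clock = std::chrono::steady_clock;
    const auto frame_time = std::chrono::milliseconds(25); // 1/40 s = 25 ms

    while (true)
    {
        auto start = clock::now();
        render();                       // draw one frame
        auto elapsed = clock::now() - start;
        if (elapsed < frame_time)       // finished early: sleep off the rest
            std::this_thread::sleep_for(frame_time - elapsed);
    }
}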

This is some code I used in C++ with the SDL library. Basically you need a function that starts a timer at the start of your loop (StartFpsTimer()) and a function that waits until the next frame is due, based on the constant frame rate you want (WaitTillNextFrame()).
The m_oTimer object is a simple timer object that you can start, stop, and pause.
GAME_ENGINE_FPS is the frame rate you would like to have.
// Sets the timer for the main loop
void StartFpsTimer()
{
    m_oTimer.Start();
}

// Waits till the next frame is due (to call the loop at regular intervals)
void WaitTillNextFrame()
{
    if (this->m_oTimer.GetTicks() < 1000.0 / GAME_ENGINE_FPS) {
        delay((1000.0 / GAME_ENGINE_FPS) - m_oTimer.GetTicks());
    }
}
while (this->IsRunning())
{
    // Starts the fps timer
    this->StartFpsTimer();

    // Input
    this->HandleEvents();

    // Logic
    this->Update();

    // Rendering
    this->Draw();

    // Wait till the next frame
    this->WaitTillNextFrame();
}
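For reference, here is a minimal sketch of what such a timer object might look like on top of SDL_GetTicks() (this class is an assumption for illustration; the original m_oTimer may differ, and stop/pause are omitted):

#include <SDL.h>

class Timer
{
public:
    // Remember the moment the timer was started
    void Start() { m_startTicks = SDL_GetTicks(); }

    // Milliseconds elapsed since Start() was called
    Uint32 GetTicks() const { return SDL_GetTicks() - m_startTicks; }

private:
    Uint32 m_startTicks = 0;
};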

Related

Inconsistent chrono::high_resolution_clock delay

I'm trying to implement a MIDI-like clocked sample player.
There is a timer which increments a pulse counter, and every 480 pulses make up a quarter note, so the pulse period is 1041667 ns at 120 beats per minute.
The timer is not sleep-based and runs in a separate thread, but the delay time seems inconsistent: the period between samples played in a test file fluctuates by ±20 ms (on some occasions the period is OK and steady; I can't figure out what this effect depends on).
Audio backend influence is excluded: I've tried OpenAL as well as SDL_mixer.
void Timer_class::sleep_ns(uint64_t ns){
    auto start = std::chrono::high_resolution_clock::now();
    bool sleep = true;
    while (sleep)
    {
        auto now = std::chrono::high_resolution_clock::now();
        auto elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(now - start);
        if (elapsed.count() >= ns) {
            TestTime = elapsed.count();
            sleep = false;
            //break;
        }
    }
}
void Timer_class::Runner(void){
    // this runs as a thread
    while (1) {
        sleep_ns(BPMns);
        if (Run) Transport.IncPlaybackMarker(); // marker increment
        if (Transport.GetPlaybackMarker() == Transport.GetPlaybackEnd()) { // check if the timer has reached the end, which is 480 pulses
            Transport.SetPlaybackMarker(Transport.GetPlaybackStart());
            Player.PlayFile(1); // period of this event fluctuates severely
        }
    }
}
void Player_class::PlayFile(int FileNumber){
#ifdef AUDIO_SDL_MIXER
    if (Mix_PlayChannel(-1, WaveData[FileNumber], 0) == -1) {
        printf("Mix_PlayChannel: %s\n", Mix_GetError());
    }
#endif // AUDIO_SDL_MIXER
}
Am I doing something wrong in terms of the approach? Is there a better way to implement a timer of this kind?
A deviation higher than 4-5 ms is too much in the case of audio.
I see a large error and a small error. The large error is that your code assumes that the main processing in Runner consistently takes zero time:
if (Run) Transport.IncPlaybackMarker(); // marker increment
if (Transport.GetPlaybackMarker() == Transport.GetPlaybackEnd()) { // check if the timer has reached the end, which is 480 pulses
    Transport.SetPlaybackMarker(Transport.GetPlaybackStart());
    Player.PlayFile(1); // period of this event fluctuates severely
}
That is, you're "sleeping" for the time you want your loop iteration to take, and then you're doing processing on top of that.
The small error is presuming that you can represent your ideal loop iteration time with an integral number of nanoseconds. This error is so small that it doesn't really matter. However, I amuse myself by showing people how they can get rid of this error too. :-)
First, let's correct the small error by exactly representing the idealized loop iteration time:
using quarterPeriod = std::ratio<1, 2>;
using iterationPeriod = std::ratio_divide<quarterPeriod, std::ratio<480>>;
using iteration_time = std::chrono::duration<std::int64_t, iterationPeriod>;
I know nothing of music, but I'm guessing the above code is right because if you convert iteration_time{1} to nanoseconds, you get approximately 1041667ns. iteration_time{1} is intended to be the precise amount of time you want each iteration of your loop in Timer_class::Runner to take.
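You can sanity-check that conversion yourself with a small standalone snippet (this assumes only the three type aliases above):

#include <chrono>
#include <cstdint>
#include <iostream>
#include <ratio>

using quarterPeriod = std::ratio<1, 2>;
using iterationPeriod = std::ratio_divide<quarterPeriod, std::ratio<480>>;
using iteration_time = std::chrono::duration<std::int64_t, iterationPeriod>;

int main()
{
    using namespace std::chrono;
    // 1/960 of a second: prints 1041666 (truncated from the exact 1041666.6... ns)
    std::cout << duration_cast<nanoseconds>(iteration_time{1}).count() << "ns\n";
}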
To correct the large error, you need to sleep until a time_point, as opposed to sleeping for a duration. Here's a generic utility to help you do that:
template <class Clock, class Duration>
void
delay_until(std::chrono::time_point<Clock, Duration> tp)
{
    while (Clock::now() < tp)
        ;
}
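Note that delay_until as written spins the CPU at 100% while it waits. That buys accuracy, but if the burn is a concern, one variant (my sketch, not part of the original answer) sleeps through most of the interval and spins only the last stretch:

#include <chrono>
#include <thread>

template <class Clock, class Duration>
void delay_until_hybrid(std::chrono::time_point<Clock, Duration> tp)
{
    // Coarse sleep, leaving ~1 ms of slack for OS scheduler jitter
    auto coarse = tp - std::chrono::milliseconds(1);
    if (Clock::now() < coarse)
        std::this_thread::sleep_until(coarse);

    // Spin for the final stretch to get sub-millisecond accuracy
    while (Clock::now() < tp)
        ;
}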
Now if you code Timer_class::Runner to use delay_until instead of sleep_ns, I think you'll get better results:
void
Timer_class::Runner()
{
    auto next_start = std::chrono::steady_clock::now() + iteration_time{1};
    while (true)
    {
        if (Run) Transport.IncPlaybackMarker(); // marker increment
        if (Transport.GetPlaybackMarker() == Transport.GetPlaybackEnd()) { // check if the timer has reached the end, which is 480 pulses
            Transport.SetPlaybackMarker(Transport.GetPlaybackStart());
            Player.PlayFile(1);
        }
        delay_until(next_start);
        next_start += iteration_time{1};
    }
}
I ended up using Howard Hinnant's version of delay and reducing the buffer size in openal-soft; that's what made a huge difference. Fluctuation is now about ±5 ms for 1/16th notes at 120 BPM (125 ms period) and ±1 ms for quarter notes. It leaves a lot to be desired, but I guess it's okay.

Run two delays at once C++

I want to make a program in which two dots blink (with a break of 10 ms) simultaneously, but one with a delay of 200 ms and the other with a delay of 300 ms. How can I make these two dots blink simultaneously from the beginning? Is there a better way to do that than the following:
for (int i = 1; i < 100; i++)
{
    if (i % 2 == 0)
        circle(10, 10, 2);
    if (i % 3 == 0)
        circle(20, 10, 2);
    delay(10);
    cleardevice();
    delay(100);
}
I would do something like this instead:
int t0 = 0, t1 = 0, t = 0, s0 = 0, s1 = 0, render = 1;
for (;;)
{
    if (some stop condition like keyboard hit ...) break;

    // update time, state
    if (t >= t0) { render = 1; s0 = !s0; if (s0) t0 += 10; else t0 += 200; }
    if (t >= t1) { render = 1; s1 = !s1; if (s1) t1 += 10; else t1 += 300; }

    // render
    if (render)
    {
        render = 0;
        cleardevice();
        if (s0) circle(10, 10, 2);
        if (s1) circle(20, 10, 2);
    }

    // update main time
    delay(10); // Sleep(10) would be better but I am not sure it is present in TC++
    t += 10;
    if (t > 10000) // make sure overflow is not an issue
    {
        t  -= 10000;
        t0 -= 10000;
        t1 -= 10000;
    }
}
Beware: the code is untested, as I wrote it directly in here (so there might be syntax errors or typos).
The basic idea is to have one global time t with small enough granularity (10 ms), and, for each object, the time of its next event (t0, t1), the object's state (s0, s1), and its periods (10/200 ms, 10/300 ms).
If the main time reaches an event time, swap the state on/off and update the event time to the next state-swap time.
This way you can have any number of objects; just make sure your main time step is small enough.
The render flag just ensures that the scene is rendered only on change.
To improve timing you can use RDTSC instead of t += 10 and actually measure how much time has passed, with CPU-frequency accuracy.
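For example, here is the same skeleton driven by measured time via std::chrono rather than the assumed t += 10 (std::chrono stands in for raw RDTSC here; the graphics calls from the question are left as comments so the snippet compiles on its own):

#include <chrono>

int main()
{
    using clock = std::chrono::steady_clock;
    auto start = clock::now();

    long long t0 = 0, t1 = 0;    // next event times [ms]
    bool s0 = false, s1 = false; // dot on/off states
    bool render = true;

    for (;;)
    {
        // if (kbhit()) break;   // stop condition, as in the original sketch

        // measure real elapsed time instead of assuming 10 ms per pass
        long long t = std::chrono::duration_cast<std::chrono::milliseconds>(
                          clock::now() - start).count();

        if (t >= t0) { render = true; s0 = !s0; t0 += s0 ? 10 : 200; }
        if (t >= t1) { render = true; s1 = !s1; t1 += s1 ? 10 : 300; }

        if (render)
        {
            render = false;
            // cleardevice();             // redraw scene here
            // if (s0) circle(10,10,2);   // (graphics calls from the question)
            // if (s1) circle(20,10,2);
        }
        // optionally sleep a few ms here so the loop doesn't burn a core
    }
}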
To display the two circles simultaneously in the first round, you have to satisfy both conditions, i%2==0 and i%3==0, at once. You can achieve that by simply changing
for(int i=1;i<100;i++)
to
for(int i=0;i<100;i++)
// ↑ zero here

Why did Chromium implement Time::Now()? What is the benefit?

The code segment is as follows (it comes from Chromium). Why is it written this way?
// Initialize initial_ticks and initial_time
void InitializeClock() {
    initial_ticks = TimeTicks::Now();
    // Initialize initial_time
    initial_time = CurrentWallclockMicroseconds();
}

// static
Time Time::Now() {
    if (initial_time == 0)
        InitializeClock();

    // We implement time using the high-resolution timers so that we can get
    // timeouts which are smaller than 10-15ms. If we just used
    // CurrentWallclockMicroseconds(), we'd have the less-granular timer.
    //
    // To make this work, we initialize the clock (initial_time) and the
    // counter (initial_ctr). To compute the initial time, we can check
    // the number of ticks that have elapsed, and compute the delta.
    //
    // To avoid any drift, we periodically resync the counters to the system
    // clock.
    while (true) {
        TimeTicks ticks = TimeTicks::Now();

        // Calculate the time elapsed since we started our timer
        TimeDelta elapsed = ticks - initial_ticks;

        // Check if enough time has elapsed that we need to resync the clock.
        if (elapsed.InMilliseconds() > kMaxMillisecondsToAvoidDrift) {
            InitializeClock();
            continue;
        }

        return Time(elapsed + Time(initial_time));
    }
}
I assume your answer lies in the comment of the code you pasted:
// We implement time using the high-resolution timers so that we can get
// timeouts which are smaller than 10-15ms. If we just used
// CurrentWallclockMicroseconds(), we'd have the less-granular timer.
So Now() gives a time value of high resolution, which is beneficial when you need better resolution than 10-15 ms, as they state in the comment. For instance, if you want to reschedule a task every 100 ns, you need the higher resolution; or if you want to measure the execution time of something, 10-15 ms is an eternity.
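The underlying pattern is generic: anchor a coarse wall clock once, then add a fine-grained monotonic delta to it on every query. A rough equivalent with std::chrono looks like this (an illustration of the idea only, not Chromium's actual code, and it omits Chromium's periodic resync against drift):

#include <chrono>

// Wall-clock "now" computed from a high-resolution monotonic delta,
// anchored once against the system clock on the first call.
std::chrono::system_clock::time_point HighResNow()
{
    using namespace std::chrono;
    static const auto initial_time  = system_clock::now(); // coarse wall clock
    static const auto initial_ticks = steady_clock::now(); // fine-grained ticks
    return initial_time +
           duration_cast<system_clock::duration>(steady_clock::now() - initial_ticks);
}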

Pausing in OpenGL successively

void keyPress(unsigned char key, int x, int y){
    int i;
    switch (key) {
    case 'f':
        i = 3;
        while (i--) {
            x_pos += 3;
            sleep(100);
            glutPostRedisplay();
        }
    }
}
Above is a code snippet written in C++ using the GLUT library on Windows 7.
The function takes a character key and the mouse coordinates x, y, and on pressing the f key it performs a translation along the x direction in 3 successive steps. Between each step the program should sleep for 100 ms.
We want to move a robot and pause each time it takes a forward step.
We are facing a problem making the program sleep between the 3 steps. What is the problem in the above code snippet?
Disclaimer: the answer by jozxyqk seems better to me. This answer solves the problem in a dirty way.
You are misusing glutPostRedisplay, as stated in this answer. The problem is that glutPostRedisplay marks the current window as needing to be redisplayed, but the redraw only happens once you get back into glutMainLoop. That happens only once, hence only one sleep seems to work.
In fact all three sleeps work, but you get only one redraw, after 300 ms.
To solve this, you have to find another way of redrawing the scene.
while (i--) {
    x_pos += 3;
    sleep(100);
    yourDrawFunction();
}
Assuming that you are working on a UNIX system.
sleep for 100 ms
sleep(100);
The problem here is that you are sleeping for 100 seconds, as you are probably using the sleep function from the <unistd.h> header, which declares sleep() as:
extern unsigned int sleep (unsigned int __seconds);
What you want is probably something like
usleep(100000); //sleeps for 100000 microseconds == 100 ms
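In portable C++11 and later, the same thing can be written with std::this_thread, which avoids the seconds-vs-microseconds confusion entirely:

#include <chrono>
#include <thread>

int main()
{
    std::this_thread::sleep_for(std::chrono::milliseconds(100)); // 100 ms
}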
I believe the issue with your code is that your sleep is messing with GLUT's main loop. The call stack might look something like this:
main() -> glutMainLoop() -> keyPress() -> sleep()
#but can't get to this...
main() -> glutMainLoop() -> display()
Until keyPress() returns, GLUT's main loop cannot continue to render the next frame. It's waiting for the function to return. All glutPostRedisplay() does is say "hey, something's changed, so the image is stale and we need to redraw the next time the main loop iterates". It doesn't actually call display().
You'll have to structure your code so that the main loop can continue as normal, but still include a delay between draws. For example:
1. In keyPress(), set a moving = true state. Let the function return.
2. In the idle() function, sleep if moving, or maybe if you moved last time (really, you might want to look into calculating elapsed time and doing the timing yourself so you don't block the entire program).
3. Again in idle(), increase x_pos and decrease your move count; let the function return, GLUT will draw, then call idle() again and you can sleep/move again.
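GLUT's own timer callback is the standard way to get this kind of delay without blocking the main loop. A sketch of the idea (the step bookkeeping and names below are my assumptions, not the asker's actual code):

#include <GL/glut.h>

int x_pos = 0;
int steps_left = 0;

void display()
{
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the robot at x_pos here ...
    glutSwapBuffers();
}

// Runs about 100 ms after it was scheduled; never blocks the main loop
void stepTimer(int /*value*/)
{
    x_pos += 3;
    glutPostRedisplay();                  // request a redraw; GLUT draws between callbacks
    if (--steps_left > 0)
        glutTimerFunc(100, stepTimer, 0); // schedule the next step
}

void keyPress(unsigned char key, int /*x*/, int /*y*/)
{
    if (key == 'f' && steps_left == 0) {
        steps_left = 3;
        glutTimerFunc(100, stepTimer, 0); // first step in 100 ms
    }
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("robot");
    glutDisplayFunc(display);
    glutKeyboardFunc(keyPress);
    glutMainLoop();
}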

Time based loop and Frame based loop

Trying to understand the concepts of setting a constant speed in a game loop. My head hurts. I read the deWiTTERS page, but I can't see the why/how... when I get it... it slips.
while (true)
{
    player->update();
    player->draw();
}
This will run as fast as possible, depending on how fast the processor is... I get that.
What I don't get is the logic for running at the same speed on all computers. If I am trying to run at 60 fps, then it means the objects move by one frame every 16 ms, yeah? What I don't get is how update() or draw() may be too slow.
deWiTTERS example (I used 60):
const int FRAMES_PER_SECOND = 60;
const int SKIP_TICKS = 1000 / FRAMES_PER_SECOND;

DWORD next_game_tick = GetTickCount();
// GetTickCount() returns the current number of milliseconds
// that have elapsed since the system was started

int sleep_time = 0;
bool game_is_running = true;

while (game_is_running) {
    update_game();
    display_game();

    next_game_tick += SKIP_TICKS;
    sleep_time = next_game_tick - GetTickCount();
    if (sleep_time >= 0) {
        Sleep(sleep_time);
    }
    else {
        // Shit, we are running behind!
    }
}
I don't understand why he gets the current time before the loop starts. And when he increments by SKIP_TICKS, I understand he increments to the next 16 ms interval. But I don't understand this part either:
sleep_time = next_game_tick - GetTickCount();
What does Sleep(sleep_time) mean? Does the processor leave the loop and do something else? How does it achieve running at 60 fps?
In cases where the update_game() and display_game() functions complete in less time than a single frame interval at 60 FPS, the loop ensures that the next frame is not processed until that interval is up, by sleeping (blocking the thread) for the excess frame time. In other words, it ensures the frame rate is capped at 60 FPS and no higher.
The processor does not 'leave the loop'; rather, the thread in which your loop is running is blocked (prevented from continuing execution of your code) until the sleep time is up. Then it continues on to the next frame. In a multi-threaded game engine, sleeping the main game loop's thread like this gives the CPU time to execute code in other threads, which may be managing physics, AI, audio mixing, etc., depending on the setup.
Why is GetTickCount() called before the loop starts?
We know from the comment in your code that GetTickCount() returns the milliseconds since system boot.
So let's say the system has been running for 30 seconds (30,000 ms) when you start your program,
and let's say that we didn't call GetTickCount() before entering the loop,
but instead initialized next_game_tick to 0.
We do the update and draw calls (say they take 6 ms), and then:
next_game_tick += SKIP_TICKS; // next_game_tick is now 16
sleep_time = next_game_tick - GetTickCount();
// GetTickCount() returns 30000!
// So sleep_time is now 16 - 30000 = -29984 !!!
Since we (sensibly) only sleep when sleep_time is positive,
the game loop would run as fast as possible (potentially much faster than 60 FPS),
which is not what you want.
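For comparison, here is the same fixed-rate loop written against std::chrono::steady_clock, which sidesteps GetTickCount()'s boot-time epoch entirely (update_game() and display_game() are placeholders, as in the original):

#include <chrono>
#include <thread>

void update_game()  { /* game logic */ }
void display_game() { /* rendering */ }

int main()
{
    using clock = std::chrono::steady_clock;
    const auto skip = std::chrono::milliseconds(1000 / 60); // ~16 ms per frame

    auto next_game_tick = clock::now(); // "now", like calling GetTickCount() before the loop
    bool game_is_running = true;

    while (game_is_running)
    {
        update_game();
        display_game();

        next_game_tick += skip;
        if (clock::now() < next_game_tick)
            std::this_thread::sleep_until(next_game_tick); // we finished early: wait
        // else: we are running behind!
    }
}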