Pausing in OpenGL successively - C++

void keyPress(unsigned char key, int x, int y) {
    int i;
    switch (key) {
    case 'f':
        i = 3;
        while (i--) {
            x_pos += 3;
            sleep(100);
            glutPostRedisplay();
        }
    }
}
Above is a code snippet written in C++ using the GLUT library on Windows 7.
This function takes a character key and the mouse coordinates x, y, and on pressing the f key performs a translation along the x-direction in 3 successive steps. Between each step the program should sleep for 100 ms.
We want to move a robot, and pause between each of its forward steps.
We are facing a problem in making the program sleep between the 3 steps. What is the problem in the above code snippet?

Disclaimer: The answer of jozxyqk seems better to me. This answer solves the problem in a dirty way.
You are misusing glutPostRedisplay, as stated in this answer. The problem is that glutPostRedisplay only marks the current window as needing to be redisplayed; the actual redraw happens once control gets back into glutMainLoop. That happens only once, after your function returns, hence only one sleep seems to work.
In fact all three sleeps work, but you get only one redraw, after 300 ms.
To solve this, you have to find another way of redrawing the scene:
while (i--) {
    x_pos += 3;
    sleep(100);
    yourDrawFunction();
}
Assuming that you are working on a UNIX system.
sleep for 100 ms
sleep(100);
The problem here is that you are sleeping for 100 seconds, because you are probably using the sleep function from the <unistd.h> header, which declares sleep() as:
extern unsigned int sleep (unsigned int __seconds);
What you want is probably something like
usleep(100000); //sleeps for 100000 microseconds == 100 ms
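Note that on Windows (which the question mentions) unistd.h and usleep are not available. A portable C++11 alternative, as a minimal self-contained sketch:

#include <chrono>
#include <thread>

int main() {
    // Sleeps for the intended 100 ms on any platform, avoiding the
    // seconds-vs-milliseconds confusion of sleep()/usleep().
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
}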

I believe the issue with your code is that your sleep is blocking GLUT's main loop. The call stack might look something like this:
main() -> glutMainLoop() -> keyPress() -> sleep()
#but can't get to this...
main() -> glutMainLoop() -> display()
Until keyPress() returns, glut's main loop cannot continue to render the next frame. It's waiting for the function to return. All glutPostRedisplay() does is say "hey, something's changed so the image is stale and we need to redraw the next time the main loop iterates". It doesn't actually call display().
You'll have to structure your code such that the main loop can continue as normal, but still include a delay between drawing. For example:
In keyPress(), set a moving = true state and let the function return.
In the idle() function, call sleep() if moving, or if you moved last time (really you might want to look into calculating elapsed time and doing the timing yourself, so you don't block the entire program).
Again in idle(), increase x_pos and decrease your move count, then let the function return; glut will draw, call idle() again, and you can sleep/move again (see the sketch below).
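A minimal sketch of that structure using glutTimerFunc, which suits fixed delays better than idle() because nothing blocks or busy-waits; x_pos and steps_left are stand-ins for the question's state:

#include <GL/glut.h>

float x_pos = 0.0f;   // stand-in for the question's position variable
int steps_left = 0;   // how many 3-unit steps remain

void stepTimer(int /*value*/) {
    if (steps_left > 0) {
        x_pos += 3;                        // advance one step
        --steps_left;
        glutPostRedisplay();               // redraw happens back in the main loop
        glutTimerFunc(100, stepTimer, 0);  // schedule the next step in 100 ms
    }
}

void keyPress(unsigned char key, int x, int y) {
    if (key == 'f' && steps_left == 0) {
        steps_left = 3;
        glutTimerFunc(100, stepTimer, 0);  // kick off the stepping
    }
}

Each intermediate step gets drawn because control returns to glutMainLoop between timer callbacks.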

Related

Run two delays at once C++

I want to make a program in which two dots blink (with a 10 ms flash) simultaneously, but one with a delay of 200 ms and the other with a delay of 300 ms. How can I make the two dots start together from the beginning? Is there a better way to do that than the following:
for (int i = 1; i < 100; i++)
{
    if (i % 2 == 0)
        circle(10, 10, 2);
    if (i % 3 == 0)
        circle(20, 10, 2);
    delay(10);
    cleardevice();
    delay(100);
}
I would do something like this instead:
int t0=0, t1=0, t=0, s0=0, s1=0, render=1;
for (;;)
{
    if (some stop condition like keyboard hit ...) break;
    // update time, state
    if (t>=t0) { render=1; s0=!s0; if (s0) t0+=10; else t0+=200; }
    if (t>=t1) { render=1; s1=!s1; if (s1) t1+=10; else t1+=300; }
    // render
    if (render)
    {
        render=0;
        cleardevice();
        if (s0) circle(10,10,2);
        if (s1) circle(20,10,2);
    }
    // update main time
    delay(10); // Sleep(10) would be better but I am not sure it is present in TC++
    t+=10;
    if (t>10000) // make sure overflow is not an issue
    {
        t -=10000;
        t0-=10000;
        t1-=10000;
    }
}
Beware, the code is untested as I wrote it directly in here (so there might be syntax errors or typos).
The basic idea is to have one global time t with small enough granularity (10 ms), and for each object a time of next event (t0, t1), a state (s0, s1), and on/off periods (10/200 and 10/300).
When the main time reaches an object's event time, swap its state on/off and advance the event time to the next state-swap time.
This way you can have any number of objects; just make sure your main time step is small enough.
The render flag just ensures that the scene is rendered on change only.
To improve the timing you can use RDTSC instead of t += 10 and actually measure how much time has passed, with CPU-frequency accuracy.
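If RDTSC is too low-level, a portable way to get the same effect (a sketch using std::chrono rather than the Turbo C++ API the answer targets):

#include <chrono>

// Milliseconds elapsed since the first call; a substitute for
// advancing t by a fixed 10 per iteration.
int elapsed_ms() {
    using clock = std::chrono::steady_clock;
    static const clock::time_point start = clock::now();
    return (int)std::chrono::duration_cast<std::chrono::milliseconds>(
        clock::now() - start).count();
}

Inside the loop you would then set t = elapsed_ms(); instead of t += 10;, so any inaccuracy in delay() no longer accumulates (the wrap-around bookkeeping would need adjusting accordingly).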
To display the two circles simultaneously in the first round, you have to satisfy both conditions i%2==0 and i%3==0 at once. You can achieve it by simply changing
for(int i=1;i<100;i++)
to
for(int i=0;i<100;i++)
// ↑ zero here

Precise way to reduce CPU usage in an infinite loop

This is my code, using QueryPerformanceCounter as a timer.
// timer.h
class timer {
private:
    ...
public:
    ...
    double get();  // returns elapsed time in seconds
    void start();
};

// a.cpp
void loop() {
    timer t;
    double tick;
    double diff = 0;  // surplus seconds
    t.start();
    while (running) {
        tick = t.get();
        if (tick >= 1.0 - diff) {
            t.start();
            // things that should be run exactly every second
            ...
        }
        Sleep(880);
    }
}
Without Sleep, this loop would spin indefinitely, calling t.get() every iteration, which causes high CPU usage. For that reason, I make it sleep for about 880 milliseconds so that it doesn't call t.get() when it's not necessary.
As I said above, I'm currently using Sleep to do the trick, but what I'm worried about is the accuracy of Sleep. I've read somewhere that the actual milliseconds the program pauses may vary - 20 to 50 ms - which is the reason I set the parameter to 880. I want to reduce the CPU usage as much as possible; I want to, if possible, pause more than 990 milliseconds EDIT: and yet less than 1000 milliseconds between every loop iteration. What would be the best way to go?
I don't get why you are calling t.start() twice (it resets the clock?), but I would like to propose a kind of solution for the Sleep inaccuracy. Let's take a look at the content of the while (running) loop and follow the algorithm:
double future, remaining, sleep_precision = 0.05;
while (running) {
    future = t.get() + 1.0;
    things_that_should_be_run_exactly_every_second();
    // the loop in case of a spurious wakeup
    for (;;) {
        remaining = future - t.get();
        if (remaining < sleep_precision) break;
        Sleep((DWORD)(remaining * 1000));  // t.get() is in seconds, Sleep takes milliseconds
    }
    // next, do the spin-lock for at most sleep_precision
    while (t.get() < future);
}
The value of sleep_precision should be set empirically - the OSes I know can't report it to you.
Next, there are some alternatives to the sleeping mechanism that may suit your needs better - Is there an alternative for sleep() in C?
If you want to pause more than 990 milliseconds, write a sleep for 991 milliseconds. Your thread is guaranteed to be asleep for at least that long. It won't be less, but it could be multiples of 20-50 ms more (depending on the resolution of your OS's time slicing, and on the cost of context switching).
However, this will not give you something running "exactly every second". There is just no way to achieve that on a time-shared operating system. You'll have to program closer to the metal, or rely on an interrupt from a PPS source and just pray your OS lets you run your entire loop iteration in one shot. Or, I suppose, write something to run in kernel mode…?
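On Windows specifically, the scheduling granularity behind those extra 20-50 ms can often be tightened with timeBeginPeriod from the multimedia timer API. A sketch (link with winmm.lib; the resolution you actually get is still system-dependent):

#include <windows.h>

int main() {
    timeBeginPeriod(1);  // request 1 ms scheduler granularity
    Sleep(991);          // typically wakes within a few ms of the target now
    timeEndPeriod(1);    // restore the previous granularity
}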

time delay inside recursive function in opengl

My issue is that my function does the job so quickly that I don't see the progress on the screen. Here is my display function:
void showMaze(const Maze::Maze &m) {
    glPushMatrix();
    glTranslatef(-1, 1, 0);
    for (int i = 0; i < m.num_Y; i++)
    {
        for (int j = 0; j < m.num_X; j++)
        {
            char c = m.maze[i][j];
            if (c == '1') {
                glColor3f(255, 255, 0);
                glRectf(0.05, -0.05, -0.05, 0.05);
            }
            if (c == '0') {
                glColor3f(20, 60, 60);
                glRectf(0.05, -0.05, -0.05, 0.05);
            }
            glTranslatef(0.1, 0, 0);
        }
        glTranslatef(-(m.num_X * 0.1), -0.1, 0);
    }
    glPopMatrix();
}
The recursive function:
bool findPath(Maze *M, char m) {
    showMaze(*M);
    glutPostRedisplay();
    if (M->isFinish())
        return true;
    if (m != 's' && M->north()) {
        update(M, 'n');
        if (isNew(M) && findPath(M, 'n') && M->isFinish()) {
            return true;
        }
        else {
            M->south();
        }
    }
    // .....
    else {
        return false;
    }
}

void render()
{
    glClear(GL_COLOR_BUFFER_BIT);
    findPath(N, 'z');
    glutSwapBuffers();
}
In main:
glutDisplayFunc( render );
So my question is: how do I get it to wait a few seconds (so that I can see the progress) whenever findPath is called? I've tried GLUT_ELAPSED_TIME, sleep, and GetTickCount, but none of them do the job correctly.
Edit 1:
When I put something like sleep() right after calling showMaze(), nothing is displayed for a few seconds, and then the final screen shows. I'm not an expert in C++, but I suppose showMaze() should be executed first and then the sleep() function, or does C++ wait until the whole function has executed before displaying results?
Edit 2:
I found a solution to the problem: I record the X and Y of the maze whenever they change and store them in two vectors, and in my drawing function I first display the empty maze, then I slowly add the changed X and Y.
Nothing will be visible on the screen unless you show what you are rendering by swapping buffers (unless you're rendering to the front buffer, but that's iffy). So you can sleep for however long you like in the middle of your recursive function, but you're not going to see anything until you exit that callstack. And by then, you've drawn over everything.
So instead of merely sleeping, you need to use glutSwapBuffers in your recursive call stack when you want something to be visible.
However, this is a bad idea. The render call should not do things like sleep. The system needs your render to actually render, because the screen may need to be updated for a variety of reasons (another window revealing more of your window, etc). You should not have your render loop suspend itself like this.
Instead, what you ought to be doing is executing one segment per render loop execution, relying on glutPostRedisplay to make sure the loop keeps getting called. And you should either base your animation on the time delta between loop executions, or you should use a sleep timer to make sure that at least X time always passes between cycles.
So you need to unroll your recursive call into an iterative process.
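A sketch of what that unrolling might look like: the recursion's bookkeeping moves into an explicit stack, one search step is consumed per frame, and glutPostRedisplay keeps the loop going. Step, pending, and solving are hypothetical stand-ins for whatever findPath() keeps on its call stack; showMaze and N are from the question:

#include <GL/glut.h>
#include <stack>

struct Step { int x, y; char dir; };  // hypothetical per-step search state

std::stack<Step> pending;  // replaces the recursive call stack
bool solving = false;

void render() {
    glClear(GL_COLOR_BUFFER_BIT);
    if (solving && !pending.empty()) {
        Step s = pending.top();
        pending.pop();
        // ... advance the search by exactly one step, pushing neighbours ...
    }
    showMaze(*N);              // draw the maze's current state
    glutSwapBuffers();
    if (solving && !pending.empty())
        glutPostRedisplay();   // request the next frame, i.e. the next step
}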
Alternatively, if you have access to Boost.Context or Boost.Coroutine, you could use that to handle things. That way you could keep your recursive structure: when you have rendered everything you want to display, you simply suspend your coroutine back to render, which will swap buffers and return.

What's the simplest way of adjusting frame rate in C++?

I have a while loop which displays things in a window using OpenGL, but the animation is too fast compared to how it runs on other computers, so I need something in the loop that allows a display only 1/40 of a second after the previous display. How do I do that? (I'm a C++ noob.)
You need to check the time at the beginning of your loop, check the time again at the end of the loop after you've finished all of your rendering and update logic, and then Sleep() for the difference between the elapsed time and the target frame time (25 ms for 40 fps).
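A minimal sketch of that pattern with standard C++11 facilities; draw_frame() is a hypothetical stand-in for the rendering and update logic:

#include <chrono>
#include <thread>

void draw_frame();  // hypothetical: rendering + update logic goes here

void run_loop() {
    using clock = std::chrono::steady_clock;
    const auto frame_time = std::chrono::milliseconds(25);  // 40 fps target
    while (true) {
        auto start = clock::now();
        draw_frame();
        auto elapsed = clock::now() - start;
        if (elapsed < frame_time)  // sleep only for the time left in this frame
            std::this_thread::sleep_for(frame_time - elapsed);
    }
}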
This is some code I used in C++ with the SDL library. Basically you need to have a function to start a timer at the start of your loop (StartFpsTimer()) and a function to wait enough time till the next frame is due based on the constant frame rate that you want to have (WaitTillNextFrame()).
The m_oTimer object is a simple timer object that you can start, stop, pause.
GAME_ENGINE_FPS is the frame rate that you would like to have.
// Sets the timer for the main loop
void StartFpsTimer()
{
    m_oTimer.Start();
}

// Waits till the next frame is due (to call the loop at regular intervals)
void WaitTillNextFrame()
{
    if (this->m_oTimer.GetTicks() < 1000.0 / GAME_ENGINE_FPS) {
        delay((1000.0 / GAME_ENGINE_FPS) - m_oTimer.GetTicks());
    }
}

while (this->IsRunning())
{
    // Starts the fps timer
    this->StartFpsTimer();
    // Input
    this->HandleEvents();
    // Logic
    this->Update();
    // Rendering
    this->Draw();
    // Wait till the next frame
    this->WaitTillNextFrame();
}

delay loop output in C++

I have a while loop that runs inside a do-while loop. I need the while loop to run exactly every second, no faster, no slower, but I'm not sure how I would do that. This is the loop, off in its own function. I have heard of the sleep() function, but I have also heard that it is not very accurate.
int min5()
{
    int second = 00;
    int minute = 0;
    const int ZERO = 00;
    do {
        while (second <= 59) {
            if (minute == 5) break;
            second += 1;
            if (second == 60) minute += 1;
            if (second == 60) second = ZERO;
            if (second < 60) cout << "Current Time> " << minute << " : " << second << " \n";
        }
    } while (minute <= 5);
}
The best accuracy you can achieve is by using Operating System (OS) functions. You need to find the API that also has a callback function. The callback function is a function you write that the OS will call when the timer has expired.
Be aware that the OS may lose timing precision due to other tasks and activities that are running while your program is executing.
If you want a portable solution, you shouldn't expect high-precision timing. Usually, you only get that with a platform-dependent solution.
A portable (albeit not very CPU-efficient, nor particularly elegant) solution might make use of a function similar to this:
#include <ctime>

void wait_until_next_second()
{
    time_t before = time(0);
    while (difftime(time(0), before) < 1);  // busy-wait until the clock ticks over
}
You'd then use this in your function like this:
int min5()
{
    wait_until_next_second(); // synchronization (optional), so that the first
                              // subsequent call will not take less than 1 sec.
    ...
    do
    {
        wait_until_next_second(); // waits approx. one second
        while (...)
        {
            ...
        }
    } while (...);
}
Some further comments on your code:
Your code gets into an endless loop once minute reaches the value 5.
Are you aware that 00 denotes an octal (radix 8) number (due to the leading zero)? It doesn't matter in this case, but be careful with numbers such as 017. This is decimal 15, not 17!
You could incorporate the second += 1 right into the while loop's condition: while (second++ <= 59) ...
I think in this case, it would be better to insert endl into the cout stream, since that will flush it, while inserting "\n" won't flush the stream. It doesn't truly matter here, but your intent seems to be to always see the current time on cout; if you don't flush the stream, you're not actually guaranteed to see the time message immediately.
As someone else posted, your OS may provide some kind of alarm or timer functionality. You should try to use that kind of thing rather than coding your own polling loop. Polling the time means you need to be context-switched in every second, which keeps your code running when the system could be doing other things. In this case you interrupt someone else 300 times just to ask "are we done yet?".
Also, you should never make assumptions about the duration of a sleep; even on a real-time OS this would be unsafe. You should always ask the real-time clock or tick counter how much time has elapsed on each pass, because otherwise any errors accumulate and you get less and less accurate over time. Even on a real-time system that could sleep accurately for exactly one second, your code takes some time to run, so the timing error would still accumulate on each pass through the loop.
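One portable way to keep that error from accumulating, as a C++11 sketch: derive every deadline from an absolute clock reading instead of sleeping a fixed amount per pass, so one pass's oversleep is not added to the next:

#include <chrono>
#include <iostream>
#include <thread>

int main() {
    using namespace std::chrono;
    auto next_tick = steady_clock::now();
    for (int i = 0; i < 5; ++i) {
        next_tick += seconds(1);                 // absolute deadline for this tick
        std::this_thread::sleep_until(next_tick);
        std::cout << "tick " << i << '\n';       // per-second work goes here
    }
}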
In Windows, for example, there is the possibility to create a waitable timer object.
If that's your operating system, check the documentation, for example Waitable Timer Objects.
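A minimal sketch of a periodic waitable timer (error handling omitted):

#include <windows.h>

int main() {
    HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);  // auto-reset timer
    LARGE_INTEGER due;
    due.QuadPart = -10000000LL;  // first fire after 1 s (100 ns units; negative = relative)
    SetWaitableTimer(timer, &due, 1000, NULL, NULL, FALSE);  // then every 1000 ms
    for (int i = 0; i < 5; ++i) {
        WaitForSingleObject(timer, INFINITE);  // blocks until the next tick
        // once-per-second work goes here
    }
    CloseHandle(timer);
}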
From the code you presented, it looks like what you are trying to do can be done much more easily with sleep. It doesn't make sense to guarantee that your loop body is executed exactly every second. Instead, make it execute ten times a second and check whether the time elapsed since you last took action is more than a second. If not, do nothing; if yes, take action (print your message, increment variables, etc.), store the time of the last action, and loop again.
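That polling pattern, as a sketch in portable C++11 (the Windows-specific Sleep would be the equivalent of sleep_for here):

#include <chrono>
#include <iostream>
#include <thread>

int main() {
    using namespace std::chrono;
    auto last_action = steady_clock::now();
    for (;;) {
        std::this_thread::sleep_for(milliseconds(100));  // poll ~10x per second
        auto now = steady_clock::now();
        if (now - last_action >= seconds(1)) {
            last_action = now;
            std::cout << "one second passed\n";  // take the per-second action
        }
    }
}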
Sleep(1000);
http://msdn.microsoft.com/en-us/library/ms686298(VS.85).aspx