Time delay inside recursive function in OpenGL - C++

My issue is that my function does the job so quickly that I don't see the progress on the screen. Here is my display function:
void showMaze(const Maze::Maze &m){
    glPushMatrix();
    glTranslatef(-1, 1, 0);
    for (int i = 0; i < m.num_Y; i++)
    {
        for (int j = 0; j < m.num_X; j++)
        {
            char c = m.maze[i][j];
            if (c == '1'){
                glColor3f(255, 255, 0);
                glRectf(0.05, -0.05, -0.05, 0.05);
            }
            if (c == '0'){
                glColor3f(20, 60, 60);
                glRectf(0.05, -0.05, -0.05, 0.05);
            }
            glTranslatef(0.1, 0, 0);
        }
        glTranslatef(-(m.num_X * 0.1), -0.1, 0);
    }
    glPopMatrix();
}
The recursive function:
bool findPath(Maze* M, char m){
    showMaze(*M);
    glutPostRedisplay();
    if (M->isFinish())
        return true;
    if (m != 's' && M->north()){
        update(M, 'n');
        if (isNew(M) && findPath(M, 'n') && M->isFinish()){
            return true;
        }
        else{
            M->south();
        }
    }
    // .....
    else{
        return false;
    }
}
void render()
{
    glClear(GL_COLOR_BUFFER_BIT);
    findPath(N, 'z');
    glutSwapBuffers();
}
In main:
glutDisplayFunc( render );
So my question is: how do I wait a few seconds (so that I can see the progress) whenever findPath is called? I've tried glutGet(GLUT_ELAPSED_TIME), sleep and GetTickCount, but none of them do the job correctly.
Edit 1:
When I put something like sleep() right after calling showMaze(), nothing is displayed for a few seconds and then the final screen shows. I am not an expert in C++, but I suppose showMaze() should be executed first and then the sleep() function. Or does C++ wait until the whole function has executed before displaying results?
Edit 2:
I found a solution for the problem: I took the X and Y of the maze whenever they change and stored them in two vectors, and in my drawing function I first display the empty maze and then slowly add the changed X and Y.
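For illustration, a minimal sketch of the Edit 2 idea, assuming the visited cells were recorded into two vectors while the search ran; the names visitedX, visitedY, revealCount and drawCell are hypothetical and not from the original code:

std::vector<int> visitedX, visitedY;   // filled while findPath runs
std::size_t revealCount = 0;

void render()
{
    glClear(GL_COLOR_BUFFER_BIT);
    showMaze(*N);                                  // draw the empty maze first
    for (std::size_t k = 0; k < revealCount && k < visitedX.size(); ++k)
        drawCell(visitedX[k], visitedY[k]);        // hypothetical cell-drawing helper
    if (revealCount < visitedX.size())
        ++revealCount;                             // reveal one more cell each frame
    glutSwapBuffers();
    glutPostRedisplay();
}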

Nothing will be visible on the screen unless you show what you are rendering by swapping buffers (unless you're rendering to the front buffer, but that's iffy). So you can sleep for however long you like in the middle of your recursive function, but you're not going to see anything until you exit that callstack. And by then, you've drawn over everything.
So instead of merely sleeping, you need to use glutSwapBuffers in your recursive call stack when you want something to be visible.
However, this is a bad idea. The render call should not do things like sleep. The system needs your render to actually render, because the screen may need to be updated for a variety of reasons (another window revealing more of your window, etc). You should not have your render loop suspend itself like this.
Instead, what you ought to be doing is executing one segment per render loop execution, relying on glutPostRedisplay to make sure the loop keeps getting called. And you should either base your animation on the time delta between loop executions, or you should use a sleep timer to make sure that at least X time always passes between cycles.
So you need to unroll your recursive call into an iterative process.
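A minimal sketch of what that unrolling could look like, assuming exactly one search step is executed per frame; Position, markCurrent and openNeighbours are hypothetical helpers and are not part of the Maze class shown in the question:

#include <stack>

std::stack<Position> pending;   // cells still to explore
bool solved = false;

void stepSearch(Maze* M)
{
    if (solved || pending.empty())
        return;
    Position p = pending.top();
    pending.pop();
    M->markCurrent(p);                      // so showMaze() can draw the progress
    if (M->isFinish())
    {
        solved = true;
        return;
    }
    for (const Position& n : M->openNeighbours(p))
        pending.push(n);                    // a real version would skip visited cells
}

void render()
{
    glClear(GL_COLOR_BUFFER_BIT);
    stepSearch(N);                          // exactly one search step per frame
    showMaze(*N);
    glutSwapBuffers();
    glutPostRedisplay();                    // keep the loop running
}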
Alternatively, if you have access to Boost.Context or Boost.Coroutine, you could use that to handle things. That way, you could keep your recursive structure. When you have rendered everything you want to display, you simply suspend your coroutine back to render, which will swap buffers and return.


glfwSwapBuffers slow (>3s)

I think the reason for this question is clear: I want the main loop to NOT lock up while a compute shader is processing larger amounts of data. I could try to separate the data into smaller snippets, but if the computations were done on the CPU, I would simply start a thread and everything would run nicely and smoothly. Although I would of course have to wait until the calculation thread delivers new data to update the screen, the GUI (ImGUI) would not lock up...
I have written a program that does some calculations on a compute shader, and the returned data is then displayed. This works perfectly, except that the program execution is blocked while the shader is running (see code below), and depending on the parameters this can take a while:
void CalculateSomething(GLfloat* Result)
{
    // load some uniform variables
    glDispatchCompute(X, Y, 1);
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
    GLfloat* mapped = (GLfloat*)(glMapBuffer(GL_SHADER_STORAGE_BUFFER, GL_READ_ONLY));
    memcpy(Result, mapped, sizeof(GLfloat) * X * Y);
    glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
}
int main()
{
    // Initialization stuff
    // ...
    while (glfwWindowShouldClose(Window) == 0)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glfwPollEvents();
        glfwSwapInterval(2); // Doesn't matter what I put here
        CalculateSomething(Result);
        Render(Result);
        glfwSwapBuffers(Window.WindowHandle);
    }
}
To keep the main loop running while the compute shader is calculating, I changed CalculateSomething to something like this:
void CalculateSomething(GLfloat* Result)
{
    // load some uniform variables
    glDispatchCompute(X, Y, 1);
    GPU_sync = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
}

bool GPU_busy()
{
    GLint GPU_status;
    if (GPU_sync == NULL)
        return false;
    else
    {
        glGetSynciv(GPU_sync, GL_SYNC_STATUS, 1, nullptr, &GPU_status);
        return GPU_status == GL_UNSIGNALED;
    }
}
These two functions are part of a class, and it would get a little messy and complicated if I had to post all of that here (if more code is needed, tell me). So every loop, when the class is told to do the computation, it first checks whether the GPU is busy. If it's done, the result is copied to CPU memory (or a calculation is started); otherwise it returns to main without doing anything else. Anyway, this approach works in that it produces the right result, but my main loop is still blocked.
Doing some timing revealed that CalculateSomething, Render (and everything else) run fast (as I would expect them to). But now glfwSwapBuffers takes >3000 ms (depending on how long the calculations of the compute shader take).
Shouldn't it be possible to swap buffers while a compute shader is running? Rendering the result seems to work fine and without delay (as long as the compute shader is not done yet, the old result should get rendered). Or am I missing something here (do queued OpenGL calls get processed before glfwSwapBuffers does something)?
Edit:
I'm not sure why this question got closed or what additional information is needed (maybe other than the OS, which would be Windows). As for the "desired behavior": well, I'd like the glfwSwapBuffers call not to block my main loop. For additional information, please ask...
As pointed out by Erdal Küçük, an implicit call to glFlush might cause latency. I put this call before glfwSwapBuffers for testing purposes and timed it - no latency here...
I'm sure I can't be the only one who ever ran into this problem. Maybe someone could try to reproduce it? Simply put a compute shader in the main loop that takes a few seconds to do its calculations. I have read somewhere that similar problems occur especially when calling glMapBuffer. This seems to be an issue with the GPU driver (mine would be an integrated Intel GPU). But nowhere have I read about latencies above 200 ms...
I solved a similar issue with GL_PIXEL_PACK_BUFFER, effectively used as an offscreen compute shader. The approach with fences is correct, but you then need a separate function that checks the status of the fence using glGetSynciv to read GL_SYNC_STATUS. The solution (admittedly in Java) can be found here.
An explanation of why this is necessary can be found in Nick Clark's comment:
Every call in OpenGL is asynchronous, except for the frame buffer swap, which stalls the calling thread until all submitted functions have been executed. Thus, the reason why glfwSwapBuffers seems to take so long.
The relevant portion from the solution is:
public void finishHMRead( int pboIndex ){
    int[] length = new int[1];
    int[] status = new int[1];
    GLES30.glGetSynciv( hmReadFences[ pboIndex ], GLES30.GL_SYNC_STATUS, 1, length, 0, status, 0 );
    int signalStatus = status[0];
    int glSignaled = GLES30.GL_SIGNALED;
    if( signalStatus == glSignaled ){
        // Ready a temporary ByteBuffer for mapping (we'll unmap the pixel buffer and lose this) and a permanent ByteBuffer
        ByteBuffer pixelBuffer;
        texLayerByteBuffers[ pboIndex ] = ByteBuffer.allocate( texWH * texWH );

        // map data to a bytebuffer
        GLES30.glBindBuffer( GLES30.GL_PIXEL_PACK_BUFFER, pbos[ pboIndex ] );
        pixelBuffer = ( ByteBuffer ) GLES30.glMapBufferRange( GLES30.GL_PIXEL_PACK_BUFFER, 0, texWH * texWH * 1, GLES30.GL_MAP_READ_BIT );

        // Copy to the long term ByteBuffer
        pixelBuffer.rewind(); // copy from the beginning
        texLayerByteBuffers[ pboIndex ].put( pixelBuffer );

        // Unmap and unbind the currently bound pixel buffer
        GLES30.glUnmapBuffer( GLES30.GL_PIXEL_PACK_BUFFER );
        GLES30.glBindBuffer( GLES30.GL_PIXEL_PACK_BUFFER, 0 );
        Log.i( "myTag", "Finished copy for pbo data for " + pboIndex + " at: " + (System.currentTimeMillis() - initSphereStart) );
        acknowledgeHMReadComplete();
    } else {
        // If it wasn't done, resubmit for another check in the next render update cycle
        RefMethodwArgs finishHmRead = new RefMethodwArgs( this, "finishHMRead", new Object[]{ pboIndex } );
        UpdateList.getRef().addRenderUpdate( finishHmRead );
    }
}
Basically, fire off the compute shader, then wait for the glGetSynciv check of GL_SYNC_STATUS to equal GL_SIGNALED, then rebind the GL_SHADER_STORAGE_BUFFER and perform the glMapBuffer operation.
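For reference, a rough desktop-OpenGL sketch of the same pattern in C++, assuming a compute shader writing into an already-bound SSBO; the names gComputeFence, StartCompute and CollectResult are placeholders, not part of the question's class:

GLsync gComputeFence = nullptr;

void StartCompute(GLuint X, GLuint Y)
{
    // load some uniform variables, then kick off the work without waiting
    glDispatchCompute(X, Y, 1);
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
    gComputeFence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
}

// Call once per frame; returns true once the result has been copied out.
bool CollectResult(GLfloat* Result, size_t count)
{
    if (gComputeFence == nullptr)
        return false;                       // nothing in flight

    GLint status = GL_UNSIGNALED;
    glGetSynciv(gComputeFence, GL_SYNC_STATUS, 1, nullptr, &status);
    if (status != GL_SIGNALED)
        return false;                       // GPU still busy, try again next frame

    GLfloat* mapped = (GLfloat*)glMapBufferRange(
        GL_SHADER_STORAGE_BUFFER, 0, count * sizeof(GLfloat), GL_MAP_READ_BIT);
    memcpy(Result, mapped, count * sizeof(GLfloat));
    glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);

    glDeleteSync(gComputeFence);
    gComputeFence = nullptr;
    return true;
}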

C++ -- How to break out of a for loop on a different thread (non-blocking I/O related)?

I currently have the following code:
void isItDone() {
    char ans;
    while (true) {
        std::cout << "done? (y/n) ";
        std::cin >> ans;
        if (ans == 'y') break;
    }
}

int main() {
    // some interactive plotting
    // ...

    std::thread t(isItDone);
    t.join();

    // get info from interaction with plot
    // ...

    return 0;
}
There are three things I want to do:
Draw some plots and make some graphical cuts out of them
Confirm that cuts are made
Retrieve and save the cuts
The problem that I have at the moment is that step 2 does not allow for plots to be drawn.
Naively, I expected the above code to first draw the plot and then, on a different thread, keep asking if the user is done. Once step 2 is done (i.e. the cuts are made), go back to the original thread and finish the main function.
However, what I found is that even though std::cin is in a different thread, it is still blocking the plot drawing process. So, at the moment, no plots are drawn, and I have to break out of step 2 first before anything else can happen.
Please let me know if there is a way to fix this, or if I should approach this differently.
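One hedged illustration of why this blocks, and a possible alternative: std::thread::join() waits for the thread to finish, so calling it right after construction stops main until the user types 'y'. A common workaround is to have the input thread set an atomic flag that the plotting loop polls. The sketch below assumes the plotting can be driven one iteration at a time, which may not match the actual plotting library:

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<bool> done{false};

void isItDone() {
    char ans;
    while (true) {
        std::cout << "done? (y/n) ";
        std::cin >> ans;
        if (ans == 'y') { done = true; break; }
    }
}

int main() {
    std::thread t(isItDone);

    while (!done) {
        // hypothetical: run one iteration of the interactive plotting here
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }

    t.join();   // the input thread has finished, so this no longer blocks
    // get info from interaction with plot
    // ...
    return 0;
}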

How to delay a function that is called in a while loop without delaying the loop

Imagine I have something like this:
void color(int a)
{
    if (a > 10)
    {
        return;
    }
    square[a].red();
    sleep(1second);
    color(a + 1);
}

while (programIsRunning())
{
    color(1);
    updateProgram();
}
but with something that actually requires a recursive function.
How can I call this recursive function so that it colors the squares one by one? On its own it is too fast, and since the program is updated every frame, the squares get colored instantly when I want them to be colored one by one (with a delay).
sleep() will cause the current thread to stop. That makes it a bad candidate for human-perceptible delays from the main thread.
You "could" have a thread that only handles that process, but threads are expensive, and creating/managing one just to color squares in a sequence is completely overkill.
Instead, you could do something along the lines of: every time the program updates, check whether it is the appropriate time to color the next square.
const std::chrono::duration<double> color_delay{0.1};
auto last_color_time = std::chrono::steady_clock::now();
bool coloring_squares = true;

while (programIsRunning()) {
    if (coloring_squares) {
        auto now = std::chrono::steady_clock::now();
        // This will "catch up" as needed.
        while (now - last_color_time >= color_delay) {
            last_color_time += color_delay;
            coloring_squares = color_next_square();
        }
    }
    updateProgram();
}
How color_next_square() works is up to you. You could possibly "pre-bake" a list of squares to color using your recursive function, and iterate through it.
Also, obviously, this example just uses the code you posted. You'll want to organise all this as part of updateProgram(), possibly in some sort of class SquareAnim {}; stateful wrapper.
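For illustration, one possible shape for color_next_square() that follows the "pre-bake" suggestion above; the baked_order vector and the bake_color_order() helper are hypothetical:

std::vector<std::size_t> baked_order = bake_color_order(); // e.g. filled once by the recursive function
std::size_t next_index = 0;

bool color_next_square()
{
    if (next_index >= baked_order.size())
        return false;                      // nothing left to color, animation done
    square[baked_order[next_index]].red();
    ++next_index;
    return true;
}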
N.B. If your program has little jitter, i.e. it has consistent time between updates, and the delay is low, using the following instead can lead to a slightly smoother animation:
if (now - last_color_time >= color_delay) {
    last_color_time = now;
    // ...
}

Run two delays at once C++

I want to make a program in which two dots blink (with a break of 10 ms) simultaneously, but one with a delay of 200 ms and the other with a delay of 300 ms. How can I make these two dots blink simultaneously from the beginning? Is there a better way to do that than the following:
for (int i = 1; i < 100; i++)
{
    if (i % 2 == 0)
        circle(10, 10, 2);
    if (i % 3 == 0)
        circle(20, 10, 2);
    delay(10);
    cleardevice();
    delay(100);
}
I would do something like this instead:
int t0 = 0, t1 = 0, t = 0, s0 = 0, s1 = 0, render = 1;
for (;;)
{
    if (some stop condition like keyboard hit ...) break;

    // update time, state
    if (t >= t0) { render = 1; s0 = !s0; if (s0) t0 += 10; else t0 += 200; }
    if (t >= t1) { render = 1; s1 = !s1; if (s1) t1 += 10; else t1 += 300; }

    // render
    if (render)
    {
        render = 0;
        cleardevice();
        if (s0) circle(10, 10, 2);
        if (s1) circle(20, 10, 2);
    }

    // update main time
    delay(10); // Sleep(10) would be better but I am not sure it is present in TC++
    t += 10;
    if (t > 10000) // make sure overflow is not an issue
    {
        t  -= 10000;
        t0 -= 10000;
        t1 -= 10000;
    }
}
Beware the code is untested as I wrote it directly in here (so there might be syntax errors or typos).
The basic idea is to have one global time t with small enough granularity (10 ms). For each object, keep the time of its next event (t0, t1), the state of the object (s0, s1) and its periods (10/200 ms and 10/300 ms).
When the main time reaches an event time, swap the object's state on/off and advance the event time to the next state-swap time.
This way you can have any number of objects just make sure your main time step is small enough.
The render flag just ensures that the scene is rendered on change only.
To improve timing, you can use RDTSC instead of t += 10 and actually measure how much time has passed with CPU-frequency accuracy (a portable sketch using std::chrono is shown below).
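As one possible variant of that suggestion, a sketch that measures real elapsed time with std::chrono instead of assuming each delay(10) takes exactly 10 ms; circle, cleardevice and delay are the same graphics-library calls used above, and the stop condition stays as pseudocode:

#include <chrono>

auto start = std::chrono::steady_clock::now();
int t0 = 0, t1 = 0, s0 = 0, s1 = 0, render = 1;
for (;;)
{
    if (some stop condition like keyboard hit ...) break;

    // real elapsed milliseconds since the loop started
    int t = (int)std::chrono::duration_cast<std::chrono::milliseconds>(
                std::chrono::steady_clock::now() - start).count();

    if (t >= t0) { render = 1; s0 = !s0; if (s0) t0 += 10; else t0 += 200; }
    if (t >= t1) { render = 1; s1 = !s1; if (s1) t1 += 10; else t1 += 300; }

    if (render)
    {
        render = 0;
        cleardevice();
        if (s0) circle(10, 10, 2);
        if (s1) circle(20, 10, 2);
    }

    delay(10); // only to avoid busy-waiting; the timing no longer depends on it
}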
To display the two circles simultaneously in the first round, you have to satisfy both conditions i%2==0 and i%3==0 at once. You can achieve it by simply changing
for(int i=1;i<100;i++)
to
for(int i=0;i<100;i++)
// ↑ zero here

Pausing in OpenGL successively

void keyPress(unsigned char key, int x, int y){
    int i;
    switch (key){
    case 'f':
        i = 3;
        while (i--){
            x_pos += 3;
            sleep(100);
            glutPostRedisplay();
        }
    }
}
Above is a code snippet written in C++ using the GLUT library on Windows 7.
This function takes a character key and the mouse coordinates x, y, and performs a translation along the x-direction in 3 successive steps when the f character is pressed. Between each step the program should sleep for 100 ms.
We want to move a robot and pause successively as it moves forward in steps.
We are facing a problem in making the program sleep between the 3 steps. What is the problem in the above code snippet?
Disclaimer: The answer of jozxyqk seems better to me. This answer solves the problem in a dirty way.
You are misusing glutPostRedisplay, as stated in this answer. The problem is that glutPostRedisplay only marks the current window as needing to be redisplayed; the redraw happens once control returns to glutMainLoop. Here that happens only once, hence only one sleep seems to have an effect.
In fact all three sleeps work, but you get only one redraw, after 300 ms.
To solve this, you have to find another way of redrawing the scene.
while (i--){
    x_pos += 3;
    sleep(100);
    yourDrawFunction();
}
Assuming that you are working on a UNIX system.
sleep for 100 ms
sleep(100);
The problem here is that you are sleeping for 100 seconds, as you are probably using the sleep function of the <unistd.h> header, which defines sleep() as:
extern unsigned int sleep (unsigned int __seconds);
What you want is probably something like
usleep(100000); //sleeps for 100000 microseconds == 100 ms
I believe the issue with your code is that your sleep is messing with GLUT's main loop. The call stack might look something like this:
main() -> glutMainLoop() -> keyPress() -> sleep()
#but can't get to this...
main() -> glutMainLoop() -> display()
Until keyPress() returns, glut's main loop cannot continue to render the next frame. It's waiting for the function to return. All glutPostRedisplay() does is say "hey, something's changed so the image is stale and we need to redraw the next time the main loop iterates". It doesn't actually call display().
You'll have to structure your code such that the main loop can continue as normal, but still include a delay between drawing. For example:
In keyPress(), set a moving = true state. Let the function return.
In the idle() function, call sleep() if moving, or maybe if you moved last time (really, you might want to look into calculating elapsed time and doing the timing yourself so you don't block the entire program).
Again in idle(), increase x_pos and decrease your move count, let the function return, GLUT will draw, then idle is called again and you can sleep/move again. A sketch of this idea is shown below.
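A minimal sketch of the same idea using glutTimerFunc instead of sleeping in idle(), so the main loop is never blocked; stepsLeft and step() are hypothetical names, while x_pos comes from the question:

int stepsLeft = 0;

void step(int /*value*/)
{
    if (stepsLeft <= 0) return;
    x_pos += 3;
    --stepsLeft;
    glutPostRedisplay();                 // redraw with the new position
    if (stepsLeft > 0)
        glutTimerFunc(100, step, 0);     // schedule the next step in 100 ms
}

void keyPress(unsigned char key, int x, int y)
{
    if (key == 'f' && stepsLeft == 0) {
        stepsLeft = 3;
        glutTimerFunc(100, step, 0);     // start the sequence without blocking
    }
}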