How to finely control the GLUT inner loop - OpenGL

I would like to control the main loop in a GLUT program, and to better understand the order of execution of the following callbacks:
glutDisplayFunc(drawGLScene);
glutIdleFunc(idle);
glutTimerFunc(TIMER_MS, update, 0);
It's difficult for me to understand how GLUT queues these calls in a program.

As soon as you want fine control of your event loop, it's time to abandon GLUT. Use SDL, GLFW or do it from scratch. Understanding the inner workings of GLUT will not help you to gain fine control.

You can't. If you want to control the main loop you're going to have to use something like GLFW. Freeglut, a more modern extension of GLUT, might let you do this (see the sketch below). The way GLUT works is that you specify some callbacks, start the main loop, and it then calls the callbacks whenever appropriate.
It probably calls the timer callback at the beginning of the frame so that you can update your time-since-last-frame value, it probably calls the display callback whenever it needs to render a frame, and it probably calls the idle callback whenever it has to wait before rendering the next frame (for example, if your framerate is capped at exactly 60 fps and you render frames in less than ~0.017 seconds, it will probably call the idle callback until it is ready to push a frame to the screen).
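If you do switch to freeglut, it exposes glutMainLoopEvent(), which processes pending events once and returns, so you can run the loop yourself. A rough sketch of that idea (the drawing code and the window options here are illustrative, not taken from the question):

/* Sketch: driving the loop manually with freeglut's glutMainLoopEvent(). */
#include <GL/freeglut.h>

static void drawGLScene(void)             /* display callback */
{
    glClear(GL_COLOR_BUFFER_BIT);
    /* ... draw your scene here ... */
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    glutCreateWindow("manual loop");
    glutDisplayFunc(drawGLScene);

    /* Keep running our own loop instead of letting freeglut call exit()
     * when the window is closed (freeglut-specific option). */
    glutSetOption(GLUT_ACTION_ON_WINDOW_CLOSE, GLUT_ACTION_CONTINUE_EXECUTION);

    while (glutGetWindow() != 0)          /* 0 once the window has been destroyed */
    {
        /* ... per-frame update logic, timing, etc. ... */
        glutPostRedisplay();              /* request a redraw */
        glutMainLoopEvent();              /* dispatch pending events once */
    }
    return 0;
}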

Related

OpenGL render loop

I have an application which renders a 3d object using OpenGL, allowing the user to rotate and zoom and inspect the object. Currently, this is driven directly by received mouse messages (it's a Windows MFC MDI application). When a mouse movement is received, the viewing matrix is updated, and the scene re-rendered into the back buffer, and then SwapBuffers is called. For a spinning view, I start a 20ms timer and render the scene on the timer, with small updates to the viewing matrix each frame. This is OK, but is not perfectly smooth. It sometimes pauses or skips frames, and is not linked to vsync. I would love to make it smoother and smarter with the rendering.
It's not like a game where it needs to be rendered every frame though. There are long periods where the object is not moved, and does not need to be re-rendered.
I have come across GLFW library and the glfwSwapInterval function. Is this a commonly used solution?
Should I create a separate thread for the render loop, rather than being message/timer driven?
Are there other solutions I should investigate?
Are there any good references for how to structure a suitable render loop? I'm OK with all the rendering code - just looking for a better structure around the rendering code.
So, I assume you are using GLFW to create and manage your window.
If you don't have to update your window every frame, I suggest using glfwWaitEvents() or glfwWaitEventsTimeout(). The first one tells the system to put this process (not just the window) to sleep until any event happens (a mouse press, a resize event, etc.). The second one is similar, but you can specify a timeout for the sleep: the function waits until any event happens or until the specified time runs out.
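As a rough sketch of what such an event-driven viewer loop could look like (assuming GLFW for window and context creation; render() is a placeholder for your own drawing code):

// Sketch: the process sleeps in glfwWaitEvents() and only redraws on input.
#include <GLFW/glfw3.h>

static void render(void)                     // placeholder for your drawing code
{
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the object with the current viewing matrix ...
}

int main(void)
{
    if (!glfwInit()) return 1;
    GLFWwindow *window = glfwCreateWindow(800, 600, "viewer", NULL, NULL);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);

    while (!glfwWindowShouldClose(window))
    {
        glfwWaitEvents();                    // sleep until a mouse/keyboard/resize event
        // glfwWaitEventsTimeout(0.02);      // alternative: wake up at least every 20 ms
        render();                            // redraw only in response to the event
        glfwSwapBuffers(window);
    }
    glfwTerminate();
    return 0;
}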
As for glfwSwapInterval(), this is probably not the solution you are looking for. This function sets the number of screen refreshes the video card has to wait for when glfwSwapBuffers() is called.
If you, for example, use glfwSwapInterval(1) (assuming you have a valid OpenGL context), this will sync your context to the refresh rate of your monitor (aka v-sync, though I'm not sure it is strictly valid to call it that).
If you use glfwSwapInterval(0), this basically disables synchronisation with the monitor, and the video card will swap buffers in glfwSwapBuffers() instantly, without waiting.
If you use glfwSwapInterval(2), this will double the time that glfwSwapBuffers() waits after (or before?) flushing the framebuffer to the screen. So if your display refreshes at 60 Hz, using glfwSwapInterval(2) will result in 30 fps in your program (assuming you use glfwSwapBuffers() to present the frame).
glfwSwapInterval(3) will give you 20 fps, glfwSwapInterval(4) 15 fps, and so on.
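For reference, the call only has an effect once a context is current; something along these lines (the window variable is assumed, and the values are just the cases discussed above):

// glfwSwapInterval() must be called after the context is made current.
glfwMakeContextCurrent(window);
glfwSwapInterval(1);    // wait one screen refresh per swap (v-sync-like behaviour)
// glfwSwapInterval(0); // swap immediately, no synchronisation
// glfwSwapInterval(2); // wait two refreshes: ~30 fps on a 60 Hz display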
As for a separate render thread, this is good if you want to divide your "thinking" and rendering work, but it comes with its own advantages, disadvantages and difficulties. Tip: some window events can't be handled "properly" without a separate thread (see this question).
The usual render loop looks like this (as far as I've learned from learnopengl lessons):
// Setup process before...

// Run the game loop until the window is marked "has to close".
// In GLFW this is done with glfwWindowShouldClose():
// https://www.glfw.org/docs/latest/group__window.html#ga24e02fbfefbb81fc45320989f8140ab5
while (!window_has_to_close)
{
    // Prepare for handling input events (e.g. callbacks in GLFW)
    prepare();

    // Handle events (if there are none, this is just skipped)
    glfwPollEvents(); // <-- You can also use glfwWaitEvents()

    // "Thinking step" of your program
    tick();

    // Clear the window framebuffer (better to also put this in a separate function)
    glClearColor(0.f, 0.f, 0.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT);

    // Render everything
    render();

    // Swap buffers (you can also put this in a separate function)
    glfwSwapBuffers(window); // <-- Flush the framebuffer to the screen
}

// Exiting operations after...
See this ("Ready your engines" part) for additional info. Wish you luck!

Qt QGLWidget OpenGL rendering from thread blocks on swapBuffers()

I have a strange problem rendering OpenGL to QGLWidget from a different thread than the main thread.
There are a lot of official statements from Qt Developers that it is "perfectly possible" to do rendering from a different thread. I followed the explanation in:
http://doc.qt.nokia.com/qq/qq06-glimpsing.html#writingmultithreadedglapplications
I implemented it nearly the same way. The only difference is that I don't use a QWorkspace with different GLWidgets; instead I just create a MainWindow with a GLWidget as the central widget.
When I start the application, the rendering thread starts rendering frames with a triangle at a random position. After a while (sometimes 2 seconds, sometimes 10 seconds) the thread starts to block in the swapBuffers() call for a very long time. Sometimes swapBuffers() returns spontaneously after several seconds. When I move the mouse pointer over the widget or the main window, swapBuffers() returns immediately, and as long as I keep moving the mouse pointer, swapBuffers() does not block. After moving the mouse out of the widget, or simply stopping the movement, rendering continues for some seconds and then swapBuffers() starts blocking again.
I have absolutely no explanation for this behaviour. I am aware that swapBuffers() regularly blocks until a frame is completed, and it's also clear to me that a wait for vsync might happen during the OpenGL buffer swap call. But that should take some milliseconds, not block for several seconds. The environment is X11 with GLX.
Does anybody have an idea what is going on here?
I don't even have an idea how to find out what the problem might be.
Has anyone tried to implement rendering from a different thread as explained in the document I linked above?

repeatedly render loop with Qt and OpenGL

I've made a project with Qt and OpenGL.
In Qt, paintGL() was repeatedly called, I believe, so I was able to change values outside of that function and call update() so that it would paint a new image.
I also believe that it called initializeGL() as soon as you start up the program.
Now my question is:
I want that same functionality in a different program. I do not need to draw any images, etc. I was just wondering if there is a way to make a function like paintGL() that keeps being called, so the application never closes. I tried just using a while(true) loop to keep my program running, but the GUI was unresponsive because of the while loop.
Any tips? Preferably something other than threading.
Thanks.
The exact mechanism will depend on which GUI toolkit you are using. In general, your app needs to service the run loop constantly for events to be dispatched. That is why your app was unresponsive when you had it running in a while loop.
If you need something repainted constantly, the easiest way is to create a timer when your window is created, and then, in the timer event handler or callback, invalidate your window, which forces a repaint. Your paint handler will then be called at the frequency of your timer, such as 25 times per second.
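A minimal Qt sketch of that timer-driven approach (the class name and interval here are illustrative only, not from the original question):

// Sketch: a QTimer forces a repaint roughly 25 times per second.
#include <QApplication>
#include <QOpenGLWidget>
#include <QTimer>

class LoopWidget : public QOpenGLWidget
{
public:
    explicit LoopWidget(QWidget *parent = nullptr) : QOpenGLWidget(parent)
    {
        QTimer *timer = new QTimer(this);
        connect(timer, &QTimer::timeout, this, [this]() {
            // ... update your values here ...
            update();              // schedules a call to paintGL()
        });
        timer->start(40);          // 40 ms -> about 25 repaints per second
    }

protected:
    void initializeGL() override { /* one-time GL setup */ }
    void paintGL() override      { /* called on every update() */ }
};

int main(int argc, char **argv)
{
    QApplication app(argc, argv);
    LoopWidget w;
    w.resize(640, 480);
    w.show();
    return app.exec();             // the Qt event loop keeps the application alive
}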

non-blocking SwapBuffers() with VSync=on

I am looking for a portable way to make a non-blocking SwapBuffers() even if VSync is activated.
In other words, is it possible to be notified by an event, or to know the delay until the next VSync?
IIRC this extension helps: http://www.opengl.org/registry/specs/SGI/video_sync.txt, but it is very poorly supported with current drivers.
Firstly, why don't you just call SwapBuffers() at the start of the frame? Or somehow change the pipeline to
Render();
Update(); //Update before swapping buffers
SwapBuffers();
While OpenGL is working away at all of the commands you just threw at it, you can do all of your update logic.
Otherwise, there are a few ways to solve this problem.
I know that XNA has a ScanLine Property, which tells you which scanline the screen is currently up to. I don't know if OpenGL exposes this too, but I'm pretty sure it must. (Right?)
Use multithreaded rendering. Many modern engines dedicate a whole thread just to rendering. If it blocks, that's fine; it doesn't disturb the main thread (a rough sketch of this appears after the next point). Alternatively, an easier way is to just handle input etc. on a new thread, which avoids complications with graphics contexts.
Use triple buffering. Using triple buffering means that you have 2 back buffers. After you call SwapBuffers(), the screen can continue to scan out the front buffer, with your newly finished buffer waiting, and the third buffer free for you to render the next frame to. Of course, if you have already pre-rendered two frames, SwapBuffers() will block.
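Following up on the multithreaded suggestion above, here is a rough sketch of a dedicated render thread that absorbs the blocking SwapBuffers() call; GLFW and std::thread are used purely for illustration, and the sharing of render state between threads is omitted:

// Sketch: rendering (and the v-sync wait) happens on its own thread,
// so the main/update thread is never stalled by the buffer swap.
#include <GLFW/glfw3.h>
#include <atomic>
#include <thread>

std::atomic<bool> running{true};

void renderLoop(GLFWwindow *window)
{
    glfwMakeContextCurrent(window);      // context is current on this thread only
    glfwSwapInterval(1);                 // v-sync: the wait happens here, not on the main thread
    while (running)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        // ... render the latest shared state ...
        glfwSwapBuffers(window);         // may block for the vertical retrace
    }
}

int main(void)
{
    if (!glfwInit()) return 1;
    GLFWwindow *window = glfwCreateWindow(800, 600, "threaded", NULL, NULL);
    if (!window) { glfwTerminate(); return 1; }

    std::thread renderer(renderLoop, window);
    while (!glfwWindowShouldClose(window))
    {
        glfwPollEvents();                // input and update logic stay responsive here
        // ... update application state ...
    }
    running = false;
    renderer.join();
    glfwTerminate();
    return 0;
}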

OpenGL glutMainLoop()

I just started using OpenGL and it seems it is not easy to understand how glutMainLoop() works. What really happens there? Does it just sit there doing nothing until one of the registered callbacks responds?
It calls your display callback over and over, calling idle in between so that it can maintain a specific framerate if possible, and other callbacks as necessary (such as when you resize the window or trigger an input event).
Essentially, within this function is the main program loop, where GLUT does most of the work for you and allows you to simply set up the specific program logic in these callbacks. It's been a while since I've worked with GLUT, and it is certainly confusing at first.
Your display callback should obviously contain your main logic to draw whatever should be going on. The idle callback should contain some very lightweight operations that determine how the state should change between one call to display and the next. For example, if you're animating something, this is where you would change its position or orientation.
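A minimal sketch of that callback structure (the rotating-triangle logic is illustrative only):

/* Sketch: display draws, idle updates state and asks for a redraw. */
#include <GL/glut.h>

static float angle = 0.0f;

static void display(void)                 /* main drawing logic */
{
    glClear(GL_COLOR_BUFFER_BIT);
    glLoadIdentity();
    glRotatef(angle, 0.0f, 0.0f, 1.0f);
    glBegin(GL_TRIANGLES);
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f( 0.0f,  0.5f);
    glEnd();
    glutSwapBuffers();
}

static void idle(void)                    /* lightweight state update between frames */
{
    angle += 0.2f;
    glutPostRedisplay();                  /* tell GLUT the window needs redrawing */
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    glutCreateWindow("glutMainLoop demo");
    glutDisplayFunc(display);
    glutIdleFunc(idle);
    glutMainLoop();                       /* never returns; dispatches the callbacks */
    return 0;
}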
It is exactly as StrixVaria has stated.
glutMainLoop enters the GLUT event processing loop. This routine should be called at most once in a GLUT program. Once called, this routine will never return. It will call as necessary any callbacks that have been registered.
Taken from here
Using OpenGL and GLUT together means that you are writing a 'GLUT' program that uses OpenGL commands in the callback functions. main() contains GLUT functions; many GLUT functions need a callback function to be registered, and those callback functions usually contain OpenGL commands.
Coming to your question: now that it is clear you are mainly writing a GLUT program, it should also be understood that the glutMainLoop() call actually executes the callback functions as and when required, which in turn executes the OpenGL commands.
Well, glutMainLoop() is the main function that keeps calling your display functions over and over, and it also keeps your window open. You'll find out that OpenGL is not that scary.