3D object won't update in for loop - OpenGL

I am trying to rotate a 3D object, but it doesn't update when I apply transforms in a for loop.
The object jumps straight to the last position.
How does one update a 3D object's position in a sequence of updates if it won't update in a for loop?

Just calling glTranslate, glRotate or the like won't change anything on the screen. Why? Because OpenGL is a plain drawing API, not a scene graph. All it knows about are points, lines and triangles that it draws to a pixel framebuffer. That's it. If you want to change something on the screen, you must redraw it, i.e. clear the picture and draw it again, with the changes.
BTW: you should not use a dedicated loop to implement animations (neither for, nor while, nor do-while). Instead, perform the animation in the idle handler and issue a redraw event.

I reckon you have a wrong understanding of what OpenGL does for you.
I'll try to outline:
- Send vertex data to the GPU (once). This only specifies the (standard) shape of the object.
- Create matrices to rotate, translate or transform the object (per update).
- Send the matrices to the shader (per update). The shader then calculates the screen position from the original vertex position and the transformation matrix.
- Tell OpenGL to draw the bound vertices (per update).
Imagine programming with OpenGL as being a web client: only preparing the request (changing the matrix and binding stuff) is not enough; you need to explicitly send the request (send the transformation data and tell OpenGL to draw) to receive the answer (objects on the screen). A minimal sketch of these per-update steps follows.
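For example, assuming a shader program with a mat4 uniform named u_transform and a VAO that were both set up once at startup (these names, and the build_rotation_matrix helper, are illustrative, not from the question), one update could look like this:

void update_and_draw(GLuint program, GLuint vao, float angle)
{
    float m[16];
    build_rotation_matrix(m, angle); // hypothetical helper that fills a 4x4 rotation matrix

    // Send the matrix to the shader (per update)
    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "u_transform"), 1, GL_FALSE, m);

    // Tell OpenGL to draw the bound vertices (per update)
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 36);
}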

It is possible to draw an animation from a loop.
for (...) {
    edit_transformation();
    draw();
    glFlush();   // maybe glutSwapBuffers() if you use GLUT
    usleep(100); // not standard C, bad
}
You draw, you flush/swap to make sure that what you just drew is sent to the screen, and you sleep.
However, this is not recommended in an interactive application. The main reason is that while you are in this loop, nothing else can run. Your application will be unresponsive.
That's why window systems are event-based. Every few milliseconds, the window system pings your app so you can update your state, for example to advance an animation. This is the idle function. When the state of your program has changed, you tell the window system that you would like to draw again. It is then up to the window system to call your display function. You make your OpenGL calls when the system tells you to.
If you use GLUT for communicating with the window system, this looks like the code below. Other libraries like GLFW have equivalent functions.
int main() {
    ...                       // Create window, set everything up.
    glutIdleFunc(update);     // Register idle function
    glutDisplayFunc(display); // Register display function
    glutMainLoop();           // The window system is in charge from here on.
}

void update() {
    edit_transformation(); // Update your models
    glutPostRedisplay();   // Tell the window system that something changed.
}

void display() {
    draw();    // Your OpenGL code here.
    glFlush(); // or glutSwapBuffers();
}

Related

OpenGL render loop

I have an application which renders a 3d object using OpenGL, allowing the user to rotate and zoom and inspect the object. Currently, this is driven directly by received mouse messages (it's a Windows MFC MDI application). When a mouse movement is received, the viewing matrix is updated, and the scene re-rendered into the back buffer, and then SwapBuffers is called. For a spinning view, I start a 20ms timer and render the scene on the timer, with small updates to the viewing matrix each frame. This is OK, but is not perfectly smooth. It sometimes pauses or skips frames, and is not linked to vsync. I would love to make it smoother and smarter with the rendering.
It's not like a game where it needs to be rendered every frame though. There are long periods where the object is not moved, and does not need to be re-rendered.
I have come across the GLFW library and the glfwSwapInterval function. Is this a commonly used solution?
Should I create a separate thread for the render loop, rather than being message/timer driven?
Are there other solutions I should investigate?
Are there any good references for how to structure a suitable render loop? I'm OK with all the rendering code - just looking for a better structure around the rendering code.
So, I assume you are using GLFW to create and manage your window.
If you don't have to update your window every frame, I suggest using glfwWaitEvents() or glfwWaitEventsTimeout(). The first tells the system to put this process (not just the window) to sleep until any event happens (a mouse press, a resize event, etc.). The second is similar, but you can specify a timeout for the sleep: the function waits until any event happens OR until the specified time runs out.
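For instance, a minimal on-demand loop could look like this (the scene_changed dirty flag and render() are illustrative, not part of GLFW):

while (!glfwWindowShouldClose(window))
{
    // Sleep until an event arrives, or for at most half a second
    glfwWaitEventsTimeout(0.5);
    if (scene_changed) // hypothetical flag set by your input callbacks
    {
        render();      // your drawing code
        glfwSwapBuffers(window);
        scene_changed = 0;
    }
}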
As for glfwSwapInterval(), this is probably not the solution you are looking for. This function sets the number of screen refreshes the video card has to wait for when glfwSwapBuffers() is called.
If you, for example, use glfwSwapInterval(1) (assuming you have a valid OpenGL context), this will sync your context to the refresh rate of your monitor (aka v-sync, though I'm not sure that is strictly the right name for it).
If you use glfwSwapInterval(0), this basically disables synchronization with the monitor, and the video card will swap buffers with glfwSwapBuffers() instantly, without waiting.
If you use glfwSwapInterval(2), this doubles the time that glfwSwapBuffers() waits after (or before?) flushing the framebuffer to the screen. So if your display runs at 60 fps, using glfwSwapInterval(2) will result in 30 fps in your program (assuming you use glfwSwapBuffers() to flush the framebuffer).
glfwSwapInterval(3) will give you 20 fps, glfwSwapInterval(4) 15 fps, and so on.
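As a sketch, enabling v-sync is just this (assuming window already owns a valid context):

glfwMakeContextCurrent(window); // the swap interval applies to the current context
glfwSwapInterval(1);            // wait one screen refresh per swap: ~60 fps on a 60 Hz display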
As for a separate render thread, this is good if you want to separate your "thinking" and rendering processes, but it comes with its own advantages, disadvantages and difficulties. Tip: some window events can't be handled "properly" without a separate thread (see this question).
The usual render loop looks like this (as far as I've learned from the learnopengl lessons):
// Setup process before...

// Run the game loop until the window is marked "has to close".
// In GLFW this is done with glfwWindowShouldClose():
// https://www.glfw.org/docs/latest/group__window.html#ga24e02fbfefbb81fc45320989f8140ab5
while (!window_has_to_close)
{
    // Prepare for handling input events (e.g. callbacks in GLFW)
    prepare();

    // Handle events (if there are none, this is just skipped)
    glfwPollEvents(); // <-- You can also use glfwWaitEvents()

    // "Thinking" step of your program
    tick();

    // Clear the window framebuffer (better to also put this in a separate func)
    glClearColor(0.f, 0.f, 0.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT);

    // Render everything
    render();

    // Swap buffers (you can also put this in a separate function)
    glfwSwapBuffers(window); // <-- Flush framebuffer to screen
}

// Exiting operations after...
See this ("Ready your engines" part) for additional info. Wish you luck!

OpenGL: How to minimize drawing?

My OpenGL screen consists of 2 triangles and 1 texture, nothing else. I'd like to update the screen as little as possible, to save power and limit CPU/GPU usage. Unfortunately, when my draw_scene routine returns early without drawing anything, the OpenGL screen goes black, even if I never call glutSwapBuffers. My background color is not black, by the way. It seems that if I do not draw, the OpenGL window loses its contents. How can I minimize the amount of drawing that is done?
Modern graphics systems assume that when a redraw is initiated, the whole contents are redrawn. Furthermore, if you get a redraw event from the graphics system, that's usually because the contents of the window have become undefined and need to be recreated, so you must redraw in that situation.
To save power, you have to disable the idle loop (or just pass over everything it does and immediately yield back to the OS scheduler) and avoid timers that create events.
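In GLUT terms, a minimal sketch could look like this (animate and step_animation are illustrative; passing NULL to glutIdleFunc removes the idle callback):

void animate(void)
{
    step_animation();    // hypothetical state update
    glutPostRedisplay(); // request exactly one redraw
}

// Register the idle callback only while something actually animates.
void set_animating(int on)
{
    glutIdleFunc(on ? animate : NULL); // NULL disables the idle callback
}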

Change scenes in a game - OpenGL

I'm continuing my journey with OpenGL and C++, and now I want to make "scenes" that replace each other. For example, a screen with a "go next" button; I push the button and then the game begins.
What is the best approach for this with OpenGL in C++? The main question is what to do with the GLUT initialization commands like:
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
glutInitWindowSize(screenWidth, screenHeight);
glutInitWindowPosition(0, 0);
glutCreateWindow("Puzzle quest!");

// Registration
glutDisplayFunc(Draw);
glutIdleFunc(Draw);
Do I need to implement these methods in both classes, or only in one, and then just "show start button / hide game functionality" and "hide start button / show game functionality + start timer"?
I do not mess with screen settings during scene/mode changes. Screen settings are changed on resize of the OpenGL window, not on some button event...
You have to write your visualization and UI logic dependent on some variable, for example:
enum _game_screens_enum
{
    _game_screen_main_menu = 0,
    _game_screen_game,
    _game_screen_game_over,
    _game_screen_help,
    _game_screen_intro,
    _game_screen_redefine_keys,
    _game_screen_high_score,
    _game_screen_exit,
};
int screen = _game_screen_main_menu;
Now in your draw, update and UI handling functions just add the appropriate ifs, for example:
void draw()
{
    if (screen == _game_screen_main_menu)
    {
        // draw main menu ...
    }
    else if (screen == _game_screen_game)
    {
        // draw in-game screen stuff...
    }
    else ...
}
And that is it ...
Are you looking for a fade effect or some sort of transition where the scenes are blended? A fade effect should be easy using the alpha channel: enable blending and draw a black quad in front of everything as its alpha value increases from 0 to 1 over whatever time frame you want the fade to take. The fade-in would be the opposite. Not sure about the blended-scenes effect... maybe the accum buffer, or read pixels and then draw pixels if you can change alpha values. A sketch of the fade-out is below.
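A minimal sketch of that fade-out, assuming an orthographic projection covering 0..1 and a parameter t that your timer advances from 0 to 1 over the fade duration:

void draw_fade(float t)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glColor4f(0.f, 0.f, 0.f, t); // alpha grows as the fade progresses
    glBegin(GL_QUADS);           // full-screen black quad on top of the scene
        glVertex2f(0.f, 0.f);
        glVertex2f(1.f, 0.f);
        glVertex2f(1.f, 1.f);
        glVertex2f(0.f, 1.f);
    glEnd();
    glDisable(GL_BLEND);
}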
Otherwise, glutDisplayFunc is the correct way to switch between scenes/drawing functions.
Try just a fading effect; it could be of great help.
You should only initialize glut once.
Normally, the display and idle callbacks would do different things depending on what state you're in.
On a high level:
void idle()
{
    if (showingMenu)
        menu.idle();
    else
        currentScene.idle();
}
You could do it by switching functions, but I think that makes debugging more difficult.
(You should probably not use the same function for drawing and idling, though.)

Drawing text in Cinder

I was wondering if there is a way to draw a gl::Texture without having to use the gl::draw command every loop. Is there a way I can draw it once and then not worry about it?
Drawing the image on every loop of draw() is slowing down my application, so I'd like to only draw things once on the screen and then update them if need be.
Quoting from Cinder's tutorials:
"When you create a new Cinder project, you will notice there are a few functions declared for you. Every Cinder project is made up of three main functions. You initialize your variables in the setup() method which is called once when your program begins. You make changes to those variables in the update() method. And finally, you draw() content in your program window. Update and draw are the heartbeat of any Cinder project. UpdateSetup, then update and draw, update and draw, update and draw, on and on until you quit the application."
There is a way, though, to draw objects permanently in OpenGL, and consequently in Cinder, but I wouldn't recommend it: disable gl::clear() in your draw function. You can't, however, selectively delete any unneeded object; you would have to render your scene from scratch. Think of OpenGL's framebuffer more as a canvas. Every time you call gl::clear() you take the brush and paint your canvas black, or whatever color you specify with gl::clear(). After that, the framebuffer is "tabula rasa": you have to draw everything you want to display from scratch. If you don't issue any gl::clear() command when you draw a new object, it is as if your canvas stays intact and you draw your object on top of what is already drawn.
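To illustrate, here is a minimal Cinder-style draw() following the usual pattern (mTexture is an illustrative member; commenting out the gl::clear() line gives the paint-on-top behavior described above):

void MyApp::draw()
{
    gl::clear( Color( 0, 0, 0 ) ); // wipe the canvas each frame
    if( mTexture )
        gl::draw( mTexture );      // redraw the texture from scratch
}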

Have OpenGL Render while cursor in GLUT UI

I have a main window which has both a glut UI in the top 10% of the screen, and the openGL world in the bottom 90% of the screen. Every time my cursor starts hovering over the GLUT portion, openGL rendering will freeze. It resumes only when the cursor exits the GLUT area.
This is because, as long as the cursor is hovering over the GLUT area, glutIdleFunc is presumably never called because GLUT is not "idle", so the OpenGL content is not rendered.
I already tried making a new unrelated thread that just calls the display code and/or glutPostRedisplay, but I got a framerate of a whopping 20 fps as opposed to the 100+ fps the normal way. I don't know exactly why. (In this test I also disabled glutIdleFunc, so there was no idle func, just the separate thread calling the display.)
Ways to get around this (other than "stop using glut" which I might do in the future but for now I would like a temporary solution)?
I know this is an old question, but I was having a similar issue and wanted to share how I solved it.
Basically, in your idle function, you should manually set the window to your normal window ID. Thanks to Joel Davis' HexPlanet code ( https://github.com/joeld42/hexplanet/ ) for demonstrating this:
void glut_Idle( void )
{
    // According to the GLUT specification, the current window is
    // undefined during an idle callback, so we need to explicitly
    // change it if necessary.
    if ( glutGetWindow() != g_glutMainWin )
    {
        glutSetWindow(g_glutMainWin);
    }
    ...
}
Create a callback for passive motion func:
void passiveMouseFunc(int x, int y)
{
    glutPostRedisplay();
}
And register it using:
glutPassiveMotionFunc(passiveMouseFunc);