Wait until glutPostRedisplay() refreshes - opengl

Is there some kind of synchronization primitive that allows us to block a thread until an OpenGL display has refreshed, i.e. after a call to glutPostRedisplay()?
static GLubyte *pixels = NULL;
glutSetWindow(mainWindow);
glutPostRedisplay();
pixels = (GLubyte *)realloc(pixels, format_nchannels * sizeof(GLubyte) * width * height);
glReadPixels(0, 0, width, height, FORMAT, GL_UNSIGNED_BYTE, pixels);
I'm trying to copy over the pixels from the GPU to memory after refreshing the drawing. However, I'm finding that after glReadPixels executes, pixels does not necessarily contain the updated image.

glutPostRedisplay does not refresh anything. It just sets a flag, and that's it. The flag set by glutPostRedisplay is tested in the main loop; if there are no further events to be processed and the flag is set, the main loop calls the display function. If you wait for the display to finish after calling glutPostRedisplay but before returning to the main loop, your program will wait indefinitely, because it never returns to the main loop and thus never gives it a chance to redisplay.
If you want to take a screenshot, just introduce another flag yourself and process it in the display function.
For robustness you should not glReadPixels from the main framebuffer (its contents may be undefined, e.g. if the window is obscured by another window). Instead, when a screenshot is requested, render to an FBO, from which you can both save the screenshot and blit to the main framebuffer.
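A minimal sketch of that flag approach, assuming a double-buffered GLUT program in which draw_scene(), width and height already exist (for brevity this reads the back buffer directly instead of an FBO):
#include <GL/glut.h>
#include <stdlib.h>

static int screenshot_requested = 0;   /* set anywhere, consumed in display() */
static GLubyte *pixels = NULL;

void request_screenshot(void)
{
    screenshot_requested = 1;
    glutPostRedisplay();               /* ask the main loop to call display() */
}

void display(void)
{
    draw_scene();                      /* your existing drawing code */

    if (screenshot_requested) {
        screenshot_requested = 0;
        pixels = realloc(pixels, 4 * sizeof(GLubyte) * width * height);
        glReadBuffer(GL_BACK);         /* read the frame we just rendered */
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        /* ...save pixels to disk, or signal a waiting thread here... */
    }

    glutSwapBuffers();
}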

OpenGL render loop

I have an application which renders a 3d object using OpenGL, allowing the user to rotate and zoom and inspect the object. Currently, this is driven directly by received mouse messages (it's a Windows MFC MDI application). When a mouse movement is received, the viewing matrix is updated, and the scene re-rendered into the back buffer, and then SwapBuffers is called. For a spinning view, I start a 20ms timer and render the scene on the timer, with small updates to the viewing matrix each frame. This is OK, but is not perfectly smooth. It sometimes pauses or skips frames, and is not linked to vsync. I would love to make it smoother and smarter with the rendering.
It's not like a game where it needs to be rendered every frame though. There are long periods where the object is not moved, and does not need to be re-rendered.
I have come across GLFW library and the glfwSwapInterval function. Is this a commonly used solution?
Should I create a separate thread for the render loop, rather than being message/timer driven?
Are there other solutions I should investigate?
Are there any good references for how to structure a suitable render loop? I'm OK with all the rendering code - just looking for a better structure around the rendering code.
So, I assume you are using GLFW for creating and managing your window.
If you don't have to update your window every frame, I suggest using glfwWaitEvents() or glfwWaitEventsTimeout(). The first one puts the process (not just the window) to sleep until any event happens (a mouse press, a resize event, etc.). The second one is similar, but lets you specify a timeout for the sleep: it waits until any event happens or the specified time runs out.
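A minimal sketch of such an event-driven loop; window is assumed to be a valid GLFW window and render() your own drawing function:
while (!glfwWindowShouldClose(window))
{
    // Sleep until an event arrives, or wake up after 0.1 s anyway
    // (use glfwWaitEvents() if you never need a periodic wake-up).
    glfwWaitEventsTimeout(0.1);
    render();
    glfwSwapBuffers(window);
}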
As for glfwSwapInterval(), this is probably not the solution you are looking for. This function sets how many screen refreshes the video card waits for before glfwSwapBuffers() actually swaps the buffers.
If you use glfwSwapInterval(1), for example (assuming you have a valid OpenGL context), this syncs your context to the refresh rate of your monitor (commonly called v-sync).
If you use glfwSwapInterval(0), this basically disables synchronisation with the monitor, and the video card swaps buffers immediately when glfwSwapBuffers() is called, without waiting.
If you use glfwSwapInterval(2), glfwSwapBuffers() waits for two screen refreshes instead of one before presenting the framebuffer. So if your display runs at 60 Hz, glfwSwapInterval(2) results in 30 fps in your program (assuming you use glfwSwapBuffers() to present each frame).
glfwSwapInterval(3) will give you 20 fps, glfwSwapInterval(4) 15 fps, and so on.
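For illustration, a typical v-sync setup might look like this (a sketch; window is assumed to be a valid GLFW window):
glfwMakeContextCurrent(window);  // the swap interval applies to the current context
glfwSwapInterval(1);             // wait for one screen refresh per swap (v-sync)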
As for a separate render thread, this is good if you want to separate your "thinking" and rendering work, but it comes with its own advantages, disadvantages and difficulties. Tip: some window events can't be handled "properly" without a separate thread (see this question).
The usual render loop looks like this (as far as I've learned from the LearnOpenGL lessons):
// Setup process before...
// Run the game loop until the window is marked "has to close".
// In GLFW this is checked with glfwWindowShouldClose():
// https://www.glfw.org/docs/latest/group__window.html#ga24e02fbfefbb81fc45320989f8140ab5
while (!window_has_to_close)
{
    // Prepare for handling input events (e.g. callbacks in GLFW)
    prepare();
    // Handle events (if there are none, this is just skipped)
    glfwPollEvents();                // <-- You can also use glfwWaitEvents()
    // "Thinking step" of your program
    tick();
    // Clear the window framebuffer (better also put this in a separate function)
    glClearColor(0.f, 0.f, 0.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT);
    // Render everything
    render();
    // Swap buffers (you can also put this in a separate function)
    glfwSwapBuffers(window);         // <-- Present the framebuffer to the screen
}
// Exiting operations after...
See this ("Ready your engines" part) for additional info. Wish you luck!

Alpha Blending in SDL resets after resizing window

I wanted to implement alpha blending within my Texture class. It works almost completely. I use the following functions for manipulating the alpha value:
SDL_SetTextureBlendMode(texture, SDL_BLENDMODE_BLEND);
SDL_SetTextureAlphaMod(texture, alpha);
The only problem I have is that textures that have been manipulated seem to reset to the normal alpha value of 255 when I resize or maximize the window. I checked the alpha value and saw that it is still the value I had set before, so the value is not 255. Why is the renderer rendering it as if the alpha value were 255, then?
Information about how and when I use these functions:
Within the main game loop I change the alpha value of the texture with a public method of my Texture class:
Texture::setAlphaValue(int alpha)
There the private alpha variable of the Texture class is changed.
Within the Draw method of my Texture class the texture is drawn and I call
SDL_SetTextureBlendMode(texture, SDL_BLENDMODE_BLEND);
SDL_SetTextureAlphaMod(texture, alpha);
before
SDL_RenderCopyEx(renderer, texture, &sourceRectangle, &destinationRectangle, 0, 0, SDL_Flip);
Information about how I resize the window:
I basically just set the window mode to a resizable window in my SDL initialization. Then handling it like any normal window is possible:
SDL_CreateWindow(window_Title, x_Position, y_Position, window_Width, window_Height, SDL_WINDOW_RESIZABLE);
My primary loop area:
This is the main game loop:
void Game::Render()
{
    // set color and draw window
    SDL_SetRenderDrawColor(renderer, windowColor.R(), windowColor.G(), windowColor.B(), 0);
    SDL_RenderClear(renderer);
    texture.setAlphaValue(100);
    texture.Draw(SDL_FLIP_NONE);
    // present/draw renderer
    SDL_RenderPresent(renderer);
}
Test my project:
I also uploaded my alpha-blending test project to dropbox. In this project I simplified everything, there isn't even a texture class anymore. So the code is really simple, but the bug is still there. Here is the link to the Visual Studio project: http://www.dropbox.com/s/zaipm8751n71cq7/Alpha.rar
Set the alpha right before you copy the texture, in the same place where you set the blend mode:
SDL_SetTextureBlendMode(texture, SDL_BLENDMODE_BLEND);
alpha = 100;                               // for example; remember that alpha is an int
SDL_SetTextureAlphaMod(texture, alpha);
SDL_RenderCopy(renderer, texture, NULL, &rect);
P.S. If you're going for a fade-out/fade-in effect, resizing will temporarily pause the alpha changes (in case you used SDL_GetTicks() and a float to slowly decrease/increase alpha over time). This is because Windows pauses rendering inside the program while you resize; once you stop resizing, it resumes.
Another P.S. Since you're resizing the window, make sure to assign the w and h values not as fixed numbers but as computed, dynamic values (multiplication is faster than division, but you can also use division). Assigning static numbers would cause the window to resize while the textures inside won't change size.
Happy Coding :)
This was a reported bug in the SDL library; it has been fixed for some time now: https://bugzilla.libsdl.org/show_bug.cgi?id=2202, https://github.com/libsdl-org/SDL/issues/1085
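If you are stuck on an SDL version where the bug is still present, one possible workaround (a sketch only; texture, alpha and event are assumed to be your existing variables) is to re-apply the per-texture state after a resize event:
while (SDL_PollEvent(&event)) {
    if (event.type == SDL_WINDOWEVENT &&
        event.window.event == SDL_WINDOWEVENT_SIZE_CHANGED) {
        // Re-apply the state that affected SDL versions could drop on resize.
        SDL_SetTextureBlendMode(texture, SDL_BLENDMODE_BLEND);
        SDL_SetTextureAlphaMod(texture, alpha);
    }
}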

glClear() not obeying scissor region [closed]

I'm drawing Open GL content (direct Win32 - not using GLUT, FreeGLUT, GLFW, etc) using double buffering in an arbitrary Windows 7 window which is already open, for example, a Windows Notepad window. I have the window handle and can draw the content I want as expected, but I am seeing strange behavior with the glClear() function.
It is my understanding that the glClear() function should only affect pixels on the screen which are INSIDE the region defined by the glScissor() function. I have defined the scissor region with glScissor() and then enabled the scissor test using glEnable(GL_SCISSOR_TEST). glClearColor is set to white (1,1,1,1). I'm clearing both the color and depth buffers with the glClear() command.
When the SwapBuffers() command is executed in order to render on the screen, my selected clear color of white is painted inside the scissor region as I requested, but the rest of the window OUTSIDE the scissor region is painted black, rather than leaving these pixels untouched as I expected.
As shown in the image, the scissor region (white) and the object (3D cube) are drawn correctly, but the rest of the notepad window's pixels are set to black, and anything previously painted in that Notepad window is covered over.
glClearColor(1.0f, 1.0f, 1.0f, 1.0f); // white background
glViewport(0, 0, 300, 300);
glScissor(0, 0, 250, 400);
glEnable(GL_SCISSOR_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//... draw cube inside glBegin()/glEnd()...
SwapBuffers(hDC);
If I get your description correctly, glClear works as intended.
You must not assume that only because you see something on the screen, it is also present in the back buffer. The contents of the Notepad window that you see are either the front buffer, or a copy of the front buffer that was blitted into DWM's own internal render buffer (depending on whether compositing is enabled). Or something else entirely: a GDI buffer that was blitted into DWM's buffer, or the like. Most likely the latter, since Notepad renders with GDI.
When you flip buffers, the back buffer is displayed over anything that's on-screen in that region, and what you get is an all-black buffer (actually uninitialized, but presumably the driver was so kind as to zero the memory), except for the area that you cleared to white.
Which is exactly what you should expect: your glClear affected only a subregion, and the rest is undefined; it just happened to be zero (black).
Incidentally, if compositing is disabled, what you see on screen can be copied from the front buffer to the back buffer on most graphics cards, so you could still keep the original contents of the Notepad window visible if you wished. However, the contents of a GDI window will never appear in your back buffer magically (nor does this work with DWM, nor is it guaranteed to work at all; it only happens to work incidentally most of the time).
The clean solution, if you want the window's original contents, would be to BitBlt from the DC to memory, create a texture, and draw (or blit) that one into the back buffer.
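A rough sketch of that approach (assumptions of this sketch, not part of the question: hWnd is the target window, width and height are its client size; error handling and cleanup are omitted):
// 1. Copy the window's current GDI contents into a 32-bit top-down DIB.
HDC winDC = GetDC(hWnd);
HDC memDC = CreateCompatibleDC(winDC);
BITMAPINFO bmi = {0};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = width;
bmi.bmiHeader.biHeight      = -height;          // negative height = top-down rows
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 32;
bmi.bmiHeader.biCompression = BI_RGB;
void *bits = NULL;
HBITMAP dib = CreateDIBSection(memDC, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
SelectObject(memDC, dib);
BitBlt(memDC, 0, 0, width, height, winDC, 0, 0, SRCCOPY);
// 2. Upload the captured pixels as an OpenGL texture (BGRA byte order).
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, bits);
// 3. Each frame, draw a window-filling textured quad into the back buffer first,
//    then do the scissored clear and draw the cube on top of it.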

3d object won't update in for loop

I am trying to rotate a 3d object, but it doesn't update when I apply transforms in a for loop.
The object jumps to the last position.
How does one update a 3d object's position in a sequence of updates if it won't update in a for loop?
Just calling glTranslate, glRotate or such won't change things on the screen. Why? Because OpenGL is a plain drawing API, not a scene graph. All it knows about are points, lines and triangles that it draws to a pixel framebuffer. That's it. If you want to change something on the screen, you must redraw it, i.e. clear the picture and draw it again, with the changes.
BTW: You should not use a dedicated loop to implement animations (neither for, nor while, nor do while). Instead perform animation in the idle handler and issue a redraw event.
I reckon you have a wrong understanding of what OpenGL does for you.
I'll try to outline:
- Send vertex data to the GPU (once); this only specifies the (untransformed) shape of the object
- Create matrices to rotate, translate or transform the object (per update)
- Send the matrices to the shader (per update); the shader then calculates the screen position from the original vertex position and the transformation matrix
- Tell OpenGL to draw the bound vertices (per update)
Imagine programming with OpenGL as being a web client: only preparing the request (changing the matrix and binding things) is not enough; you need to explicitly send the request (upload the transformation data and tell OpenGL to draw) to receive the answer (objects on the screen).
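A sketch of one such per-update step in modern OpenGL; the shader program prog, its uniform uModel, the VAO vao, the angle, and vertex_count are assumptions of this example, not something from the question:
// Per update: rebuild the model matrix, upload it, and redraw everything.
float model[16];
build_rotation_matrix(model, angle);                 // assumed helper filling a 4x4 matrix

glUseProgram(prog);                                  // shader that applies uModel to each vertex
GLint loc = glGetUniformLocation(prog, "uModel");
glUniformMatrix4fv(loc, 1, GL_FALSE, model);         // send the matrix to the shader

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // clear the previous frame
glBindVertexArray(vao);                              // vertex data was uploaded once, earlier
glDrawArrays(GL_TRIANGLES, 0, vertex_count);         // tell OpenGL to draw the bound vertices

glutSwapBuffers();                                   // present the redrawn frame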
It is possible to draw an animation from a loop.
for (...) {
    edit_transformation();
    draw();
    glFlush();       // maybe glutSwapBuffers() if you use GLUT
    usleep(100);     // not standard C, bad
}
You draw, you flush/swap to make sure that what you just drew is sent to the screen, and you sleep.
However, it is not recommended to do this in an interactive application. The main reason is that while you are in this loop, nothing else can run. Your application will be unresponsive.
That's why window systems are event-based. Every few milliseconds, the window system pings your app so you can update your state, for example to advance an animation. This is the idle function. When the state of your program has changed, you tell the window system that you would like to draw again. It is then up to the window system to call your display function. You do your OpenGL calls when the system tells you to.
If you use GLUT for communicating with the window system, this looks like the code below. Other libraries like GLFW have equivalent functions.
int main() {
    ...                        // Create window, set everything up.
    glutIdleFunc(update);      // Register idle function
    glutDisplayFunc(display);  // Register display function
    glutMainLoop();            // The window system is in charge from here on.
}

void update() {
    edit_transformation();     // Update your models
    glutPostRedisplay();       // Tell the window system that something changed.
}

void display() {
    draw();                    // Your OpenGL code here.
    glFlush();                 // or glutSwapBuffers();
}

Is there a way to wait for the last command sent to the GPU before the next command?

I have the following code in the display function (it's the whole display func; the data changes in the idle func):
glClear(GL_COLOR_BUFFER_BIT);
glDrawPixels(100,100,GL_RGBA,GL_FLOAT,data);
glutSwapBuffers();
glutPostRedisplay();
And when I run this code, it sometimes flashes the same color as the background.
I think the GPU processes the glDrawPixels before the clear, and then the clear wipes the buffer.
I could add a delay between the clear and the draw, but the data changes almost every frame.
What should I do?
Is there a flush like command?
I suspect you forgot to specify GLUT_DOUBLE in your glutInitDisplayMode() call, and were thus given a non-double-buffered context.
Also, it's not customary to include a glutPostRedisplay() within your display function--that should be at the end of your idle function instead.
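A minimal sketch of that setup; data and whatever updates it are assumed to come from your existing program:
// 'data' and update_data() are assumed to exist in your program already.
void display(void) {
    glClear(GL_COLOR_BUFFER_BIT);
    glDrawPixels(100, 100, GL_RGBA, GL_FLOAT, data);
    glutSwapBuffers();                              // no glutPostRedisplay() here
}

void idle(void) {
    update_data();                                  // change the pixel data
    glutPostRedisplay();                            // request the next frame from the idle func
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);   // ask for a double-buffered context
    glutCreateWindow("demo");
    glutDisplayFunc(display);
    glutIdleFunc(idle);
    glutMainLoop();
    return 0;
}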