Difference between single buffered (GLUT_SINGLE) and double buffered (GLUT_DOUBLE) drawing - OpenGL

I'm using the example here; it works under
glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB);
but it becomes a transparent window when I set it to
glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
But I need that example to work, with some drawing, under GLUT_DOUBLE mode.
So what's the difference between GLUT_DOUBLE and GLUT_SINGLE?

When using GLUT_SINGLE, you can picture your code drawing directly to the display.
When using GLUT_DOUBLE, you can picture having two buffers. One of them is always visible, the other one is not. You always render to the buffer that is not currently visible. When you're done rendering the frame, you swap the two buffers, making the one you just rendered visible. The one that was previously visible is now invisible, and you use it for rendering the next frame. So the role of the two buffers is reversed each frame.
In reality, the underlying implementation works somewhat differently on most modern systems. For example, some platforms use triple buffering to prevent blocking when a buffer swap is requested. But that doesn't normally concern you. The key is that it behaves as if you had two buffers.
The main difference, aside from specifying the different flag in the argument for glutInitDisplayMode(), is the call you make at the end of the display function. This is the function registered with glutDisplayFunc(), which is DrawCube() in the code you linked.
In single buffer mode, you call this at the end:
glFlush();
In double buffer mode, you call:
glutSwapBuffers();
So all you should need to do is replace the glFlush() at the end of DrawCube() with glutSwapBuffers() when using GLUT_DOUBLE.
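A minimal sketch of what the display callback then looks like (DrawCube is the name from the linked example; the drawing calls themselves stay as they are, only the final call changes):
void DrawCube(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... the existing cube drawing code, unchanged ... */
    glutSwapBuffers(); /* was glFlush() in GLUT_SINGLE mode */
}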

When drawing to a single buffered context (GLUT_SINGLE), there is only one framebuffer that is used both to draw and to display the content. This means that you draw more or less directly to the screen. In addition, things drawn last in a frame are shown for a shorter time than objects drawn at the beginning.
In a double buffered scenario (GLUT_DOUBLE), there exist two framebuffers. One is used for drawing, the other one for display. At the end of each frame, these buffers are swapped. That way, the view only changes once a frame is finished, and all objects are visible for the same amount of time.
That being said: are you sure that the transparent window is caused by GLUT_DOUBLE and not by using GLUT_RGBA instead of GLUT_RGB?

Related

Single buffering in OpenGL without clearing

I'd like to draw points (a lot of them) and let them appear progressively on the screen. So my idea is to draw them on the screen without clearing it.
However this would work only with single buffering: with double buffering, half of the dots would end up in one buffer and the other half in the second.
I have two questions:
How can I have a single buffer that allows me to see the drawing appear as it is drawn (here a lot of dots making up geometrical forms)?
Will this affect the performance of OpenGL, i.e. will it be significantly slower? (A bit slower is not a problem.)
The simplest solution is to stop rendering to the default framebuffer entirely. Render to a texture attached to an FBO instead (if you need a depth buffer, you'll need to create one of those too). When you want to display the image, then blit your texture FBO to the default framebuffer and issue a swap-buffers call.
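To make that concrete, here is a rough sketch of the setup and per-frame flow, assuming the FBO has the same size as the window; names like pointFBO, colorTex and drawNewPoints() are illustrative, not from the question:
// setup, done once
GLuint pointFBO, colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glGenFramebuffers(1, &pointFBO);
glBindFramebuffer(GL_FRAMEBUFFER, pointFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glClear(GL_COLOR_BUFFER_BIT);                      // clear once, never again

// every frame
glBindFramebuffer(GL_FRAMEBUFFER, pointFBO);
drawNewPoints();                                   // only the points added since last frame
glBindFramebuffer(GL_READ_FRAMEBUFFER, pointFBO);  // copy the accumulated image...
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);         // ...to the default framebuffer
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
glutSwapBuffers();                                 // or your toolkit's swap call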

Do you have to call glViewport every time you bind a frame buffer with a different resolution?

I have a program with about 3 framebuffers of varying sizes. I initialise them at the start, give them the appropriate render target and change the viewport size for each one.
I originally thought that you only had to call glViewport when you initialise the framebuffer, but this creates problems in my program, so I assume that's wrong? Because they all differ in resolution, right now when I render each frame I bind the first framebuffer, change the viewport size to fit that framebuffer, bind the second framebuffer, change the viewport size to fit the resolution of the second framebuffer, bind the third framebuffer, change the viewport size to fit it, then bind the window framebuffer and change the viewport size to the resolution of the window.
Is this necessary, or is something else in the program to blame? This is done every frame, so I'm worried it would have a slight unnecessary overhead if I don't have to do it.
You always need to call glViewport() before starting to draw to a framebuffer with a different size. This is necessary because the viewport is not part of the framebuffer state.
If you look, for example, at the OpenGL 3.3 spec: section 6.2, titled "State Tables" and starting on page 278, contains tables with the entire state, showing the scope of each piece of state:
Table 6.23 on page 299 lists "state per framebuffer object". The only state listed are the draw buffers and the read buffer. If the viewport were part of the framebuffer state, it would be listed here.
The viewport is listed in table 6.8 "transformation state". This is global state, and not associated with any object.
OpenGL 4.1 introduces multiple viewports. But they are still part of the global transformation state.
If you wonder why it is like this, the only real answer is that it was defined this way. Looking at the graphics pipeline, it does make sense. While the glViewport() call makes it look like you're specifying a rectangle within the framebuffer that you want to render to, the call does in fact define a transformation that is applied as part of the fixed function block between vertex shader (or geometry shader, if you have one) and fragment shader. The viewport settings determine how NDC (Normalized Device Coordinates) are mapped to Window Coordinates.
The framebuffer state, on the other hand, determines how the fragment shader output is written to the framebuffer. So it controls an entirely different part of the pipeline.
From the way viewports are normally used by applications, I think it would have made more sense to make the viewport part of the framebuffer state. But OpenGL is really an API intended as an abstraction of the graphics hardware, and from that point of view, the viewport is independent of the framebuffer state.
I originally thought that you only had to call glViewport when you initialise the framebuffer, but this creates problems in my program, so I assume that's wrong?
Yes, it is a wrong assumption (probably fed by countless bad tutorials which misplace glViewport).
glViewport always belongs in the drawing code. You always call glViewport with the right parameters just before you're about to draw something into a framebuffer. The parameters set by glViewport are used in the transformation pipeline, so you should think of glViewport as a command similar to glTranslate (in the fixed function pipeline) or glUniform.
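As a sketch, a per-frame render loop then looks like this (fbo1, fbo2 and the size variables stand in for the framebuffers described in the question):
glBindFramebuffer(GL_FRAMEBUFFER, fbo1);
glViewport(0, 0, fbo1Width, fbo1Height);      // viewport matching the first render target
// ... draw the first pass ...
glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
glViewport(0, 0, fbo2Width, fbo2Height);      // viewport matching the second render target
// ... draw the second pass ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);         // back to the window's framebuffer
glViewport(0, 0, windowWidth, windowHeight);
// ... draw the final pass to the window ...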

OpenGL: don't call glClear() in render

I draw GLUT primitives each render, more and more of them. To make things faster, I decided not to clear every time and just add new primitives. Is that just wrong?
When I did this, I got blinking. Putting a sleep() in showed that one frame is OK, the next is empty, and so on.
EDIT:
Brief code in render (display) that is executed once (I use Java's JOGL):
gl.glPushMatrix();
gl.glColor3f(1, 1, 0);
gl.glTranslatef(0, 0, 0);
glut.glutSolidCube(10);
gl.glPopMatrix();
drawable.swapBuffers();
Sure it is empty. With double buffering you always draw into the back buffer. When swapBuffers() is called, the back buffer becomes the front (visible) buffer, and the buffer that was visible before becomes the new back buffer that your next frame is drawn into. That is how double buffering works. If you don't clear, your drawings accumulate over time, but they accumulate alternately in two different buffers, which is most likely not the result you want.
Clearing the buffer once at the beginning of every render loop is not a big performance hit. The problem appears when you call glClear() frequently, like calling it before each object you draw, which also doesn't make sense, as in such a case you would see only the last drawn object.
As for the flickering: you should describe more precisely how you do it all; from your example it is unclear why it happens.
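A minimal sketch of the usual fix, in C/GLUT terms (the JOGL calls map one-to-one; drawAllPrimitivesSoFar() is an illustrative placeholder): clear at the start of every frame and redraw everything, instead of trying to accumulate drawings across swaps.
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawAllPrimitivesSoFar();   // redraw every primitive added up to now
    glutSwapBuffers();          // never rely on what is left in the back buffer
}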
gl.glDisable(GL_DEPTH_TEST);
?
It's hard to say without seeing more of your code.
Whenever I get unexpected results in OpenGL code, I mentally go through the list of state possibilities and set each of them either enabled or disabled:
Depth Test
Texturing
Lighting
Blending
Culling
Framebuffers
Shaders
SwapBuffers(HDC) doesn't actually copy the contents of the buffer but merely swaps the front and back buffers; that is why you see every odd frame, but not the even ones.

How to make a step by step display animation in OpenGL?

I'm doing a RepRap printer project to read a GCode file and interpret it into graphics.
Now I have difficulty making a step by step animation of drawing the whole object.
I need to draw many short lines to make up a whole object.
for example:
|-----|
| |
| |
|-----|
The square is made up of many short lines, and each line is generated by code like:
glPushMatrix();
.....
for (int i = 0; i < instruction.size(); i++)
{ ....
glBegin(GL_LINES);
glVertex3f(oldx, oldy, oldz);
glVertex3f(x, y, z);
glEnd();
}
glPopMatrix();
Now I want to make a step animation to display how this square is made. I tried to refresh the screen each time a new line is drawn, but it doesn't work; the whole square just comes out at once. Does anyone know how to make this?
Typical OpenGL implementations will queue up a large number of calls to batch them together into bursts of activity, to make optimal use of available communication bandwidth and GPU time resources.
What you want to do is basically the opposite of double buffered rendering, i.e. rendering where each drawing step is immediately visible. One way to do this is by rendering to a single buffered window and call glFinish() after each step. Major drawback: It's likely to not work well on modern systems, which use compositing window managers and similar.
Another approach, which I recommend, is using a separate buffer for incremental drawing, and constantly refreshing the main framebuffer from this one. The key subjects are Frame Buffer Object and Render To Texture.
First you create an FBO (there are tons of tutorials out there, and answers on StackOverflow). An FBO is basically an abstraction to which you can connect target buffers, like textures or renderbuffers, and which can be bound as the destination of drawing calls.
So how do you solve your problem with them? First, you should not do the animation by delaying a drawing loop. There are several reasons for this, but the main issue is that you lose program interactivity that way. Instead you maintain a (global) counter for the step of the animation you are at. Let's call it step:
int step = 0;
Then in your drawing function you have two phases: 1) texture update, 2) screen refresh.
Phase one consists of binding your framebuffer object as the render target. For this to work, the target texture must be unbound:
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, animFBO);
glViewport(0, 0, fbo.width, fbo.height);
set_animFBO_projection();
The trick now is that you clear the animFBO only once, namely right after creation, and then never again. Now you draw your lines according to the animation step:
draw_lines_for_step(step);
and increment the step counter (you could do this as a compound statement, but this is more explicit):
step++;
After updating the animation FBO it's time to update the screen. First unbind the animFBO
glBindFramebuffer(GL_FRAMEBUFFER, 0);
We're now on the main, on-screen framebuffer
glViewport(0, 0, window.width, window.height);
set_window_projection(); //most likely a glMatrixMode(GL_PROJECTION); glOrtho(0, 1, 0, 1, -1, 1);
Now bind the FBO attached texture and draw it to a full viewport quad
glBindTexture(GL_TEXTURE_2D, animFBOTexture);
draw_full_viewport_textured_quad();
Finally do the buffer swap to show the animation step iteration
SwapBuffers();
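Putting the pieces together, a display callback might look roughly like this (a sketch only; animFBO, animFBOTexture, step and the helper functions are the placeholders introduced above):
void display(void)
{
    // phase 1: advance the animation inside the FBO (cleared only once, at creation)
    glBindTexture(GL_TEXTURE_2D, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, animFBO);
    glViewport(0, 0, fbo.width, fbo.height);
    set_animFBO_projection();
    draw_lines_for_step(step);
    step++;

    // phase 2: show the accumulated result on screen
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, window.width, window.height);
    set_window_projection();
    glBindTexture(GL_TEXTURE_2D, animFBOTexture);
    draw_full_viewport_textured_quad();
    SwapBuffers();
}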
You should have the SwapBuffers call made after each draw call.
Be sure you don't mess up the matrix stack, and you'll probably need something to "pause" the rendering, like a breakpoint.
If you only want the lines to appear one after another, and you don't have to be nit-picking about efficiency or good programming style, try something like this:
(in your drawing routine)
if (timer > 100)
{
//draw the next line
timer = 0;
}
else
timer++;
//draw all the other lines (you have to remember which ones have already appeared)
//for example using a boolean array "lineDrawn[10]"
The timer is an integer that counts how often you have drawn the scene. If you make the threshold (here 100) larger, things appear more slowly on the screen when you run your program.
Of course this only works if you have a draw routine. If not, I strongly suggest using one.
There are plenty of tutorials pretty much everywhere, e.g.
http://nehe.gamedev.net/tutorial/creating_an_opengl_window_%28win32%29/13001/
Good luck to you!
PS: I think you have done nearly the same thing, but without a timer. That's why everything was drawn so fast that you thought it all appeared at the same time.
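If you are using GLUT anyway, the same idea can be driven by glutTimerFunc instead of counting frames. A sketch, where seg and totalSegments are illustrative names for the line segments parsed from the GCode:
int drawnCount = 0;                               // how many segments are currently revealed

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBegin(GL_LINES);
    for (int i = 0; i < drawnCount; i++) {        // only the segments revealed so far
        glVertex3f(seg[i].x0, seg[i].y0, seg[i].z0);
        glVertex3f(seg[i].x1, seg[i].y1, seg[i].z1);
    }
    glEnd();
    glutSwapBuffers();
}

void step(int value)
{
    if (drawnCount < totalSegments)
        drawnCount++;                             // reveal one more segment
    glutPostRedisplay();
    glutTimerFunc(100, step, 0);                  // next step in 100 ms
}

// registered once at startup:
// glutDisplayFunc(display); glutTimerFunc(100, step, 0);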

How to render offscreen on OpenGL? [duplicate]

This question already has answers here:
How to use GLUT/OpenGL to render to a file?
My aim is to render OpenGL scene without a window, directly into a file. The scene may be larger than my screen resolution is.
How can I do this?
I want to be able to set the render area to any size, for example 10000x10000, if possible.
It all starts with glReadPixels, which you will use to transfer the pixels stored in a specific buffer on the GPU to the main memory (RAM). As you will notice in the documentation, there is no argument to choose which buffer. As is usual with OpenGL, the current buffer to read from is a state, which you can set with glReadBuffer.
So a very basic offscreen rendering method would be something like the following. I use C++ pseudo-code, so it will likely contain errors, but it should make the general flow clear:
//Before swapping
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_BACK);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,&data[0]);
This will read the current back buffer (usually the buffer you're drawing to). You should call this before swapping the buffers. Note that you can also perfectly read the back buffer with the above method, clear it and draw something totally different before swapping it. Technically you can also read the front buffer, but this is often discouraged as theoretically implementations were allowed to make some optimizations that might make your front buffer contain rubbish.
There are a few drawbacks with this. First of all, we don't really do offscreen rendering, do we? We render to the screen buffers and read from those. We can emulate offscreen rendering by never swapping the back buffer in, but it doesn't feel right. Next to that, the front and back buffers are optimized to display pixels, not to read them back. That's where Framebuffer Objects come into play.
Essentially, an FBO lets you create a non-default framebuffer (i.e. other than the FRONT and BACK buffers) that allows you to draw to a memory buffer instead of the screen buffers. In practice, you can either draw to a texture or to a renderbuffer. The former is optimal when you want to re-use the pixels in OpenGL itself as a texture (e.g. a naive "security camera" in a game), the latter if you just want to render/read back. With this, the code above would become something like the following. Again pseudo-code, so don't kill me if I mistyped something or forgot some statements.
//Somewhere at initialization
GLuint fbo, render_buf;
glGenFramebuffers(1,&fbo);
glGenRenderbuffers(1,&render_buf);
glBindRenderbuffer(GL_RENDERBUFFER, render_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buf);
//At deinit:
glDeleteFramebuffers(1,&fbo);
glDeleteRenderbuffers(1,&render_buf);
//Before drawing
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
//after drawing
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,&data[0]);
// Return to onscreen rendering:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
This is a simple example; in reality you likely also want storage for the depth (and stencil) buffer. You also might want to render to a texture, but I'll leave that as an exercise. In any case, you will now perform real offscreen rendering, and it might work faster than reading the back buffer.
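For completeness, attaching a depth renderbuffer to the same FBO would look roughly like this (a sketch, following the names used above):
GLuint depth_buf;
glGenRenderbuffers(1, &depth_buf);
glBindRenderbuffer(GL_RENDERBUFFER, depth_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth_buf);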
Finally, you can use pixel buffer objects to make the read-back asynchronous. The problem is that glReadPixels blocks until the pixel data is completely transferred, which may stall your CPU. With PBOs the implementation may return immediately, as it controls the buffer anyway. It is only when you map the buffer that the pipeline will block. However, PBOs may be optimized to buffer the data solely in RAM, so this block could take a lot less time. The read-back code would become something like this:
//Init:
GLuint pbo;
glGenBuffers(1,&pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width*height*4, NULL, GL_DYNAMIC_READ);
//Deinit:
glDeleteBuffers(1,&pbo);
//Reading:
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,0); // 0 instead of a pointer, it is now an offset in the buffer.
//DO SOME OTHER STUFF (otherwise this is a waste of your time)
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo); //Might not be necessary...
pixel_data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
The part in caps is essential. If you just issue a glReadPixels to a PBO, followed by a glMapBuffer of that PBO, you gained nothing but a lot of code. Sure the glReadPixels might return immediately, but now the glMapBuffer will stall because it has to safely map the data from the read buffer to the PBO and to a block of memory in main RAM.
Please also note that I use GL_BGRA everywhere; this is because many graphics cards internally use it as the optimal rendering format (or the GL_BGR version without alpha), so it should be the fastest format for pixel transfers like this. I'll try to find the NVIDIA article I read about this a few months back.
When using OpenGL ES 2.0, GL_DRAW_FRAMEBUFFER might not be available; in that case you should just use GL_FRAMEBUFFER.
I'll assume that creating a dummy window (you don't render to it; it's just there because the API requires you to make one) that you create your main context into is an acceptable implementation strategy.
Here are your options:
Pixel buffers
A pixel buffer, or pbuffer (which isn't a pixel buffer object), is first and foremost an OpenGL context. Basically, you create a window as normal, then pick a pixel format from wglChoosePixelFormatARB (pbuffer formats must be gotten from here). Then, you call wglCreatePbufferARB, giving it your window's HDC and the pixel buffer format you want to use. Oh, and a width/height; you can query the implementation's maximum width/heights.
The default framebuffer of a pbuffer is not visible on the screen, and the max width/height is whatever the hardware wants to let you use. You can render to it and use glReadPixels to read back from it.
You'll need to share your window's context with the pbuffer context if you have created objects in the window context. Otherwise, you can use the pbuffer context entirely separately. Just don't destroy the window context.
The advantage here is greater implementation support (though most drivers that don't support the alternatives are also old drivers for hardware that's no longer being supported. Or is Intel hardware).
The downsides are these. Pbuffers don't work with core OpenGL contexts. They may work for compatibility, but there is no way to give wglCreatePbufferARB information about OpenGL versions and profiles.
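For reference, the pbuffer path looks roughly like this on Windows (a sketch only: extension loading and error handling are omitted, and the attribute values are typical choices rather than requirements):
int attribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
    WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,      32,
    WGL_DEPTH_BITS_ARB,      24,
    0
};
int format = 0; UINT numFormats = 0;
wglChoosePixelFormatARB(windowDC, attribs, NULL, 1, &format, &numFormats);

HPBUFFERARB pbuffer = wglCreatePbufferARB(windowDC, format, width, height, NULL);
HDC   pbufferDC = wglGetPbufferDCARB(pbuffer);
HGLRC pbufferRC = wglCreateContext(pbufferDC);
wglMakeCurrent(pbufferDC, pbufferRC);
// ... render, then glReadPixels() as usual ...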
Framebuffer Objects
Framebuffer Objects are more "proper" offscreen rendertargets than pbuffers. FBOs are within a context, while pbuffers are about creating new contexts.
FBOs are just a container for images that you render to. The maximum dimensions that the implementation allows can be queried; you can assume it to be GL_MAX_VIEWPORT_DIMS (make sure an FBO is bound before checking this, as it changes based on whether an FBO is bound).
Since you're not sampling textures from these (you're just reading values back), you should use renderbuffers instead of textures. Their maximum size may be larger than that of textures.
The upside is the ease of use. Rather than have to deal with pixel formats and such, you just pick an appropriate image format for your glRenderbufferStorage call.
The only real downside is the narrower band of hardware that supports them. In general, anything that AMD or NVIDIA makes that they still support (right now, GeForce 6xxx or better [note the number of x's], and any Radeon HD card) will have access to ARB_framebuffer_object or OpenGL 3.0+ (where it's a core feature). Older drivers may only have EXT_framebuffer_object support (which has a few differences). Intel hardware is potluck; even if they claim 3.x or 4.x support, it may still fail due to driver bugs.
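As a quick sanity check before allocating a very large render target, you can query the relevant limits first (a sketch; which limit applies depends on whether you render to renderbuffers or textures):
GLint maxRb = 0, maxViewport[2] = { 0, 0 };
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE, &maxRb);   // largest renderbuffer dimension
glGetIntegerv(GL_MAX_VIEWPORT_DIMS, maxViewport);  // query with an FBO bound, as noted above
if (width > maxRb || height > maxRb) {
    // a 10000x10000 target won't fit in a single FBO on this implementation;
    // fall back to rendering the image in tiles
}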
If you need to render something that exceeds the maximum FBO size of your GL implementation, libtr works pretty well:
The TR (Tile Rendering) library is an OpenGL utility library for doing tiled rendering. Tiled rendering is a technique for generating large images in pieces (tiles).
TR is memory efficient; arbitrarily large image files may be generated without allocating a full-sized image buffer in main memory.
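Typical TR usage looks roughly like this, going by the library's documentation (a sketch; check the exact calls against the tr.h you have):
TRcontext *tr = trNew();
trTileSize(tr, 512, 512, 0);                          // render in 512x512 tiles
trImageSize(tr, 10000, 10000);                        // size of the final image
trImageBuffer(tr, GL_RGB, GL_UNSIGNED_BYTE, image);   // image points to a 10000*10000*3 byte buffer
trFrustum(tr, left, right, bottom, top, zNear, zFar); // instead of glFrustum
int more;
do {
    trBeginTile(tr);
    drawScene();                                      // your normal drawing code
    more = trEndTile(tr);
} while (more);
trDelete(tr);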
The easiest way is to use something called Frame Buffer Objects (FBOs). You will still have to create a window to get an OpenGL context, though (but this window can be hidden).
The easiest way to fulfill your goal is to use an FBO for off-screen rendering. You don't need to render to a texture and then fetch the texture image; just render to a renderbuffer and use glReadPixels. See Framebuffer Object Examples.