GL_COLOR_BUFFER_BIT regenerating which memory?

This is the code that probably all libgdx apps have:
Gdx.gl.glClearColor( 1, 0, 0, 1 );
Gdx.gl.glClear( GL20.GL_COLOR_BUFFER_BIT );
or
Gdx.gl.glClear(GL20.GL_DEPTH_BUFFER_BIT);
This sets the color with which the screen will be cleared (first line) and then clears it (second line). But what is the meaning of GL20.GL_COLOR_BUFFER_BIT? From the docs I got that GL is an interface wrapping all the methods of OpenGL ES 2.0 and that it is there so I can call those methods. The meaning of GL_COLOR_BUFFER_BIT is puzzling to me. It is supposed to regenerate the memory currently enabled for color writing... Does that mean it erases all images? Will it erase ShapeRenderer objects? Is there anything on the screen that isn't part of color writing and so won't be erased when this constant is used? Does GL_DEPTH_BUFFER_BIT erase the Z-position of textures?

When you draw things on the screen you don't draw them directly. Instead, they are first drawn to a so-called "back buffer". This is a block of memory (a buffer) that contains four bytes for every pixel of the screen, one byte for each color component (red, green, blue and alpha) of each pixel. When you are done drawing (when your render method finishes), this buffer is presented all at once on the screen.
The existing content of the buffer matters. For example, when you draw an image on the screen and then draw a semi-transparent image on top of it, the result is a mix of the two images. The first image is drawn to the back buffer, causing the memory of the back buffer to contain the pixel data of that image. Next, the second image is drawn and is blended on top of the existing data in the back buffer.
Each byte of that block of memory always has a value, e.g. 0 for black, 255 for white, etc. Even if you haven't drawn anything to the buffer yet, it has to hold some value. Calling glClear(GL20.GL_COLOR_BUFFER_BIT) instructs the GPU to fill the entire back buffer with a specified value (color). That value is set using the call to glClearColor. Note that you don't have to call glClearColor each time glClear is called; the driver remembers the previously set value.
Besides the back buffer for the color values of the screen (the color buffer), the GPU can have other types of buffers, one of which is the depth buffer. This is again a block of memory of a few bytes per pixel, but this time it contains a depth value for each pixel. This makes it possible, e.g. when rendering in 3D, to make sure that objects which are behind other objects are not drawn. This buffer is cleared using GL20.GL_DEPTH_BUFFER_BIT.
Note that you can clear both of them together using a bitwise or operation: Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
In practice, calling glClear should be the first thing you do in your render method (or after binding an FBO, for example). This is because it tells the driver that you don't care about the existing values of the buffer, which allows the driver to optimize: it doesn't have to reconstruct (copy back) the original memory block.
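Putting those pieces together, here is a minimal sketch of a render function in plain C (the Gdx.gl calls above map one-to-one onto these; the function name and the swap call depend on your windowing setup and are placeholders):

#include <GL/gl.h>

/* Minimal sketch of a render function: clear first, then draw. */
void render(void)
{
    /* The clear color is driver state; it could also be set once at
       startup, since glClear reuses the last value passed here. */
    glClearColor(1.0f, 0.0f, 0.0f, 1.0f);

    /* Clear color and depth together with a bitwise OR. */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* ... draw calls go here ... */

    /* Afterwards the windowing library swaps the back buffer to the
       front, e.g. SDL_GL_SwapWindow(window) or glfwSwapBuffers(window). */
}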

Related

SDL_LockTexture returning black pixels

I'm currently following LazyFoo's SDL tutorial and I've reached the Texture Manipulation part but I'm running into a problem. The first time I call SDL_LockTexture it works fine to update the texture. However, after SDL_UnlockTexture, if I attempt to lock the texture again it now returns a pointer to a set of zeroed pixels, instead of what they were set to after the last unlock.
SDL_LockTexture(newTexture, NULL, &pixels, &pitch);
memcpy(pixels, formattedSurface->pixels, formattedSurface->pitch * formattedSurface->h);
SDL_UnlockTexture(newTexture);
pixels = NULL;
This is the initial code that copies the pixels of a loaded surface to an empty texture; if rendered at this point, it displays the correct image.
However, if the texture is locked and unlocked again, the image displayed is a black box (the value of each pixel being 0).
I'm completely stumped. Please let me know if any more information is needed.
As stated on the wiki, the pixel data made available by SDL_LockTexture is intended only for writing data to a texture; it may not actually contain the current data for that texture. If you need a copy of a texture's pixel data, you need to keep it stored somewhere else.
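For illustration, a minimal sketch (myPixels and uploadPixels are hypothetical names) of keeping the authoritative copy client-side and only ever writing through the locked pointer:

#include <SDL.h>
#include <string.h>

/* Our own authoritative copy of the pixels, w * h 32-bit values;
   all edits happen here, never through the locked pointer. */
static Uint32 *myPixels;

static void uploadPixels(SDL_Texture *tex, int w, int h)
{
    void *dst;
    int pitch;
    /* Requires a texture created with SDL_TEXTUREACCESS_STREAMING. */
    if (SDL_LockTexture(tex, NULL, &dst, &pitch) == 0) {
        /* Copy row by row in case the texture pitch differs from
           our tightly packed buffer. */
        for (int y = 0; y < h; ++y)
            memcpy((Uint8 *)dst + y * pitch,
                   myPixels + y * w,
                   w * sizeof(Uint32));
        SDL_UnlockTexture(tex);
    }
}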

Is it possible to have a display framebuffer in OpenGL

I want to display a 2D array of pixels directly on the screen. The pixel data is not static and changes on user-triggered events like a mouse move. I wish to have a display framebuffer through which I could write directly to the screen.
I have tried to create a texture with glTexImage2D(). I then render this texture to a QUAD. And then I update the texture with glTexSubImage2D() whenever a pixel is modified.
It works!
But I guess this is not the efficient way to do it. glTexSubImage2D copies the whole array, including the unmodified pixels, back to the texture, which is not good performance-wise.
Is there any other way, like having a "display framebuffer" to which I could write only the modified pixels, with the change reflected on the screen?
glBlitFramebuffer is what you want.
Copies a rectangular block of pixels from one frame buffer to another. Can stretch or compress, but doesn't go through shaders and whatnot.
You'll probably also need some combination of glBindFramebuffer, glFramebufferTexture, glReadBuffer to set up the source and destination.
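A rough sketch of that setup, assuming an FBO named fbo with a texture already attached via glFramebufferTexture, and hypothetical texW/texH and winW/winH sizes:

/* Read from the FBO's color attachment... */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0);

/* ...and draw into the default framebuffer (the window). */
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glDrawBuffer(GL_BACK);

/* Source and destination rectangles may differ in size;
   GL_NEAREST avoids smoothing when scaling. */
glBlitFramebuffer(0, 0, texW, texH,   /* source rectangle */
                  0, 0, winW, winH,   /* destination rectangle */
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);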

Zoom window in OpenGL

I've implemented Game of Life using OpenGL buffers (as specified here: http://www.glprogramming.com/red/chapter14.html#name20). In this implementation each pixel is a cell in the game.
My program receives the initial state of the game (a 2D array). The array size, in my implementation, is the size of the window. This of course makes it "unplayable" if the array is 5x5 or some other small size.
At each iteration I'm reading the content of the framebuffer into a 2D array (its size is the window size):
glReadPixels(0, 0, win_x, win_y, GL_RGB, GL_UNSIGNED_BYTE, image);
Then, I'm doing the necessary steps to calculate the living and dead cells, and then draw a rectangle which covers the whole window, using:
glRectf(0, 0, win_x, win_y);
I want to zoom (or enlarge) the window without affecting the correctness of my code. If I resize the window, the framebuffer content won't fit inside image (the array). Is there a way of zooming the window (so that each pixel is drawn as several pixels) without affecting the framebuffer?
First, you seem to be learning OpenGL 2; I would suggest learning a newer version instead, as it is more powerful and efficient. A good tutorial can be found here: http://www.opengl-tutorial.org/
If I understand this correctly, you read in an initial state and draw it, then continuously read back the pixels on the screen, update the array based on the Game of Life logic, and draw it back? This seems overly complicated.
Reading the pixels back from the screen is unnecessary, and will cause complications if you try to enlarge the rects to more than a pixel.
I would say a good solution would be to keep a bit array (1 is an organism, 0 is not), possibly as a 2D array in memory, update the logic every, say, 30 iterations (for 30 fps), then draw all the rects to the screen, black for 1, white for 0, using glColor(r,g,b,a) tied to an if statement in a nested for loop, as sketched below.
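A sketch of that drawing loop (cells, W, H and CELL are hypothetical names for the grid and the on-screen size of one cell):

for (int y = 0; y < H; ++y) {
    for (int x = 0; x < W; ++x) {
        if (cells[y][x])
            glColor4f(0.0f, 0.0f, 0.0f, 1.0f);   /* black: live cell */
        else
            glColor4f(1.0f, 1.0f, 1.0f, 1.0f);   /* white: dead cell */
        /* Each cell covers CELL x CELL pixels instead of one. */
        glRectf(x * CELL, y * CELL, (x + 1) * CELL, (y + 1) * CELL);
    }
}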
Then, if you give your rects a negative z coordinate, you can "zoom in" using glTranslate(x,y,z) triggered by a keyboard button.
Of course, in a newer version of OpenGL, vertex buffers would make the code much cleaner and more efficient.
You can't store your game state directly in the window framebuffer and then resize it for rendering, since what is stored in the framebuffer is by definition what is about to be rendered. (You could overwrite it, but then you would lose your game state...) The simplest solution would be to just store the game state in an array (on the client side) and then update a texture based on it. Thus, for each cell that is set, you would set a pixel in the texture to the appropriate color. Each frame, you then render a fullscreen quad with that texture (with GL_NEAREST filtering).
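A sketch of that simple approach, assuming W and H are compile-time constants and using the legacy GL_LUMINANCE format to store one byte per cell:

GLubyte state[H][W];   /* 0 = dead, 255 = alive, one byte per cell */
GLuint tex;

/* One-time setup: allocate the texture and pick GL_NEAREST so each
   cell stays a crisp block when scaled up to the window. */
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   /* rows are tightly packed */
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, W, H, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, state);

/* Each frame: update `state` on the CPU, re-upload, then draw a
   fullscreen textured quad. */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H,
                GL_LUMINANCE, GL_UNSIGNED_BYTE, state);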
However, if you want to take advantage of your GPU, there are some tricks that can massively speed up the simulation by using a fragment shader to generate the texture. In this case you would actually have two textures that you ping-pong between: one containing the current game state, and the other containing the next game state. Each frame you would use your fragment shader (along with an FBO) to generate the next-state texture from the current-state texture. Afterwards, the two textures are swapped, so the next state becomes the current state. The current-state texture is then rendered to the screen the same way as above.
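A sketch of one ping-pong step (texA, texB, fbo and the Game of Life shader are assumed to already exist):

GLuint cur = texA, next = texB;

/* One simulation step: render `next` from `cur` through the shader. */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, next, 0);
glBindTexture(GL_TEXTURE_2D, cur);
/* ... draw a fullscreen quad with the Game of Life fragment shader ... */

/* Swap: the next state becomes the current state. */
GLuint tmp = cur; cur = next; next = tmp;

/* Then render `cur` to the window as in the simple approach. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);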
I tried to give an overview of how you might be able to offload the computation onto the GPU, but if I was unclear anywhere just ask! For a more detailed explanation feel free to ask another question.

When exactly is glClear(GL_DEPTH_BUFFER_BIT) and glClear(GL_COLOR_BUFFER_BIT) necessary?

In a nutshell, when should the color buffer be cleared and when should the depth buffer be cleared? Is it always at the start of the drawing of the current scene/frame? Are there times when you would draw the next frame without clearing these? Are there other times when you would clear these?
Ouff... even though it's a simple question it's a bit hard to explain ^^
They should be cleared before drawing the scene again. Yes, every time, if you want to avoid strange and nearly uncontrollable effects.
You don't want to clear the two buffers right after swapping while you still need their contents, for example when you bind the scene to a texture (render to texture), but right after that there's no more use for them.
The color buffer is, as the name says, a buffer storing the computed color data. For a better understanding, imagine you draw on a piece of paper: each point on the paper knows which color was drawn on top of it, and that's basically all of it.
But without the depth buffer, your color buffer is (except in some special cases like multiple render passes for FX effects) nearly useless. It's like a second piece of paper, but in grayscale: how dark the gray is decides how far the last drawn pixel is from the screen (relative to the zNear and zFar clipping planes).
If you instruct OpenGL to draw another primitive, it goes pixel by pixel and checks which depth value the new pixel would have. With the default depth test, it draws over the color buffer at that position only if the new pixel is closer to the screen than the value stored in the depth buffer for that pixel, and does nothing if it is farther away.
To recap: the color buffer stores the picture to be drawn on your screen, and the depth buffer decides whether a part of your primitive gets drawn at that position.
So clearing the buffers for a new scene is basically swapping in a fresh piece of paper to draw on top of. If you want to mess with the old picture, then keep it, but it's in better hands on the monitor (or on the wall^^).
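A minimal sketch of the behavior described above, using the default depth test:

/* One-time setup: enable the depth test. */
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);   /* a fragment wins only if it is closer */

/* Every frame: a fresh sheet of paper for both buffers. */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
/* ... draw the scene, then swap buffers ... */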

What exactly is a buffer in OpenGL, and how can I use multiple ones to my advantage?

Not long ago, I tried out a program from an OpenGL guidebook that was said to be double-buffered; it displayed a spinning rectangle on the screen. Unfortunately, I don't have the book anymore, and I haven't found a clear, straightforward definition of what a buffer is in general. My guess is that it is a "place" to draw things, where using several could be like layering?
If that is the case, I am wondering if I can use multiple buffers to my advantage for a polygon clipping program. I have a nice little window that allows the user to draw polygons on the screen, plus a utility to drag and draw a selection box over the polygons. When the user has drawn the selection rectangle and lets go of the mouse, the polygons will be clipped based on the rectangle boundaries.
That is doable enough, but I also want the user to be able to start over: when the escape key is pressed, the clip box should disappear, and the original polygons should be restored. Since I am doing things pixel-by-pixel, it seems very difficult to figure out how to change the rectangle pixel colors back to either black like the background or the color of a particular polygon, depending on where they were drawn (unless I find a way to save the colors when each polygon pixel is drawn, but that seems overboard). I was wondering if it would help to give the rectangle its own buffer, in the hopes that it would act like a sort of transparent layer that could easily be cleared off (?) Is this the way buffers can be used, or do I need to find another solution?
OpenGL knows multiple kinds of buffers:
Framebuffers: Portions of memory to which drawing operations are directed, changing pixel values in the buffer. OpenGL by default has on-screen buffers, which can be split into a front and a back buffer, where drawing operations happen invisibly on the back buffer and are swapped to the front when finished. In addition, OpenGL uses a depth buffer for depth testing (the Z-sort implementation) and a stencil buffer used to limit rendering to cut-out (= stencil) portions of the framebuffer. There used to be auxiliary and accumulation buffers, but those have been superseded by so-called framebuffer objects, which are user-created objects combining several textures or renderbuffers into new framebuffers that can be rendered to.
Renderbuffers: User created render targets, to be attached to framebuffer objects.
Buffer Objects (Vertex and Pixel): User defined data storage. Used for geometry and image data.
Textures: Textures are a sort of buffer too, i.e. they hold data which can be sourced in drawing operations.
The usual approach with OpenGL is to re-render the whole scene whenever something changes. If you want to save those drawing operations, you can copy the contents of the framebuffer to a texture, then just draw that texture to a single quad and overdraw it with your selection rubber-band rectangle.
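A sketch of that copy-to-texture idea (snapshot, winW and winH are hypothetical names; glCopyTexSubImage2D copies straight from the current read framebuffer):

GLuint snapshot;

/* One-time setup: allocate an empty RGB texture the size of the window. */
glGenTextures(1, &snapshot);
glBindTexture(GL_TEXTURE_2D, snapshot);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, winW, winH, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);

/* After drawing the polygons, snapshot the framebuffer into it. */
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, winW, winH);

/* To restore (e.g. on escape): draw a fullscreen quad textured with
   `snapshot`, which brings back the polygons without the rectangle. */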