I want to display a 2D array of pixels directly on the screen. The pixel data is not static; it changes on user-triggered events such as a mouse move. I would like a display framebuffer that I can write to directly and have the result show up on the screen.
I have tried creating a texture with glTexImage2D(). I then render this texture onto a quad, and update the texture with glTexSubImage2D() whenever a pixel is modified.
It works!
But I guess this is not the efficient way to do it. My glTexSubImage2D call copies the whole array back to the texture, including the unmodified pixels, which is not good performance-wise.
Is there any other way, like having a "display framebuffer" to which I could write only the modified pixels and have the change show up on the screen?
glBlitFramebuffer is what you want.
Copies a rectangular block of pixels from one frame buffer to another. Can stretch or compress, but doesn't go through shaders and whatnot.
You'll probably also need some combination of glBindFramebuffer, glFramebufferTexture, glReadBuffer to set up the source and destination.
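A rough sketch of that setup, assuming a GL 3.0+ context, that tex already contains your pixel data, and that fbo is a framebuffer object you created yourself (tex, fbo, texW, texH, winW and winH are placeholder names):

// Source: the FBO with your texture attached to it
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
glReadBuffer(GL_COLOR_ATTACHMENT0);

// Destination: the default framebuffer (the window)
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glDrawBuffer(GL_BACK);

// Copy (and stretch, if the rectangles differ) straight to the screen
glBlitFramebuffer(0, 0, texW, texH,   // source rectangle
                  0, 0, winW, winH,   // destination rectangle
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);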
I've been learning a bit of OpenGL lately, and I just got to framebuffers.
So by my current understanding, if you have a framebuffer of your own and you want to draw its color buffer onto the window, you need to draw a quad and wrap the texture over it? Is that right? Or is there something like a glDrawArrays()/glDrawElements() equivalent for framebuffers?
It seems a bit odd (clunky? hackish?) to me that you have to wrap a texture over a quad in order to draw the framebuffer, since this doesn't have to be done with the default framebuffer. Or is that done behind the scenes?
Well, the main point of framebuffer objects is to render scenes to buffers that will not get displayed, but rather reused somewhere else as a source of data for some other operation (shadow maps, high-dynamic-range processing, reflections, portals...).
If you want to display it, why do you use a custom framebuffer in the first place?
Now, as @CoffeeandCode comments, there is indeed a glBlitFramebuffer call that allows transferring pixels from one framebuffer to another. But before you go ahead and use it, ask yourself why you need that extra step. It's not a free operation...
I'm trying to render some text, but at the moment I'm rendering each glyph separately, which is slow and inefficient.
Therefore I want to change the system, so the text is just rendered into a separate texture once, whenever it changes, and then that texture should be rendered onscreen in the main render pass.
So far so good. The problem is that, to draw it over the main scene, I only see two options. I could designate a specific color (e.g. green) as 'transparent', clear the framebuffer texture of the text to that color, draw the text, and afterwards use a shader to render the result onto the main scene, minus the transparent color.
While that would work, I wouldn't be able to use that color for the actual text anymore.
Instead I'd much rather clear the alpha of the frame buffer texture entirely (to get a colorless, blank slate essentially) and then draw the text, but that doesn't seem to be possible?
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
Doing this will just apply the specified RGB values with the alpha as the 'intensity' of those colors. In this case it wouldn't do anything at all, because the color components are disabled. But I need to change the existing alpha of the texture in the framebuffer, without using glDrawPixels (which is too slow).
Now, I could of course write an additional shader that sets the alpha value of each fragment to 0, but that doesn't seem as efficient/fast.
What's the best way to handle something like this?
So far so good. The problem is that, to draw it over the main scene, I only see two options. I could designate a specific color (e.g. green) as 'transparent', clear the framebuffer texture of the text to that color, draw the text, and afterwards use a shader to render the result onto the main scene, minus the transparent color.
You're overcomplicating the whole thing. If you render your text/glyphs into a texture that has just a single channel, used as the alpha channel, that gives you the glyphs' shape. The color is supplied as a vertex attribute and combined with the alpha from the texture when rendering.
If you want to get fancy, instead of rendering the bare glyphs to the texture you might produce a signed distance field map, to save on texture size while retaining high-quality text output.
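A minimal sketch of that approach, assuming GL 3.0+ (glyph coverage goes into a single-channel GL_R8 texture; atlasW, atlasH and coverageData are placeholder names, and the shader interface names are made up for illustration):

GLuint glyphTex;
glGenTextures(1, &glyphTex);
glBindTexture(GL_TEXTURE_2D, glyphTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, atlasW, atlasH, 0,
             GL_RED, GL_UNSIGNED_BYTE, coverageData);

// Fragment shader: the red channel of the texture supplies the alpha,
// the RGB comes from the interpolated vertex color.
const char *fs =
    "#version 330 core\n"
    "in vec2 uv;\n"
    "in vec4 color;\n"
    "uniform sampler2D glyphs;\n"
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    float a = texture(glyphs, uv).r;\n"
    "    fragColor = vec4(color.rgb, color.a * a);\n"
    "}\n";

Render the text quads with blending enabled (glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);) and no separate 'transparent color' is needed.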
Not long ago, I tried out a program from an OpenGL guidebook that was said to be double-buffered; it displayed a spinning rectangle on the screen. Unfortunately, I don't have the book anymore, and I haven't found a clear, straightforward definition of what a buffer is in general. My guess is that it is a "place" to draw things, where using several of them could be like layering?
If that is the case, I am wondering if I can use multiple buffers to my advantage for a polygon clipping program. I have a nice little window that allows the user to draw polygons on the screen, plus a utility to drag and draw a selection box over the polygons. When the user has drawn the selection rectangle and lets go of the mouse, the polygons will be clipped based on the rectangle boundaries.
That is doable enough, but I also want the user to be able to start over: when the escape key is pressed, the clip box should disappear, and the original polygons should be restored. Since I am doing things pixel-by-pixel, it seems very difficult to figure out how to change the rectangle pixel colors back to either black like the background or the color of a particular polygon, depending on where they were drawn (unless I find a way to save the colors when each polygon pixel is drawn, but that seems overboard). I was wondering if it would help to give the rectangle its own buffer, in the hopes that it would act like a sort of transparent layer that could easily be cleared off (?) Is this the way buffers can be used, or do I need to find another solution?
OpenGL does know multiple kinds of buffers:
Framebuffers: Portions of memory to which drawing operations are directed, changing pixel values in the buffer. By default OpenGL has on-screen buffers, which can be split into a front and a back buffer; drawing operations happen invisibly on the back buffer, which is swapped to the front when finished. In addition, OpenGL uses a depth buffer for depth testing (the Z-sorting implementation) and a stencil buffer used to limit rendering to cut-out (= stencil) portions of the framebuffer. There used to be auxiliary and accumulation buffers as well, but those have been superseded by so-called framebuffer objects: user-created objects that combine several textures or renderbuffers into new framebuffers which can be rendered to.
Renderbuffers: User created render targets, to be attached to framebuffer objects.
Buffer Objects (Vertex and Pixel): User defined data storage. Used for geometry and image data.
Textures: Textures are a sort of buffer too, i.e. they hold data which can be used as a source in drawing operations.
The usual approach with OpenGL is to rerender the whole scene whenever something changes. If you want to save those drawing operations you can copy the contents of the framebuffer to a texture and then just draw that texture to a single quad and overdraw it with your selection rubberband rectangle.
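A rough sketch of that copy-and-redraw idea (sceneTex is a placeholder for an RGBA texture you created at window size beforehand):

// After drawing the polygons, snapshot the framebuffer into the texture
glBindTexture(GL_TEXTURE_2D, sceneTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, winWidth, winHeight);

// Then, while the user drags the selection box, each frame:
//   1. draw a fullscreen quad textured with sceneTex (restores the polygons)
//   2. draw the rubberband rectangle on top
// When Escape is pressed, simply stop drawing the rectangle; the textured
// quad alone brings back the original image.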
I need to display an image in an OpenGL window.
The image changes every timer tick.
I've checked on Google how to do this, and as far as I can see it can be done using either the glBitmap or the glTexImage2D function.
What is the difference between them?
The difference? These two functions have nothing in common.
glBitmap is a function for drawing binary images. That's not a .BMP file or an image you load (usually). The function's name doesn't refer to the colloquial term "bitmap"; it refers to exactly that: a map of bits. Each bit in the bitmap represents a pixel. If the bit is 1, then the current raster color is written to the framebuffer. If the bit is 0, the pixel in the framebuffer is left untouched.
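For illustration, a tiny (legacy GL) glBitmap example: an 8x8 checker pattern drawn in whatever raster color is current when the raster position is set:

static const GLubyte bits[8] = { 0xAA, 0x55, 0xAA, 0x55,
                                 0xAA, 0x55, 0xAA, 0x55 };
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows are packed one byte each
glColor3f(1.0f, 0.0f, 0.0f);           // raster color: red
glRasterPos2i(10, 10);                 // latches the raster color and position
glBitmap(8, 8, 0.0f, 0.0f, 0.0f, 0.0f, bits);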
glTexImage2D is for allocating textures and optionally uploading pixel data to them. You can later draw triangles that have that texture mapped to them. But glTexImage2D by itself does not draw anything.
What you are probably looking for is glDrawPixels, which draws an image directly into the framebuffer. If you use glTexImage2D, you have to first update the texture with the new image, then draw a shape with that texture (say, a fullscreen quad) to actually render the image.
That said, you'll be better off with glTexImage2D if...
You're using a library like JOGL that makes binding textures from images an easy operation, or
You want to scale the image or display it in perspective
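A sketch of the texture route for a per-tick image (imgTex, w, h, pixels and drawTexturedQuad are placeholders; the quad drawing itself is not shown):

// Allocate the texture once
glGenTextures(1, &imgTex);
glBindTexture(GL_TEXTURE_2D, imgTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// Every timer tick: upload the new pixels, then redraw the quad
glBindTexture(GL_TEXTURE_2D, imgTex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);
drawTexturedQuad();  // hypothetical helper

// Legacy alternative, no texture involved
glRasterPos2i(0, 0);
glDrawPixels(w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);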
In each frame (as in frames per second) I render, I make a smaller version of it containing just the objects the user can select (and any selection-obstructing objects). In that buffer I render each object in a different color.
When the user has mouseX and mouseY, I look up in that buffer which color corresponds to that position, and from that find the corresponding object.
I can't work with FBOs, so I just render this buffer to a texture, rescale the texture orthogonally to the screen, and use glReadPixels to read a "hot area" around the mouse cursor. I know it's not the most efficient approach, but performance is OK for now.
Now I have the problem that this buffer of "colored objects" has some accuracy problems. Of course I disable all lighting and fragment shaders, but somehow I still get artifacts. Obviously I really need clean sheets of color without any variance.
Note that I put all the color information into an unsigned byte in GL_RED (assuming for now that I have at most 255 selectable objects).
Are these artifacts caused by rescaling the texture? (I could replace the rescaling by looking up scaled coordinates in the small texture.) Or do I need to disable some other flag to really get the colors that I want?
Can this technique even be used reliably?
It looks like you're using GL_LINEAR for your GL_TEXTURE_MAG_FILTER. Use GL_NEAREST instead if you don't want interpolated colors.
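For example (pickTex being a placeholder for your picking texture):

glBindTexture(GL_TEXTURE_2D, pickTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);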
I could replace this by looking up scaled coordinates in the small texture.
You should. Rescaling is more expensive than converting the coordinates for sure.
That said, scaling a uniform texture should not introduce artifacts if you keep an integer ratio (like a 2x upscale) with no fancy filtering. It looks blurry at the polygon edges, so I'm assuming that's not what you use.
Also, the rescaling should introduce variations only at the polygon boundaries. Did you check that there are no variations in the un-scaled texture? That would confirm whether it's the scaling that introduces your "artifacts".
What exactly do you mean by "variance"? Please explain in more detail.
Now a suggestion: in case your rendering doesn't depend on stencil buffer operations, you could write the object ID into the stencil buffer in the render pass to the window itself, instead of taking the detour through a separate texture. On current hardware you usually get 8 bits of stencil. Of course the best solution, if you want to use an index-buffer approach, is to use multiple render targets and render the object ID into an index buffer together with the color and the other stuff in one pass. See http://www.opengl.org/registry/specs/ARB/draw_buffers.txt
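A rough sketch of the stencil-buffer variant, assuming the window was created with a stencil buffer, at most 255 objects, and 0 meaning "nothing" (drawObject and objectCount are placeholders):

glEnable(GL_STENCIL_TEST);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
for (int i = 0; i < objectCount; ++i) {
    glStencilFunc(GL_ALWAYS, i + 1, 0xFF); // write ID i+1 wherever the object covers
    drawObject(i);                         // hypothetical helper
}

// At pick time, read the ID under the cursor (y flipped to GL's convention)
GLubyte id = 0;
glReadPixels(mouseX, winHeight - 1 - mouseY, 1, 1,
             GL_STENCIL_INDEX, GL_UNSIGNED_BYTE, &id);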