glDrawPixels(GLsizei width, GLsizei height, GLenum format, GLenum type, const void *pixels);
Is there a function like this, except instead of accessing CPU memory, it accesses GPU memory? [Either a texture or a framebuffer object]
Let's cover all the bases here.
First, a direct answer: yes, there is such a function. It's called glDrawPixels. I'm not kidding.
glDrawPixels can certainly read from GPU memory, provided that you use a buffer object as the source of its pixel data (commonly called a "pixel buffer object"). Buffer objects are (theoretically, at least) in GPU memory, so they qualify.
However, you add onto this "either a texture or a framebuffer object". Under this qualification, you're asking, "is there a way to copy pixel data from one texture/framebuffer to the current framebuffer?"
Yes. glBlitFramebuffer can do that. It blits from the GL_READ_FRAMEBUFFER to the GL_DRAW_FRAMEBUFFER. And since you can attach images from textures to FBOs, you can copy from those images just fine. You can even copy from the default framebuffer to a renderbuffer or texture.
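A minimal sketch of the blit path, assuming a hypothetical "srcFBO" that has the source texture attached and the default framebuffer as the destination:

glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);   /* default framebuffer */
glBlitFramebuffer(0, 0, width, height,       /* source rectangle */
                  0, 0, width, height,       /* destination rectangle */
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);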
You can also employ glCopyImageSubData, which copies pixel rectangles from one image to another. It's a lot more convenient than glBlitFramebuffer if all you're doing is copying pixel data. This is quite new at present (GL 4.3, or ARB_copy_image). It cannot be used to copy data to the default framebuffer.
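If GL 4.3 or ARB_copy_image is available to you, the call looks like this sketch; "srcTex", "dstTex", "width" and "height" are illustrative names, not from the original answer:

/* copy a width x height rectangle from level 0 of one 2D texture to another */
glCopyImageSubData(srcTex, GL_TEXTURE_2D, 0, 0, 0, 0,
                   dstTex, GL_TEXTURE_2D, 0, 0, 0, 0,
                   width, height, 1);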
If it is in a texture:
set up orthographic frustum
disable blending, depth test, etc.
bind texture
draw screen-aligned textured quad with correct texture coordinates
I use this, for example, in Compositor::_drawPixels.
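For reference, a minimal fixed-function sketch of those steps; "tex", "winW" and "winH" are assumed to exist in the caller:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, winW, 0, winH, -1, 1);   /* orthographic projection */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glDisable(GL_BLEND);
glDisable(GL_DEPTH_TEST);

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glColor3f(1.0f, 1.0f, 1.0f);        /* don't tint the texture */

glBegin(GL_QUADS);                  /* screen-aligned quad */
glTexCoord2f(0, 0); glVertex2f(0,    0);
glTexCoord2f(1, 0); glVertex2f(winW, 0);
glTexCoord2f(1, 1); glVertex2f(winW, winH);
glTexCoord2f(0, 1); glVertex2f(0,    winH);
glEnd();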
glDrawPixels can read from a Buffer Object. Just do a
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, XXX)
before calling glDrawPixels.
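Spelled out, with a hypothetical buffer name "pbo" that was previously filled with BGRA pixels: once a non-zero GL_PIXEL_UNPACK_BUFFER binding is in place, the last argument of glDrawPixels is interpreted as a byte offset into the buffer rather than a client pointer.

glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glDrawPixels(width, height, GL_BGRA, GL_UNSIGNED_BYTE, (void *)0); /* offset 0 */
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);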
Caveat: glDrawPixels is deprecated...
Use glBlitFramebuffer, which operates on framebuffer objects. And this is not deprecated.
You can take advantage of format conversion, scaling and multisampling.
Related
I'm using textures in a grid: first a large texture (such as 1024x1024 or 2048x2048) is created without data, then the areas in use are set with glTexSubImage2D calls. However, I want all pixels to have an initial value of 0xffff, not zero. And it feels stupid to allocate megabytes of all-0xffff host memory only to initialize the texture's contents. So is it possible to set all pixels of a texture to a specific value with just a few calls?
Specifically, is it possible in OpenGL 2.1?
There is glClearTexImage, but it was introduced in OpenGL 4.4; see if it's available to you with the ARB_clear_texture extension.
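If it is available, the call is a one-liner; this sketch assumes a one-channel 16-bit texture and a hypothetical texture name "tex":

GLushort value = 0xffff;
glClearTexImage(tex, 0, GL_RED, GL_UNSIGNED_SHORT, &value); /* clear level 0 to 0xffff */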
If you're absolutely restricted to core OpenGL 2.1, allocating client memory and issuing a glTexImage2D call is the only way of doing that. In particular, you cannot even render to a texture with unextended OpenGL 2.1, so tricks like binding the texture to a framebuffer (OpenGL 3.0+) and clearing it aren't applicable. However, a one-time allocation and initialization of a 1-16 MB texture isn't that big of a problem, even if it feels 'stupid'.
Also note that the contents of a newly created texture image are undefined; you cannot rely on them being all zeros, so you have to initialize the texture one way or another.
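A sketch of that one-time initialization, assuming a 16-bit luminance texture (adjust the format/type to whatever you actually use):

#include <stdlib.h>  /* malloc, free */
#include <string.h>  /* memset */

size_t texels = (size_t)width * (size_t)height;
GLushort *data = malloc(texels * sizeof(GLushort));
memset(data, 0xff, texels * sizeof(GLushort)); /* every byte 0xff => every texel 0xffff */
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16, width, height, 0,
             GL_LUMINANCE, GL_UNSIGNED_SHORT, data);
free(data);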
I succeeded in rendering to texture with a Texturebuffer, using a VAO and shaders.
But an FBO has another option for the color buffer: a Renderbuffer. I searched a lot on the internet but cannot find any example of drawing a Renderbuffer as a Texturebuffer with shaders.
If I'm not wrong, Renderbuffer was released in OpenGL 3.30, and it's faster than Texturebuffer.
Can I use a Renderbuffer as a Texturebuffer? (Stupid question, huh? I think it should absolutely work, shouldn't it?)
If yes, please point me to or give any example of drawing a renderbuffer as a texture buffer.
My goal is just study, but I'd like to know: is that a better way to draw textures? Should we use it frequently?
First of all, don't use the term "texture buffer" when you really just mean texture. A "buffer texture"/"texture buffer object" is a different concept, completely unrelated here.
If I'm not wrong, Renderbuffer was released in OpenGL 3.30, and it's faster than Texturebuffer.
No. Renderbuffers have been there since FBOs were first invented. One being faster than the other is not generally true either; these are implementation details, and they are also irrelevant here.
Can I use a Renderbuffer as a Texturebuffer? (Stupid question, huh? I think it should absolutely work, shouldn't it?)
Nope. You can't use the contents of a renderbuffer directly as a source for texture mapping. Renderbuffers are just abstract memory regions the GPU renders to, and they are not in the format required for texturing. You can read the results back to the CPU using glReadPixels, or you could copy the data into a texture object, e.g. via glCopyTexSubImage - but that would be much slower than rendering directly into a texture.
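The copy-back path might look like this sketch; "fbo" (with the renderbuffer attached) and "tex" are illustrative names:

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo); /* read from the renderbuffer's FBO */
glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);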
So renderbuffers are good for a different set of use cases:
offscreen rendering (e.g. where the image results will be written to a file, or encoded to a video)
as helper buffers during rendering, like the depth buffer or stencil buffer, where you do not care about the final contents of these buffers anyway
as an intermediate buffer when the image data can't be directly used by the following steps, e.g. when using multisampling and then copying the result to a non-multisampled framebuffer or texture
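As an example of the helper-buffer case, here is a sketch of attaching a depth renderbuffer to an existing FBO ("fbo", "width" and "height" are assumed to exist already):

GLuint rbo;
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, rbo);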
It appears that you have your terminology mixed up.
You attach images to Framebuffer Objects. Those images can either be a Renderbuffer Object (this is an offscreen surface that has very few uses besides attaching and blitting) or they can be part of a Texture Object.
Use whichever makes sense. If you need to read the results of your drawing in a shader then obviously you should attach a texture. If you just need a depth buffer, but never need to read it back, a renderbuffer might be fine. Some older hardware does not support multisampled textures, so that is another situation where you might favor renderbuffers over textures.
Performance wise, do not make any assumptions. You might think that since renderbuffers have a lot fewer uses they would somehow be quicker, but that's not always the case. glBlitFramebuffer (...) can be slower than drawing a textured quad.
I have a texture that was created by another part of my code (with QT5's bindTexture, but this isn't relevant).
How can I set an OpenGL hint that this texture will be frequently updated?
glBindTexture(GL_TEXTURE_2D, textures[0]);
//Tell opengl that I plan on streaming this texture
glBindTexture(GL_TEXTURE_2D, 0);
There is no mechanism for indicating that a texture will be updated repeatedly; that only exists for buffers (e.g., VBOs, etc.) through the usage parameter. However, there are two possibilities:
Attach your texture to a framebuffer object and update it that way. That's probably the most efficient method to do what you're asking. The memory associated with the texture remains resident on the GPU, and you can update it at rendering speeds.
Try using a pixel buffer object (commonly called a PBO; it has the OpenGL buffer target GL_PIXEL_UNPACK_BUFFER) as the buffer that Qt writes its generated texture into, and mark that buffer as GL_DYNAMIC_DRAW. You'll still need to call glTexImage*D() with the buffer offset of the PBO (i.e., probably zero) for each update, but that approach may afford some efficiency over just blasting texels to the pipe directly through glTexImage*D().
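A sketch of that second option, with hypothetical names (pbo, size, newPixels, w, h); the texture upload then sources from the bound unpack buffer at offset 0:

glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, size, NULL, GL_DYNAMIC_DRAW); /* orphan old contents */
glBufferSubData(GL_PIXEL_UNPACK_BUFFER, 0, size, newPixels);
glBindTexture(GL_TEXTURE_2D, textures[0]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                GL_BGRA, GL_UNSIGNED_BYTE, (void *)0); /* offset, not a pointer */
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);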
There is no such hint. OpenGL defines functionality, not performance. Just upload to it whenever you need to.
I have discovered that there are still a fair number of drivers out there that don't support NPOT textures, so I'm trying to retrofit my 2D engine (based on OpenTK, which is in turn based on OpenGL) with Texture2D support instead of relying on GL_ARB_texture_rectangle. As part of this I am forcing all NPOT texture bitmaps to allocate extra space up to the next power-of-two size so they won't cause errors on these drivers. My question is: do I really have to resize the real bitmap and texture and allocate all that extra memory, or is there a way to tell OpenGL that I want a power-of-two-size texture but am only going to use a portion of it in the upper left?
Right now my call looks like this:
GL.TexImage2D(texTarget, 0, PixelInternalFormat.Rgba8, bmpUse.Width, bmpUse.Height, 0, PixelFormat.Bgra, PixelType.UnsignedByte, bits.Scan0);
This is after I have made bmpUse be a copy of my real texture bitmap with extra space on the right and bottom.
Use glTexImage2D with empty data to initialize the texture and glTexSubImage2D to fill a portion of it with data. Technically, OpenGL allows the data parameter given to glTexImage{1,2,3}D to be a null pointer, indicating that the texture object is just to be initialized. It depends on the language binding whether that feature remains supported in the target language - just test what happens if you pass a null pointer.
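In C-style GL the approach looks like this sketch (the OpenTK calls map one-to-one); "potWidth"/"potHeight" are the padded power-of-two sizes and "pixels" stands in for your original NPOT bitmap data:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, potWidth, potHeight, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, NULL);       /* allocate storage only */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, bmpWidth, bmpHeight,
                GL_BGRA, GL_UNSIGNED_BYTE, pixels);  /* fill the upper-left part */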
datenwolf is right on how to initialize the texture with just a partial image, but there are 2 issues with this you need to be aware of:
you need to remap the texture coordinates of your mesh, since the [0,1] texture range now also covers uninitialized padding, unlike with your original texture; the useful range is now [0, orig_width/padded_width] (see the sketch after this list)
wrapping of your texture will wrap the whole texture, not just your sub-part.
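Regarding the first point, the remap is just a scale; a sketch with illustrative names:

float sMax = (float)origWidth  / (float)paddedWidth;
float tMax = (float)origHeight / (float)paddedHeight;
/* a coordinate (s, t) on the original image becomes (s * sMax, t * tMax) */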
What is the difference between FBO and PBO? Which one should I use for off-screen rendering?
What is the difference between FBO and PBO?
A better question is how they are similar. The only thing that is similar about them is their names.
A Framebuffer Object (note the capitalization: framebuffer is one word, not two) is an object that contains multiple images which can be used as render targets.
A Pixel Buffer Object is:
A Buffer Object. FBOs are not buffer objects. Again: framebuffer is one word.
A buffer object that is used for asynchronous uploading/downloading of pixel data to/from images.
If you want to render to a texture or just a non-screen framebuffer, then you use FBOs. If you're trying to read pixel data back to your application asynchronously, or you're trying to transfer pixel data to OpenGL images asynchronously, then you use PBOs.
They're nothing alike.
An FBO (Framebuffer Object) is a target you can render images to other than the default framebuffer or screen.
A PBO (Pixel Buffer Object) allows asynchronous transfers of pixel data to and from the device. This can be helpful to improve overall performance when rendering if you have other things that can be done while waiting for the pixel transfer.
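For instance, an asynchronous readback through a PBO might look like this sketch (names are illustrative): glReadPixels returns immediately because it writes into the buffer object, and you map the buffer later, e.g. on the next frame.

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, size, NULL, GL_STREAM_READ);
glReadPixels(0, 0, w, h, GL_BGRA, GL_UNSIGNED_BYTE, (void *)0); /* into the PBO */
/* ... do other useful work here ... */
void *data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
/* use data ... */
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);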
I would read VBOs, PBOs and FBOs:
Apple has posted two very nice bits of sample code demonstrating PBOs and FBOs. Even though these are Mac-specific, as sample code they're good on any platform because PBOs and FBOs are OpenGL extensions, not windowing system extensions.
So what are all these objects? Here's the situation:
I want to highlight something.
An FBO is not a memory block. Think of it as a struct of pointers. You must attach a texture to an FBO to use it. After attaching a texture, you can draw to it for offscreen rendering or for a second-pass effect.
/* conceptual sketch, not a real GL API type */
struct FBO {
    AttachColor0 *ptr0; /* e.g. GL_COLOR_ATTACHMENT0 */
    AttachColor1 *ptr1; /* GL_COLOR_ATTACHMENT1 */
    AttachColor2 *ptr2; /* GL_COLOR_ATTACHMENT2 */
    AttachDepth  *ptr3; /* GL_DEPTH_ATTACHMENT */
};
On the other hand, a PBO is a memory block - a buffer that holds raw data. Try to think of it as a malloc of x size; you can then copy data from it to a texture/FBO, or into it.
Why use a PBO?
It gives you an intermediate memory buffer that interfaces with host memory, so OpenGL drawing is not stalled while a texture is uploaded to or from the host.
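That malloc/memcpy analogy translates into real calls roughly like this sketch (names are illustrative):

#include <string.h>  /* memcpy */

glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
void *ptr = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY); /* the "malloc'd" block */
memcpy(ptr, hostPixels, size);                                  /* the "memcpy" */
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                GL_RGBA, GL_UNSIGNED_BYTE, (void *)0);          /* PBO -> texture */
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);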