how to use glCopyTexImage2D - opengl

I'm trying something like:
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glCopyTexImage2D(...);
glDisable(GL_TEXTURE_2D);
I think glCopyTexImage2D won't work with a non-power of two image, so that's one problem; I've also tried glReadPixels, but it's too slow for my purposes.

If glReadPixels is too slow for you, then glCopyTexImage2D and glCopyTexSubImage2D aren’t going to be a whole lot faster. On platforms with support for framebuffer objects, like iOS, the recommended (i.e. faster) way to get GPU-rendered image data into a texture is to use that texture as the color attachment for a framebuffer object and render directly into it. That said, if you still want to pursue this method, here’s what you need to do to fix it:
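A minimal render-to-texture sketch, using the core FBO entry points (on OpenGL ES 1.1 these are the OES-suffixed variants); texture is assumed to be an already-created texture handle:
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// attach the destination texture as the color attachment
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
    // draw the scene here; it lands directly in 'texture', no copy needed afterwards
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);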
First, you’re passing bad arguments to glCopyTexImage2D. The third argument, internalformat, should probably be GL_RGBA instead of 0. If you had called glGetError after calling glCopyTexImage2D, you would probably have gotten GL_INVALID_OPERATION. See the OpenGL ES 1.1 man pages for glCopyTexImage2D and glCopyTexSubImage2D.
Second, as you’ve already observed, glCopyTexImage2D requires its width and height arguments to be power-of-two as well. The correct way to deal with this is to allocate a texture image using glTexImage2D (you can pass NULL for pixels here), then use glCopyTexSubImage2D to copy your framebuffer contents into a rectangle. Note that glCopyTexSubImage2D doesn’t take an internalformat argument—because it’s updating a subrectangle of a texture, it uses the texture’s existing format.
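A sketch of that allocate-then-copy approach (potWidth/potHeight are assumed to be the next powers of two at least as large as your framebuffer's width/height, and texture is an existing handle):
glBindTexture(GL_TEXTURE_2D, texture);
// allocate a power-of-two texture image with no pixel data
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, potWidth, potHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// copy the framebuffer into the lower-left width x height subrectangle
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);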
For the record, glGetTexImage doesn’t exist in OpenGL ES 1.1 or 2.0, which is why you’re getting an implicit declaration.

You can check whether the video card supports non-power-of-two textures by checking for the ARB_texture_non_power_of_two extension.
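For example, in a compatibility context where glGetString(GL_EXTENSIONS) returns the full space-separated extension list, a simple check could look like:
/* requires <string.h> for strstr */
const char *ext = (const char *)glGetString(GL_EXTENSIONS);
int npotSupported = (ext != NULL) && (strstr(ext, "GL_ARB_texture_non_power_of_two") != NULL);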

glCopyTexImage2D does work with NPOT images.
NPOT (non-power-of-two) images have only limited support in OpenGL ES 2, OpenGL 1.x, and WebGL 1; in OpenGL ES 3, OpenGL 2.0, and later they are fully supported.
If you want to copy the color attachment of an FBO into newTexture:
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, newTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
An NPOT texture will sample as black in the fragment shader if its mipmap, magnification filter, or repeat (wrap) mode settings are wrong.
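A sketch of NPOT-safe sampler state under the ES 2 / WebGL 1 restrictions (no mipmapped minification, clamped wrap modes), assuming the texture is currently bound:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);    // no mipmap-based minification
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // GL_REPEAT is not allowed for NPOT here
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);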

To help figure out if the "non-power of two" thing is a problem, use glGetError() like this:
printf("error: %#06x\n", glGetError());
Put that in different places in your code to narrow down which line is causing the problem, then check the error code here: https://www.khronos.org/opengl/wiki/OpenGL_Error
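For example, a small helper macro (the name CHECK_GL is just an illustration) that you can sprinkle through the code:
#include <stdio.h>
#define CHECK_GL() do { \
    GLenum err = glGetError(); \
    if (err != GL_NO_ERROR) \
        printf("GL error %#06x at %s:%d\n", err, __FILE__, __LINE__); \
} while (0)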
To copy a texture I did:
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, TEXTURE_WIDTH, TEXTURE_HEIGHT, 0);
glGenerateMipmap(GL_TEXTURE_2D);
after binding the texture. Check the docs on those two functions for more info.

Related

Blitting several textures at once with glBlitFramebuffer

I have got a small OpenGL app and I am looking for the optimal way of blitting several texture buffers at once.
Let's say I have got two framebuffers (fbo1, fbo2) that each contain two texture buffers. And I have got a target fbo (fbo3) with four texture buffers. And I want to blit all the textures from fbo1 and fbo2 to fbo3.
Currently I am doing it separately for each texture, like:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo1);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo3);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
glBlitFramebuffer(0, 0, width, height, 0, 0, ds_width, ds_height, GL_COLOR_BUFFER_BIT, GL_LINEAR);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
How is it usually done? And is that even doable?
It isn't "usually" done because people generally don't go around copying a bunch of framebuffer images a lot. Indeed, if you are, that strongly suggests that you're probably doing something wrong.
The only way to do it is the way you've done here (though the needless rebinding of the framebuffers can go away): change the read/draw buffers each time and blit.
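A sketch of that, assuming two color attachments per source FBO (as described) and reusing the names from the question:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo3);
GLuint sources[2] = { fbo1, fbo2 };
for (int s = 0; s < 2; ++s) {
    glBindFramebuffer(GL_READ_FRAMEBUFFER, sources[s]);
    for (int a = 0; a < 2; ++a) {
        glReadBuffer(GL_COLOR_ATTACHMENT0 + a);              // source attachment
        glDrawBuffer(GL_COLOR_ATTACHMENT0 + (s * 2 + a));    // destination attachment 0..3 on fbo3
        glBlitFramebuffer(0, 0, width, height, 0, 0, ds_width, ds_height,
                          GL_COLOR_BUFFER_BIT, GL_LINEAR);
    }
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);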

glDrawPixels vs textures to draw a 2d buffer in OpenGL

I have a 2d graphic library that I want to use in OpenGL, to be able to mix 2d and 3d graphics. The simplest way seems to be glDrawPixels, but many recent tutorials and forums suggest using a texture with glTexSubImage2D, and then drawing a quad with that texture.
My question is: why? where is the advantage? It just adds one more step (memory buffer->texture->video buffer, instead of memory buffer->video buffer).
There are two main reasons:
glDrawPixels() is deprecated, and not available in the OpenGL core profile, or in OpenGL ES.
When drawing the image multiple times, a lot of repeated work can be saved by storing the image data in a texture.
It's quite rare that you would have to draw an image only once. Much more commonly, you'll draw it repeatedly, on each redraw. With glDrawPixels() you have to pass the image data into OpenGL each time. If you store it in a texture, you can draw it repeatedly, and OpenGL can reuse the same data each time.
To draw the content of a texture, you don't necessarily have to set up a shader, draw a quad, etc. You can use glBlitFramebuffer() to copy the texture content to the display.
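A hedged sketch of that blit path; blitFbo is an assumed spare FBO and imageTexture is assumed to already contain the image data:
glBindFramebuffer(GL_READ_FRAMEBUFFER, blitFbo);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, imageTexture, 0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);    // default framebuffer (the window)
glBlitFramebuffer(0, 0, imgWidth, imgHeight,  // source rectangle in the texture
                  0, 0, imgWidth, imgHeight,  // destination rectangle on screen
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);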
Since OpenGL stores its data in video memory, a simple "draw pixels" call tends to be slow, because it forces a lot of GPU/CPU synchronization on every draw.
When you use glTexSubImage2D, you ensure that your image resides in video memory the whole time, which is fast.
One way to load a texture into video memory could be:
glCreateTextures(GL_TEXTURE_2D, 1, &texture->mId);
glTextureParameteri(texture->mId, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTextureParameteri(texture->mId, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
GLsizei numMipmaps = (GLsizei)log2(std::max(surface->w, surface->h)) + 1;
// internalFormat/format are chosen from the image data, e.g. GL_RGBA8 / GL_RGBA
glTextureStorage2D(texture->mId, numMipmaps, internalFormat, surface->w, surface->h);
glTextureSubImage2D(texture->mId, 0, 0, 0, surface->w, surface->h,
                    format, GL_UNSIGNED_BYTE, surface->pixels);
glGenerateTextureMipmap(texture->mId);
Don't forget to bind the texture (and use the equivalent non-DSA calls) if you do not want to use direct state access.
However, if you still want to draw pixels directly (for example for procedural rendering), you must write your own fragment shader to make it as fast as possible.

DevIL - how do I save filters to my GL_QUADS texture?

I used DevIL to load in my images. DevIL has some nice filter functions, such as alienify(), contrast(), etc. There is a problem, though.
These filters showed fine when I drew the pixels to the color buffer. When I started using geometry, such as glBegin(GL_QUADS), the original texture shows, but the filter does not. How can I update the texture with iluAlienify()?
The solution is to use glTexSubImage2D() when the texture already exists.
For example,
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, m_handle);
iluAlienify(); // filters the currently bound DevIL image in place
// re-upload the filtered pixel data into the existing GL texture
glTexSubImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, 0, 0, m_width, m_height, m_imageFormat, m_imageType, ilGetData());

How to read a pixel from a Depth Texture efficiently?

I have an OpenGL Texture and want to be able to read back a single pixel's value, so I can display it on the screen. If the texture is a regular old RGB texture or the like, this is no problem: I take an empty Framebuffer Object that I have lying around, attach the texture to COLOR0 on the framebuffer and call:
glReadPixels(x, y, 1, 1, GL_RGBA, GL_FLOAT, &c);
Where c is essentially a float[4].
However, when it is a depth texture, I have to go down a different code path, setting the DEPTH attachment instead of the COLOR0, and calling:
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &c);
where c is a float. This works fine on my Windows 7 computer running an NVIDIA GeForce 580, but causes an error on my old 2008 MacBook Pro. Specifically, after attaching the depth texture to the framebuffer, if I call glCheckFramebufferStatus(GL_READ_FRAMEBUFFER), I get GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER.
After searching the OpenGL documentation, I found this line, which seems to imply that OpenGL does not support reading from a depth component of a framebuffer if there is no color attachment:
GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER is returned if GL_READ_BUFFER is not GL_NONE
and the value of GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE is GL_NONE for the color
attachment point named by GL_READ_BUFFER.
Sure enough, if I create a temporary color texture and bind it to COLOR0, no errors occur when I readPixels from the depth texture.
Now creating a temporary texture every time (EDIT: or even once and having GPU memory tied up by it) through this code is annoying and potentially slow, so I was wondering if anyone knew of an alternative way to read a single pixel from a depth texture? (Of course if there is no better way I will keep around one texture to resize when needed and use only that for the temporary color attachment, but this seems rather roundabout).
The answer is contained in your error message:
if GL_READ_BUFFER is not GL_NONE
So do that; set the read buffer to GL_NONE. With glReadBuffer. Like this:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo); //where fbo is your FBO.
glReadBuffer(GL_NONE);
That way, the FBO is properly complete, even though it only has a depth texture.
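Put together, the whole depth read-back could look like this (fbo and depthTexture are assumed handles, x and y the pixel coordinates):
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTexture, 0);
glReadBuffer(GL_NONE);   // no color attachment, so don't read from one
float depth = 0.0f;
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);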

low resolution in OpenGL to mimic older games

I'm interested in knowing the right way to mimic the low resolution of older games (like the Atari 2600) in OpenGL, for an FPS game. I imagine the best way to do it is to write the buffer into a texture, put it onto a quad, and display it at the screen resolution.
Take a look at http://www.youtube.com/watch?v=_ELRv06sa-c, for example (great game!)
Any advice, help, or sample code is welcome.
I think the best way to do it would be like you said: render everything into a low-res texture (best done using FBOs) and then just display the texture by drawing a screen-sized quad (of course using GL_NEAREST as the magnification filter for the texture). Maybe you can also use glBlitFramebuffer for copying directly from the low-res FBO into the high-res framebuffer, although I don't know if you can copy directly into the default framebuffer (the displayed one) this way.
EDIT: After looking up the specification for framebuffer_blit, it seems you can copy directly from the low-res FBO into the high-res default framebuffer using glBlitFramebuffer(EXT/ARB). This might be faster than using a texture-mapped quad, as it completely bypasses the vertex/fragment pipeline (although this would have been a simple one). Another advantage is that you also get the low-res depth and stencil buffers if needed, and can then render high-res content on top of the low-res background, which might be an interesting effect. So it would go something like this:
generate an FBO with low-res renderbuffers for color and depth (and stencil); a creation sketch follows the blit code below
...
glBindFramebuffer(GL_FRAMEBUFFER, lowFBO);   // binds both READ and DRAW
render_scene();
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);   // draw to the default framebuffer, read stays on lowFBO
glBlitFramebuffer(0, 0, 640, 480, 0, 0, 1024, 768,
                  GL_COLOR_BUFFER_BIT /* | GL_DEPTH_BUFFER_BIT */, GL_NEAREST);
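For the first step, a sketch of creating that low-res FBO (640x480 here, to match the blit above; lowFBO, colorRb, and depthRb are assumed names):
GLuint lowFBO, colorRb, depthRb;
glGenFramebuffers(1, &lowFBO);
glGenRenderbuffers(1, &colorRb);
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 640, 480);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 640, 480);
glBindFramebuffer(GL_FRAMEBUFFER, lowFBO);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);
// verify glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE before rendering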