glReadPixels from GL_FRONT fails right after SwapBuffers

I have tried to read the front and back buffers into two different arrays: the back buffer before the swap and the front buffer after the swap.
glReadBuffer(GL_BACK);
glReadPixels(0, 0, 1, 1, GL_BGRA, GL_UNSIGNED_BYTE, buffer_back);
SimpleGLContext::instance().swapBuffers();
glReadBuffer(GL_FRONT);
glReadPixels(0, 0, 1, 1, GL_BGRA, GL_UNSIGNED_BYTE, buffer_front);
Here buffer_back holds the correct BGRA values, but buffer_front still comes back with only zero values. Any advice on why would be appreciated. Thanks in advance.
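One thing worth checking first, since the snippet has no error handling: query glGetError() right after the GL_FRONT read to see whether the read is being rejected outright. This is only a diagnostic sketch, not a fix, and it reuses the variable names from the question:
#include <cstdio>
glReadBuffer(GL_FRONT);
glReadPixels(0, 0, 1, 1, GL_BGRA, GL_UNSIGNED_BYTE, buffer_front);
GLenum err = glGetError();
if (err != GL_NO_ERROR) {
    // The read itself was rejected (for example, an invalid read buffer for this context).
    std::fprintf(stderr, "glReadPixels(GL_FRONT) failed: 0x%04X\n", err);
}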

How to grow a GL_TEXTURE_2D_ARRAY?

I have created a 2D texture array like this:
glTexImage3D(GL_TEXTURE_2D_ARRAY,
             0,                   // no mipmaps
             GL_RGBA8,            // internal format
             width, height, 100,  // width, height, layer count
             0,                   // border
             GL_RGBA,             // format
             GL_UNSIGNED_BYTE,    // type
             0);                  // pointer to data
How do I increase its layer count from 100 to 200, for example? I guess I would have to create a new 2D array with 200 layers and copy the images over with glCopyTexSubImage3D?
glBindTexture(GL_TEXTURE_2D_ARRAY, texture_id);
glCopyTexSubImage3D(GL_TEXTURE_2D_ARRAY,
                    0,
                    0, 0, 0,
                    0, 0,
                    width, height);
glDeleteTextures(1, &texture_id);

GLuint new_tex_id;
glGenTextures(1, &new_tex_id);
glBindTexture(GL_TEXTURE_2D_ARRAY, new_tex_id);
glTexImage3D(GL_TEXTURE_2D_ARRAY,
             0,
             GL_RGBA8,
             width, height, 200,
             0,
             GL_RGBA,
             GL_UNSIGNED_BYTE,
             0);
// How do I get the data in `GL_READ_BUFFER` into my newly bound texture?
texture_id = new_tex_id;
But how do I actually get the data out of the GL_READ_BUFFER?
glCopyTexSubImage copies data from the framebuffer, not from a texture; that's why it doesn't take two texture objects as arguments.
Copying from a texture into another texture requires glCopyImageSubData. This is an OpenGL 4.3 function, from ARB_copy_image. A similar function can also be found in NV_copy_image, which may be more widely supported.
BTW, you should generally avoid doing this operation at all. If you needed a 200 element array texture, you should have allocated that the first time.
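For reference, a minimal sketch of that texture-to-texture copy, assuming OpenGL 4.3 is available, that both array textures are already allocated with the same format, and using the names from the question (texture_id for the old 100-layer array, new_tex_id for the new 200-layer one):
// Copy all 100 layers of mip level 0 from the old array texture into the new one.
glCopyImageSubData(texture_id, GL_TEXTURE_2D_ARRAY, 0, 0, 0, 0,
                   new_tex_id, GL_TEXTURE_2D_ARRAY, 0, 0, 0, 0,
                   width, height, 100);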
The glCopyImageSubData() function that @NicolBolas pointed out is the easiest solution if you're OK with requiring OpenGL 4.3 or later.
You can use glCopyTexSubImage3D() for this purpose. But since the source for this function is the current read framebuffer, you need to bind your original texture as a framebuffer attachment. The code could roughly look like this:
GLuint fbo = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindTexture(GL_TEXTURE_2D_ARRAY, new_tex_id);
for (int layer = 0; layer < 100; ++layer) {
    glFramebufferTextureLayer(GL_READ_FRAMEBUFFER,
                              GL_COLOR_ATTACHMENT0, tex_id, 0, layer);
    glCopyTexSubImage3D(GL_TEXTURE_2D_ARRAY,
                        0, 0, 0, layer, 0, 0, width, height);
}
You can also use glBlitFramebuffer() instead:
GLuint fbos[2] = {0, 0};
glGenFramebuffers(2, fbos);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[0]);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[1]);
for (int layer = 0; layer < 100; ++layer) {
    glFramebufferTextureLayer(GL_READ_FRAMEBUFFER,
                              GL_COLOR_ATTACHMENT0, tex_id, 0, layer);
    glFramebufferTextureLayer(GL_DRAW_FRAMEBUFFER,
                              GL_COLOR_ATTACHMENT0, new_tex_id, 0, layer);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}
The two options should be more or less equivalent. I would probably go with glBlitFramebuffer(), since it is the newer function (introduced in OpenGL 3.0) and is probably more commonly used, so it may be better optimized. But if this is performance-critical in your application, you should try both and compare.
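If you do want to compare them, one way to measure the GPU cost of either loop is a timer query. A sketch assuming OpenGL 3.3 / ARB_timer_query, where copy_layers() is a hypothetical stand-in for whichever of the two loops above you are testing:
GLuint query;
glGenQueries(1, &query);
glBeginQuery(GL_TIME_ELAPSED, query);
copy_layers();                                        // hypothetical: one of the two copy loops above
glEndQuery(GL_TIME_ELAPSED);
GLuint64 elapsed_ns = 0;
glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsed_ns);  // blocks until the result is ready
glDeleteQueries(1, &query);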

Am I using TexStorage correctly or is this a driver bug?

I'm trying to compress a texture to BPTC using this code, but I am getting a corrupted image.
glTextureStorage2DEXT(texture, GL_TEXTURE_2D, 10, GL_COMPRESSED_SRGB_ALPHA_BPTC_UNORM, 512, 512);
glTextureSubImage2DEXT(texture, GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, image_buffer);
On the other hand, if I use this, it works.
glTextureImage2DEXT(texture, GL_TEXTURE_2D, 0, GL_COMPRESSED_SRGB_ALPHA_BPTC_UNORM, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, image_buffer);
Also, if I do this:
glTextureStorage2DEXT(texture, GL_TEXTURE_2D, 10, GL_SRGB8_ALPHA8, 512, 512);
glTextureSubImage2DEXT(texture, GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, image_buffer);
My texture appears overexposed, but if I do this:
glTextureImage2DEXT(texture, GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, image_buffer);
It appears correctly.
For the record, glGetError and KHR_debug report no errors, and nothing changes if I use the non-direct_state_access versions of these functions. Can somebody please tell me whether I am using these functions incorrectly or whether this is a driver bug?
EDIT: OK, in case this isn't already confusing enough, I decided to use glGetTexLevelParameteriv to check the actual internal format of the texture after creation. When I create the texture with glTexImage2D and GL_COMPRESSED_SRGB_ALPHA_BPTC_UNORM, glGetTexLevelParameteriv reports a format of GL_SRGB_ALPHA. If I create the texture with glTexStorage2D and GL_COMPRESSED_SRGB_ALPHA_BPTC_UNORM, glGetTexLevelParameteriv reports GL_COMPRESSED_SRGB_ALPHA_BPTC_UNORM, but of course if I then use glTexSubImage2D on it, it ends up corrupted (colorful noise). I also tried GL_COMPRESSED_SRGB_ALPHA_S3TC_DXT3_EXT: glTexImage2D seems to work fine, and the query says the texture is actually S3TC. glTexStorage2D can also handle it without corrupting the image, but as with GL_SRGB8_ALPHA8 the image comes out looking overexposed, as if it were being treated as linear data.
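For anyone who wants to reproduce that check, a minimal sketch of the query described above (assuming the texture is bound to GL_TEXTURE_2D):
GLint internal_format = 0;
glBindTexture(GL_TEXTURE_2D, texture);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internal_format);
// Compare internal_format against the enum you requested, e.g. GL_COMPRESSED_SRGB_ALPHA_BPTC_UNORM.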

texture loading | wrong color

Using a texture image, I have created objects in the scene. But there is a problem: instead of blue, the colour appears green. What is the possible reason for this bug?
Try using GL_BGR as the pixel format instead:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);

How to get Z values from Z Buffer

I'm having problems with drawing in OpenGL and I need to see exactly what values are being placed in the depth buffer. Can anyone tell me how to retrieve these values?
Thanks
Chris
Use glReadPixels with format = GL_DEPTH_COMPONENT, for example:
float depth;
glReadPixels(0, 0, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
This will read the depth value of the pixel at window coordinate (0, 0).
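If you want the whole depth buffer rather than a single pixel, a minimal sketch (assuming the window is width x height pixels and the default framebuffer is bound for reading):
#include <vector>
std::vector<float> depths(width * height);
glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depths.data());
// depths[y * width + x] is the depth at window coordinate (x, y); the origin is the bottom-left corner.
Note that, with the default glDepthRange, these are the non-linear window-space depth values in [0, 1], not eye-space distances.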

SDL_surface to OpenGL texture

Hey, I have this function to load an SDL_Surface and save it as an OpenGL texture:
typedef GLuint texture;

texture load_texture(std::string fname) {
    SDL_Surface *tex_surf = IMG_Load(fname.c_str());
    if (!tex_surf) {
        return 0;
    }
    texture ret;
    glGenTextures(1, &ret);
    glBindTexture(GL_TEXTURE_2D, ret);
    glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    SDL_FreeSurface(tex_surf);
    return ret;
}
The problem is that it isn't working. When I call the function from the main function, it just doesn't load any image (when displayed, it just shows the current drawing colour), and when I call it from any function outside the main function, the program crashes.
It's this line that makes the program crash:
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
Can anybody see a mistake in this?
My bet is you need to convert the SDL_Surface before trying to cram it into an OpenGL texture. Here's something that should give you the general idea:
SDL_Surface* originalSurface;  // loaded like any other SDL_Surface

// Round the width up to the nearest power of two.
int w = pow(2, ceil(log(originalSurface->w) / log(2)));

// Blit onto a purely RGB surface.
SDL_Surface* newSurface =
    SDL_CreateRGBSurface(0, w, w, 24, 0xff000000, 0x00ff0000, 0x0000ff00, 0);
SDL_BlitSurface(originalSurface, 0, newSurface, 0);

texture ret;
glGenTextures(1, &ret);
glBindTexture(GL_TEXTURE_2D, ret);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, w, 0, GL_RGB,
             GL_UNSIGNED_BYTE, newSurface->pixels);
I found the original code here. There may be some other useful posts on GameDev as well.
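Another way to make the upload format predictable, if you are on SDL 2.0.5 or later (where SDL_ConvertSurfaceFormat and SDL_PIXELFORMAT_RGBA32 are available), is to convert whatever IMG_Load returned into a known 32-bit RGBA layout before uploading. A rough sketch, reusing tex_surf from the question:
// Convert the loaded surface to a known 32-bit RGBA layout, then upload as GL_RGBA.
SDL_Surface* rgba = SDL_ConvertSurfaceFormat(tex_surf, SDL_PIXELFORMAT_RGBA32, 0);
if (rgba) {
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, rgba->w, rgba->h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba->pixels);
    SDL_FreeSurface(rgba);
}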
The problem probably lies in the 3rd argument (internalformat) of the call to glTexImage2D:
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
You have to use constants like GL_RGB or GL_RGBA, because the actual values of those macros are unrelated to the number of color components.
A list of allowed values is in the reference manual: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml .
This seems to be a frequent mistake. Maybe some drivers are just clever and correct this, so the wrong line might still work for some people.
/usr/include/GL/gl.h:473:#define GL_RGB 0x1907
/usr/include/GL/gl.h:474:#define GL_RGBA 0x1908
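In other words, the upload would look something like this (a sketch using the variable names from the question):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tex_surf->w, tex_surf->h, 0,
             GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);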
I'm not sure if you're doing this somewhere outside your code snippet, but have you called
glEnable(GL_TEXTURE_2D);
at some point?
Some older hardware (and, surprisingly, Emscripten's OpenGL ES 2.0 emulation, running on the new machine I bought this year) doesn't seem to support textures whose dimensions aren't powers of two. That turned out to be the problem I was stuck on for a while (I was getting a black rectangle rather than the sprite I wanted). So it's possible the poster's problem would go away after resizing the image to have power-of-two dimensions.
See: https://www.khronos.org/opengl/wiki/NPOT_Texture
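If you want to detect this at run time, one rough way on older desktop GL is to look for the relevant extension (a sketch; on a core-profile 3.x+ context you would enumerate extensions with GL_NUM_EXTENSIONS and glGetStringi instead):
#include <cstring>
const char* exts = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
bool npot_supported = exts && std::strstr(exts, "GL_ARB_texture_non_power_of_two") != nullptr;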