glBlitFramebuffer causing Access Violation

I'm trying to copy from an FBO to the window's framebuffer. As far as I know, the window framebuffer has 8 bits for each of R, G, B, and A, and has a depth buffer (probably 24 bits). The FBO has a single texture attachment (format RGBA8) and no renderbuffers.
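In code, a setup matching that description would look roughly like this (a sketch reconstructing the described FBO, not the actual code; the attachment point is an assumption, and the 512x512 size comes from the screen_rect below):
GLuint framebuffer, color_tex;

// RGBA8 color texture, same 512x512 size as the window
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// FBO with that texture as its only attachment, no renderbuffers
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, color_tex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle incomplete framebuffer */
}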
The problem is that when I try to blit the FBO to the screen, I get an access violation (Windows term for SIGSEGV). Blit code:
//Earlier: const int screen_rect[4] = {0,0,512,512};
glBindFramebuffer(GL_READ_FRAMEBUFFER,fbo->framebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glFinish();
//checking GL errors here gives no error
glBlitFramebuffer(
    screen_rect[0], screen_rect[1], screen_rect[2], screen_rect[3],
    screen_rect[0], screen_rect[1], screen_rect[2], screen_rect[3],
    GL_COLOR_BUFFER_BIT,
    GL_NEAREST //EDIT: I've also tried GL_LINEAR
);
glFinish();
//never reaches here
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,0);
glBindFramebuffer(GL_READ_FRAMEBUFFER,0);
The FBO is GL_FRAMEBUFFER_COMPLETE_EXT and no GL errors occur at any point. The FBO and the window framebuffer are the same size.
Running on an NVIDIA GeForce GTX 580M with driver 301.42 (to date, the latest).
Any ideas why this might be happening?
[EDIT: I have found that the problem does not occur when blitting from an FBO to another FBO, although no data seems to be copied.]

It seems that this implementation is EXTREMELY picky about the order the commands go in. I figured out the following after reverse-engineering some existing code. Perhaps there's some arcane reason they must be in this order, but I don't know what it is.
In any case, I believe the segfaulting behavior to be a bug in NVIDIA's OpenGL implementation.
Without further ado, the key commands, in order:
GLenum buffers1[] = {GL_BACK};
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); // bind the draw framebuffer first...
glDrawBuffers(1, buffers1);                // ...then set its draw buffer
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo->framebuffer);
glReadBuffer(GL_COLOR_ATTACHMENT1);        // explicitly select the read buffer
glBlitFramebuffer( // same arguments as the blit in the question
    screen_rect[0], screen_rect[1], screen_rect[2], screen_rect[3],
    screen_rect[0], screen_rect[1], screen_rect[2], screen_rect[3],
    GL_COLOR_BUFFER_BIT, GL_NEAREST);

Related

Most Efficient Way to Retrieve Texture Pixel Data?

I know DirectX, for DX9 at least, has a texture object where you are able to map only a small portion of the texture into CPU-accessible memory; I believe the function was called "LockRect". OpenGL has glGetTexImage(), but it grabs the entire image, and if the requested format isn't the same as the texture's, it is going to have to convert the entire texture into the new pixel format on top of transferring the whole thing. This function is also not in OpenGL ES. Framebuffers are another option: I could potentially bind a framebuffer that has a color attachment connected to the texture. Then there is glReadPixels, which reads from the framebuffer, so it should effectively be reading from the texture. glReadPixels has limited pixel format options, so a conversion is going to have to happen, but I can read just the pixels I need (which is only 1 pixel). I haven't used this method, but it seems like it is possible. If anyone can confirm that the framebuffer method is a working alternative, then it would also work for OpenGL ES 2+.
Are there any other methods? How efficient is the framebuffer method (if it works)? Does it end up having to convert the entire texture to the desired format before it reads the pixel, or is that entirely implementation-defined?
Edit: @Nicol_Bolas, please stop removing OpenGL from the tags and adding OpenGL-ES; OpenGL-ES isn't applicable here, OpenGL is. This is for OpenGL specifically, but I would like it to be OpenGL ES 2+ compatible if possible, though it doesn't have to be. If an OpenGL-only solution is available, I will consider whether it is worth the trade-off. Thank you.
Please note, I do not have that much experience with ES in particular, so there might be better ways to do this specifically in that context. The general gist applies in either plain OpenGL or ES, though.
First off, the most important performance consideration is when you do the reading. If you request data from the video card while you are rendering, your program (the CPU end) has to halt until the video card returns the data, which slows rendering because you cannot issue further render commands in the meantime. As a general rule, you should always upload, render, then download; do not interleave these processes, as it will impact speed immensely, and how much so can be very driver/hardware/OS dependent.
I suggest using glReadPixels( ) at the end of your render cycle. I suspect the limitations on formats for that function are connected to limitations on framebuffer formats; besides, you really should be using 8-bit unsigned integer or floating point, both of which are supported. If you have some fringe case that doesn't allow any of the supported formats, you should explain what it is, as there may be a way to handle it specifically.
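For the single-pixel case from the question, that read could look roughly like this (a sketch; fbo, x, and y are placeholder names, and the texture is assumed to be attached to color attachment 0):
GLubyte pixel[4]; // one RGBA8 pixel
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0); // absent in ES 2.0, where attachment 0 is read by default
glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);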
If you need the contents of the framebuffer at a specific point in rendering (rather than at the end), create a second texture + framebuffer (again with the same format) to act as an effective "backbuffer", and then copy from the target framebuffer to that texture. This copy happens on the video card, so it does not impose the bus latency that a direct read does. Here is something I wrote that does this operation:
glActiveTexture( GL_TEXTURE0 + unit );
glBindTexture( GL_TEXTURE_2D, backbufferTextureHandle );
glBindFramebuffer( GL_READ_FRAMEBUFFER, framebufferHandle );
glCopyTexSubImage2D(
    GL_TEXTURE_2D,
    0,                  // level
    0, 0,               // x, y offset within the texture
    0, 0,               // x, y source position in the framebuffer
    screenX, screenY ); // width, height
glBindFramebuffer( GL_DRAW_FRAMEBUFFER, framebufferHandle );
Then when you want the data, bind the backbuffer to GL_READ_FRAMEBUFFER and use glReadPixels( ) on it.
Finally, keep in mind that a download of data will still halt the CPU end. If you download before displaying the framebuffer, you delay displaying the image until you can execute commands again, which might result in visible latency. As such, I suggest still using a non-default framebuffer even if you only care about the final buffer state, and ending your render cycle to the effect of:
(1.) Blit to the default framebuffer:
glBindFramebuffer( GL_DRAW_FRAMEBUFFER, 0 ); // Default framebuffer
glBindFramebuffer( GL_READ_FRAMEBUFFER, framebufferHandle );
glBlitFramebuffer(
    0, 0, screenX, screenY,
    0, 0, screenX, screenY,
    GL_COLOR_BUFFER_BIT,
    GL_NEAREST );
(2.) Call whatever your swap buffers command may be in your given situation.
(3.) Your download call from the framebuffer (be it glReadPixels( ) or something else).
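As a rough sketch of steps (2) and (3) together (the swap call is platform-specific; SwapBuffers(hdc) is just the Win32 example, and x, y are wherever you want to read):
SwapBuffers(hdc); // (2.) present; e.g. glfwSwapBuffers(window) under GLFW instead

// (3.) download after presenting, so the stall doesn't delay the visible frame
GLubyte pixel[4];
glBindFramebuffer(GL_READ_FRAMEBUFFER, framebufferHandle);
glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);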
As for the speed impact of the blit/texcopy operations, it's quite small on most modern hardware; I have not found it to have a noticeable cost even when done 10+ times per frame. But if you are dealing with antiquated hardware, it might be worth a second thought.

GL_TEXTURE8 not working with GL_TEXTURE_2D_ARRAY

I have a weird issue. Whenever I bind a GL_TEXTURE_2D_ARRAY texture to texture unit GL_TEXTURE8, I get black textures. I am using an Intel HD3000 GPU, which has 16 texture units. Every other texture type works fine on that unit except GL_TEXTURE_2D_ARRAY.
Is it possible that this is a driver or hardware issue? Is there a way to check whether something failed during the process of uploading textures?
glGenTextures(1, &id);
glActiveTexture(GL_TEXTURE8);           // select texture unit 8
glBindTexture(GL_TEXTURE_2D_ARRAY, id); // bind the array texture to it
glIsTexture(id); // Returns true (1)
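For the second part of the question, the usual way to check whether a texture upload failed is to drain glGetError() around the upload (a sketch; width, height, layers, and pixels are placeholders):
while (glGetError() != GL_NO_ERROR) {} // clear any stale errors first

glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, width, height, layers,
             0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

GLenum err;
while ((err = glGetError()) != GL_NO_ERROR)
    printf("GL error during upload: 0x%x\n", err); // a driver bug may still report no error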

Can't render to texture on different hardware OpenGL

My friend and I are debugging our code on different computers.
My code is working while his is not. By process of elimination I determined the problem was that his system was not drawing to the custom frame buffer I use to render to a texture. The texture remained black.
Everything else is the same except for the system. Any advice here?
if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    throw new RuntimeException("yo frame buffer is broken");
This does not throw any exceptions, so the framebuffer should be set up correctly.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
I added these two lines for my color attachment texture and it worked.
Why does it work now? I'm not completely sure; GL was saying the FBO was complete even without these lines. Most likely the texture itself was mipmap-incomplete: the default minification filter (GL_NEAREST_MIPMAP_LINEAR) expects a full mipmap chain, framebuffer completeness does not check for that, and sampling an incomplete texture returns black. Clamping GL_TEXTURE_MAX_LEVEL to 0 makes the texture complete with just the base level.
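For the same reason, an equivalent alternative (assuming you don't need mipmaps on a render-target texture) is to use a minification filter that doesn't reference mipmap levels:
// the default GL_NEAREST_MIPMAP_LINEAR needs a full mipmap chain;
// GL_LINEAR does not, so the texture stays complete without one
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);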

Mipmapping in OpenGL

I'm having a lot of trouble getting mipmaps to work. I'm using OpenGL 1.1, and I don't have GLU, so I'm using the following texture initialization code:
glGenTextures(1,&texname);
glBindTexture(GL_TEXTURE_2D,texname);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST_MIPMAP_NEAREST);
w = width; h = height;
for (int i = 0; i < mipmaps; i++, w /= 2, h /= 2)
    glTexImage2D(GL_TEXTURE_2D, i, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, tex[i]);
Variables:
// data types:
unsigned long int *tex[20];
int mipmaps, width, height, w, h;
GLuint texname;
tex is an array that holds the list of the texture mipmap pixel arrays. The mipmaps are processed correctly (I tested them individually). mipmaps is the number of mipmaps that reduce down the original image to a 1x1 pixel texture (the original texture is 256x256 - so at this point in the code it's 8). width and height are the dimensions of the original texture (256x256).
The result is that it doesn't even use a texture. Everything just appears flat gray (gray due to the lighting).
Is there something I'm forgetting? I've checked this reference, and I can't find any conflicts.
Other details: In total, I'm enabling GL_DEPTH_TEST, GL_TEXTURE_2D, GL_LIGHTING, GL_CULL_FACE, GL_FOG (and GL_LIGHT0, GL_LIGHT1 which probably don't make a difference).
Also, I am using Mesa 3D's implementation of OpenGL (Mesa version 4.0 which translates to OpenGL version 1.3) if that might have anything to do with it.
EDIT:
The thing is, the texture works fine (without mipmaps) the moment I change GL_NEAREST_MIPMAP_NEAREST to GL_NEAREST, so I can't see how it could be anything else in the code; at least, I can't think of anything else it might be.
The value of mipmaps is 8, but your image is 256x256, so you need 9 levels of mipmapping (256, 128, 64, 32, 16, 8, 4, 2, 1). Your loop condition is i < mipmaps, so it uploads levels 0 through 7 and never the final 1x1 level. With a mipmapped minification filter, a texture with any level missing is incomplete, and you'll lose your texture.
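A minimal fix, assuming tex[] really does hold all nine mipmap images, is to make the loop bound inclusive so the final 1x1 level gets uploaded too:
w = width; h = height;
for (int i = 0; i <= mipmaps; i++, w /= 2, h /= 2) // note <=, uploading levels 0..8
    glTexImage2D(GL_TEXTURE_2D, i, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, tex[i]);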

How to read a pixel from a Depth Texture efficiently?

I have an OpenGL Texture and want to be able to read back a single pixel's value, so I can display it on the screen. If the texture is a regular old RGB texture or the like, this is no problem: I take an empty Framebuffer Object that I have lying around, attach the texture to COLOR0 on the framebuffer and call:
glReadPixels(x, y, 1, 1, GL_RGBA, GL_FLOAT, &c);
Where c is essentially a float[4].
However, when it is a depth texture, I have to go down a different code path, setting the DEPTH attachment instead of the COLOR0, and calling:
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &c);
where c is a float. This works fine on my Windows 7 computer running an NVIDIA GeForce 580, but causes an error on my old 2008 MacBook Pro. Specifically, after attaching the depth texture to the framebuffer, if I call glCheckFramebufferStatus(GL_READ_FRAMEBUFFER), I get GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER.
After searching the OpenGL documentation, I found this line, which seems to imply that OpenGL does not support reading from a depth component of a framebuffer if there is no color attachment:
GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER is returned if GL_READ_BUFFER is not GL_NONE
and the value of GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE is GL_NONE for the color
attachment point named by GL_READ_BUFFER.
Sure enough, if I create a temporary color texture and bind it to COLOR0, no errors occur when I readPixels from the depth texture.
Now, creating a temporary texture every time this code runs (EDIT: or even once, keeping GPU memory tied up by it) is annoying and potentially slow, so I was wondering if anyone knew of an alternative way to read a single pixel from a depth texture. (Of course, if there is no better way, I will keep one texture around to resize as needed and use only that for the temporary color attachment, but this seems rather roundabout.)
The answer is contained in your error message:
if GL_READ_BUFFER is not GL_NONE
So do that; set the read buffer to GL_NONE. With glReadBuffer. Like this:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo); //where fbo is your FBO.
glReadBuffer(GL_NONE);
That way, the FBO is properly complete, even though it only has a depth texture.
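Put together, the depth read might look like this (a sketch; fbo and depth_tex stand in for your own handles, and the texture is assumed to already be attached as the depth attachment):
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(GL_NONE); // no color read buffer, so no color attachment is required

float depth;
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);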