GL_TEXTURE8 not working with GL_TEXTURE_2D_ARRAY

I have a weird issue: whenever I bind a GL_TEXTURE_2D_ARRAY texture while GL_TEXTURE8 is the active texture unit, I get black textures. I am using an Intel HD 3000 GPU, which has 16 texture units. Every other texture type works fine; only GL_TEXTURE_2D_ARRAY misbehaves.
Could this be a driver or hardware issue? Is there a way to check whether something failed during the process of uploading the textures?
glGenTextures(1, &id);
glActiveTexture(GL_TEXTURE8);
glBindTexture(GL_TEXTURE_2D_ARRAY, id);
glIsTexture(id); // Returns true (1)
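As for the second question: one standard way to see whether any step of the upload failed is to poll glGetError() after each call. A minimal sketch (width, height, layerCount and pixels are placeholder names, not from the original code):

// Sketch: create and upload a 2D array texture on unit 8, then check
// glGetError() so a failed upload does not go unnoticed.
GLuint id;
glGenTextures(1, &id);
glActiveTexture(GL_TEXTURE8);
glBindTexture(GL_TEXTURE_2D_ARRAY, id);

glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, width, height, layerCount,
             0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

GLenum err = glGetError();
if (err != GL_NO_ERROR) {
    // e.g. GL_INVALID_ENUM, GL_INVALID_VALUE or GL_OUT_OF_MEMORY;
    // handle or log the failed upload here
}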

Related

How to use glClearTexImage for packed depth/stencil textures?

Is it possible to use glClearTexImage to clear a packed depth/stencil texture in OpenGL? And if so, how?
I'm using a multisample texture with pixel format GL_FLOAT_32_UNSIGNED_INT_24_8_REV. Attempting to clear the texture with
glClearTexImage(textureId, 0, GL_FLOAT_32_UNSIGNED_INT_24_8_REV, GL_FLOAT, nullptr);
yields the error
GL_INVALID_OPERATION error generated. <format> must be GL_DEPTH_STENCIL for depth-stencil textures.
The fact that depth/stencil textures are mentioned in this error message leads me to believe it should be possible somehow.
If I change the format parameter to GL_DEPTH_STENCIL, I get the error
GL_INVALID_OPERATION error generated. Texture type and format combination is not valid.
Am I mixing up the parameters? Or did I miss that clearing packed textures is outside the spec? Using a simple GL_DEPTH_COMPONENT32F instead of the packed format works flawlessly. I also tried creating a texture view with only the depth component and providing its id as the first parameter to glClearTexImage, which however yields the same error.
I am testing this in an OpenGL 4.6 context on an Nvidia Titan RTX in Windows 10.
It turned out I had mixed up the format and the data type parameters, which is why the call used the wrong values. The packed depth/stencil texture can be cleared with
glClearTexImage(textureId, 0, GL_DEPTH_STENCIL, GL_FLOAT_32_UNSIGNED_INT_24_8_REV, nullptr);
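For context, a minimal non-multisample sketch of the matching combination (width and height are placeholder names): with a packed GL_DEPTH32F_STENCIL8 texture, the clear uses format GL_DEPTH_STENCIL and type GL_FLOAT_32_UNSIGNED_INT_24_8_REV.

// Sketch: immutable packed depth/stencil texture, cleared to zero depth
// and zero stencil (a null data pointer clears with zeros).
GLuint textureId;
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH32F_STENCIL8, width, height);

glClearTexImage(textureId, 0, GL_DEPTH_STENCIL,
                GL_FLOAT_32_UNSIGNED_INT_24_8_REV, nullptr);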

GL_TEXTURE_3D color and stencil FBO attachments

I am doing layered rendering to an offscreen FBO using OpenGL 4.3. I use a GL_TEXTURE_3D with several layers as the COLOR attachment, and a geometry shader to index into the different layers when writing the output. That works OK. Now I also need a stencil attachment for a stencil test I perform during the rendering. First I tried to simply attach a renderbuffer, as in the case with 2D attachments:
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                          GL_RENDERBUFFER, _stencilBuffer);
In this case, when checking the FBO for completeness, I get the framebuffer error:
GL_FRAMEBUFFER_INCOMPLETE_LAYER_TARGETS_ARB
Then I assumed that if the color attachment is 3D, the stencil attachment must also be 3D. And because there is no such thing as a 3D renderbuffer, I tried to attach a 3D texture to the depth/stencil slot of the FBO:
glTexImage3D(GL_TEXTURE_3D, 0, GL_DEPTH24_STENCIL8, width, height, depth,
             0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, nullptr);
where width is the texture width, height the texture height, and depth the number of layers in the 3D texture.
// Attach to the FBO:
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, texId, 0);
Doing it this way, I get:
GL_FRAMEBUFFER_INCOMPLETE
INVALID_OPERATION
I have searched for any possible example of how such a setup should be done, but found nothing. I also tried to use GL_TEXTURE_2D_ARRAY instead, which at first seemed to show the same problem (though, as the update below explains, this actually turned out to be the fix).
UPDATE
My fault; I got confused by some of my findings while debugging, so basically half of what I wrote above can be discarded. But because other people may run into the same issues, I will explain what happened.
At first, when I attached a 3D texture to the COLOR attachment of the FBO, I created a renderbuffer for the GL_DEPTH_STENCIL attachment. And yes, on the completeness check I got:
GL_FRAMEBUFFER_INCOMPLETE_LAYER_TARGETS_ARB
Next, I instead tried:
glTexImage3D(GL_TEXTURE_3D, 0, GL_DEPTH24_STENCIL8, width, height, depth,
             0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, nullptr);
which threw:
INVALID_OPERATION
Now, instead of the GL_TEXTURE_3D target, I tried GL_TEXTURE_2D_ARRAY, which finally caused the FBO to be complete. So while I would still like to understand why GL_TEXTURE_3D causes INVALID_OPERATION (feel free to post an answer), this change has solved the problem.
Based on the spec, GL_FRAMEBUFFER_INCOMPLETE_LAYER_TARGETS means (quoted from OpenGL 4.5 spec):
If any framebuffer attachment is layered, all populated attachments must be layered. Additionally, all populated color attachments must be from textures of the same target (three-dimensional, one- or two-dimensional array, cube map, or cube map array textures).
Based on the first part of this, your initial attempt of using a single layer stencil attachment with a layered 3D texture color attachment was clearly illegal.
The second part sounds somewhat unclear to me. Since it only talks about "color attachments", it suggests that using a GL_TEXTURE_3D color attachment and a GL_TEXTURE_2D_ARRAY stencil attachment would be legal. But I'm not convinced that this is actually the intention. Unfortunately I couldn't find additional confirmation of this in the rest of the spec.
Using GL_TEXTURE_3D for a stencil or depth/stencil texture is a non-starter; there is no such thing as a 3D stencil texture. From the 4.5 spec, pages 191-192, in section "8.5 Texture Image Specification":
Textures with a base internal format of DEPTH_COMPONENT, DEPTH_STENCIL, or STENCIL_INDEX are supported by texture image specification commands only if target is TEXTURE_1D, TEXTURE_2D, TEXTURE_2D_MULTISAMPLE, TEXTURE_1D_ARRAY, TEXTURE_2D_ARRAY, TEXTURE_2D_MULTISAMPLE_ARRAY, TEXTURE_RECTANGLE, TEXTURE_CUBE_MAP, TEXTURE_CUBE_MAP_ARRAY, PROXY_TEXTURE_1D, PROXY_TEXTURE_2D, PROXY_TEXTURE_2D_MULTISAMPLE, PROXY_TEXTURE_1D_ARRAY, PROXY_TEXTURE_2D_ARRAY, PROXY_TEXTURE_2D_MULTISAMPLE_ARRAY, PROXY_TEXTURE_RECTANGLE, PROXY_TEXTURE_CUBE_MAP, or PROXY_TEXTURE_CUBE_MAP_ARRAY.
That's a long list, but TEXTURE_3D is not in it.
Based on this, I believe that what you found to be working is the only option. You need to use textures with target GL_TEXTURE_2D_ARRAY for both the color and stencil attachment.
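For illustration, a minimal sketch of the working layered setup (width, height and layers are placeholder names): both attachments are GL_TEXTURE_2D_ARRAY textures, and both are attached with glFramebufferTexture so the whole array is layered.

// Sketch: layered FBO with a 2D array color attachment and a
// 2D array depth/stencil attachment.
GLuint colorTex, depthStencilTex, fbo;

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D_ARRAY, colorTex);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, width, height, layers,
             0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

glGenTextures(1, &depthStencilTex);
glBindTexture(GL_TEXTURE_2D_ARRAY, depthStencilTex);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_DEPTH24_STENCIL8, width, height, layers,
             0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, nullptr);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, colorTex, 0);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, depthStencilTex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the incomplete framebuffer
}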

How to read a pixel from a Depth Texture efficiently?

I have an OpenGL Texture and want to be able to read back a single pixel's value, so I can display it on the screen. If the texture is a regular old RGB texture or the like, this is no problem: I take an empty Framebuffer Object that I have lying around, attach the texture to COLOR0 on the framebuffer and call:
glReadPixels(x, y, 1, 1, GL_RGBA, GL_FLOAT, &c);
Where c is essentially a float[4].
However, when it is a depth texture, I have to go down a different code path, setting the DEPTH attachment instead of the COLOR0, and calling:
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &c);
where c is a float. This works fine on my Windows 7 computer running an NVIDIA GeForce 580, but causes an error on my old 2008 MacBook Pro. Specifically, after attaching the depth texture to the framebuffer, if I call glCheckFramebufferStatus(GL_READ_FRAMEBUFFER), I get GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER.
After searching the OpenGL documentation, I found this line, which seems to imply that OpenGL does not support reading from a depth component of a framebuffer if there is no color attachment:
GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER is returned if GL_READ_BUFFER is not GL_NONE and the value of GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE is GL_NONE for the color attachment point named by GL_READ_BUFFER.
Sure enough, if I create a temporary color texture and bind it to COLOR0, no errors occur when I readPixels from the depth texture.
Now, creating a temporary texture every time (EDIT: or even once, and having GPU memory tied up by it) just for this code path is annoying and potentially slow, so I was wondering if anyone knew of an alternative way to read a single pixel from a depth texture. (Of course, if there is no better way, I will keep around one texture to resize when needed and use only that for the temporary color attachment, but this seems rather roundabout.)
The answer is contained in your error message:
if GL_READ_BUFFER is not GL_NONE
So do that; set the read buffer to GL_NONE. With glReadBuffer. Like this:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo); //where fbo is your FBO.
glReadBuffer(GL_NONE);
That way, the FBO is properly complete, even though it only has a depth texture.
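Putting it together, a minimal sketch of reading one depth value back without any color attachment (depthTexture, x and y are placeholder names):

// Sketch: attach the depth texture to a read FBO, disable the read buffer,
// then read a single GL_DEPTH_COMPONENT pixel.
GLuint readFbo;
GLfloat depthValue;

glGenFramebuffers(1, &readFbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, readFbo);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTexture, 0);
glReadBuffer(GL_NONE); // no color attachment, so no read buffer

if (glCheckFramebufferStatus(GL_READ_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
    glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depthValue);
}

glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);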

glBlitFramebuffer causing Access Violation

I'm trying to copy from an FBO to the window's framebuffer. As far as I know, the window framebuffer has 8 bits for each of R, G, B, and A, and has a depth buffer (probably 24 bits). The FBO has a single texture attachment (format RGBA8) and no renderbuffers.
The problem is that when I try to blit the FBO to the screen, I get an access violation (Windows term for SIGSEGV). Blit code:
//Earlier: const int screen_rect[4] = {0,0,512,512};
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo->framebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glFinish();
//checking GL errors here gives no error
glBlitFramebuffer(
    screen_rect[0], screen_rect[1], screen_rect[2], screen_rect[3],
    screen_rect[0], screen_rect[1], screen_rect[2], screen_rect[3],
    GL_COLOR_BUFFER_BIT,
    GL_NEAREST //EDIT: I've also tried GL_LINEAR
);
glFinish();
//never reaches here
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
The FBO is GL_FRAMEBUFFER_COMPLETE_EXT and no GL errors occur at any point. The FBO and the window framebuffer are the same size.
Running on an NVIDIA GeForce 580M GTX with driver 301.42 (the latest to date).
Any ideas why this might be happening?
[EDIT: I have found that the problem does not occur when blitting from one FBO to another FBO, although no data seems to be copied.]
It seems that this implementation is EXTREMELY picky about the order the commands go in. I figured out the following after reverse-engineering some existing code. Perhaps there's some arcane reason they must be in this order, but I don't know what it is.
In any case, I believe the segfaulting behavior to be a bug in NVIDIA's OpenGL implementation.
Without further ado, the key commands, in order:
GLenum buffers1[] = {GL_BACK};

glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); // default framebuffer is the draw target
glDrawBuffers(1, buffers1);                // draw to its back buffer

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo->framebuffer);
glReadBuffer(GL_COLOR_ATTACHMENT1);        // read from the FBO's color attachment

glBlitFramebuffer(...)
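For completeness, a sketch that puts this ordering together with the question's blit parameters (fbo->framebuffer and screen_rect are the question's own names; GL_COLOR_ATTACHMENT1 follows the snippet above and may need to match your actual FBO setup):

GLenum buffers1[] = {GL_BACK};
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glDrawBuffers(1, buffers1);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo->framebuffer);
glReadBuffer(GL_COLOR_ATTACHMENT1);
glBlitFramebuffer(
    screen_rect[0], screen_rect[1], screen_rect[2], screen_rect[3],
    screen_rect[0], screen_rect[1], screen_rect[2], screen_rect[3],
    GL_COLOR_BUFFER_BIT,
    GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);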

How to copy a texture into a PBO in PyOpenGL?

After having used PyOpenGL happily for some time, I'm now seriously stuck. I am working on a Python package that lets me use GLSL shaders and OpenCL programs for image processing, with textures as the standardized way to get my data into and out of the GLSL shaders and OpenCL programs.
Everything works, except that I cannot manage to copy a texture into a PBO (pixel buffer object).
I'm using PBOs to get my texture data into and out of OpenCL, and that works nicely and fast in PyOpenCL: I can copy my OpenCL output from its PBO to a texture and display it, and I can also load data from the CPU into a PBO. But I am hopelessly stuck trying to fill my PBO with texture data that is already on the GPU, which is what I need to do to load the images produced by my GLSL shaders into OpenCL for further processing.
I've read about two ways to do this:
variant 1 binds the pbo, binds the texture and uses glGetTexImage()
variant 2 attaches the texture to a frame buffer object, binds the fbo and the pbo and uses glReadPixels()
I also read that the PyOpenGL versions of both glReadPixels() and glGetTexImage() have trouble with the 'Null'-pointers one should use when having a bound pbo, so for that reason I am using the OpenGL.raw.GL variants.
But in both cases I get an 'invalid operation' error, and I really do not see what I am doing wrong. Below are two versions of the _load_texture() method of my pixel buffer Python class; I hope I didn't strip them down too far...
variant 1:
def _load_texture(self, texture):
    glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, self.id)
    glEnable(texture.target)
    glActiveTexture(GL_TEXTURE0_ARB)
    glBindTexture(texture.target, texture.id)
    OpenGL.raw.GL.glGetTexImage(texture.target, 0, texture.gl_imageformat,
                                texture.gl_dtype, ctypes.c_void_p(0))
    glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, 0)
    glDisable(texture.target)
variant 2:
def _load_texture(self, texture):
    fbo = FrameBufferObject.from_textures([texture])
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           texture.target, texture.id, 0)
    glReadBuffer(GL_COLOR_ATTACHMENT0)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo.id)
    glBindBuffer(GL_PIXEL_PACK_BUFFER, self.id)
    OpenGL.raw.GL.glReadPixels(0, 0, self.size[0], self.size[1],
                               texture.gl_imageformat, texture.gl_dtype,
                               ctypes.c_void_p(0))
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0)
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_RECTANGLE_ARB, 0, 0)
    glBindFramebuffer(GL_FRAMEBUFFER, 0)
EDIT (adding some information about the error and the initialization of my PBO):
The error I am getting for variant 1 is:
OpenGL.error.GLError: GLError(
    err = 1282,
    description = 'invalid operation',
    baseOperation = glGetTexImage,
    cArguments = (
        GL_TEXTURE_RECTANGLE_ARB,
        0,
        GL_RGBA,
        GL_UNSIGNED_BYTE,
        c_void_p(None),
    )
)
and I'm initializing my PBO like this:
self.usage = usage
if isinstance(size, tuple):
    size = size[0] * size[1] * self.imageformat.planecount
bytesize = self.imageformat.get_bytesize_per_plane() * size
glBindBuffer(self.arraytype, self.id)
glBufferData(self.arraytype, bytesize, None, self.usage)
glBindBuffer(self.arraytype, 0)
self.arraytype is GL_ARRAY_BUFFER; for self.usage I have tried all the possibilities just in case, but GL_STREAM_READ seemed the most logical for my kind of use.
The size I am typically using is 1024 by 1024, with 4 planes and 1 byte per plane, since it is unsigned ints. This works fine when transferring pixel data from the host.
Also, I am on Kubuntu 11.10, using an NVIDIA GeForce GTX 580 with 3 GB of memory on the GPU, with the proprietary driver, version 295.33.
What am I missing?
Found a solution myself, without really understanding why it makes such a huge difference.
The code I had (for both variants) was basically correct, but it needs the call to glBufferData in there for it to work. I already made that identical call when initializing my PBO in my original code, but my guess is that there was enough going on between that initialization and my attempt to load the texture for the PBO memory somehow to become deallocated in the meantime.
Now I have only moved that call closer to my glGetTexImage call, and it works without changing anything else.
Strange; I'm not sure whether that is a bug or a feature, or whether it is related to PyOpenGL, to the NVIDIA driver, or to something else. It sure isn't documented anywhere easy to find, if it is expected behaviour.
The variant 1 code below works and is mighty fast too; variant 2 works fine as well when treated in the same way, but at about half the speed.
def _load_texture(self, texture):
    bytesize = (self.size[0] * self.size[1] *
                self.imageformat.planecount *
                self.imageformat.get_bytesize_per_plane())
    glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, self.id)
    glBufferData(GL_PIXEL_PACK_BUFFER_ARB,
                 bytesize,
                 None, self.usage)
    glEnable(texture.target)
    glActiveTexture(GL_TEXTURE0_ARB)
    glBindTexture(texture.target, texture.id)
    OpenGL.raw.GL.glGetTexImage(texture.target, 0, texture.gl_imageformat,
                                texture.gl_dtype, ctypes.c_void_p(0))
    glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, 0)
    glDisable(texture.target)