glGetTexImage doesn't work with CVOpenGLTextureRef - opengl

I'm writing an app for Mac OS X with OpenGL 2.1
I have a CVOpenGLTextureRef which holds the texture that I render with GL_QUADS and everything works fine.
I now need to determine which pixels of the texture are black, so I have written this code to read the raw data from the texture:
//"image" is the CVOpenGLTextureRef
GLenum textureTarget = CVOpenGLTextureGetTarget(image);
GLuint textureName = CVOpenGLTextureGetName(image);
glEnable(textureTarget);
glBindTexture(textureTarget, textureName);
GLint textureWidth, textureHeight;
int bytes;
glGetTexLevelParameteriv(textureTarget, 0, GL_TEXTURE_WIDTH, &textureWidth);
glGetTexLevelParameteriv(textureTarget, 0, GL_TEXTURE_HEIGHT, &textureHeight);
bytes = textureWidth*textureHeight;
GLfloat buffer[bytes];
glGetTexImage(textureTarget, 0, GL_LUMINANCE, GL_FLOAT, &buffer);
GLenum error = glGetError();
glGetError() reports GL_NO_ERROR, but buffer is unchanged after the call to glGetTexImage(); it's still blank.
Am I doing something wrong?
Note that I can't use glReadPixels() because I modify the texture before rendering it and I need to get raw data of the unmodified texture.
EDIT: I also tried the following approach, but I still get an all-zero buffer as output:
unsigned char *buffer = (unsigned char *)malloc(textureWidth * textureHeight * sizeof(unsigned char));
glGetTexImage(textureTarget, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, buffer);
EDIT 2: The same problem is reported here and here.

Try this:
glGetTexImage(textureTarget, 0, GL_LUMINANCE, GL_FLOAT, buffer);
Perhaps you were thinking of this idiom:
std::vector<GLfloat> buffer(bytes);
glGetTexImage(textureTarget, 0, GL_LUMINANCE, GL_FLOAT, &buffer[0]);
EDIT: Setting your pack alignment before readback may also be worthwhile:
glPixelStorei(GL_PACK_ALIGNMENT, 1);

I have discovered that this is allowed behavior for glGetTexImage() with respect to a CVOpenGLTextureRef. The only sure way is to draw the texture into an FBO and then read it back from there with glReadPixels().

Related

How to grow a GL_TEXTURE_2D_ARRAY?

I have created a 2D texture array like this:
glTexImage3D(GL_TEXTURE_2D_ARRAY,
0, // No mipmaps
GL_RGBA8, // Internal format
width, height, 100, // width,height,layer count
0, // border?
GL_RGBA, // format
GL_UNSIGNED_BYTE, // type
0); // pointer to data
How do I increase its size from 100 layers to 200, for example? I guess I would have to create a new 2D array with 200 layers and copy the images over with glCopyTexSubImage3D?
glBindTexture(GL_TEXTURE_2D_ARRAY, texture_id);
glCopyTexSubImage3D(GL_TEXTURE_2D_ARRAY,
0,
0, 0, 0,
0, 0,
width, height
);
glDeleteTextures(1, &texture_id);
GLuint new_tex_id;
glGenTextures(1, &new_tex_id);
glBindTexture(GL_TEXTURE_2D_ARRAY, new_tex_id);
glTexImage3D(GL_TEXTURE_2D_ARRAY,
0,
GL_RGBA8,
width, height, 200,
0,
GL_RGBA,
GL_UNSIGNED_BYTE,
0);
//How do I get the data in `GL_READ_BUFFER` into my newly bound texture?
texture_id = new_tex_id;
But how do I actually get the data out of the GL_READ_BUFFER?
glCopyTexSubImage copies data from the framebuffer, not from a texture. That's why it doesn't take two texture objects to copy with.
Copying from a texture into another texture requires glCopyImageSubData. This is an OpenGL 4.3 function, from ARB_copy_image. A similar function can also be found in NV_copy_image, which may be more widely supported.
BTW, you should generally avoid doing this operation at all. If you needed a 200 element array texture, you should have allocated that the first time.
The glCopyImageSubData() function that @NicolBolas pointed out is the easiest solution if you're OK with requiring OpenGL 4.3 or later.
You can use glCopyTexSubImage3D() for this purpose. But since the source for this function is the current read framebuffer, you need to bind your original texture as a framebuffer attachment. The code could roughly look like this:
GLuint fbo = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindTexture(GL_TEXTURE_2D_ARRAY, new_tex_id);
for (int layer = 0; layer < 100; ++layer) {
glFramebufferTextureLayer(GL_READ_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0, tex_id, 0, layer);
glCopyTexSubImage3D(GL_TEXTURE_2D_ARRAY,
0, 0, 0, layer, 0, 0, width, height);
}
You can also use glBlitFramebuffer() instead:
GLuint fbos[2] = {0, 0};
glGenFramebuffers(2, fbos);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[0]);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[1]);
for (int layer = 0; layer < 100; ++layer) {
glFramebufferTextureLayer(GL_READ_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0, tex_id, 0, layer);
glFramebufferTextureLayer(GL_DRAW_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0, new_tex_id, 0, layer);
glBlitFramebuffer(
0, 0, width, height, 0, 0, width, height,
GL_COLOR_BUFFER_BIT, GL_NEAREST);
}
The two options should be more or less equivalent. I would probably go with glBlitFramebuffer(), since it's a newer function (introduced in 3.0) and is probably more commonly used, so it may be better optimized. But if this is performance critical in your application, you should try both and compare.

OpenGL tiled texture uploading with PBO

I'm using PBO as follows:
glGenBuffersARB(1, &pboIds);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pboIds);
glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, FB_SIZE, 0, GL_DYNAMIC_DRAW_ARB);
unsigned char* ptr = (unsigned char*)glMapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY_ARB);
memcpy(ptr, g_fb_addr, FB_SIZE);
glUnmapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB); // unmap before using the PBO as the unpack source
glBindTexture(GL_TEXTURE_2D, textureId);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, FB_WIDTH, FB_HEIGHT, FB_FORMAT, GL_UNSIGNED_BYTE, 0);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
And I'm using textureId to display the image on screen. BTW, g_fb_addr, the source of the image, has a tiled memory layout, so the displayed image comes out striped along the horizontal axis.
My question is: is there a way to upload a tiled image into a PBO?

GLSL - Unpack in a SSB (V2)

This question continues the subject from here: Unpack in a SSB
With the previous setup, I found myself unable to reset my SSB using the pixel unpack buffer.
My init function:
//Storage Shader buffer
glGenBuffers(1, &m_buffer);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_buffer);
glBufferData(
GL_SHADER_STORAGE_BUFFER,
1 * sizeof(uint),
NULL,
GL_DYNAMIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, 0);
//Texture
glGenTextures(1, &m_texture);
glBindTexture(GL_TEXTURE_2D, m_texture);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_R32UI, 1, 1);
glTexBufferRange(
GL_TEXTURE_BUFFER,
GL_R32UI,
m_buffer,
0,
1 * 1 * sizeof(GLuint));
glBindTexture(GL_TEXTURE_2D, 0);
//Unpack buffer
uint clearData = 5;
glGenBuffers(1, &m_clearBuffer);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_clearBuffer);
glBufferData(
GL_PIXEL_UNPACK_BUFFER,
1 * 1 * sizeof(GLuint),
&clearData,
GL_STATIC_COPY);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
My clearing function:
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_clearBuffer);
glBindTexture(GL_TEXTURE_2D, m_texture);
glTexSubImage2D(
GL_TEXTURE_2D,
0,
0,
0,
1,
1,
GL_RED_INTEGER,
GL_UNSIGNED_INT,
NULL);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);
The clear function doesn't work. If I try to read the value back from the buffer with glGetBufferSubData(), BAADF00D is returned. If I use a simple glBufferSubData() instead of an unpack operation, it works.
How do I properly reset my SSB with the pixel unpack buffer?
ANSWER:
The problem was that I was binding my texture to GL_TEXTURE_2D instead of GL_TEXTURE_BUFFER. However, there is an easier way to unpack into my SSB:
m_pFunctions->glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_clearBuffer);
m_pFunctions->glBindBuffer(GL_ARRAY_BUFFER, m_buffer);
m_pFunctions->glCopyBufferSubData(
GL_PIXEL_UNPACK_BUFFER,
GL_ARRAY_BUFFER,
0,
0,
1 * sizeof(GLuint)); // size is in bytes: copy the whole uint, not just one byte
m_pFunctions->glBindBuffer(GL_ARRAY_BUFFER, 0);
m_pFunctions->glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
This way I don't even need a texture.
You are using texture buffer objects incorrectly. You are creating an ordinary 2D texture (including the actual storage) and then seem to try to define a buffer texture over that storage. Your glTexBufferRange() call will fail because you don't have any texture object bound to the GL_TEXTURE_BUFFER target.
But simply binding m_texture there would not make sense either. The point of TBOs is to make a buffer object available as a texture. You cannot modify the TBO contents via the texture paths; glTex(Sub)Image/glTexStorage are not allowed for buffer textures. You have to use the buffer update mechanisms.
I don't see why you even try to go through the texture path. Modifying the underlying data store is enough. You can simply copy the contents of your PBO (or whatever kind of buffer you want to use) over to the buffer that defines the storage for your TBO via glCopyBufferSubData(). Or, with modern GL, the most efficient approach might be to use glClearBufferData directly on the SSBO.

Unpack in a SSB

I use part of an SSB as a 3D matrix of linked lists. Each voxel of the matrix is a uint that gives the location of the first element of the list.
Before each render, I need to re-initialize this matrix, but not the whole SSB. So I associated the part corresponding to the matrix with a 1D texture, in order to be able to unpack a buffer into it.
//Storage Shader buffer
glGenBuffers(1, &m_buffer);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_buffer);
glBufferData(GL_SHADER_STORAGE_BUFFER,
headerMatrixSizeInByte + linkedListSizeInByte,
NULL,
GL_DYNAMIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, 0);
//Texture
glGenTextures(1, &m_texture);
glBindTexture(GL_TEXTURE_1D, m_texture);
glTexBufferRange(
GL_TEXTURE_BUFFER,
GL_R32UI,
m_buffer,
0,
headerMatrixSizeInByte);
glBindTexture(GL_TEXTURE_1D, 0);
//Unpack buffer
GLubyte *clearData = new GLubyte[headerMatrixSizeInByte];
memset(clearData, 0xff, headerMatrixSizeInByte);
glGenBuffers(1, &m_clearBuffer);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_clearBuffer);
glBufferData(
GL_PIXEL_UNPACK_BUFFER,
headerMatrixSizeInByte,
clearData,
GL_STATIC_COPY);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
delete[] clearData;
So that is the initialization; now here is the clear attempt:
GLuint err;
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_clearBuffer);
glBindTexture(GL_TEXTURE_1D, m_texture);
err = m_pFunctions->glGetError(); //no error
glTexSubImage1D(
GL_TEXTURE_1D,
0,
0,
m_textureSize,
GL_RED_INTEGER,
GL_UNSIGNED_INT,
NULL);
err = m_pFunctions->glGetError(); //err GL_INVALID_VALUE
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
glBindTexture(GL_TEXTURE_1D, 0);
My questions are:
Is it possible to do what I'm attempting?
If yes, where did I screw up?
Thanks again to Andon, who got half the answer. There are two problems in the code above:
m_textureSize = 32770, which exceeds the per-dimension texture size limit on much hardware. The easy workaround is to use a 2D texture. Since I don't care about the content after the linked list in the buffer, I can write whatever I want into it; on the next rendering call it will be overwritten by the shaders.
When creating the texture, one function call was missing: glTexStorage2D(GL_TEXTURE_2D, 1, GL_R32UI, width, height);

issues with mixing glGetTexImage and imageStore on nvidia opengl

I wrote some code, too long to paste here, that renders into a 3D one-component float texture via a fragment shader that uses bindless imageLoad and imageStore.
That code is definitely working.
I then needed to work around some GLSL compiler bugs, so wanted to read the 3D texture above back to the host via glGetTexImage. Yes, I did do a glMemoryBarrierEXT(GL_ALL_BARRIER_BITS).
I did check the texture info via glGetTexLevelParameteriv() and everything I see matches. I did check for OpenGL errors, and have none.
Sadly, though, glGetTexImage never seems to read what was written by the fragment shader. Instead, it only returns the fake values I put in when I called glTexImage3D() to create the texture.
Is that expected behavior? The documentation implies otherwise.
If glGetTexImage actually works that way, how can I read back the data in that 3D texture (which is resident on the device)? Clearly the driver can do that, as it does when the texture is made non-resident. Surely there's a simple way to do this simple thing...
I was asking if glGetTexImage was supposed to work that way or not. Here's the code:
void Bindless3DArray::dump_array(Array3D<float> &out)
{
bool was_mapped = m_image_mapped;
if (was_mapped)
unmap_array(); // unmap array so it's accessible to opengl
out.resize(m_depth, m_height, m_width);
glBindTexture(GL_TEXTURE_3D, m_textureid); // from glGenTextures()
#if 0
int w,h,d;
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_WIDTH, &w);
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_HEIGHT, &h);
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_DEPTH, &d);
int internal_format;
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internal_format);
int data_type_r, data_type_g;
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_RED_TYPE, &data_type_r);
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_GREEN_TYPE, &data_type_g);
int size_r, size_g;
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_RED_SIZE, &size_r);
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_GREEN_SIZE, &size_g);
#endif
glGetTexImage(GL_TEXTURE_3D, 0, GL_RED, GL_FLOAT, &out(0,0,0));
glBindTexture(GL_TEXTURE_3D, 0);
CHECK_GLERROR();
if (was_mapped)
map_array_to_cuda(); // restore state
}
Here's the code that creates the bindless array:
void Bindless3DArray::allocate(int w, int h, int d, ElementType t)
{
if (!m_textureid)
glGenTextures(1, &m_textureid);
m_type = t;
m_width = w;
m_height = h;
m_depth = d;
glBindTexture(GL_TEXTURE_3D, m_textureid);
CHECK_GLERROR();
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAX_LEVEL, 0); // ensure only 1 miplevel is allocated
CHECK_GLERROR();
Array3D<float> foo(d, h, w);
// DEBUG -- glGetTexImage returns THIS data, not what's on device
for (int z=0; z<m_depth; ++z)
for (int y=0; y<m_height; ++y)
for (int x=0; x<m_width; ++x)
foo(z,y,x) = 3.14159;
//-- Texture creation
if (t == ElementInteger)
glTexImage3D(GL_TEXTURE_3D, 0, GL_R32UI, w, h, d, 0, GL_RED_INTEGER, GL_INT, 0);
else if (t == ElementFloat)
glTexImage3D(GL_TEXTURE_3D, 0, GL_R32F, w, h, d, 0, GL_RED, GL_FLOAT, &foo(0,0,0));
else
throw "Invalid type for Bindless3DArray";
CHECK_GLERROR();
m_handle = glGetImageHandleNV(m_textureid, 0, true, 0, (t == ElementInteger) ? GL_R32UI : GL_R32F);
glMakeImageHandleResidentNV(m_handle, GL_READ_WRITE);
CHECK_GLERROR();
#ifdef USE_CUDA
checkCuda(cudaGraphicsGLRegisterImage(&m_image_resource, m_textureid, GL_TEXTURE_3D, cudaGraphicsRegisterFlagsSurfaceLoadStore));
#endif
}
I allocate the array, render to it via an OpenGL fragment program, and then I call dump_array() to read the data back. Sadly, I only get what I loaded in the allocate call.
The render program looks like
void App::clear_deepz()
{
deepz_clear_program.bind();
deepz_clear_program.setUniformValue("sentinel", SENTINEL);
deepz_clear_program.setUniformValue("deepz", deepz_array.handle());
deepz_clear_program.setUniformValue("sem", semaphore_array.handle());
run_program();
glMemoryBarrierEXT(GL_ALL_BARRIER_BITS);
// glMemoryBarrierEXT(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
// glMemoryBarrierEXT(GL_SHADER_GLOBAL_ACCESS_BARRIER_BIT_NV);
deepz_clear_program.release();
}
and the fragment program is:
#version 420
in vec4 gl_FragCoord;
uniform float sentinel;
coherent uniform layout(size1x32) image3D deepz;
coherent uniform layout(size1x32) uimage3D sem;
void main(void)
{
ivec3 coords = ivec3(gl_FragCoord.x, gl_FragCoord.y, 0);
imageStore(deepz, coords, vec4(sentinel));
imageStore(sem, coords, ivec4(0));
discard; // don't write to FBO at all
}
discard; // don't write to FBO at all
That's not what discard means. Oh, it does mean that. But it also means that all Image Load/Store writes will be discarded too. Indeed, odds are, the compiler will see that statement and just do nothing for the entire fragment shader.
If you want to just execute the fragment shader, you can employ the GL 4.3 feature (available on your NVIDIA hardware) of having an empty framebuffer object. Or you could use a compute shader. If you can't use GL 4.3 yet, then use a write mask to turn off all color writes.
As Nicol mentions above, if you want only the side effects of image load and store, the proper way is to use an empty framebuffer object.
The problem with mixing glGetTexImage() and bindless textures was in fact a driver bug, and it has been fixed as of driver version 335.23. I filed the bug and have confirmed that my code now works properly.
Note that I am using empty framebuffer objects in the code, and I don't use discard any more.