How to copy a CUDA-generated PBO to a texture with mipmapping - opengl

I'm trying to copy a PBO into a texture with auto-mipmapping enabled, but it seems only the top-level texture is generated (in other words, no mipmapping is occurring).
I'm building a PBO using
//Generate a buffer ID called a PBO (Pixel Buffer Object)
glGenBuffers(1, pbo);
//Make this the current UNPACK buffer
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, *pbo);
//Allocate data for the buffer. 4-channel 8-bit image
glBufferData(GL_PIXEL_UNPACK_BUFFER, size_tex_data, NULL, GL_DYNAMIC_COPY);
cudaGLRegisterBufferObject(*pbo);
and I'm building a texture using
// Enable Texturing
glEnable(GL_TEXTURE_2D);
// Generate a texture identifier
glGenTextures(1,textureID);
// Make this the current texture (remember that GL is state-based)
glBindTexture( GL_TEXTURE_2D, *textureID);
// Allocate the texture memory. The last parameter is NULL since we only
// want to allocate memory, not initialize it
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA_FLOAT32_ATI, size_x, size_y, 0,
             GL_RGBA, GL_FLOAT, NULL);
// Set the wrap and filter modes; GL_LINEAR enables interpolation when scaling
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
Later in a kernel I modify the PBO using something like:
float4* aryPtr = NULL;
cudaGLMapBufferObject((void**)&aryPtr, *pbo);
//Pixel* gpuPixelsRawPtr = thrust::raw_pointer_cast(&gpuPixels[0]);
//... do some cuda stuff to aryPtr ...
//If we don't unmap the PBO then OpenGL won't be able to use it:
cudaGLUnmapBufferObject(*pbo);
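As an aside, the cudaGL* calls above belong to CUDA's older interop API. A minimal sketch of the same register/map/modify/unmap cycle with the newer cudaGraphics* family (the resource handle pboRes is a name introduced here, not from the original code):
cudaGraphicsResource* pboRes = NULL;
// Register once, right after glBufferData:
cudaGraphicsGLRegisterBuffer(&pboRes, *pbo, cudaGraphicsRegisterFlagsNone);
// Per frame, instead of cudaGLMapBufferObject/cudaGLUnmapBufferObject:
float4* aryPtr = NULL;
size_t numBytes = 0;
cudaGraphicsMapResources(1, &pboRes, 0);
cudaGraphicsResourceGetMappedPointer((void**)&aryPtr, &numBytes, pboRes);
//... do some cuda stuff to aryPtr ...
cudaGraphicsUnmapResources(1, &pboRes, 0);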
Now, before I draw to the screen using the texture generated above, I call (note that rtMipmapTex = *textureID and rtResultPBO = *pbo from above):
glEnable(GL_DEPTH_TEST); // GL_DEPTH is not a valid glEnable cap; GL_DEPTH_TEST is what's intended
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, rtMipmapTex);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, rtResultPBO);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, canvasSize.x, canvasSize.y, GL_RGBA, GL_FLOAT, NULL);
This all works fine and correctly displays the texture. But if I change that last line to
glTexSubImage2D(GL_TEXTURE_2D, 1, 0, 0, canvasSize.x, canvasSize.y, GL_RGBA, GL_FLOAT, NULL);
which, as far as I understand, should show me the first level of the texture pyramid instead of the zeroth, I just get a blank white texture.
How do I copy the texture from the PBO in such a way that the auto-mipmapping is triggered?
Thanks.

I was being an idiot. The above code works perfectly. The problem was that
glTexSubImage2D(GL_TEXTURE_2D, 1, 0, 0, canvasSize.x, canvasSize.y, GL_RGBA, GL_FLOAT, NULL);
doesn't select which mipmap level of the texture to display; it selects the level to fill from the PBO, which is not mipmapped. Instead, you can display a particular mipmap level with:
glTexEnvi(GL_TEXTURE_FILTER_CONTROL, GL_TEXTURE_LOD_BIAS, 4);
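Another way to inspect a single mip level while debugging (a sketch, not part of the original answer; it assumes the mip chain has actually been generated) is to clamp the sampled range with GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL:
// Force sampling from mip level 4 only:
glBindTexture(GL_TEXTURE_2D, rtMipmapTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 4);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 4);
// Restore the defaults (base 0, max 1000) when done.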

Related

OpenGL pixel transfer inside GPU

I have been working with VR recently and encountered an OpenGL-related problem.
The API I use for VR captures a video stream and writes it to a texture; I then want to submit this texture to a headset. But there is an incompatibility in the API: the texture I get from the stream has an undefined internal format and cannot be submitted to the headset directly.
I am working on a workaround. For now, I have used a GPU -> CPU -> GPU transfer:
I read the first texture's pixels (with glReadPixels) and write them into a buffer, then I use this buffer to create a texture with the correct format. This works fine but has some latency due to the data transfers.
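For reference, the round trip looks roughly like this (a sketch with assumed names: srcFbo is a hypothetical FBO with the stream texture attached, since glReadPixels reads from the current read framebuffer, and dstTexture is the well-defined target):
// GPU -> CPU: read the source texture back through an FBO attachment
std::vector<GLubyte> cpuBuffer(width * height * 4);
glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFbo);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, cpuBuffer.data());
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
// CPU -> GPU: upload into a texture with a well-defined internal format
glBindTexture(GL_TEXTURE_2D, dstTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, cpuBuffer.data());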
I have been trying to do a direct GPU copy but failed:
I tried using a PBO but have problems with an invalid operation (following http://www.songho.ca/opengl/gl_pbo.html); here is the code, with the invalid operation raised on glReadPixels:
// Initialization
glGenBuffers(1, &pbo);
glGenTextures(1, &dstTexture);
glBindTexture(GL_TEXTURE_2D, dstTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
// Copy to GPU
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, bufferSize, NULL, GL_DYNAMIC_DRAW);
glBindTexture(GL_TEXTURE_2D, id); // texture given by the API
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, bufferSize, NULL, GL_DYNAMIC_READ);
glBindTexture(GL_TEXTURE_2D, dstTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
I tried using an FBO but encountered pointer exceptions.
glCopyImageSubData does not work because the first texture's internal format is not recognised.
What are the steps to do a direct GPU copy?
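One direct path that avoids the CPU round trip, sketched here purely for illustration under the assumption that the source texture can be attached to a framebuffer (which its undefined internal format may prevent, just as it trips up glCopyImageSubData):
// Attach the source texture to a read FBO, then let the GL copy
// directly into dstTexture without touching client memory
GLuint copyFbo = 0; // helper FBO introduced for this sketch
glGenFramebuffers(1, &copyFbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, copyFbo);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, id, 0); // id = texture given by the API
glBindTexture(GL_TEXTURE_2D, dstTexture);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);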

OpenGL ES3 FrameBuffer Object error

Here is my problem: I would like to render two images offscreen (which come from previous rendering) and then blend them together with a special algorithm. My first image/texture is RGB and the second one is grayscale, so I only have the luminance.
I think the best way to achieve my goal is to use a Framebuffer Object. So I create one FBO for the first image, then I attach to that FBO a new texture which will receive the rendering output. The texture's format is GL_RGB and it works fine; my first FBO contains the rendering in its texture.
Then I do the same with the second image, whose format is GL_LUMINANCE. But in the draw function, when I bind this framebuffer and check its status, I get the error GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT. If I change the texture to RGB, it works.
But for the blending I need access to the luminance, not the RGB components, of my image.
Do you have any solutions?
I'm working with OpenGL ES 3.0 on Freescale's i.MX6Q SoC.
Here is my code :
1 - Creation of the Frame Buffer Object and texture
glGenFramebuffers(1, &m_uiFrameBufferObject);
glBindFramebuffer(GL_FRAMEBUFFER, m_uiFrameBufferObject);
glGenTextures(1, &m_uiTextureId[1]);
glBindTexture(GL_TEXTURE_2D, m_uiTextureId[1]);
glTexImage2D( GL_TEXTURE_2D, 0, GL_LUMINANCE, 1280, 960,
0, GL_LUMINANCE, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_uiTextureId[1], 0);
glDrawBuffers(1, attachments); //GL_COLOR_ATTACHMENT0
Then in the draw function :
glBindFramebuffer(GL_FRAMEBUFFER, m_uiFrameBufferObject);
if( glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT )
{
// I go there
}
glViewport(0, 0, 1280, 960);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(m_ProgCamera.GetHandle());
glVertexAttribPointer(0, 3, GL_FLOAT, 0, 0, vVertices);
glEnableVertexAttribArray(0);
// Bind the color attributes
glVertexAttribPointer(1, 3, GL_FLOAT, 0, 0, g_vertexColors);
glEnableVertexAttribArray(1);
glVertexAttribPointer(2, 2, GL_FLOAT, 0, 0, g_vertexTexCoords);
glEnableVertexAttribArray(2);
// Select Our Texture
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_textureCamera.GetHandle());
glDrawArrays( GL_TRIANGLES, 0, 6 );
// Cleanup
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);
Thanks for your time.
GL_LUMINANCE is not a color-renderable format in OpenGL ES 3.0. The specification states in Section 4.4.4 (Framebuffer Completeness):
An internal format is color-renderable if it is one of the formats from table
3.12 noted as color-renderable or if it is unsized format RGBA or RGB. No
other formats, including compressed internal formats, are color-renderable.
Since GL_LUMINANCE is not included in this list, it is impossible to render to such a texture. Depending on your needs you could, for example, use GL_R8 instead.
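A minimal sketch of that change, reusing the dimensions and texture handle from the question (GL_R8 with a GL_RED upload is color-renderable in ES 3.0):
// Single-channel renderable texture: internal format GL_R8, data format GL_RED
glGenTextures(1, &m_uiTextureId[1]);
glBindTexture(GL_TEXTURE_2D, m_uiTextureId[1]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 1280, 960,
             0, GL_RED, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, m_uiTextureId[1], 0);
// In the blending shader, read the luminance from the texture's .r channel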

Creating and reading 1D textures in OpenGL 4.x

I have problems using 1D textures in OpenGL 4.x.
I create my 1D texture this way (BTW: I removed my error checks to make the code clearer and shorter; usually after each gl call a BLUE_ASSERTEx(glGetError() == GL_NO_ERROR, "glGetError failed."); follows):
glGenTextures(1, &textureId_);
// bind texture
glBindTexture(GL_TEXTURE_1D, textureId_);
// tells OpenGL how the data that is going to be uploaded is aligned
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
BLUE_ASSERTEx(description.data, "Invalid data provided");
glTexImage1D(
GL_TEXTURE_1D, // Specifies the target texture. Must be GL_TEXTURE_1D or GL_PROXY_TEXTURE_1D.
0, // Specifies the level-of-detail number. Level 0 is the base image level. Level n is the nth mipmap reduction image.
GL_RGBA32F,
description.width,
0, // border: This value must be 0.
GL_RGBA,
GL_FLOAT,
description.data);
BLUE_ASSERTEx(glGetError() == GL_NO_ERROR, "glGetError failed.");
// texture sampling/filtering operation.
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_1D, 0);
After the creation I try to read the pixel data of the created texture this way:
const int width = width_;
const int height = 1;
// Allocate memory for the readback (note: multiplying by sizeof(buw::vector4f)
// over-allocates; width*height*4 floats would suffice for RGBA)
float* pixels = new float[width*height*sizeof(buw::vector4f)];
// bind texture
glBindTexture(GL_TEXTURE_1D, textureId_);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, pixels);
glBindTexture(GL_TEXTURE_1D, 0);
buw::Image_4f::Ptr img(new buw::Image_4f(width, height, pixels));
buw::storeImageAsFile(filename.toCString(), img.get());
delete[] pixels; // array delete to match new[]
But the returned pixel data is different from the input pixel data (input: a color ramp, output: a black image).
Any ideas how to solve the issue? Maybe I am using the wrong API call.
Replacing glReadPixels with glGetTexImage fixes the issue: glReadPixels reads from the current framebuffer, not from the bound texture.
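A sketch of the fixed readback, using the same variables as the code above:
// Read the texel data of mip level 0 directly from the texture object,
// instead of reading from the (unrelated) current framebuffer
glBindTexture(GL_TEXTURE_1D, textureId_);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glGetTexImage(GL_TEXTURE_1D, 0, GL_RGBA, GL_FLOAT, pixels);
glBindTexture(GL_TEXTURE_1D, 0);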

Depth values and FBO

This is the lovely image I keep seeing in gDebugger as I attempt to implement shadow maps. For reference, I'm using the tutorial at http://fabiensanglard.net/shadowmapping/index.php. I've tried a couple of different things with varying results, mostly either an FBO that has flat depth values (no slight variance indicating weird depth ranges) or this awesome guy. I've tried a huge number of different things but alas, short of complete FBO failure, I mostly come back to this. For reference, here is what gDebugger spits out as my actual depth buffer:
So, with all that in mind, here's my code. Shader code is withheld because I have yet to get something into my FBO that's worth processing.
Here's how I initialize the FBO/Depth tex:
int shadow_width = 1024;
int shadow_height = 1024;
// Try to use a texture depth component
glGenTextures(1, &depth_tex);
glBindTexture(GL_TEXTURE_2D, depth_tex);
// GL_LINEAR does not make sense for depth texture. However, next tutorial shows usage of GL_LINEAR and PCF
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Remove artefact on the edges of the shadowmap
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP );
// No need to force GL_DEPTH_COMPONENT24, drivers usually give you the max precision if available
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24,
shadow_width, shadow_height, 0, GL_DEPTH_COMPONENT,
GL_FLOAT, 0);
// attach the texture to FBO depth attachment point
glBindTexture(GL_TEXTURE_2D, 0);
// create a framebuffer object
glGenFramebuffers(1, &fbo_id);
glBindFramebuffer(GL_FRAMEBUFFER, fbo_id);
// Instruct OpenGL that we won't bind a color texture to the currently bound FBO
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,GL_TEXTURE_2D, depth_tex, 0);
// switch back to window-system-provided framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
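One small addition worth making here (a sketch, not in the original post): the setup never verifies completeness, and a glCheckFramebufferStatus call before unbinding catches attachment problems early:
// While the FBO is still bound: GL_FRAMEBUFFER_COMPLETE is the only
// status that permits rendering into it
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
    fprintf(stderr, "depth FBO incomplete: 0x%04x\n", status);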
And here's my render code:
glLoadIdentity();
float lpos[4] = {0,0,0,1};
float lpos3[4] = {5000,-500,5000,1};
glBindTexture(GL_TEXTURE_2D,0);
glBindFramebuffer(GL_FRAMEBUFFER,fbo_id);
glViewport(0, 0, 1024, 1024);
glClear(GL_DEPTH_BUFFER_BIT);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
setup_mats(lpos[0],lpos[1],lpos[2],lpos3[0],lpos3[1],lpos3[2]);
glCullFace(GL_FRONT);
mdlman.render_models(R_STENCIL);
mdlman.render_model(R_STENCIL,1);
set_tex_mat();
glBindFramebuffer(GL_FRAMEBUFFER,0);
glViewport( 0, 0, (GLsizei)800, (GLsizei)600 );
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
this->r_cam->return_LookAt();
gluPerspective(45, 800.0f/600.0f, 0.5f, 2000.0f); // note: 800/600 would be integer division, giving an aspect ratio of 1
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(&r_cam->getProjectionMatrix()[0][0]);
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(&r_cam->getViewMatrix()[0][0]);
glCullFace(GL_BACK);
mdlman.render_models(R_NORM);
And here's how I draw objects for the FBO:
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER,models.at(mdlid).t_buff);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,models.at(mdlid).i_buff);
glNormalPointer(GL_FLOAT,sizeof(vert),BUFFER_OFFSET(sizeof(GLfloat)*2));
glVertexPointer(3, GL_FLOAT,sizeof(vert),BUFFER_OFFSET(sizeof(GLfloat)*5));
glDrawElements(GL_TRIANGLES,(models.at(mdlid).f_index.size())*3,GL_UNSIGNED_INT,0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
Some closing notes: the depth buffer image from gDebugger IS from the light's POV. My camera is quaternion-based and uses matrices aligned with OpenGL for its lookAt function (which is used to translate to the light's position/view). I can provide the code for that if need be. Everything else is pretty much carbon-copied from the tutorial to make sure I'm not doing or setting up something stupid.

Error rendering with GLSL framebuffer

I am trying to display a video frame on the screen as a 2D texture covering the screen. I must use framebuffers because eventually I want to do the ping-pong rendering technique, but for the time being I first just want to render to the screen using framebuffers.
This is my setup code:
// Texture setup
int[] text = new int[1];
glGenTextures(1, text, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, text[0]);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 512, 512, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, null);
// Framebuffer setup
int[] fbo = new int[1];
glGenFramebuffers(1, fbo, 0);
glBindFramebuffer(GL_FRAMEBUFFER, fbo[0]);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, text[0], 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Up to here everything is OK. I didn't include the testing code to keep this short, but right after this I check whether the framebuffer was created correctly, and all is fine.
Now in my render loop I do the following:
// Use the GLSL program
glUseProgram(programHandle);
// Swap to my FBO
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, fbo[0]);
glViewport(0, 0, 512, 512);
// Pass the new image data to the program so the fragment shader processes it.
// Using glTexSubImage2D to speed up
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, text[0]);
glUniform1i(glGetUniformLocation(programHandle, "u_texture"), 0);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_LUMINANCE, GL_UNSIGNED_BYTE, NewImageData);
// Draw the quad
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Swap back to the default screen frambuffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);
At this point I get a black screen, and in the log I can see a GL_INVALID_FRAMEBUFFER_OPERATION (1286).
I tried putting the glDrawArrays call after the glBindFramebuffer call, but then the application crashes.
Any thoughts? Thanks in advance.
LUMINANCE textures are not color-renderable, which means you can't render to them using an FBO. This problem is addressed by the GL_ARB_texture_rg extension, which introduces one- and two-channel texture formats that are renderable.
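A sketch of what that looks like in practice, written in C-style GL against desktop GL with GL_ARB_texture_rg (names reuse the question's setup; the swizzle part additionally assumes GL 3.3 / ARB_texture_swizzle):
// Replace the luminance texture with a renderable single-channel R8 texture
glBindTexture(GL_TEXTURE_2D, text[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 512, 512, 0,
             GL_RED, GL_UNSIGNED_BYTE, NULL);
// Optional: make .g and .b mirror .r so shaders written against a
// luminance texture keep working unchanged
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_RED);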