Error rendering with GLSL framebuffer

I am trying to display a video frame on screen as a 2D texture covering the whole screen. I have to use framebuffers because I eventually want to do ping-pong rendering, but for now I just want to get rendering to the screen working through a framebuffer.
This is my setup code:
// Texture setup
int[] text = new int[1];
glGenTextures(1, text, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, text[0]);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 512, 512, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, null);
// Framebuffer setup
int[] fbo = new int[1];
glGenFramebuffers(1, fbo, 0);
glBindFramebuffer(GL_FRAMEBUFFER, fbo[0]);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, text[0], 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Up to this point everything is fine. I left out the testing code to keep this short, but right after this I check whether the framebuffer was created correctly, and everything checks out.
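For reference, a minimal sketch of such a check looks like this:
// Illustrative sketch, not the omitted code: query the status while
// the FBO is still bound; anything other than GL_FRAMEBUFFER_COMPLETE
// means rendering to it will raise GL_INVALID_FRAMEBUFFER_OPERATION.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // log or handle the incomplete framebuffer here
}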
Now in my render loop I do the following:
// Use the GLSL program
glUseProgram(programHandle);
// Swap to my FBO
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, fbo[0]);
glViewport(0, 0, 512, 512);
// Pass the new image data to the program so the fragment shader processes it.
// Using glTexSubImage2D to speed up
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, text[0]);
glUniform1i(glGetUniformLocation(programHandle, "u_texture"), 0);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_LUMINANCE, GL_UNSIGNED_BYTE, NewImageData);
// Draw the quad
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Swap back to the default screen framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);
At this point I get a black screen, and in the log I can see a GL_INVALID_FRAMEBUFFER_OPERATION (1286) error.
I tried putting the glDrawArrays call after the glBindFramebuffer call, but then the application crashes.
Any thoughts? Thanks in advance.

LUMINANCE textures are not color-renderable, which means you can't render to them with an FBO. This problem is solved by the GL_ARB_texture_rg extension, which introduces one- and two-channel texture formats that are renderable.
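As an illustration, a minimal sketch of that change (using the desktop GL names from GL_ARB_texture_rg; on GLES 2 the equivalent is GL_EXT_texture_rg, and this is not code from the question): allocate a one-channel red texture instead of luminance and upload with GL_RED.
// Hypothetical replacement for the GL_LUMINANCE allocation:
// GL_R8 is a color-renderable one-channel format.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 512, 512, 0, GL_RED, GL_UNSIGNED_BYTE, null);
// The later glTexSubImage2D upload would then also use GL_RED, and the
// fragment shader reads the value from the texture's .r component.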

Related

Get depth stored in different FBOs that have different shaders

The goal of my program is to compare the depth from an RGB-D camera with the OpenGL depth.
The camera is facing a (real) model and has its own shaders (fragment and vertex).
For the comparison, a 3D model is rendered, which also has its own shaders (the depth is linearized as explained in this tutorial: https://learnopengl.com/Advanced-OpenGL/Depth-testing).
So the comparison is done between the depth received from the camera and the depth of the 3D model.
The steps I have done are:
1. In the initialization function:
Create two textures for each one (color and depth textures).
Generate two FBOs (one for the camera, the other one for the 3D Model).
Attach textures to their FBOs.
/* Other stuff needed for initialization */
...
// Generate color texture
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, _windowWidth, _windowHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
// Generate depth texture
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, _windowWidth, _windowHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
// Attach the color and depth textures to current FrameBuffer
glGenFramebuffers(1, &_fboGL);
glBindFramebuffer(GL_FRAMEBUFFER, _fboGL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
2. In the main function for drawing and rendering:
Render the camera stream by binding its FBO.
Draw the 3D model by binding its own FBO.
Try to print the OpenGL depth related to the 3D model.
A fragment of the code looks like this:
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, camColorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, camDepthTex, 0);
glViewport(0, 0, windowWidth, windowHeight);
glEnable(GL_DEPTH_TEST);
glClearColor(0.25f, 0.25f, 0.25f, 1.f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
/*******************************************/
/* Some code for displaying camera stream */
/*******************************************/
...
drawModelFunction();
printDepthFunction();
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, camColorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, camDepthTex, 0);
/***********************************************************/
/* Some stuff to render a GUI to interact with the program */
/***********************************************************/
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, _windowWidth, _windowHeight, 0, 0, _windowWidth, _windowHeight, GL_COLOR_BUFFER_BIT, GL_LINEAR);
glBlitFramebuffer(0, 0, _windowWidth, _windowHeight, 0, 0, _windowWidth, _windowHeight, GL_DEPTH_BUFFER_BIT, GL_NEAREST);
A code fragment from drawModelFunction():
glBindFramebuffer(GL_FRAMEBUFFER, fboGL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
/*************************************/
/* Some stuff to render the 3D Model */
/*************************************/
A code fragment from printDepthFunction():
GLfloat *pixels = (GLfloat *)malloc(_windowWidth * _windowHeight * sizeof *pixels);
assert(pixels);
glBindTexture(GL_TEXTURE_2D, depthTex);
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, pixels);
/* print the values stored in pixels */
free(pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
When I try to display the values stored in the depth texture, I only obtain the value 1.0f, and the color only returns the value 0.0f. On top of that, the rendering is broken since I mixed everything up with the FBOs: I get a screen with a mixture of the camera stream and a red square inserted in the middle of the screen with the GUI in it (the GUI took on the red color of the square), and no 3D model. (This is the result I obtained after binding the framebuffers everywhere as shown above.)

OpenGL: Depth Attachment breaks Framebuffer

I need a fresh pair of eyes. While rewriting my engine, I stumbled upon this issue while writing the deferred rendering path. The framebuffer displays only if I don't use a depth attachment, which means the rendering is faulty; if I do use one, all the outputs are blank. I wrote a lot of graphics-handling classes, but I've boiled the code down here:
Initialization:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glEnable(GL_CULL_FACE);
glClearColor(0, 0, 0, 1);
glViewport(0, 0, 1024, 768);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
ShaderPreparation();
numBuffer = 3;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
numBuffers = numBuffer;
targetBuffer = 0;
textures = new unsigned int[numBuffers];
glGenTextures(numBuffers, textures);
This is done three times:
glBindTexture(GL_TEXTURE_2D, textures[targetBuffer]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, colorType, width, height, 0, colorFormat, colorDataType, NULL);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + targetBuffer, GL_TEXTURE_2D, textures[targetBuffer], 0);
targetBuffer++;
Then I create the Depth Attachment:
glGenTextures(1, &renderBuffer);
glBindTexture(GL_TEXTURE_2D, renderBuffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, renderBuffer, 0);
EDIT 2: I forgot the last bit of the FBO setup:
GLenum *DrawBuffers = new GLenum[numBuffers];
for (size_t i = 0; i < numBuffers; i++)
DrawBuffers[i] = GL_COLOR_ATTACHMENT0 + i;
glDrawBuffers(numBuffers, DrawBuffers);
// Report errors
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
fprintf(stderr, "Framebuffer Error. Status 0x%x\n", status);
}
// Unbind
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
Finally, my drawing (all calculations for the geometry shader happen before this):
geometryShader->Use();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
graphicsWrapper->render(Geometry);
int val[3] = { 0,1,2 };
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
deferredShader->Use();
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, textures[0]);
glActiveTexture(GL_TEXTURE0 + 1);
glBindTexture(GL_TEXTURE_2D, textures[1]);
glActiveTexture(GL_TEXTURE0 + 2);
glBindTexture(GL_TEXTURE_2D, textures[2]);
// I use this system so it's compatible with Uniform Buffer
// Objects and the rendering code of other graphics languages
deferredShader->PassData(&val);
deferredShader->SetInteger();
deferredShader->SetInteger();
deferredShader->SetInteger();
glClear(GL_COLOR_BUFFER_BIT);
vaoQuad->Bind();
graphicsWrapper->DrawVertexArray(4);
vaoQuad->Unbind();
Note that my code is much more object-oriented than this, and I had to take a lot of it out of context. My question is: why does attaching the depth attachment cause the framebuffer outputs to go blank, while removing it works, and how do I fix it?
EDIT: I know that it's not a renderbuffer, I just called the depth texture that because it USED to be a renderbuffer, and I forgot to change the name.
When you don't have a depth buffer, every fragment automatically passes the depth test. After you add a depth buffer, the depth test becomes effective (assuming you enabled it). However, you forgot to clear your depth buffer:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
Here you clear the default framebuffer's color and depth buffers (or those of another FBO, if one is bound at that time), but not the depth buffer of fbo...
There is a high chance that your depth texture is initially all zeros. With default depth conventions, that means it already holds the nearest possible value, so the depth test will likely fail for anything you draw.
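A minimal sketch of the fix, reordering the calls from the question (fbo, graphicsWrapper, and Geometry are the question's identifiers) so the clear applies to fbo's own attachments:
// Bind the G-buffer FBO first, then clear, so that fbo's color and
// depth attachments are the ones being cleared before geometry is drawn.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
graphicsWrapper->render(Geometry);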

What's wrong with my Depth Texture?

At the moment I'm playing with FBOs, which I haven't really mastered. I have no problems with color attachments, but I can't figure out how to use depth textures in my render.
At the moment, I render my scene to an FBO (color + depth), then I draw a textured quad using the depth texture... But it doesn't work: my screen stays black (everything is 0; even if I multiply by 1000 it stays at 0). (Depth testing itself works: if I render the color texture instead of the depth texture, the depth test behaves correctly.)
Rendering loop:
///Render to FBO
glEnable(GL_DEPTH_TEST);
FBO.Bind(); //Do I really need to explain what it does ?
glUseProgram(ShaderProgram->getProgramID());
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
///Setting uniforms etc
mainScene.drawAll();
//draw the quad
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_COLOR_BUFFER_BIT);
glDisable(GL_DEPTH_TEST);
glViewport(0, 0, m_window.getSize().x, m_window.getSize().y);
glUseProgram(shader2D.getProgramID());
quadModel.bindVAO();
glBindTexture(GL_TEXTURE_2D, FBO.DepthTexureID);
glDrawArrays(GL_TRIANGLES, 0, 6);
And FBO creation:
glGenFramebuffers(1, &FBO_ID);
glBindFramebuffer(GL_FRAMEBUFFER, FBO_ID);
glGenTextures(1, &ColorBufferID);
glBindTexture(GL_TEXTURE_2D, ColorBufferID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, ColorBufferID, 0);
glBindTexture(GL_TEXTURE_2D, 0);
glGenTextures(1, &DepthTexture);
glBindTexture(GL_TEXTURE_2D, DepthTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, DepthTexture, 0);
glBindTexture(GL_TEXTURE_2D, 0);
Any idea?
Your depth texture is not mipmap-complete, but you are using the default mipmap minification filter, so sampling it will fail. Just specify a non-mipmapping filter such as GL_NEAREST or GL_LINEAR for that texture.
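For illustration, a minimal sketch of that fix, applied where the depth texture is created (DepthTexture is the identifier from the question):
// The default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR, which
// requires a full mipmap chain; this texture only has level 0.
glBindTexture(GL_TEXTURE_2D, DepthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);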

Depth values and FBO

This is the lovely image I keep seeing in gDebugger as I attempt to implement shadow maps. For reference, I'm using the tutorial at http://fabiensanglard.net/shadowmapping/index.php. I've tried a couple of different things with varying results: mostly either an FBO that has flat depth values (no slight variance indicating weird depth ranges) or this awesome guy. I've tried a huge number of different things but alas, sans complete FBO failure, I mostly come back to this. For reference, here is what gDebugger spits out as my actual depth buffer:
So, with all that in mind, here's my code. Shader code is withheld because I have yet to get something in my FBO that's worth processing.
Here's how I initialize the FBO/Depth tex:
int shadow_width = 1024;
int shadow_height = 1024;
// Try to use a texture depth component
glGenTextures(1, &depth_tex);
glBindTexture(GL_TEXTURE_2D, depth_tex);
// GL_LINEAR does not make sense for depth texture. However, next tutorial shows usage of GL_LINEAR and PCF
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Remove artefact on the edges of the shadowmap
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP );
// No need to force GL_DEPTH_COMPONENT24, drivers usually give you the max precision if available
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24,
shadow_width, shadow_height, 0, GL_DEPTH_COMPONENT,
GL_FLOAT, 0);
// attach the texture to FBO depth attachment point
glBindTexture(GL_TEXTURE_2D, 0);
// create a framebuffer object
glGenFramebuffers(1, &fbo_id);
glBindFramebuffer(GL_FRAMEBUFFER, fbo_id);
// Instruct OpenGL that we won't bind a color texture to the currently bound FBO
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,GL_TEXTURE_2D, depth_tex, 0);
// switch back to window-system-provided framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
And here's my render code:
glLoadIdentity();
float lpos[4] = {0,0,0,1};
float lpos3[4] = {5000,-500,5000,1};
glBindTexture(GL_TEXTURE_2D,0);
glBindFramebuffer(GL_FRAMEBUFFER,fbo_id);
glViewport(0, 0, 1024, 1024);
glClear(GL_DEPTH_BUFFER_BIT);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
setup_mats(lpos[0],lpos[1],lpos[2],lpos3[0],lpos3[1],lpos3[2]);
glCullFace(GL_FRONT);
mdlman.render_models(R_STENCIL);
mdlman.render_model(R_STENCIL,1);
set_tex_mat();
glBindFramebuffer(GL_FRAMEBUFFER,0);
glViewport( 0, 0, (GLsizei)800, (GLsizei)600 );
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
this->r_cam->return_LookAt();
gluPerspective(45, 800.0f/600.0f, 0.5f, 2000.0f);
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(&r_cam->getProjectionMatrix()[0][0]);
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(&r_cam->getViewMatrix()[0][0]);
glCullFace(GL_BACK);
mdlman.render_models(R_NORM);
And here's how I draw objects for the FBO:
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER,models.at(mdlid).t_buff);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,models.at(mdlid).i_buff);
glNormalPointer(GL_FLOAT,sizeof(vert),BUFFER_OFFSET(sizeof(GLfloat)*2));
glVertexPointer(3, GL_FLOAT,sizeof(vert),BUFFER_OFFSET(sizeof(GLfloat)*5));
glDrawElements(GL_TRIANGLES,(models.at(mdlid).f_index.size())*3,GL_UNSIGNED_INT,0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
Some closing notes: the depth buffer image from gDebugger IS from the light's POV. My camera is quaternion-based and uses matrices aligned with OpenGL for its lookAt function (which is used to translate to the light position/view). I can provide the code for that if need be. Everything else is pretty much carbon-copied from the tutorial to make sure I'm not doing or setting up something stupid.

How to copy CUDA generated PBO to Texture with Mipmapping

I'm trying to copy a PBO into a texture with automatic mipmapping enabled, but it seems only the top-level texture is generated (in other words, no mipmapping is occurring).
I'm building a PBO using:
//Generate a buffer ID called a PBO (Pixel Buffer Object)
glGenBuffers(1, pbo);
//Make this the current UNPACK buffer
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, *pbo);
//Allocate data for the buffer. 4-channel 8-bit image
glBufferData(GL_PIXEL_UNPACK_BUFFER, size_tex_data, NULL, GL_DYNAMIC_COPY);
cudaGLRegisterBufferObject(*pbo);
and I'm building a texture using:
// Enable Texturing
glEnable(GL_TEXTURE_2D);
// Generate a texture identifier
glGenTextures(1,textureID);
// Make this the current texture (remember that GL is state-based)
glBindTexture( GL_TEXTURE_2D, *textureID);
// Allocate the texture memory. The last parameter is NULL since we only
// want to allocate memory, not initialize it
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA_FLOAT32_ATI, size_x, size_y, 0,
GL_RGBA, GL_FLOAT, NULL);
// Must set the filter mode, GL_LINEAR enables interpolation when scaling
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
Later in a kernel I modify the PBO using something like:
float4* aryPtr = NULL;
cudaGLMapBufferObject((void**)&aryPtr, *pbo);
//Pixel* gpuPixelsRawPtr = thrust::raw_pointer_cast(&gpuPixels[0]);
//... do some cuda stuff to aryPtr ...
//If we don't unmap the PBO then OpenGL won't be able to use it:
cudaGLUnmapBufferObject(*pbo);
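As an aside, cudaGLRegisterBufferObject and cudaGLMapBufferObject belong to CUDA's legacy OpenGL interop API; a minimal sketch of the same flow with the newer cudaGraphics* API (an alternative, not part of the original code) looks like this:
// Register once, after creating the PBO (replaces cudaGLRegisterBufferObject).
cudaGraphicsResource_t pboRes;
cudaGraphicsGLRegisterBuffer(&pboRes, *pbo, cudaGraphicsRegisterFlagsNone);
// Per frame: map, fetch the device pointer, run the kernel, unmap.
float4* aryPtr = NULL;
size_t numBytes = 0;
cudaGraphicsMapResources(1, &pboRes, 0);
cudaGraphicsResourceGetMappedPointer((void**)&aryPtr, &numBytes, pboRes);
//... do some cuda stuff to aryPtr ...
cudaGraphicsUnmapResources(1, &pboRes, 0);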
Now, before I draw to the screen using the texture generated above, I call:
(note that rtMipmapTex = *textureID and rtResultPBO = *pbo from above)
glEnable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, rtMipmapTex);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, rtResultPBO);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, canvasSize.x, canvasSize.y, GL_RGBA, GL_FLOAT, NULL);
This all works fine and correctly displays the texture. But, if I change that last line to
glTexSubImage2D(GL_TEXTURE_2D, 1, 0, 0, canvasSize.x, canvasSize.y, GL_RGBA, GL_FLOAT, NULL);
which, as far as I understand, should show me the first level instead of the zeroth level texture in the texture pyramid, I just get a blank white texture.
How do I copy the texture from the PBO in such a way that the auto-mipmapping is triggered?
Thanks.
I was being an idiot. The above code works perfectly; the problem was that
glTexSubImage2D(GL_TEXTURE_2D, 1, 0, 0, canvasSize.x, canvasSize.y, GL_RGBA, GL_FLOAT, NULL);
doesn't select the mipmap level from the texture, it selects it from the PBO, which is not mipmapped. Instead you can display a particular mipmap level with:
glTexEnvi(GL_TEXTURE_FILTER_CONTROL,GL_TEXTURE_LOD_BIAS, 4);
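For completeness: since GL_GENERATE_MIPMAP is deprecated in modern OpenGL, an alternative sketch (assuming a GL 3.0+ context; not part of the original answer) is to rebuild the mip chain explicitly after each PBO upload:
// Upload level 0 from the bound PBO, then regenerate all mip levels.
glBindTexture(GL_TEXTURE_2D, rtMipmapTex);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, rtResultPBO);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, canvasSize.x, canvasSize.y, GL_RGBA, GL_FLOAT, NULL);
glGenerateMipmap(GL_TEXTURE_2D); // explicit replacement for GL_GENERATE_MIPMAP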