This should be really simple, but it has consumed many hours of my time, and I have no clue what's going on.
I'm rendering a flat-colored full-screen quad to a texture, then reading back the result with glGetTexImage. It's GPGPU-related, so I want the alpha channel to behave just like the other three. I'm using an FBO, texture format GL_RGBA32F_ARB, and an NVIDIA card on a MacBook Pro running OS X 10.5, if it matters.
I only get back the correct color if the alpha I specify is one; with any other value it appears to be blending with what's already in the framebuffer, even though I've explicitly disabled GL_BLEND. I also tried enabling blending with glBlendFunc(GL_ONE, GL_ZERO), but the end result is the same. Clearing the framebuffer to zero before rendering fixes it, but I want to understand why that's necessary. As a second test, rendering two overlapping quads gives a blended result, when I just want the original 4-channel color back. Surely the solid-color quad should completely overwrite the pixels in the framebuffer? I'm guessing I've misunderstood something fundamental. Thanks.
const size_t res = 16;
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_FALSE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB,
res, res, 0, GL_RGBA, GL_FLOAT, 0);
glBindTexture(GL_TEXTURE_2D, 0);
GLuint fbo;
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,
GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, tex, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
glViewport(0, 0, res, res);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, res, 0, res, -1, 1);
glClearColor(0,0,0,0);
glClear(GL_COLOR_BUFFER_BIT);
//glEnable(GL_BLEND);
//glBlendFunc(GL_ONE, GL_ZERO);
glDisable(GL_BLEND);
glDisable(GL_DEPTH_TEST);
glColor4f(0.2, 0.3, 0.4, 0.5);
for (int i = 0; i < 2; ++i) {
    glBegin(GL_QUADS);
    glVertex2i(0, 0);
    glVertex2i(res, 0);
    glVertex2i(res, res);
    glVertex2i(0, res);
    glEnd();
}
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
std::vector<float> tmp(res*res*4);
glBindTexture(GL_TEXTURE_2D, tex);
glGetTexImage(GL_TEXTURE_2D, 0,
GL_RGBA, GL_FLOAT, &tmp.front());
const float * const x = &tmp.front();
cerr << x[0] << " " << x[1] << " " << x[2] << " " << x[3] << endl;
// prints 0.3 0.45 0.6 0.75
glDeleteTextures(1, &tex);
glDeleteFramebuffersEXT(1, &fbo);
Not really a good answer; however, some things to note:
What you're observing doesn't really look like blending. For one, your back buffer is initially rgba = 0, so alpha-blending against it would give 0, not the 0.2 0.3 0.4 0.5 you may observe.
My inclination was that you had somehow bound the same texture buffer both as a texture and as the framebuffer attachment. This is undefined in the spec (section 4.4.3). In the code snippet you provide you do call glBindTexture(GL_TEXTURE_2D, 0), which should make sure that's not the case... I'll leave it here in case you've missed it.
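For reference, the ordering I'd use to stay safe looks roughly like this (a sketch only, reusing the tex and fbo names from your snippet; dst stands for any buffer of res*res*4 floats):
// Sketch: never sample from the texture while it is the FBO's draw target.
glBindTexture(GL_TEXTURE_2D, 0);                // nothing bound for sampling
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);  // 'tex' is only a render target now
// ... draw the quad(s) ...
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);    // done rendering into 'tex'
glBindTexture(GL_TEXTURE_2D, tex);              // now it is safe to read it back
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, dst);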
Related
I have rendered a depth map to a framebuffer in the following way:
// the framebuffer
glGenFramebuffers(1, &depthMapFBO);
// completion: attaching a texture
glGenTextures(1, &depthMap);
glBindTexture(GL_TEXTURE_2D, depthMap);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, SCR_WIDTH, SCR_HEIGHT, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL); // i.e. allocate memory, to be filled later at rendering
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
// bind framebuffer and attach depth texture
glBindFramebuffer(GL_FRAMEBUFFER, depthMapFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthMap, 0);
glDrawBuffer(GL_NONE); // i.e. explicitly tell we want no color data
glReadBuffer(GL_NONE); // i.e. explicitly tell we want no color data
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Note the use of GL_DEPTH_COMPONENT32F because I want high precision.
Now I want to put the values stored in the depth buffer of the framebuffer into an array, preserving the precision. How do I do that? Here is what I had in mind:
glBindFramebuffer(GL_FRAMEBUFFER, depthMapFBO);
glClear(GL_DEPTH_BUFFER_BIT);
[ render the scene to framebuffer]
GLfloat *d = new GLfloat[conf::SCR_WIDTH * conf::SCR_HEIGHT];
glReadPixels(0, 0, conf::SCR_WIDTH, conf::SCR_HEIGHT, GL_DEPTH_COMPONENT, GL_FLOAT, d);
for (int i{ 0 }; i < conf::SCR_HEIGHT; ++i) {
    for (int j{ 0 }; j < conf::SCR_WIDTH; ++j) {
        std::cout << d[i * conf::SCR_WIDTH + j] << " ";
    }
    std::cout << std::endl;
}
However, this always prints 0.956376. Why? I know I still have to re-linearize the depths... but why is a constant value always printed, and how can I fix this? Furthermore, is my approach correct with regard to lossless retrieval of the information? Thanks in advance.
The same thing happens with:
GLfloat * d= new GLfloat[conf::SCR_WIDTH * conf::SCR_HEIGHT * 4];
glBindTexture(GL_TEXTURE_2D, depthMap);
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, d);
I'm using OpenGL to draw the graphics for a simple game like Space Invaders. So far I have it rendering moving meteorites and a GIF file quite nicely, so I get the basics. But I just can't get the framebuffer working properly, which I intend to render a bitmap font to.
The first function will be called inside the render function only when the score changes; this produces a texture containing the score characters. The second function will draw the texture containing the bitmap font characters to the screen every time the render function is called. I thought this would be a more efficient way to draw the score. Right now I'm just trying to get it drawing a square using the framebuffer, but it seems that the coordinates range from -1 to 0. I thought the coordinates for a texture went from 0 to 1? I commented which vertex affects which corner of the square and it seems to be wrong.
void Score::UpdateScoreTexture(int* success)
{
int length = 8;
char* chars = LongToNumberDigits(count, &length, 0);
glDeleteTextures(1, &textureScore);//last texture containing previous score deleted to make room for new score
glGenTextures(1, &textureScore);
GLuint frameBufferScore;
glGenTextures(1, &textureScore);
glBindTexture(GL_TEXTURE_2D, textureScore);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glGenFramebuffers(1, &frameBufferScore);
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferScore);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureScore, 0);
GLenum status;
status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
std::cout << "status is: ";
std::cout << "\n";
switch (status)
{
case GL_FRAMEBUFFER_COMPLETE:
std::cout << "good";
break;
default:
PrintGLStatus(status);
while (1 == 1);
}
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferScore);
glBegin(GL_POLYGON);
glVertex3f(-1, -1, 0.0);//appears to be the bottom left,
glVertex3f(0, -1, 0.0);//appears to be the bottom right
glVertex3f(0, 0, 0.0);//appears to be the top right
glVertex3f(-1, 0, 0.0);//appears to be the top left
glEnd();
glDisable(GL_TEXTURE_2D);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D,0);
glDeleteFramebuffers(1, &frameBufferScore);
//glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, chars2);
}
void Score::DrawScore(void)
{
glEnable(GL_TEXTURE_2D);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glBindTexture(GL_TEXTURE_2D, textureScore);
glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0); glVertex3f(0.7, 0.925, 0.0);
glTexCoord2f(0.0, 1.0); glVertex3f(0.7, 0.975, 0.0);
glTexCoord2f(1.0, 1.0); glVertex3f(0.975, 0.975, 0.0);
glTexCoord2f(1.0, 0.0); glVertex3f(0.975, 0.925, 0.0);
glEnd();
glFlush();
glDisable(GL_TEXTURE_2D);
}
Any ideas where I'm going wrong?
You have not set the viewport with glViewport; this may give you problems.
Another possibility is that you have the matrices set to something other than identity.
Ensure that you have reset the model-view and projection matrices to identity (or whatever you want them to be) before glBegin(GL_POLYGON) in UpdateScoreTexture() (you may wish to push the matrices onto the stack before you make changes):
glViewport(0, 0, framebufferWidth, framebufferHeight);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
Then put them back at the end of the function:
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
I have created the shadow map. However, it has two problems:
1. The shadow only comes into the picture when I change the model matrix, i.e. initially there are no shadows, but when I press a key to move the figure (that is, when the model matrix changes), the shadow appears.
2. Old renders leave a long trail on the texture attached to the framebuffer.
Can anyone shed any light on this?
This is a screenshot of the problem.
Edit: Code here:
void generateShadowTex()
{
//Calculate final lighting properties
glm::vec4 a_f=light_ambient*mat_ambient;
glm::vec4 d_f=light_diffuse*mat_diffuse;
glm::vec4 s_f=light_specular*mat_specular;
int counter=0;
glClear(GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST); // need depth test to correctly draw 3D objects
glClearColor(0,0,0,1);
//glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
if(wframe)
glPolygonMode(GL_FRONT_AND_BACK,GL_LINE);
else
glPolygonMode(GL_FRONT_AND_BACK,GL_FILL);
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D,depthTex);
// Math here for MVP manipulation
// Draw elements
if(a<17 || a==18)
glDrawElements(GL_QUADS, masterNumberIndices[a], GL_UNSIGNED_INT, (char*) NULL+0);
else
glDrawElements(GL_TRIANGLES, masterNumberIndices[a], GL_UNSIGNED_INT, (char*) NULL+0);
glBindVertexArrayAPPLE(0);
}
glUseProgram(0);
}
void Init_FBO()
{
GLfloat border[] = {1.0f, 0.0f, 0.0f, 0.0f};
//glActiveTexture(GL_TEXTURE3);
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24,900,900, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, border);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LESS);
glBindTexture(GL_TEXTURE_2D,0);
glGenFramebuffers(1, &shadowFBO);
glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0); // go back to the default framebuffer
// check FBO status
GLenum FBOstatus = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER);
if(FBOstatus != GL_FRAMEBUFFER_COMPLETE)
{
printf("GL_FRAMEBUFFER_COMPLETE failed, CANNOT use FBO\n");
}
else
{
printf("Frame Buffer Done Succesfully\n");
}
}
void display()
{
glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);
generateShadowTex();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
generateScene();
}
You need to enable the depth buffer before you clear it. From the OpenGL specification:
If a buffer is not present, then a glClear directed at that buffer has no effect.
And:
GL_DEPTH_TEST
If enabled, do depth comparisons and update the depth buffer. Note that even if the depth buffer exists and the depth mask is non-zero, the depth buffer is not updated if the depth test is disabled.
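Concretely, that means the shadow pass should enable the depth test (and keep the depth mask on) before clearing and drawing; a minimal sketch, using the shadowFBO from your code, with the light-space rendering elided:
glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);
glEnable(GL_DEPTH_TEST);      // per the quote above, depth writes need the test enabled
glDepthMask(GL_TRUE);         // and the depth mask must allow writes
glClear(GL_DEPTH_BUFFER_BIT); // wipe the previous frame's depths so no trail remains
// ... render the scene from the light's point of view ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);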
As to the first part of your question, I can only guess what the problem might be, but you're probably not initializing the model matrix correctly.
I have some code in OpenGL to render a YUV image onto an OpenGL viewport. The program works without a problem on NVIDIA cards, but it generates an error when running on the Intel HD 3000, which sadly is the target machine. The point where the error is generated is marked in the code.
The shader programs are:
// Vertex Shader
#version 120
void main() {
gl_TexCoord[0] = gl_MultiTexCoord0;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
// fragment shader
#version 120
uniform sampler2D texY;
uniform sampler2D texU;
uniform sampler2D texV;
void main() {
vec4 color;
float y = texture2D(texY, gl_TexCoord[0].st).r;
float u = texture2D(texU, gl_TexCoord[0].st).r;
float v = texture2D(texV, gl_TexCoord[0].st).r;
color.r = (1.164 * (y - 0.0625)) + (1.596 * (v - 0.5));
color.g = (1.164 * (y - 0.0625)) - (0.391 * (u - 0.5)) - (0.813 * (v - 0.5));
color.b = (1.164 * (y - 0.0625)) + (2.018 * (u - 0.5));
color.a = 1.0;
gl_FragColor = color;
}
Then I run the program like this:
GLuint textures[3];
glGenTextures(3, textures);
glBindTexture(GL_TEXTURE_2D, textures[YTEX]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, textures[UTEX]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, textures[VTEX]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, 0);
GLsizei size = width * height;
GLvoid *y = yuv_buffer;
GLvoid *u = (GLubyte *)y + size;
GLvoid *v = (GLubyte *)u + (size >> 2);
glUseProgram(program_id);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textures[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0, GL_LUMINANCE,
GL_UNSIGNED_BYTE, y);
glUniform1i(glGetUniformLocation(program_id, "texY"), 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, textures[1]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width >> 1, height >> 1, 0,
GL_LUMINANCE, GL_UNSIGNED_BYTE, u);
glUniform1i(glGetUniformLocation(program_id, "texU"), 1);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, textures[2]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width >> 1, height >> 1, 0,
GL_LUMINANCE, GL_UNSIGNED_BYTE, v);
glUniform1i(glGetUniformLocation(program_id, "texV"), 2);
glBegin(GL_QUADS);
glTexCoord2f(texLeft, texTop);
glVertex2i(left, top);
glTexCoord2f(texLeft, texBottom);
glVertex2i(left, bottom);
glTexCoord2f(texRight, texBottom);
glVertex2i(right, bottom);
glTexCoord2f(texRight, texTop);
glVertex2i(right, top);
glEnd();
// glGetError() returns 0x506 here
glBindTexture(GL_TEXTURE_2D, 0);
glActiveTexture(GL_TEXTURE0);
glUseProgram(0);
Update: since the error happens with framebuffers, I discovered how they are used.
When the program is instantiated, a framebuffer is created like this:
glViewport(0, 0, (GLint)width, (GLint)height);
glGenFramebuffers(1, &fbo_id);
glGenTextures(1, &fbo_texture);
glGenRenderbuffers(1, &rbo_id);
glBindTexture(GL_TEXTURE_2D, fbo_texture);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, rbo_id);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, width, height);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo_id);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
GL_TEXTURE_2D, fbo_texture, 0);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
GL_RENDERBUFFER_EXT, rbo_id);
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
glBindFramebuffer(GL_FRAMEBUFFER_EXT, 0);
glPushAttrib(GL_TEXTURE_BIT);
glBindTexture(GL_TEXTURE_2D, m_frameTexture->texture());
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glPopAttrib();
The YUV image comes split into tiles, which are assembled by rendering into this FBO. Whenever a frame starts, this is performed:
glBindFramebuffer(GL_FRAMEBUFFER_EXT, 0);
glDrawBuffer(GL_BACK);
glViewport(0, 0, (GLint)width, (GLint)height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, (double)width, 0.0, (double)height, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glBindFramebuffer(GL_FRAMEBUFFER_EXT, fbo_id);
Then the code above is executed, and after all the tiles have been assembled together:
glBindFramebuffer(GL_FRAMEBUFFER_EXT, 0);
glPushAttrib(GL_VIEWPORT_BIT | GL_TEXTURE_BIT | GL_ENABLE_BIT);
glViewport(0, 0, width, height);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0.0, (double)width, 0.0, (double)height, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
glDisable(GL_DEPTH_TEST);
glBindTexture(GL_TEXTURE_2D, fbo_texture);
glBegin(GL_QUADS);
glTexCoord2i(0, 0);
glVertex2f(renderLeft, renderTop);
glTexCoord2i(0, 1);
glVertex2f(renderLeft, renderTop + renderHeight);
glTexCoord2i(1, 1);
glVertex2f(renderLeft + renderWidth, renderTop + renderHeight);
glTexCoord2i(1, 0);
glVertex2f(renderLeft + renderWidth, renderTop);
glEnd();
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glPopAttrib();
What's the value of status after:
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
If the value is anything other than GL_FRAMEBUFFER_COMPLETE, OpenGL will probably choke when it tries to read from the FBO.
The glCheckFramebufferStatus docs describe the other (error) values it can return and what causes them.
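For example, a check right after the attachments are set up might look like this (just a sketch, using the same EXT entry points as your setup code):
GLenum fboStatus = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (fboStatus != GL_FRAMEBUFFER_COMPLETE_EXT)
{
    // Anything other than "complete" means reads/draws on this FBO will fail.
    printf("FBO incomplete, status = 0x%04X\n", fboStatus);
}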
Of particular interest might be:
If the currently bound framebuffer is not framebuffer complete, then
it is an error to attempt to use the framebuffer for writing or
reading. This means that rendering commands (glDrawArrays and
glDrawElements) as well as commands that read the framebuffer
(glReadPixels, glCopyTexImage2D, and glCopyTexSubImage2D) will
generate the error GL_INVALID_FRAMEBUFFER_OPERATION if called while
the framebuffer is not framebuffer complete.
(emphasis mine)
Edit, based on your comments:
To paraphrase the docs wrt GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT:
Not all framebuffer attachment points are framebuffer attachment complete.
This means that one of the following is happening:
At least one attachment point with a renderbuffer or texture attached has its attached object no longer in existence or has an attached image with a width or height of zero,
The color attachment point has a non-color-renderable image attached. Color-renderable formats include GL_RGBA4, GL_RGB5_A1, and GL_RGB565.
The depth attachment point has a non-depth-renderable image attached. GL_DEPTH_COMPONENT16 is the only depth-renderable format.
The stencil attachment point has a non-stencil-renderable image attached. GL_STENCIL_INDEX8 is the only stencil-renderable format.
We can rule out the last two bullets, because it doesn't appear that you're using depth or stencil attachments. That leaves two calls to examine:
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, fbo_texture, 0);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, rbo_id);
From the opengl.org wiki on FBOs:
You get GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT when any of the attachments are 'incomplete'. Criteria for completeness are:
The source object for the image still exists and has the same type it was attached with.
The image has a non-zero width and height.
The layer for 3D or array textures attachments is less than the depth of the texture.
The image's format must match the attachment point's requirements, as defined above. Color-renderable formats for color attachments, etc.
The wiki says of GL_COLOR_ATTACHMENTi:
These attachment points can only have images bound to them with
color-renderable formats. All compressed image formats are not
color-renderable, and thus cannot be attached to an FBO.
Double check that the fbo_texture and rbo_id are still valid, and that their height/width aren't 0. Finally, it could be fbo_texture's format. You've got it set to GL_RGBA8, but the docs say valid options include GL_RGBA4, GL_RGB5_A1, and GL_RGB565. I'm not sure whether or not that excludes all other formats (like your GL_RGBA8). The wiki seems to suggest that any non-compressed format should work. Try switching it to GL_RGBA4, and see if that works out.
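One way to double-check the attachment (a sketch, not tied to your exact code) is to ask the driver what it actually stored for level 0 of fbo_texture:
GLint w = 0, h = 0, ifmt = 0;
glBindTexture(GL_TEXTURE_2D, fbo_texture);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &w);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &h);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &ifmt);
printf("color attachment: %dx%d, internal format 0x%04X\n", w, h, ifmt); // both sizes should be non-zero
glBindTexture(GL_TEXTURE_2D, 0);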
glGetError error codes "stick" and are not automatically cleared. If something at the beginning of your program generates an OpenGL error and you check for an error code 1000 OpenGL calls later, the error will still be there.
So if you want to understand what's REALLY going on, check for errors after every OpenGL call, or call glGetError in a loop until all error codes have been returned (as the OpenGL documentation suggests).
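A sketch of that drain loop, placed just before the call you actually want to test:
// Clear any stale error codes left over from earlier calls.
while (glGetError() != GL_NO_ERROR)
    ;
// ... the suspect OpenGL call goes here ...
GLenum err = glGetError();
if (err != GL_NO_ERROR)
    printf("GL error from this call: 0x%04X\n", err);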
I solved the problem. It was an extensions problem which made the renderbuffer object disappear. I basically changed this:
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, rbo_id);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, width, height);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo_id);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
GL_TEXTURE_2D, fbo_texture, 0);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
GL_RENDERBUFFER_EXT, rbo_id);
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
to this:
glBindRenderbuffer(GL_RENDERBUFFER, rbo_id);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glBindFramebuffer(GL_FRAMEBUFFER, fbo_id);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D, fbo_texture, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
GL_RENDERBUFFER, rbo_id);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
and then it worked. I still wonder exactly what the problem was, but so far I am happy with the result. Special thanks to @luke, whose answer helped locate the exact point of the problem.
Which command exactly raises the error? Try replacing GL_QUADS with GL_TRIANGLE_FAN.
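For the textured quad in your tile code, that swap would look roughly like this (sketch only; same vertices and winding, just a different primitive):
glBegin(GL_TRIANGLE_FAN); // a convex quad's four corners, taken in order, form a valid fan
glTexCoord2f(texLeft, texTop);     glVertex2i(left, top);
glTexCoord2f(texLeft, texBottom);  glVertex2i(left, bottom);
glTexCoord2f(texRight, texBottom); glVertex2i(right, bottom);
glTexCoord2f(texRight, texTop);    glVertex2i(right, top);
glEnd();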
My OpenGL application, which was working fine on an ATI card, stopped working when I put in an NVIDIA Quadro card. Textures simply don't work at all! I've reduced my program to a single display function which doesn't work:
void glutDispCallback()
{
//ALLOCATE TEXTURE
unsigned char * noise = new unsigned char [32 * 32 * 3];
memset(noise, 255, 32*32*3);
glEnable(GL_TEXTURE_2D);
GLuint textureID;
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_BYTE, noise);
delete [] noise;
//DRAW
glDrawBuffer(GL_BACK);
glViewport(0, 0, 1024, 1024);
setOrthographicProjection();
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
glLoadIdentity();
glDisable(GL_BLEND);
glDisable(GL_LIGHTING);
glBindTexture(GL_TEXTURE_2D, textureID);
glColor4f(0,0,1,0);
glBegin(GL_QUADS);
glTexCoord2f(0,0);
glVertex2f(-0.4,-0.4);
glTexCoord2f(0, 1);
glVertex2f(-0.4, 0.4);
glTexCoord2f(1, 1);
glVertex2f(0.4, 0.4);
glTexCoord2f(1,0);
glVertex2f(0.4,-0.4);
glEnd();
glutSwapBuffers();
//CLEANUP
GL_ERROR();
glDeleteTextures(1, &textureID);
}
The result is a blue quad (or whatever is specified by glColor4f()), not the white quad the texture actually is. I have followed the FAQ on the OpenGL site. I have disabled blending in case the texture was being blended out. I have disabled lighting. I have looked through glGetError() - no errors. I've also set glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE); and GL_DECAL. Same result. I've also tried different polygon windings - CW and CCW.
Anyone else encounter this?
Can you try using GL_REPLACE in glTexEnvi? It could be a bug in the NV driver.
Your code is correct and does what it should.
memset(noise, 255, 32*32*3); makes the texture white, but you call glColor4f(0,0,1,0); so the final color will be (1,1,1)*(0,0,1) = (0,0,1) = blue.
What is the behavior you would like to have?
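If a plain white textured quad is what you're after, either of these (a sketch) would do it before the glBegin:
// Option 1: keep GL_MODULATE, but modulate by white so the texel color passes through unchanged.
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
// Option 2: take the texel color as-is and ignore the current color entirely.
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);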
I found the error. Somewhere else in my code I had initialized a GL_TEXTURE_3D object and had not called glDisable(GL_TEXTURE_3D);
Even though I had called glBindTexture(GL_TEXTURE_2D, textureID), I expected it to bind a 2D texture as the current texture and use that, as this code had always worked on ATI cards. Apparently the NVIDIA driver wasn't doing that - it was using that 3D texture for some reason. So adding glDisable(GL_TEXTURE_3D) fixed the problem and everything works as expected.
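In other words, the fix amounts to something like this right before drawing the quad (a sketch with the same names as above):
// With the fixed-function pipeline, an enabled GL_TEXTURE_3D target takes priority
// over GL_TEXTURE_2D on the same unit, so make sure it is off here.
glDisable(GL_TEXTURE_3D);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureID);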
Thanks all who tried to help.