OpenGL Immediate Mode texture draws as black box (in FLTK provided context) - c++

I'm trying to display a 128x128 array of floats (0.0 -> 1.0) as a texture of green on black within a legacy FLTK application. The rest of the application uses immediate mode for all its drawing, so I don't want to change the way it works. From reading tutorials, I've come up with the following calls, which should be sufficient to load the texture into graphics memory and draw it to the current context (managed by FLTK). However, all I can get it to do is draw a black square where the texture should be, drawing over anything else that has been drawn there (GL_LINES).
Is there anything basic that I'm missing here? Some initialization step that I've neglected? Thanks in advance. Here is the code that I'm trying to get running:
// Part of class init
glGenTextures(1, &tex);
// As far as I can tell, this is optional with FLTK having already
// set these parameters. Omitting them does not change output.
glPushMatrix();
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// This is the meat of the problem. reports no GL errors.
glEnable(GL_TEXTURE_2D);
glShadeModel(GL_FLAT);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 128, 128, 0, GL_GREEN, GL_FLOAT, &image);
glBegin(GL_QUADS);
glTexCoord2d(0,0); glVertex2f(-0.9,-0.9);
glTexCoord2d(0,1); glVertex2f(-0.9, 0.9);
glTexCoord2d(1,0); glVertex2f( 0.9,-0.9);
glTexCoord2d(1,1); glVertex2f( 0.9, 0.9);
glEnd();
glDisable(GL_TEXTURE_2D);
// Again, busywork.
glPopMatrix();

Related

openGL Texture mapping - c

I'm trying to display the texture on the window using OpenGL. However, the texture is only mapped to the bottom left of my window and it gets cut off (see the output screenshot).
Here is my code:
Texture:
GLuint textureID[1];
GLubyte Image[1024*768*4];
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1,textureID);
glBindTexture(GL_TEXTURE_2D, textureID[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, 3, 1024, 768, 0, GL_RGBA, GL_UNSIGNED_BYTE, Image);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
Quad:
glPushMatrix ();
glTranslatef(0, 0.0, -1.1);
glMaterialf(GL_FRONT, GL_SHININESS, 30.0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureID[0]);
glViewport(-511,-383,1025,768);
glBegin(GL_QUADS);
glTexCoord2d(0.0, 0.0); glVertex2f(0.0, 0.0);
glTexCoord2d(1.0, 0.0); glVertex2f(1024.0, 0.0);
glTexCoord2d(1.0, 1.0); glVertex2f(1024.0, 768.0);
glTexCoord2d(0.0, 1.0); glVertex2f(0.0, 768.0);
glEnd();
glDisable(GL_TEXTURE_2D);
glPopMatrix ();
glFlush ();
I'm trying to map the texture onto my window. Both my window and the texture have a size of 1024x768. What did I do wrong? If I comment out glViewport, the texture is mapped to the top right instead.
The reason why your image gets cut off is the values supplied to glViewport. This function specifies which area of the screen the rendering should go to. So if you set the x-value to -511 and the width to 1025, then the drawing will happen from pixel -511 to 514, which is exactly what you see.
What you actually need to get the image to your desired position is a projection (most probably an orthographic one) that maps your input coordinates to the appropriate normalized device coordinates (NDC). Without a projection, OpenGL works in these NDC coordinates, which range from -1 to 1 on each axis, and not, as you assumed, in pixel coordinates.
Your viewport parameters don't do what you want (negative values for x and y). It's also likely that you didn't specify a projection / modelview matrix pair that maps local coordinates to pixels (at least not in the code shown), even though the coordinates you pass to glVertex look like you want to address pixels.
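A minimal sketch of such a setup, assuming the 1024x768 window from the question and hypothetical winWidth/winHeight variables holding the window size, might look like this:
// Hypothetical example: map pixel coordinates to the window through an
// orthographic projection, then draw the quad in pixel units.
glViewport(0, 0, winWidth, winHeight);              // render to the whole window
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, winWidth, 0.0, winHeight, -1.0, 1.0);  // pixel coordinates -> NDC
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureID[0]);
glBegin(GL_QUADS);
glTexCoord2d(0.0, 0.0); glVertex2f(0.0f, 0.0f);
glTexCoord2d(1.0, 0.0); glVertex2f(1024.0f, 0.0f);
glTexCoord2d(1.0, 1.0); glVertex2f(1024.0f, 768.0f);
glTexCoord2d(0.0, 1.0); glVertex2f(0.0f, 768.0f);
glEnd();
glDisable(GL_TEXTURE_2D);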

Coordinates wrong when drawing to texture in framebuffer and then rendering texture to screen?

I'm using OpenGL to draw the graphics for a simple game like Space Invaders. So far I have it rendering moving meteorites and a GIF file quite nicely; I get the basics. But I just can't get the framebuffer working properly, which I intend to render a bitmap font to.
The first function will be called inside the render function only when the score changes; it produces a texture containing the score characters. The second function draws the texture containing the bitmap font characters to the screen every time the render function is called. I thought this would be a more efficient way to draw the score. Right now I'm just trying to get it drawing a square using the framebuffer, but it seems that the coordinates range from -1 to 0. I thought the coordinates for a texture went from 0 to 1? I commented which vertex affects which corner of the square, and it seems to be wrong.
void Score::UpdateScoreTexture(int* success)
{
int length = 8;
char* chars = LongToNumberDigits(count, &length, 0);
glDeleteTextures(1, &textureScore);//last texture containing previous score deleted to make room for new score
glGenTextures(1, &textureScore);
GLuint frameBufferScore;
glGenTextures(1, &textureScore);
glBindTexture(GL_TEXTURE_2D, textureScore);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glGenFramebuffers(1, &frameBufferScore);
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferScore);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureScore, 0);
GLenum status;
status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
std::cout << "status is: ";
std::cout << "\n";
switch (status)
{
case GL_FRAMEBUFFER_COMPLETE:
std::cout << "good";
break;
default:
PrintGLStatus(status);
while (1 == 1);
}
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferScore);
glBegin(GL_POLYGON);
glVertex3f(-1, -1, 0.0);//appears to be the bottom left,
glVertex3f(0, -1, 0.0);//appears to be the bottom right
glVertex3f(0, 0, 0.0);//appears to be the top right
glVertex3f(-1, 0, 0.0);//appears to be the top left
glEnd();
glDisable(GL_TEXTURE_2D);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D,0);
glDeleteFramebuffers(1, &frameBufferScore);
//glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, chars2);
}
void Score::DrawScore(void)
{
glEnable(GL_TEXTURE_2D);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glBindTexture(GL_TEXTURE_2D, textureScore);
glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0); glVertex3f(0.7, 0.925, 0.0);
glTexCoord2f(0.0, 1.0); glVertex3f(0.7, 0.975, 0.0);
glTexCoord2f(1.0, 1.0); glVertex3f(0.975, 0.975, 0.0);
glTexCoord2f(1.0, 0.0); glVertex3f(0.975, 0.925, 0.0);
glEnd();
glFlush();
glDisable(GL_TEXTURE_2D);
}
Any ideas where I'm going wrong?
You have not set the viewport with glViewport; this may give you problems.
Another possibility is that you have the matrices set to something other than identity.
Ensure that you have reset the model-view and projection matrices to identity (or whatever you want them to be) before glBegin(GL_POLYGON) in UpdateScoreTexture() (you may wish to push the matrices onto the stack before making changes):
glViewport(0, 0, framebufferWidth, framebufferHeight);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
Then put them back at the end of the function:
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
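Put together, a minimal sketch of the framebuffer pass inside UpdateScoreTexture() might look like the following, assuming the 256x256 texture size from the question and hypothetical windowWidth/windowHeight variables for restoring the on-screen viewport (the actual drawing of the font characters is omitted):
// After the framebuffer is bound and reported complete:
glViewport(0, 0, 256, 256);              // match the attached texture size
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();                         // identity projection: -1..1 covers the texture
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glBegin(GL_POLYGON);                      // (-1,-1)..(1,1) now spans the whole texture
glVertex3f(-1.0f, -1.0f, 0.0f);
glVertex3f( 1.0f, -1.0f, 0.0f);
glVertex3f( 1.0f,  1.0f, 0.0f);
glVertex3f(-1.0f,  1.0f, 0.0f);
glEnd();
// Restore state before returning to normal on-screen rendering.
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glViewport(0, 0, windowWidth, windowHeight);
glBindFramebuffer(GL_FRAMEBUFFER, 0);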

glTexImage2D multiple images

I'm drawing an image from OpenCV full screen. It is a large image at 60fps, so I needed a faster way than the OpenCV GUI.
Using OpenGL I do:
void paintGL() {
glClear (GL_COLOR_BUFFER_BIT);
glClearColor (0.0,0.0,0.0,1.0);
glDisable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0,width,height,0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, the_image_data );
glBegin(GL_QUADS);
glTexCoord2i(0,0); glVertex2i(0,height);
glTexCoord2i(0,1); glVertex2i(0,0);
glTexCoord2i(1,1); glVertex2i(width,0);
glTexCoord2i(1,0); glVertex2i(width,height);
glEnd();
glDisable(GL_TEXTURE_2D);
}
Now I want to draw two images side by side, using the OpenGL hardware to scale them.
I can shrink the image by changing the quad size, but I don't understand how to load two images with glTexImage2D(), since there is no handle or ID associated with the image.
The reason why you cannot see how to add another texture is that you are missing two critical functions in the code that you posted: glGenTextures and glBindTexture. The first generates texture objects in the OpenGL context (places for textures to exist on the graphics hardware). The second "selects" one of those texture objects for subsequent calls (glTex...) to affect.
First of all, functions like glTexParameteri and glTexImage2D do not need to be called again on every pass of the rendering loop... but I guess in your case you have to re-upload the image data because the images are always changing. By default, in your code, the texture object used is the zeroth object (a reserved one for the default). You should create two texture objects and bind them one after the other to achieve the desired result:
GLuint tex_obj[2]; //create two names for the texture (should not be global variables, but just for sake of this example).
void initGL() {
glClearColor (0.0,0.0,0.0,1.0);
glDisable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0,width,height,0);
glEnable(GL_TEXTURE_2D);
glGenTextures(2,tex_obj); //generate 2 texture objects with names tex_obj[0] and [1]
glBindTexture(GL_TEXTURE_2D, tex_obj[0]); //bind the first texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); //set its parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glBindTexture(GL_TEXTURE_2D, tex_obj[1]); //bind the second texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); //set its parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
}
void paintGL() {
glClear (GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glBindTexture(GL_TEXTURE_2D,tex_obj[0]); //bind the first texture.
//then load it into the graphics hardware:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, width0, height0, 0, GL_RGB, GL_UNSIGNED_BYTE, the_image_data0 );
glBegin(GL_QUADS);
glTexCoord2i(0,0); glVertex2i(0,height); //you should probably change these vertices.
glTexCoord2i(0,1); glVertex2i(0,0);
glTexCoord2i(1,1); glVertex2i(width,0);
glTexCoord2i(1,0); glVertex2i(width,height);
glEnd();
glBindTexture(GL_TEXTURE_2D, tex_obj[1]); //bind the second texture.
//then load it into the graphics hardware:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, width1, height1, 0, GL_RGB, GL_UNSIGNED_BYTE, the_image_data1 );
glBegin(GL_QUADS);
glTexCoord2i(0,0); glVertex2i(0,height); //you should probably change these vertices.
glTexCoord2i(0,1); glVertex2i(0,0);
glTexCoord2i(1,1); glVertex2i(width,0);
glTexCoord2i(1,0); glVertex2i(width,height);
glEnd();
}
That is basically how it is done. But I have to warn you that my knowledge of OpenGL is a bit outdated, so there might be more efficient ways to do this (I know at least that glBegin/glEnd is deprecated in modern OpenGL, replaced by VBOs).
Remember that OpenGL is a state machine - you put it into a state, give it a command, and subsequent commands use whatever state has been set.
One nice thing about textures is that you can do things outside the paint call - so if your image processing step generates image 1, you can load it into the card at that point.
glBindTexture(GL_TEXTURE_2D,tex_obj[1]); // select image 1 slot
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, width1, height1, 0, GL_RGB, GL_UNSIGNED_BYTE, the_image_data1 ); // load it into the graphics card memory
And then recall it in the paint call
glBindTexture(GL_TEXTURE_2D,tex_obj[1]); // select pre loaded image 1
glBegin(GL_QUADS); // draw it
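If the image dimensions never change between frames, you can also allocate each texture once and then update only its pixel data every frame with glTexSubImage2D, which avoids re-specifying the storage. A minimal sketch, reusing the tex_obj, width1, height1 and the_image_data1 names from above:
// One-time allocation (e.g. in initGL): reserve storage, no data yet.
glBindTexture(GL_TEXTURE_2D, tex_obj[1]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width1, height1, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
// Every frame: overwrite the existing storage with the new pixels.
glBindTexture(GL_TEXTURE_2D, tex_obj[1]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width1, height1, GL_RGB, GL_UNSIGNED_BYTE, the_image_data1);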

Transparent textures in OpenGL do not replace quad alpha

I am trying to draw a partly-transparent texture above a coloured quad, so that the colour shines through the transparent part, making an outline. This is the code:
// Setup
glEnable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
// (...)
// Get image bits in object 't'
glBindTexture(GL_TEXTURE_2D, textureIDs[idx]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexImage2D(GL_TEXTURE_2D, 0, 3, t.width(), t.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, t.bits());
// (...)
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
// Underlying coloured quad
glColor4d(0.0, 1.0, 0.0, 1.0);
glBindTexture(GL_TEXTURE_2D, 0);
glBegin(GL_QUADS);
glVertex3d(coords11.first, coords11.second, -0.001);
glVertex3d(coords12.first, coords12.second, -0.001);
glVertex3d(coords21.first, coords21.second, -0.001);
glVertex3d(coords22.first, coords22.second, -0.001);
glEnd();
// Textured quad
glColor4d(1.0, 1.0, 1.0, 0.5);
glBindTexture(GL_TEXTURE_2D, textureIDs[castleTextureIndices[0]]);
glBegin(GL_QUADS);
glTexCoord2d(0.0, 0.0); glVertex3d(coords11.first, coords11.second, 0);
glTexCoord2d(1.0, 0.0); glVertex3d(coords12.first, coords12.second, 0);
glTexCoord2d(1.0, 1.0); glVertex3d(coords21.first, coords21.second, 0);
glTexCoord2d(0.0, 1.0); glVertex3d(coords22.first, coords22.second, 0);
glEnd();
The texture is black except where it has the outline of a castle; that part is transparent (and white). The expected behaviour, therefore, is a black rectangle with a green outline. What I get instead is a greenish rectangle with a white outline:
Screenshot
That's with alpha on the textured quad set to 0.5. If instead I set it to 1.0, I get back the texture with no transparency, as in (oh well, can't post the second screenshot as a link; it's at s330.photobucket.com/albums/l412/TWBWar/?action=view&current=replace10.png). With alpha of 0, I get a green rectangle. It therefore seems to me that the textured quad is being blended using the alpha value set by glColor4d, instead of the value from the texture, which is what I expected from GL_REPLACE. (I experimented with GL_DECAL and GL_MODULATE as well, but I wasn't able to get the behaviour I wanted.) Can anyone tell me how to make OpenGL use the texture's alpha, rather than the quad's, for blending?
Get ready for a trip down memory lane.
glTexImage2D(GL_TEXTURE_2D, 0, 3, ..., ..., 0, GL_RGBA, GL_UNSIGNED_BYTE, ...);
That 3 in there is your problem. The right-hand arguments (GL_RGBA + GL_UNSIGNED_BYTE) describe the format of your source data; the destination format is the third argument (i.e. what GL will store your texture as).
As it happens, when GL 1.0 was created, texturing was somewhat simpler, so that parameter was only required to specify the number of channels you wanted for your final texture, with the mapping
1=GL_LUMINANCE
2=GL_LUMINANCE_ALPHA
3=GL_RGB
4=GL_RGBA
I believe this usage is discouraged but, as you can see, it still works.
So... Your texture is stored as GL_RGB, without alpha.
The rest is a simple application of the spec. A replace on a texture with internal format RGB does final_color = texture_color and final_alpha = fragment_alpha (i.e. the alpha from glColor in your case). See the glTexEnv man page for the full tables of texture environment behaviour based on internal format.
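So the fix is to request an internal format that keeps the alpha channel. A minimal sketch of the changed call, keeping the rest of the setup from the question as is:
// Request an RGBA internal format so the texture keeps its alpha channel;
// GL_REPLACE will then take both colour and alpha from the texture.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, t.width(), t.height(), 0,
             GL_RGBA, GL_UNSIGNED_BYTE, t.bits());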

OpenGL Nvidia Driver 259.12 texture not working

My OpenGL application, which was working fine on an ATI card, stopped working when I put in an NVIDIA Quadro card. Textures simply don't work at all! I've reduced my program to a single display function which doesn't work:
void glutDispCallback()
{
//ALLOCATE TEXTURE
unsigned char * noise = new unsigned char [32 * 32 * 3];
memset(noise, 255, 32*32*3);
glEnable(GL_TEXTURE_2D);
GLuint textureID;
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_BYTE, noise);
delete [] noise;
//DRAW
glDrawBuffer(GL_BACK);
glViewport(0, 0, 1024, 1024);
setOrthographicProjection();
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
glLoadIdentity();
glDisable(GL_BLEND);
glDisable(GL_LIGHTING);
glBindTexture(GL_TEXTURE_2D, textureID);
glColor4f(0,0,1,0);
glBegin(GL_QUADS);
glTexCoord2f(0,0);
glVertex2f(-0.4,-0.4);
glTexCoord2f(0, 1);
glVertex2f(-0.4, 0.4);
glTexCoord2f(1, 1);
glVertex2f(0.4, 0.4);
glTexCoord2f(1,0);
glVertex2f(0.4,-0.4);
glEnd();
glutSwapBuffers();
//CLEANUP
GL_ERROR();
glDeleteTextures(1, &textureID);
}
The result is a blue quad (or whatever is specified by glColor4f()), and not the white quad the texture should produce. I have followed the FAQ on the OpenGL site. I have disabled blending in case the texture was being blended out. I have disabled lighting. I have checked glGetError() - no errors. I've also set glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE); and GL_DECAL. Same result. I've also tried different polygon winding - CW and CCW.
Anyone else encounter this?
Can you try using GL_REPLACE in glTexEnvi? It could be a bug in the NV driver.
Your code is correct and does what it should.
memset(noise, 255, 32*32*3); makes the texture white, but you call glColor4f(0,0,1,0); so the final color will be (1,1,1)*(0,0,1) = (0,0,1) = blue.
What is the behavior you would like to have?
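If a plain white textured quad is what you're after, a minimal tweak under the GL_MODULATE behaviour described above would be to make the modulating colour white, or to ignore the current colour altogether with GL_REPLACE:
// Option 1: keep modulation but modulate by white, leaving the texel colour unchanged.
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
// Option 2: take the fragment colour straight from the texture.
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);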
I found the error. Somewhere else in my code I had initialized a GL_TEXTURE_3D object and had not called glDisable(GL_TEXTURE_3D);
I had expected that calling glBindTexture(GL_TEXTURE_2D, textureID); would make the 2D texture the current texture and use it - and this code always worked on ATI cards. Apparently the NVIDIA driver wasn't doing that - it was using the 3D texture instead. Adding glDisable(GL_TEXTURE_3D); fixed the problem and everything works as expected.
Thanks all who tried to help.
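For reference, in fixed-function OpenGL, when more than one texture target is enabled on a texture unit, the higher-dimensional target takes precedence (so an enabled GL_TEXTURE_3D wins over GL_TEXTURE_2D), which is why the stray enable overrode the bound 2D texture. A minimal sketch of the guard, placed at the start of the display function shown above:
// Texture targets enabled elsewhere (e.g. GL_TEXTURE_3D) take priority over
// GL_TEXTURE_2D, so make sure they are off before drawing with a 2D texture.
glDisable(GL_TEXTURE_3D);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureID);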