OpenGL alpha value makes texture whiter?

I'm trying to load a texture with RGBA values, but the alpha values just seem to make the texture whiter rather than adjust its transparency. I've heard about this problem with 3D scenes, but I'm only using OpenGL for 2D. Is there any way I can fix this?
I'm initializing OpenGL with
glViewport(0, 0, winWidth, winHeight);
glDisable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDisable(GL_DEPTH_TEST);
glClearColor(0, 0, 0, 0);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, winWidth, 0, winHeight); // set origin to bottom left corner
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glColor3f(1, 1, 1);
Screenshot:
That washed out dotty image should be semitransparent. The black bits are supposed to be completely transparent. As you can see, there's an image behind it that isn't showing through.
The code to generate that texture is rather lengthy, so I'll describe what I did. It's a 40*30*4 array of type unsigned char. Every 4th char is set to 128 (should be 50% transparent, right?).
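Roughly like this (a simplified sketch of what I described; the real color pattern is more involved):
unsigned char data[40 * 30 * 4];           // 40x30 texels, 4 bytes (RGBA) each
for (int i = 0; i < 40 * 30; ++i) {
    unsigned char v = (i % 2) ? 255 : 0;   // placeholder pattern, just for illustration
    data[i * 4 + 0] = v;                   // R
    data[i * 4 + 1] = v;                   // G
    data[i * 4 + 2] = v;                   // B
    data[i * 4 + 3] = 128;                 // A: every 4th byte is 128, i.e. ~50% opacity
}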
I then pass it into this function, which loads the data into a texture:
void Texture::Load(unsigned char* data, GLenum format) {
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, _texID);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, _w, _h, format, GL_UNSIGNED_BYTE, data);
glDisable(GL_TEXTURE_2D);
}
And... I think I just found the problem. I was initializing the full-sized texture with this code:
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, _texID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tw, th, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glDisable(GL_TEXTURE_2D);
But I guess glTexImage2D needs to be GL_RGBA too? I can't use two different internal formats? Or at least not ones of different sizes (3 bytes vs 4 bytes)? GL_BGR works fine even when it's initialized like this...

For the benefit of others, I'm posting my solution here.
The problem was that although my Load function was correct,
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, _w, _h, GL_RGBA, GL_UNSIGNED_BYTE, data);
I was passing GL_RGB to this function
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tw, th, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL);
This call also needs to specify the correct number of bytes (four). From my understanding you can't use a different number of bytes in a SubImage call, although I think you can use a different format if it has the same number of bytes (i.e. mixing GL_RGB and GL_BGR is okay, but not GL_RGB and GL_RGBA).
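In other words, the allocation needed to look something like this (a sketch using the same _texID/tw/th as above):
glBindTexture(GL_TEXTURE_2D, _texID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tw, th, 0,   // internal format now has an alpha channel
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);        // no data yet; glTexSubImage2D fills it in later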

Are there any overlapping primitives in your scene?
You are aware that you're calling the 3-parameter version of glColor, which sets the alpha to 1.0, right?
It could be helpful if you could post a screenshot, or otherwise describe what happens, say, when you draw two primitives with identical colors and differing alphas. In fact, any code demonstrating the problem could help.
Edit:
I'd imagine that using TexImage with GL_RGB (for internalformat, the 3rd parameter) creates a 3-component texture with no alpha or alpha values implicitly initialized to 1, no matter what kind of pixel data you supply.
GL_BGR is not a valid value for this parameter, perhaps it is tricking your implementation into using a full 4-byte internal format? (Or a 2-byte one, as per GL_LUMINANCE_ALPHA) Or do you mean passing GL_BGR to your Texture::Load() function, which should not really be different from passing GL_RGB?

I think this should work, but it assumes the image has an alpha channel. If you try to load an image without an alpha channel, you may get an exception or your application might crash. For images without an alpha channel, use GL_RGB instead of GL_RGBA for the format parameter (the second GL_RGBA, right before GL_UNSIGNED_BYTE).
void Texture::Load(unsigned char* data) {
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, _texID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tw, th, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glDisable(GL_TEXTURE_2D);
}

Related

Using glGetTexImage() to acquire OpenCV Mat

I'm trying to make an OpenCV Mat() using output from OpenGL's glGetTexImage(). The texture I am trying to get information from was made using the call:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8UI, iWidth, iHeight, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pImageData);
and so I've tried to do this using:
unsigned char* texture_bytes = (unsigned char*)malloc(sizeof(unsigned char)*texture_width*texture_height * 3);
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR, GL_UNSIGNED_BYTE, texture_bytes);
Matrix = Mat(texture_height, texture_width, CV_8UC3, texture_bytes);
What I am wondering is if anyone knows what I should set the format and type of glGetTexImage() to in order for this to work. Also, what should I set the type of the Mat() to?
You can assume that the context is set correctly, and that the texture that is input is correct. I have verified this by displaying the texture on screen using OpenGL. Thanks in advance!
I was faced with the problem of getting data from OpenGL to OpenCV recently. I didn't use glGetTexImage, though.
What I did was an offscreen render in a framebuffer with a texture initialized like this:
GLuint texture = 0; // zero-initialized so the glIsTexture check below is well-defined
if (glIsTexture(texture)) {
glDeleteTextures(1, &texture);
}
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_2D, 0);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
glBindTexture(GL_TEXTURE_2D, 0);
Then after my draw calls, I get the data using glReadPixels:
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadBuffer(GL_COLOR_ATTACHMENT0);
cv::Mat texture = cv::Mat::zeros(height, width, CV_32FC3);
glReadPixels(0, 0, width, height, GL_BGR, GL_FLOAT, texture.data);
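One extra detail (an addition to the code above): glReadPixels returns rows bottom-up, while OpenCV's origin is the top-left corner, so you will usually want to flip the result vertically:
cv::flip(texture, texture, 0); // flip around the x-axis so row 0 becomes the top row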
Hope it helps you.
You have a mismatch in the format parameter used for the glGetTexImage() call and the internal format of the texture:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8UI, iWidth, iHeight, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pImageData);
...
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR, GL_UNSIGNED_BYTE, texture_bytes);
For an integer texture, which you have in this case, you need to use a format parameter to glGetTexImage() that works for integer textures. In your example, that would be:
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR_INTEGER, GL_UNSIGNED_BYTE, texture_bytes);
It is always a good idea to call glGetError() if you have any kind of problem getting the desired OpenGL behavior. In this case, you would have gotten a GL_INVALID_OPERATION error, based on this error condition in the spec:
format is one of the integer formats in table 8.3 and the internal format of the texture image is not integer, or format is not one of the integer formats in table 8.3 and the internal format is integer.
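A minimal error-check sketch of that idea (my own snippet; assumes <cstdio> for the printout):
GLenum err;
while ((err = glGetError()) != GL_NO_ERROR) {
    fprintf(stderr, "GL error: 0x%04X\n", err); // e.g. 0x0502 = GL_INVALID_OPERATION
}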

OpenGL: what is the BEST texture format for depth information?

I'm new here, and I have a question about the texture format in OpenGL for depth information. Here is part of my code:
glGenTextures(1,&tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16UI_EXT, width, height, 0, GL_LUMINANCE_INTEGER_EXT, GL_UNSIGNED_SHORT, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
The question is: on my Intel HD (Graphics 5500) there is a problem in glTexImage2D when I deal with the depth camera data (unsigned short), but the same code is fine on NVIDIA (GeForce 940M). The GL error is 0x0502 (GL_INVALID_OPERATION).
Is the internal format GL_LUMINANCE16UI_EXT not suitable for Intel HD, or am I missing something? Is there a better format I could use?
By the way, I tried the internal format GL_DEPTH_COMPONENT16 with GL_DEPTH_COMPONENT to make that error go away, but then other problems appear in the code that follows the code above:
glBindTexture(GL_TEXTURE_2D, tex);
frameBuffer.Bind();
glPushAttrib(GL_VIEWPORT_BIT);
glViewport(0, 0, renderBuffer.width, renderBuffer.height);
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// GlSlProgram bind
// ...
glGetUniformLocation(...);
glUniform3f(...);
// ...
glDrawArrays(GL_POINTS, 0, 1);
frameBuffer.Unbind();
// GlSlProgram unbind
glPopAttrib();
glFinish();
The GL error 0x0506 (GL_INVALID_FRAMEBUFFER_OPERATION) happens in glClear and glDrawArrays when this kind of format is used. I don't know how to fix that...
GL_LUMINANCE16UI is not a depth buffer format and will most likely not work. A list of available depth buffer formats is here.
Also, you probably shouldn't bind the texture itself but instead attach it to the framebuffer with glFramebufferTexture2D and GL_DEPTH_ATTACHMENT.
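A rough sketch of that attachment (assuming the texture was re-created with GL_DEPTH_COMPONENT16/GL_DEPTH_COMPONENT, and that fbo is the id wrapped by your frameBuffer object):
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, tex, 0);
// With no color attachment, disable color output, or the FBO may be incomplete
// (which is what produces GL_INVALID_FRAMEBUFFER_OPERATION, 0x0506, at draw time):
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle incomplete framebuffer here
}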

Problems Loading 32 bit .bmp texture in opengl c++

I'm having some trouble loading a 32-bit .bmp image in my OpenGL game. So far I can load and display 24-bit images perfectly. I now want to load a bitmap with portions of its texture being transparent or invisible.
This function has no problems with 24-bit, but 32-bit .bmp files with an alpha channel seem to distort the colors and cause transparency in unintended places.
Texture LoadTexture( const char * filename, int width, int height, bool alpha)
{
GLuint texture;
GLuint* data;
FILE* file;
fopen_s(&file, filename, "rb");
if(!file)
{
std::cout << filename <<": Load Texture Fail!\n";
exit(0);
}
data = new GLuint[width * height];
fread( data, width * height * sizeof(GLuint), 1, file );
fclose(file);
glGenTextures( 1, &texture );
glBindTexture( GL_TEXTURE_2D, texture);
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
if(alpha) //for .bmp 32 bit
{
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, 4, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
}
else //for .bmp 24 bit
{
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glTexImage2D(GL_TEXTURE_2D, 0, 3, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);
}
std::cout<<"Texture: "<<filename<< " loaded"<<std::endl;
delete[] data; // array delete to match new[]
return Texture(texture);
}
In Game Texture, drawn on a flat plane
This might look like it's working, but the 0xff00ff color is the one that should be transparent, and if I reverse the alpha channel in Photoshop the result is the same: the inner sphere is always transparent.
I also enabled:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
There are no problems with transparency; the problem seems to be with loading a bitmap that has an alpha channel. Also, all bitmaps that I load seem to be shifted a bit to the right. Is there a reason for this?
I'm going to answer this on the assumption that your file "parsing" is in fact correct, i.e. that the file data is just the image part of a .BMP without any of the header information.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, 4, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glTexImage2D(GL_TEXTURE_2D, 0, 3, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);
I find it curious that your 3-component data is in BGR order, yet your 4-component data is in RGBA order. Especially if this data comes from a .BMP file (though they don't allow alpha channels). Are you sure that your data isn't in a different ordering? For example, perhaps ABGR order?
Also, stop using numbers for image formats. You should use a real, sized internal format for your textures, not "3" or "4".
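For example, something along these lines (a sketch with sized formats in place of the bare 4 and 3):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data); // 32-bit
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8,  width, height, 0, GL_BGR,  GL_UNSIGNED_BYTE, data); // 24-bit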
This is code that works 100% for me, compiled with Visual C++ 2012:
glBindTexture(GL_TEXTURE_2D, texture_id);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); // This is very important!
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, ptr->image->width, ptr->image->height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imag_ptr);
And then in the render loop I use:
glPushMatrix();
glTranslatef(0,0,0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 1);
glEnable(GL_BLEND);
glColor4ub(255, 255, 255, 255); // This is very important! (try changing it and you will see)
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBegin(GL_QUADS);
glTexCoord3d(1, 1, 0);
glVertex2f(8,8);
glTexCoord3d(0, 1, 0);
glVertex2f(-8,8);
glTexCoord3d(0, 0, 0);
glVertex2f(-8,-8);
glTexCoord3d(1, 0, 0);
glVertex2f(8,-8);
glEnd();
glDisable(GL_TEXTURE_2D);
glPopMatrix();
I used, for example, the Android PNG icon. I tried to post another example image too, but I don't have the reputation for that, so if you want I can send it to you.
All of this applies to the PNG format.
For BMP and TGA you must swap the channels from ARGB to RGBA; without this it won't work:
for (x = 0; x < bmp_in_fheader.width; x++)
{
    t  = pixel[i];        // first color byte
    t1 = pixel[i+1];
    t2 = pixel[i+2];      // third color byte
    t3 = pixel[i+3];      // alpha
    pixel_temp[j]   = t2; // swap the first and third channels
    pixel_temp[j+1] = t1;
    pixel_temp[j+2] = t;
    pixel_temp[j+3] = t3; // keep alpha in place
    i += 4;
    j += 4;
}
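As an alternative to the manual loop (a sketch, assuming the source data really is BGRA in memory, which is what the loop above effectively converts from, and that width/height are the bitmap dimensions), OpenGL can reorder the channels at upload time:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, pixel); // GL swizzles BGRA -> RGBA during the upload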
==Next==
To create them in Photoshop you must delete your background, draw on a new layer, and then add an alpha channel under Channels. Remember: in the alpha channel, black represents transparency, and your image must be covered by white only.

GL-CL-Interop: Test for texture completeness

EDIT: As suggested, I changed the texture target to GL_TEXTURE_2D. So the initialisation now looks like this:
void initTexture( int width, int height )
{
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width,
height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL );
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
}
Since it's a GL_TEXTURE_2D, mipmaps need to be defined. How should that be reflected in the initialisation of the OpenCL Image2D?
texMems.push_back(Image2DGL(clw->context, CL_MEM_READ_ONLY, GL_TEXTURE_2D, 0, tex, &err));
I'm still getting a CL_INVALID_GL_OBJECT, though. So the question still is: How can I check for texture completeness at the point of the initialisation of the OpenCL Image2D?
Previous approach:
I'm decoding a video file with avcodec. The result is an AVFrame. I can display the frames on a GL_TEXTURE_RECTANGLE_ARB texture.
This is an excerpt from my texture initialisation, following initialisation of the GL (GLEW) context:
GLuint tex = 0;
void initTexture( int width, int height )
{
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB8, width,
height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL );
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, 0);
}
Now I want to assign tex to an Image2DGL for interop between OpenCL and OpenGL. I'm using an NVIDIA GeForce 310M with CUDA Toolkit 4. The OpenGL version is 3.3.0 and the GLX version is 1.4.
texMems.push_back(Image2DGL(clw->context, CL_MEM_READ_WRITE, GL_TEXTURE_RECTANGLE_ARB, 0, tex, &err));
This gives back an:
clCreateFromGLBuffer: -60 (CL_INVALID_GL_OBJECT)
This is all happening before I start the render loop. I can display the video frames on the texture just fine after that. The texture target (GL_TEXTURE_RECTANGLE_ARB) is allowed for the OpenCL context, as the corresponding OpenGL extension (GL_ARB_texture_rectangle) is enabled.
Now the error description in the OpenCL 1.1 spec states:
CL_INVALID_GL_OBJECT if texture is not a GL texture object whose type matches
texture_target, if the specified miplevel of texture is not defined, or if the width or height
of the specified miplevel is zero.
I'm using GL_TEXTURE_RECTANGLE_ARB, so there's no mipmapping (as I understand it). However, I found this statement in the NVIDIA OpenCL implementation notes:
If the texture object specified in a call to clCreateFromGLTexture2D or
clCreateFromGLTexture3D is incomplete as per OpenGL rules on texture
completeness then the call will return CL_INVALID_GL_OBJECT in errcode_ret.
How can I validate texture completeness at the point where I have only initialized the texture without providing any actual texture content? Any ideas?
I was able to resolve the issue of not being able to create an Image2DGL. I had failed to specify a 4-channel internal format for the 2D texture:
void initTexture( int width, int height )
{
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL );
glBindTexture(GL_TEXTURE_2D, 0);
}
By specifying GL_RGBA I was able to successfully create the Image2DGL (its constructor is equivalent to clCreateFromGLTexture2D). It seems this fulfilled the requirement for texture completeness.
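As a side note (an addition, not part of the original resolution): one way to check whether the mip level handed to OpenCL is actually defined is to query its size, which ties back to the CL_INVALID_GL_OBJECT condition quoted above:
GLint w = 0, h = 0;
glBindTexture(GL_TEXTURE_2D, tex);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH,  &w);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &h);
// w == 0 or h == 0 means that level is undefined, and clCreateFromGLTexture2D will fail.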

glTexImage2D multiple images

I'm drawing an image from OpenCV full screen; it's a large image at 60 fps, so I needed a faster way than the OpenCV GUI.
Using OpenGL I do:
void paintGL() {
glClear (GL_COLOR_BUFFER_BIT);
glClearColor (0.0,0.0,0.0,1.0);
glDisable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0,width,height,0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, the_image_data );
glBegin(GL_QUADS);
glTexCoord2i(0,0); glVertex2i(0,height);
glTexCoord2i(0,1); glVertex2i(0,0);
glTexCoord2i(1,1); glVertex2i(width,0);
glTexCoord2i(1,0); glVertex2i(width,height);
glEnd();
glDisable(GL_TEXTURE_2D);
}
Now I want to draw two images side by side, using the OpenGL hardware to scale them.
I can shrink the image by changing the quad size, but I don't understand how to load two images with glTexImage2D(), since there is no handle or id associated with the image.
The reason you cannot see how to add another texture is that you are missing two critical functions in the code you posted: glGenTextures and glBindTexture. The first generates texture objects in the OpenGL context (places for textures to exist on the graphics hardware). The second "selects" one of those texture objects so that subsequent calls (glTex...) affect it.
First of all, functions like glTexParameteri and glTexImage2D do not need to be called on every render pass... but I guess in your case you should, because the images are always changing. By default in your code the texture object used is the zeroth one (a reserved default). You should create two texture objects and bind them one after the other to achieve the desired result:
GLuint tex_obj[2]; //create two names for the texture (should not be global variables, but just for sake of this example).
void initGL() {
glClearColor (0.0,0.0,0.0,1.0);
glDisable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0,width,height,0);
glEnable(GL_TEXTURE_2D);
glGenTextures(2,tex_obj); //generate 2 texture objects with names tex_obj[0] and [1]
glBindTexture(GL_TEXTURE_2D, tex_obj[0]); //bind the first texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); //set its parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glBindTexture(GL_TEXTURE_2D, tex_obj[1]); //bind the second texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); //set its parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
}
void paintGL() {
glClear (GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glBindTexture(GL_TEXTURE_2D,tex_obj[0]); //bind the first texture.
//then load it into the graphics hardware:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, width0, height0, 0, GL_RGB, GL_UNSIGNED_BYTE, the_image_data0 );
glBegin(GL_QUADS);
glTexCoord2i(0,0); glVertex2i(0,height); //you should probably change these vertices.
glTexCoord2i(0,1); glVertex2i(0,0);
glTexCoord2i(1,1); glVertex2i(width,0);
glTexCoord2i(1,0); glVertex2i(width,height);
glEnd();
glBindTexture(GL_TEXTURE_2D, tex_obj[1]); //bind the second texture.
//then load it into the graphics hardware:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, width1, height1, 0, GL_RGB, GL_UNSIGNED_BYTE, the_image_data1 );
glBegin(GL_QUADS);
glTexCoord2i(0,0); glVertex2i(0,height); //you should probably change these vertices.
glTexCoord2i(0,1); glVertex2i(0,0);
glTexCoord2i(1,1); glVertex2i(width,0);
glTexCoord2i(1,0); glVertex2i(width,height);
glEnd();
}
That is basically how it is done. But I have to warn you that my knowledge of OpenGL is a bit outdated, so there might be more efficient ways to do this (I know at least that glBegin/glEnd is deprecated in modern OpenGL, replaced by VBOs).
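For illustration only, a rough sketch of the first quad drawn from a VBO with the legacy client-array API (names like quad_vbo are placeholders; width/height are the same quad dimensions used above):
GLfloat quad[] = {                         // x, y, s, t per vertex
    0.0f,           (GLfloat)height, 0.0f, 0.0f,
    0.0f,           0.0f,            0.0f, 1.0f,
    (GLfloat)width, 0.0f,            1.0f, 1.0f,
    (GLfloat)width, (GLfloat)height, 1.0f, 0.0f,
};
GLuint quad_vbo;
glGenBuffers(1, &quad_vbo);
glBindBuffer(GL_ARRAY_BUFFER, quad_vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 4 * sizeof(GLfloat), (const void*)0);
glTexCoordPointer(2, GL_FLOAT, 4 * sizeof(GLfloat), (const void*)(2 * sizeof(GLfloat)));
glDrawArrays(GL_QUADS, 0, 4);              // draws the same quad as the glBegin/glEnd block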
Remember OpenGL is a state machine - you put it into a state, give it a command, and that state stays in effect for later commands.
One nice thing about textures is that you can do things outside the paint call - so if your image-processing step generates image 1, you can load it into the card at that point.
glBindTexture(GL_TEXTURE_2D,tex_obj[1]); // select image 1 slot
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, width1, height1, 0, GL_RGB, GL_UNSIGNED_BYTE, the_image_data1 ); // load it into the graphics card memory
And then recall it in the paint call:
glBindTexture(GL_TEXTURE_2D,tex_obj[1]); // select pre loaded image 1
glBegin(GL_QUADS); // draw it