I'm drawing an image from OpenCV full screen; it's a large image at 60 fps, so I needed something faster than the OpenCV GUI.
Using OpenGL I do:
void paintGL() {
    glClear(GL_COLOR_BUFFER_BIT);
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glDisable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0, width, height, 0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glEnable(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, the_image_data);
    glBegin(GL_QUADS);
    glTexCoord2i(0, 0); glVertex2i(0, height);
    glTexCoord2i(0, 1); glVertex2i(0, 0);
    glTexCoord2i(1, 1); glVertex2i(width, 0);
    glTexCoord2i(1, 0); glVertex2i(width, height);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
Now I want to draw two images side by side, using the OpenGL hardware to scale them.
I can shrink an image by changing the quad size, but I don't understand how to load a second image with glTexImage2D(), since there is no handle or ID associated with the image.
The reason you can't see how to add another texture is that the code you posted is missing two critical functions: glGenTextures and glBindTexture. The first generates texture objects in the OpenGL context (places for textures to exist on the graphics hardware). The second "selects" one of those texture objects so that subsequent glTex... calls affect it.
Normally, functions like glTexParameteri and glTexImage2D do not need to be called on every pass through the rendering loop, but in your case you should, because the images change every frame. By default your code uses the zeroth texture object (a reserved default). You should create two texture objects and bind them one after the other to achieve the desired result:
GLuint tex_obj[2]; //create two names for the texture (should not be global variables, but just for sake of this example).
void initGL() {
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glDisable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0, width, height, 0);
    glEnable(GL_TEXTURE_2D);
    glGenTextures(2, tex_obj);                // generate 2 texture objects with names tex_obj[0] and [1]
    glBindTexture(GL_TEXTURE_2D, tex_obj[0]); // bind the first texture
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // set its parameters
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glBindTexture(GL_TEXTURE_2D, tex_obj[1]); // bind the second texture
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // set its parameters
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
}
void paintGL() {
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glBindTexture(GL_TEXTURE_2D, tex_obj[0]); // bind the first texture,
    // then load it into the graphics hardware:
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width0, height0, 0, GL_RGB, GL_UNSIGNED_BYTE, the_image_data0);
    glBegin(GL_QUADS);
    glTexCoord2i(0, 0); glVertex2i(0, height); // you should probably change these vertices.
    glTexCoord2i(0, 1); glVertex2i(0, 0);
    glTexCoord2i(1, 1); glVertex2i(width, 0);
    glTexCoord2i(1, 0); glVertex2i(width, height);
    glEnd();
    glBindTexture(GL_TEXTURE_2D, tex_obj[1]); // bind the second texture,
    // then load it into the graphics hardware:
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width1, height1, 0, GL_RGB, GL_UNSIGNED_BYTE, the_image_data1);
    glBegin(GL_QUADS);
    glTexCoord2i(0, 0); glVertex2i(0, height); // you should probably change these vertices.
    glTexCoord2i(0, 1); glVertex2i(0, 0);
    glTexCoord2i(1, 1); glVertex2i(width, 0);
    glTexCoord2i(1, 0); glVertex2i(width, height);
    glEnd();
}
That is basically how it is done. But I have to warn you that my knowledge of OpenGL is a bit outdated, so there might be more efficient ways to do this (I know at least that glBegin/glEnd is deprecated in modern OpenGL, replaced by vertex buffer objects).
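For reference, here is a minimal sketch of drawing the same textured quad from a VBO instead of glBegin/glEnd, still with the fixed-function pipeline (so it needs a compatibility context). quadVBO, initQuadVBO and drawQuadVBO are hypothetical names, and width/height are assumed to be known at init time:
GLuint quadVBO; // hypothetical handle, created once at init
void initQuadVBO() {
    // interleaved per vertex: s, t, x, y (same corners as the glBegin/glEnd quad)
    GLfloat quad[] = {
        0.f, 0.f, 0.f,            (GLfloat)height,
        0.f, 1.f, 0.f,            0.f,
        1.f, 1.f, (GLfloat)width, 0.f,
        1.f, 0.f, (GLfloat)width, (GLfloat)height
    };
    glGenBuffers(1, &quadVBO);
    glBindBuffer(GL_ARRAY_BUFFER, quadVBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
void drawQuadVBO() { // call with the texture already bound and GL_TEXTURE_2D enabled
    glBindBuffer(GL_ARRAY_BUFFER, quadVBO);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnableClientState(GL_VERTEX_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 4 * sizeof(GLfloat), (void*)0);                     // s, t at offset 0
    glVertexPointer(2, GL_FLOAT, 4 * sizeof(GLfloat), (void*)(2 * sizeof(GLfloat)));   // x, y after the texcoords
    glDrawArrays(GL_QUADS, 0, 4);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}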
Remember that OpenGL is a state machine: you put it into a state, issue commands against it, and that state stays in effect until you change it.
One nice thing about textures is that you can work with them outside the paint call, so if your image-processing step generates image 1, you can load it into the card at that point:
glBindTexture(GL_TEXTURE_2D,tex_obj[1]); // select image 1 slot
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, width1, height1, 0, GL_RGB, GL_UNSIGNED_BYTE, the_image_data1 ); // load it into the graphics card memory
And then recall it in the paint call:
glBindTexture(GL_TEXTURE_2D,tex_obj[1]); // select pre loaded image 1
glBegin(GL_QUADS); // draw it
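One refinement worth noting (a suggestion of mine, not something from the thread): allocate each texture's storage once at init time by passing NULL to glTexImage2D, then update the pixels with glTexSubImage2D, which avoids reallocating the texture for every frame. A sketch, assuming the image sizes stay the same as at allocation time:
// in initGL(), right after binding tex_obj[1] and setting its filters:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width1, height1, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL); // allocate storage only, no pixels yet
// (do the same for tex_obj[0] with width0/height0)

// whenever a new frame of image 1 is ready (outside the paint call is fine):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // RGB rows are often not 4-byte aligned; this avoids skewed uploads
glBindTexture(GL_TEXTURE_2D, tex_obj[1]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width1, height1, GL_RGB, GL_UNSIGNED_BYTE, the_image_data1); // overwrite the existing storage
The paint call then only needs to bind each texture and draw its quad.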
I've managed to get my game to read in a PNG file, and successfully texture my objects. To be honest, I can't 100% nail down how it's actually working - and now I'd like to extend it to loading several textures, and using the one I specify.
Here's my PNG loading function:
//Loads PNG to texture
GLuint loadPNG(string name) {
    nv::Image img;
    GLuint myTextureID = 0; // return 0 if loading fails, instead of an uninitialized value
    if (img.loadImageFromFile(name.c_str())) {
        glGenTextures(1, &myTextureID);
        glBindTexture(GL_TEXTURE_2D, myTextureID);
        glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
        glTexImage2D(GL_TEXTURE_2D, 0, img.getInternalFormat(), img.getWidth(), img.getHeight(), 0, img.getFormat(), img.getType(), img.getLevel(0));
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 16.0f);
    }
    else {
        MessageBox(NULL, L"Failed to load texture", L"Sorry!", MB_OK | MB_ICONINFORMATION);
    }
    return myTextureID;
}
In my main function, I define the texture like this:
//Load in player texture
testTexture = loadPNG("test.png");
where testTexture is a global variable of type GLuint. Drawing my rectangles in my main draw loop is done this way:
//Used to draw rectangles
void drawRect(gameObject &p) {
    glEnable(GL_TEXTURE_2D);
    //Sets PNG transparent background
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    //glBindTexture(GL_TEXTURE_2D, myTexture);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0, 0.0); glVertex2f(p.x, p.y);
    glTexCoord2f(1.0, 0.0); glVertex2f(p.x + p.width, p.y);
    glTexCoord2f(1.0, 1.0); glVertex2f(p.x + p.width, p.y + p.height);
    glTexCoord2f(0.0, 1.0); glVertex2f(p.x, p.y + p.height);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
This works fine, texturing all my objects with the defined texture. However, I'd like to be able to define more textures, and use those. I tried moving:
glBindTexture(GL_TEXTURE_2D, myTextureID);
from the loadPNG function, into my drawRect, as:
glBindTexture(GL_TEXTURE_2D, testTexture);
However this doesn't apply any texture whatsoever. If anyone has any ideas, I'd really appreciate the help. Thanks!
You have to bind the texture in order to initialize it with glTexImage2D. Don't remove the call to glBindTexture from loadPNG. If you want to render with a different texture, simply bind the texture before rendering the quads.
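A minimal sketch of that, reusing the names from the question (the second texture in the usage comment is hypothetical): keep loadPNG exactly as it is, since the bind there is what lets glTexImage2D fill the new texture object, and pass the texture you want into drawRect:
void drawRect(gameObject &p, GLuint textureID) {
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glBindTexture(GL_TEXTURE_2D, textureID); // select this object's texture before the quad is drawn
    glBegin(GL_QUADS);
    glTexCoord2f(0.0, 0.0); glVertex2f(p.x, p.y);
    glTexCoord2f(1.0, 0.0); glVertex2f(p.x + p.width, p.y);
    glTexCoord2f(1.0, 1.0); glVertex2f(p.x + p.width, p.y + p.height);
    glTexCoord2f(0.0, 1.0); glVertex2f(p.x, p.y + p.height);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
// usage: drawRect(player, testTexture); drawRect(background, otherTexture);
Passing the texture ID as a parameter lets each object choose its own texture without touching the loading code.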
I have a simple OpenGL program built with GLUT that draws a few buildings, has a keyboard-controlled moving camera, and in which I am trying to texture the ground. The animation is smooth.
The problem is with texture-mapping the ground. There are two separate issues with two separate approaches:
1) If I use glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST); then I get no flickering when I zoom far out, but from far away the ground appears as a single solid color.
From far:
From close:
2) If I use glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); then I get a lot of flickering when I move while zoomed far out, almost like the noise of a TV that is out of tune. When the camera is stationary, though, it looks more natural.
From far:
From close:
Question: starting from my existing code, what would be a simple way to get the best of both worlds, looking natural from far away with no flickering of the texture as I move? Thank you.
Here is the relevant code:
GLuint grassTextureId;
GLfloat GROUND_PLANE_WIDTH = 1000.0f;
void Display()
{
    glLoadIdentity();
    camera.Update();
    ...
    FlatGroundPlane_Draw();
    ...
    glutSwapBuffers();
}
void FlatGroundPlane_Draw(void)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, grassTextureId); // call glBindTexture before glBegin
    glBegin(GL_QUADS);
    glNormal3f(0, 1, 0);
    glTexCoord2d(0, 0);
    GLdouble textCoord = GROUND_PLANE_WIDTH;
    glVertex3f(-GROUND_PLANE_WIDTH, 0, -GROUND_PLANE_WIDTH);
    // go beyond 1 for texture coordinate so it repeats and tiles
    glTexCoord2d(0, textCoord);
    glVertex3f(-GROUND_PLANE_WIDTH, 0, GROUND_PLANE_WIDTH);
    glTexCoord2d(textCoord, textCoord);
    glVertex3f(GROUND_PLANE_WIDTH, 0, GROUND_PLANE_WIDTH);
    glTexCoord2d(textCoord, 0);
    glVertex3f(GROUND_PLANE_WIDTH, 0, -GROUND_PLANE_WIDTH);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
void Initialise()
{
    glEnable(GL_DEPTH_TEST);
    glClearDepth(1.0f);
    glDepthFunc(GL_LEQUAL);
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
    modelParser = new ModelParser();
    // grass texture is 1024x1024
    // from http://seamless-pixels.blogspot.ca/2012_10_01_archive.html
    grassTextureId = modelParser->LoadTiledTextureFromFile("./Grass/Grass_1024.ppm");
}
void ModelParser::UploadTiledTexture(unsigned int &iTexture, const RGBImage &img)
{
    glGenTextures(1, &iTexture); // create the texture
    glBindTexture(GL_TEXTURE_2D, iTexture);
    // Issue 1: no flickering, but appears a solid color from far away
    //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
    //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // Issue 2: flickering noise during movement, but appears realistic from far away
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // the texture wraps over at the edges (repeat)
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    gluBuild2DMipmaps(GL_TEXTURE_2D, 3, img.Width(), img.Height(), GL_RGB, GL_UNSIGNED_BYTE, img.Data());
}
int main(int argc, char** argv)
{
    ...
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
    ...
    glutDisplayFunc(&GLUT::Display);
    ...
    glutMainLoop();
}
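As a general note (not from this thread): the usual compromise between those two settings is trilinear filtering, optionally combined with anisotropic filtering where GL_EXT_texture_filter_anisotropic is available. A sketch of the filter parameters in UploadTiledTexture:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); // trilinear: blends between mipmap levels, so no shimmering and no hard blur step
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
GLfloat maxAniso = 1.0f; // anisotropic filtering keeps detail on the ground plane at grazing angles
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);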
My problem is that after setting up a framebuffer object with a single colour texture attached at GL_COLOR_ATTACHMENT0 and rendering a number of objects into that texture, when I then go to draw it to the screen I get a completely white texture.
I know the drawing code is correct, as I can draw to the back buffer just fine; the problem only occurs when the framebuffer becomes involved. The drawing code uses a very basic shader that textures a quad.
My code is below:
// Create the FBO
glGenFramebuffers(1, &m_gbufferFBO);
glBindFramebuffer(GL_FRAMEBUFFER, m_gbufferFBO);
// Create a colour texture for use in the fbo
glGenTextures(1, &m_colourBuffer);
glBindTexture(GL_TEXTURE_2D, m_colourBuffer);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_colourBuffer, 0);
// check if the frame buffer was successfully created
CheckFrameBufferErrors();
CheckGLErrors();
// Begin rendering with the frame buffer
glBindFramebuffer(GL_FRAMEBUFFER, m_gbufferFBO);
glPushAttrib(GL_VIEWPORT_BIT | GL_COLOR_BUFFER_BIT);
// Set the viewport to match the width and height of our FBO
glViewport(0, 0, m_width, m_height);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
// Clear buffer to whatever the clear colour is set to
glClearColor(m_clearColour.GetR(), m_clearColour.GetG(), m_clearColour.GetB(), m_clearColour.GetA());
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT );
Game::GetInstance().GetCamera()->ApplyViewTransform();
// RENDERING OF OBJECTS
glPopAttrib();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Then the buffer's colour texture is rendered to the screen using the fixed function pipeline.
glDisable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
// Render the colour buffer to screen
glBindTexture(GL_TEXTURE_2D, m_colourBuffer);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glBegin(GL_QUADS);
glTexCoord2f(0.f, 0.f); glVertex3f(0.f, 0.f, 0.f);
glTexCoord2f(1.f, 0.f); glVertex3f(m_width, 0.f, 0.f);
glTexCoord2f(1.f, 1.f); glVertex3f(m_width, m_height, 0.f);
glTexCoord2f(0.f, 1.f); glVertex3f(0.f, m_height, 0.f);
glEnd();
Any ideas on what I could be doing wrong here?
The FBO-attached texture must not be bound while the FBO is bound: a texture can never be a data source and a data sink at the same time.
For every texture unit and target to which the texture has been bound, you must bind another texture (or no texture) before binding the FBO as the render destination.
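A sketch of what that means for the code above:
glBindTexture(GL_TEXTURE_2D, 0);                // make sure m_colourBuffer is no longer bound anywhere
glBindFramebuffer(GL_FRAMEBUFFER, m_gbufferFBO);
// ... render the scene into the FBO ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, m_colourBuffer);   // only now bind it, to draw the fullscreen quad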
My OpenGL application, which was working fine on an ATI card, stopped working when I put in an NVIDIA Quadro card. Textures simply don't work at all! I've reduced my program to a single display function that doesn't work:
void glutDispCallback()
{
    //ALLOCATE TEXTURE
    unsigned char* noise = new unsigned char[32 * 32 * 3];
    memset(noise, 255, 32 * 32 * 3);
    glEnable(GL_TEXTURE_2D);
    GLuint textureID;
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_BYTE, noise);
    delete[] noise;

    //DRAW
    glDrawBuffer(GL_BACK);
    glViewport(0, 0, 1024, 1024);
    setOrthographicProjection();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glDisable(GL_BLEND);
    glDisable(GL_LIGHTING);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glColor4f(0, 0, 1, 0);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0);
    glVertex2f(-0.4, -0.4);
    glTexCoord2f(0, 1);
    glVertex2f(-0.4, 0.4);
    glTexCoord2f(1, 1);
    glVertex2f(0.4, 0.4);
    glTexCoord2f(1, 0);
    glVertex2f(0.4, -0.4);
    glEnd();
    glutSwapBuffers();

    //CLEANUP
    GL_ERROR();
    glDeleteTextures(1, &textureID);
}
The result is a blue quad (or whatever colour glColor4f() specifies), not the white quad the texture should produce. I have followed the FAQ on the OpenGL site. I disabled blending in case the texture was being blended out. I disabled lighting. I checked glGetError(): no errors. I've also tried glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE); as well as GL_DECAL, with the same result. I've also tried both polygon windings, CW and CCW.
Anyone else encounter this?
Can you try using GL_REPLACE in glTexEnvi? It could be a bug in the NV driver.
Your code is correct and does what it should.
memset(noise, 255, 32*32*3); makes the texture white, but you call glColor4f(0,0,1,0); so the final color will be (1,1,1)*(0,0,1) = (0,0,1) = blue.
What is the behavior you would like to have?
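(That math assumes GL_MODULATE, the default texture environment, which multiplies the texel colour by the current vertex colour.) If the goal is simply to see the texture unmodified, either draw with white or make the texture environment ignore the vertex colour:
glColor4f(1, 1, 1, 1);                                      // white * texel = texel
// or:
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE); // use the texel colour directly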
I found the error. Somewhere else in my code I had initialized a GL_TEXTURE_3D object and had not called glDisable(GL_TEXTURE_3D);
Even though I was calling glBindTexture(GL_TEXTURE_2D, textureID), I expected the 2D texture to be the one used, and this code had always worked on ATI cards. The NVIDIA driver, however, was sampling the 3D texture instead (when several texture targets are enabled on the same unit, the higher-dimensional one takes precedence). Adding glDisable(GL_TEXTURE_3D); fixed the problem and everything works as expected.
Thanks all who tried to help.
I'm trying to load a texture with RGBA values, but the alpha values just seem to make the texture whiter rather than adjusting its transparency. I've heard about this problem with 3D scenes, but I'm just using OpenGL for 2D. Is there any way I can fix this?
I'm initializing OpenGL with:
glViewport(0, 0, winWidth, winHeight);
glDisable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDisable(GL_DEPTH_TEST);
glClearColor(0, 0, 0, 0);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, winWidth, 0, winHeight); // set origin to bottom left corner
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glColor3f(1, 1, 1);
Screenshot:
That washed-out, dotty image should be semi-transparent, and the black bits are supposed to be completely transparent. As you can see, there's an image behind it that isn't showing through.
The code to generate that texture is rather lengthy, so I'll describe what I did: it's a 40*30*4 array of type unsigned char, and every 4th byte (the alpha) is set to 128 (which should be 50% transparent, right?).
I then pass it into this function, which loads the data into a texture:
void Texture::Load(unsigned char* data, GLenum format) {
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, _texID);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, _w, _h, format, GL_UNSIGNED_BYTE, data);
    glDisable(GL_TEXTURE_2D);
}
And... I think I just found the problem. I was initializing the full-sized texture with this code:
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, _texID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tw, th, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glDisable(GL_TEXTURE_2D);
But I guess glTexImage2D needs to be GL_RGBA too? I can't use two different internal formats? Or at least not ones of different sizes (3 bytes vs 4 bytes)? GL_BGR works fine even when it's initialized like this...
In the interest of others, I post my solution here.
The problem was that although my Load function was correct,
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, _w, _h, GL_RGBA, GL_UNSIGNED_BYTE, data);
I was passing GL_RGB to this function
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tw, th, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL);
The internal format (the third parameter) also needs to include alpha, i.e. GL_RGBA. The internal format decides which channels the texture actually stores: with GL_RGB there is no alpha channel at all, so the alpha values uploaded later by glTexSubImage2D are simply discarded. The format of the uploaded data is more forgiving; GL_RGB and GL_BGR, for example, are just different byte orders of the same three channels, but no upload format can add an alpha channel to a texture whose internal format doesn't have one.
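So the corrected allocation, matching the GL_RGBA sub-image uploads, would look like this (tw/th as in the original snippet):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tw, th, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL); // 4-channel internal format so the alpha is actually stored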
Are there any overlapping primitives in your scene?
You are aware that you're calling the 3-parameter version of glColor, which sets the alpha to 1.0, right?
It could be helpful if you could post a screenshot, or otherwise describe what happens, say, when you draw two primitives with identical colors and differing alphas. In fact, any code demonstrating the problem could help.
Edit:
I'd imagine that using TexImage with GL_RGB (for internalformat, the 3rd parameter) creates a 3-component texture with no alpha or alpha values implicitly initialized to 1, no matter what kind of pixel data you supply.
GL_BGR is not a valid value for this parameter; perhaps it is tricking your implementation into using a full 4-byte internal format (or a 2-byte one, as with GL_LUMINANCE_ALPHA)? Or do you mean passing GL_BGR to your Texture::Load() function, which should not really be different from passing GL_RGB?
I think this should work, but it assumes the image has an alpha channel. If you try to load an image without an alpha channel, the upload will read past the end of your data and may crash or produce garbage. For images without an alpha channel, use GL_RGB instead of GL_RGBA for the format parameter (the one right before GL_UNSIGNED_BYTE), so it matches the 3-byte pixels you are supplying.
void Texture::Load(unsigned char* data) {
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, _texID);
    // use the texture's stored size (_w, _h) rather than variables from outside the class
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, _w, _h, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    glDisable(GL_TEXTURE_2D);
}