OpenGL rendering with texture unclear - C++

I'm using the latest version of OpenGL, but when I create textures using the following function, written in C++:
GLuint Texture::Generate2dTexture(int width, int height, char* data, int length)
{
    GLuint textureIndex;
    glGenTextures(1, &textureIndex);
    glBindTexture(GL_TEXTURE_2D, textureIndex);
    gluBuild2DMipmaps(GL_TEXTURE_2D, 3, width, height, GL_BGR_EXT, GL_UNSIGNED_BYTE, data);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return textureIndex;
}
and render using:
void Renderer::DrawRectangle(GLuint textureID, RECTD rect)
{
    glBindTexture(GL_TEXTURE_2D, textureID);
    glBegin(GL_QUADS);
    {
        glTexCoord2f(0.0f, 1.0f);
        glVertex2d(rect.left, rect.top);
        glTexCoord2f(0.0f, 0.0f);
        glVertex2d(rect.left, rect.bottom);
        glTexCoord2f(1.0f, 0.0f);
        glVertex2d(rect.right, rect.bottom);
        glTexCoord2f(1.0f, 1.0f);
        glVertex2d(rect.right, rect.top);
    }
    glEnd();
}
The textures appear blurry after rendering, quite different from how the image looks in the Microsoft image viewer.
What causes this, and how can I avoid it?

Check the dimensions of your image. Older OpenGL implementations expect texture dimensions to be powers of two, so an 8x8 pixel image should look clear (as opposed to stretched out or garbled). Try resizing the image to 32x32 or 128x128 etc. to see if that helps. The texture does not have to be square - each dimension just needs to be a power of two - so if your image has awkward proportions you can pad the canvas out to something like 128x128 and then use the UV coordinates to take just the portion of the canvas holding your image.
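As a minimal sketch of that pad-and-crop approach (pow2 is a hypothetical helper, not from the question's code):
// Round each dimension up to the next power of two, then crop back via UVs.
int pow2(int v) { int p = 1; while (p < v) p <<= 1; return p; }

int canvasW = pow2(width);   // e.g. 100 -> 128
int canvasH = pow2(height);  // e.g. 60 -> 64
// Upload a canvasW x canvasH canvas with the real pixels in one corner,
// then use these instead of 1.0f as the maximum texture coordinates:
float uMax = (float)width / (float)canvasW;
float vMax = (float)height / (float)canvasH;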
Also, I think if you disable bilinear filtering, any blurring should be taken care of:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
Please note this is not a definite answer - I don't have enough rep to comment...

Related

Binding multiple textures OpenGL to different quads

I've managed to get my game to read in a PNG file, and successfully texture my objects. To be honest, I can't 100% nail down how it's actually working - and now I'd like to extend it to loading several textures, and using the one I specify.
Here's my PNG loading function:
//Loads PNG to texture
GLuint loadPNG(string name) {
    nv::Image img;
    GLuint myTextureID = 0; // 0 if loading fails, so callers get a safe "no texture" value
    if (img.loadImageFromFile(name.c_str())) {
        glGenTextures(1, &myTextureID);
        glBindTexture(GL_TEXTURE_2D, myTextureID);
        glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
        glTexImage2D(GL_TEXTURE_2D, 0, img.getInternalFormat(), img.getWidth(), img.getHeight(), 0, img.getFormat(), img.getType(), img.getLevel(0));
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 16.0f);
    }
    else {
        MessageBox(NULL, L"Failed to load texture", L"Sorry!", MB_OK | MB_ICONINFORMATION);
    }
    return myTextureID;
}
In my main function, I define the texture like this:
//Load in player texture
testTexture = loadPNG("test.png");
where testTexture is a global variable of type GLuint. Drawing my rectangles in my main draw loop is done this way:
//Used to draw rectangles
void drawRect(gameObject &p) {
    glEnable(GL_TEXTURE_2D);
    //Sets PNG transparent background
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    //glBindTexture(GL_TEXTURE_2D, myTexture);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0, 0.0); glVertex2f(p.x, p.y);
    glTexCoord2f(1.0, 0.0); glVertex2f(p.x + p.width, p.y);
    glTexCoord2f(1.0, 1.0); glVertex2f(p.x + p.width, p.y + p.height);
    glTexCoord2f(0.0, 1.0); glVertex2f(p.x, p.y + p.height);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
This works fine, texturing all my objects with the defined texture. However, I'd like to be able to define more textures, and use those. I tried moving:
glBindTexture(GL_TEXTURE_2D, myTextureID);
from the loadPNG function, into my drawRect, as:
glBindTexture(GL_TEXTURE_2D, testTexture);
However this doesn't apply any texture whatsoever. If anyone has any ideas, I'd really appreciate the help. Thanks!
You have to bind the texture in order to initialize it with glTexImage2D, so don't remove the call to glBindTexture from loadPNG. If you want to render with a different texture, simply bind that texture before rendering the quads.
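For example, a minimal sketch using the question's loadPNG and drawRect (the second texture and the object names are hypothetical):
GLuint testTexture  = loadPNG("test.png");
GLuint enemyTexture = loadPNG("enemy.png");   // hypothetical second texture

// In the draw loop: select the texture you want, then draw with it.
glBindTexture(GL_TEXTURE_2D, testTexture);
drawRect(player);
glBindTexture(GL_TEXTURE_2D, enemyTexture);   // switch textures between draws
drawRect(enemy);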

Drawing multiple 2D textures in OpenGL

I am writing my own game library in C++ using Visual Studio 2013 Ultimate. I have managed to get some basic drawing working for one texture, but when I add another it appears to 'overwrite' the previous one.
For example, if I have texture A and texture B, B will get drawn where A should have been. I have checked that I am not using the same texture ID for each texture.
I have taken a look at OpenGL trying to Draw Multiple 2d Textures, only 1st one appears, but this did not solve my problem.
I am quite new to OpenGL and lower level graphics programming in general, could someone please explain to me what is wrong with my code?
Here is my texture loading code:
Texture::Texture(const std::string& filePath)
{
    m_pixelData = PixelData(stbi_load(filePath.c_str(),
                                      &m_size.x,
                                      &m_size.y,
                                      &m_numberOfImageComponents,
                                      0 /* number of requested image components if non-zero */));
    // generate 1 texture name and store it in m_textureID
    glGenTextures(1, &m_textureID);
    // make this the active texture
    glBindTexture(GL_TEXTURE_2D, m_textureID);
    // specifies how the data to be uploaded is aligned
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    // set texture parameters (need to look up the details of what's going on here)
    // TODO work out what this does
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    // TODO work out what this does
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    GLenum imageFormat;
    // Work out image format
    // TODO add proper error handling
    switch (m_numberOfImageComponents)
    {
    case 4:
        imageFormat = GL_RGBA;
        break;
    case 3:
        imageFormat = GL_RGB;
        break;
    default:
        DDGL_LOG(ERROR, "No image formats with " + std::to_string(m_numberOfImageComponents) + " components known");
        throw;
    }
    // upload the texture to VRAM
    glTexImage2D(GL_TEXTURE_2D,
                 0,
                 imageFormat,
                 m_size.x,
                 m_size.y,
                 0,
                 imageFormat,
                 GL_UNSIGNED_BYTE,
                 m_pixelData.get());
}
Here is my 'start draw' code
void GraphicsAPIWrapper::startDraw()
{
    // TODO does this need to be called every frame?
    glMatrixMode(GL_PROJECTION); // select the matrix
    glLoadIdentity(); // reset the matrix
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}
Here is my drawing code
void GraphicsAPIWrapper::draw(std::shared_ptr<IDrawableGameObject> drawableObject)
{
    glPushMatrix();
    // convert pixel coordinates to decimals
    const Vector2F floatWindowSize((float) getWindowSize().x, (float) getWindowSize().y);
    const Vector2F floatObjectSize((float) drawableObject->getSize().x, (float) drawableObject->getSize().y);
    const Vector2F relativeObjectSize(floatObjectSize.x / floatWindowSize.x, floatObjectSize.y / floatWindowSize.y);
    const Vector2F relativeObjectPosition(drawableObject->getPosition().x / floatWindowSize.x, drawableObject->getPosition().y / floatWindowSize.y);
    // transformations
    glTranslatef(relativeObjectPosition.x, relativeObjectPosition.y, 0.0f);
    glRotatef(drawableObject->getRotation(), 0.0f, 0.0f, 1.0f);
    // TODO should this be triangles or quads? I've been told triangles are generally higher performance
    // right now QUADS are simpler though
    glBegin(GL_QUADS);
    glBindTexture(GL_TEXTURE_2D, drawableObject->getTextureID());
    glTexCoord2f(0.0f, 1.0f); glVertex3f(-relativeObjectSize.x, -relativeObjectSize.y, 1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex3f(relativeObjectSize.x, -relativeObjectSize.y, 1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex3f(relativeObjectSize.x, relativeObjectSize.y, 1.0f);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(-relativeObjectSize.x, relativeObjectSize.y, 1.0f);
    glEnd();
    glPopMatrix();
}
Here is my 'end draw' code
void GraphicsAPIWrapper::endDraw()
{
    glfwSwapBuffers(m_window->getWindow());
}
So, just to be extremely clear, the behavior I am getting is that one texture is getting drawn everywhere, rather than different textures as desired.
You are calling glBindTexture() inside a glBegin/glEnd block, which is invalid and has no effect besides generating an error. Hence, the texture actually used is whichever one was bound last before glBegin - very likely the bind performed while loading your textures, so the last texture loaded is the one shown on all objects.
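A sketch of the fix: move the bind out of the glBegin/glEnd block, leaving the rest of the draw call as it is:
glBindTexture(GL_TEXTURE_2D, drawableObject->getTextureID()); // legal here, outside glBegin/glEnd
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 1.0f); glVertex3f(-relativeObjectSize.x, -relativeObjectSize.y, 1.0f);
// ... remaining vertices unchanged ...
glEnd();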

SDL surface to OpenGL texture

How can I convert a .png image into an OpenGL texture with SDL? Here is what I have now:
typedef GLuint texture;
texture load_texture(std::string fname) {
    SDL_Surface *tex_surf = IMG_Load(fname.c_str());
    if (!tex_surf) {
        return 0;
    }
    texture ret;
    glGenTextures(1, &ret);
    glBindTexture(GL_TEXTURE_2D, ret);
    glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    SDL_FreeSurface(tex_surf);
    return ret;
}
and my code to draw the thing:
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
//Use blurry texture mapping (replace GL_LINEAR with GL_NEAREST for blocky)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glColor4f(1.0, 1.0, 1.0, 1.0); //Don't use special coloring
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f);
    glVertex3f(0.0f, 0.0f, 0.0f);
    glTexCoord2f(1.0f, 0.0f);
    glVertex3f(128.0f, 0.0f, 0.0f);
    glTexCoord2f(1.0f, 1.0f);
    glVertex3f(128.0f, 128.0f, 0.0f);
    glTexCoord2f(0.0f, 1.0f);
    glVertex3f(0.0f, 128.0f, 0.0f);
glEnd();
glDisable(GL_TEXTURE_2D);
The problem is that it only works with .bmp files, and they turn bluish, so what is wrong?
Also, when I try to load a .png, it shows up really weird.
Wrong colors can be caused by getting the channel order wrong. The code I have lying around for loading .bmp's uses GL_BGR instead of GL_RGB, so I think that will solve your problem with bmp's.
The problem with your png image is more likely caused by the png being 32 bits per pixel. Probably the best solution for you is to inspect the format field of the SDL surface to determine the appropriate flags/values to pass to glTexImage2D.
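A minimal sketch of that inspection (assuming SDL's SDL_PixelFormat fields; GL_BGR/GL_BGRA require OpenGL 1.2 or the EXT_bgra extension):
GLint  internalFormat = tex_surf->format->BytesPerPixel;  // 3 or 4, matching the old-style value used above
GLenum pixelFormat;
if (tex_surf->format->BytesPerPixel == 4)
    pixelFormat = (tex_surf->format->Rmask == 0x000000ff) ? GL_RGBA : GL_BGRA;
else
    pixelFormat = (tex_surf->format->Rmask == 0x000000ff) ? GL_RGB : GL_BGR;
glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, tex_surf->w, tex_surf->h,
             0, pixelFormat, GL_UNSIGNED_BYTE, tex_surf->pixels);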

glTexImage2D multiple images

I'm drawing an image from OpenCV full screen; it's a large image at 60 fps, so I needed a faster method than the OpenCV GUI.
Using OpenGL I do:
void paintGL() {
    glClear(GL_COLOR_BUFFER_BIT);
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glDisable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0, width, height, 0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glEnable(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, the_image_data);
    glBegin(GL_QUADS);
        glTexCoord2i(0,0); glVertex2i(0,height);
        glTexCoord2i(0,1); glVertex2i(0,0);
        glTexCoord2i(1,1); glVertex2i(width,0);
        glTexCoord2i(1,0); glVertex2i(width,height);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
Now I want to draw two images side by side, using the OpenGL hardware to scale them.
I can shrink the image by changing the quad size, but I don't understand how to load two images with glTexImage2D(), since there is no handle or id associated with the image.
The reason why you cannot see how to add another texture is because you are missing two critical functions in the code that you posted: glGenTextures and glBindTexture. The first will generate texture objects in the OpenGL context (places for textures to exist on the graphics hardware). The second "selects" one of those texture objects for subsequent calls (glTex..) to affect it.
First of all, functions like glTexParameteri and glTexImage2D do not need to be called every rendering loop... but in your case you should call glTexImage2D each frame, because the image data is always changing. By default in your code, the texture object used is the zeroth object (a reserved default). You should create two texture objects and bind them one after the other to achieve the desired result:
GLuint tex_obj[2]; // two texture names (should not be global variables, but kept that way for the sake of this example)

void initGL() {
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glDisable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0, width, height, 0);
    glEnable(GL_TEXTURE_2D);
    glGenTextures(2, tex_obj); // generate 2 texture objects with names tex_obj[0] and tex_obj[1]
    glBindTexture(GL_TEXTURE_2D, tex_obj[0]); // bind the first texture
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // set its parameters
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glBindTexture(GL_TEXTURE_2D, tex_obj[1]); // bind the second texture
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // set its parameters
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
}

void paintGL() {
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glBindTexture(GL_TEXTURE_2D, tex_obj[0]); // bind the first texture,
    // then load it into the graphics hardware:
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width0, height0, 0, GL_RGB, GL_UNSIGNED_BYTE, the_image_data0);
    glBegin(GL_QUADS);
        glTexCoord2i(0,0); glVertex2i(0,height); // you should probably change these vertices.
        glTexCoord2i(0,1); glVertex2i(0,0);
        glTexCoord2i(1,1); glVertex2i(width,0);
        glTexCoord2i(1,0); glVertex2i(width,height);
    glEnd();
    glBindTexture(GL_TEXTURE_2D, tex_obj[1]); // bind the second texture,
    // then load it into the graphics hardware:
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width1, height1, 0, GL_RGB, GL_UNSIGNED_BYTE, the_image_data1);
    glBegin(GL_QUADS);
        glTexCoord2i(0,0); glVertex2i(0,height); // you should probably change these vertices.
        glTexCoord2i(0,1); glVertex2i(0,0);
        glTexCoord2i(1,1); glVertex2i(width,0);
        glTexCoord2i(1,0); glVertex2i(width,height);
    glEnd();
}
That is basically how it is done. But I have to warn you that my knowledge of OpenGL is a bit outdated, so there might be more efficient ways to do this (I know at least that glBegin/glEnd is deprecated in modern OpenGL, replaced by VBOs).
Remember OpenGL is a state machine - you put it into a state, give it a command, and that state remains in effect for later commands.
One nice thing about textures is that you can do things outside the paint call - so if in your image processing step you generate image 1 you can load it into the card at that point.
glBindTexture(GL_TEXTURE_2D,tex_obj[1]); // select image 1 slot
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, width1, height1, 0, GL_RGB, GL_UNSIGNED_BYTE, the_image_data1 ); // load it into the graphics card memory
And then recall it in the paint call
glBindTexture(GL_TEXTURE_2D,tex_obj[1]); // select pre loaded image 1
glBegin(GL_QUADS); // draw it
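One possible refinement (a sketch, not from the answers above): since the image dimensions never change, you can allocate the texture storage once and stream each new frame with glTexSubImage2D, which avoids reallocating the texture every frame:
// At init time: allocate storage only, no pixel data yet.
glBindTexture(GL_TEXTURE_2D, tex_obj[1]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width1, height1, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);
// Each frame: overwrite the existing storage with the new image.
glBindTexture(GL_TEXTURE_2D, tex_obj[1]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width1, height1,
                GL_RGB, GL_UNSIGNED_BYTE, the_image_data1);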

OpenGL Nvidia Driver 259.12 texture not working

My OpenGL application, which was working fine on an ATI card, stopped working when I put in an NVIDIA Quadro card. Textures simply don't work at all! I've reduced my program to a single display function which doesn't work:
void glutDispCallback()
{
    //ALLOCATE TEXTURE
    unsigned char *noise = new unsigned char[32 * 32 * 3];
    memset(noise, 255, 32 * 32 * 3);
    glEnable(GL_TEXTURE_2D);
    GLuint textureID;
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_BYTE, noise);
    delete [] noise;
    //DRAW
    glDrawBuffer(GL_BACK);
    glViewport(0, 0, 1024, 1024);
    setOrthographicProjection();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glDisable(GL_BLEND);
    glDisable(GL_LIGHTING);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glColor4f(0, 0, 1, 0);
    glBegin(GL_QUADS);
        glTexCoord2f(0, 0);
        glVertex2f(-0.4, -0.4);
        glTexCoord2f(0, 1);
        glVertex2f(-0.4, 0.4);
        glTexCoord2f(1, 1);
        glVertex2f(0.4, 0.4);
        glTexCoord2f(1, 0);
        glVertex2f(0.4, -0.4);
    glEnd();
    glutSwapBuffers();
    //CLEANUP
    GL_ERROR();
    glDeleteTextures(1, &textureID);
}
The result is a blue quad (or whatever color is specified by glColor4f()), not the white quad the texture should produce. I have followed the FAQ on the OpenGL site. I have disabled blending in case the texture was being blended out. I have disabled lighting. I have checked glGetError() - no errors. I've also set glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE); and GL_DECAL. Same result. I've also tried different polygon windings - CW and CCW.
Anyone else encounter this?
Can you try using GL_REPLACE in glTexEnvi? It could be a bug in the NV driver.
Your code is correct and does what it should.
memset(noise, 255, 32*32*3); makes the texture white, but you call glColor4f(0,0,1,0); so the final color will be (1,1,1)*(0,0,1) = (0,0,1) = blue.
What is the behavior you would like to have?
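Two quick ways to check this (a minimal sketch):
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);  // neutral color, so GL_MODULATE leaves the texture as-is
// or:
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);  // ignore the vertex color entirely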
I found the error. Somewhere else in my code I had initialized a GL_TEXTURE_3D object and had not called glDisable(GL_TEXTURE_3D);
Even though I had called glBindTexture(GL_TEXTURE_2D, textureID); I expected the 2D texture to be used - as this code always worked on ATI cards. The NVIDIA driver instead sampled the 3D texture, which is actually what fixed-function OpenGL specifies: when more than one texture target is enabled on a unit, the higher-dimensional target (3D over 2D) takes precedence. So adding glDisable(GL_TEXTURE_3D); fixed the problem and everything works as expected.
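A sketch of the resulting guard:
glDisable(GL_TEXTURE_3D);   // 3D would take precedence over 2D if left enabled
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureID);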
Thanks all who tried to help.