In our 3D application I'm loading multiple textures using the DevIL library. When loading a texture, I call ilutRenderer( ILUT_OPENGL );, which in turn performs the following function calls:
ILboolean ilutGLInit()
{
    // Use PROXY_TEXTURE_2D with glTexImage2D() to test more accurately...
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, (GLint*)&MaxTexW);
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, (GLint*)&MaxTexH);
    if (MaxTexW == 0 || MaxTexH == 0)
        MaxTexW = MaxTexH = 256;  // Trying this because of the VooDoo series of cards...

    // Should we really be setting all this ourselves? Seems too much like a glu(t) approach...
    glEnable(GL_TEXTURE_2D);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
    glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_FALSE);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);

#ifdef _MSC_VER
    if (IsExtensionSupported("GL_ARB_texture_compression") &&
        IsExtensionSupported("GL_EXT_texture_compression_s3tc")) {
        ilGLCompressed2D = (ILGLCOMPRESSEDTEXIMAGE2DARBPROC)
            wglGetProcAddress("glCompressedTexImage2DARB");
    }
#endif

    if (IsExtensionSupported("GL_ARB_texture_cube_map"))
        HasCubemapHardware = IL_TRUE;

    return IL_TRUE;
}
ilutRenderer( ILUT_OPENGL ); needs to be called only once (for each newly created window), but while experimenting I called the same function multiple times, once for each loaded texture.
When the function was called multiple times, the loaded OpenGL textures looked poorer in quality than when it was called once. (I have multiple textures; most of them degrade, though I'm not sure about the first image.)
This puzzled me: from my perspective that call doesn't do anything special, so why can't it tolerate being called more than once?
So I started filtering out which of those functions can be called multiple times and which cannot, and concluded that it was glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); that triggered the odd behavior. Wrapping that call like this:
if( !bInitDone )
{
    bInitDone = TRUE;
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
}
after that ilutRenderer( ILUT_OPENGL ) can be called as many times as needed.
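A plausible explanation (my reading of OpenGL semantics, not anything DevIL documents): glTexParameteri acts on the texture object currently bound to GL_TEXTURE_2D, so a repeated init call silently modifies whichever texture happens to be bound at that moment. A minimal sketch of the effect, with a hypothetical texture name:

GLuint texA;  // hypothetical: the most recently loaded texture
glGenTextures(1, &texA);
glBindTexture(GL_TEXTURE_2D, texA);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
// ... a later ilutRenderer( ILUT_OPENGL ) call runs while texA is still bound:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);  // alters texA itself, not some global default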
I have also tried to centralize the OpenGL initialization like this:
...
InitGL( OnWindowReady )
...

void OnWindowReady( void )
{
    ilInit();
    ilutRenderer( ILUT_OPENGL );
}

... maybe some rendering code ...

void LoadModel( const wchar_t* file )
{
    ... load texture 1 ...
    ... load texture 2 ...
}
But the textures still appear "corrupted". Maybe the window needs to be rendered at least once before textures are loaded, but what I really want to know is which OpenGL functions and state determine whether a texture ends up corrupted or looking fine.
I have "NVidia Quadro K2100M", driver version 375.86.
Is this display driver bug ?
How do you typically report bugs to NVidia ?
My question is possibly not related to Qt and/or QOpenGLWidget itself, but rather to OpenGL buffers in general. Anyway, I'm trying to implement a cross-platform renderer of YUV video frames, which requires converting YUV to RGB and then rendering the result on a widget.
So far, I have succeeded in the following:
Found two suitable shaders (1 fragment & 1 vertex) to handle the YUV-to-RGB conversion (our project only supports Qt 5.6 so far, so there's no better way for me to do it)
Used QOpenGLWidget to obtain a properly-behaving widget
Did my best with QOpenGLTexture to make the drawing
Here is my very sketchy code, which displays video frames from a raw YUV file; most of the work is done by the GPU. It would be fine if it weren't for the trouble of buffer allocations. The point is that frames are received from some legacy code, which hands me custom wrappers around something like unsigned char *data, which is why I have to copy it like this:
//-----------------------------------------
GLvoid* mBufYuv;  // buffer somewhere
int     mFrameSize;
//-----------------------------------------
void OpenGLDisplay::DisplayVideoFrame(unsigned char *data, int frameWidth, int frameHeight)
{
    impl->mVideoW = frameWidth;
    impl->mVideoH = frameHeight;
    memcpy(impl->mBufYuv, data, impl->mFrameSize);
    update();
}
While testing the concept, frame and buffer sizes were hardcoded like:
// Called from the outside, assuming video frame height/width are constant
void OpenGLDisplay::InitDrawBuffer(unsigned bsize)
{
    impl->mFrameSize = bsize;
    impl->mBufYuv = new unsigned char[bsize];
}
Qt's texture classes served well for the purpose, so...
// Create the y, u, v texture objects respectively
impl->mTextureY = new QOpenGLTexture(QOpenGLTexture::Target2D);
impl->mTextureU = new QOpenGLTexture(QOpenGLTexture::Target2D);
impl->mTextureV = new QOpenGLTexture(QOpenGLTexture::Target2D);
impl->mTextureY->create();
impl->mTextureU->create();
impl->mTextureV->create();
// Get the texture IDs for the y, u and v components
impl->id_y = impl->mTextureY->textureId();
impl->id_u = impl->mTextureU->textureId();
impl->id_v = impl->mTextureV->textureId();
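Worth noting: QOpenGLTexture can also allocate its storage once and simply be refreshed each frame, instead of re-creating the textures inside paintGL(). A minimal sketch, assuming a context that supports single-channel R8 textures (videoW, videoH and yPlanePtr are hypothetical names; the shader then reads the .r component):

// One-time setup: allocate storage for the Y plane
impl->mTextureY->setFormat(QOpenGLTexture::R8_UNorm);
impl->mTextureY->setSize(videoW, videoH);
impl->mTextureY->setMinMagFilters(QOpenGLTexture::Linear, QOpenGLTexture::Linear);
impl->mTextureY->allocateStorage();
// Per frame: upload new data into the existing storage
impl->mTextureY->setData(QOpenGLTexture::Red, QOpenGLTexture::UInt8, yPlanePtr);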
And the rendering itself then looks like:
void OpenGLDisplay::paintGL()
{
    // Load the y data texture: activate texture unit GL_TEXTURE0
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, impl->id_y);
    // Create the actual y texture from the data in mBufYuv
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, impl->mVideoW, impl->mVideoH,
                 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, impl->mBufYuv);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // Load the u data texture: activate texture unit GL_TEXTURE1
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, impl->id_u);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, impl->mVideoW / 2, impl->mVideoH / 2,
                 0, GL_LUMINANCE, GL_UNSIGNED_BYTE,
                 (char*)impl->mBufYuv + impl->mVideoW * impl->mVideoH);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // Load the v data texture: activate texture unit GL_TEXTURE2
    glActiveTexture(GL_TEXTURE2);
    glBindTexture(GL_TEXTURE_2D, impl->id_v);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, impl->mVideoW / 2, impl->mVideoH / 2,
                 0, GL_LUMINANCE, GL_UNSIGNED_BYTE,
                 (char*)impl->mBufYuv + impl->mVideoW * impl->mVideoH * 5 / 4);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // Point each sampler uniform at its texture unit; the value is the
    // unit index: 0 for GL_TEXTURE0, 1 for GL_TEXTURE1, 2 for GL_TEXTURE2
    glUniform1i(impl->textureUniformY, 0);
    glUniform1i(impl->textureUniformU, 1);
    glUniform1i(impl->textureUniformV, 2);

    // Draw using vertex arrays
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
As I mentioned above, it works fine, but it's only a demo sketch. The goal is to implement a generic video renderer, which means the aspect ratio, resolution and frame size may change dynamically; thus, the buffer (GLvoid* mBufYuv; in the code above) has to be reallocated and, even worse, I'll have to memcpy data into it 25 times per second. That definitely doesn't look like something that would run fast enough for Full HD video, for example.
Of course, several trivial optimizations are possible to reduce the data copying, but Google tells me there are ways to allocate buffers directly in OpenGL: those PBO/PUBO things, and QOpenGLBuffer at least.
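For reference, a minimal sketch of the QOpenGLBuffer pixel-unpack route (my assumption of how it could look, not the project's real code; mPbo, w and h are hypothetical names, and a current GL context is assumed). While such a buffer is bound, the last argument of glTexImage2D is interpreted as a byte offset into the buffer rather than a client pointer:

QOpenGLBuffer mPbo(QOpenGLBuffer::PixelUnpackBuffer);

void initPbo(int frameSize)
{
    mPbo.create();
    mPbo.setUsagePattern(QOpenGLBuffer::StreamDraw);
    mPbo.bind();
    mPbo.allocate(frameSize);   // reserve GPU-visible storage, no initial data
    mPbo.release();
}

void uploadPlane(const unsigned char* data, int size, int w, int h)
{
    mPbo.bind();
    mPbo.write(0, data, size);  // map()/unmap() could avoid even this copy
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, w, h,
                 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, nullptr);  // offset 0 into the bound PBO
    mPbo.release();
}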
Now, here is the problem: I'm quite confused by the many different ways to handle textures, and I don't know the best/optimal one, nor the one that best fits my case.
Any piece of advice is appreciated.
I am using the OpenGL, GLM, ILU and GLUT libraries for loading and texturing 3D models. The models appear to load correctly; however, when it comes to texturing, the texture seems to repeat.
I have included two pictures below showing the model non-textured and textured.
non-textured:
textured:
If you look closely at the second image, you can see the texture is applied at a tiny scale and repeated across the whole model.
As for the code, I start by loading the texture.
ILboolean success = false;

if (ilGetInteger(IL_VERSION_NUM) < IL_VERSION)
{
    return false;
}

ilInit();                     /* Initialize the DevIL library */
ilGenImages(1, &ilTextureID); /* Generate a DevIL image object */
ilBindImage(ilTextureID);     /* Bind the image object */

success = ilLoadImage((const ILstring)theFilename); /* Load the image */
if (!success)
{
    ilDeleteImages(1, &ilTextureID);
    return false;
}

success = ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE); // Convert every colour component to unsigned byte
if (!success)
{
    return false;
}

textureWidth = ilGetInteger(IL_IMAGE_WIDTH);
textureHeight = ilGetInteger(IL_IMAGE_HEIGHT);

glGenTextures(1, &GLTextureID);            // GL texture name generation
glBindTexture(GL_TEXTURE_2D, GLTextureID); // Bind the GL texture name
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);  // Linear interpolation for the magnification filter
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // Nearest-neighbour for the minification filter
glTexImage2D(GL_TEXTURE_2D, 0, ilGetInteger(IL_IMAGE_BPP), ilGetInteger(IL_IMAGE_WIDTH),
             ilGetInteger(IL_IMAGE_HEIGHT), 0, ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE,
             ilGetData());                 /* Texture specification */
glBindTexture(GL_TEXTURE_2D, GLTextureID); // Bind the GL texture name
ilDeleteImages(1, &ilTextureID);
I have tried adding things like:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
but this just seems to make the model non-textured.
Then I call the model loading method and apply the texture:
m_model = glmReadOBJ(mdlFilename);
glmFacetNormals(m_model);
glmVertexNormals(m_model, 180.0f, false);
m_TextureID = mdlTexture.getTexture();
m_model->textures[m_model->numtextures - 1].id = m_TextureID;
m_model->textures[m_model->numtextures - 1].width = mdlTexture.getTWidth();
m_model->textures[m_model->numtextures - 1].height = mdlTexture.getTHeight();
While debugging the above code, I see negative values for "vertices", "normals" and "facetnorms" on the 3D model, but sensible values for "numnormals", "numtexcoords" and "numfacetnorms". I'm not entirely sure whether this is normal.
And finally for the rendering of the model:
glPushMatrix();
    // transformations here...
    glTranslatef(mdlPosition.x, 0.0f, -mdlPosition.z);
    glRotatef(mdlRotationAngle, 0, 1, 0);
    glScalef(mdlScale.x, mdlScale.y, mdlScale.z);
    glmDraw(m_model, GLM_SMOOTH | GLM_TEXTURE | GLM_MATERIAL);
glPopMatrix();
I am using OpenGL. I can load TGA files properly, but for some reason when I render JPG files I do not see them correctly.
This is what the image is supposed to look like:
And this is what it actually looks like. Why is it stretched? Is it because of the coordinates?
Here is the code I am using for drawing:
void Renderer::DrawJpg(GLuint tex, int xi, int yq, int width, int height) const
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
        glTexCoord2i(0, 0); glVertex2i(0 + xi,     0 + xi);
        glTexCoord2i(0, 1); glVertex2i(0 + xi,     height + xi);
        glTexCoord2i(1, 1); glVertex2i(width + xi, height + xi);
        glTexCoord2i(1, 0); glVertex2i(width + xi, 0 + xi);
    glEnd();
}
This is how I am loading the image:
imagename = s;
ILboolean success;
ilInit();
ilGenImages(1, &id);
ilBindImage(id);
success = ilLoadImage((const ILstring)imagename.c_str());
if (success)
{
    /* Convert every colour component into unsigned byte. If your image
       contains an alpha channel you can replace IL_RGB with IL_RGBA. */
    success = ilConvertImage(IL_RGB, IL_UNSIGNED_BYTE);
    if (!success)
    {
        printf("image conversion failed.");
    }
    glGenTextures(1, &id);
    glBindTexture(GL_TEXTURE_2D, id);
    width = ilGetInteger(IL_IMAGE_WIDTH);
    height = ilGetInteger(IL_IMAGE_HEIGHT);
    glTexImage2D(GL_TEXTURE_2D, 0, ilGetInteger(IL_IMAGE_BPP), ilGetInteger(IL_IMAGE_WIDTH),
                 ilGetInteger(IL_IMAGE_HEIGHT), 0, ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE,
                 ilGetData());
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);     // Repeat wrapping
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);     // Repeat wrapping
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // Linear filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // Linear filtering
I should probably mention that some images did get rendered properly. I thought it was because width != height, but that is not the case: some images with width != height also load fine.
For other images I still get this problem.
You probably need to call
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
before uploading the texture data with glTexImage2D.
From the reference pages:
GL_UNPACK_ALIGNMENT
Specifies the alignment requirements for the start of each pixel row
in memory. The allowable values are 1 (byte-alignment), 2 (rows
aligned to even-numbered bytes), 4 (word-alignment), and 8 (rows start
on double-word boundaries).
The default value for the alignment is 4 and your image loading library probably returns pixel data with byte-aligned rows, which explains why some of your images look OK (when the width is a multiple of four).
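To make the arithmetic concrete, here is a tiny helper (mine, not from the reference pages) computing the row size OpenGL assumes under a given unpack alignment:

// Bytes OpenGL reads per row under a given GL_UNPACK_ALIGNMENT.
// E.g. width = 5, 3 channels, alignment 4: the loader packs rows
// tightly at 15 bytes, but OpenGL assumes 16, so every row after
// the first is read skewed.
int gl_row_stride(int width, int channels, int alignment)
{
    int tight = width * channels;
    return (tight + alignment - 1) / alignment * alignment;
}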
Always try to make the image width and height powers of two, because some GPUs support textures only in power-of-two (POT) resolutions (for example 128x128 or 512x512, but not 123x533 or 128x532).
And I think that here, instead of GL_REPEAT, you should use GL_CLAMP_TO_EDGE :)
GL_REPEAT is for when your texture coordinates go above 1.0f; GL_CLAMP_TO_EDGE handles that too, but guarantees the image fills the polygon without unwanted lines on the edges (it stops linear filtering from sampling across the edges).
Remember to try out the code where floats are used (the sample from the comment) :)
Here is a good explanation: http://open.gl/textures :)
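If you do need to pad dimensions up, a small helper like this (my sketch, not part of the answer) rounds a value up to the next power of two:

// Round v up to the next power of two, e.g. 123 -> 128, 533 -> 1024.
// Assumes v > 0 and 32-bit unsigned arithmetic.
unsigned next_pow2(unsigned v)
{
    v--;
    v |= v >> 1;  v |= v >> 2;  v |= v >> 4;
    v |= v >> 8;  v |= v >> 16;
    return v + 1;
}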
I'm having some weird memory issues in a C program I'm writing, and I think something related to my texture loading system is the cause.
The problem is that, depending on how many textures I create, different issues come up. With fewer textures, other variables in the program change ever so slightly. If I include all the textures I want, the program may spit out a host of different "*** glibc detected ***" type errors, and occasionally a segmentation fault.
The kicker is that occasionally, the program works perfectly. It's all the luck of the draw.
My code is pretty heavy at this point, so I'll just post what I believe to be the relevant parts of it.
d_newTexture(d_loadBMP("resources/sprites/default.bmp"), &textures);
This is the function I call to load a texture into OpenGL. "textures" is a variable of type texMan_t, which is a struct I made:
typedef struct {
    GLuint texID[500];
    int texInc;
} texMan_t;
The idea is that texMan_t encompasses all your texture IDs for easier use. texInc just keeps track of what the next available member of texID is.
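One aside (my suggestion, not in the original code): texID has a fixed capacity of 500, so a guard at the top of d_newTexture would catch an overflow before it tramples adjacent memory:

// Hypothetical guard; 500 matches the size of the texID array above.
if (tex->texInc >= 500) {
    fprintf(stderr, "texMan_t is full, cannot allocate another texture\n");
    return;
}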
This is d_newTexture:
void d_newTexture(imgInfo_t info, texMan_t* tex) {
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &tex->texID[tex->texInc]);
    glBindTexture(GL_TEXTURE_2D, tex->texID[tex->texInc]);

    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

    gluBuild2DMipmaps(GL_TEXTURE_2D, 4, info.width, info.height, GL_RGBA, GL_UNSIGNED_BYTE, info.data);

    tex->texInc++;
    glDisable(GL_TEXTURE_2D);
}
I also use a function named d_newTextures, which is identical to d_newTexture except that it splits a simple sprite sheet into multiple textures.
void d_newTextures(imgInfo_t info, int count, texMan_t* tex) {
    glEnable(GL_TEXTURE_2D);
    glGenTextures(count, &tex->texID[tex->texInc]);

    for (int i = 0; i < count; i++) {
        glBindTexture(GL_TEXTURE_2D, tex->texID[tex->texInc + i]);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        gluBuild2DMipmaps(GL_TEXTURE_2D, 4, info.width, info.height / count,
                          GL_RGBA, GL_UNSIGNED_BYTE, &info.data[info.width * (info.height / count) * 4 * i]);
    }

    tex->texInc += count;
    glDisable(GL_TEXTURE_2D);
}
What could be the cause of the issues I'm seeing?
EDIT: Recently I've also been getting the error "*** glibc detected *** out/PokeEngine: free(): invalid pointer: 0x01010101 ***" after closing the program (assuming it managed to start properly). The backtrace looks like this:
/lib/i386-linux-gnu/libc.so.6(+0x75ee2)[0xceeee2]
/usr/lib/nvidia-173/libGLcore.so.1(+0x277c7c)[0x109ac7c]
EDIT 2:
Here's the code for d_loadBMP as well. Hope it helps!
imgInfo_t d_loadBMP(char* filename) {
    imgInfo_t out;
    FILE* bmpFile;

    bmpFile = fopen(filename, "r");
    if (bmpFile == NULL) {
        printf("ERROR: Texture file not found!\n");
    }

    bmp_sign bmpSig;
    bmp_fHeader bmpFileHeader;
    bmp_iHeader bmpInfoHeader;

    fread(&bmpSig, sizeof(bmp_sign), 1, bmpFile);
    fread(&bmpFileHeader, sizeof(bmp_fHeader), 1, bmpFile);
    fread(&bmpInfoHeader, sizeof(bmp_iHeader), 1, bmpFile);

    out.width = bmpInfoHeader.width;
    out.height = bmpInfoHeader.height;
    out.size = bmpInfoHeader.imageSize;
    out.data = (char*)malloc(sizeof(char) * out.width * out.height * 4);

    // Loaded backwards because that's how BMPs are stored
    for (int i = out.width * out.height * 4; i > 0; i -= 4) {
        fread(&out.data[i + 2], sizeof(char), 1, bmpFile);
        fread(&out.data[i + 1], sizeof(char), 1, bmpFile);
        fread(&out.data[i], sizeof(char), 1, bmpFile);
        out.data[i + 3] = 255;
    }
    return out;
}
The way you're loading BMP files is wrong. You're reading straight into structs, which is very unreliable, because the memory layout your compiler chooses for a struct may differ vastly from the data layout in the file. Also, your code contains zero error checks. If I had to make an educated guess, I'd say this is where your problems are. (Note also that your copy loop starts at i = out.width*out.height*4 and immediately writes out.data[i+2] and out.data[i+3], which is past the end of the buffer you malloc'ed; a heap overrun like that is exactly the kind of thing glibc's free() checks catch.)
BTW: glEnable(GL_TEXTURE_…) enables a texture target as a data source for rendering. It's completely unnecessary for just generating and uploading textures. You can omit the bracing glEnable(GL_TEXTURE_2D); … glDisable(GL_TEXTURE_2D) blocks in your loading code. Also, I'd not use gluBuild2DMipmaps (it doesn't support arbitrary texture dimensions, and you're disabling mipmapping anyway), and would just upload directly with glTexImage2D.
Also, I don't get your need for a texture manager, or at least not why it looks like this. A much better approach would be a hash map from file path → texture ID, together with a reference count.
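To illustrate the field-by-field alternative, here is a minimal sketch with fixed-width types and error checks (it does not use the original bmp_* structs, and it assumes a little-endian host, just as the original code does):

#include <stdint.h>
#include <stdio.h>

static int read_u16(FILE* f, uint16_t* v) { return fread(v, 2, 1, f) == 1; }
static int read_u32(FILE* f, uint32_t* v) { return fread(v, 4, 1, f) == 1; }
static int read_i32(FILE* f, int32_t*  v) { return fread(v, 4, 1, f) == 1; }

/* Returns 1 on success, 0 on any short read or bad magic. */
int load_bmp_header(FILE* f, int32_t* width, int32_t* height, uint32_t* pixelOffset)
{
    uint16_t magic, reserved;
    uint32_t fileSize, infoSize;
    if (!read_u16(f, &magic) || magic != 0x4D42) return 0;      /* "BM" */
    if (!read_u32(f, &fileSize)) return 0;
    if (!read_u16(f, &reserved) || !read_u16(f, &reserved)) return 0;
    if (!read_u32(f, pixelOffset)) return 0;                    /* where pixel data starts */
    if (!read_u32(f, &infoSize)) return 0;                      /* BITMAPINFOHEADER size */
    if (!read_i32(f, width) || !read_i32(f, height)) return 0;
    return 1;
}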
I'm having problems trying to load textures in an OpenGL GLUT project using classes.
Here's some code that includes the texturing stuff:
Declaring a textured model from a subclass of the model class:
TextureModel * title = new TextureModel("Box.obj", "title.raw");
Constructor method of TextureModel subclass:
TextureModel(string fName, string tName) : Model(fName), textureFile(tName)
{
    material newMat = {{0.63, 0.52, 0.1, 1.0}, {0.63, 0.52, 0.1, 1.0}, {0.2, 0.2, 0.05, 0.5}, 10};
    Material = newMat;

    // enable texturing
    glEnable(GL_TEXTURE_2D);

    loadcolTexture(textureFile);
    glGenTextures(1, &textureRef);

    // specify the filtering method
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    // associate the image read in with the texture to be applied
    gluBuild2DMipmaps(GL_TEXTURE_2D, 3, 256, 256, GL_RGB, GL_UNSIGNED_BYTE, image_array);
}
Texture loading function that reads in the data from a RAW file:
int loadcolTexture(const string fileName) {
    ifstream inFile;
    inFile.open(fileName.c_str(), ios::binary);
    if (!inFile.good())
    {
        cerr << "Can't open texture file " << fileName << endl;
        return 1;
    }
    inFile.seekg(0, ios::end);
    int size = inFile.tellg();
    image_array = new char[size];
    inFile.seekg(0, ios::beg);
    inFile.read(image_array, size);
    inFile.close();
    return 0;
}
Method to draw the triangles:
virtual void drawTriangle(int f1, int f2, int f3, int t1, int t2, int t3, int n1, int n2, int n3)
{
    glColor3f(1.0, 1.0, 1.0);
    glBegin(GL_TRIANGLES);
        glBindTexture(GL_TEXTURE_2D, textureRef);
        glNormal3fv(&normals[n1].x);
        glTexCoord2f(textures[t1].u, textures[t1].v);
        glVertex3fv(&Model::vertices[f1].x);

        glNormal3fv(&normals[n2].x);
        glTexCoord2f(textures[t2].u, textures[t2].v);
        glVertex3fv(&Model::vertices[f2].x);

        glNormal3fv(&normals[n3].x);
        glTexCoord2f(textures[t3].u, textures[t3].v);
        glVertex3fv(&Model::vertices[f3].x);
    glEnd();
}
I also have lighting, depth testing and double buffering enabled.
Models and lighting work fine, but the textures don't appear. Any ideas about why it doesn't work would be great.
To add to the comment, I see a few things here:
As mentioned in the comment, you need to bind a texture before you can upload data to it. Once you generate the texture with glGenTextures, you need to bind it as the active texture before you try to load data or set parameters with glTexParameteri.
You're building mipmaps but not using them. Either set GL_TEXTURE_MIN_FILTER to GL_NEAREST_MIPMAP_LINEAR to make use of the mipmaps, or don't build them in the first place. As it stands, you're just wasting texture memory.
It's not legal to bind a texture in between glBegin/glEnd, as you have done in drawTriangle. Bind it before the glBegin.
Please, please, please start using glGetError in your code. It will tell you when you're doing something wrong before you have to come and ask here to find your mistakes. (You would have found two of the three mistakes here if you had been using it.)
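For example, a tiny helper along these lines (my sketch, not code from the question) can be dropped in after suspect calls while debugging:

// Drain and report all pending GL errors; "where" tags the call site.
void checkGL(const char* where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        fprintf(stderr, "GL error 0x%04X at %s\n", (unsigned)err, where);
}

// usage: glTexImage2D(...); checkGL("glTexImage2D in TextureModel ctor");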