I'm having some weird memory issues in a C program I'm writing, and I think something related to my texture loading system is the cause.
The problem is that different issues come up depending on how many textures I load. With fewer textures, other variables in the program get subtly corrupted. If I include all the textures I want, the program may spit out a host of different "*** glibc detected ***" errors, and occasionally a segmentation fault.
The kicker is that occasionally, the program works perfectly. It's all the luck of the draw.
My code is pretty heavy at this point, so I'll just post what I believe to be the relevant parts of it.
d_newTexture(d_loadBMP("resources/sprites/default.bmp"), &textures);
Is the function I call to load a texture into OpenGL. "textures" is a variable of type texMan_t, which is a struct I made.
typedef struct {
    GLuint texID[500];
    int texInc;
} texMan_t;
The idea is that texMan_t encompasses all your texture IDs for easier use. texInc just keeps track of what the next available member of texID is.
This is d_newTexture:
void d_newTexture(imgInfo_t info, texMan_t* tex) {
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &tex->texID[tex->texInc]);
    glBindTexture(GL_TEXTURE_2D, tex->texID[tex->texInc]);

    glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );

    gluBuild2DMipmaps( GL_TEXTURE_2D, 4, info.width, info.height,
                       GL_RGBA, GL_UNSIGNED_BYTE, info.data );

    tex->texInc++;
    glDisable(GL_TEXTURE_2D);
}
I also use a function by the name of d_newTextures, which is identical to d_newTexture except that it splits a simple sprite sheet into multiple textures.
void d_newTextures(imgInfo_t info, int count, texMan_t* tex) {
    glEnable(GL_TEXTURE_2D);
    glGenTextures(count, &tex->texID[tex->texInc]);

    for(int i=0; i<count; i++) {
        glBindTexture(GL_TEXTURE_2D, tex->texID[tex->texInc+i]);

        glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL );
        glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
        glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
        glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
        glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );

        gluBuild2DMipmaps( GL_TEXTURE_2D, 4, info.width, info.height/count,
                           GL_RGBA, GL_UNSIGNED_BYTE,
                           &info.data[info.width*(info.height/count)*4*i] );
    }

    tex->texInc += count;
    glDisable(GL_TEXTURE_2D);
}
What could be the cause of the issues I'm seeing?
EDIT: Recently, I've also been getting the error "*** glibc detected *** out/PokeEngine: free(): invalid pointer: 0x01010101 ***" after closing the program, assuming it was able to start properly. The backtrace looks like this:
/lib/i386-linux-gnu/libc.so.6(+0x75ee2)[0xceeee2]
/usr/lib/nvidia-173/libGLcore.so.1(+0x277c7c)[0x109ac7c]
EDIT 2:
Here's the code for d_loadBMP as well. Hope it helps!
imgInfo_t d_loadBMP(char* filename) {
    imgInfo_t out;
    FILE * bmpFile;

    bmpFile = fopen(filename, "r");
    if(bmpFile == NULL) {
        printf("ERROR: Texture file not found!\n");
    }

    bmp_sign bmpSig;
    bmp_fHeader bmpFileHeader;
    bmp_iHeader bmpInfoHeader;

    fread(&bmpSig, sizeof(bmp_sign), 1, bmpFile);
    fread(&bmpFileHeader, sizeof(bmp_fHeader), 1, bmpFile);
    fread(&bmpInfoHeader, sizeof(bmp_iHeader), 1, bmpFile);

    out.width = bmpInfoHeader.width;
    out.height = bmpInfoHeader.height;
    out.size = bmpInfoHeader.imageSize;

    out.data = (char*)malloc(sizeof(char)*out.width*out.height*4);

    // Loaded backwards because that's how BMPs are stored
    for(int i=out.width*out.height*4; i>0; i-=4) {
        fread(&out.data[i+2], sizeof(char), 1, bmpFile);
        fread(&out.data[i+1], sizeof(char), 1, bmpFile);
        fread(&out.data[i], sizeof(char), 1, bmpFile);
        out.data[i+3] = 255;
    }

    return out;
}
The way you're loading BMP files is wrong. You're reading right into structs, which is very unreliable, because the memory layout your compiler chooses for a struct may vastly differ from the data layout in a file. Also your code contains zero error checks. If I had to make an educated guess I'd say this is where your problems are.
BTW, glEnable(GL_TEXTURE_…) enables a texture target as a data source for rendering. It's completely unnecessary for just generating and uploading textures, so you can omit the bracketing glEnable(GL_TEXTURE_2D); … glDisable(GL_TEXTURE_2D) blocks in your loading code. Also I'd not use gluBuild2DMipmaps – it doesn't support arbitrary texture dimensions, and with a GL_NEAREST minification filter you're not using mipmaps anyway – and just upload directly with glTexImage2D.
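For illustration, a minimal sketch of what that direct upload could look like, reusing the imgInfo_t fields from your code (a sketch, not drop-in code):

// Hypothetical replacement for the gluBuild2DMipmaps call:
// upload only the base level. Assumes info.data holds RGBA8 pixels.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             info.width, info.height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, info.data);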
Also, I don't get your need for a texture manager – or at least not why it looks like this. A much better approach would be a hash map from file path → texture ID, plus a reference count.
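Something along these lines (a rough sketch only; a linear search stands in for a real hash map, the cache sizes are arbitrary, and the actual image decode/upload step is elided):

#include <string.h>
#include <GL/gl.h>

// One cache entry: file path -> GL texture ID, plus a reference count.
typedef struct {
    char   path[256];
    GLuint id;
    int    refs;
} texEntry_t;

static texEntry_t g_cache[128];
static int        g_cacheCount = 0;

GLuint texAcquire(const char *path) {
    for (int i = 0; i < g_cacheCount; i++) {
        if (strcmp(g_cache[i].path, path) == 0) {
            g_cache[i].refs++;            // already loaded: share the ID
            return g_cache[i].id;
        }
    }
    texEntry_t *e = &g_cache[g_cacheCount++];
    strncpy(e->path, path, sizeof e->path - 1);
    e->path[sizeof e->path - 1] = '\0';
    glGenTextures(1, &e->id);             // decode + upload would go here
    e->refs = 1;
    return e->id;
}

void texRelease(GLuint id) {
    for (int i = 0; i < g_cacheCount; i++) {
        if (g_cache[i].id == id && --g_cache[i].refs == 0) {
            glDeleteTextures(1, &g_cache[i].id);
            g_cache[i] = g_cache[--g_cacheCount];  // swap-remove
            return;
        }
    }
}

This way, loading the same file twice hands back the same texture ID instead of creating a duplicate.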
I'm loading a font from a TGA texture. I generate the mipmap using the gluBuild2DMipmaps() function.
When the font has a certain size, it looks very good. But when it gets smaller, it gets darker and darker whenever it reaches a new mipmap level.
This is how I create the texture:
void TgaLoader::bindTexture(unsigned int* texture)
{
    tImageTGA *pBitMap = m_tgaImage;

    if(pBitMap == 0)
    {
        return;
    }

    glGenTextures(1, texture);
    glBindTexture(GL_TEXTURE_2D, *texture);

    gluBuild2DMipmaps(GL_TEXTURE_2D,
                      pBitMap->channels,
                      pBitMap->size_x,
                      pBitMap->size_y,
                      textureType,
                      GL_UNSIGNED_BYTE,
                      pBitMap->data);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
}
This makes the text look like this (it should be white):
(Screenshot: the text is barely visible.)
If I change the GL_TEXTURE_MIN_FILTER to basically ignore mipmaps (using GL_LINEAR, for example), the text looks fine: white, as intended.
I've tried different filter options and also tried using glGenerateMipmap() instead of gluBuild2DMipmaps(), but I always end up with the same result.
What's wrong with the code?
So, I've been reading about this, and I still haven't come to a conclusion. Some examples use textures as their render targets, some people use renderbuffers, and some use both!
For example, using just textures:
// Create the gbuffer textures
glGenTextures(ARRAY_SIZE_IN_ELEMENTS(m_textures), m_textures);
glGenTextures(1, &m_depthTexture);

for (unsigned int i = 0 ; i < ARRAY_SIZE_IN_ELEMENTS(m_textures) ; i++) {
    glBindTexture(GL_TEXTURE_2D, m_textures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, WindowWidth, WindowHeight, 0, GL_RGB, GL_FLOAT, NULL);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, m_textures[i], 0);
}
both:
glGenRenderbuffersEXT ( 1, &m_diffuseRT );
glBindRenderbufferEXT ( GL_RENDERBUFFER_EXT, m_diffuseRT );
glRenderbufferStorageEXT ( GL_RENDERBUFFER_EXT, GL_RGBA, m_width, m_height );
glFramebufferRenderbufferEXT ( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, m_diffuseRT );
glGenTextures ( 1, &m_diffuseTexture );
glBindTexture ( GL_TEXTURE_2D, m_diffuseTexture );
glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGBA, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
// Attach the texture to the FBO
glFramebufferTexture2DEXT ( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, m_diffuseTexture, 0 );
What's the difference? What's the point of creating a texture, a renderbuffer, and then assigning one to the other? Once you successfully supply a texture with an image, its memory is allocated, so why would you need to bind it to a renderbuffer?
Why would one use textures or renderbuffers? What would be the advantages?
I've read that you cannot read from a renderbuffer, only from a texture. What's the use of it, then?
EDIT:
So, my current code for a GBuffer is this:
enum class GBufferTextureType
{
    Depth = 0,
    Position,
    Diffuse,
    Normal,
    TexCoord
};
...
glGenFramebuffers ( 1, &OpenGLID );
if ( Graphics::GraphicsBackend->CheckError() == false )
{
    Delete();
    return false;
}

glBindFramebuffer ( GL_FRAMEBUFFER, OpenGLID );
if ( Graphics::GraphicsBackend->CheckError() == false )
{
    Delete();
    return false;
}

uint32_t TextureGLIDs[5];
glGenTextures ( 5, TextureGLIDs );
if ( Graphics::GraphicsBackend->CheckError() == false )
{
    Delete();
    return false;
}

// Create the depth texture
glBindTexture ( GL_TEXTURE_2D, TextureGLIDs[ ( int ) GBufferTextureType::Depth] );
glTexImage2D ( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, In_Dimensions.x, In_Dimensions.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL );
glFramebufferTexture2D ( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, TextureGLIDs[ ( int ) GBufferTextureType::Depth], 0 );

// Create the color textures
for ( unsigned cont = 1; cont < 5; ++cont )
{
    glBindTexture ( GL_TEXTURE_2D, TextureGLIDs[cont] );
    glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGB32F, In_Dimensions.x, In_Dimensions.y, 0, GL_RGB, GL_FLOAT, NULL );
    glFramebufferTexture2D ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + cont, GL_TEXTURE_2D, TextureGLIDs[cont], 0 );
}

// Specify draw buffers
GLenum DrawBuffers[4];
for ( unsigned cont = 0; cont < 4; ++cont )
    DrawBuffers[cont] = GL_COLOR_ATTACHMENT0 + cont;
glDrawBuffers ( 4, DrawBuffers );
if ( Graphics::GraphicsBackend->CheckError() == false )
{
    Delete();
    return false;
}

GLenum Status = glCheckFramebufferStatus ( GL_FRAMEBUFFER );
if ( Status != GL_FRAMEBUFFER_COMPLETE )
{
    Delete();
    return false;
}

Dimensions = In_Dimensions;

// Unbind
glBindFramebuffer ( GL_FRAMEBUFFER, 0 );
Is this the way to go?
I still have to write the corresponding shaders...
What's the point of creating a texture, a renderbuffer, and then assigning one to the other?
That's not what's happening. But that's OK, because that second code example is errant nonsense. The glFramebufferTexture2DEXT call overrides the binding made by glFramebufferRenderbufferEXT; the renderbuffer is never actually used after it is created.
If you found that code online somewhere, I strongly advise you to disregard anything that source told you about OpenGL development. Though I would advise that anyway, since it's using the "EXT" extension functions in 2016, almost a decade since core FBOs became available.
I've read that you cannot read from a renderbuffer, only from a texture. What's the use of it, then?
That is entirely the point of them: you use a renderbuffer for images that you don't want to read from. That's not useful for deferred rendering, since you really do want to read from them.
But imagine if you're generating a reflection image of a scene, which you will later use as a texture in your main scene. Well, to render the reflection scene, you need a depth buffer. But you're not going to read from that depth buffer (not as a texture, at any rate); you need a depth buffer for depth testing. But the only image you're going to read from after is the color image.
So you would make the depth buffer a renderbuffer. That tells the implementation that the image can be put into whatever storage is most efficient for use as a depth buffer, without having to worry about read-back performance. This may or may not have a performance impact. But at the very least, it won't be any slower than using a texture.
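As a sketch of that arrangement (assuming some width and height variables, and using core GL 3.0+ names rather than the EXT ones):

GLuint fbo, colorTex, depthRb;

// Framebuffer with a sampleable color texture...
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

// ...and a depth renderbuffer that exists only for depth testing.
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);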
Most rendering scenarios need a depth and/or stencil buffer, though it is rare that you would ever need to sample the data stored in the stencil buffer from a shader.
It would be impossible to do depth/stencil tests if your framebuffer did not have a location to store these data and any render pass that uses these fragment tests requires a framebuffer with the appropriate images attached.
If you are not going to use the depth/stencil buffer data in a shader, a renderbuffer will happily satisfy storage requirements for fixed-function fragment tests. Renderbuffers have fewer format restrictions than textures do, particularly if we detour this discussion to multisampling.
D3D10 introduced support for multisampled color textures but omitted multisampled depth textures; D3D10.1 later fixed that problem and GL3.0 was finalized after D3D10's initial design oversight was corrected.
Pre-GL3 / D3D10.1 design would manifest itself in GL as a multisampled framebuffer object that allows either texture or renderbuffer color attachments but forces you to use a renderbuffer for the depth attachment.
Renderbuffers are ultimately the lowest common denominator for storage; they will get you through tough jams on feature-limited hardware. You can actually blit the data stored in a renderbuffer into a texture in some situations where you could not draw directly into the texture.
To that end, you can resolve a multisampled renderbuffer into a single-sampled texture by blitting from one framebuffer to another. This is implicit multisampling, and it (would) allow you to use the anti-aliased results of a previous render pass with a standard texture lookup. Unfortunately it is thoroughly useless for anti-aliasing in deferred shading--you need explicit multisample resolve for that.
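Such a resolve blit could look like this (a sketch; msaaFbo and resolveFbo are assumed to be complete framebuffers of the same size):

// Resolve the multisampled image into the single-sampled texture FBO.
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
glBlitFramebuffer(0, 0, width, height,   // source rectangle
                  0, 0, width, height,   // destination rectangle (must match)
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);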
Nonetheless, it is incorrect to say that a renderbuffer is not readable; it is, in every sense of the word. But since your goal is deferred shading, reading it would require additional GL commands to copy the data into a texture first.
In our own 3D application I'm loading multiple textures using the DevIL library. When loading a texture, I call ilutRenderer( ILUT_OPENGL );, which in turn performs the following calls:
ILboolean ilutGLInit()
{
    // Use PROXY_TEXTURE_2D with glTexImage2D() to test more accurately...
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, (GLint*)&MaxTexW);
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, (GLint*)&MaxTexH);
    if (MaxTexW == 0 || MaxTexH == 0)
        MaxTexW = MaxTexH = 256;  // Trying this because of the VooDoo series of cards...

    // Should we really be setting all this ourselves? Seems too much like a glu(t) approach...
    glEnable(GL_TEXTURE_2D);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
    glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_FALSE);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);

#ifdef _MSC_VER
    if (IsExtensionSupported("GL_ARB_texture_compression") &&
        IsExtensionSupported("GL_EXT_texture_compression_s3tc")) {
        ilGLCompressed2D = (ILGLCOMPRESSEDTEXIMAGE2DARBPROC)
            wglGetProcAddress("glCompressedTexImage2DARB");
    }
#endif

    if (IsExtensionSupported("GL_ARB_texture_cube_map"))
        HasCubemapHardware = IL_TRUE;

    return IL_TRUE;
}
ilutRenderer( ILUT_OPENGL ); needs to be called only once (for each newly created window), but while experimenting I called the same function multiple times, once for each loaded texture.
When the function was called multiple times, the loaded OpenGL textures looked poorer in quality than when it was called once. (I have multiple textures, and most of them showed the poorer quality; I'm not sure about the first image.)
I was stumped by this, since from my perspective that call doesn't do anything special. Why can't it tolerate multiple similar calls?
Well, I started filtering out which functions can be called multiple times and which cannot, and concluded that it was glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); that triggered the odd behavior. So I wrapped it like this:
if( !bInitDone )
{
    bInitDone = TRUE;
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
}
After that, ilutRenderer( ILUT_OPENGL ) can be called as many times as needed.
I have also tried to centralize the OpenGL initialization like this:
...
InitGL( OnWindowReady )
...

void OnWindowReady( void )
{
    ilInit();
    ilutRenderer( ILUT_OPENGL );
}

... maybe some rendering code ...

void LoadModel( const wchar_t* file )
{
    ... load texture 1 ...
    ... load texture 2 ...
}
But the textures still appear corrupted. Maybe the window needs to be rendered at least once before textures are loaded, but I want to know which OpenGL functions toggle this corrupted-looking vs. nice-looking texture state.
I have "NVidia Quadro K2100M", driver version 375.86.
Is this display driver bug ?
How do you typically report bugs to NVidia ?
I am creating an OpenGL texture like this:
glGenTextures( 1, &boardTex );
glBindTexture( GL_TEXTURE_2D, boardTex );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
I get a handle in boardTex, so I assume the texture has been successfully created. I want to share this texture with CUDA, so I register and map the resource:
cudaGLSetGLDevice(0);
cudaGraphicsGLRegisterImage( &boardImage, boardTex, GL_TEXTURE_2D, cudaGraphicsMapFlagsNone );
cudaGraphicsMapResources( 1, &boardImage, 0 );
Then I try to get the mapped pointer like this:
float4* mappedPointer;
size_t mappedSize;
cudaGraphicsResourceGetMappedPointer( (void**)&mappedPointer, &mappedSize, boardImage );
Unfortunately this call returns an error and refuses to work. I made sure the texture wasn't bound in the OpenGL context, just in case. Still not working. cudaGetErrorString yields "unknown error", so I'm pretty stuck here. I'd appreciate any ideas.
Okay, I found this out myself:
cudaGraphicsSubResourceGetMappedArray (&array, resource, 0, 0); returns a cudaArray to work with. I have yet to wrap my mind around how cudaArrays work (and I might end up using PBOs) but at least it's not crashing.
Edit:
From the CUDA Reference Guide entry for cudaGraphicsResourceGetMappedPointer():
If resource is not a buffer then it cannot be accessed via a pointer
and cudaErrorUnknown is returned.
From the CUDA Reference Guide entry for cudaGraphicsSubResourceGetMappedArray():
If resource is not a texture then it cannot be accessed via an array
and cudaErrorUnknown is returned.
In other words, use GetMappedPointer for mapped buffer objects. Use GetMappedArray for mapped texture objects.
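Putting that together, a sketch of the array path (reusing the names from the question; error checking omitted):

// Map the registered GL texture and fetch it as a cudaArray.
cudaArray_t boardArray;
cudaGraphicsMapResources(1, &boardImage, 0);
cudaGraphicsSubResourceGetMappedArray(&boardArray, boardImage, 0, 0);

// The array can now back a texture/surface object, or be copied from, e.g.:
// cudaMemcpy2DFromArray(dst, dstPitch, boardArray, 0, 0,
//                       width * sizeof(float4), height,
//                       cudaMemcpyDeviceToDevice);

cudaGraphicsUnmapResources(1, &boardImage, 0);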
I am making a texture in my environment that excludes all white pixels. I read in a PPM file, and the fourth (alpha) value is set to 0 whenever the pixel is white. Everything seems to be in order; I have set up my view correctly and so forth. The texture image is visible with my current code, but the image as a whole is not fully opaque: it is highly see-through. Is this a problem with how I am setting up GL_BLEND? Why is the entire texture not opaque, when only the white pixels should be excluded?
The first three values are read in as RGB values; the fourth is not in the file, but is chosen based on the sum of the three RGB values. The texture is not loaded on every render; it lives in a display list, so this is only done once.
glPushMatrix();

FILE *inFile3;
char dump3[3];
int max3, k3 = 0;

inFile3 = fopen("tree.ppm", "r");

int x3;
int y3;

fscanf(inFile3, "%s", dump3);
fscanf(inFile3, "%d %d", &x3, &y3);
fscanf(inFile3, "%d", &max3);

int arraySize3 = y3*(4*x3);
int pixel3, rgb = 0;
GLubyte data3[arraySize3];

for (int i = 0; i < x3; i++) {
    for (int j = 0; j < y3; j++) {
        fscanf(inFile3, "%d", &pixel3);
        data3[k3++] = pixel3;
        rgb += pixel3;
        fscanf(inFile3, "%d", &pixel3);
        data3[k3++] = pixel3;
        rgb += pixel3;
        fscanf(inFile3, "%d", &pixel3);
        data3[k3++] = pixel3;
        rgb += pixel3;
        data3[k3++] = ((rgb) > 760) ? 0 : 255;
        rgb = 0;
    }
}
fclose(inFile3);

glGenTextures(1, &texture3);
glBindTexture(GL_TEXTURE_2D, texture3);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);
gluBuild2DMipmaps(GL_TEXTURE_2D, 4, x3, y3, GL_RGBA, GL_UNSIGNED_BYTE, data3);

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable( GL_TEXTURE_2D );
glBindTexture( GL_TEXTURE_2D, texture3 );
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );

glRotatef(180, 0, 0, 0);
glTranslatef(0, -19, 0);

glBegin(GL_QUADS);
    glTexCoord2d(0,0); glVertex3f(30,0,10);
    glTexCoord2d(0,1); glVertex3f(30,20,10);
    glTexCoord2d(1,1); glVertex3f(30,20,-10);
    glTexCoord2d(1,0); glVertex3f(30,0,-10);
glEnd();

glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
glPopMatrix();
If you move the viewpoint really close to the tree, does it become opaque? Or does it if you disable mipmapping?
EDIT:
When you move the eye closer to the tree, the level 0 mipmap (your original image) is used. To select a mipmap level, OpenGL computes which level provides the best match between the size of its texels and the size of a pixel.
To disable mipmap generation, upload your texture with glTexImage2D instead of gluBuild2DMipmaps.
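For example (a sketch reusing the names from the question's code):

// Upload only the base level; no mipmap chain is built.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, x3, y3, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, data3);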
To disable usage of mipmaps, change the following line
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);
to
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
You could also do as Jim Buck suggests and use
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
to force the highest mipmap level used to 0. Since the lowest level defaults to 0, this effectively disables mipmapping. By setting GL_TEXTURE_MAX_LEVEL and GL_TEXTURE_BASE_LEVEL to the same value, you will be able to see the content of that specific mipmap level.
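For instance, to inspect a single level on its own (level 2 here, chosen arbitrarily):

// Clamp sampling to one mipmap level so its content becomes visible.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 2);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 2);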
All this is to confirm whether it's a problem with the mipmaps.
In addition to what @bernie pointed out about GLubyte arrays and including glBegin/glEnd, the problem was in:
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
GL_MODULATE needed to be replaced with GL_REPLACE. This fixed the issue. Thanks for your help, guys.