OpenGL renders texture all white - C++

I'm attempting to render a .png image as a texture, but all that is being rendered is a white square.
I give my texture a unique int ID called texID and read the pixel data into a buffer 'image' (declared in the .h file). I load my pixel buffer, do all of my OpenGL setup, and bind that pixel buffer to a texture for OpenGL. I then draw it all using glDrawElements.
Also, I initialize the texture with a size of 32x32 when its constructor is called, so I doubt it is related to a power-of-two size issue.
Can anybody see any mistakes in my OpenGL GL_TEXTURE_2D setup that might give me a blank white square?
#include "Texture.h"
Texture::Texture(int width, int height, string filename)
{
const char* fnPtr = filename.c_str(); //our image loader accepts a ptr to a char, not a string
printf(fnPtr);
w = width; //give our texture a width and height, the reason that we need to pass in the width and height values manually
h = height;//UPDATE, these MUST be P.O.T.
unsigned error = lodepng::decode(image,w,h,fnPtr);//lodepng's decode function will load the pixel data into image vector
//display any errors with the texture
if(error)
{
cout << "\ndecoder error " << error << ": " << lodepng_error_text(error) <<endl;
}
for(int i = 0; i<image.size(); i++)
{
printf("%i,", image.at(i));
}
printf("\nImage size is %i", image.size());
//image now contains our pixeldata. All ready for OpenGL to do its thing
//let's get this texture up in the video memory
texGLInit();
}
void Texture::texGLInit()
{
//WHERE YOU LEFT OFF: glGenTextures isn't assigning an ID to textures. it stays at zero the whole time
//i believe this is why it's been rendering white
glGenTextures(1, &textures);
printf("\ntexture = %u", textures);
glBindTexture(GL_TEXTURE_2D, textures);//evrything we're about to do is about this texture
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
//glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
//glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
//glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
//glDisable(GL_COLOR_MATERIAL);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,w,h,0, GL_RGBA, GL_UNSIGNED_BYTE, &image);
//we COULD free the image vectors memory right about now.
}
void Texture::draw(point centerPoint, point dimensions)
{
glEnable(GL_TEXTURE_2D);
printf("\nDrawing block at (%f, %f)",centerPoint.x, centerPoint.y);
glBindTexture(GL_TEXTURE_2D, textures);//bind the texture
//create a quick vertex array for the primitive we're going to bind the texture to
printf("TexID = %u",textures);
GLfloat vArray[8] =
{
centerPoint.x-(dimensions.x/2), centerPoint.y-(dimensions.y/2),//bottom left i0
centerPoint.x-(dimensions.x/2), centerPoint.y+(dimensions.y/2),//top left i1
centerPoint.x+(dimensions.x/2), centerPoint.y+(dimensions.y/2),//top right i2
centerPoint.x+(dimensions.x/2), centerPoint.y-(dimensions.y/2)//bottom right i3
};
//create a quick texture array (we COULD create this on the heap rather than creating/destoying every cycle)
GLfloat tArray[8] =
{
0.0f,0.0f, //0
0.0f,1.0f, //1
1.0f,1.0f, //2
1.0f,0.0f //3
};
//and finally.. the index array...remember, we draw in triangles....(and we'll go CW)
GLubyte iArray[6] =
{
0,1,2,
0,2,3
};
//Activate arrays
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
//Give openGL a pointer to our vArray and tArray
glVertexPointer(2, GL_FLOAT, 0, &vArray[0]);
glTexCoordPointer(2, GL_FLOAT, 0, &tArray[0]);
//Draw it all
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, &iArray[0]);
//glDrawArrays(GL_TRIANGLES,0,6);
//Disable the vertex arrays
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);
//done!
/*glBegin(GL_QUADS);
glTexCoord2f(0.0f,0.0f);
glVertex2f(centerPoint.x-(dimensions.x/2), centerPoint.y-(dimensions.y/2));
glTexCoord2f(0.0f,1.0f);
glVertex2f(centerPoint.x-(dimensions.x/2), centerPoint.y+(dimensions.y/2));
glTexCoord2f(1.0f,1.0f);
glVertex2f(centerPoint.x+(dimensions.x/2), centerPoint.y+(dimensions.y/2));
glTexCoord2f(1.0f,0.0f);
glVertex2f(centerPoint.x+(dimensions.x/2), centerPoint.y-(dimensions.y/2));
glEnd();*/
}
Texture::Texture(void)
{
}
Texture::~Texture(void)
{
}
I'll also include the main class's init function, where I do a bit more OpenGL setup before this.
void init(void)
{
    printf("\n......Hello Guy. \n....\nInitilising");
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0,XSize,0,YSize);
    glEnable(GL_TEXTURE_2D);
    myBlock = new Block(0,0,offset);
    glClearColor(0,0.4,0.7,1);
    glLineWidth(2); // Width of the drawing line
    glMatrixMode(GL_MODELVIEW);
    glDisable(GL_DEPTH_TEST);
    printf("\nInitialisation Complete");
}
Update: adding in the main function where I first set up my OpenGL window.
int main(int argc, char** argv)
{
    glutInit(&argc, argv); // GLUT Initialization
    glutInitDisplayMode(GLUT_RGBA|GLUT_DOUBLE); // Initializing the Display mode
    glutInitWindowSize(800,600); // Define the window size
    glutCreateWindow("Gem Miners"); // Create the window, with caption.
    printf("\n========== McLeanTech Systems =========\nBecoming Sentient\n...\n...\n....\nKILL\nHUMAN\nRACE \n");
    init(); // All OpenGL initialization

    //-- Callback functions ---------------------
    glutDisplayFunc(display);
    glutKeyboardFunc(mykey);
    glutSpecialFunc(processSpecialKeys);
    glutSpecialUpFunc(processSpecialUpKeys);
    //glutMouseFunc(mymouse);

    glutMainLoop(); // Loop waiting for event
}

Here's the usual checklist for whenever textures come out white (a minimal setup sketch follows the list):
Is an OpenGL context created and bound to the current thread when attempting to load the texture?
Has a texture ID been allocated using glGenTextures?
Are the format and internal format parameters passed to glTex[Sub]Image… valid OpenGL tokens allowed as input for this function?
Is mipmapping being used?
    YES: Supply all mipmap layers – optimally set glTexParameteri GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL, as well as GL_TEXTURE_MIN_LOD and GL_TEXTURE_MAX_LOD.
    NO: Turn off mipmap filtering by setting glTexParameteri GL_TEXTURE_MIN_FILTER to GL_NEAREST or GL_LINEAR.
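
As a point of reference, a minimal non-mipmapped setup that satisfies every item on that checklist might look like the sketch below. The names w, h and pixels are placeholders (not the asker's code) for the image dimensions and a pointer to tightly packed RGBA bytes, e.g. image.data() or &image[0] for a std::vector<unsigned char>:

GLuint tex = 0;
glGenTextures(1, &tex);                  // only meaningful once a GL context is current
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // RGBA rows are tightly packed
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // no mipmaps, so no *_MIPMAP_* filter
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);   // pixels = pointer to the raw bytes, not the address of a container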

Related

Texture binding isn't working / C++ / OpenGL [duplicate]

This question already has answers here:
What are the usual troubleshooting steps for OpenGL textures not showing?
OpenGL object in C++ RAII class no longer works
I'm trying to create a Texture class for my project which initializes and loads a texture from an image. The texture loads well, but whenever I get the texture ID from outside the class by calling the GetTexture() function, glIsTexture() no longer considers the returned value (the texture ID) to be a texture, and the face I want to texture stays blank.
I also tried to bind the texture with glBindTexture() directly from the Texture class itself, via the Texture::SetActive() function, but that doesn't work either.
And finally, when I return the texture ID directly from the function, the texture displays correctly.
Is there something I'm missing here? I don't really know what to look for at this point.
Thanks in advance for your help!
Here's my Texture class:
// Constructor
Texture::Texture(std::string const& texPath) {
    SDL_Surface *texture = nullptr, *newFormatTexture = nullptr, *flippedTexture = nullptr;
    SDL_PixelFormat tmpFormat;
    Uint32 amask, rmask, gmask, bmask;

#if SDL_BYTEORDER == SDL_BIG_ENDIAN
    rmask = 0xFF000000;
    gmask = 0x00FF0000;
    bmask = 0x0000FF00;
    amask = 0x000000FF;
#else
    rmask = 0x000000FF;
    gmask = 0x0000FF00;
    bmask = 0x00FF0000;
    amask = 0xFF000000;
#endif

    if ((texture = IMG_Load(texPath.c_str())) == nullptr) {
        std::cerr << "[ERROR] : Could not load texture " << texPath << ". Skipping..." << std::endl;
    }

    tmpFormat = *(texture->format);
    tmpFormat.BitsPerPixel = 32;
    tmpFormat.BytesPerPixel = 4;
    tmpFormat.Rmask = rmask;
    tmpFormat.Gmask = gmask;
    tmpFormat.Bmask = bmask;
    tmpFormat.Amask = amask;

    if ((newFormatTexture = SDL_ConvertSurface(texture, &tmpFormat, SDL_SWSURFACE)) == nullptr) {
        std::cerr << "[ERROR] : Couldn't convert surface to given format." << std::endl;
    }
    if ((flippedTexture = this->FlipSurface(newFormatTexture)) == nullptr) {
        std::cerr << "[ERROR] : Couldn't flip surface." << std::endl;
    }

    glGenTextures(1, &(this->_textureID));
    glBindTexture(GL_TEXTURE_2D, this->_textureID);
    glTexImage2D(GL_TEXTURE_2D, 0, 4, flippedTexture->w, flippedTexture->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, flippedTexture->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glBindTexture(GL_TEXTURE_2D, 0);

    SDL_FreeSurface(flippedTexture);
    SDL_FreeSurface(newFormatTexture);
    SDL_FreeSurface(texture);
}

Texture::Texture(unsigned char *texData, int width, int height) {
    glGenTextures(1, &(this->_textureID));
    glBindTexture(GL_TEXTURE_2D, this->_textureID);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, texData);
    glBindTexture(GL_TEXTURE_2D, 0);
}

Texture::~Texture() {
    glDeleteTextures(1, &(this->_textureID));
}

Texture Texture::CreateTexture(std::string const& texPath) {
    Texture tex(texPath);
    return (tex);
}

Texture Texture::CreateTexture(unsigned char *texData, int width, int height) {
    Texture tex(texData, width, height);
    return (tex);
}

unsigned int Texture::GetTexture() const {
    return (this->_textureID);
}

void Texture::SetActive() {
    glBindTexture(GL_TEXTURE_2D, this->_textureID);
}
The main class where I load and use my texture:
int WinMain(void) {
    Window window("Hello", 640, 480);
    double angleX, angleZ;
    Texture tex;
    int height;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(70, (double)640/480, 1, 1000);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_DEPTH_TEST);

    tex = Texture::CreateTexture("caisse.jpg");

    while (!window.Quit()) {
        Input::Update();
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(3,4,2,0,0,0,0,0,1);

        tex.SetActive();

        glBegin(GL_QUADS);
        glTexCoord2d(0, 1);
        glVertex3d(1, 1, 1);
        glTexCoord2d(0, 0);
        glVertex3d(1, 1, -1);
        glTexCoord2d(1, 0);
        glVertex3d(-1, 1, -1);
        glTexCoord2d(1, 1);
        glVertex3d(-1, 1, 1);
        glEnd();

        glFlush();
        window.RefreshDisplay();
    }
    return (0);
}
EDIT
I solved my problem.
As described in this topic: What are the usual troubleshooting steps for OpenGL textures not showing?, the initialisation of the texture must not be done in the constructor.
Thanks for the help :)
OK, let's look at this:
Texture Texture::CreateTexture(std::string const& texPath) {
    Texture tex(texPath);
    return (tex);
}
I'm going to assume that this is a static function. So it creates a Texture object on the stack. And tex contains an OpenGL texture object. The function then returns this object.
By the rules of C++, the lifetime of tex is limited to the scope in which it is created. Namely, Texture::CreateTexture. Which means that, at the end of this function, tex will be destroyed by having its destructor invoked.
But since you returned tex, before that happens, tex will be used to initialize the return value of the function. That return value happens to be an object of type Texture, so the compiler will invoke Texture's copy constructor to initialize the return value.
So, right before tex is destroyed, there are two Texture objects: tex itself and the return value of type Texture that was copied from tex. So far, so good.
Now, tex is destroyed. Texture::~Texture calls glDeleteTextures on the texture object contained within it. That destroys the texture created in the constructor. Fine.
So... what happens now? Well, let's back up to the creation of the return value from CreateTexture. I said that it would invoke the copy constructor of Texture to construct it, passing tex as the object to copy from.
You did not post your complete code, but given the nature of the other code you've written, I'd bet that you didn't write a copy constructor for Texture. That's fine, because the compiler will make one for you.
Only that's not fine. Why? Because right before tex gets destroyed, there are two Texture objects. And both of them store the same OpenGL texture object name. How did that happen?
Because you copied the texture object from tex into the return value. That's what the compiler-generated copy constructor does: it copies everything in the class.
So when tex is destroyed, it is destroying the OpenGL texture it just returned.
Texture should not be a copyable class. It should be move-only, just like many resource-containing classes in C++.
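For reference, a minimal sketch of such a move-only wrapper, assuming the question's private GLuint member _textureID (illustrative code, not the asker's actual class):

class Texture {
public:
    explicit Texture(std::string const& texPath);   // loads and uploads as before

    Texture(Texture const&) = delete;                // copying would give two owners of one GL name
    Texture& operator=(Texture const&) = delete;

    Texture(Texture&& other) noexcept : _textureID(other._textureID) {
        other._textureID = 0;                        // moved-from object no longer owns anything
    }
    Texture& operator=(Texture&& other) noexcept {
        if (this != &other) {
            glDeleteTextures(1, &_textureID);        // release whatever we currently hold
            _textureID = other._textureID;
            other._textureID = 0;
        }
        return *this;
    }

    ~Texture() {
        glDeleteTextures(1, &_textureID);            // deleting texture name 0 is silently ignored
    }

private:
    GLuint _textureID = 0;
};

With that in place, CreateTexture can still return by value: the return value is move-constructed (or the move is elided entirely), so only one object ever owns the GL texture name.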

OpenGL/ImageMagick Failing to Render Texture (Sometimes)

I have been having a very odd problem when trying to use OpenGL's C++ API. I am trying to load in a texture using ImageMagick, and then display it as a simple 2D textured square. I have a decent amount of experience with using OpenGL in Java, so I understand how to render a texture and bind it to a primitive. However, each time I attempt to draw it, the program either fails to render, or it renders it as a (properly sized) white square. I'm not entirely sure what is going on, but I believe it has to do with ImageMagick.
I have been using Ubuntu's terminal for compiling, and I've learned just how painful it can be to have to install libraries manually. ImageMagick first refused to compile when used in my program, and when I finally got the program to compile, it would seg-fault each time it ran. I've finally got it "working", but now, whenever I attempt to load in the image, the program will run without rendering. I haven't found anything like this on Google.
http://imgur.com/C7yKwDK
The odd thing is that, very rarely, it will work correctly and render the square as expected. However, when I then try to rerun the program, it fails as shown above. I've determined that the line that causes it to fail to render is the same line where the image is loaded, which led me to believe that the image was just being loaded incorrectly, causing the program to fail. However, if I move the texture loading code before the creation of the GL window, the program will consistently render successfully, but the textured square appears only as white (though the size of the square is correct, so I know the image loading is working).
Anyway, sorry for the long post. I've just given up solving this one on my own, and was hoping one of you could help me out.
OpenGL Initialization Code:
Texture* tx;

void GraphicsOGL :: initialize3D(int argc, char* argv[]) {
    Magick::InitializeMagick(*argv);
    glutInit(&argc, argv);

    //Loading Here ALWAYS Causes White Square
    /*glEnable(GL_TEXTURE_2D);
    tx = new Texture("Resources/Images/test.png");
    tx->load();*/

    glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGBA);
    glutInitWindowSize(SCREEN_WIDTH, SCREEN_HEIGHT);
    glutInitWindowPosition(100, 100);
    glutCreateWindow("OpenGL Game");

    glViewport(0,0,SCREEN_WIDTH,SCREEN_HEIGHT);
    glOrtho(0,SCREEN_WIDTH,SCREEN_HEIGHT,0, -3,1000);

    glEnable(GL_DEPTH_TEST);
    glEnable(GL_ALPHA_TEST);
    glEnable(GL_TEXTURE_2D);

    //Loading Here SOMETIMES Works, But Typically Fails
    tx = new Texture("Resources/Images/test.png");
    tx->load();

    glutDisplayFunc(displayCallback);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glutMainLoop();
}
Texture Loading Code:
bool Texture::load() {
    try {
        m_image.read(m_fileName); //This Line Causes it to Fail to Render
        m_image.write(&m_blob, "RGBA");
    }
    catch (Magick::Error& Error) {
        std::cout << "Error loading texture '" << m_fileName << "': " << Error.what() << std::endl;
        return false;
    }

    width = m_image.columns();
    height = m_image.rows();

    glGenTextures(1, &m_textureObj);
    glBindTexture(m_textureTarget, m_textureObj);
    //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
    glTexParameterf(m_textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(m_textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    glTexImage2D(m_textureTarget, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, m_blob.data());
    //glBindTexture(m_textureTarget, 0);

    return true;
}
Texture Drawing Code:
void GraphicsOGL :: drawTexture(float x, float y, Texture* tex) {
    glEnable(GL_TEXTURE_2D);
    tex->bind();

    float depth = 0, w, h;
    w = tex->getWidth();
    h = tex->getHeight();

    glBegin(GL_QUADS);
        glVertex3f(x, y+h, depth);   glTexCoord2f(1,0);
        glVertex3f(x+w, y+h, depth); glTexCoord2f(1,1);
        glVertex3f(x+w, y, depth);   glTexCoord2f(0,1);
        glVertex3f(x, y, depth);     glTexCoord2f(0,0);
    glEnd();
}

OpenGL Texturing, no error but grey

I'm trying to colour terrain points based on a texture colour (currently hard-coded to vec2(0.5, 0.5) for test purposes, which should be light blue), but all the points are grey. glGetError returns 0 throughout the whole process. I think I might be doing the render process wrong, or have a problem with my shaders(?)
Vertex Shader:
void main(){
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Fragment Shader:
uniform sampler2D myTextureSampler;

void main(){
    gl_FragColor = texture2D(myTextureSampler, vec2(0.5, 0.5));
}
Terrain Class:
class Terrain
{
public:
    Terrain(GLuint pProgram, char* pHeightmap, char* pTexture){
        if(!LoadTerrain(pHeightmap))
        {
            OutputDebugString("Loading terrain failed.\n");
        }
        if(!LoadTexture(pTexture))
        {
            OutputDebugString("Loading terrain texture failed.\n");
        }
        mProgram = pProgram;
        mTextureLocation = glGetUniformLocation(pProgram, "myTextureSampler");
    };

    ~Terrain(){};

    void Draw()
    {
        glEnableClientState(GL_VERTEX_ARRAY); // Uncommenting this causes me to see nothing at all
        glBindBuffer(GL_ARRAY_BUFFER, mVBO);
        glVertexPointer(3, GL_FLOAT, 0, 0);

        glEnable( GL_TEXTURE_2D );
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, mBMP);
        glProgramUniform1i(mProgram, mTextureLocation, 0);

        GLenum a = glGetError();

        glPointSize(5.0f);
        glDrawArrays(GL_POINTS, 0, mNumberPoints);

        a = glGetError();

        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glDisable( GL_TEXTURE_2D );
        glDisableClientState(GL_VERTEX_ARRAY);
    }

private:
    GLuint mVBO, mBMP, mUV, mTextureLocation, mProgram;
    int mWidth;
    int mHeight;
    int mNumberPoints;

    bool LoadTerrain(char* pFile)
    {
        /* Definitely no problem here - Vertex data is fine and rendering nice and dandy */
    }

    // TEXTURES MUST BE POWER OF TWO!!
    bool LoadTexture(char *pFile)
    {
        unsigned char header[54]; // Each BMP file begins by a 54-bytes header
        unsigned int dataPos;     // Position in the file where the actual data begins
        unsigned int width, height;
        unsigned int imageSize;
        unsigned char * data;

        FILE * file = fopen(pFile, "rb");
        if(!file)
            return false;

        if(fread(header, 1, 54, file) != 54)
        {
            fclose(file);
            return false;
        }
        if ( header[0]!='B' || header[1]!='M' )
        {
            fclose(file);
            return false;
        }

        // Read ints from the byte array
        dataPos   = *(int*)&(header[0x0A]);
        imageSize = *(int*)&(header[0x22]);
        width     = *(int*)&(header[0x12]);
        height    = *(int*)&(header[0x16]);

        // Some BMP files are misformatted, guess missing information
        if (imageSize==0) imageSize=width*height*3; // 3 : one byte for each Red, Green and Blue component
        if (dataPos==0) dataPos=54; // The BMP header is done that way

        // Create a buffer
        data = new unsigned char [imageSize];

        // Read the actual data from the file into the buffer
        fread(data,1,imageSize,file);

        //Everything is in memory now, the file can be closed
        fclose(file);

        // Create one OpenGL texture
        glGenTextures(1, &mBMP);

        // "Bind" the newly created texture : all future texture functions will modify this texture
        glBindTexture(GL_TEXTURE_2D, mBMP);

        // Give the image to OpenGL
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
        glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
        glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );

        delete [] data;
        data = 0;

        return true;
    }
};
Answering my own question in case anyone has a similar problem:
I had tested this with multiple images, but it turns out there's a bug in my graphics application of choice, which has been exporting 8-bit bitmaps even though I explicitly told it to export 24-bit bitmaps. So basically, reverting back to MS Paint solved my problem. 3 cheers for MS Paint.
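
For anyone hitting the same symptom: a cheap guard in LoadTexture would have caught this, since a BMP stores its bits-per-pixel in a 2-byte field at header offset 0x1C. A sketch (my addition, placed right after the other header fields are read):

unsigned short bitsPerPixel = *(unsigned short*)&(header[0x1C]);
if (bitsPerPixel != 24)   // the loader above assumes 3 bytes per pixel (GL_BGR)
{
    fclose(file);
    return false;         // e.g. an 8-bit palettized BMP would upload as garbage/grey
}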

glTexCoord2d has no effect

I'm trying to draw a simple texture in OpenGL. I made a simple Texture class:
class Texture{
public:
    unsigned int id;
    unsigned char image[256*256*3];
    int level;
    int border;
    int width;
    int height;

    Texture (int level =0, int border = 0) : level(level), border(border) {
        glGenTextures(1, &id);
        width = 256, height = 256;
        glTexImage2D(GL_TEXTURE_2D, level, GL_RGB, width, height, border, GL_RGB, GL_UNSIGNED_BYTE, &image[0]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
        for (int i= 0; i<width*height*3; i+=3){
            image[i]=1;//i%255;
            image[i+1] =1;// 255-i%255;
            image[i+2] =1;// i%128;
        }
    }

    void useIt(){
        glBindTexture( GL_TEXTURE_2D, id );
    }
};
It creates an unsigned char array and fills it with some data. I'm trying to use it this way:
glEnable(GL_TEXTURE_2D);
texture->useIt();

glBegin(GL_TRIANGLES);
    glNormal3d(0, 1, 0);

    glTexCoord2d(0.0,0.0);
    glVertex3f(width/-2.f,height/2.f,depth/2.f);
    glTexCoord2d(1.0,1.0);
    glVertex3f(width/2.f,height/2.f,depth/2.f);
    glTexCoord2d(1.0,0.0);
    glVertex3f(width/2.f,height/2.f,depth/-2.f);

    glTexCoord2d(0.0,0.0);
    glVertex3f(width/-2.f,height/2.f,depth/2.f);
    glTexCoord2d(1.0,1.0);
    glVertex3f(width/2.f,height/2.f,depth/-2.f);
    glTexCoord2d(0.0,1.0);
    glVertex3f(width/-2.f,height/2.f,depth/-2.f);
glEnd();

glDisable(GL_TEXTURE_2D);
It draws the plane, but without the texture (it draws with the previously used material). What am I doing wrong?
Three possible issues with your code. Brett Hale already told you that you need to bind a texture object before uploading data to it with glTexImage.
glTexImage creates a copy of the data you supply to it (this is different from the glVertex…Pointer functions, which only take a pointer or an offset into a buffer object). However, you're filling the image array with data after you've copied its contents to the texture. Also, you may safely delete the image array after copying the data to the texture.
Last but not least: those operations are in a constructor. If the texture class instance lives in a scope that's initialized before an OpenGL context has been created, nothing will happen at all, because there's no OpenGL context. So either make sure the texture object is created only after an OpenGL context is available, or put the texture creation and upload code into a separate method that you call once an OpenGL context is available.
glBindTexture is required in the Texture constructor, prior to the glTex* operations.
You might also require: glPixelStorei(GL_UNPACK_ALIGNMENT, 1) prior to glTexImage2D, since row memory addresses are not on 4-byte boundaries.
BTW, you need to set the image data before you 'upload' it via glTexImage2D. Right now, you are just creating the texture from uninitialized data. Furthermore, the loop that sets the RGB byte data is giving you values very close to black: all (1, 1, 1).
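
Putting both answers together, a corrected constructor body might look roughly like this; it's a sketch under the assumptions above (same member names as the question), and it still has to run only after an OpenGL context exists:

Texture (int level = 0, int border = 0) : level(level), border(border) {
    width = 256; height = 256;

    // 1. Fill the pixel data BEFORE uploading it, with visible 0..255 values
    //    ((1,1,1) is nearly black on an 8-bit scale).
    for (int i = 0; i < width*height*3; i += 3) {
        image[i]   = i % 255;
        image[i+1] = 255 - i % 255;
        image[i+2] = i % 128;
    }

    // 2. Generate AND bind the texture object before any glTex* call.
    glGenTextures(1, &id);
    glBindTexture(GL_TEXTURE_2D, id);

    // 3. Relax unpack alignment as suggested above (harmless for tightly packed RGB rows).
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

    // 4. Upload the now-initialized data; GL copies it at this point, so the
    //    array could even be freed afterwards.
    glTexImage2D(GL_TEXTURE_2D, level, GL_RGB, width, height, border,
                 GL_RGB, GL_UNSIGNED_BYTE, &image[0]);
}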

Problems with GL_LUMINANCE and ATI

I'm trying to use luminance textures on my ATI graphics card.
The problem: I'm not able to correctly retrieve data from my GPU. Whenever I try to read it back (using glReadPixels), all I get is an 'all-ones' array (1.0, 1.0, 1.0...).
You can test it with this code:
#include <stdio.h>
#include <stdlib.h>

#include <GL/glew.h>
#include <GL/glut.h>

static int arraySize = 64;
static int textureSize = 8;
//static GLenum textureTarget = GL_TEXTURE_2D;
//static GLenum textureFormat = GL_RGBA;
//static GLenum textureInternalFormat = GL_RGBA_FLOAT32_ATI;
static GLenum textureTarget = GL_TEXTURE_RECTANGLE_ARB;
static GLenum textureFormat = GL_LUMINANCE;
static GLenum textureInternalFormat = GL_LUMINANCE_FLOAT32_ATI;

int main(int argc, char** argv)
{
    // create test data and fill arbitrarily
    float* data = new float[arraySize];
    float* result = new float[arraySize];

    for (int i = 0; i < arraySize; i++)
    {
        data[i] = i + 1.0;
    }

    // set up glut to get valid GL context and
    // get extension entry points
    glutInit (&argc, argv);
    glutCreateWindow("TEST1");
    glewInit();

    // viewport transform for 1:1 pixel=texel=data mapping
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, textureSize, 0.0, textureSize);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glViewport(0, 0, textureSize, textureSize);

    // create FBO and bind it (that is, use offscreen render target)
    GLuint fboId;
    glGenFramebuffersEXT(1, &fboId);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId);

    // create texture
    GLuint textureId;
    glGenTextures (1, &textureId);
    glBindTexture(textureTarget, textureId);

    // set texture parameters
    glTexParameteri(textureTarget, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(textureTarget, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(textureTarget, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(textureTarget, GL_TEXTURE_WRAP_T, GL_CLAMP);

    // define texture with floating point format
    glTexImage2D(textureTarget, 0, textureInternalFormat, textureSize, textureSize, 0, textureFormat, GL_FLOAT, 0);

    // attach texture
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, textureTarget, textureId, 0);

    // transfer data to texture
    //glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
    //glRasterPos2i(0, 0);
    //glDrawPixels(textureSize, textureSize, textureFormat, GL_FLOAT, data);
    glBindTexture(textureTarget, textureId);
    glTexSubImage2D(textureTarget, 0, 0, 0, textureSize, textureSize, textureFormat, GL_FLOAT, data);

    // and read back
    glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
    glReadPixels(0, 0, textureSize, textureSize, textureFormat, GL_FLOAT, result);

    // print out results
    printf("**********************\n");
    printf("Data before roundtrip:\n");
    printf("**********************\n");

    for (int i = 0; i < arraySize; i++)
    {
        printf("%f, ", data[i]);
    }
    printf("\n\n\n");

    printf("**********************\n");
    printf("Data after roundtrip:\n");
    printf("**********************\n");

    for (int i = 0; i < arraySize; i++)
    {
        printf("%f, ", result[i]);
    }
    printf("\n");

    // clean up
    delete[] data;
    delete[] result;

    glDeleteFramebuffersEXT (1, &fboId);
    glDeleteTextures (1, &textureId);

    system("pause");

    return 0;
}
I also read somewhere on the internet that ATI cards don't support luminance yet. Does anyone know if this is true?
This has nothing to do with luminance values; the problem is with you reading floating point values.
In order to read floating-point data back properly via glReadPixels, you first need to set the color clamping mode. Since you're obviously not using OpenGL 3.0+, you should be looking at the ARB_color_buffer_float extension. In that extension is glClampColorARB, which works pretty much like the core 3.0 version.
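
For example, assuming GLEW is used to get the extension entry points (as in the test program above), disabling read-back clamping before glReadPixels would look roughly like this:

// requires ARB_color_buffer_float; check for the extension before relying on it
if (GLEW_ARB_color_buffer_float)
    glClampColorARB(GL_CLAMP_READ_COLOR_ARB, GL_FALSE);

glReadPixels(0, 0, textureSize, textureSize, textureFormat, GL_FLOAT, result);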
Here's what I found out:
1) If you use GL_LUMINANCE as the texture format (and GL_LUMINANCE_FLOAT32_ATI, GL_LUMINANCE32F_ARB, or GL_RGBA_FLOAT32_ATI as the internal format), glClampColor(..) (or glClampColorARB(..)) doesn't seem to work at all.
I was only able to see the values getting actively clamped/not clamped if I set the texture format to GL_RGBA. I don't understand why this happens, since the only glClampColor(..) limitation I've heard of is that it works exclusively with floating-point buffers, which all of the chosen internal formats seem to be.
2) If you use GL_LUMINANCE (again, with GL_LUMINANCE_FLOAT32_ATI, GL_LUMINANCE32F_ARB or GL_RGBA_FLOAT32_ATI as the internal format), it looks like you must "correct" your output buffer by dividing each of its elements by 3. I guess this happens because when you use glTexImage2D(..) with GL_LUMINANCE it internally replicates each array component three times, and when you read GL_LUMINANCE values back with glReadPixels(..) it computes each value from the sum of the RGB components (thus, three times what you gave as input). But again, it still gives you clamped values.
3) Finally, if you use GL_RED as the texture format (instead of GL_LUMINANCE), you don't need to pack your input buffer and you get your output buffer back properly. The values are not clamped and you don't need to call glClampColor(..) at all.
So, I guess I'll stick with GL_RED, because in the end what I wanted was an easy way to send and collect floating-point values from my "kernels" without having to worry about offsetting array indexes or anything like that.
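
For reference, a sketch of that final GL_RED setup; GL_R32F is an assumption on my part and needs GL 3.0+ or ARB_texture_rg, otherwise one of the vendor float internal formats above would be used instead:

// one float per texel, uploaded and read back with no per-channel packing
glTexImage2D(textureTarget, 0, GL_R32F, textureSize, textureSize, 0,
             GL_RED, GL_FLOAT, 0);
glTexSubImage2D(textureTarget, 0, 0, 0, textureSize, textureSize,
                GL_RED, GL_FLOAT, data);
// ...
glReadPixels(0, 0, textureSize, textureSize, GL_RED, GL_FLOAT, result);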