Texture binding isn't working / C++ / OpenGL [duplicate]

This question already has answers here:
What are the usual troubleshooting steps for OpenGL textures not showing? (6 answers)
OpenGL object in C++ RAII class no longer works (2 answers)
Closed 5 years ago.
I'm trying to create a Texture class for my project which initializes and loads a texture from an image. The texture loads fine, but whenever I fetch the texture ID from outside the class by calling the GetTexture() function, glIsTexture() no longer treats the returned value (the texture ID) as a texture. And the face I want to texture stays blank.
Also, I tried to bind the texture with glBindTexture() directly from the Texture class itself, through the function Texture::SetActive(), but it still doesn't work.
And finally, when I return the texture ID directly from the function, the texture displays correctly.
Is there something I'm missing here? I don't really know what to look for at this point.
Thanks in advance for your help!
Here's my Texture class:
// Constructor
Texture::Texture(std::string const& texPath) {
    SDL_Surface *texture = nullptr, *newFormatTexture = nullptr, *flippedTexture = nullptr;
    SDL_PixelFormat tmpFormat;
    Uint32 amask, rmask, gmask, bmask;

#if SDL_BYTEORDER == SDL_BIG_ENDIAN
    rmask = 0xFF000000;
    gmask = 0x00FF0000;
    bmask = 0x0000FF00;
    amask = 0x000000FF;
#else
    rmask = 0x000000FF;
    gmask = 0x0000FF00;
    bmask = 0x00FF0000;
    amask = 0xFF000000;
#endif
    if ((texture = IMG_Load(texPath.c_str())) == nullptr) {
        std::cerr << "[ERROR] : Could not load texture " << texPath << ". Skipping..." << std::endl;
    }
    tmpFormat = *(texture->format);
    tmpFormat.BitsPerPixel = 32;
    tmpFormat.BytesPerPixel = 4;
    tmpFormat.Rmask = rmask;
    tmpFormat.Gmask = gmask;
    tmpFormat.Bmask = bmask;
    tmpFormat.Amask = amask;
    if ((newFormatTexture = SDL_ConvertSurface(texture, &tmpFormat, SDL_SWSURFACE)) == nullptr) {
        std::cerr << "[ERROR] : Couldn't convert surface to given format." << std::endl;
    }
    if ((flippedTexture = this->FlipSurface(newFormatTexture)) == nullptr) {
        std::cerr << "[ERROR] : Couldn't flip surface." << std::endl;
    }
    glGenTextures(1, &(this->_textureID));
    glBindTexture(GL_TEXTURE_2D, this->_textureID);
    glTexImage2D(GL_TEXTURE_2D, 0, 4, flippedTexture->w, flippedTexture->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, flippedTexture->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glBindTexture(GL_TEXTURE_2D, 0);
    SDL_FreeSurface(flippedTexture);
    SDL_FreeSurface(newFormatTexture);
    SDL_FreeSurface(texture);
}

Texture::Texture(unsigned char *texData, int width, int height) {
    glGenTextures(1, &(this->_textureID));
    glBindTexture(GL_TEXTURE_2D, this->_textureID);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, texData);
    glBindTexture(GL_TEXTURE_2D, 0);
}

Texture::~Texture() {
    glDeleteTextures(1, &(this->_textureID));
}

Texture Texture::CreateTexture(std::string const& texPath) {
    Texture tex(texPath);
    return (tex);
}

Texture Texture::CreateTexture(unsigned char *texData, int width, int height) {
    Texture tex(texData, width, height);
    return (tex);
}

unsigned int Texture::GetTexture() const {
    return (this->_textureID);
}

void Texture::SetActive() {
    glBindTexture(GL_TEXTURE_2D, this->_textureID);
}
The main function where I load and use my texture:
int WinMain(void) {
    Window window("Hello", 640, 480);
    double angleX, angleZ;
    Texture tex;
    int height;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(70, (double)640/480, 1, 1000);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_DEPTH_TEST);
    tex = Texture::CreateTexture("caisse.jpg");
    while (!window.Quit()) {
        Input::Update();
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(3,4,2,0,0,0,0,0,1);
        tex.SetActive();
        glBegin(GL_QUADS);
        glTexCoord2d(0, 1);
        glVertex3d(1, 1, 1);
        glTexCoord2d(0, 0);
        glVertex3d(1, 1, -1);
        glTexCoord2d(1, 0);
        glVertex3d(-1, 1, -1);
        glTexCoord2d(1, 1);
        glVertex3d(-1, 1, 1);
        glEnd();
        glFlush();
        window.RefreshDisplay();
    }
    return (0);
}
EDIT
I solved my problem.
As described in this topic: What are the usual troubleshooting steps for OpenGL textures not showing?, the initialisation of the texture must not be done in the constructor.
Thanks for the help :)

OK, let's look at this:
Texture Texture::CreateTexture(std::string const& texPath) {
    Texture tex(texPath);
    return (tex);
}
I'm going to assume that this is a static function. So it creates a Texture object on the stack. And tex contains an OpenGL texture object. The function then returns this object.
By the rules of C++, the lifetime of tex is limited to the scope in which it is created. Namely, Texture::CreateTexture. Which means that, at the end of this function, tex will be destroyed by having its destructor invoked.
But since you returned tex, before that happens, tex will be used to initialize the return value of the function. That return value happens to be an object of type Texture, so the compiler will invoke Texture's copy constructor to initialize the return value.
So, right before tex is destroyed, there are two Texture objects: tex itself and the return value of type Texture that was copied from tex. So far, so good.
Now, tex is destroyed. Texture::~Texture calls glDeleteTextures on the texture object contained within it. That destroys the texture created in the constructor. Fine.
So... what happens now? Well, let's back up to the creation of the return value from CreateTexture. I said that it would invoke the copy constructor of Texture to construct it, passing tex as the object to copy from.
You did not post your complete code, but given the nature of the other code you've written, I'd bet that you didn't write a copy constructor for Texture. That's fine, because the compiler will make one for you.
Only that's not fine. Why? Because right before tex gets destroyed, there are two Texture objects. And both of them store the same OpenGL texture object name. How did that happen?
Because you copied the texture object from tex into the return value. That's what the compiler-generated copy constructor does: it copies everything in the class.
So when tex is destroyed, it is destroying the OpenGL texture it just returned.
Texture should not be a copyable class. It should be move-only, just like many resource-containing classes in C++.
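A minimal sketch of what that could look like, reusing the _textureID member from the question (the copy operations are deleted; the moves steal the texture name and leave 0 behind, which is never a valid texture name):

class Texture {
public:
    explicit Texture(std::string const& texPath); // creates the GL texture as before

    ~Texture() {
        glDeleteTextures(1, &_textureID); // deleting name 0 is silently ignored
    }

    Texture(Texture const&) = delete;            // no copies
    Texture& operator=(Texture const&) = delete;

    Texture(Texture&& other) noexcept : _textureID(other._textureID) {
        other._textureID = 0;                    // the moved-from object no longer owns it
    }

    Texture& operator=(Texture&& other) noexcept {
        if (this != &other) {
            glDeleteTextures(1, &_textureID);    // release whatever we held
            _textureID = other._textureID;
            other._textureID = 0;
        }
        return *this;
    }

private:
    unsigned int _textureID = 0;
};

With this in place, return (tex); from CreateTexture moves the texture name out instead of copying it, and only the final owner's destructor deletes the GL object.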

Related

How to sample a pixel from a framebuffer texture in OpenGL?

I am creating a color picker OpenGL application for images with ImGui. I have managed to load an image by uploading it into a texture with glTexImage2D and displaying it using ImGui::Image().
Now I would like to implement a method that determines the color of the pixel under the cursor on a left mouse click.
Here is the method where I load the texture and then attach it to a framebuffer:
bool LoadTextureFromFile(const char *filename, GLuint *out_texture, int *out_width, int *out_height, ImVec2 mousePosition) {
    // Reading the image into a GL_TEXTURE_2D
    int image_width = 0;
    int image_height = 0;
    unsigned char *image_data = stbi_load(filename, &image_width, &image_height, NULL, 4);
    if (image_data == NULL)
        return false;
    GLuint image_texture;
    glGenTextures(1, &image_texture);
    glBindTexture(GL_TEXTURE_2D, image_texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image_width, image_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, image_data);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    stbi_image_free(image_data);
    glBindTexture(GL_TEXTURE_2D, 0);
    *out_texture = image_texture;
    *out_width = image_width;
    *out_height = image_height;

    // Assigning texture to Frame Buffer
    unsigned int fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, image_texture, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
        std::cout << "Frame buffer is done." << std::endl;
    }
    return true;
}
Unfortunately, the above code results in a completely blank screen. I guess there is something I missed when setting up the framebuffer.
Here is the method, where I would like to sample the framebuffer texture by using the mouse coordinates:
void readPixelFromImage(ImVec2 mousePosition) {
    unsigned char pixels[4];
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(GLint(mousePosition.x), GLint(mousePosition.y), 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    std::cout << "r: " << static_cast<int>(pixels[0]) << '\n';
    std::cout << "g: " << static_cast<int>(pixels[1]) << '\n';
    std::cout << "b: " << static_cast<int>(pixels[2]) << '\n';
    std::cout << "a: " << static_cast<int>(pixels[3]) << '\n' << std::endl;
}
Any help is appreciated!
There is indeed something missing in your code:
You set up a new framebuffer that contains just a single texture attachment. That is fine, and it is why glCheckFramebufferStatus reports GL_FRAMEBUFFER_COMPLETE. But while your own framebuffer is bound, nothing you render reaches the screen: if you want your image rendered on screen, it has to end up in the default framebuffer, the one created together with your GL context.
However, the documentation says: "Default framebuffers cannot change their buffer attachments, [...]" (https://www.khronos.org/opengl/wiki/Framebuffer). So attaching a texture or renderbuffer to the default FB is not possible. You could, however, generate a new FB as you did, render to it, and finally render the outcome (or blit the buffers) to your default FB; a sketch of such a blit follows. A good starting point for this technique is https://learnopengl.com/Advanced-Lighting/Deferred-Shading
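The blit could look roughly like this (a sketch; fbo, image_width and image_height are the names used in the question's code):

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);  // source: the FBO holding the image texture
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);    // destination: the default framebuffer
glBlitFramebuffer(0, 0, image_width, image_height,  // source rectangle
                  0, 0, image_width, image_height,  // destination rectangle
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_FRAMEBUFFER, 0);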
Moreover, if you just intend to read rendered values back from your GPU, it is more efficient to use a renderbuffer instead of a texture. You can even have multiple renderbuffers attached to your framebuffer (as in deferred shading). For example, you could use a second renderbuffer to hold an object/instance ID (so that renderbuffer is single-channel integer) while the first one is used for normal drawing. Reading the second renderbuffer with glReadPixels tells you directly which instance was drawn at, e.g., the mouse position. This makes mouse picking very efficient.
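As for the read-back in readPixelFromImage, the FBO has to be bound for reading at that point, and window mouse coordinates usually need their y axis flipped, because glReadPixels counts from the bottom-left corner. A sketch, assuming fbo and an image_height variable are accessible there:

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);  // read from the FBO, not the default framebuffer
glReadBuffer(GL_COLOR_ATTACHMENT0);
unsigned char pixel[4];
glReadPixels(GLint(mousePosition.x),
             GLint(image_height - mousePosition.y - 1), // flip y to OpenGL's bottom-left origin
             1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);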

imageLoad in GLSL always returns 0 in a compute shader, OpenGL 4.3 [closed]

I know that there is another question with exactly the same title here; however, the solution provided there does not work in my case.
I am trying to access pixel values from my compute shader, but the imageLoad function always returns 0.
Here is how I load the image:
void setTexture(GLuint texture_input, const char *fname)
{
    // set texture related
    int width, height, nbChannels;
    unsigned char *data = stbi_load(fname, &width, &height, &nbChannels, 0);
    if (data)
    {
        GLenum format;
        if (nbChannels == 1)
        {
            format = GL_RED;
        }
        else if (nbChannels == 3)
        {
            format = GL_RGB;
        }
        else if (nbChannels == 4)
        {
            format = GL_RGBA;
        }
        glActiveTexture(GL_TEXTURE0 + 1);
        gerr();
        glBindTexture(GL_TEXTURE_2D, texture_input);
        gerr();
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        gerr();
        glTexImage2D(GL_TEXTURE_2D,            // target
                     0,                        // level, 0 means base level
                     format,                   // internal format of image specifies color components
                     width, height,            // what it says
                     0,                        // border, should be 0 at all times
                     format, GL_UNSIGNED_BYTE, // data type of pixel data
                     data);
        gerr();
        glBindImageTexture(1,             // unit
                           texture_input, // texture id
                           0,             // level
                           GL_FALSE,      // is layered
                           0,             // layer no
                           GL_READ_ONLY,  // access type
                           GL_RGBA32F);
        // end texture handling
        gerr();
        glBindTexture(GL_TEXTURE_2D, 0); // unbind
    }
    else
    {
        std::cout << "Failed to load texture" << std::endl;
    }
    stbi_image_free(data);
}
And here is the relevant declaration and calling code in the shader:
layout(rgba32f, location = 1, binding = 1) readonly uniform image2D in_image;

struct ImageTexture
{
    int width;
    int height;
};

vec3 imageValue(ImageTexture im, float u, float v, in vec3 p)
{
    u = clamp(u, 0.0, 1.0);
    v = 1 - clamp(v, 0.0, 1.0);
    int i = int(u * im.width);
    int j = int(v * im.height);
    if (i >= im.width)
        i = im.width - 1;
    if (j >= im.height)
        j = im.height - 1;
    vec3 color = imageLoad(in_image, ivec2(i, j)).xyz;
    if (color == vec3(0))
        color = vec3(0, u, v); // debug
    return color;
}
I am seeing a green gradient instead of the contents of the image, which means my debugging code is in effect.
Either the internal format of the texture does not match the format specified in glBindImageTexture, or the format argument is not a valid enumerator constant at the point where the two-dimensional texture image is specified, because format is used twice: once for the internal format and once for the pixel format (see glTexImage2D):
glTexImage2D(GL_TEXTURE_2D,            // target
             0,                        // level, 0 means base level
             format,                   // internal format of image specifies color components
             width, height,            // what it says
             0,                        // border, should be 0 at all times
             format, GL_UNSIGNED_BYTE, // data type of pixel data
             data);
The format argument to glBindImageTexture is GL_RGBA32F:
glBindImageTexture(1,             // unit
                   texture_input, // texture id
                   0,             // level
                   GL_FALSE,      // is layered
                   0,             // layer no
                   GL_READ_ONLY,  // access type
                   GL_RGBA32F);
Hence, the internal format has to be GL_RGBA32F. A possible format is GL_RGBA:
glTexImage2D(GL_TEXTURE_2D,             // target
             0,                         // level, 0 means base level
             GL_RGBA32F,                // internal format of image specifies color components
             width, height,             // what it says
             0,                         // border, should be 0 at all times
             GL_RGBA, GL_UNSIGNED_BYTE, // data type of pixel data
             data);

OpenGL renders texture all white

I'm attempting to render a .png image as a texture. However, all that is being rendered is a white square.
I give my texture a unique int ID called texID, read the pixel data into a buffer 'image' (declared in the .h file), load my pixel buffer, do all of my OpenGL stuff and bind that pixel buffer to a texture for OpenGL. I then draw it all using glDrawElements.
Also, I initialize the texture with a size of 32x32 when its constructor is called, so I doubt it is related to a power-of-two size issue.
Can anybody see any mistakes in my OpenGL GL_TEXTURE_2D setup that might give me a blank white square?
#include "Texture.h"
Texture::Texture(int width, int height, string filename)
{
    const char* fnPtr = filename.c_str(); //our image loader accepts a ptr to a char, not a string
    printf(fnPtr);
    w = width; //give our texture a width and height, the reason that we need to pass in the width and height values manually
    h = height; //UPDATE, these MUST be P.O.T.
    unsigned error = lodepng::decode(image, w, h, fnPtr); //lodepng's decode function will load the pixel data into image vector
    //display any errors with the texture
    if(error)
    {
        cout << "\ndecoder error " << error << ": " << lodepng_error_text(error) << endl;
    }
    for(int i = 0; i < image.size(); i++)
    {
        printf("%i,", image.at(i));
    }
    printf("\nImage size is %i", image.size());
    //image now contains our pixeldata. All ready for OpenGL to do its thing
    //let's get this texture up in the video memory
    texGLInit();
}

void Texture::texGLInit()
{
    //WHERE YOU LEFT OFF: glGenTextures isn't assigning an ID to textures. it stays at zero the whole time
    //i believe this is why it's been rendering white
    glGenTextures(1, &textures);
    printf("\ntexture = %u", textures);
    glBindTexture(GL_TEXTURE_2D, textures); //everything we're about to do is about this texture
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    //glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    //glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    //glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    //glDisable(GL_COLOR_MATERIAL);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, &image);
    //we COULD free the image vectors memory right about now.
}
void Texture::draw(point centerPoint, point dimensions)
{
    glEnable(GL_TEXTURE_2D);
    printf("\nDrawing block at (%f, %f)", centerPoint.x, centerPoint.y);
    glBindTexture(GL_TEXTURE_2D, textures); //bind the texture
    //create a quick vertex array for the primitive we're going to bind the texture to
    printf("TexID = %u", textures);
    GLfloat vArray[8] =
    {
        centerPoint.x-(dimensions.x/2), centerPoint.y-(dimensions.y/2), //bottom left i0
        centerPoint.x-(dimensions.x/2), centerPoint.y+(dimensions.y/2), //top left i1
        centerPoint.x+(dimensions.x/2), centerPoint.y+(dimensions.y/2), //top right i2
        centerPoint.x+(dimensions.x/2), centerPoint.y-(dimensions.y/2)  //bottom right i3
    };
    //create a quick texture array (we COULD create this on the heap rather than creating/destroying every cycle)
    GLfloat tArray[8] =
    {
        0.0f,0.0f, //0
        0.0f,1.0f, //1
        1.0f,1.0f, //2
        1.0f,0.0f  //3
    };
    //and finally.. the index array...remember, we draw in triangles....(and we'll go CW)
    GLubyte iArray[6] =
    {
        0,1,2,
        0,2,3
    };
    //Activate arrays
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    //Give openGL a pointer to our vArray and tArray
    glVertexPointer(2, GL_FLOAT, 0, &vArray[0]);
    glTexCoordPointer(2, GL_FLOAT, 0, &tArray[0]);
    //Draw it all
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, &iArray[0]);
    //glDrawArrays(GL_TRIANGLES,0,6);
    //Disable the vertex arrays
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisable(GL_TEXTURE_2D);
    //done!
    /*glBegin(GL_QUADS);
    glTexCoord2f(0.0f,0.0f);
    glVertex2f(centerPoint.x-(dimensions.x/2), centerPoint.y-(dimensions.y/2));
    glTexCoord2f(0.0f,1.0f);
    glVertex2f(centerPoint.x-(dimensions.x/2), centerPoint.y+(dimensions.y/2));
    glTexCoord2f(1.0f,1.0f);
    glVertex2f(centerPoint.x+(dimensions.x/2), centerPoint.y+(dimensions.y/2));
    glTexCoord2f(1.0f,0.0f);
    glVertex2f(centerPoint.x+(dimensions.x/2), centerPoint.y-(dimensions.y/2));
    glEnd();*/
}
Texture::Texture(void)
{
}
Texture::~Texture(void)
{
}
I'll also include the main class' init, where I do a bit more OGL setup before this.
void init(void)
{
    printf("\n......Hello Guy. \n....\nInitilising");
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0, XSize, 0, YSize);
    glEnable(GL_TEXTURE_2D);
    myBlock = new Block(0, 0, offset);
    glClearColor(0, 0.4, 0.7, 1);
    glLineWidth(2); // Width of the drawing line
    glMatrixMode(GL_MODELVIEW);
    glDisable(GL_DEPTH_TEST);
    printf("\nInitialisation Complete");
}
Update: adding in the main function where I first set up my OpenGL window.
int main(int argc, char** argv)
{
    glutInit(&argc, argv);                      // GLUT Initialization
    glutInitDisplayMode(GLUT_RGBA|GLUT_DOUBLE); // Initializing the Display mode
    glutInitWindowSize(800, 600);               // Define the window size
    glutCreateWindow("Gem Miners");             // Create the window, with caption.
    printf("\n========== McLeanTech Systems =========\nBecoming Sentient\n...\n...\n....\nKILL\nHUMAN\nRACE \n");
    init();                                     // All OpenGL initialization
    //-- Callback functions ---------------------
    glutDisplayFunc(display);
    glutKeyboardFunc(mykey);
    glutSpecialFunc(processSpecialKeys);
    glutSpecialUpFunc(processSpecialUpKeys);
    //glutMouseFunc(mymouse);
    glutMainLoop();                             // Loop waiting for event
}
Here's the usual checklist for whenever textures come out white (a minimal example follows the list):
OpenGL context created and bound to the current thread when attempting to load the texture?
Allocated a texture ID using glGenTextures?
Are the parameters format and internal format to glTex[Sub]Image… valid OpenGL tokens allowed as input for this function?
Is mipmapping being used?
YES: Supply all mipmap layers – optimally set glTexParameteri GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL, as well as GL_TEXTURE_MIN_LOD and GL_TEXTURE_MAX_LOD.
NO: Turn off mipmap filtering by setting glTexParameteri GL_TEXTURE_MIN_FILTER to GL_NEAREST or GL_LINEAR.
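Put together, a minimal setup that satisfies every point on the list could look like this (a sketch for the non-mipmapped case; width, height and pixels stand in for your actual image data):

GLuint tex = 0;
glGenTextures(1, &tex);                 // needs a current GL context on this thread
glBindTexture(GL_TEXTURE_2D, tex);
// no mipmap layers are supplied, so the min filter must not be a mipmapping filter
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// internal format / format / type are all valid tokens for glTexImage2D
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);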

Problems with GL_LUMINANCE and ATI

I'm trying to use luminance textures on my ATI graphics card.
The problem: I'm not able to correctly retrieve data from my GPU. Whenever I try to read it back (using glReadPixels), all it gives me is an 'all-ones' array (1.0, 1.0, 1.0, ...).
You can test it with this code:
#include <stdio.h>
#include <stdlib.h>
#include <GL/glew.h>
#include <GL/glut.h>
static int arraySize = 64;
static int textureSize = 8;
//static GLenum textureTarget = GL_TEXTURE_2D;
//static GLenum textureFormat = GL_RGBA;
//static GLenum textureInternalFormat = GL_RGBA_FLOAT32_ATI;
static GLenum textureTarget = GL_TEXTURE_RECTANGLE_ARB;
static GLenum textureFormat = GL_LUMINANCE;
static GLenum textureInternalFormat = GL_LUMINANCE_FLOAT32_ATI;
int main(int argc, char** argv)
{
    // create test data and fill arbitrarily
    float* data = new float[arraySize];
    float* result = new float[arraySize];
    for (int i = 0; i < arraySize; i++)
    {
        data[i] = i + 1.0;
    }
    // set up glut to get valid GL context and
    // get extension entry points
    glutInit (&argc, argv);
    glutCreateWindow("TEST1");
    glewInit();
    // viewport transform for 1:1 pixel=texel=data mapping
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, textureSize, 0.0, textureSize);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glViewport(0, 0, textureSize, textureSize);
    // create FBO and bind it (that is, use offscreen render target)
    GLuint fboId;
    glGenFramebuffersEXT(1, &fboId);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId);
    // create texture
    GLuint textureId;
    glGenTextures (1, &textureId);
    glBindTexture(textureTarget, textureId);
    // set texture parameters
    glTexParameteri(textureTarget, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(textureTarget, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(textureTarget, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(textureTarget, GL_TEXTURE_WRAP_T, GL_CLAMP);
    // define texture with floating point format
    glTexImage2D(textureTarget, 0, textureInternalFormat, textureSize, textureSize, 0, textureFormat, GL_FLOAT, 0);
    // attach texture
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, textureTarget, textureId, 0);
    // transfer data to texture
    //glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
    //glRasterPos2i(0, 0);
    //glDrawPixels(textureSize, textureSize, textureFormat, GL_FLOAT, data);
    glBindTexture(textureTarget, textureId);
    glTexSubImage2D(textureTarget, 0, 0, 0, textureSize, textureSize, textureFormat, GL_FLOAT, data);
    // and read back
    glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
    glReadPixels(0, 0, textureSize, textureSize, textureFormat, GL_FLOAT, result);
    // print out results
    printf("**********************\n");
    printf("Data before roundtrip:\n");
    printf("**********************\n");
    for (int i = 0; i < arraySize; i++)
    {
        printf("%f, ", data[i]);
    }
    printf("\n\n\n");
    printf("**********************\n");
    printf("Data after roundtrip:\n");
    printf("**********************\n");
    for (int i = 0; i < arraySize; i++)
    {
        printf("%f, ", result[i]);
    }
    printf("\n");
    // clean up
    delete[] data;
    delete[] result;
    glDeleteFramebuffersEXT (1, &fboId);
    glDeleteTextures (1, &textureId);
    system("pause");
    return 0;
}
I also read somewhere on the internet that ATI cards don't support luminance yet. Does anyone know if this is true?
This has nothing to do with luminance values; the problem is with you reading floating point values.
In order to read floating-point data back properly via glReadPixels, you first need to set the color clamping mode. Since you're obviously not using OpenGL 3.0+, you should be looking at the ARB_color_buffer_float extension. In that extension is glClampColorARB, which works pretty much like the core 3.0 version.
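Applied to the code above, that would be roughly (a sketch; the core 3.0 variant is glClampColor with the unsuffixed tokens):

// disable clamping of floating-point values on read-back
glClampColorARB(GL_CLAMP_READ_COLOR_ARB, GL_FALSE);
glReadPixels(0, 0, textureSize, textureSize, textureFormat, GL_FLOAT, result);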
Here's what I found out:
1) If you use GL_LUMINANCE as the texture format (and GL_LUMINANCE_FLOAT32_ATI, GL_LUMINANCE32F_ARB or GL_RGBA_FLOAT32_ATI as the internal format), glClampColor(..) (or glClampColorARB(..)) doesn't seem to work at all.
I was only able to see the values getting actively clamped/not clamped if I set the texture format to GL_RGBA. I don't understand why this happens, since the only glClampColor(..) limitation I have heard of is that it works exclusively with floating-point buffers, which all the chosen internal formats seem to be.
2) If you use GL_LUMINANCE (again, with GL_LUMINANCE_FLOAT32_ATI, GL_LUMINANCE32F_ARB or GL_RGBA_FLOAT32_ATI as the internal format), it looks like you must "correct" your output buffer by dividing each of its elements by 3. I guess this happens because glTexImage2D(..) with GL_LUMINANCE internally replicates each array component three times, and when you read GL_LUMINANCE values back with glReadPixels(..) each value is computed as the sum of the RGB components (thus, three times what you gave as input). But again, it still gives you clamped values.
3) Finally, if you use GL_RED as texture format (instead of GL_LUMINANCE), you don't need to pack your input buffer and you get your output buffer properly. The values are not clamped and you don't need to call glClampColor(..) at all.
So, I guess I'll stick with GL_RED, because in the end what I wanted was an easy way to send and collect floating-point values from my "kernels" without having to worry about offsetting array indexes or anything like this.
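For reference, the GL_RED round trip described in point 3 could look like this (a sketch; GL_R32F assumes GL 3.0 or ARB_texture_rg is available, and the variable names are the ones from the question's code):

// one float per texel, no channel replication
glTexImage2D(textureTarget, 0, GL_R32F, textureSize, textureSize, 0,
             GL_RED, GL_FLOAT, 0);
// upload the data
glTexSubImage2D(textureTarget, 0, 0, 0, textureSize, textureSize,
                GL_RED, GL_FLOAT, data);
// and read it back, unclamped and unscaled
glReadPixels(0, 0, textureSize, textureSize, GL_RED, GL_FLOAT, result);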

Rendering SDL_TTF text onto OpenGL: red square instead of text

I've been attempting to render text onto an OpenGL window using SDL and the SDL_TTF library, on Windows XP with VS2010.
Versions:
SDL version 1.2.14
SDL TTF devel 1.2.10
openGL (version is at least 2-3 years old).
I have successfully created an OpenGL window using SDL / SDL_image and can render lines / polygons onto it with no problems.
However, moving on to text, it appears that there is some flaw in my current program; I am getting the following result when trying this code here.
For those not willing to visit pastebin, here are only the crucial code segments:
void drawText(char * text) {
    glLoadIdentity();
    SDL_Color clrFg = {0,0,255,0}; // set colour to blue (or 'red' for BGRA)
    SDL_Surface *sText = TTF_RenderUTF8_Blended( fntCourier, text, clrFg );
    GLuint * texture = create_texture(sText);
    glBindTexture(GL_TEXTURE_2D, *texture);
    // draw a polygon and map the texture to it, may be the source of error
    glBegin(GL_QUADS); {
        glTexCoord2i(0, 0); glVertex3f(0, 0, 0);
        glTexCoord2i(1, 0); glVertex3f(0 + sText->w, 0, 0);
        glTexCoord2i(1, 1); glVertex3f(0 + sText->w, 0 + sText->h, 0);
        glTexCoord2i(0, 1); glVertex3f(0, 0 + sText->h, 0);
    } glEnd();
    // free the surface and texture, removing this code has no effect
    SDL_FreeSurface( sText );
    glDeleteTextures( 1, texture );
}
segment 2:
// create GLTexture out of SDL_Surface
GLuint * create_texture(SDL_Surface *surface) {
    GLuint texture = 0;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    // The SDL_Surface appears to have BGR_A formatting, however this ends up with a
    // white rectangle no matter which colour i set in the previous code.
    int Mode = GL_RGB;
    if(surface->format->BytesPerPixel == 4) {
        Mode = GL_RGBA;
    }
    glTexImage2D(GL_TEXTURE_2D, 0, Mode, surface->w, surface->h, 0, Mode,
                 GL_UNSIGNED_BYTE, surface->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return &texture;
}
Is there an obvious bit of code I am missing?
Thank you for any help on this subject.
I've been trying to learn openGL and SDL for 3 days now, so please forgive any misinformation on my part.
EDIT:
I notice that using
TTF_RenderUTF8_Shaded
TTF_RenderUTF8_Solid
throws a null pointer exception, meaning that there is (I suspect) an error within the actual text rendering function. I do not know how this relates to TTF_RenderUTF8_Blended returning a red square, but I suspect all troubles hinge on this.
I think the problem is in the glEnable(GL_TEXTURE_2D) and glDisable(GL_TEXTURE_2D) functions, which must be called every time the text is painted on the screen. And maybe the color conversion between the SDL and GL surfaces is not right either.
I have combined create_texture and drawText into a single function that displays the text properly. That's the code:
void drawText(char * text, TTF_Font* tmpfont) {
    SDL_Rect area;
    SDL_Color clrFg = {0,0,255,0};
    SDL_Surface *sText = SDL_DisplayFormatAlpha(TTF_RenderUTF8_Blended( tmpfont, text, clrFg ));
    area.x = 0; area.y = 0; area.w = sText->w; area.h = sText->h;
    SDL_Surface* temp = SDL_CreateRGBSurface(SDL_HWSURFACE|SDL_SRCALPHA, sText->w, sText->h, 32,
                                             0x000000ff, 0x0000ff00, 0x00ff0000, 0x000000ff);
    SDL_BlitSurface(sText, &area, temp, NULL);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, sText->w, sText->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, temp->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS); {
        glTexCoord2d(0, 0); glVertex3f(0, 0, 0);
        glTexCoord2d(1, 0); glVertex3f(0 + sText->w, 0, 0);
        glTexCoord2d(1, 1); glVertex3f(0 + sText->w, 0 + sText->h, 0);
        glTexCoord2d(0, 1); glVertex3f(0, 0 + sText->h, 0);
    } glEnd();
    glDisable(GL_TEXTURE_2D);
    SDL_FreeSurface( sText );
    SDL_FreeSurface( temp );
}
screenshot
I'm initializing OpenGL as follows:
int Init(){
    glClearColor( 0.1, 0.2, 0.2, 1);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho( 0, 600, 300, 0, -1, 1 );
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    if( glGetError() != GL_NO_ERROR ){
        return false;
    }
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_COLOR, GL_ONE_MINUS_SRC_ALPHA);
}
I think you should just add glEnable(GL_BLEND), because the code for the text surface says TTF_RenderUTF8_Blended( fntCourier, text, clrFg ) and you have to enable the blending abilities of OpenGL.
EDIT
Okay, I finally took the time to put your code through a compiler. Most importantly, I compiled with -Werror, so that warnings turn into errors:
GLuint * create_texture(SDL_Surface *surface) {
    GLuint texture = 0;
    /*...*/
    return &texture;
}
I didn't see it at first, because that's something like C coder's 101 and is quite unexpected: you must not return pointers to local variables! Once the function goes out of scope, the returned pointer points to nonsense. Why do you return a pointer at all? Just return an integer:
GLuint create_texture(SDL_Surface *surface) {
    GLuint texture = 0;
    /*...*/
    return texture;
}
Because of this you're also not going to delete the texture afterwards. You upload it to OpenGL, but then lose the reference to it.
Your code misses a glEnable(GL_TEXTURE_2D); that's why you can't see any effect of the texture. However, your use of textures is suboptimal. The way you did it, you recreate a whole new texture each time you're about to draw that text. If that happens in an animation loop, you'll
(1) run out of texture memory rather soon, and
(2) slow it down significantly.
(1) can be addressed by not generating a new texture name on each redraw.
(2) can be addressed by uploading new texture data only when the text changes, and by not using glTexImage2D but glTexSubImage2D (of course, if the dimensions of the texture change, it must be glTexImage2D); a sketch follows.
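A sketch of that reuse pattern (tex is a texture name that lives as long as the text object; the surface is assumed to keep the same dimensions between updates):

// once, when the text object is created:
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, surface->w, surface->h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, surface->pixels);

// whenever the text changes (same dimensions, otherwise glTexImage2D again):
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, surface->w, surface->h,
                GL_RGBA, GL_UNSIGNED_BYTE, surface->pixels);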
EDIT, found another possible issue, but first fix your pointer issue.
You should make sure that you're using the GL_REPLACE or GL_MODULATE texture environment mode. If you use GL_DECAL or GL_BLEND, you end up with red text on a red quad.
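For example (a sketch):

glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); // or GL_REPLACE
glColor4f(1.0f, 1.0f, 1.0f, 1.0f); // with GL_MODULATE, a white base color leaves the texel colors intact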
The function in my previous post was leaking memory, and the program was crashing after some time...
I improved this by separating the texture loading and displaying:
The first function must be called before the SDL loop. It loads a text string into memory:
Every string loaded must have a different txtNum parameter.
GLuint texture[100];
SDL_Rect area[100];

void Load_string(char * text, SDL_Color clr, int txtNum, const char* file, int ptsize){
    TTF_Font* tmpfont;
    tmpfont = TTF_OpenFont(file, ptsize);
    SDL_Surface *sText = SDL_DisplayFormatAlpha(TTF_RenderUTF8_Solid( tmpfont, text, clr ));
    area[txtNum].x = 0; area[txtNum].y = 0; area[txtNum].w = sText->w; area[txtNum].h = sText->h;
    glGenTextures(1, &texture[txtNum]);
    glBindTexture(GL_TEXTURE_2D, texture[txtNum]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, sText->w, sText->h, 0, GL_BGRA, GL_UNSIGNED_BYTE, sText->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    SDL_FreeSurface( sText );
    TTF_CloseFont(tmpfont);
}
The second one displays the string; it must be called in the SDL loop:
void drawText(float coords[3], int txtNum) {
    glBindTexture(GL_TEXTURE_2D, texture[txtNum]);
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS); {
        glTexCoord2f(0, 0); glVertex3f(coords[0], coords[1], coords[2]);
        glTexCoord2f(1, 0); glVertex3f(coords[0] + area[txtNum].w, coords[1], coords[2]);
        glTexCoord2f(1, 1); glVertex3f(coords[0] + area[txtNum].w, coords[1] + area[txtNum].h, coords[2]);
        glTexCoord2f(0, 1); glVertex3f(coords[0], coords[1] + area[txtNum].h, coords[2]);
    } glEnd();
    glDisable(GL_TEXTURE_2D);
}