Problems with GL_LUMINANCE and ATI - opengl

I'm trying to use luminance textures on my ATI graphics card.
The problem: I'm not able to correctly retrieve data from my GPU. Whenever I try to read it (using glReadPixels), all it gives me is an 'all-ones' array (1.0, 1.0, 1.0, ...).
You can test it with this code:
#include <stdio.h>
#include <stdlib.h>
#include <GL/glew.h>
#include <GL/glut.h>
static int arraySize = 64;
static int textureSize = 8;
//static GLenum textureTarget = GL_TEXTURE_2D;
//static GLenum textureFormat = GL_RGBA;
//static GLenum textureInternalFormat = GL_RGBA_FLOAT32_ATI;
static GLenum textureTarget = GL_TEXTURE_RECTANGLE_ARB;
static GLenum textureFormat = GL_LUMINANCE;
static GLenum textureInternalFormat = GL_LUMINANCE_FLOAT32_ATI;
int main(int argc, char** argv)
{
// create test data and fill arbitrarily
float* data = new float[arraySize];
float* result = new float[arraySize];
for (int i = 0; i < arraySize; i++)
{
data[i] = i + 1.0;
}
// set up glut to get valid GL context and
// get extension entry points
glutInit (&argc, argv);
glutCreateWindow("TEST1");
glewInit();
// viewport transform for 1:1 pixel=texel=data mapping
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0.0, textureSize, 0.0, textureSize);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glViewport(0, 0, textureSize, textureSize);
// create FBO and bind it (that is, use offscreen render target)
GLuint fboId;
glGenFramebuffersEXT(1, &fboId);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId);
// create texture
GLuint textureId;
glGenTextures (1, &textureId);
glBindTexture(textureTarget, textureId);
// set texture parameters
glTexParameteri(textureTarget, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(textureTarget, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(textureTarget, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(textureTarget, GL_TEXTURE_WRAP_T, GL_CLAMP);
// define texture with floating point format
glTexImage2D(textureTarget, 0, textureInternalFormat, textureSize, textureSize, 0, textureFormat, GL_FLOAT, 0);
// attach texture
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, textureTarget, textureId, 0);
// transfer data to texture
//glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
//glRasterPos2i(0, 0);
//glDrawPixels(textureSize, textureSize, textureFormat, GL_FLOAT, data);
glBindTexture(textureTarget, textureId);
glTexSubImage2D(textureTarget, 0, 0, 0, textureSize, textureSize, textureFormat, GL_FLOAT, data);
// and read back
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
glReadPixels(0, 0, textureSize, textureSize, textureFormat, GL_FLOAT, result);
// print out results
printf("**********************\n");
printf("Data before roundtrip:\n");
printf("**********************\n");
for (int i = 0; i < arraySize; i++)
{
printf("%f, ", data[i]);
}
printf("\n\n\n");
printf("**********************\n");
printf("Data after roundtrip:\n");
printf("**********************\n");
for (int i = 0; i < arraySize; i++)
{
printf("%f, ", result[i]);
}
printf("\n");
// clean up
delete[] data;
delete[] result;
glDeleteFramebuffersEXT (1, &fboId);
glDeleteTextures (1, &textureId);
system("pause");
return 0;
}
I also read somewhere on the internet that ATI cards don't support luminance yet. Does anyone know if this is true?

This has nothing to do with luminance values; the problem is with you reading floating point values.
In order to read floating-point data back properly via glReadPixels, you first need to set the color clamping mode. Since you're obviously not using OpenGL 3.0+, you should be looking at the ARB_color_buffer_float extension. That extension provides glClampColorARB, which works pretty much like the core 3.0 version.
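A minimal sketch of that call, assuming GLEW exposes the ARB entry point (do this once after glewInit() and before the glReadPixels readback):
if (GLEW_ARB_color_buffer_float)
    glClampColorARB(GL_CLAMP_READ_COLOR_ARB, GL_FALSE); // don't clamp values returned by glReadPixels
// on an OpenGL 3.0+ context the equivalent core call would be:
// glClampColor(GL_CLAMP_READ_COLOR, GL_FALSE);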

Here's what I found out:
1) If you use GL_LUMINANCE as the texture format (and GL_LUMINANCE_FLOAT32_ATI, GL_LUMINANCE32F_ARB or GL_RGBA_FLOAT32_ATI as the internal format), glClampColor(..) (or glClampColorARB(..)) doesn't seem to work at all.
I was only able to see the values being actively clamped/not clamped if I set the texture format to GL_RGBA. I don't understand why this happens, since the only glClampColor(..) limitation I've heard of is that it works exclusively with floating-point buffers, which all of the chosen internal formats appear to be.
2) If you use GL_LUMINANCE (again with GL_LUMINANCE_FLOAT32_ATI, GL_LUMINANCE32F_ARB or GL_RGBA_FLOAT32_ATI as the internal format), it looks like you must "correct" your output buffer by dividing each of its elements by 3. I guess this happens because glTexImage2D(..) with GL_LUMINANCE internally replicates each array component three times, and when you read GL_LUMINANCE values back with glReadPixels(..) each value is computed as the sum of the RGB components (thus, three times what you gave as input). But again, it still gives you clamped values.
3) Finally, if you use GL_RED as the texture format (instead of GL_LUMINANCE), you don't need to pack your input buffer and you get your output buffer back properly. The values are not clamped and you don't need to call glClampColor(..) at all.
So I guess I'll stick with GL_RED, because in the end what I wanted was an easy way to send and collect floating-point values from my "kernels" without having to worry about offsetting array indices or anything like that.
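For reference, roughly all that changes relative to the code in the question is the pixel-transfer format (a sketch; the rest of the program stays the same):
static GLenum textureTarget = GL_TEXTURE_RECTANGLE_ARB;
static GLenum textureFormat = GL_RED;                           // instead of GL_LUMINANCE
static GLenum textureInternalFormat = GL_LUMINANCE_FLOAT32_ATI; // or GL_LUMINANCE32F_ARB / GL_RGBA_FLOAT32_ATI
// upload and readback then go through the same calls as before:
glTexSubImage2D(textureTarget, 0, 0, 0, textureSize, textureSize, textureFormat, GL_FLOAT, data);
glReadPixels(0, 0, textureSize, textureSize, textureFormat, GL_FLOAT, result);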

Related

OpenGL integer texture raising GL_INVALID_VALUE

I have an Nvidia GTX 970 with the latest (441.66) driver for Win 10 x64 (build 18362), which is obviously fully OpenGL 4.6 compliant, and I'm currently compiling an app with VS2017.
My problem is that I seem to be unable to use any texture type other than GL_UNSIGNED_BYTE. I'm currently trying to set up a single-channel, unsigned integer (32 bit) texture, but however I try to allocate the texture, OpenGL immediately raises the GL_INVALID_VALUE error and the shader's result turns all black.
So far I tried allocating immutably:
glTexStorage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048);
And mutably:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048, 0, GL_RED, GL_UNSIGNED_INT, textureData);
I tried signed int too, the same thing. I also checked with NSight VS edition, for UINT 2D textures, my max resolution is 16384x16384, so that's not the issue. Also, according to NSight, uint textures are fully supported by the OpenGL driver.
What am I missing here?
Minimal reproducible version:
#include <iostream>
#include <GL/glew.h>
#include <SDL.h>
void OGLErrorCheck()
{
const GLenum errorCode = glGetError();
if (errorCode != GL_NO_ERROR)
{
const GLubyte* const errorString = gluErrorString(errorCode);
std::cout << errorString;
}
}
int main(int argc, char* argv[])
{
glewInit();
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048, 0, GL_RED, GL_UNSIGNED_INT, nullptr);
OGLErrorCheck();
getchar();
return 0;
}
This yields GL_INVALID_OPERATION.
This is linked with the latest SDL and GLEW, both free software, available for download at https://www.libsdl.org/ and http://glew.sourceforge.net/ respectively.
From the specs about glTexStorage2D:
void glTexStorage2D(
GLenum target,
GLsizei levels,
GLenum internalformat,
GLsizei width,
GLsizei height
);
[…]
GL_INVALID_VALUE is generated if width, height or levels are less than 1
And the value for levels you pass to glTexStorage2D is 0.
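A single-level immutable allocation therefore needs at least one level, e.g.:
glTexStorage2D(GL_TEXTURE_2D, 1, GL_R32UI, 3072, 2048); // 1 mip level, not 0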
First of all, you have to create an OpenGL context, e.g.:
(See also Using OpenGL With SDL)
if (SDL_Init(SDL_INIT_VIDEO) < 0)
return 0;
SDL_Window *window = SDL_CreateWindow("ogl wnd", 0, 0, width, height, SDL_WINDOW_OPENGL);
if (window == nullptr)
return 0;
SDL_GLContext context = SDL_GL_CreateContext(window);
if (glewInit() != GLEW_OK)
return 0;
Then you have to generate a texture name with glGenTextures:
GLuint tobj;
glGenTextures(1, &tobj);
After that, you have to bind the named texture to a texturing target with glBindTexture:
glBindTexture(GL_TEXTURE_2D, tobj);
Finally, you can specify the two-dimensional texture image with glTexImage2D:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, nullptr);
Note, the texture format has to be GL_RED_INTEGER rather than GL_RED, because the source texture image has to be interpreted as integral data rather than normalized floating-point data. The format and type parameters specify the format of the source data. The internalformat parameter specifies the format of the target texture image.
Set the texture parameters with glTexParameteri (this can be done before glTexImage2D, too):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
If you do not generate mipmaps (with glGenerateMipmap), then setting GL_TEXTURE_MIN_FILTER is important. Since the default filter is GL_NEAREST_MIPMAP_LINEAR, the texture would be mipmap-incomplete if you do not change the minifying function to GL_NEAREST or GL_LINEAR.
As for the mutable allocation from the question:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048, 0, GL_RED, GL_UNSIGNED_INT, textureData);
For unnormalized integer texture formats, the format parameter of glTex[Sub]Image is not allowed to be just GL_RED; you have to use GL_RED_INTEGER. The format/type combination GL_RED, GL_UNSIGNED_INT is for specifying normalized fixed-point or floating-point formats only.
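Put together, the mutable allocation from the question would look something like this (textureData as in the question):
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048, 0,
             GL_RED_INTEGER, GL_UNSIGNED_INT, textureData); // integer transfer format to match GL_R32UI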

Rendering text- freetype blank screen

I am using FreeType, and the only thing I have left to do in order to render text is to convert an FT_Bitmap into something that can be rendered with OpenGL. Can someone explain how to do this? I am using GLFW. The way I have tried to do it just gives a blank screen. Here is the code that I am using:
#include <exception>
#include <iostream>
#include <string>
#include <glew.h>
#include <GL/glfw.h>
#include <iterator>
#include "../include/TextRenderer.h"
#include <ft2build.h>
#include FT_FREETYPE_H
#include <stdexcept>
#include <freetype/ftglyph.h>
using std::runtime_error;
using std::cout;
TextRenderer::TextRenderer(int x, int y, FT_Face Face, std::string s)
{
FT_Set_Char_Size(
Face, /* handle to face object */
0, /* char_width in 1/64th of points */
16*64, /* char_height in 1/64th of points */
0, /* horizontal device resolution */
0 ); /* vertical device resolution */
slot= Face->glyph;
text = s;
setsx(x);
setsy(y);
penX = x;
penY = y;
face = Face;
//shaders
GLuint v = glCreateShader(GL_VERTEX_SHADER) ;
const char* vs = "void main(){ gl_Position = ftransform();}";
glShaderSource(v,1,&vs,NULL);
glCompileShader(v);
GLuint f = glCreateShader(GL_FRAGMENT_SHADER) ;
const char* fs = "uniform sampler2D texture1; void main() { gl_FragColor = texture2D(texture1, gl_TexCoord[0].st); //And that is all we need}";
glShaderSource(f,1,&fs,NULL);
glCompileShader(f);
Program= glCreateProgram();
glAttachShader(Program,v);
glAttachShader(Program,f);
glLinkProgram(Program);
}
void TextRenderer::render()
{
glUseProgram(Program);
FT_UInt glyph_index;
for ( int n = 0; n < text.size(); n++ )
{
/* retrieve glyph index from character code */
glyph_index = FT_Get_Char_Index( face, text[n] );
/* load glyph image into the slot (erase previous one) */
error = FT_Load_Glyph( face, glyph_index, FT_LOAD_RENDER );
draw(&face->glyph->bitmap,penX + slot->bitmap_left,penY - slot->bitmap_top );
penX += *(&face->glyph->bitmap.width)+3;
penY += slot->advance.y >> 6; /* not useful for now */
}
}
void TextRenderer::draw(FT_Bitmap * bitmap,float x,float y)
{
GLuint texture [0] ;
glGenTextures(1,texture);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
glTexImage2D (GL_TEXTURE_2D, 0, GL_RED , bitmap->width, bitmap->rows, 0, GL_RED , GL_UNSIGNED_BYTE, bitmap);
// int loc = glGetUniformLocation(Program, "texture1");
// glUniform1i(loc, 0);
glBindTexture(GL_TEXTURE_2D, texture[0]);
glEnable(GL_TEXTURE_2D);
int height=bitmap->rows/10;
int width=bitmap->width/10;
glBegin(GL_QUADS);
glTexCoord2f (0.0, 0.0);
glVertex2f(x,y);
glTexCoord2f (1.0, 0.0);
glVertex2f(x+width,y);
glTexCoord2f (1.0, 1.0);
glVertex2f(x+width,y+height);
glTexCoord2f (0.0, 1.0);
glVertex2f(x,y+height);
glEnd();
glDisable(GL_TEXTURE_2D);
}
What I am using to initialize the text renderer:
FT_Library library;
FT_Face arial;
FT_Error error = FT_Init_FreeType( &library );
if ( error )
{
throw std::runtime_error("Freetype failed");
}
error = FT_New_Face( library,
"C:/Windows/Fonts/Arial.ttf",
0,
&arial );
if ( error == FT_Err_Unknown_File_Format )
{
throw std::runtime_error("font format not available");
}
else if ( error )
{
throw std::runtime_error("Freetype font failed");
}
TextRenderer t(5,10,arial,"Hello");
t.render();
There are a lot of problems in your program that result from not understanding what each call you make to OpenGL or FreeType does. You should really read the documentation for the libraries instead of stacking tutorials on top of each other.
Let's go through them one by one.
Fragment Shader
const char* fs = "uniform sampler2D texture1;
void main() {
gl_FragColor = texture2D(texture1, gl_TexCoord[0].st);
//And that is all we need}";
This shader doesn't compile (you should really check whether it compiles with glGetShaderiv and whether the program links with glGetProgramiv). If you indent it correctly, you'll see that you commented out the final } because it sits on the same line, after the //. So you should either remove the comment or end it with a \n.
Also, for newer versions of OpenGL using gl_TexCoord is deprecated but it works if you use a compatibility profile.
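A corrected version of the string could look like this (a sketch; the comment is terminated with \n so the closing brace is no longer commented out, and the compile status is actually checked on the shader object f):
const char* fs =
    "uniform sampler2D texture1;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(texture1, gl_TexCoord[0].st); // all we need\n"
    "}\n";
glShaderSource(f, 1, &fs, NULL);
glCompileShader(f);
GLint compiled = GL_FALSE;
glGetShaderiv(f, GL_COMPILE_STATUS, &compiled); // don't assume compilation worked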
Vertex Shader
Just like in the fragment shader, there's deprecated functionality used here, namely ftransform().
But the bigger problem is that you use gl_TexCoord[0] in the fragment shader without passing it through from the vertex shader. So you need to add the line gl_TexCoord[0] = gl_MultiTexCoord0; in your vertex shader. (As you might have guessed, that is also deprecated.)
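With that line added, the vertex shader string would become something like this (still compatibility-profile GLSL, as in the original):
const char* vs =
    "void main() {\n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;\n"
    "    gl_Position    = ftransform();\n"
    "}\n";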
Texture passing
You are passing bitmap itself to glTexImage2D, but bitmap is of type FT_Bitmap*; you need to pass the actual pixel data, bitmap->buffer, instead.
You should not generate a new texture for each letter every frame (especially not if you're not deleting it). You should call glGenTextures only once (you could put it in your TextRenderer constructor, since you put all the other initialization stuff there).
Then there's the GLuint texture [0]; which should give you a compiler error. If you really need an array with one element then the syntax is GLuint texture [1];
So your final call would look something like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, bitmap->width, bitmap->rows, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, bitmap->buffer);
Miscellaneous
int height=bitmap->rows/10;
int width=bitmap->width/10;
This is an integer division, and if your values for bitmap->width get smaller than 10 you get 0 as the result, which makes the quad you're trying to draw invisible (a height or width of 0). If you have trouble getting the objects into view, you should just translate/scale them into view. This is also deprecated, but if you keep using the other fixed-function stuff, the following gives your window a coordinate system from [-100,-100] to [100,100] (lower-left to upper-right):
glLoadIdentity();
glScalef(0.01f, 0.01f, 1.0f);
You're also missing the coordinate conversion from FreeType to OpenGL: FreeType uses a coordinate system that starts at [0,0] in the top-left corner, where x is the offset to the right and y is the offset toward the bottom. So if you just use these coordinates in OpenGL, everything will be upside down.
If you do all that your result should look something like this (grey background to highlight where the polygons begin and end):
As for your general approach, drawing letter by letter while re-using and overwriting a single texture is inefficient. It would be better to allocate one larger texture and then use glTexSubImage2D to write the glyphs into it, as in the sketch below. If re-rendering letters with FreeType is a bottleneck, you could also write all the symbols you need into one texture at the beginning (for example the whole ASCII range) and then use that texture as a texture atlas.
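A rough sketch of that idea (atlasTex, penX and penY are hypothetical here: one texture allocated up front with glTexImage2D and a null data pointer, plus a cursor tracking where the next glyph goes):
glBindTexture(GL_TEXTURE_2D, atlasTex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // FreeType glyph rows are tightly packed
glTexSubImage2D(GL_TEXTURE_2D, 0,
                penX, penY,                     // where this glyph lands inside the atlas
                bitmap->width, bitmap->rows,    // glyph size in pixels
                GL_LUMINANCE, GL_UNSIGNED_BYTE, // 8-bit single-channel glyph data
                bitmap->buffer);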
My general advice would also be that if you don't really want to learn OpenGL, but just want some cross-platform rendering without bothering with the low-level stuff, I'd recommend using a rendering framework instead.

issues with mixing glGetTexImage and imageStore on nvidia opengl

I wrote some code, too long to paste here, that renders into a 3D 1 component float texture via a fragment shader that uses bindless imageLoad and imageStore.
That code is definitely working.
I then needed to work around some GLSL compiler bugs, so I wanted to read the 3D texture above back to the host via glGetTexImage. Yes, I did do a glMemoryBarrierEXT(GL_ALL_BARRIER_BITS).
I did check the texture info via glGetTexLevelparameteriv() and everything I see matches. I did check for OpenGL errors, and have none.
Sadly, though, glGetTexImage never seems to read what was written by the fragment shader. Instead, it only returns the fake values I put in when I called glTexImage3D() to create the texture.
Is that expected behavior? The documentation implies otherwise.
If glGetTexImage actually works that way, how can I read back the data in that 3D texture (resident on the device)? Clearly the driver can do that, as it does when the texture is made non-resident. Surely there's a simple way to do this simple thing...
I was asking if glGetTexImage was supposed to work that way or not. Here's the code:
void Bindless3DArray::dump_array(Array3D<float> &out)
{
bool was_mapped = m_image_mapped;
if (was_mapped)
unmap_array(); // unmap array so it's accessible to opengl
out.resize(m_depth, m_height, m_width);
glBindTexture(GL_TEXTURE_3D, m_textureid); // from glGenTextures()
#if 0
int w,h,d;
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_WIDTH, &w);
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_HEIGHT, &h);
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_DEPTH, &d);
int internal_format;
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internal_format);
int data_type_r, data_type_g;
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_RED_TYPE, &data_type_r);
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_GREEN_TYPE, &data_type_g);
int size_r, size_g;
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_RED_SIZE, &size_r);
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_GREEN_SIZE, &size_g);
#endif
glGetTexImage(GL_TEXTURE_3D, 0, GL_RED, GL_FLOAT, &out(0,0,0));
glBindTexture(GL_TEXTURE_3D, 0);
CHECK_GLERROR();
if (was_mapped)
map_array_to_cuda(); // restore state
}
Here's the code that creates the bindless array:
void Bindless3DArray::allocate(int w, int h, int d, ElementType t)
{
if (!m_textureid)
glGenTextures(1, &m_textureid);
m_type = t;
m_width = w;
m_height = h;
m_depth = d;
glBindTexture(GL_TEXTURE_3D, m_textureid);
CHECK_GLERROR();
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAX_LEVEL, 0); // ensure only 1 miplevel is allocated
CHECK_GLERROR();
Array3D<float> foo(d, h, w);
// DEBUG -- glGetTexImage returns THIS data, not what's on device
for (int z=0; z<m_depth; ++z)
for (int y=0; y<m_height; ++y)
for (int x=0; x<m_width; ++x)
foo(z,y,x) = 3.14159;
//-- Texture creation
if (t == ElementInteger)
glTexImage3D(GL_TEXTURE_3D, 0, GL_R32UI, w, h, d, 0, GL_RED_INTEGER, GL_INT, 0);
else if (t == ElementFloat)
glTexImage3D(GL_TEXTURE_3D, 0, GL_R32F, w, h, d, 0, GL_RED, GL_FLOAT, &foo(0,0,0));
else
throw "Invalid type for Bindless3DArray";
CHECK_GLERROR();
m_handle = glGetImageHandleNV(m_textureid, 0, true, 0, (t == ElementInteger) ? GL_R32UI : GL_R32F);
glMakeImageHandleResidentNV(m_handle, GL_READ_WRITE);
CHECK_GLERROR();
#ifdef USE_CUDA
checkCuda(cudaGraphicsGLRegisterImage(&m_image_resource, m_textureid, GL_TEXTURE_3D, cudaGraphicsRegisterFlagsSurfaceLoadStore));
#endif
}
I allocate the array, render to it via an OpenGL fragment program, and then I call dump_array() to read the data back. Sadly, I only get what I loaded in the allocate call.
The render program looks like
void App::clear_deepz()
{
deepz_clear_program.bind();
deepz_clear_program.setUniformValue("sentinel", SENTINEL);
deepz_clear_program.setUniformValue("deepz", deepz_array.handle());
deepz_clear_program.setUniformValue("sem", semaphore_array.handle());
run_program();
glMemoryBarrierEXT(GL_ALL_BARRIER_BITS);
// glMemoryBarrierEXT(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
// glMemoryBarrierEXT(GL_SHADER_GLOBAL_ACCESS_BARRIER_BIT_NV);
deepz_clear_program.release();
}
and the fragment program is:
#version 420\n
in vec4 gl_FragCoord;
uniform float sentinel;
coherent uniform layout(size1x32) image3D deepz;
coherent uniform layout(size1x32) uimage3D sem;
void main(void)
{
ivec3 coords = ivec3(gl_FragCoord.x, gl_FragCoord.y, 0);
imageStore(deepz, coords, vec4(sentinel));
imageStore(sem, coords, ivec4(0));
discard; // don't write to FBO at all
}
discard; // don't write to FBO at all
That's not what discard means. Oh, it does mean that. But it also means that all Image Load/Store writes will be discarded too. Indeed, odds are, the compiler will see that statement and just do nothing for the entire fragment shader.
If you want to just execute the fragment shader, you can employ the GL 4.3 feature (available on your NVIDIA hardware) of having an empty framebuffer object. Or you could use a compute shader. If you can't use GL 4.3 yet, then use a write mask to turn off all color writes.
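The write-mask variant could look roughly like this (a sketch reusing run_program() from the question; the discard would be removed from the fragment shader):
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // no color writes, but the shader still runs
run_program();                                       // imageStore() side effects still happen
glMemoryBarrierEXT(GL_ALL_BARRIER_BITS);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);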
As Nicol mentions above, if you want only the side effects of image load and store, the proper way is to use an empty framebuffer object.
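For reference, an attachment-less FBO under GL 4.3 / ARB_framebuffer_no_attachments is set up roughly like this (width and height stand in for the desired render size):
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferParameteri(GL_FRAMEBUFFER, GL_FRAMEBUFFER_DEFAULT_WIDTH, width);
glFramebufferParameteri(GL_FRAMEBUFFER, GL_FRAMEBUFFER_DEFAULT_HEIGHT, height);
// no color or depth attachments; the fragment shader runs purely for its image stores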
The problem with mixing glGetTexImage() and bindless textures was in fact a driver bug, and it has been fixed as of driver version 335.23. I filed the bug and have confirmed my code is now working properly.
Note I am using empty frame buffer objects in the code, and don't use "discard" any more.

OpenGL renders texture all white

I'm attempting to render a .png image as a texture. However, all that is being rendered is a white square.
I give my texture a unique int ID called texID and read the pixel data into a buffer 'image' (declared in the .h file). I load my pixel buffer, do all of my OpenGL stuff, and bind that pixel buffer to a texture for OpenGL. I then draw it all using glDrawElements.
Also, I initialize the texture with a size of 32x32 when its constructor is called, so I doubt it is related to a power-of-two size issue.
Can anybody see any mistakes in my OpenGL GL_TEXTURE_2D setup that might give me a blank white square?
#include "Texture.h"
Texture::Texture(int width, int height, string filename)
{
const char* fnPtr = filename.c_str(); //our image loader accepts a ptr to a char, not a string
printf(fnPtr);
w = width; //give our texture a width and height, the reason that we need to pass in the width and height values manually
h = height;//UPDATE, these MUST be P.O.T.
unsigned error = lodepng::decode(image,w,h,fnPtr);//lodepng's decode function will load the pixel data into image vector
//display any errors with the texture
if(error)
{
cout << "\ndecoder error " << error << ": " << lodepng_error_text(error) <<endl;
}
for(int i = 0; i<image.size(); i++)
{
printf("%i,", image.at(i));
}
printf("\nImage size is %i", image.size());
//image now contains our pixeldata. All ready for OpenGL to do its thing
//let's get this texture up in the video memory
texGLInit();
}
void Texture::texGLInit()
{
//WHERE YOU LEFT OFF: glGenTextures isn't assigning an ID to textures. it stays at zero the whole time
//i believe this is why it's been rendering white
glGenTextures(1, &textures);
printf("\ntexture = %u", textures);
glBindTexture(GL_TEXTURE_2D, textures);//evrything we're about to do is about this texture
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
//glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
//glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
//glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
//glDisable(GL_COLOR_MATERIAL);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,w,h,0, GL_RGBA, GL_UNSIGNED_BYTE, &image);
//we COULD free the image vectors memory right about now.
}
void Texture::draw(point centerPoint, point dimensions)
{
glEnable(GL_TEXTURE_2D);
printf("\nDrawing block at (%f, %f)",centerPoint.x, centerPoint.y);
glBindTexture(GL_TEXTURE_2D, textures);//bind the texture
//create a quick vertex array for the primitive we're going to bind the texture to
printf("TexID = %u",textures);
GLfloat vArray[8] =
{
centerPoint.x-(dimensions.x/2), centerPoint.y-(dimensions.y/2),//bottom left i0
centerPoint.x-(dimensions.x/2), centerPoint.y+(dimensions.y/2),//top left i1
centerPoint.x+(dimensions.x/2), centerPoint.y+(dimensions.y/2),//top right i2
centerPoint.x+(dimensions.x/2), centerPoint.y-(dimensions.y/2)//bottom right i3
};
//create a quick texture array (we COULD create this on the heap rather than creating/destoying every cycle)
GLfloat tArray[8] =
{
0.0f,0.0f, //0
0.0f,1.0f, //1
1.0f,1.0f, //2
1.0f,0.0f //3
};
//and finally.. the index array...remember, we draw in triangles....(and we'll go CW)
GLubyte iArray[6] =
{
0,1,2,
0,2,3
};
//Activate arrays
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
//Give openGL a pointer to our vArray and tArray
glVertexPointer(2, GL_FLOAT, 0, &vArray[0]);
glTexCoordPointer(2, GL_FLOAT, 0, &tArray[0]);
//Draw it all
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, &iArray[0]);
//glDrawArrays(GL_TRIANGLES,0,6);
//Disable the vertex arrays
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);
//done!
/*glBegin(GL_QUADS);
glTexCoord2f(0.0f,0.0f);
glVertex2f(centerPoint.x-(dimensions.x/2), centerPoint.y-(dimensions.y/2));
glTexCoord2f(0.0f,1.0f);
glVertex2f(centerPoint.x-(dimensions.x/2), centerPoint.y+(dimensions.y/2));
glTexCoord2f(1.0f,1.0f);
glVertex2f(centerPoint.x+(dimensions.x/2), centerPoint.y+(dimensions.y/2));
glTexCoord2f(1.0f,0.0f);
glVertex2f(centerPoint.x+(dimensions.x/2), centerPoint.y-(dimensions.y/2));
glEnd();*/
}
Texture::Texture(void)
{
}
Texture::~Texture(void)
{
}
I'll also include the main class' init, where I do a bit more OGL setup before this.
void init(void)
{
printf("\n......Hello Guy. \n....\nInitilising");
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0,XSize,0,YSize);
glEnable(GL_TEXTURE_2D);
myBlock = new Block(0,0,offset);
glClearColor(0,0.4,0.7,1);
glLineWidth(2); // Width of the drawing line
glMatrixMode(GL_MODELVIEW);
glDisable(GL_DEPTH_TEST);
printf("\nInitialisation Complete");
}
Update: adding the main function where I first set up my OpenGL window.
int main(int argc, char** argv)
{
glutInit(&argc, argv); // GLUT Initialization
glutInitDisplayMode(GLUT_RGBA|GLUT_DOUBLE); // Initializing the Display mode
glutInitWindowSize(800,600); // Define the window size
glutCreateWindow("Gem Miners"); // Create the window, with caption.
printf("\n========== McLeanTech Systems =========\nBecoming Sentient\n...\n...\n....\nKILL\nHUMAN\nRACE \n");
init(); // All OpenGL initialization
//-- Callback functions ---------------------
glutDisplayFunc(display);
glutKeyboardFunc(mykey);
glutSpecialFunc(processSpecialKeys);
glutSpecialUpFunc(processSpecialUpKeys);
//glutMouseFunc(mymouse);
glutMainLoop(); // Loop waiting for event
}
Here's the usual checklist for whenever textures come out white:
OpenGL context created and bound to the current thread when attempting to load the texture?
Allocated texture ID using glGenTextures?
Are the parameters format and internal format to glTex[Sub]Image… valid OpenGL tokens allowed as input for this function?
Is mipmapping being used?
YES: Supply all mipmap layers – optimally set glTexParameteri GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL, as well as GL_TEXTURE_MIN_LOD and GL_TEXTURE_MAX_LOD.
NO: Turn off mipmap filtering by setting glTexParameteri GL_TEXTURE_MIN_FILTER to GL_NEAREST or GL_LINEAR.
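For the question's code, a minimal non-mipmapped upload consistent with this checklist would look roughly like the following sketch (assuming image is the std::vector<unsigned char> filled by lodepng and that a GL context exists at this point):
glGenTextures(1, &textures);
glBindTexture(GL_TEXTURE_2D, textures);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // no mipmaps, so no *_MIPMAP_* minification filter
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, image.data()); // raw pixel pointer, not the address of the container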

glTexCoord2d has no effect

I'm trying to draw a simple texture in opengl. I made a simple class Texture:
class Texture{
public:
unsigned int id;
unsigned char image[256*256*3];
int level;
int border;
int width;
int height;
Texture (int level =0, int border = 0) : level(level), border(border) {
glGenTextures(1, &id);
width = 256, height = 256;
glTexImage2D(GL_TEXTURE_2D, level, GL_RGB, width, height, border, GL_RGB, GL_UNSIGNED_BYTE, &image[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
for (int i= 0; i<width*height*3; i+=3){
image[i]=1;//i%255;
image[i+1] =1;// 255-i%255;
image[i+2] =1;// i%128;
}
}
void useIt(){
glBindTexture( GL_TEXTURE_2D, id );
}
};
It creates an unsigned char array and fills it with some arbitrary data. I'm trying to use it this way:
glEnable(GL_TEXTURE_2D);
texture->useIt();
glBegin(GL_TRIANGLES);
glNormal3d(0, 1, 0);
glTexCoord2d(0.0,0.0);
glVertex3f(width/-2.f,height/2.f,depth/2.f);
glTexCoord2d(1.0,1.0);
glVertex3f(width/2.f,height/2.f,depth/2.f);
glTexCoord2d(1.0,0.0);
glVertex3f(width/2.f,height/2.f,depth/-2.f);
glTexCoord2d(0.0,0.0);
glVertex3f(width/-2.f,height/2.f,depth/2.f);
glTexCoord2d(1.0,1.0);
glVertex3f(width/2.f,height/2.f,depth/-2.f);
glTexCoord2d(0.0,1.0);
glVertex3f(width/-2.f,height/2.f,depth/-2.f);
glEnd();
glDisable(GL_TEXTURE_2D);
It draws the plane, but without the texture (it draws with the previously used material). What am I doing wrong?
Three possible issues with your code. Brett Hale already told you that you need to bind a texture object before uploading data to it with glTexImage.
glTexImage creates a copy of the data you supply to it (this is different from the glVertex…Pointer functions, which only take a pointer or an offset into a buffer object). However, you're filling the image array with data after you've copied its contents to the texture. Also, you may safely delete the image array after copying the data to the texture.
Last but not least: those operations are in a constructor. If the texture class instance lives in a scope that's initialized before an OpenGL context has been created, nothing will happen at all, because there's no OpenGL context. So either make sure the texture object is created only after an OpenGL context is available, or put the texture creation and upload code into a separate method that you call once an OpenGL context is available.
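Putting those points together, a sketch of the fixed flow, with the upload moved into a hypothetical init() method that is only called once a context exists:
void Texture::init() {
    // fill the image BEFORE uploading; glTexImage2D copies the data at call time
    for (int i = 0; i < width * height * 3; i += 3) {
        image[i]     = i % 255;
        image[i + 1] = 255 - i % 255;
        image[i + 2] = i % 128;
    }
    glGenTextures(1, &id);
    glBindTexture(GL_TEXTURE_2D, id); // bind before any glTex* call
    glTexImage2D(GL_TEXTURE_2D, level, GL_RGB, width, height, border,
                 GL_RGB, GL_UNSIGNED_BYTE, image);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
}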
glBindTexture is required in the Texture constructor, prior to the glTex* operations.
You might also require glPixelStorei(GL_UNPACK_ALIGNMENT, 1) prior to glTexImage2D, since tightly packed RGB rows are not guaranteed to start on 4-byte boundaries.
BTW, you need to set the image data before you 'upload' it via glTexImage2D. Right now, you are just creating the texture from uninitialized data. Furthermore, the loop that sets the RGB byte data is giving you values very close to black: all (1, 1, 1).