Surface poorly filled with SDL_ttf - C++

I'm trying to make an OpenGL game in C++, and I'm implementing a text system using SDL_ttf.
I have already used SDL_ttf in another project, but with a different API, so I wrote the same kind of code; here, however, it does not seem to fill the pixel data of the surface.
Here is my code:
void Text2Texture::setText(const char * text, size_t fontIndex){
    SDL_Color c = {255, 255, 0, 255};
    SDL_Surface * surface;
    surface = TTF_RenderUTF8_Blended(loadedFonts_[fontIndex], text, c);
    if(surface == nullptr) {
        fprintf(stderr, "Error TTF_RenderText\n");
        return;
    }
    GLenum texture_format;
    GLint colors = surface->format->BytesPerPixel;
    if (colors == 4) { // alpha
        if (surface->format->Rmask == 0x000000ff)
            texture_format = GL_RGBA;
        else
            texture_format = GL_BGRA_EXT;
    } else { // no alpha
        if (surface->format->Rmask == 0x000000ff)
            texture_format = GL_RGB;
        else
            texture_format = GL_BGR_EXT;
    }
    glBindTexture(GL_TEXTURE_2D, textureId_);
    glTexImage2D(GL_TEXTURE_2D, 0, colors, surface->w, surface->h, 0, texture_format, GL_UNSIGNED_BYTE, surface->pixels);
    /// This line tells me the pixel data is 8 bits, which isn't good?
    std::cout << "pixel size : " << sizeof(surface->pixels) << std::endl;
    /// This line gives me the correct result
    fprintf(stderr, "texture size : %d %d\n", surface->w, surface->h);
    glBindTexture(GL_TEXTURE_2D, 0);
}
As you can see in the comment, the pixels pointer in surface reports a size of 8 bits, which is way too low for a texture. I don't know why it does that.
In the end, the texture data appears to be filled entirely with 0 (resulting in a black square with very basic shaders).
In this project I'm using GLFW to create the OpenGL context, so I'm not using SDL itself and I did not initialize it.
However, I did initialize SDL_ttf; here is everything I do before calling setText:
std::vector<TTF_Font *> Text2Texture::loadedFonts_;

void Text2Texture::init(){
    if(TTF_Init() == -1) {
        fprintf(stderr, "TTF_Init: %s\n", TTF_GetError());
    }
}

int Text2Texture::loadFont(std::string const& fontPath){
    loadedFonts_.emplace_back();
    loadedFonts_.back() = TTF_OpenFont(fontPath.data(), 32);
    if( loadedFonts_.back() == nullptr ) {
        fprintf(stderr, "TTF_OpenFont: %s \n", TTF_GetError());
        loadedFonts_.pop_back();
        return -1;
    }
    return ((int)loadedFonts_.size() - 1);
}
/// The constructor initializes the texture:
Text2Texture::Text2Texture(){
    glGenTextures(1, &textureId_);
    glBindTexture(GL_TEXTURE_2D, textureId_);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}
My class has a static part; here is its body:
class Text2Texture {
public:
    Text2Texture();
    void setText(const char * text, size_t fontIndex = 0);
    unsigned int getId() const;
    // Static part
    static void init();
    static void quit();
    static int loadFont(std::string const& fontPath);
private:
    unsigned int textureId_;
    // Static part
    static std::vector<TTF_Font *> loadedFonts_;
};
I initialize SDL_ttf and load the fonts through the static methods, then I create class instances to build the individual textures.
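For clarity, the intended call order is roughly the following sketch (the font path is made up, and an OpenGL context is assumed to already exist via GLFW):
Text2Texture::init();                                      // TTF_Init()
int fontIndex = Text2Texture::loadFont("fonts/arial.ttf"); // hypothetical path
Text2Texture label;                                        // creates and configures the texture object
label.setText("Hello, world!", fontIndex);                 // renders the text into the texture
// ... draw a textured quad sampling label.getId() ...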
If you can spot my mistake, I would be glad to read your answer.
(By the way, I'm not really sure using SDL_ttf is the right approach; if you have a better idea I would take it too, but I would like to solve this problem first.)

The format and type parameters of glTexImage2D specify how a single pixel is encoded.
When the texture font is created, each pixel is encoded in a single byte. This means your texture consists of a single color channel and each pixel takes 1 byte.
I'm very sure that colors = surface->format->BytesPerPixel is 1.
Note that it is sufficient to encode the glyph in one color channel, because a glyph carries information that fits in a single byte.
By default, OpenGL assumes that the start of each row of an image is aligned to 4 bytes. This is because the GL_UNPACK_ALIGNMENT parameter is 4 by default. Since the image has 1 (red) color channel and is tightly packed, the start of a row may be misaligned.
Change the GL_UNPACK_ALIGNMENT parameter to 1, before specifying the two-dimensional texture image (glTexImage2D).
Since the texture has only one (red) color channel, the green and blue values will be 0 and the alpha channel will be 1 when the texture is looked up. But you can make the green, blue and even alpha channels read from the red color channel, too.
This can be achieved by setting the texture swizzle parameters GL_TEXTURE_SWIZZLE_G, GL_TEXTURE_SWIZZLE_B and GL_TEXTURE_SWIZZLE_A respectively. See glTexParameter.
Further, note that texture parameters are stored in the texture object. glTexParameter changes the texture object which is currently bound to the specified target of the current texture unit, so it is sufficient to set the parameters once when the texture image is created.
In comparison, glPixelStore changes global state and may have to be set back to its default value after specifying the texture image (if later calls to glTexImage2D rely on it).
The specification of the 2-dimensional texture image and setting the parameters may look as follows:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, surface->w, surface->h, 0,
GL_RED, GL_UNSIGNED_BYTE, surface->pixels);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_A, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
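For completeness, a minimal fragment shader that consumes such a texture might look like the sketch below. This is only an illustration: the uniform and varying names are made up, and it relies on the swizzle parameters set above so the glyph coverage can be read from the alpha channel.
// Sketch only; GLSL source kept as a C++ raw string literal, names are hypothetical.
static const char *glyphFragmentShaderSrc = R"glsl(
#version 330 core
in vec2 uv;
out vec4 fragColor;
uniform sampler2D glyphTexture; // the texture created in Text2Texture::setText()
uniform vec4 textColor;         // e.g. the yellow passed as SDL_Color in setText()
void main()
{
    // With GL_TEXTURE_SWIZZLE_A = GL_RED, .a returns the single red channel.
    float coverage = texture(glyphTexture, uv).a;
    fragColor = vec4(textColor.rgb, textColor.a * coverage);
}
)glsl";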

Related

QOpenGLWidget optimal texture buffer allocation

My question is possibly not related to Qt and/or QOpenGLWidget itself, but rather to OpenGL buffers in general. Anyway, I'm trying to implement a cross-platform renderer of YUV video frames, which requires converting YUV to RGB and rendering the result on a widget afterwards.
So far, I succeeded in the following:
Found two suitable shaders (1 fragment & 1 vertex) to do the YUV to RGB conversion (our project only supports Qt 5.6 so far, so there is no better way for me to do it)
Used QOpenGLWidget to obtain a properly behaving widget
Did my best with QOpenGLTexture to do the drawing
Here is my very sketchy code, which displays video frames from a raw YUV file; most of the job is done by the GPU. I would be happy with it if it were not for the trouble of buffer allocation. The point is, frames are received from some legacy code which provides me with custom wrappers around something like unsigned char *data, and that is why I have to copy it like this:
//-----------------------------------------
GLvoid* mBufYuv; // buffer somewhere
int mFrameSize;
//-------------------------
void OpenGLDisplay::DisplayVideoFrame(unsigned char *data, int frameWidth, int frameHeight)
{
    impl->mVideoW = frameWidth;
    impl->mVideoH = frameHeight;
    memcpy(impl->mBufYuv, data, impl->mFrameSize);
    update();
}
While testing the concept, frame and buffer sizes were hardcoded like:
// Called from the outside, assuming video frame height/width are constant
void OpenGLDisplay::InitDrawBuffer(unsigned bsize)
{
    impl->mFrameSize = bsize;
    impl->mBufYuv = new unsigned char[bsize];
}
Qt texture classes served well for the purpose, so...
// Create y, u, v texture objects respectively
impl->mTextureY = new QOpenGLTexture(QOpenGLTexture::Target2D);
impl->mTextureU = new QOpenGLTexture(QOpenGLTexture::Target2D);
impl->mTextureV = new QOpenGLTexture(QOpenGLTexture::Target2D);
impl->mTextureY->create();
impl->mTextureU->create();
impl->mTextureV->create();
// Get the texture index value of the returned y component
impl->id_y = impl->mTextureY->textureId();
// Get the texture index value of the returned u component
impl->id_u = impl->mTextureU->textureId();
// Get the texture index value of the returned v component
impl->id_v = impl->mTextureV->textureId();
And finally, the rendering itself looks like this:
void OpenGLDisplay::paintGL()
{
// Load y data texture
// Activate the texture unit GL_TEXTURE0
glActiveTexture(GL_TEXTURE0);
// Use the texture generated from y to generate texture
glBindTexture(GL_TEXTURE_2D, impl->id_y);
// Use the memory mBufYuv data to create a real y data texture
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, impl->mVideoW, impl->mVideoH, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, impl->mBufYuv);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Load u data texture
glActiveTexture(GL_TEXTURE1);//Activate texture unit GL_TEXTURE1
glBindTexture(GL_TEXTURE_2D, impl->id_u);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, impl->mVideoW/2, impl->mVideoH/2
, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, (char*)impl->mBufYuv + impl->mVideoW * impl->mVideoH);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Load v data texture
glActiveTexture(GL_TEXTURE2);//Activate texture unit GL_TEXTURE2
glBindTexture(GL_TEXTURE_2D, impl->id_v);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, impl->mVideoW / 2, impl->mVideoH / 2
, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, (char*)impl->mBufYuv + impl->mVideoW * impl->mVideoH * 5/4);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Point each sampler uniform at its texture unit by index:
// 0 -> GL_TEXTURE0 (y), 1 -> GL_TEXTURE1 (u), 2 -> GL_TEXTURE2 (v)
glUniform1i(impl->textureUniformY, 0);
// Specify the u texture to use the new value
glUniform1i(impl->textureUniformU, 1);
// Specify v texture to use the new value
glUniform1i(impl->textureUniformV, 2);
// Use the vertex array way to draw graphics
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
As I've mentioned above, it works fine, but it's only a demo sketch; the goal is to implement a generic video renderer, which means the aspect ratio, resolution and frame size may change dynamically. Thus, the buffer (GLvoid* mBufYuv; in the code above) has to be reallocated and, even worse, I'll have to memcpy data into it 25 times per second. That definitely looks like something that wouldn't be fast enough for Full HD video, for example.
Of course, several trivial optimizations are possible that would reduce the data copying, but Google told me that there are ways to allocate buffers in OpenGL directly, those PBO (pixel buffer object) things and QOpenGLBuffer at least.
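(As far as I understand it, the PBO route would mean something roughly like the sketch below. I have not tried it, and the variable names are only adapted from my code above:)
// Rough, untested sketch of an upload through a pixel unpack buffer (PBO).
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, mFrameSize, nullptr, GL_STREAM_DRAW);

// Copy the incoming frame into the mapped buffer instead of a plain heap buffer.
void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
memcpy(dst, data, mFrameSize);
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

// While a PBO is bound, the data argument of glTexImage2D is an offset into the buffer.
glBindTexture(GL_TEXTURE_2D, id_y);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, mVideoW, mVideoH, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, nullptr); // offset 0 into the PBO
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);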
Now, there is a problem -- I'm quite confused by all the many ways to handle textures, and I know neither the best/optimal one, nor the one best fitting my case.
Any piece of advice is appreciated.

Error when loading image with stb

I am attempting to load the following image:
As a texture for the Stanford Dragon. The result, however, is as follows:
I have read that other people have had issues with this due to either not binding the textures correctly or using the wrong number of components when loading a texture. I don't think I have either of those issues, as I am both checking the format of the image and binding the texture. I have managed to get other images to load correctly, so this seems to be an issue specific to this image (I am not saying the image is corrupted, rather that something about it is slightly different from the other images I have tried).
The code I am using to initialize the texture is as follows:
//Main constructor
Texture::Texture(string file_path, GLuint t_target)
{
//Change the coordinate system of the image
stbi_set_flip_vertically_on_load(true);
int numComponents;
//Load the pixel data of the image
void *data = stbi_load(file_path.c_str(), &width, &height, &numComponents, 0);
if (data == nullptr)//Error check
{
cerr << "Error when loading texture from file: " + file_path << endl;
Log::record_log(
string(80, '!') +
"\nError when loading texture from file: " + file_path + "\n" +
string(80, '!')
);
exit(EXIT_FAILURE);
}
//Create the texture OpenGL object
target = t_target;
glGenTextures(1, &textureID);
glBindTexture(target, textureID);
//Name the texture
glObjectLabel(GL_TEXTURE, textureID, -1,
("\"" + extract_name(file_path) +"\"").c_str());
//Set the color format
color_format = numComponents == 3 ? GL_RGB : GL_RGBA;
glTexImage2D(target, 0, color_format, width, height, 0,
color_format, GL_UNSIGNED_BYTE, data);
//Set the texture parameters of the image
glTexParameteri(target, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(target, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(target, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(target, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
//free the memory
stbi_image_free(data);
//Create a debug notification event
char name[100];
glGetObjectLabel(GL_TEXTURE, textureID, 100, NULL, name);
string message = "Succesfully created texture: " + string(name) +
". Bound to target: " + textureTargetEnumToString(target);
glDebugMessageInsert(GL_DEBUG_SOURCE_APPLICATION, GL_DEBUG_TYPE_OTHER, 0,
GL_DEBUG_SEVERITY_NOTIFICATION, message.size(), message.c_str());
}
A JPEG eh? Probably no alpha channel then. And 894 pixels wide isn't quite evenly divisible by 4.
Double-check if you're hitting the numComponents == 3 case and if so, make sure GL_UNPACK_ALIGNMENT is set to 1 (default 4) with glPixelStorei() before your glTexImage2D() call.
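In code, that could look like the following sketch, using the variables from the constructor above (only the placement around the existing glTexImage2D call is suggested):
// Rows of a 3-component image may not be 4-byte aligned, so relax the unpack alignment.
if (numComponents == 3)
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

glTexImage2D(target, 0, color_format, width, height, 0,
             color_format, GL_UNSIGNED_BYTE, data);

// Restore the default so later uploads that rely on it are unaffected.
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);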

Something wrong with converting SDL surface to GL texture

I can't find my mistake: why is the text not being created? When I use the texture instead of the text I get nothing, or a black background with colored points. Please help.
GLuint texture;
SDL_Surface *text = NULL;
TTF_Font *font = NULL;
SDL_Color color = {0, 0, 0};
font = TTF_OpenFont("../test.ttf", 20);
text = TTF_RenderText_Solid(font, "Hello, SDL !!!", color);
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, text->w, text->h, 0, GL_RGB, GL_UNSIGNED_BYTE, text->pixels);
SDL_FreeSurface(text);
One thing you could add is to specify texture filters, e.g.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
A few things you have to check first:
Is the font loaded properly? Check whether font == NULL; maybe your font path is wrong.
Is the shader (if you use a shader) set up properly?
My guess is that you set the wrong pixel format type in glTexImage2D, because random color dots appear on your texture.
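Applied to your snippet, those checks could look roughly like this (just a sketch, reusing the path and text from your code):
// Sketch of the suggested error checks.
TTF_Font *font = TTF_OpenFont("../test.ttf", 20);
if (font == NULL) {
    fprintf(stderr, "TTF_OpenFont: %s\n", TTF_GetError());
    return;
}
SDL_Color color = {0, 0, 0};
SDL_Surface *text = TTF_RenderText_Solid(font, "Hello, SDL !!!", color);
if (text == NULL) {
    fprintf(stderr, "TTF_RenderText: %s\n", TTF_GetError());
    return;
}
// Also check text->format->BytesPerPixel before picking the format for glTexImage2D.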
Below is my code that loads an image via SDL_image for OpenGL use. I think it would be a good starting point to figure out which step you missed or forgot.
BTW, this code is not perfect. There are more pixel format types than the ones handled here (indexed color, for example), and I only handle some of them.
/*
* object_, originalWidth_ and originalHeight_ are private variables in
* this class, don't panic.
*/
void
Texture::Load(string filePath, GLint minMagFilter, GLint wrapMode)
{
SDL_Surface* image;
GLenum textureFormat;
GLint bpp; //Byte Per Pixel
/* Load image file */
image = IMG_Load(filePath.c_str());
if (image == nullptr) {
string msg("IMG error: ");
msg += IMG_GetError();
throw runtime_error(msg.c_str());
}
/* Find out pixel format type */
bpp = image->format->BytesPerPixel;
if (bpp == 4) {
if (image->format->Rmask == 0x000000ff)
textureFormat = GL_RGBA;
else
textureFormat = GL_BGRA;
} else if (bpp == 3) {
if (image->format->Rmask == 0x000000ff)
textureFormat = GL_RGB;
else
textureFormat = GL_BGR;
} else {
string msg("IMG error: Unknow pixel format, bpp = ");
msg += bpp;
throw runtime_error(msg.c_str());
}
/* Store width and height */
originalWidth_ = image->w;
originalHeight_ = image->h;
/* Make OpenGL texture */
glEnable(GL_TEXTURE_2D);
glGenTextures(1, &object_);
glBindTexture(GL_TEXTURE_2D, object_);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, minMagFilter);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, minMagFilter);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrapMode);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrapMode);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(
GL_TEXTURE_2D, // texture type
0, // level
bpp, // internal format
image->w, // width
image->h, // height
0, // border
textureFormat, // format(in this texture?)
GL_UNSIGNED_BYTE, // data type
image->pixels // pointer to data
);
/* Clean these mess up */
glBindTexture(GL_TEXTURE_2D, 0);
glDisable(GL_TEXTURE_2D);
SDL_FreeSurface(image);
}
For more information, you should check out the SDL wiki or dig into its source code to fully understand the architecture of SDL_Surface.

DevIL image not rendering correctly

I am using OpenGL. I can load tga files properly, but for some reason when I render jpg files I do not see them correctly.
This is what the image is supposed to look like--
And this is what it looks like. Why is it stretched? Is it because of the coordinates?
Here is the code I am using for drawing.
void Renderer::DrawJpg(GLuint tex, int xi, int yq, int width, int height) const
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
    glTexCoord2i(0, 0); glVertex2i(0+xi, 0+xi);
    glTexCoord2i(0, 1); glVertex2i(0+xi, height+xi);
    glTexCoord2i(1, 1); glVertex2i(width+xi, height+xi);
    glTexCoord2i(1, 0); glVertex2i(width+xi, 0+xi);
    glEnd();
}
This is how I am loading the image...
imagename=s;
ILboolean success;
ilInit();
ilGenImages(1, &id);
ilBindImage(id);
success = ilLoadImage((const ILstring)imagename.c_str());
if (success)
{
success = ilConvertImage(IL_RGB, IL_UNSIGNED_BYTE); /* Convert every colour component into
unsigned byte. If your image contains alpha channel you can replace IL_RGB with IL_RGBA */
if (!success)
{
printf("image conversion failed.");
}
glGenTextures(1, &id);
glBindTexture(GL_TEXTURE_2D, id);
width = ilGetInteger(IL_IMAGE_WIDTH);
height = ilGetInteger(IL_IMAGE_HEIGHT);
glTexImage2D(GL_TEXTURE_2D, 0, ilGetInteger(IL_IMAGE_BPP), ilGetInteger(IL_IMAGE_WIDTH),
ilGetInteger(IL_IMAGE_HEIGHT), 0, ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE,
ilGetData());
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); // Wrap mode
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); // Wrap mode
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
I probably should mention this, but some images did get rendered properly. I thought it was because width != height, but that is not the case; images with width != height also get loaded fine.
But for other images I still get this problem.
You probably need to call
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
before uploading the texture data with glTexImage2D.
From the reference pages:
GL_UNPACK_ALIGNMENT
Specifies the alignment requirements for the start of each pixel row
in memory. The allowable values are 1 (byte-alignment), 2 (rows
aligned to even-numbered bytes), 4 (word-alignment), and 8 (rows start
on double-word boundaries).
The default value for the alignment is 4 and your image loading library probably returns pixel data with byte-aligned rows, which explains why some of your images look OK (when the width is a multiple of four).
Always try to use image widths and heights that are powers of two, because some GPUs support only power-of-two texture resolutions (for example 128x128 or 512x512, but not 123x533 or 128x532).
And I think that here, instead of GL_REPEAT, you should use GL_CLAMP_TO_EDGE :)
GL_REPEAT is meant for texture coordinates > 1.0f; CLAMP_TO_EDGE works for those too, but it guarantees the image will fill the polygon without unwanted lines at the edges (it keeps your linear filtering from wrapping around at the edges).
Remember to try out the code where floats are used (the sample from the comment) :)
Here is good explanation http://open.gl/textures :)
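In the loading code from the question, that would simply mean changing the two wrap-mode lines, e.g. (a sketch of only that change):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);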

Texture loading with DevIL, equivalent code to texture loading with Qt?

I am working with OpenGL and GLSL in Visual Studio C++ 2010. I am writing shaders and I need to load a texture. I am reading code from a book and in it they load textures with Qt, but I need to do it with DevIL. Can someone please write the equivalent code for texture loading with DevIL? I am new to DevIL and I don't know how to translate this.
// Load texture file
const char * texName = "texture/brick1.jpg";
QImage timg = QGLWidget::convertToGLFormat(QImage(texName,"JPG"));
// Copy file to OpenGL
glActiveTexture(GL_TEXTURE0);
GLuint tid;
glGenTextures(1, &tid);
glBindTexture(GL_TEXTURE_2D, tid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, timg.width(), timg.height(), 0,
GL_RGBA, GL_UNSIGNED_BYTE, timg.bits());
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
Given that DevIL is no longer maintained, and that the ILUT part assumes a requirement for power-of-2 texture dimensions and rescales the images in its convenience functions, it actually makes sense to take the detour of doing it manually.
Loading an image from a file with DevIL works quite similarly to loading a texture from an image in OpenGL. First you create a DevIL image name and bind it:
GLuint loadImageToTexture(char const * const thefilename)
{
ILuint imageID;
ilGenImages(1, &imageID);
ilBindImage(imageID);
now you can load an image from a file
ilLoadImage(thefilename);
check that the image actually offers data; if it does not, clean up
void *data = ilGetData();
if(!data) {
ilBindImage(0);
ilDeleteImages(1, &imageID);
return 0;
}
retrieve the important parameters
int const width = ilGetInteger(IL_IMAGE_WIDTH);
int const height = ilGetInteger(IL_IMAGE_HEIGHT);
int const type = ilGetInteger(IL_IMAGE_TYPE); // matches OpenGL
int const format = ilGetInteger(IL_IMAGE_FORMAT); // matches OpenGL
Generate a texture name
GLuint textureID;
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
next we set the pixel store parameters (your original code missed that crucial step)
glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_FALSE);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0); // rows are tightly packed
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // pixels are tightly packed
now we can upload the texture image
glTexImage2D(GL_TEXTURE_2D, 0, format, width, height, 0, format, type, data);
next, for convenience we set the minification filter to GL_LINEAR, so that we don't have to supply mipmap levels.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
finally return the textureID
return textureID;
}
If you want to use mipmapping you can call OpenGL's glGenerateMipmap later on; use glTexParameter with GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL to control the span of the image pyramid that gets generated.
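For example, a short sketch using the loadImageToTexture() function above and the brick texture from the question:
GLuint tex = loadImageToTexture("texture/brick1.jpg");
glBindTexture(GL_TEXTURE_2D, tex);

// Optionally restrict the span of the pyramid before generating it.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 4);

// Generate the mipmap chain and switch to a minification filter that actually uses it.
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);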