I use a texture array to store texture atlases. For hardware that supports OpenGL 4.2 I use the glTexStorage3D approach; however, I would like to use texture arrays on pre-4.2 hardware too.
I checked several other threads with the same problem, like this or this. I tried to follow the solutions provided there; however, the texture array seems to be empty: no texture is visible during rendering.
My glTexStorage3D solution, which works without any problem:
glTexStorage3D(GL_TEXTURE_2D_ARRAY,
               1,
               GL_R8,
               2048, 2048,
               100);
And the glTexImage3D version, which should be equivalent but produces no visible output:
glTexImage3D(GL_TEXTURE_2D_ARRAY,
             0,
             GL_R8,
             2048, 2048, 100,
             0,
             GL_RED,
             GL_UNSIGNED_BYTE,
             0);
The texture data is uploaded to the specified index with the following snippet (the atlas width and height are 2048 and its depth is 1):
glBindTexture(GL_TEXTURE_2D_ARRAY, m_arrayTexture);
glTexSubImage3D(GL_TEXTURE_2D_ARRAY,
                0,
                0, 0, m_nextTextureLevel,
                atlas->width, atlas->height, atlas->depth,
                GL_RED,
                GL_UNSIGNED_BYTE,
                atlas->data);
What am I missing here? Any help would be highly appreciated.
Edit:
Uploading the texture data to the array right away is not an option as new textures can be added to the array during execution.
Edit v2, solution:
As usual, the problem was something trivial that I had overlooked. I dug into Nazar554's solution and compared it to my code. The problem was that I had accidentally set the texture parameters using the wrong constant: the glTexParameteri calls were made with GL_TEXTURE_2D instead of GL_TEXTURE_2D_ARRAY. After changing these values everything worked like a charm.
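For illustration, a minimal sketch of the mistake and the fix (the GL_LINEAR values are just an example; presumably the glTexStorage3D path survived the wrong calls because immutable storage clamps the mip range, while the glTexImage3D texture stayed mipmap-incomplete and sampled as black):

glBindTexture(GL_TEXTURE_2D_ARRAY, m_arrayTexture);

// Wrong: these calls configure whatever texture is bound to GL_TEXTURE_2D,
// so the array texture keeps its default mipmapped MIN filter.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Fixed: the target must match the texture being configured.
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);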
You can take a look at the Texture.cpp I used in my project.
However, I did not use glTexSubImage3D() in the fallback case; instead I uploaded the texture data immediately (you are passing 0 to preallocate the buffer).
Functions that might be interesting to you: Texture::loadTexStorageInternal(const std::string& fileName) and bool Texture::loadTexInternal(const std::string& fileName).
Here is the latter; it handles the fallback when glTexStorage3D is unavailable. It is quite long because it tries to handle compressed formats and mipmaps.
bool Texture::loadTexInternal(const std::string& fileName)
{
    gli::texture Texture = gli::load(fileName);
    if(Texture.empty())
        return false;

    const gli::gl GL(gli::gl::PROFILE_GL33);
    const gli::gl::format Format = GL.translate(Texture.format(), Texture.swizzles());
    GLenum Target = static_cast<GLenum>(GL.translate(Texture.target()));

    Binder texBinder(*this, Target);

    glTexParameteri(Target, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(Target, GL_TEXTURE_MAX_LEVEL, static_cast<GLint>(Texture.levels() - 1));
    glTexParameteri(Target, GL_TEXTURE_SWIZZLE_R, Format.Swizzles[0]);
    glTexParameteri(Target, GL_TEXTURE_SWIZZLE_G, Format.Swizzles[1]);
    glTexParameteri(Target, GL_TEXTURE_SWIZZLE_B, Format.Swizzles[2]);
    glTexParameteri(Target, GL_TEXTURE_SWIZZLE_A, Format.Swizzles[3]);

    // use mipmapped filtering only when the file actually contains mipmaps
    if(Texture.levels() > 1)
        glTexParameteri(Target, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    else
        glTexParameteri(Target, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(Target, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(Target, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(Target, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(Target, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

    //glm::tvec3<GLsizei> const Extent(Texture.extent());

    for(std::size_t Layer = 0; Layer < Texture.layers(); ++Layer)
    for(std::size_t Level = 0; Level < Texture.levels(); ++Level)
    for(std::size_t Face = 0; Face < Texture.faces(); ++Face)
    {
        GLsizei const LayerGL = static_cast<GLsizei>(Layer);
        glm::tvec3<GLsizei> loopExtent(Texture.extent(Level));
        Target = gli::is_target_cube(Texture.target())
            ? static_cast<GLenum>(static_cast<GLint>(GL_TEXTURE_CUBE_MAP_POSITIVE_X) + static_cast<GLint>(Face))
            : Target;

        switch(Texture.target())
        {
        case gli::TARGET_1D:
            if(gli::is_compressed(Texture.format()))
                glCompressedTexImage1D(
                    Target, static_cast<GLint>(Level),
                    static_cast<GLenum>(Format.Internal),
                    loopExtent.x, 0,
                    static_cast<GLsizei>(Texture.size(Level)),
                    Texture.data(Layer, Face, Level));
            else
                glTexImage1D(
                    Target, static_cast<GLint>(Level),
                    static_cast<GLenum>(Format.Internal),
                    loopExtent.x,
                    0,
                    static_cast<GLenum>(Format.External), static_cast<GLenum>(Format.Type),
                    Texture.data(Layer, Face, Level));
            break;
        case gli::TARGET_1D_ARRAY:
        case gli::TARGET_2D:
        case gli::TARGET_CUBE:
            if(gli::is_compressed(Texture.format()))
                glCompressedTexImage2D(
                    Target, static_cast<GLint>(Level),
                    static_cast<GLenum>(Format.Internal),
                    loopExtent.x,
                    Texture.target() == gli::TARGET_1D_ARRAY ? LayerGL : loopExtent.y,
                    0,
                    static_cast<GLsizei>(Texture.size(Level)),
                    Texture.data(Layer, Face, Level));
            else
                glTexImage2D(
                    Target, static_cast<GLint>(Level),
                    static_cast<GLenum>(Format.Internal),
                    loopExtent.x,
                    Texture.target() == gli::TARGET_1D_ARRAY ? LayerGL : loopExtent.y,
                    0,
                    static_cast<GLenum>(Format.External), static_cast<GLenum>(Format.Type),
                    Texture.data(Layer, Face, Level));
            break;
        case gli::TARGET_2D_ARRAY:
        case gli::TARGET_3D:
        case gli::TARGET_CUBE_ARRAY:
            if(gli::is_compressed(Texture.format()))
                glCompressedTexImage3D(
                    Target, static_cast<GLint>(Level),
                    static_cast<GLenum>(Format.Internal),
                    loopExtent.x, loopExtent.y,
                    Texture.target() == gli::TARGET_3D ? loopExtent.z : LayerGL,
                    0,
                    static_cast<GLsizei>(Texture.size(Level)),
                    Texture.data(Layer, Face, Level));
            else
                glTexImage3D(
                    Target, static_cast<GLint>(Level),
                    static_cast<GLenum>(Format.Internal),
                    loopExtent.x, loopExtent.y,
                    Texture.target() == gli::TARGET_3D ? loopExtent.z : LayerGL,
                    0,
                    static_cast<GLenum>(Format.External), static_cast<GLenum>(Format.Type),
                    Texture.data(Layer, Face, Level));
            break;
        default:
            return false;
        }
    }
    return true;
}
I'm trying to make an OpenGL game in C++, and I'm trying to implement a text system;
to do this I'm trying to use SDL_ttf.
I have already used SDL_ttf in another project, but with a different API, so I wrote the same code; however, it does not fill the pixel data of the surface.
Here is my code:
void Text2Texture::setText(const char* text, size_t fontIndex) {
    SDL_Color c = {255, 255, 0, 255};
    SDL_Surface* surface = TTF_RenderUTF8_Blended(loadedFonts_[fontIndex], text, c);
    if(surface == nullptr) {
        fprintf(stderr, "Error TTF_RenderText\n");
        return;
    }

    GLenum texture_format;
    GLint colors = surface->format->BytesPerPixel;
    if(colors == 4) { // alpha
        if(surface->format->Rmask == 0x000000ff)
            texture_format = GL_RGBA;
        else
            texture_format = GL_BGRA_EXT;
    } else { // no alpha
        if(surface->format->Rmask == 0x000000ff)
            texture_format = GL_RGB;
        else
            texture_format = GL_BGR_EXT;
    }

    glBindTexture(GL_TEXTURE_2D, textureId_);
    glTexImage2D(GL_TEXTURE_2D, 0, colors, surface->w, surface->h, 0, texture_format, GL_UNSIGNED_BYTE, surface->pixels);
    /// This line tells me the pixel data is 8 bit, which isn't good?
    std::cout << "pixel size : " << sizeof(surface->pixels) << std::endl;
    /// This line gives me the correct result
    fprintf(stderr, "texture size : %d %d\n", surface->w, surface->h);
    glBindTexture(GL_TEXTURE_2D, 0);
}
As you can see in the comment, the pixels pointer in surface has a size of 8 bits, which is way too low for a texture. I don't know why it does that.
In the end, the texture data looks to be entirely filled with 0 (resulting in a black square with very basic shaders).
In this project I'm using GLFW to create the OpenGL context, so I'm not using SDL itself and did not initialize it.
However, I did initialize SDL_ttf; here is everything I do before calling setText:
std::vector<TTF_Font*> Text2Texture::loadedFonts_;

void Text2Texture::init() {
    if(TTF_Init() == -1) {
        fprintf(stderr, "TTF_Init: %s\n", TTF_GetError());
    }
}

int Text2Texture::loadFont(std::string const& fontPath) {
    loadedFonts_.emplace_back();
    loadedFonts_.back() = TTF_OpenFont(fontPath.data(), 32);
    if(loadedFonts_.back() == nullptr) {
        fprintf(stderr, "TTF_OpenFont: %s \n", TTF_GetError());
        loadedFonts_.pop_back();
        return -1;
    }
    return ((int)loadedFonts_.size() - 1);
}

/// The constructor initializes the texture:
Text2Texture::Text2Texture() {
    glGenTextures(1, &textureId_);
    glBindTexture(GL_TEXTURE_2D, textureId_);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}
My class has a static part; here is its body:
class Text2Texture {
public:
    Text2Texture();
    void setText(const char* text, size_t fontIndex = 0);
    unsigned int getId() const;
    // Static part
    static void init();
    static void quit();
    static int loadFont(std::string const& fontPath);
private:
    unsigned int textureId_;
    // Static part
    static std::vector<TTF_Font*> loadedFonts_;
};
I initialize SDL_ttf and load the fonts with the static methods, then I create class instances to create specific textures.
If you can find my mistake, I would be pleased to read your answer.
(By the way, I'm not really sure using SDL_ttf is the right approach; if you have a better idea I would take it too, but I would like to solve this problem first.)
The format and type parameters of glTexImage2D specify how a single pixel is encoded.
When the font texture is created, each pixel is encoded in a single byte. This means your texture consists of a single color channel and each pixel takes 1 byte.
I'm very sure that colors = surface->format->BytesPerPixel is 1.
Note that it is sufficient to encode the glyph in one color channel, because a glyph's information fits in a single byte.
By default, OpenGL assumes that the start of each row of an image is aligned to 4 bytes. This is because the GL_UNPACK_ALIGNMENT parameter defaults to 4. Since the image has one (red) color channel and is tightly packed, the start of a row may be misaligned.
Change the GL_UNPACK_ALIGNMENT parameter to 1 before specifying the two-dimensional texture image (glTexImage2D).
Since the texture has only one (red) color channel, the green and blue channels will be 0 and the alpha channel will be 1 when the texture is looked up. However, you can have the green, blue, and even alpha channels read from the red channel, too.
This can be achieved by setting the texture swizzle parameters GL_TEXTURE_SWIZZLE_G, GL_TEXTURE_SWIZZLE_B, and GL_TEXTURE_SWIZZLE_A. See glTexParameter.
Further, note that texture parameters are stored in the texture object. glTexParameteri changes the texture object currently bound to the specified target of the current texture unit, so it is sufficient to set the parameters once when the texture image is created.
In comparison, glPixelStorei changes a global state and may have to be reset to its default value after specifying the texture image (if later calls to glTexImage2D rely on it).
The specification of the 2-dimensional texture image and setting the parameters may look as follows:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, surface->w, surface->h, 0,
GL_RED, GL_UNSIGNED_BYTE, surface->pixels);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_A, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
In my OpenGL application, the texture is not rendered correctly on the model.
Here is a screenshot of the result:
Here is what the bunny should look like:
expected result
Here is the code to load the texture.
stbi_set_flip_vertically_on_load(1);
m_LocalBuffer = stbi_load(path.c_str(), &m_Width, &m_Height, &m_BPP, 0);
GLCall(glGenTextures(1, &m_RendererID));
GLCall(glBindTexture(GL_TEXTURE_2D, m_RendererID));
GLCall(glGenerateMipmap(GL_TEXTURE_2D));
GLenum format = GL_RGBA;
//..switching on m_BPP to set format, omitted here
GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR));
GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE));
GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE));
GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, m_Width, m_Height, 0, format, GL_UNSIGNED_BYTE, m_LocalBuffer));
GLCall(glBindTexture(GL_TEXTURE_2D, 0));
if (m_LocalBuffer) {
    stbi_image_free(m_LocalBuffer);
}
Here is the texture file I'm using
Texture File
I downloaded the asset from https://blenderartists.org/t/uv-unwrapped-stanford-bunny-happy-spring-equinox/1101297 (the 3.3Mb link)
Here is the code where I read in the texCoords
for (size_t i = 0; i < mesh->mNumVertices; i++) {
    //..read in positions and normals
    if (mesh->mTextureCoords[0]) {
        vertex.TexCoords.x = mesh->mTextureCoords[0][i].x;
        vertex.TexCoords.y = mesh->mTextureCoords[0][i].y;
    }
}
I'm loading the model as an OBJ file using Assimp. I just read the texture coordinates from the result and pass them to the shader. (GLCall is just a debug macro I have in the renderer.)
What could be the cause of this? Let me know if more info is needed. Thanks a lot!
The image seems to be flipped vertically (around the x-axis). To compensate for that, you have to flip the image manually after loading it; or, if you are already flipping the image, you have to omit that step. Whether the image has to be flipped or not depends on the image format.
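For example, since the loading code above uses stb_image, the flip is controlled by a single switch; toggling it is a quick experiment (a sketch using the question's own variables):

// The loader above currently calls this with 1. If the texture comes out
// upside down, try 0 (or drop the call); whether the flip is needed
// depends on where the image format puts its first pixel row.
stbi_set_flip_vertically_on_load(0);
m_LocalBuffer = stbi_load(path.c_str(), &m_Width, &m_Height, &m_BPP, 0);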
I am using the OpenGL, GLM, ILU and GLUT libraries for loading and texturing 3D models. The models appear to load correctly; however, when it comes to texturing, the texture seems to repeat.
I have included two pictures below showing non-textured, textured.
non-textured:
textured:
If you look closely at the last image, the texture is applied at a tiny scale and repeated across the whole model.
For the code, I first start by loading the texture.
ILboolean success = false;
if (ilGetInteger(IL_VERSION_NUM) < IL_VERSION)
{
    return false;
}

ilInit();                     /* Initialize the DevIL library */
ilGenImages(1, &ilTextureID); // Generate DevIL image objects
ilBindImage(ilTextureID);     /* Bind the image object */

success = ilLoadImage((const ILstring)theFilename); /* Load the image */
if (!success)
{
    ilDeleteImages(1, &ilTextureID);
    return false;
}

success = ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE); // Convert every colour component into unsigned byte.
if (!success)
{
    return false;
}

textureWidth = ilGetInteger(IL_IMAGE_WIDTH);
textureHeight = ilGetInteger(IL_IMAGE_HEIGHT);

glGenTextures(1, &GLTextureID);            // GL texture name generation
glBindTexture(GL_TEXTURE_2D, GLTextureID); // Bind the GL texture name
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);  // Linear interpolation for the magnification filter
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // Nearest sampling for the minification filter
glTexImage2D(GL_TEXTURE_2D, 0, ilGetInteger(IL_IMAGE_BPP), ilGetInteger(IL_IMAGE_WIDTH),
             ilGetInteger(IL_IMAGE_HEIGHT), 0, ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE,
             ilGetData()); /* Texture specification */
glBindTexture(GL_TEXTURE_2D, GLTextureID);
ilDeleteImages(1, &ilTextureID);
I have tried things like adding,
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
but this just seems to make the model non-textured.
Then I call the model loading method and apply the texture:
m_model = glmReadOBJ(mdlFilename);
glmFacetNormals(m_model);
glmVertexNormals(m_model, 180.0f, false);
m_TextureID = mdlTexture.getTexture();
m_model->textures[m_model->numtextures - 1].id = m_TextureID;
m_model->textures[m_model->numtextures - 1].width = mdlTexture.getTWidth();
m_model->textures[m_model->numtextures - 1].height = mdlTexture.getTHeight();
For the above code, while I was debugging I got negative values for "vertices", "normals" and "facetnorms" for the 3D model, but I did get values for "numnormals", "numtexcoords" and "numfacetnorms". I'm not entirely sure if this is normal.
And finally for the rendering of the model:
glPushMatrix();
//transformations here...
glTranslatef(mdlPosition.x, 0.0f, -mdlPosition.z);
glRotatef(mdlRotationAngle, 0, 1, 0);
glScalef(mdlScale.x, mdlScale.y, mdlScale.z);
glmDraw(m_model, GLM_SMOOTH | GLM_TEXTURE | GLM_MATERIAL);
glPopMatrix();
I've created an array of 2D textures and initialized it with glTexImage3D. Then I attached the separate layers to color attachments with glFramebufferTextureLayer. Framebuffer creation doesn't raise an error and everything seems fine until the draw call happens.
When the shader tries to access a color attachment, the following message appears:
OpenGL Debug Output message : Source : API; Type : ERROR; Severity : HIGH;
GL_INVALID_OPERATION error generated. <location> is invalid.
Shaders are accessing layers of an array with location qualifier:
layout (location = 0) out vec3 WorldPosOut;
layout (location = 1) out vec3 DiffuseOut;
layout (location = 2) out vec3 NormalOut;
layout (location = 3) out vec3 TexCoordOut;
The documentation says that glFramebufferTextureLayer works just like glFramebufferTexture2D, except for the layer parameter; so can I use location qualifiers with a texture array, or does some other way exist?
I finally managed to bind a texture array as a color buffer. It is hard to find useful information on this topic, so here is a walkthrough:
№1. You need to create a texture array and initialize it properly:
glGenTextures(1, &arrayBuffer);
glBindTexture(GL_TEXTURE_2D_ARRAY, arrayBuffer);

// we should initialize the layers for each mipmap level;
// each successive level must be half the size of the previous one,
// otherwise the texture is incomplete
for (int mip = 0; mip < mipLevelCount; ++mip) {
    glTexImage3D(GL_TEXTURE_2D_ARRAY, mip, internalFormat,
                 std::max(1, ImageWidth >> mip), std::max(1, ImageHeight >> mip),
                 layerCount, 0, GL_RGB, GL_UNSIGNED_INT, 0);
}

glTexParameterf(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, textureFilter);
glTexParameterf(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, textureFilter);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAX_LEVEL, mipLevelCount - 1);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Keep in mind that setting texture parameters like the MIN/MAG filters and the BASE/MAX mipmap levels is important. OpenGL sets the maximum mipmap level to 1000 by default, and if you don't provide the whole mipmap chain you will get an incomplete texture; you won't see anything except a black screen.
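As a side note (my own addition, not part of the original answer), mipLevelCount for a full chain can be derived from the image size; a minimal sketch using the variables above:

#include <algorithm>
#include <cmath>

// A full mip chain goes down to 1x1, e.g. 2048x2048 -> 12 levels (2048, 1024, ..., 1).
int mipLevelCount = 1 + static_cast<int>(
    std::floor(std::log2(std::max(ImageWidth, ImageHeight))));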
№2. Don't forget to bind arrayBuffer to the GL_TEXTURE_2D_ARRAY target before attaching the layers to the color buffers:
glBindTexture(GL_TEXTURE_2D_ARRAY, arrayBuffer);
for (unsigned int i = 0; i < NUMBER_OF_TEXTURES; i++) {
    glFramebufferTextureLayer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, arrayBuffer, 0, i);
}
Don't forget to set the GL_TEXTURE_2D_ARRAY binding back to 0 with glBindTexture, or the texture can get modified outside of the initialization code.
№3. Since the internalFormat of each image in the array must stay the same, I recommend creating a separate texture for the depth/stencil buffer:
glGenTextures(1, &m_depthTexture);
...
glBindTexture(GL_TEXTURE_2D, m_depthTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH32F_STENCIL8, WindowWidth, WindowHeight,
             0, GL_DEPTH_STENCIL, GL_FLOAT_32_UNSIGNED_INT_24_8_REV, NULL);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                       GL_TEXTURE_2D, m_depthTexture, 0);
Don't forget to set up an index for each color buffer:
for (int i = 0; i < GBUFFER_NUM_TEXTURES; ++i)
    DrawBuffers[i] = GL_COLOR_ATTACHMENT0 + i; // sets the appropriate index for each color buffer
glDrawBuffers(ARRAY_SIZE_IN_ELEMENTS(DrawBuffers), DrawBuffers);
In shaders you can use layout(location = n) qualifiers to specify the color buffer.
OpenGL 3 note (NVIDIA): glFramebufferTextureLayer is available since OpenGL 3.2 (core profile), but on NVIDIA GPUs the drivers will force the OpenGL version to 4.5, so you should specify the exact version of OpenGL if you care about compatibility. I use SDL2 in my application, so I use the following calls to set the OpenGL version:
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
Results of the deferred shading:
I am using OpenGL. I can load TGA files properly, but for some reason when I render JPG files I do not see them correctly.
This is what the image is supposed to look like--
And this is what it looks like. Why is it stretched? Is it because of the coordinates?
Here is the code I am using for drawing.
void Renderer::DrawJpg(GLuint tex, int xi, int yq, int width, int height) const
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
    glTexCoord2i(0, 0); glVertex2i(0 + xi, 0 + xi);
    glTexCoord2i(0, 1); glVertex2i(0 + xi, height + xi);
    glTexCoord2i(1, 1); glVertex2i(width + xi, height + xi);
    glTexCoord2i(1, 0); glVertex2i(width + xi, 0 + xi);
    glEnd();
}
This is how I am loading the image...
imagename = s;
ILboolean success;
ilInit();
ilGenImages(1, &id);
ilBindImage(id);
success = ilLoadImage((const ILstring)imagename.c_str());
if (success)
{
    /* Convert every colour component into unsigned byte. If your image
       contains an alpha channel you can replace IL_RGB with IL_RGBA. */
    success = ilConvertImage(IL_RGB, IL_UNSIGNED_BYTE);
    if (!success)
    {
        printf("image conversion failed.");
    }
    glGenTextures(1, &id);
    glBindTexture(GL_TEXTURE_2D, id);
    width = ilGetInteger(IL_IMAGE_WIDTH);
    height = ilGetInteger(IL_IMAGE_HEIGHT);
    glTexImage2D(GL_TEXTURE_2D, 0, ilGetInteger(IL_IMAGE_BPP), ilGetInteger(IL_IMAGE_WIDTH),
                 ilGetInteger(IL_IMAGE_HEIGHT), 0, ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE,
                 ilGetData());
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // linear filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
I probably should mention that some images did get rendered properly. I thought it was because width != height, but that is not the case; images with width != height also get loaded fine.
Yet for other images I still get this problem.
You probably need to call
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
before uploading the texture data with glTexImage2D.
From the reference pages:
GL_UNPACK_ALIGNMENT: Specifies the alignment requirements for the start of each pixel row in memory. The allowable values are 1 (byte-alignment), 2 (rows aligned to even-numbered bytes), 4 (word-alignment), and 8 (rows start on double-word boundaries).
The default value for the alignment is 4 and your image loading library probably returns pixel data with byte-aligned rows, which explains why some of your images look OK (when the width is a multiple of four).
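As a quick worked example (my own illustration, not from the answer): with tightly packed 3-byte RGB pixels, the row size OpenGL expects under the default alignment can be computed like this:

// With GL_UNPACK_ALIGNMENT = 4, OpenGL rounds each row up to a multiple of 4:
// width 128: 128 * 3 = 384 bytes, already a multiple of 4 -> looks fine;
// width 123: 123 * 3 = 369 bytes, read as 372 -> every row shifts by 3 bytes.
int rowBytes = width * 3;              // tightly packed RGB row
int glRowBytes = (rowBytes + 3) & ~3;  // row size OpenGL actually reads
int padding = glRowBytes - rowBytes;   // 0 only for "good" widths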
Always try to have image width and height as powers of two, because some GPUs support only power-of-two texture resolutions (for example 128x128 or 512x512, but not 123x533 or 128x532).
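A minimal helper to check that at load time (illustrative only):

// True for power-of-two sizes such as 128 or 2048; false for 123 or 532.
bool isPowerOfTwo(int n) { return n > 0 && (n & (n - 1)) == 0; }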
And I think that here, instead of GL_REPEAT, you should use GL_CLAMP_TO_EDGE :)
GL_REPEAT is used when your texture coordinates go beyond 1.0f; GL_CLAMP_TO_EDGE handles that case too, but guarantees the image will fill the polygon without unwanted lines at the edges (it stops linear filtering from wrapping around at the borders).
Remember to try out code where floats are used (sample from comment); a rough illustration follows :)
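I don't have that comment's sample, but a float-coordinate version of the DrawJpg quad could look like the sketch below (note that the original passes xi for both the x and y vertex offsets, which looks unintended; this sketch assumes yq was meant for y):

glBindTexture(GL_TEXTURE_2D, tex);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2i(xi, yq);
glTexCoord2f(0.0f, 1.0f); glVertex2i(xi, yq + height);
glTexCoord2f(1.0f, 1.0f); glVertex2i(xi + width, yq + height);
glTexCoord2f(1.0f, 0.0f); glVertex2i(xi + width, yq);
glEnd();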
Here is a good explanation: http://open.gl/textures :)