Save image data to SQLite - C++

I have a function which loads an image from file and successfully creates an OpenGL texture from it.
/**
 * @brief Loads a texture from file and generates an OpenGL texture from it.
 *
 * @param filename Path to the image file.
 * @param out_texture Texture id the result is bound to.
 * @param out_width Value pointer the resulting image width is written to.
 * @param out_height Value pointer the resulting image height is written to.
 * @param flip_image stb indicator for flipping the image.
 * @return true Image has been successfully loaded.
 * @return false Failed to load the image.
 */
bool LoadTextureFromFile(const char *filename, GLuint *out_texture, int *out_width, int *out_height, bool flip_image = false)
{
    // Load from file
    int image_width = 0;
    int image_height = 0;
    stbi_set_flip_vertically_on_load(flip_image);
    unsigned char *image_data = stbi_load(filename, &image_width, &image_height, NULL, 4);
    if (image_data == NULL)
    {
        std::cout << "ERROR::Tools::GLHelper::LoadTextureFromFile - Failed to load image from file '" << filename << "'." << std::endl;
        stbi_image_free(image_data);
        return false;
    }
    // Create an OpenGL texture identifier
    GLuint image_texture;
    glGenTextures(1, &image_texture);
    glBindTexture(GL_TEXTURE_2D, image_texture);
    // Set texture wrapping parameters
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    // Set texture filtering parameters
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image_width, image_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, image_data);
    glGenerateMipmap(GL_TEXTURE_2D);
    *out_texture = image_texture;
    *out_width = image_width;
    *out_height = image_height;
    stbi_image_free(image_data);
    return true;
}
What I am trying to do is load an image via stbi_load like above and save it as a BLOB to SQLite. Afterwards I want to be able to load the very same blob and create an OpenGL texture from it in a separate function.
In the first step I created a function which only loads the image:
unsigned char *ImageDataFromFile(const char *filename)
{
    int image_width = 0;
    int image_height = 0;
    unsigned char *image_data = stbi_load(filename, &image_width, &image_height, NULL, 4);
    if (image_data == NULL)
    {
        std::cout << "ERROR::Tools::FileHelper::ImageDataFromFile - Failed to load image from file '" << filename << "'." << std::endl;
        stbi_image_free(image_data);
    }
    return image_data;
}
In the next step I want to store this data in my SQLite database:
void DBConnector::AddImage(std::string name, unsigned char *data)
{
    sqlite3_stmt *stmt;
    int err = sqlite3_prepare_v2(db, "INSERT INTO images (name, img_data) VALUES (?, ?)", -1, &stmt, NULL);
    if (err != SQLITE_OK)
    {
        std::cout << "ERROR::DATA::DBConnector - Failed to prepare sqlite3 statement: \n"
                  << sqlite3_errmsg(db) << std::endl;
    }
    sqlite3_bind_text(stmt, 1, name.c_str(), -1, SQLITE_TRANSIENT);
    sqlite3_bind_blob(stmt, 2, data, -1, SQLITE_TRANSIENT);
    sqlite3_step(stmt);
    sqlite3_finalize(stmt);
    return;
}
Finally I connect the pieces:
unsigned char *image_data = Tools::FileHelper::ImageDataFromFile(selected_filepath.c_str());
db->AddImage("Foo", image_data);
What happens is that seemingly arbitrary data ends up in the database, which is definitely not image data. Sometimes the entries are just empty.
I suspect that I am handling the return type of stbi_load incorrectly, forcing random memory data into the database. Excerpt from the stb documentation:
The return value from an image loader is an 'unsigned char *' which points to the pixel data.
As I understand it, I am simply passing the array pointer to sqlite3_bind_blob, which accepts a const void * just like glTexImage2D does. So why does it work for the one but not for the other? Or could the error source be somewhere else?
Edit
I also tried something else. Normally I pass -1 for size when calling e.g. sqlite3_bind_text, because the call will then automatically search for a null terminator. So I thought that I might have to pass the correct size in bytes when calling sqlite3_bind_blob, because there is no terminator in raw pixel data. So for an image with a size of 225 x 225 with 3 channels, I passed 225 * 225 * 3 as the size parameter. Unfortunately this did not work either.
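For reference, here is a minimal sketch of how the byte count could be passed explicitly. Note that because stbi_load above is called with 4 as the desired channel count, the returned buffer always holds width * height * 4 bytes regardless of how many channels the file itself has, so 225 * 225 * 3 would be the wrong count for an image loaded that way (the extra width/height parameters are hypothetical additions to AddImage):
// Sketch: bind the pixel buffer with an explicit byte count. There is no
// terminator in raw pixel data, so -1 must not be used for blobs.
void DBConnector::AddImage(std::string name, unsigned char *data, int width, int height)
{
    sqlite3_stmt *stmt;
    int err = sqlite3_prepare_v2(db, "INSERT INTO images (name, img_data) VALUES (?, ?)", -1, &stmt, NULL);
    if (err != SQLITE_OK)
        return;
    int num_bytes = width * height * 4; // 4 channels forced in stbi_load
    sqlite3_bind_text(stmt, 1, name.c_str(), -1, SQLITE_TRANSIENT);
    sqlite3_bind_blob(stmt, 2, data, num_bytes, SQLITE_TRANSIENT);
    sqlite3_step(stmt);
    sqlite3_finalize(stmt);
}
When reading the row back, sqlite3_column_blob and sqlite3_column_bytes return the pointer and its length, which together with the stored dimensions can be handed to glTexImage2D.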

Related

Generated glyphs from FreeType containing segments of memory

While implementing text rendering into my game engine using FreeType (2.10.1), I am encountering odd-looking glyphs, containing the letter repeated four times, mirrored on the x axis and turned upside down.
Above the desired letter there seems to be neighboring memory interpreted as a glyph, which changes every run and causes a segfault on some launches.
This is what I get when I try to render the word "sphinx".
Here is the full example sentence "sphinx of black quartz, judge my vow" flipped horizontally and rotated 180 degrees.
I have compiled this code to rule out my MinGW environment being erroneous.
I'm very sure my usage of OpenGL is not the issue, since my texture uploading and rendering code works for other images.
Currently I'm wrapping the important bits of the FT_Glyph_Slot in a struct called Letter and caching that struct. Removing the wrapping and caching did not fix the error.
Here are the relevant code snippets:
FreeType initialization.
// src/library/services/graphics/font/FreeType.cpp
void FreeType::initialize() {
    Logger::info("Initializing FreeType");
    if (FT_Init_FreeType(&m_library)) {
        Logger::error("Could not initialize FreeType");
        return;
    }
}

void FreeType::useFont(const std::string& fontName, const unsigned int fontSize = 42) {
    Logger::info("Loading font " + fontName);
    if (FT_New_Face(m_library, fontName.c_str(), 0, &m_currentFace)) {
        Logger::error("Could not open font " + fontName);
        return;
    }
    FT_Set_Pixel_Sizes(m_currentFace, 0, fontSize);
}
The code using FreeType to create a Letter.
// src/library/services/graphics/font/FreeType.cpp
std::shared_ptr<Letter> FreeType::getLetter(unsigned long character) {
    // Try loading from cache
    if (std::shared_ptr<Letter> letter = m_letters.get(std::to_string(character))) {
        return letter;
    }
    return loadLetter(character);
}

std::shared_ptr<Letter> FreeType::loadLetter(unsigned long character) {
    if (FT_Load_Char(m_currentFace, character, FT_LOAD_RENDER)) {
        Logger::error("Could not load character " + std::string(1, character));
        return std::shared_ptr<Letter>();
    }
    FT_GlyphSlot& glyph = m_currentFace->glyph;
    Letter letter = {
        .id = character,
        .textureId = 0,
        .bitmap = {
            .buffer = glyph->bitmap.buffer,
            .width = glyph->bitmap.width,
            .height = glyph->bitmap.rows
        },
        .offset = {
            .x = glyph->bitmap_left,
            .y = glyph->bitmap_top
        },
        .advance = {
            .x = glyph->advance.x,
            .y = glyph->advance.y
        }
    };
    std::shared_ptr<Letter> sharedLetter = std::make_shared<Letter>(letter);
    cache(sharedLetter);
    return sharedLetter;
}

void FreeType::cache(std::shared_ptr<Letter> letter) {
    m_letters.add(std::to_string(letter->id), letter);
}
The graphics system initializing FreeType
// src/library/services/graphics/opengl/OpenGLGraphics.cpp
void OpenGLGraphics::initialize(int windowWidth, int windowHeight) {
    // ... OpenGL initialization
    m_freeType.initialize();
    m_freeType.useFont("../../../src/library/assets/fonts/OpenSans-Regular.ttf");
}
The code getting the Letter in the text renderer.
// src/library/services/graphics/opengl/OpenGLGraphics.cpp
void OpenGLGraphics::drawText(const std::string &text, Vector2f location) {
    for (auto iterator = text.begin(); iterator < text.end(); ++iterator) {
        std::shared_ptr<Letter> letter = m_freeType.getLetter(*iterator);
        if (!letter->textureId) {
            std::shared_ptr<Texture> tex =
                ImageLoader::loadFromCharArray(
                    letter->bitmap.buffer,
                    letter->bitmap.width,
                    letter->bitmap.height
                );
            letter->textureId = tex->id;
            m_freeType.cache(letter);
        }
        // ... OpenGL text rendering
    }
}
The code for generating a Texture from the bitmap->buffer.
// src/library/services/graphics/opengl/util/ImageLoader.cpp
std::shared_ptr<Texture>
ImageLoader::loadFromCharArray(const unsigned char *image, const unsigned int width, const unsigned int height) {
    std::shared_ptr<Texture> texture = std::make_shared<Texture>();
    texture->width = width;
    texture->height = height;
    glGenTextures(1, &texture->id);
    glBindTexture(GL_TEXTURE_2D, texture->id);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei) width, (GLsizei) height, 0, GL_RGBA, GL_UNSIGNED_BYTE, image);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glGenerateMipmap(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, 0);
    return texture;
}
If the supplied code snippets should not suffice, I will gladly add more.
This project is open source and available here on GitHub.
You assume that FreeType always generates 8-bits-per-channel RGBA images, but it does not.
You need to check bitmap.pixel_mode to see what image format you got.
Usually it will be either FT_PIXEL_MODE_GRAY, meaning 8-bits-per-pixel greyscale, or FT_PIXEL_MODE_MONO, meaning 1-bit-per-pixel monochrome.
See the manual for more details.
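For illustration, a minimal sketch of an upload path for the FT_PIXEL_MODE_GRAY case, assuming a GL 3.3+ context; the function name uploadGrayGlyph is made up for this example, and it assumes bitmap.pitch equals bitmap.width (otherwise set GL_UNPACK_ROW_LENGTH accordingly):
// Sketch: upload an 8-bit greyscale FreeType bitmap
// (glyph->bitmap.pixel_mode == FT_PIXEL_MODE_GRAY).
GLuint uploadGrayGlyph(const FT_Bitmap& bitmap) {
    GLuint id;
    glGenTextures(1, &id);
    glBindTexture(GL_TEXTURE_2D, id);
    // Rows are 1 byte per pixel, so they are not 4-byte aligned.
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RED,
                 (GLsizei) bitmap.width, (GLsizei) bitmap.rows, 0,
                 GL_RED, GL_UNSIGNED_BYTE, bitmap.buffer);
    // Replicate the single channel so shaders see white with alpha.
    GLint swizzle[4] = { GL_ONE, GL_ONE, GL_ONE, GL_RED };
    glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return id;
}
An FT_PIXEL_MODE_MONO bitmap would additionally need its 1-bit rows expanded to bytes before uploading.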

Error when loading image with stb

I am attempting to load the following image:
As a texture for the Stanford dragon. The result, however, is as follows:
I have read that other people have had issues with this due to either not binding the textures correctly or using the wrong number of components when loading a texture. I don't think I have either of those issues, as I am both checking the format of the image and binding the texture. I have managed to get other images to load correctly, so there seems to be an issue specific to this image (I am not saying the image is corrupted, rather that something about it is slightly different from the other images I have tried).
The code I am using to initialize the texture is as follows:
//Main constructor
Texture::Texture(string file_path, GLuint t_target)
{
    //Change the coordinate system of the image
    stbi_set_flip_vertically_on_load(true);
    int numComponents;
    //Load the pixel data of the image
    void *data = stbi_load(file_path.c_str(), &width, &height, &numComponents, 0);
    if (data == nullptr)//Error check
    {
        cerr << "Error when loading texture from file: " + file_path << endl;
        Log::record_log(
            string(80, '!') +
            "\nError when loading texture from file: " + file_path + "\n" +
            string(80, '!')
        );
        exit(EXIT_FAILURE);
    }
    //Create the texture OpenGL object
    target = t_target;
    glGenTextures(1, &textureID);
    glBindTexture(target, textureID);
    //Name the texture
    glObjectLabel(GL_TEXTURE, textureID, -1,
        ("\"" + extract_name(file_path) + "\"").c_str());
    //Set the color format
    color_format = numComponents == 3 ? GL_RGB : GL_RGBA;
    glTexImage2D(target, 0, color_format, width, height, 0,
        color_format, GL_UNSIGNED_BYTE, data);
    //Set the texture parameters of the image
    glTexParameteri(target, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(target, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(target, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(target, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    //Free the memory
    stbi_image_free(data);
    //Create a debug notification event
    char name[100];
    glGetObjectLabel(GL_TEXTURE, textureID, 100, NULL, name);
    string message = "Successfully created texture: " + string(name) +
        ". Bound to target: " + textureTargetEnumToString(target);
    glDebugMessageInsert(GL_DEBUG_SOURCE_APPLICATION, GL_DEBUG_TYPE_OTHER, 0,
        GL_DEBUG_SEVERITY_NOTIFICATION, message.size(), message.c_str());
}
A JPEG eh? Probably no alpha channel then. And 894 pixels wide isn't quite evenly divisible by 4.
Double-check if you're hitting the numComponents == 3 case and if so, make sure GL_UNPACK_ALIGNMENT is set to 1 (default 4) with glPixelStorei() before your glTexImage2D() call.
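Concretely, a sketch of that fix applied to the constructor above, right before the upload:
//Set the color format; tightly packed RGB rows (width * 3 bytes, e.g.
//894 * 3 = 2682) are not a multiple of 4, so drop the default alignment
if (numComponents == 3)
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(target, 0, color_format, width, height, 0,
    color_format, GL_UNSIGNED_BYTE, data);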

Texture Mapping a square image onto a circle OpenGl

I am trying to map a square image of a clock face onto a circle GL_POLYGON that I have created. I am currently using the following code:
float angle, radian, x, y, xcos, ysin, tx, ty;
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, an_face_texture1);
glBegin(GL_POLYGON);
for (angle = 0.0; angle < 360.0; angle += 2.0)
{
    radian = angle * (pi / 180.0f);
    xcos = (float)cos(radian);
    ysin = (float)sin(radian);
    x = xcos * radius;
    y = ysin * radius;
    tx = (x / radius + 1) * 0.5;
    ty = (y / radius + 1) * 0.5;
    glTexCoord2f(tx, ty);
    glVertex2f(x, y);
}
glEnd();
glDisable(GL_TEXTURE_2D);
However when I do it I end up with a weird overlapping image effect, as shown here: The original texture image is as above; however, the corners are cut out and it is in PNG format. This way of generating the texture coordinates was taken from a previous answer: HERE
Below is the code used to load the image:
#ifndef PNGLOAD_H
#define PNGLOAD_H
#include <png.h>
#include <stdlib.h>

int png_load(const char* file_name,
             int* width,
             int* height,
             char** image_data_ptr)
{
    png_byte header[8];
    FILE* fp = fopen(file_name, "rb");
    if (fp == 0)
    {
        fprintf(stderr, "error: could not open PNG file %s\n", file_name);
        perror(file_name);
        return 0;
    }
    // read the header
    fread(header, 1, 8, fp);
    if (png_sig_cmp(header, 0, 8))
    {
        fprintf(stderr, "error: %s is not a PNG.\n", file_name);
        fclose(fp);
        return 0;
    }
    png_structp png_ptr = png_create_read_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
    if (!png_ptr)
    {
        fprintf(stderr, "error: png_create_read_struct returned 0.\n");
        fclose(fp);
        return 0;
    }
    // create png info struct
    png_infop info_ptr = png_create_info_struct(png_ptr);
    if (!info_ptr)
    {
        fprintf(stderr, "error: png_create_info_struct returned 0.\n");
        png_destroy_read_struct(&png_ptr, (png_infopp)NULL, (png_infopp)NULL);
        fclose(fp);
        return 0;
    }
    // create png end-info struct
    png_infop end_info = png_create_info_struct(png_ptr);
    if (!end_info)
    {
        fprintf(stderr, "error: png_create_info_struct returned 0.\n");
        png_destroy_read_struct(&png_ptr, &info_ptr, (png_infopp)NULL);
        fclose(fp);
        return 0;
    }
    // the code in this if statement gets called if libpng encounters an error
    if (setjmp(png_jmpbuf(png_ptr))) {
        fprintf(stderr, "error from libpng\n");
        png_destroy_read_struct(&png_ptr, &info_ptr, &end_info);
        fclose(fp);
        return 0;
    }
    // init png reading
    png_init_io(png_ptr, fp);
    // let libpng know you already read the first 8 bytes
    png_set_sig_bytes(png_ptr, 8);
    // read all the info up to the image data
    png_read_info(png_ptr, info_ptr);
    // variables to pass to get info
    int bit_depth, color_type;
    png_uint_32 temp_width, temp_height;
    // get info about png
    png_get_IHDR(png_ptr, info_ptr, &temp_width, &temp_height, &bit_depth, &color_type,
                 NULL, NULL, NULL);
    if (width)  { *width = temp_width; }
    if (height) { *height = temp_height; }
    // Update the png info struct.
    png_read_update_info(png_ptr, info_ptr);
    // Row size in bytes.
    int rowbytes = png_get_rowbytes(png_ptr, info_ptr);
    // glTexImage2D requires rows to be 4-byte aligned
    rowbytes += 3 - ((rowbytes - 1) % 4);
    // Allocate the image_data as a big block, to be given to opengl
    png_byte* image_data;
    image_data = (png_byte*)malloc(rowbytes * temp_height * sizeof(png_byte) + 15);
    if (image_data == NULL)
    {
        fprintf(stderr, "error: could not allocate memory for PNG image data\n");
        png_destroy_read_struct(&png_ptr, &info_ptr, &end_info);
        fclose(fp);
        return 0;
    }
    // row_pointers is for pointing to image_data for reading the png with libpng
    png_bytep* row_pointers = (png_bytep*)malloc(temp_height * sizeof(png_bytep));
    if (row_pointers == NULL)
    {
        fprintf(stderr, "error: could not allocate memory for PNG row pointers\n");
        png_destroy_read_struct(&png_ptr, &info_ptr, &end_info);
        free(image_data);
        fclose(fp);
        return 0;
    }
    // set the individual row_pointers to point at the correct offsets of image_data
    int i;
    for (i = 0; i < temp_height; i++)
    {
        row_pointers[temp_height - 1 - i] = image_data + i * rowbytes;
    }
    // read the png into image_data through row_pointers
    png_read_image(png_ptr, row_pointers);
    // clean up
    png_destroy_read_struct(&png_ptr, &info_ptr, &end_info);
    *image_data_ptr = (char*)image_data; // return data pointer
    free(row_pointers);
    fclose(fp);
    fprintf(stderr, "\t texture image size is %d x %d\n", *width, *height);
    return 1;
}
#endif
and:
unsigned int load_and_bind_texture(const char* filename)
{
    char* image_buffer = NULL; // the image data
    int width = 0;
    int height = 0;
    // read in the PNG image data into image_buffer
    if (png_load(filename, &width, &height, &image_buffer) == 0)
    {
        fprintf(stderr, "Failed to read image texture from %s\n", filename);
        exit(1);
    }
    unsigned int tex_handle = 0;
    // request one texture handle
    glGenTextures(1, &tex_handle);
    // create a new texture object and bind it to tex_handle
    glBindTexture(GL_TEXTURE_2D, tex_handle);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    glTexImage2D(GL_TEXTURE_2D, 0,
                 GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, image_buffer);
    free(image_buffer); // free the image buffer memory
    return tex_handle;
}
These are then called from the init() method:
background_texture = load_and_bind_texture("images/office-wall.png");
an_face_texture1 = load_and_bind_texture("images/clock.png");
The image is loaded in the same way the background is loaded.
Yes, and that is almost certainly the problem. While both images are PNGs, they are almost certainly not the same format.
Let's actually debug what you see in the loaded texture. You see 2 overlapping with 10. 3 overlapped with 9. 8 overlapped with 4. All interlaced with each other. And this pattern repeats 3 times.
It's as if you took the original image, folded it over itself vertically, and then repeated it. 3 times.
The repetition of "3" in this strongly suggests a mismatch between what libPNG actually read and what you told OpenGL the texel data actually was. You told OpenGL that the texture was in the RGB format, 3 bytes per pixel.
But not every PNG is formatted that way. Some PNGs are greyscale; one byte per pixel. And because you used the low-level libPNG reading interface, you read the exact format of the pixel data from the PNG. Yes, it decompresses it. But you're reading exactly what the PNG stored conceptually.
So if the PNG is a greyscale PNG, your call to png_read_image can read data that isn't 3 bytes per pixel. But you told OpenGL that the data was 3 bytes per pixel. So if libPNG wrote 1 byte per pixel, you will be giving OpenGL the wrong texel data.
That's bad.
If you're going to use libPNG's low-level reading routines, then you must actually check the format of the PNG being read and adjust your OpenGL code to match.
It would be much easier to use the higher-level reading routines and explicitly tell them to translate grayscale to RGB.
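For the low-level route, a sketch of the transform requests that could sit between png_read_info and png_read_update_info in the png_load function above, so that libpng always hands back 8-bit RGB(A) no matter what the file stores:
// Sketch: ask libpng to normalize the pixel data to 8-bit RGB(A).
if (color_type == PNG_COLOR_TYPE_PALETTE)
    png_set_palette_to_rgb(png_ptr);            // expand palette to RGB
if (color_type == PNG_COLOR_TYPE_GRAY && bit_depth < 8)
    png_set_expand_gray_1_2_4_to_8(png_ptr);    // expand to 8-bit greyscale
if (color_type == PNG_COLOR_TYPE_GRAY ||
    color_type == PNG_COLOR_TYPE_GRAY_ALPHA)
    png_set_gray_to_rgb(png_ptr);               // greyscale -> RGB
if (bit_depth == 16)
    png_set_strip_16(png_ptr);                  // 16-bit -> 8-bit channels
After these calls, png_get_rowbytes reflects the transformed layout, so the rest of png_load keeps working unchanged.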

OpenGL transparency doing weird things

I am trying to render a texture with an alpha channel in it.
This is what I used for texture loading:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, data);
I enabled GL_BLEND just before I render the texture: glEnable(GL_BLEND);
I also did this at the beginning of the code(the initialization): glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
This is the result (it should be a transparent texture of a first-person hand):
But when I load my texture like this(no alpha channel):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);
This is the result:
Does anyone know what can cause this, or do I have to give more code?
Sorry for the bad English; thanks in advance.
EDIT:
My texture loading code:
GLuint Texture::loadTexture(const char * imagepath) {
    printf("Reading image %s\n", imagepath);
    // Data read from the header of the BMP file
    unsigned char header[54];
    unsigned int dataPos;
    unsigned int imageSize;
    unsigned int width, height;
    // Actual RGB data
    unsigned char * data;
    // Open the file
    FILE * file = fopen(imagepath, "rb");
    if (!file) { printf("%s could not be opened. \n", imagepath); getchar(); exit(0); }
    // Read the header, i.e. the first 54 bytes
    // If less than 54 bytes are read, there is a problem
    if (fread(header, 1, 54, file) != 54) {
        printf("Not a correct BMP file\n");
        exit(0);
    }
    // A BMP file always begins with "BM"
    if (header[0] != 'B' || header[1] != 'M') {
        printf("Not a correct BMP file\n");
        exit(0);
    }
    // Make sure this is a 24bpp file
    if (*(int*)&(header[0x1E]) != 0)  { printf("Not a correct BMP file\n"); }
    if (*(int*)&(header[0x1C]) != 24) { printf("Not a correct BMP file\n"); }
    // Read the information about the image
    dataPos = *(int*)&(header[0x0A]);
    imageSize = *(int*)&(header[0x22]);
    width = *(int*)&(header[0x12]);
    height = *(int*)&(header[0x16]);
    // Some BMP files are misformatted, guess missing information
    if (imageSize == 0) imageSize = width*height * 3; // 3 : one byte for each Red, Green and Blue component
    if (dataPos == 0) dataPos = 54; // The BMP header is done that way
    // Create a buffer
    data = new unsigned char[imageSize];
    // Read the actual data from the file into the buffer
    fread(data, 1, imageSize, file);
    // Everything is in memory now, the file can be closed
    fclose(file);
    // Create one OpenGL texture
    GLuint textureID;
    glGenTextures(1, &textureID);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glBindTexture(GL_TEXTURE_2D, textureID);
    if (imagepath == "hand.bmp") {
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
    } else {
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);
    }
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glGenerateMipmap(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    delete[] data;
    return textureID;
}
As you can see it's not my own code; I got it from opengl-tutorial.org.
My first comment stated:
The repeating, offset pattern looks like the data is treated as having a larger stride than it really has (or the opposite).
And that was before I actually noticed what you did. Yes, this is precisely that. You can't treat 4-bytes-per-pixel data as 3-bytes-per-pixel data. The alpha channel gets interpreted as colour and that's why it all offsets this way.
If you want to disregard the alpha channel, you need to strip it off when loading so that it ends up having 3 bytes for each pixel value in the OpenGL texture memory. (That's what @RetoKoradi's answer is proposing, namely creating an RGB texture from RGBA data.)
If it isn't actually supposed to look so blue-ish, maybe it's not actually in BGR layout?
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, data);
^
\--- change to GL_RGBA as well
My wild guess is that human skin would have more red than blue light reflected by it.
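To make the stripping concrete, a minimal sketch assuming data holds width * height BGRA pixels, as in the loader above:
// Sketch: strip the alpha channel on the CPU so the buffer really is
// 3 bytes per pixel before it is handed to glTexImage2D with GL_BGR.
unsigned char *rgbData = new unsigned char[width * height * 3];
for (unsigned int i = 0; i < width * height; ++i) {
    rgbData[i * 3 + 0] = data[i * 4 + 0]; // B
    rgbData[i * 3 + 1] = data[i * 4 + 1]; // G
    rgbData[i * 3 + 2] = data[i * 4 + 2]; // R
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
    GL_BGR, GL_UNSIGNED_BYTE, rgbData);
delete[] rgbData;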
It looks like you misunderstood how the arguments of glTexImage2D() work:
The 3rd argument (internalformat) defines what format you want to use for the data stored in the texture.
The 7th and 8th argument (format and type) define the format of the data you pass into the call as the last argument.
Based on this, if the format of the data you're passing as the last argument is BGRA, and you want to create an RGB texture from it, the correct call is:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, data);
Note that the 7th argument is now GL_BGRA, matching your input data, while the 3rd argument is GL_RGB, specifying that you want to use an RGB texture.
Seems you chose the wrong texture pixel alignment. To specify the right one, try experimenting with the values (1, 2, 4) of glPixelStorei with GL_UNPACK_ALIGNMENT.
Specification:
void glPixelStorei(GLenum pname, GLint param);
pname Specifies the symbolic name of the parameter to be set. One value affects the packing of pixel data into memory: GL_PACK_ALIGNMENT. The other affects the unpacking of pixel data from memory: GL_UNPACK_ALIGNMENT.
param Specifies the value that pname is set to.
glPixelStorei sets pixel storage modes that affect the operation of subsequent glReadPixels as well as the unpacking of texture patterns (see glTexImage2D and glTexSubImage2D).
pname is a symbolic constant indicating the parameter to be set, and param is the new value. One storage parameter affects how pixel data is returned to client memory:
GL_PACK_ALIGNMENT
Specifies the alignment requirements for the start of each pixel row in memory. The allowable values are 1 (byte-alignment), 2 (rows aligned to even-numbered bytes), 4 (word-alignment), and 8 (rows start on double-word boundaries).
The other storage parameter affects how pixel data is read from client memory:
GL_UNPACK_ALIGNMENT
Specifies the alignment requirements for the start of each pixel row in memory. The allowable values are 1 (byte-alignment), 2 (rows aligned to even-numbered bytes), 4 (word-alignment), and 8 (rows start on double-word boundaries).
The BMP format does not support transparency, at least not in the three most common versions (only GL_BGR mode and its masked modifications work). Use PNG, DDS, TIFF or TGA (the simplest) instead.
Secondly, your total image data size computation formula is wrong:
imageSize = width*height * 3; // 3 : one byte for each Red, Green and Blue component
The right formula is:
imageSize = 4 * ((width * bitsPerPel + 31) / 32) * height;
where bitsPerPel is the picture's bits per pixel (8, 16 or 24); each BMP row is padded to a 4-byte boundary.
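A quick check of the formula with concrete numbers (a 225 x 225 image at 24bpp, the size mentioned in the first question, used purely as an example):
// Worked example: 225 x 225 at 24 bpp.
// Row bytes = 4 * ((225 * 24 + 31) / 32) = 4 * 169 = 676,
// i.e. 675 bytes of pixel data plus 1 padding byte per row.
unsigned int bitsPerPel = 24, width = 225, height = 225;
unsigned int rowSize = 4 * ((width * bitsPerPel + 31) / 32); // 676
unsigned int imageSize = rowSize * height;                   // 152100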
Here is the code of a function which is used to load simple TGA files with transparency support:
// Define targa header.
#pragma pack(1)
typedef struct
{
    GLbyte identsize;               // Size of ID field that follows header (0)
    GLbyte colorMapType;            // 0 = None, 1 = paletted
    GLbyte imageType;               // 0 = none, 1 = indexed, 2 = rgb, 3 = grey, +8=rle
    unsigned short colorMapStart;   // First colour map entry
    unsigned short colorMapLength;  // Number of colors
    unsigned char colorMapBits;     // bits per palette entry
    unsigned short xstart;          // image x origin
    unsigned short ystart;          // image y origin
    unsigned short width;           // width in pixels
    unsigned short height;          // height in pixels
    GLbyte bits;                    // bits per pixel (8, 16, 24, 32)
    GLbyte descriptor;              // image descriptor
} TGAHEADER;
#pragma pack(8)

GLbyte *gltLoadTGA(const char *szFileName, GLint *iWidth, GLint *iHeight, GLint *iComponents, GLenum *eFormat)
{
    FILE *pFile;                // File pointer
    TGAHEADER tgaHeader;        // TGA file header
    unsigned long lImageSize;   // Size in bytes of image
    short sDepth;               // Pixel depth
    GLbyte *pBits = NULL;       // Pointer to bits
    // Default/failed values
    *iWidth = 0;
    *iHeight = 0;
    *eFormat = GL_BGR_EXT;
    *iComponents = GL_RGB8;
    // Attempt to open the file
    pFile = fopen(szFileName, "rb");
    if (pFile == NULL)
        return NULL;
    // Read in header (binary)
    fread(&tgaHeader, 18/* sizeof(TGAHEADER)*/, 1, pFile);
    // Do byte swap for big vs little endian
#ifdef __APPLE__
    BYTE_SWAP(tgaHeader.colorMapStart);
    BYTE_SWAP(tgaHeader.colorMapLength);
    BYTE_SWAP(tgaHeader.xstart);
    BYTE_SWAP(tgaHeader.ystart);
    BYTE_SWAP(tgaHeader.width);
    BYTE_SWAP(tgaHeader.height);
#endif
    // Get width, height, and depth of texture
    *iWidth = tgaHeader.width;
    *iHeight = tgaHeader.height;
    sDepth = tgaHeader.bits / 8;
    // Put some validity checks here. Very simply, I only understand
    // or care about 8, 24, or 32 bit targas.
    if (tgaHeader.bits != 8 && tgaHeader.bits != 24 && tgaHeader.bits != 32)
        return NULL;
    // Calculate size of image buffer
    lImageSize = tgaHeader.width * tgaHeader.height * sDepth;
    // Allocate memory and check for success
    pBits = new GLbyte[lImageSize];
    if (pBits == NULL)
        return NULL;
    // Read in the bits
    // Check for read error. This should catch RLE or other
    // weird formats that I don't want to recognize
    if (fread(pBits, lImageSize, 1, pFile) != 1)
    {
        delete[] pBits; // allocated with new[], so use delete[] rather than free
        return NULL;
    }
    // Set OpenGL format expected
    switch (sDepth)
    {
    case 3: // Most likely case
        *eFormat = GL_BGR_EXT;
        *iComponents = GL_RGB8;
        break;
    case 4:
        *eFormat = GL_BGRA_EXT;
        *iComponents = GL_RGBA8;
        break;
    case 1:
        *eFormat = GL_LUMINANCE;
        *iComponents = GL_LUMINANCE8;
        break;
    };
    // Done with file
    fclose(pFile);
    // Return pointer to image data
    return pBits;
}
iWidth and iHeight return the texture dimensions, eFormat and iComponents the external and internal image formats, and the function's return value is a pointer to the texture data.
So your function must look like:
GLuint Texture::loadTexture(const char * imagepath) {
    printf("Reading image %s\n", imagepath);
    // Data read from the header of the TGA file
    int width, height;
    int component;
    GLenum eFormat;
    // Actual image data
    GLbyte * data = gltLoadTGA(imagepath, &width, &height, &component, &eFormat);
    // Create one OpenGL texture
    GLuint textureID;
    glGenTextures(1, &textureID);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glBindTexture(GL_TEXTURE_2D, textureID);
    if (!strcmp(imagepath, "hand.tga")) { // important because we're comparing strings, not pointers
        glTexImage2D(GL_TEXTURE_2D, 0, component, width, height, 0, eFormat, GL_UNSIGNED_BYTE, data);
    } else {
        glTexImage2D(GL_TEXTURE_2D, 0, component, width, height, 0, eFormat, GL_UNSIGNED_BYTE, data);
    }
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glGenerateMipmap(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    delete[] data;
    return textureID;
}

c++ tga parsing incorrect_color/distortion with some resolutions

I'd like to get some help with my issue with .tga file format parsing. I have this code, which I have used for a long time:
int fileLength = Input.tellg();
vector<char> tempData;
tempData.resize(fileLength);
Input.seekg(0);
Input.read(&tempData[0], fileLength);
Input.close();
// Load information about the tga, aka the header.
// Seek to the width.
w = byteToUnsignedShort(tempData[12], tempData[13]);
// Seek to the height.
h = byteToUnsignedShort(tempData[14], tempData[15]);
// Seek to the depth.
depth = unsigned(tempData[16]);
// Mode = components per pixel.
md = depth / 8;
// Total bytes = h * w * md.
t = h * w * md;
// Delete allocated data, if need to
clear();
// Allocate new storage
data.resize(t);
// Copy image data.
for(unsigned i = 0, s = 18; s < t + 18; s++, i++)
    data[i] = (unsigned char)tempData[s];
// Mode 3 = RGB, Mode 4 = RGBA
// TGA stores RGB(A) as BGR(A) so
// we need to swap red and blue.
if(md > 2)
{
    char aux;
    for(unsigned i = 0; i < t; i += md)
    {
        aux = data[i];
        data[i] = data[i + 2];
        data[i + 2] = aux;
    }
}
But it keeps failing occasionally for some image resolutions (mostly odd and non-POT resolutions). It results in a distorted image (with diagonal patterns) or wrong colors. The last time I encountered it was with a 9x9 24bpp image showing weird colors.
I'm on Windows (so little-endian), rendering with OpenGL (I take the existence of an alpha channel into account when passing image data with glTexImage2D). I'm saving my images with Photoshop, not setting the RLE flag. This code always reads the correct image resolution and color depth.
Example of an image causing trouble:
http://pastie.org/private/p81wbh5sb6coldspln6mw
After loading the problematic image, this code:
for(unsigned f = 0; f < imageData.w * imageData.h * imageData.depth; f += imageData.depth)
{
    if(f % (imageData.w * imageData.depth) == 0)
        writeLog << endl;
    writeLog << "[" << unsigned(imageData.data[f]) << "," << unsigned(imageData.data[f + 1]) << "," << unsigned(imageData.data[f + 2]) << "]" << flush;
}
outputs this:
[37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40]
[37,40,40][173,166,164][93,90,88][93,90,88][93,90,88][93,90,88][93,90,88][88,85,83][37,40,40]
[37,40,40][228,221,219][221,212,209][221,212,209][221,212,209][221,212,209][221,212,209][140,134,132][37,40,40]
[37,40,40][228,221,219][221,212,209][221,212,209][221,212,209][221,212,209][221,212,209][140,134,132][37,40,40]
[37,40,40][228,221,219][221,212,209][221,212,209][221,212,209][221,212,209][221,212,209][140,134,132][37,40,40]
[37,40,40][228,221,219][221,212,209][221,212,209][221,212,209][221,212,209][221,212,209][140,134,132][37,40,40]
[37,40,40][228,221,219][221,212,209][221,212,209][221,212,209][221,212,209][221,212,209][140,134,132][37,40,40]
[37,40,40][237,232,230][235,229,228][235,229,228][235,229,228][235,229,228][235,229,228][223,214,212][37,40,40]
[37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40]
So I guess it does read the correct data.
That brings us to OpenGL:
glGenTextures(1, &textureObject);
glBindTexture(GL_TEXTURE_2D, textureObject);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
GLenum in_tex_mode, tex_mode;
if(linear) // false for that image
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
else
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// I don't use 1- or 2-channel textures, so it's always 24 or 32bpp
if(imageData.depth == 24)
{
    in_tex_mode = GL_RGB8;
    tex_mode = GL_RGB;
}
else
{
    in_tex_mode = GL_RGBA8;
    tex_mode = GL_RGBA;
}
glTexImage2D(GL_TEXTURE_2D, 0, in_tex_mode, imageData.w, imageData.h, 0, tex_mode, GL_UNSIGNED_BYTE, &imageData.data[0]);
glBindTexture(GL_TEXTURE_2D, 0);
Texture compression code is omitted, because it's not active for that texture.
This is probably a padding/alignment issue.
You're loading a TGA, which has no row-padding, but passing it to GL which by default expects rows of pixels to be padded to a multiple of 4 bytes.
Possible fixes for this are (a sketch of the last one follows the list):
Tell GL how your texture is packed, using (for example) glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
Change the dimensions of your texture, such that there will be no padding.
Change the loading of your texture, such that the padding is consistent with what GL expects
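A minimal sketch of the last option, assuming data is a std::vector<unsigned char> and reusing w, h, md, in_tex_mode and tex_mode from the snippets above:
// Sketch: copy each tightly packed TGA row into a buffer whose rows are
// padded to the 4-byte boundary OpenGL expects by default.
unsigned rowSize = w * md;                    // e.g. 9 * 3 = 27 bytes
unsigned paddedRowSize = (rowSize + 3) & ~3u; // rounded up, e.g. 28 bytes
std::vector<unsigned char> padded(paddedRowSize * h, 0);
for(unsigned row = 0; row < h; ++row)
    std::copy(data.begin() + row * rowSize,
              data.begin() + (row + 1) * rowSize,
              padded.begin() + row * paddedRowSize);
glTexImage2D(GL_TEXTURE_2D, 0, in_tex_mode, w, h, 0, tex_mode,
             GL_UNSIGNED_BYTE, padded.data());
The first option, glPixelStorei(GL_UNPACK_ALIGNMENT, 1), avoids the copy entirely and is usually the simplest.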
Most image formats save image data aligned (commonly to 4 bytes).
For example, with a resolution of 1 row by 1 column, each row has one pixel, so if RGB is used each row has 3 bytes, and it will be extended to 4 bytes for alignment, because the CPU prefers that. The same arithmetic applies to the 9x9 image above: 9 * 3 = 27 bytes per row, which OpenGL by default expects padded to 28.