I have textured a sphere with a .bmp image. The problem is that when the image is mapped onto the sphere, its colours look inverted: RED becomes BLUE and BLUE becomes RED.
I have tried using GL_BGR instead of GL_RGB, but it's no use.
Do I have to change the code for loading the image? It produces a warning for the use of fopen(), but I don't think that is relevant to what I am asking.
This is the image I get after mapping: a textured sphere with inverted colours.
This is what I have tried for loading the image, along with the texture setup:
GLuint LoadTexture( const char * filename, int width, int height )
{
GLuint texture;
unsigned char * data;
FILE * file;
//The following code will read in our RAW file
file = fopen( filename, "rb" );
if ( file == NULL ) return 0;
data = (unsigned char *)malloc( width * height * 3 );
fread( data, width * height * 3, 1, file );
fclose( file );
glGenTextures( 1, &texture ); // generate the texture with the loaded data
glBindTexture( GL_TEXTURE_2D, texture ); // bind the texture to its array
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE ); // set texture environment parameters
// better quality
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR );
// Here we are setting the parameter to repeat the texture instead of clamping the texture
// to the edge of our shape.
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
// Generate the texture with mipmaps
gluBuild2DMipmaps( GL_TEXTURE_2D, 3, width, height, GL_RGB, GL_UNSIGNED_BYTE, data );
free( data ); // free the client-side image data
return texture; // return the texture handle (0 on failure)
}
void FreeTexture( GLuint texture )
{
glDeleteTextures( 1, &texture );
}
A BMP file starts with a BITMAPFILEHEADER struct, which contains (amongst other things) the offset to the actual start of the bits in the file.
So you could do something like this to get to the bits of the BMP file.
BITMAPFILEHEADER bmpFileHeader;
fread(&bmpFileHeader, sizeof(bmpFileHeader), 1, file);
fseek(file, bmpFileHeader.bfOffBits, SEEK_SET);
size_t bufferSize = width * height * 3;
fread(data, bufferSize, 1, file);
Of course, this is dangerous as you are expecting a properly sized and formatted BMP file. So you really need to read the BITMAPINFO too.
BITMAPFILEHEADER bmpFileHeader;
fread(&bmpFileHeader, sizeof(bmpFileHeader), 1, file);
BITMAPINFO bmpInfo;
fread(&bmpInfo, sizeof(bmpInfo), 1, file);
fseek(file, bmpFileHeader.bfOffBits, SEEK_SET);
int width = bmpInfo.bmiHeader.biWidth;
int height = bmpInfo.bmiHeader.biHeight;
assert(bmpInfo.bmiHeader.biCompression == BI_RGB);
assert(bmpInfo.bmiHeader.biBitCount == 24);
size_t bufferSize = width * height * 3;
fread(data, bufferSize, 1, file);
You can obviously make this increasingly sophisticated in mapping BMP formats to allowed OpenGL formats.
Additional complications will be:
the BMP data is stored bottom-up, so the image will appear upside down unless you correct for that.
the pixel data in a BMP file is padded so that each row is a multiple of 4 bytes wide, which can cause stride issues unless your image is either 32 bits per pixel or a multiple of 4 pixels wide (which in practice is usually true).
Just because you can feed a constant to OpenGL does not mean the format is actually supported. Sometimes you just have to get in there and re-order the bytes yourself.
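For instance, a minimal sketch of that kind of manual fix-up (my own helper, assuming a tightly packed 24-bit buffer with no row padding) could look like this:
// Swap B and R in place and flip the rows so the bottom-up BGR data
// becomes top-down RGB. Assumes width*height tightly packed 24-bit pixels.
void fixupBmp24(unsigned char *data, int width, int height)
{
    int stride = width * 3;
    for (int i = 0; i < stride * height; i += 3) {      // BGR -> RGB
        unsigned char tmp = data[i];
        data[i] = data[i + 2];
        data[i + 2] = tmp;
    }
    for (int y = 0; y < height / 2; ++y) {              // flip rows
        unsigned char *top = data + y * stride;
        unsigned char *bottom = data + (height - 1 - y) * stride;
        for (int x = 0; x < stride; ++x) {
            unsigned char tmp = top[x];
            top[x] = bottom[x];
            bottom[x] = tmp;
        }
    }
}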
Finally I got what I needed. I was using the wrong argument: I have replaced GL_RGB with GL_BGR_EXT in this function
gluBuild2DMipmaps( GL_TEXTURE_2D, 3, width, height,GL_BGR_EXT, GL_UNSIGNED_BYTE, data );
Previously I was getting this: a texture-mapped sphere with inverted colours.
Now, after the change above, I am getting this: a texture-mapped sphere with true colours.
This code fragment is meant to create a GL texture with a single color, then save the raw pixel data to disk. I then convert that to PNG using ffmpeg. I have tried multiple ways of generating the texture, and multiple ways of saving the texture data, but the result is always the same - a 1920x1080 image with a 64x64 black box in the corner. What I expected was a 1920x1080 image of a single color.
What am I doing wrong?
Conversion command:
ffmpeg -pix_fmt rgba -s 1920x1080 -i texture.raw -f image2 output.png
Code:
gpu::gles2::GLES2Interface* gl = GetContextProvider()->ContextGL();
GLuint texture;
gl->GenTextures(1, &texture);
gl->BindTexture(GL_TEXTURE_2D, texture);
int width = 1920;
int height = 1080;
std::vector<unsigned char> data(width * height * 4, 0);
for (size_t i = 2; i < data.size(); i += 4) {
data[i] = 255; // blue channel
}
gl->TexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data.data());
gl->TexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
gl->TexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
std::vector<unsigned char> buffer(width * height * 4);
gl->ReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer.data());
std::ofstream file("texture.raw", std::ios::binary);
file.write(reinterpret_cast<char*>(buffer.data()), buffer.size());
file.close();
Take a look at the description for glReadPixels from the main documentation here: https://registry.khronos.org/OpenGL-Refpages/gl4/html/glReadPixels.xhtml.
Essentially glReadPixels is for getting the pixels from the current frame buffer, not from the currently bound GL_TEXTURE_2D. I can't 100% confidently answer without more code and context, but it looks like the code you have there is before anything is actually rendered to the frame buffer and you're only setting things up. It's most likely that you get a black box because the data getting saved to the buffer isn't valid.
Hope that helps.
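If what you actually want is the texture's contents rather than the framebuffer's, one common approach is to attach the texture to a framebuffer object and read from that. A rough sketch (assuming the GLES2Interface wrapper exposes the usual GL ES 2.0 framebuffer calls under these names, and reusing texture, width, height and buffer from your snippet):
GLuint fbo;
gl->GenFramebuffers(1, &fbo);
gl->BindFramebuffer(GL_FRAMEBUFFER, fbo);
gl->FramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                         GL_TEXTURE_2D, texture, 0);
if (gl->CheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
  // ReadPixels now reads from the texture via the FBO,
  // not from the default framebuffer.
  gl->ReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer.data());
}
gl->BindFramebuffer(GL_FRAMEBUFFER, 0);
gl->DeleteFramebuffers(1, &fbo);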
Ok so I need to create my own texture/image data and then display it onto a quad in OpenGL. I have the quad working and I can display a TGA file onto it with my own texture loader and it maps to the quad perfectly.
But how do I create my own "homemade image", that is 1000x1000 and 3 channels (RGB values) for each pixel? What is the format of the texture array, how do I for example set pixel (100,100) to black?
This is how I would imagine it for a completely white image/texture:
#define SCREEN_WIDTH 1000
#define SCREEN_HEIGHT 1000
unsigned int* texdata = new unsigned int[SCREEN_HEIGHT * SCREEN_WIDTH * 3];
for(int i=0; i<SCREEN_HEIGHT * SCREEN_WIDTH * 3; i++)
texdata[i] = 255;
GLuint t = 0;
glEnable(GL_TEXTURE_2D);
glGenTextures( 1, &t );
glBindTexture(GL_TEXTURE_2D, t);
// Set parameters to determine how the texture is resized
glTexParameteri ( GL_TEXTURE_2D , GL_TEXTURE_MIN_FILTER , GL_LINEAR_MIPMAP_LINEAR );
glTexParameteri ( GL_TEXTURE_2D , GL_TEXTURE_MAG_FILTER , GL_LINEAR );
// Set parameters to determine how the texture wraps at edges
glTexParameteri ( GL_TEXTURE_2D , GL_TEXTURE_WRAP_S , GL_REPEAT );
glTexParameteri ( GL_TEXTURE_2D , GL_TEXTURE_WRAP_T , GL_REPEAT );
// Read the texture data from file and upload it to the GPU
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCREEN_WIDTH, SCREEN_HEIGHT, 0,
GL_RGB, GL_UNSIGNED_BYTE, texdata);
glGenerateMipmap(GL_TEXTURE_2D);
EDIT: The answers below are correct, but I also found that OpenGL doesn't handle the normal ints I used; it works fine with uint8_t. I assume that's because of the GL_RGB together with the GL_UNSIGNED_BYTE flag (which is only 8 bits, while a normal int is not) that I use when I upload to the GPU.
But how do I create my own "homemade image", that is 1000x1000 and 3 channels (RGB values) for each pixel?
std::vector< unsigned char > image( 1000 * 1000 * 3 /* bytes per pixel */ );
What is the format of the texture array
Red byte, then green byte, then blue byte. Repeat.
how do I for example set pixel (100,100) to black?
unsigned int width = 1000;
unsigned int x = 100;
unsigned int y = 100;
unsigned int location = ( x + ( y * width ) ) * 3;
image[ location + 0 ] = 0; // R
image[ location + 1 ] = 0; // G
image[ location + 2 ] = 0; // B
Upload via:
// the rows in the image array don't have any padding
// so set GL_UNPACK_ALIGNMENT to 1 (instead of the default of 4)
// https://www.khronos.org/opengl/wiki/Pixel_Transfer#Pixel_layout
glPixelStorei( GL_UNPACK_ALIGNMENT, 1 );
glTexImage2D
(
GL_TEXTURE_2D, 0,
GL_RGB, 1000, 1000, 0,
GL_RGB, GL_UNSIGNED_BYTE, &image[0]
);
By default, each row of a texture should be aligned to 4 bytes.
The texture here is an RGB texture, which needs 24 bits (3 bytes) for each texel, and the rows of the texture are tightly packed.
This means that the 4-byte alignment for the start of each line of the texture is not met (unless 3 times the width of the texture happens to be divisible by 4 without a remainder).
To deal with that, the alignment has to be changed to 1.
This means the GL_UNPACK_ALIGNMENT parameter has to be set before loading a tightly packed texture to the GPU (glTexImage2D):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
Otherwise an offset of 0-3 bytes per line is picked up at texture lookup, which causes a continuously twisted or tilted texture.
Since you use the source format GL_RGB with GL_UNSIGNED_BYTE, each pixel consists of 3 color channels (red, green and blue) and each color channel is stored in one byte in the range [0, 255].
If you want to set the pixel at (x, y) to the color R, G and B, then this is done like this:
texdata[(y*WIDTH+x)*3+0] = R;
texdata[(y*WIDTH+x)*3+1] = G;
texdata[(y*WIDTH+x)*3+2] = B;
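Putting the two parts together, a minimal sketch (the names WIDTH, HEIGHT and texdata are just illustrative) of building the whole image on the CPU and uploading it might look like this:
#include <vector>

const int WIDTH  = 1000;
const int HEIGHT = 1000;

// one byte per channel, three channels per pixel, all white to start with
std::vector<unsigned char> texdata(WIDTH * HEIGHT * 3, 255);

// set pixel (100, 100) to black
int x = 100, y = 100;
size_t location = (y * WIDTH + x) * 3;
texdata[location + 0] = 0; // R
texdata[location + 1] = 0; // G
texdata[location + 2] = 0; // B

GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// rows are tightly packed, so drop the default 4-byte row alignment
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, WIDTH, HEIGHT, 0,
             GL_RGB, GL_UNSIGNED_BYTE, texdata.data());
glGenerateMipmap(GL_TEXTURE_2D);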
I am trying to render a texture with an alpha channel in it.
This is what I used for texture loading:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, data);
I enabled GL_BLEND just before I render the texture: glEnable(GL_BLEND);
I also did this at the beginning of the code(the initialization): glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
This is the result (it should be a transparent texture of a first person hand):
But when I load my texture like this (no alpha channel):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);
This is the result:
Does anyone know what can cause this, or do I have to give more code?
Sorry for bad English, thanks in advance.
EDIT:
My texture loading code:
GLuint Texture::loadTexture(const char * imagepath) {
printf("Reading image %s\n", imagepath);
// Data read from the header of the BMP file
unsigned char header[54];
unsigned int dataPos;
unsigned int imageSize;
unsigned int width, height;
// Actual RGB data
unsigned char * data;
// Open the file
FILE * file = fopen(imagepath, "rb");
if (!file) { printf("%s could not be opened. \n", imagepath); getchar(); exit(0); }
// Read the header, i.e. the 54 first bytes
// If less than 54 bytes are read, problem
if (fread(header, 1, 54, file) != 54) {
printf("Not a correct BMP file\n");
exit(0);
}
// A BMP files always begins with "BM"
if (header[0] != 'B' || header[1] != 'M') {
printf("Not a correct BMP file\n");
exit(0);
}
// Make sure this is a 24bpp file
if (*(int*)&(header[0x1E]) != 0) { printf("Not a correct BMP file\n");}
if (*(int*)&(header[0x1C]) != 24) { printf("Not a correct BMP file\n");}
// Read the information about the image
dataPos = *(int*)&(header[0x0A]);
imageSize = *(int*)&(header[0x22]);
width = *(int*)&(header[0x12]);
height = *(int*)&(header[0x16]);
// Some BMP files are misformatted, guess missing information
if (imageSize == 0) imageSize = width*height * 3; // 3 : one byte for each Red, Green and Blue component
if (dataPos == 0) dataPos = 54; // The BMP header is done that way
// Create a buffer
data = new unsigned char[imageSize];
// Read the actual data from the file into the buffer
fread(data, 1, imageSize, file);
// Everything is in memory now, the file can be closed
fclose(file);
// Create one OpenGL texture
GLuint textureID;
glGenTextures(1, &textureID);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glBindTexture(GL_TEXTURE_2D, textureID);
if (imagepath == "hand.bmp") {
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
}else {
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);
}
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
delete[] data;
return textureID;
}
As you can see, it's not my own code; I've got it from opengl-tutorial.org.
My first comment stated:
The repeating, offset pattern looks like the data is treated as having a larger offset, when in reality it has smaller (or opposite).
And that was before I actually noticed what you did. Yes, this is precisely that. You can't treat 4-bytes-per-pixel data as 3-bytes-per-pixel data. The alpha channel gets interpreted as colour and that's why it all offsets this way.
If you want to disregard the alpha channel, you need to strip it off when loading so that it ends up having 3 bytes for each pixel value in the OpenGL texture memory. (That's what @RetoKoradi's answer is proposing, namely creating an RGB texture from RGBA data.)
If it isn't actually supposed to look so blue-ish, maybe it's not actually in BGR layout?
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, data);
^
\--- change to GL_RGBA as well
My wild guess is that human skin would have more red than blue light reflected by it.
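If you do want to strip the alpha channel on the CPU instead of letting GL do the conversion, a minimal sketch (my own helper, assuming tightly packed 32-bit BGRA input) could be:
#include <vector>

// Convert tightly packed 32-bit BGRA pixels to 24-bit RGB, dropping alpha,
// so the result can be uploaded with format GL_RGB.
std::vector<unsigned char> stripAlphaBgraToRgb(const unsigned char *src,
                                               int width, int height)
{
    std::vector<unsigned char> dst(width * height * 3);
    for (int i = 0; i < width * height; ++i) {
        dst[i * 3 + 0] = src[i * 4 + 2]; // R (input order is B, G, R, A)
        dst[i * 3 + 1] = src[i * 4 + 1]; // G
        dst[i * 3 + 2] = src[i * 4 + 0]; // B
    }
    return dst;
}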
It looks like you misunderstood how the arguments of glTexImage2D() work:
The 3rd argument (internalformat) defines what format you want to use for the data stored in the texture.
The 7th and 8th argument (format and type) define the format of the data you pass into the call as the last argument.
Based on this, if the format of the data you're passing as the last argument is BGRA, and you want to create an RGB texture from it, the correct call is:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, data);
Note that the 7th argument is now GL_BGRA, matching your input data, while the 3rd argument is GL_RGB, specifying that you want to use an RGB texture.
Seems you chose the wrong texture pixel alignment. To find the right one, try experimenting with the values (1, 2, 4) of glPixelStorei with GL_UNPACK_ALIGNMENT.
Specification:
void glPixelStorei( GLenum pname,
GLint param);
pname Specifies the symbolic name of the parameter to be set. One value affects the packing of pixel data into memory: GL_PACK_ALIGNMENT. The other affects the unpacking of pixel data from memory: GL_UNPACK_ALIGNMENT.
param Specifies the value that pname is set to.
glPixelStorei sets pixel storage modes that affect the operation of subsequent glReadPixels as well as the unpacking of texture patterns (see glTexImage2D and glTexSubImage2D).
pname is a symbolic constant indicating the parameter to be set, and param is the new value. One storage parameter affects how pixel data is returned to client memory:
GL_PACK_ALIGNMENT
Specifies the alignment requirements for the start of each pixel row in memory. The allowable values are 1 (byte-alignment), 2 (rows aligned to even-numbered bytes), 4 (word-alignment), and 8 (rows start on double-word boundaries).
The other storage parameter affects how pixel data is read from client memory:
GL_UNPACK_ALIGNMENT
Specifies the alignment requirements for the start of each pixel row in memory. The allowable values are 1 (byte-alignment), 2 (rows aligned to even-numbered bytes), 4 (word-alignment), and 8 (rows start on double-word boundaries).
The following table gives the type, initial value, and range of valid values for each storage parameter that can be set with glPixelStorei.
The BMP format does not support transparency, at least not the most common version 3 (only GL_BGR mode and its masked modifications work). Use PNG, DDS, TIFF or TGA (simplest) instead.
Secondly, your total image data size computation formula is wrong:
imageSize = width*height * 3; // 3 : one byte for each Red, Green and Blue component
Right formula is:
imageSize = 4 * ((width * bitsPerPel + 31) / 32) * height;
where bitsPerPel is the picture's bits per pixel (8, 16 or 24).
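As a small helper, that formula can be wrapped up like this (a sketch; the function names are mine):
#include <stddef.h>

// Each BMP row is padded to a multiple of 4 bytes.
size_t bmpRowStride(int width, int bitsPerPel)
{
    return 4 * ((width * bitsPerPel + 31) / 32);
}

size_t bmpImageSize(int width, int height, int bitsPerPel)
{
    return bmpRowStride(width, bitsPerPel) * height;
}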
Here is the code of a function which is used to load simple TGA files with transparency support:
// Define targa header.
#pragma pack(1)
typedef struct
{
GLbyte identsize; // Size of ID field that follows header (0)
GLbyte colorMapType; // 0 = None, 1 = paletted
GLbyte imageType; // 0 = none, 1 = indexed, 2 = rgb, 3 = grey, +8=rle
unsigned short colorMapStart; // First colour map entry
unsigned short colorMapLength; // Number of colors
unsigned char colorMapBits; // bits per palette entry
unsigned short xstart; // image x origin
unsigned short ystart; // image y origin
unsigned short width; // width in pixels
unsigned short height; // height in pixels
GLbyte bits; // bits per pixel (8 16, 24, 32)
GLbyte descriptor; // image descriptor
} TGAHEADER;
#pragma pack(8)
GLbyte *gltLoadTGA(const char *szFileName, GLint *iWidth, GLint *iHeight, GLint *iComponents, GLenum *eFormat)
{
FILE *pFile; // File pointer
TGAHEADER tgaHeader; // TGA file header
unsigned long lImageSize; // Size in bytes of image
short sDepth; // Pixel depth;
GLbyte *pBits = NULL; // Pointer to bits
// Default/Failed values
*iWidth = 0;
*iHeight = 0;
*eFormat = GL_BGR_EXT;
*iComponents = GL_RGB8;
// Attempt to open the file
pFile = fopen(szFileName, "rb");
if(pFile == NULL)
return NULL;
// Read in header (binary)
fread(&tgaHeader, 18/* sizeof(TGAHEADER)*/, 1, pFile);
// Do byte swap for big vs little endian
#ifdef __APPLE__
BYTE_SWAP(tgaHeader.colorMapStart);
BYTE_SWAP(tgaHeader.colorMapLength);
BYTE_SWAP(tgaHeader.xstart);
BYTE_SWAP(tgaHeader.ystart);
BYTE_SWAP(tgaHeader.width);
BYTE_SWAP(tgaHeader.height);
#endif
// Get width, height, and depth of texture
*iWidth = tgaHeader.width;
*iHeight = tgaHeader.height;
sDepth = tgaHeader.bits / 8;
// Put some validity checks here. Very simply, I only understand
// or care about 8, 24, or 32 bit targa's.
if(tgaHeader.bits != 8 && tgaHeader.bits != 24 && tgaHeader.bits != 32)
return NULL;
// Calculate size of image buffer
lImageSize = tgaHeader.width * tgaHeader.height * sDepth;
// Allocate memory and check for success
pBits = new GLbyte[lImageSize];
if(pBits == NULL)
return NULL;
// Read in the bits
// Check for read error. This should catch RLE or other
// weird formats that I don't want to recognize
if(fread(pBits, lImageSize, 1, pFile) != 1)
{
delete [] pBits;
return NULL;
}
// Set OpenGL format expected
switch(sDepth)
{
case 3: // Most likely case
*eFormat = GL_BGR_EXT;
*iComponents = GL_RGB8;
break;
case 4:
*eFormat = GL_BGRA_EXT;
*iComponents = GL_RGBA8;
break;
case 1:
*eFormat = GL_LUMINANCE;
*iComponents = GL_LUMINANCE8;
break;
};
// Done with File
fclose(pFile);
// Return pointer to image data
return pBits;
}
iWidth and iHeight return the texture dimensions, eFormat and iComponents the external and internal image formats, and the actual return value is a pointer to the texture data.
So your function must look like:
GLuint Texture::loadTexture(const char * imagepath) {
printf("Reading image %s\n", imagepath);
// Data read from the header of the TGA file
GLint width, height;
GLint component;
GLenum eFormat;
// Actual image data
GLbyte * data = gltLoadTGA(imagepath, &width, &height, &component, &eFormat);
// Create one OpenGL texture
GLuint textureID;
glGenTextures(1, &textureID);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glBindTexture(GL_TEXTURE_2D, textureID);
if (!strcmp(imagepath, "hand.tga")) { // important because we're comparing strings, not pointers
glTexImage2D(GL_TEXTURE_2D, 0, component, width, height, 0, eFormat, GL_UNSIGNED_BYTE, data);
}else {
glTexImage2D(GL_TEXTURE_2D, 0, component, width, height, 0, eFormat, GL_UNSIGNED_BYTE, data);
}
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
delete[] data;
return textureID;
}
I have some code for loading textures where I'm using DevIL to load the images and then OpenGL creates a texture from the pixels. This code works fine and the texture shows up properly and that's all fine.
Besides that I can also make an array from within the program to create the texture or make changes in the texture's pixels directly. My problem is here: when handling the pixels their format seems to be ABGR rather than RGBA as I would have liked.
I stumbled upon this SO question that refers to the format that's passed in the glTexImage2D function:
(...) If you have GL_RGBA and GL_UNSIGNED_INT_8_8_8_8, that means that pixels are stored in 32-bit integers, and the colors are in the logical order RGBA in such an integer, e.g. the red is in the high-order byte and the alpha is in the low-order byte. But if the machine is little-endian (as with Intel CPUs), it follows that the actual order in memory is ABGR. Whereas, GL_RGBA with GL_UNSIGNED_BYTE will store the bytes in RGBA order regardless whether the computer is little-endian or big-endian. (...)
Indeed I have an Intel CPU. The images are loaded just fine the way things are right now and I actually use the GL_RGBA mode and GL_UNSIGNED_BYTE type.
GLuint makeTexture( const GLuint* pixels, GLuint width, GLuint height ) {
GLuint texture = 0;
glGenTextures( 1, &texture );
glBindTexture( GL_TEXTURE_2D, texture );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glBindTexture( GL_TEXTURE_2D, NULL );
GLenum error = glGetError();
if ( error != GL_NO_ERROR ) {
return 0;
}
return texture;
}
This function is used in my two methods for loading textures, the method that loads an image from a file and the one that creates a texture from an array.
Let's say that I want to create an array of pixels and create a texture,
GLuint pixels[ 128 * 128 ];
for ( int i = 0; i < 128 * 128; ++i ) {
pixels[ i ] = 0x800000FF;
}
texture.loadImageArray( pixels, 128, 128 );
By filling the pixels with this value I would expect to see a slightly dark red color.
red = 0x80, green = 0x00, blue = 0x00, alpha = 0xFF
But instead I get a transparent red,
alpha = 0x80, blue = 0x00, green = 0x00, red = 0xFF
Rather than using raw unsigned ints I made a structure to help me handle individual channels:
struct Color4C {
unsigned char alpha;
unsigned char blue;
unsigned char green;
unsigned char red;
...
};
I can easily replace an array of unsigned ints with an array of Color4C and the result is the same. If I invert the order of the channels (red first, alpha last) then I can easily pass 0xRRGGBBAA and make it work.
The easy solution is to simply handle these values in ABGR format. But I also find it easier to work with RGBA values. If I want to use hardcoded color values I would prefer to write them like 0xRRGGBBAA and not 0xAABBGGRR.
But let's say I start using the ABGR format. If I were to run my code in another machine, would I suddenly see strange colors wherever I changed pixels/channels directly?
Is there a better solution?
Promoting some helpful comments to an answer:
0xRR,0xGG,0xBB,0xAA on your (Intel, little-endian) machine is 0xAABBGGRR. You've already found the information saying that GL_UNSIGNED_BYTE preserves the format of binary blocks of data across machines, while GL_UNSIGNED_INT_8_8_8_8 preserves the format of literals like 0xRRGGBBAA. Because different machines have different correspondence between binary and literals, it is absolutely impossible for you to have both types of portability at once. – Ben Voigt
[Writing 0xAABBGGRR would actually be RGBA on your machine, but running this code on another machine could show different results] because OpenGL will reinterpret it as UNSIGNED_BYTE. C++ needs an endianness library, after all. – WorldSEnder
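One related option on desktop OpenGL (not core OpenGL ES) is to describe the data to GL as packed 32-bit integers instead of individual bytes; then 0xRRGGBBAA literals keep their meaning on little- and big-endian machines alike. A sketch:
// GL_UNSIGNED_INT_8_8_8_8 with format GL_RGBA means: each pixel is one
// 32-bit integer whose most significant byte is red and least significant
// byte is alpha, read in the host's native integer order.
GLuint pixels[128 * 128];
for (int i = 0; i < 128 * 128; ++i)
    pixels[i] = 0x800000FF;   // dark red, fully opaque

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 128, 128, 0,
             GL_RGBA, GL_UNSIGNED_INT_8_8_8_8, pixels);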
Here is my code to load a texture. I have tried to load a file using this example; it is a GIF file. Can I ask whether GIF files can be loaded, or can only raw files be loaded?
void setUpTextures()
{
printf("Set up Textures\n");
//This is the array that will contain the image color information.
// 3 represents red, green and blue color info.
// 512 is the height and width of texture.
unsigned char earth[512 * 512 * 3];
// This opens your image file.
FILE* f = fopen("/Users/Raaj/Desktop/earth.gif", "r");
if (f){
printf("file loaded\n");
}else{
printf("no load\n");
fclose(f);
return;
}
fread(earth, 512 * 512 * 3, 1, f);
fclose(f);
glEnable(GL_TEXTURE_2D);
//Here 1 is the texture id
//The texture id is different for each texture (duh?)
glBindTexture(GL_TEXTURE_2D, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
//In this line you only supply the last argument which is your color info array,
//and the dimensions of the texture (512)
glTexImage2D(GL_TEXTURE_2D, 0, 3, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE,earth);
glDisable(GL_TEXTURE_2D);
}
void Draw()
{
glEnable(GL_TEXTURE_2D);
// Here you specify WHICH texture you will bind to your coordinates.
glBindTexture(GL_TEXTURE_2D,1);
glColor3f(1,1,1);
double n=6;
glBegin(GL_QUADS);
glTexCoord2d(0,50); glVertex2f(n/2, n/2);
glTexCoord2d(50,0); glVertex2f(n/2, -n/2);
glTexCoord2d(50,50); glVertex2f(-n/2, -n/2);
glTexCoord2d(0,50); glVertex2f(-n/2, n/2);
glEnd();
// Do not forget this line, as then the rest of the colors in your
// Program will get messed up!!!
glDisable(GL_TEXTURE_2D);
}
And all I get is this:
Can I know why?
Basically, no, you can't just give arbitrary texture formats to GL - it only wants pixel data, not encoded files.
Your code, as posted, clearly declares an array for 24-bit RGB data, but then you open and attempt to read that much data from a GIF file. GIF is a compressed and palettised format, complete with header information etc., so that's never going to work.
You need to use an image loader to decompress the file into raw pixels.
Also, your texture coordinates don't look right. There are four vertices, but only 3 distinct coordinates used, and 2 adjacent coordinates are diagonally opposite each other. Even if your texture was loaded correctly, that's unlikely to be what you want.
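For example, with a single-header decoder such as stb_image (one possible choice, not something your current code uses), loading and uploading could look roughly like this:
// stb_image decodes many formats, including the first frame of a GIF,
// into raw 8-bit pixels.
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

int w, h, channels;
unsigned char *pixels = stbi_load("/Users/Raaj/Desktop/earth.gif",
                                  &w, &h, &channels, 3 /* force RGB */);
if (pixels) {
    glBindTexture(GL_TEXTURE_2D, 1);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows are tightly packed
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels);
    stbi_image_free(pixels);
}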