DDS texture transparency rendered black in OpenGL

I am currently trying to render textured objects in OpenGL. Everything worked fine until I wanted to render a texture with transparency. Instead of drawing the object with transparency, it is rendered completely black.
The method for loading the texture file is this:
// structures for reading and information variables
char magic[4];
unsigned char header[124];
unsigned int width, height, linearSize, mipMapCount, fourCC;
unsigned char* dataBuffer;
unsigned int bufferSize;
fstream file(path, ios::in|ios::binary);
// read magic and header
if (!file.read((char*)magic, sizeof(magic))){
cerr<< "File " << path << " not found!"<<endl;
return false;
}
if (magic[0]!='D' || magic[1]!='D' || magic[2]!='S' || magic[3]!=' '){
cerr<< "File does not comply with dds file format!"<<endl;
return false;
}
if (!file.read((char*)header, sizeof(header))){
cerr<< "Not able to read file information!"<<endl;
return false;
}
// derive information from header
height = *(int*)&(header[8]);
width = *(int*)&(header[12]);
linearSize = *(int*)&(header[16]);
mipMapCount = *(int*)&(header[24]);
fourCC = *(int*)&(header[80]);
// determine dataBuffer size
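// note: linearSize covers only the top mip level; all remaining levels together add less than that again, so doubling it gives a safe upper bound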
bufferSize = mipMapCount > 1 ? linearSize * 2 : linearSize;
dataBuffer = new unsigned char [bufferSize*2];
// read data and close file
if (file.read((char*)dataBuffer, bufferSize/1.5))
cout<<"Loading texture "<<path<<" successful"<<endl;
else{
cerr<<"Data of file "<<path<<" corrupted"<<endl;
return false;
}
file.close();
// check pixel format
unsigned int format;
switch(fourCC){
case FOURCC_DXT1:
format = GL_COMPRESSED_RGBA_S3TC_DXT1_EXT;
break;
case FOURCC_DXT3:
format = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT;
break;
case FOURCC_DXT5:
format = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT;
break;
default:
cerr << "Compression type not supported or corrupted!" << endl;
return false;
}
glGenTextures(1, &ID);
glBindTexture(GL_TEXTURE_2D, ID);
glPixelStorei(GL_UNPACK_ALIGNMENT,1);
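// S3TC stores 4x4 texel blocks: DXT1 packs each block into 8 bytes, DXT3/DXT5 into 16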
unsigned int blockSize = (format == GL_COMPRESSED_RGBA_S3TC_DXT1_EXT) ? 8 : 16;
unsigned int offset = 0;
/* load the mipmaps */
for (unsigned int level = 0; level < mipMapCount && (width || height); ++level) {
unsigned int size = ((width+3)/4)*((height+3)/4)*blockSize;
glCompressedTexImage2D(GL_TEXTURE_2D, level, format, width, height,
0, size, dataBuffer + offset);
offset += size;
width /= 2;
height /= 2;
}
textureType = DDS_TEXTURE;
return true;
In the fragment shader I just set gl_FragColor = texture2D(myTextureSampler, UVcoords).
I hope there is an easy explanation, such as some missing code.
In the OpenGL initialization I called glEnable(GL_BLEND) and set a blend function.
Does anyone have an idea of what I did wrong?

Make sure the blend function is the correct one for what you are trying to accomplish. For what you've described it should be glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
You probably shouldn't set the blend function in your OpenGL initialization function; instead, wrap it around your draw calls like:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// gl draw functions (glDrawArrays, glDrawElements, etc.)
glDisable(GL_BLEND);
Are you clearing the 2D texture binding before you swap buffers? i.e.:
glBindTexture(GL_TEXTURE_2D, 0);
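Putting the two suggestions together, a per-draw sequence could look roughly like the sketch below (a minimal outline; drawTexturedObject() is a hypothetical stand-in for whatever issues your actual draw call):
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindTexture(GL_TEXTURE_2D, ID); // the DDS texture created above
drawTexturedObject(); // hypothetical: glDrawArrays/glDrawElements with the UV-mapped geometry
glBindTexture(GL_TEXTURE_2D, 0); // clear the 2D texture binding
glDisable(GL_BLEND);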

Related

Getting incorrect width when decoding bitmap

I have an error in my source code that causes bitmap images to appear too wide. For example, the program prints the width and the height: the height is correct (256) and the width should also be 256, but the program says it is billions of pixels wide, and the value is different every time. Here is the source code.
#include "glob.h"
/* Image type - contains height, width, and data */
struct Image {
unsigned long sizeX;
unsigned long sizeY;
char *data;
};
typedef struct Image Image;
int ImageLoad(char *filename, Image *image) {
FILE *file;
unsigned long size; // size of the image in bytes.
unsigned long i; // standard counter.
unsigned short int planes; // number of planes in image (must be 1)
unsigned short int bpp; // number of bits per pixel (must be 24)
char temp; // temporary color storage for bgr-rgb conversion.
// make sure the file is there.
if ((file = fopen(filename, "rb"))==NULL){
printf("bitmap Not Found : %s\n",filename);
return 0;
}
// seek through the bmp header, up to the width/height:
fseek(file, 18, SEEK_CUR);
// read the width
if ((i = fread(&image->sizeX, 4, 1, file)) != 1) {
printf("Error reading width from %s.\n", filename);
return 0;
}
printf("Width of %s: %lu\n", filename, image->sizeX);
// read the height
if ((i = fread(&image->sizeY, 4, 1, file)) != 1) {
printf("Error reading height from %s.\n", filename);
return 0;
}
printf("Height of %s: %lu\n", filename, image->sizeY);
// calculate the size (assuming 24 bits or 3 bytes per pixel).
size = image->sizeX * image->sizeY * 3;
// read the planes
if ((fread(&planes, 2, 1, file)) != 1) {
printf("Error reading planes from %s.\n", filename);
return 0;
}
if (planes != 1) {
printf("Planes from %s is not 1: %u\n", filename, planes);
return 0;
}
// read the bpp
if ((i = fread(&bpp, 2, 1, file)) != 1) {
printf("Error reading bpp from %s.\n", filename);
return 0;
}
if (bpp != 24) {
printf("Bpp from %s is not 24: %u\n", filename, bpp);
return 0;
}
// seek past the rest of the bitmap header.
fseek(file, 24, SEEK_CUR);
// read the data.
image->data = (char *) malloc(size);
if (image->data == NULL) {
printf("Error allocating memory for color-corrected image data\n");
return 0;
}
if ((i = fread(image->data, size, 1, file)) != 1) {
printf("Error reading image data from %s.\n", filename);
return 0;
}
for (i=0; i<size; i+=3) { // reverse all of the colors. (bgr -> rgb)
temp = image->data[i];
image->data[i] = image->data[i+2];
image->data[i+2] = temp;
}
// we're done.
return 0;
}
// Load Bitmaps And Convert To Textures
void glob::LoadGLTextures() {
// Load Texture
Image *image1;
// allocate space for texture
image1 = (Image *) malloc(sizeof(Image));
if (image1 == NULL) {
printf("(image1 == NULL)\n");
exit(0);
}
ImageLoad("data/textures/NeHe.bmp", image1);
// Create Texture
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture); // 2d texture (x and y size)
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR); // scale linearly when image bigger than texture
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR); // scale linearly when image smaller than texture
// 2d texture, level of detail 0 (normal), 3 components (red, green, blue), x size from image, y size from image,
// border 0 (normal), rgb color data, unsigned byte data, and finally the data itself.
glTexImage2D(GL_TEXTURE_2D, 0, 3, image1->sizeX, image1->sizeY, 0, GL_RGB, GL_UNSIGNED_BYTE, image1->data);
};
glob.h is this:
#ifndef GLOB_H_INCLUDED
#define GLOB_H_INCLUDED
#include <iostream>
#include <stdlib.h>
#include <stdio.h> // Header file for standard file i/o.
#include <GL/glx.h> /* this includes the necessary X headers */
#include <GL/gl.h>
//#include <GL/glut.h> // Header File For The GLUT Library
//#include <GL/glu.h> // Header File For The GLu32 Library
#include <X11/X.h> /* X11 constant (e.g. TrueColor) */
#include <X11/keysym.h>
class glob {
bool Running;
GLuint texture; //make an array when we start using more then 1
Display *dpy;
Window win;
XEvent event;
GLboolean doubleBuffer;
GLboolean needRedraw;
GLfloat xAngle, yAngle, zAngle;
float camera_x, camera_y, camera_z;
public:
glob();
int OnExecute();
public:
int init(int argc, char **argv);
void LoadGLTextures();
void OnEvent();
void redraw(void);
};
#endif // GLOB_H_INCLUDED
Can anybody help me fix this problem?
Lots of things could be going wrong.
If it's a very old file, it could have a BITMAPCOREHEADER which has size fields that are only 2 bytes each.
Is your machine little endian? BMP files are stored little endian.
Note that height may be negative, (which implies it's a top-down bitmap instead of a bottom up one). If you interpret a small negative number as an unsigned 32-bit int, you'll see values in the billions.
Also, your seek to the actual pixel data assumes that it starts right after the bitmap header. This is common, but not required. The file header contains the offset of the actual pixel data. (Microsoft documentation calls this the "bitmap bits" or the "color data".)
I recommend doing a hex dump of the beginning of your file and step through it by hand to make sure all your offsets and assumptions are correct. Feel free to paste the beginning of a hex dump into your question.
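As a rough sketch of that approach (this assumes a little-endian BMP with the common 40-byte BITMAPINFOHEADER; the variable names are made up for this example, not taken from the code above):
// read the 14-byte file header and the 40-byte info header into plain buffers
unsigned char fileHdr[14], dibHdr[40];
if (fread(fileHdr, 1, sizeof(fileHdr), file) != sizeof(fileHdr)) return 0;
if (fread(dibHdr, 1, sizeof(dibHdr), file) != sizeof(dibHdr)) return 0;
// bfOffBits (bytes 10-13 of the file header): where the pixel data actually starts
unsigned int dataOffset = fileHdr[10] | (fileHdr[11] << 8) | (fileHdr[12] << 16) | ((unsigned int)fileHdr[13] << 24);
// first DIB field is the header size: 40 = BITMAPINFOHEADER, 12 = old BITMAPCOREHEADER (2-byte width/height)
unsigned int dibSize = dibHdr[0] | (dibHdr[1] << 8) | (dibHdr[2] << 16) | ((unsigned int)dibHdr[3] << 24);
// width/height are signed 32-bit little-endian values; a negative height means a top-down bitmap
int width  = (int)(dibHdr[4] | (dibHdr[5] << 8) | (dibHdr[6] << 16) | ((unsigned int)dibHdr[7] << 24));
int height = (int)(dibHdr[8] | (dibHdr[9] << 8) | (dibHdr[10] << 16) | ((unsigned int)dibHdr[11] << 24));
int topDown = (height < 0);
if (topDown) height = -height;
// seek to where the header says the pixel data is, instead of assuming it follows the headers immediately
fseek(file, dataOffset, SEEK_SET);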
Are you on Windows? Can you just call LoadImage?

C++ OpenGL glTexImage2D Access Violation

I'm writing an application using OpenGL (freeglut and glew).
I also wanted textures so I did some research on the Bitmap file format and wrote a struct for the main header and another for the DIB header (info header).
Then I started writing the loader. It automatically binds the texture to OpenGL. Here is the function:
static unsigned int ReadInteger(FILE *fp)
{
int a, b, c, d;
// Integer is 4 bytes long.
a = getc(fp);
b = getc(fp);
c = getc(fp);
d = getc(fp);
// Convert the 4 bytes to an integer.
return ((unsigned int) a) + (((unsigned int) b) << 8) +
(((unsigned int) c) << 16) + (((unsigned int) d) << 24);
}
static unsigned int ReadShort(FILE *fp)
{
int a, b;
// Short is 2 bytes long.
a = getc(fp);
b = getc(fp);
// Convert the 2 bytes to a short (int16).
return ((unsigned int) a) + (((unsigned int) b) << 8);
}
GLuint LoadBMP(const char* filename)
{
FILE* file;
// Check if a file name was provided.
if (!filename)
return 0;
// Try to open file.
fopen_s(&file, filename, "rb");
// Return if the file could not be open.
if (!file)
{
cout << "Warning: Could not find texture '" << filename << "'." << endl;
return 0;
}
// Read signature.
unsigned char signature[2];
fread(&signature, 2, 1, file);
// Use signature to identify a valid bitmap.
if (signature[0] != BMPSignature[0] || signature[1] != BMPSignature[1])
{
fclose(file);
return 0;
}
// Read width and height.
unsigned long width, height;
fseek(file, 16, SEEK_CUR); // After the signature we have 16bytes until the width.
width = ReadInteger(file);
height = ReadInteger(file);
// Calculate data size (we'll only support 24bpp).
unsigned long dataSize;
dataSize = width * height * 3;
// Make sure planes is 1.
if (ReadShort(file) != 1)
{
cout << "Error: Could not load texture '" << filename << "' (planes is not 1)." << endl;
return 0;
}
// Make sure bpp is 24.
if (ReadShort(file) != 24)
{
cout << "Error: Could not load texture '" << filename << "' (bits per pixel is not 24)." << endl;
return 0;
}
// Move pointer to beginning of data. (after the bpp we have 24 bytes until the data)
fseek(file, 24, SEEK_CUR);
// Allocate memory and read the image data.
unsigned char* data = new unsigned char[dataSize];
if (!data)
{
fclose(file);
cout << "Warning: Could not allocate memory to store data of '" << filename << "'." << endl;
return 0;
}
fread(data, dataSize, 1, file);
if (data == NULL)
{
fclose(file);
cout << "Warning: Could no load data from '" << filename << "'." << endl;
return 0;
}
// Close the file.
fclose(file);
// Create the texture.
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); //NEAREST);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height, GL_BGR_EXT, GL_UNSIGNED_BYTE, data);
return texture;
}
I know that the bitmap's data is read correctly because I printed it to the console and compared it with the image opened in Paint.
The problem here is this line:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, dibheader.width,
dibheader.height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
Most of the time I run the application, this line crashes with the error:
Unhandled exception at 0x008ffee9 in GunsGL.exe: 0xC0000005: Access violation reading location 0x00af7002.
This is the Disassembly of where the error occurs:
movzx ebx,byte ptr [esi+2]
It's not an error with my loader, because I've downloaded other loaders.
A downloaded loader that I used was this one from NeHe.
EDIT: (CODE UPDATED ABOVE)
I rewrote the loader, but I still get the crash on the same line. Instead of that crash, sometimes I get a crash in mlock.c (same error message, if I recall correctly):
void __cdecl _lock (
int locknum
)
{
/*
* Create/open the lock, if necessary
*/
if ( _locktable[locknum].lock == NULL ) {
if ( !_mtinitlocknum(locknum) )
_amsg_exit( _RT_LOCK );
}
/*
* Enter the critical section.
*/
EnterCriticalSection( _locktable[locknum].lock );
}
On the line:
EnterCriticalSection( _locktable[locknum].lock );
Also, here is a screenshot of one of those times the application doesn't crash (the texture is obviously not right):
http://i.stack.imgur.com/4Mtso.jpg
Edit2:
Updated code with the new working one.
(The reply marked as an answer does not contain all that was needed for this to work, but it was vital)
Try glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before your glTexImage2D() call.
I know, it's tempting to read binary data like this
BitmapHeader header;
BitmapInfoHeader dibheader;
/*...*/
// Read header.
fread(&header, sizeof(BitmapHeader), 1, file);
// Read info header.
fread(&dibheader, sizeof(BitmapInfoHeader), 1, file);
but you really shouldn't do it that way. Why? Because the memory layout of structures may be padded to meet alignment constraints (yes, I know about packing pragmas), the type sizes used by the compiler may not match the data sizes in the binary file, and last but not least the endianness may not match.
Always read binary data into an intermediary buffer, and extract the fields from it in a well-defined way, with exactly specified offsets and types.
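For example, a sketch of that idea for the two BMP headers used here (the offsets are from the BMP format; the LE32 helper exists only in this example):
unsigned char hdr[54]; // 14-byte file header + 40-byte BITMAPINFOHEADER
if (fread(hdr, 1, sizeof(hdr), file) != sizeof(hdr))
    return 0;
// assemble little-endian 32-bit fields by hand: immune to struct padding and host endianness
#define LE32(p, off) ((unsigned int)(p)[(off)] | ((unsigned int)(p)[(off)+1] << 8) | ((unsigned int)(p)[(off)+2] << 16) | ((unsigned int)(p)[(off)+3] << 24))
unsigned int dataOffset = LE32(hdr, 10); // bfOffBits
unsigned int width      = LE32(hdr, 18); // biWidth
unsigned int height     = LE32(hdr, 22); // biHeight
unsigned short bpp      = (unsigned short)(hdr[28] | (hdr[29] << 8)); // biBitCount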
// Allocate memory for the image data.
data = (unsigned char*)malloc(dibheader.dataSize);
If this is C++, then use the new operator. If this is C, then don't cast from void * to the l-value type; it's bad style and may hide useful compiler warnings.
// Verify memory allocation.
if (!data)
{
free(data);
If data is NULL you mustn't free it.
// Swap R and B because bitmaps are BGR and OpenGL uses RGB.
for (unsigned int i = 0; i < dibheader.dataSize; i += 3)
{
B = data[i]; // Backup Blue.
data[i] = data[i + 2]; // Place red in right place.
data[i + 2] = B; // Place blue in right place.
}
OpenGL does indeed support BGR ordering. The format parameter is, surprise, GL_BGR.
// Generate texture image.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, dibheader.width, dibheader.height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
Well, and this misses setting all of the pixel store parameters. Always set every pixel store parameter before doing pixel transfers, they may be left in some undesired state from a previous operation. Better safe than sorry.
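Putting those two points together, the upload could look roughly like this (a sketch assuming the rows are tightly packed 24-bit BGR; GL_BGR requires OpenGL 1.2+ or the EXT_bgra extension):
// set the unpack state explicitly instead of relying on whatever a previous operation left behind
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // 3-byte pixels mean rows may not be 4-byte aligned
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
// upload the BGR data directly; no manual red/blue swap loop needed
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, dibheader.width, dibheader.height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);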

C++ OpenGL TGA Loading Failing

I've been working through a basic OpenGL tutorial on loading a TGA file to be used as a texture on a 3D object. I've been able to load the data from the TGA header, but when I attempt to load the actual image data, it fails. I'm not sure where it is going wrong. Here is my texture loading class:
Header file:
struct TGA_Header
{
GLbyte ID_Length;
GLbyte ColorMapType;
GLbyte ImageType;
// Color map specifications
GLbyte firstEntryIndex[2];
GLbyte colorMapLength[2];
GLbyte colorMapEntrySize;
//image specification
GLshort xOrigin;
GLshort yOrigin;
GLshort ImageWidth;
GLshort ImageHeight;
GLbyte PixelDepth;
GLbyte ImageDescriptor;
};
class Texture
{
public:
Texture(string in_filename, string in_name = "");
~Texture();
public:
unsigned short width;
unsigned short height;
unsigned int length;
unsigned char type;
unsigned char *imageData;
unsigned int bpp;
unsigned int texID;
string name;
static vector<Texture *> textures;
private:
bool loadTGA(string filename);
bool createTexture(unsigned char *imageData, int width, int height, int type);
void swap(unsigned char * ori, unsigned char * dest, GLint size);
void flipImage(unsigned char * image, bool flipHorizontal, bool flipVertical, GLushort width, GLushort height, GLbyte bpp);
};
Here is the load TGA function in the cpp:
bool Texture::loadTGA(string filename)
{
TGA_Header TGAheader;
ifstream file( filename.data(), std::ios::in, std::ios::binary );
//make sure the file was opened properly
if (!file.is_open() )
return false;
if( !file.read( (char *)&TGAheader, sizeof(TGAheader) ) )
return false;
//make sure the image is of a type we can handle
if( TGAheader.ImageType != 2 )
return false;
width = TGAheader.ImageWidth;
height = TGAheader.ImageHeight;
bpp = TGAheader.PixelDepth;
if( width < 0 || // if the width or height is less than 0, then
height <= 0 || // the image is corrupt
(bpp != 24 && bpp != 32) ) // make sure we are of the correct bit depth
{
return false;
}
//check for an alpha channel
GLuint type = GL_RGBA;
if ( bpp == 24 )
type = GL_RGB;
GLuint bytesPerPixel = bpp / 8;
//allocate memory for the TGA so we can read it
GLuint imageSize = width * height * bytesPerPixel;
imageData = new GLubyte[imageSize];
if ( imageData == NULL )
return false;
//make sure we are in the correct position to load the image data
file.seekg(-imageSize, std::ios::end);
// if something went wrong, make sure we free up the memory
//NOTE: It never gets past this point. The conditional always fails.
if ( !file.read( (char *)imageData, imageSize ) )
{
delete imageData;
return false;
}
//more code is down here, but it doesn't matter because it never gets past the read above
}
It seems to load some data, but it keeps returning that it failed. Any help on why would be greatly appreciated. Apologies if this gets a bit wordy, but I'm not sure what is or is not significant.
UPDATE:
So, I just rewrote the function. The ifstream I was using seemed to be the cause of the problem. Specifically, it would try to load far more bytes of data than I had asked for. I don't know the cause of the behavior, but I've listed my working code below. Thank you everyone for your help.
The problem could be that the TGA loader does not support compressed TGA files.
Make sure you do not compress the TGA and that the origin (less important) is Bottom Left.
I usually work with GIMP: when exporting, uncheck RLE compression and select the Bottom Left origin.
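If it helps, a small sketch of checking for that before reading any pixel data (image type 2 is uncompressed true-color, 10 is the RLE-compressed variant):
// TGAheader.ImageType: 2 = uncompressed true-color, 10 = RLE-compressed true-color
if (TGAheader.ImageType == 10)
    return false; // this loader does not decode RLE; re-export the file uncompressed
if (TGAheader.ImageType != 2)
    return false; // color-mapped, grayscale, and unknown types are not handled either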
I'm not familiar with C++, sorry.
Are you sure the line file.seekg(-imageSize, std::ios::end); is not supposed to be file.seekg(headerSize, std::ios::beg); ?
Makes more sense to seek from start than from end.
You should also check for ColorMapType != 0.
P.S. Here, in if( width < 0 || height <= 0 ), the width check should be <= as well.
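A sketch of what seeking from the start could look like, with the offset computed from the header fields (18 fixed bytes, then the image ID, then any color map):
// pixel data begins after the 18-byte header, the image ID field, and the color map (if present)
int colorMapEntries = (unsigned char)TGAheader.colorMapLength[0] | ((unsigned char)TGAheader.colorMapLength[1] << 8);
int colorMapBytes   = colorMapEntries * ((TGAheader.colorMapEntrySize + 7) / 8);
std::streamoff dataStart = 18 + TGAheader.ID_Length + colorMapBytes;
file.seekg(dataStart, std::ios::beg);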
So I changed from using an ifstream to a FILE. The ifstream was trying to load far more bytes than I had listed in the arguments. Here is the new code. (NOTE: It still needs to be optimized; I believe there are some unused variables floating around, but it works perfectly.) Thanks again everyone for your help.
The header file:
//struct to hold tga data
struct TGA_Header
{
GLbyte ID_Length;
GLbyte ColorMapType;
GLbyte ImageType;
// Color map specifications
GLbyte firstEntryIndex[2];
GLbyte colorMapLength[2];
GLbyte colorMapEntrySize;
//image specification
GLshort xOrigin;
GLshort yOrigin;
GLshort ImageWidth;
GLshort ImageHeight;
GLbyte PixelDepth;
GLbyte ImageDescriptor;
};
class Texture
{
public:
//functions
Texture(string in_filename, string in_name = "");
~Texture();
public:
//vars
unsigned char *imageData;
unsigned int texID;
string name;
//temp global access point for accessing all loaded textures
static vector<Texture *> textures;
private:
//can add additional load functions for other image types
bool loadTGA(string filename);
bool createTexture(unsigned char *imageData, int width, int height, int type);
void swap(unsigned char * ori, unsigned char * dest, GLint size);
void flipImage(unsigned char * image, bool flipHorizontal, bool flipVertical, GLushort width, GLushort height, GLbyte bpp);
};
#endif
Here is the load TGA function:
bool Texture::loadTGA(string filename)
{
//var for swapping colors
unsigned char colorSwap = 0;
GLuint type;
TGA_Header TGAheader;
FILE* file = fopen(filename.c_str(), "rb");
unsigned char Temp_TGAheader[18];
//check to make sure the file loaded
if( file == NULL )
return false;
fread(Temp_TGAheader, 1, sizeof(Temp_TGAheader), file);
//pull out the relevant data. 2-byte fields (shorts) must be converted
TGAheader.ID_Length = Temp_TGAheader[0];
TGAheader.ImageType = Temp_TGAheader[2];
TGAheader.ImageWidth = *static_cast<unsigned short*>(static_cast<void*>(&Temp_TGAheader[12]));
TGAheader.ImageHeight = *static_cast<unsigned short*>(static_cast<void*>(&Temp_TGAheader[14]));
TGAheader.PixelDepth = Temp_TGAheader[16];
//make sure the image is of a type we can handle
if( TGAheader.ImageType != 2 || TGAheader.ImageWidth <= 0 || TGAheader.ImageHeight <= 0 )
{
fclose(file);
return false;
}
//set the type
if ( TGAheader.PixelDepth == 32 )
{
type = GL_RGBA;
}
else if ( TGAheader.PixelDepth == 24 )
{
type = GL_RGB;
}
else
{
//incompatible image type
return false;
}
//remember bits != bytes. To convert we need to divide by 8
GLuint bytesPerPixel = TGAheader.PixelDepth / 8;
//The Memory Required For The TGA Data
unsigned int imageSize = TGAheader.ImageWidth * TGAheader.ImageHeight * bytesPerPixel;// Calculate
//request the needed memory
imageData = new GLubyte[imageSize];
if ( imageData == NULL ) // just in case
return false;
if( fread(imageData, 1, imageSize, file) != imageSize )
{
//Kill it
delete [] imageData;
fclose(file);
return false;
}
fclose(file);
for (unsigned int x = 0; x < imageSize; x +=bytesPerPixel)
{
colorSwap = imageData[x];
imageData[x] = imageData[x + 2];
imageData[x + 2] = colorSwap;
}
createTexture( imageData, TGAheader.ImageWidth, TGAheader.ImageHeight, type );
return true;
}

OpenGL Issue Drawing a Large Image Texture causing Skewing

I'm trying to store a 1365x768 image on a 2048x1024 texture in OpenGL ES, but the resulting image appears skewed once drawn. If I run the same 1365x768 image through gluScaleImage() and fit it onto the 2048x1024 texture, it looks fine when drawn, but this OpenGL call is slow and hurts performance.
I'm doing this on an Android device (a Motorola Milestone) which has 256MB of memory. Not sure if the memory is a factor, though, since it works fine when scaled using gluScaleImage() (it's just slower).
Mapping smaller textures (854x480 onto 1024x512, for example) works fine, though. Does anyone know why this is, and have suggestions for what I can do about it?
Update
Some code snippets to help understand context...
// uiImage is loaded. The texture dimensions are determined from upsizing the image
// dimensions to a power of two size:
// uiImage->_width = 1365
// uiImage->_height = 768
// width = 2048
// height = 1024
// Once the image is loaded:
// INT retval = gluScaleImage(GL_RGBA, uiImage->_width, uiImage->_height, GL_UNSIGNED_BYTE, uiImage->_texels, width, height, GL_UNSIGNED_BYTE, data);
copyImage(GL_RGBA, uiImage->_width, uiImage->_height, GL_UNSIGNED_BYTE, uiImage->_texels, width, height, GL_UNSIGNED_BYTE, data);
if (pixelFormat == RGB565 || pixelFormat == RGBA4444)
{
unsigned char* tempData = NULL;
unsigned int* inPixel32;
unsigned short* outPixel16;
tempData = new unsigned char[height*width*2];
inPixel32 = (unsigned int*)data;
outPixel16 = (unsigned short*)tempData;
if(pixelFormat == RGB565)
{
// "RRRRRRRRGGGGGGGGBBBBBBBBAAAAAAAA" --> "RRRRRGGGGGGBBBBB"
for(unsigned int i = 0; i < numTexels; ++i, ++inPixel32)
{
*outPixel16++ = ((((*inPixel32 >> 0) & 0xFF) >> 3) << 11) |
((((*inPixel32 >> 8) & 0xFF) >> 2) << 5) |
((((*inPixel32 >> 16) & 0xFF) >> 3) << 0);
}
}
if(tempData != NULL)
{
delete [] data;
data = tempData;
}
}
// [snip..]
// Copy function (mostly)
static void copyImage(GLint widthin, GLint heightin, const unsigned int* datain, GLint widthout, GLint heightout, unsigned int* dataout)
{
unsigned int* p1 = const_cast<unsigned int*>(datain);
unsigned int* p2 = dataout;
int nui = widthin * sizeof(unsigned int);
for(int i = 0; i < heightin; i++)
{
memcpy(p2, p1, nui);
p1 += widthin;
p2 += widthout;
}
}
In the render code, without changing my texture coordinates, I should see the correct image when using gluScaleImage(), and a smaller image (that requires some later correction factors) with the copyImage() code. This is what happens when the image is small (854x480, for example, works fine with copyImage()), but when I use the 1365x768 image, that's when the skewing appears.
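For reference, the correction factors for the copyImage() path amount to scaling the quad's texture coordinates by the image-to-texture size ratio; a minimal sketch (the names here are illustrative, not from the project):
// the image occupies only the lower-left 1365x768 region of the 2048x1024 texture,
// so the quad's maximum texture coordinates shrink to that sub-region
float uMax = (float)imageWidth  / (float)textureWidth;  // 1365.0f / 2048.0f
float vMax = (float)imageHeight / (float)textureHeight; //  768.0f / 1024.0f
GLfloat texCoords[] = {
    0.0f, 0.0f,
    uMax, 0.0f,
    uMax, vMax,
    0.0f, vMax,
};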
Finally solved the issue. The first thing to know is the maximum texture size allowed by the device:
GLint texSize;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &texSize);
When I ran this, the maximum texture size for the Motorola Milestone was 2048x2048, which was fine in my case.
After messing with the texture mapping to no end, I finally decided to try opening and re-saving the image... and voilà, it suddenly began working. I don't know what was wrong with the format the original image was stored in, but as advice to anyone else experiencing a similar problem: it might be worth looking at the image itself.

How to get a C method to accept UIImage parameter?

I am trying to do some image processing on a UIImage using some EAGLView code from the GLImageProcessing sample from Apple. The sample code is configured to perform processing on a pre-installed image (Image.png). I am trying to modify the code so that it will accept a UIImage (or at least CGImage data) of my choice and process that instead. The problem is, the texture-loader method loadTexture() (below) seems to accept only C structures as parameters, and I have not been able to get it to accept a UIImage* or a CGImage as a parameter. Can someone give me a clue as to how to bridge the gap so that I can pass my UIImage into the C function?
------------ from Texture.h ---------------
#ifndef TEXTURE_H
#define TEXTURE_H
#include "Imaging.h"
void loadTexture(const char *name, Image *img, RendererInfo *renderer);
#endif /* TEXTURE_H */
----------------from Texture.m---------------------
#import <UIKit/UIKit.h>
#import "Texture.h"
static unsigned int nextPOT(unsigned int x)
{
x = x - 1;
x = x | (x >> 1);
x = x | (x >> 2);
x = x | (x >> 4);
x = x | (x >> 8);
x = x | (x >>16);
return x + 1;
}
// This is not a fully generalized image loader. It is an example of how to use
// CGImage to directly access decompressed image data. Only the most commonly
// used image formats are supported. It will be necessary to expand this code
// to account for other uses, for example cubemaps or compressed textures.
//
// If the image format is supported, this loader will Gen a OpenGL 2D texture object
// and upload texels from it, padding to POT if needed. For image processing purposes,
// border pixels are also replicated here to ensure proper filtering during e.g. blur.
//
// The caller of this function is responsible for deleting the GL texture object.
void loadTexture(const char *name, Image *img, RendererInfo *renderer)
{
GLuint texID = 0, components, x, y;
GLuint imgWide, imgHigh; // Real image size
GLuint rowBytes, rowPixels; // Image size padded by CGImage
GLuint POTWide, POTHigh; // Image size padded to next power of two
CGBitmapInfo info; // CGImage component layout info
CGColorSpaceModel colormodel; // CGImage colormodel (RGB, CMYK, paletted, etc)
GLenum internal, format;
GLubyte *pixels, *temp = NULL;
CGImageRef CGImage = [UIImage imageNamed:[NSString stringWithUTF8String:name]].CGImage;
rt_assert(CGImage);
if (!CGImage)
return;
// Parse CGImage info
info = CGImageGetBitmapInfo(CGImage); // CGImage may return pixels in RGBA, BGRA, or ARGB order
colormodel = CGColorSpaceGetModel(CGImageGetColorSpace(CGImage));
size_t bpp = CGImageGetBitsPerPixel(CGImage);
if (bpp < 8 || bpp > 32 || (colormodel != kCGColorSpaceModelMonochrome && colormodel != kCGColorSpaceModelRGB))
{
// This loader does not support all possible CGImage types, such as paletted images
CGImageRelease(CGImage);
return;
}
components = bpp>>3;
rowBytes = CGImageGetBytesPerRow(CGImage); // CGImage may pad rows
rowPixels = rowBytes / components;
imgWide = CGImageGetWidth(CGImage);
imgHigh = CGImageGetHeight(CGImage);
img->wide = rowPixels;
img->high = imgHigh;
img->s = (float)imgWide / rowPixels;
img->t = 1.0;
// Choose OpenGL format
switch(bpp)
{
default:
rt_assert(0 && "Unknown CGImage bpp");
case 32:
{
internal = GL_RGBA;
switch(info & kCGBitmapAlphaInfoMask)
{
case kCGImageAlphaPremultipliedFirst:
case kCGImageAlphaFirst:
case kCGImageAlphaNoneSkipFirst:
format = GL_BGRA;
break;
default:
format = GL_RGBA;
}
break;
}
case 24:
internal = format = GL_RGB;
break;
case 16:
internal = format = GL_LUMINANCE_ALPHA;
break;
case 8:
internal = format = GL_LUMINANCE;
break;
}
// Get a pointer to the uncompressed image data.
//
// This allows access to the original (possibly unpremultiplied) data, but any manipulation
// (such as scaling) has to be done manually. Contrast this with drawing the image
// into a CGBitmapContext, which allows scaling, but always forces premultiplication.
CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(CGImage));
rt_assert(data);
pixels = (GLubyte *)CFDataGetBytePtr(data);
rt_assert(pixels);
// If the CGImage component layout isn't compatible with OpenGL, fix it.
// On the device, CGImage will generally return BGRA or RGBA.
// On the simulator, CGImage may return ARGB, depending on the file format.
if (format == GL_BGRA)
{
uint32_t *p = (uint32_t *)pixels;
int i, num = img->wide * img->high;
if ((info & kCGBitmapByteOrderMask) != kCGBitmapByteOrder32Host)
{
// Convert from ARGB to BGRA
for (i = 0; i < num; i++)
p[i] = (p[i] << 24) | ((p[i] & 0xFF00) << 8) | ((p[i] >> 8) & 0xFF00) | (p[i] >> 24);
}
// All current iPhoneOS devices support BGRA via an extension.
if (!renderer->extension[IMG_texture_format_BGRA8888])
{
format = GL_RGBA;
// Convert from BGRA to RGBA
for (i = 0; i < num; i++)
#if __LITTLE_ENDIAN__
p[i] = ((p[i] >> 16) & 0xFF) | (p[i] & 0xFF00FF00) | ((p[i] & 0xFF) << 16);
#else
p[i] = ((p[i] & 0xFF00) << 16) | (p[i] & 0xFF00FF) | ((p[i] >> 16) & 0xFF00);
#endif
}
}
// Determine if we need to pad this image to a power of two.
// There are multiple ways to deal with NPOT images on renderers that only support POT:
// 1) scale down the image to POT size. Loses quality.
// 2) pad up the image to POT size. Wastes memory.
// 3) slice the image into multiple POT textures. Requires more rendering logic.
//
// We are only dealing with a single image here, and pick 2) for simplicity.
//
// If you prefer 1), you can use CoreGraphics to scale the image into a CGBitmapContext.
POTWide = nextPOT(img->wide);
POTHigh = nextPOT(img->high);
if (!renderer->extension[APPLE_texture_2D_limited_npot] && (img->wide != POTWide || img->high != POTHigh))
{
GLuint dstBytes = POTWide * components;
GLubyte *temp = (GLubyte *)malloc(dstBytes * POTHigh);
for (y = 0; y < img->high; y++)
memcpy(&temp[y*dstBytes], &pixels[y*rowBytes], rowBytes);
img->s *= (float)img->wide/POTWide;
img->t *= (float)img->high/POTHigh;
img->wide = POTWide;
img->high = POTHigh;
pixels = temp;
rowBytes = dstBytes;
}
// For filters that sample texel neighborhoods (like blur), we must replicate
// the edge texels of the original input, to simulate CLAMP_TO_EDGE.
{
GLuint replicatew = MIN(MAX_FILTER_RADIUS, img->wide-imgWide);
GLuint replicateh = MIN(MAX_FILTER_RADIUS, img->high-imgHigh);
GLuint imgRow = imgWide * components;
for (y = 0; y < imgHigh; y++)
for (x = 0; x < replicatew; x++)
memcpy(&pixels[y*rowBytes+imgRow+x*components], &pixels[y*rowBytes+imgRow-components], components);
for (y = imgHigh; y < imgHigh+replicateh; y++)
memcpy(&pixels[y*rowBytes], &pixels[(imgHigh-1)*rowBytes], imgRow+replicatew*components);
}
if (img->wide <= renderer->maxTextureSize && img->high <= renderer->maxTextureSize)
{
glGenTextures(1, &texID);
glBindTexture(GL_TEXTURE_2D, texID);
// Set filtering parameters appropriate for this application (image processing on screen-aligned quads.)
// Depending on your needs, you may prefer linear filtering, or mipmap generation.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, internal, img->wide, img->high, 0, format, GL_UNSIGNED_BYTE, pixels);
}
if (temp) free(temp);
CFRelease(data);
CGImageRelease(CGImage);
img->texID = texID;
}
Side Note: The above code is the original, unmodified sample code from Apple and compiles without errors. However, when I try to modify the .h and .m to accept a UIImage* parameter (as below), the compiler generates the following error: "Error: expected declaration specifiers or '...' before UIImage"
----------Modified .h Code that generates the Compiler Error:-------------
void loadTexture(const char name, Image *img, RendererInfo *renderer, UIImage* newImage)
You are probably importing this .h into a .c file somewhere. That tells the compiler to use C rather than Objective-C. UIKit.h (and its many children) are Objective-C and cannot be compiled by a C compiler.
You can rename all your .c files to .m, but what you probably really want is just to use CGImageRef and import CGImage.h. CoreGraphics is C-based; UIKit is Objective-C. There is no problem, if you want, with Texture.m being Objective-C. Just make sure that Texture.h is pure C. Alternatively (and I do this a lot with C++ code), you can make a Texture+C.h header that provides just the C-safe functions you want to expose. Import Texture.h in Objective-C code and Texture+C.h in C code. Or name them the other way around if more convenient, with a Texture+ObjC.h.
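A tiny sketch of that split (the file and function names here are made up for illustration): keep a C-safe header that only mentions CoreGraphics types, and let the Objective-C side wrap it for UIImage callers.
/* Texture+C.h -- hypothetical C-safe header: no UIKit types, safe to include from plain C files */
#ifndef TEXTURE_C_H
#define TEXTURE_C_H
#include <CoreGraphics/CoreGraphics.h>
#include "Imaging.h"
/* same loader as before, but the caller supplies a CGImageRef instead of a file name */
void loadTextureFromCGImage(CGImageRef image, Image *img, RendererInfo *renderer);
#endif
Texture.m (compiled as Objective-C) can then keep a thin convenience wrapper that takes a UIImage* and simply forwards uiImage.CGImage to loadTextureFromCGImage().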
It sounds like your file isn't importing the UIKit header.
Why are you passing a new image to loadTexture, instead of using loadTexture's own UIImage loading to open the new image you want?
loadTexture:
void loadTexture(const char *name, Image *img, RendererInfo *renderer)
{
GLuint texID = 0, components, x, y;
GLuint imgWide, imgHigh; // Real image size
GLuint rowBytes, rowPixels; // Image size padded by CGImage
GLuint POTWide, POTHigh; // Image size padded to next power of two
CGBitmapInfo info; // CGImage component layout info
CGColorSpaceModel colormodel; // CGImage colormodel (RGB, CMYK, paletted, etc)
GLenum internal, format;
GLubyte *pixels, *temp = NULL;
[Why not have the following fetch your UIImage?]
CGImageRef CGImage = [UIImage imageNamed:[NSString stringWithUTF8String:name]].CGImage;
rt_assert(CGImage);
if (!CGImage)
return;