C++ OpenGL glTexImage2D Access Violation - c++

I'm writing an application using OpenGL (freeglut and glew).
I also wanted textures so I did some research on the Bitmap file format and wrote a struct for the main header and another for the DIB header (info header).
Then I started writing the loader. It automatically binds the texture to OpenGL. Here is the function:
static unsigned int ReadInteger(FILE *fp)
{
int a, b, c, d;
// Integer is 4 bytes long.
a = getc(fp);
b = getc(fp);
c = getc(fp);
d = getc(fp);
// Convert the 4 bytes to an integer.
return ((unsigned int) a) + (((unsigned int) b) << 8) +
(((unsigned int) c) << 16) + (((unsigned int) d) << 24);
}
static unsigned int ReadShort(FILE *fp)
{
int a, b;
// Short is 2 bytes long.
a = getc(fp);
b = getc(fp);
// Convert the 2 bytes to a 16-bit value (returned widened to unsigned int).
return ((unsigned int) a) + (((unsigned int) b) << 8);
}
GLuint LoadBMP(const char* filename)
{
FILE* file;
// Check if a file name was provided.
if (!filename)
return 0;
// Try to open file.
fopen_s(&file, filename, "rb");
// Return if the file could not be opened.
if (!file)
{
cout << "Warning: Could not find texture '" << filename << "'." << endl;
return 0;
}
// Read signature.
unsigned char signature[2];
fread(&signature, 2, 1, file);
// Use signature to identify a valid bitmap.
if (signature[0] != BMPSignature[0] || signature[1] != BMPSignature[1])
{
fclose(file);
return 0;
}
// Read width and height.
unsigned long width, height;
fseek(file, 16, SEEK_CUR); // After the signature we have 16 bytes until the width.
width = ReadInteger(file);
height = ReadInteger(file);
// Calculate data size (we'll only support 24bpp).
unsigned long dataSize;
dataSize = width * height * 3;
// Make sure planes is 1.
if (ReadShort(file) != 1)
{
cout << "Error: Could not load texture '" << filename << "' (planes is not 1)." << endl;
fclose(file);
return 0;
}
// Make sure bpp is 24.
if (ReadShort(file) != 24)
{
cout << "Error: Could not load texture '" << filename << "' (bits per pixel is not 24)." << endl;
fclose(file);
return 0;
}
// Move pointer to beginning of data. (After the bpp we have 24 bytes until the data.)
fseek(file, 24, SEEK_CUR);
// Allocate memory and read the image data. Plain new would throw std::bad_alloc
// instead of returning NULL, so use the nothrow form (requires <new>).
unsigned char* data = new (std::nothrow) unsigned char[dataSize];
if (!data)
{
fclose(file);
cout << "Warning: Could not allocate memory to store data of '" << filename << "'." << endl;
return 0;
}
// Check the fread return value; checking the pointer afterwards tells us nothing.
if (fread(data, dataSize, 1, file) != 1)
{
fclose(file);
delete[] data;
cout << "Warning: Could not load data from '" << filename << "'." << endl;
return 0;
}
// Close the file.
fclose(file);
// Create the texture.
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); //NEAREST);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height, GL_BGR_EXT, GL_UNSIGNED_BYTE, data);
return texture;
}
I know the bitmap's data is read correctly because I printed it to the console and compared it with the image opened in Paint.
The problem here is this line:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, dibheader.width,
dibheader.height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
Most of the time I run the application, this line crashes with the error:
Unhandled exception at 0x008ffee9 in GunsGL.exe: 0xC0000005: Access violation reading location 0x00af7002.
This is the Disassembly of where the error occurs:
movzx ebx,byte ptr [esi+2]
It's not an error in my loader alone, because I've also tried downloaded loaders.
A downloaded loader that I used was this one from NeHe.
EDIT: (CODE UPDATED ABOVE)
I rewrote the loader, but I still get the crash on the same line. Instead of that crash, I sometimes get a crash in mlock.c (same error message, if I recall correctly):
void __cdecl _lock (
int locknum
)
{
/*
* Create/open the lock, if necessary
*/
if ( _locktable[locknum].lock == NULL ) {
if ( !_mtinitlocknum(locknum) )
_amsg_exit( _RT_LOCK );
}
/*
* Enter the critical section.
*/
EnterCriticalSection( _locktable[locknum].lock );
}
On the line:
EnterCriticalSection( _locktable[locknum].lock );
Also, here is a screenshot from one of the times the application doesn't crash (the texture is obviously not right):
http://i.stack.imgur.com/4Mtso.jpg
Edit2:
Updated code with the new working one.
(The reply marked as an answer does not contain all that was needed for this to work, but it was vital)

Try glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before your glTexImage2D() call.

I know, it's tempting to read binary data like this
BitmapHeader header;
BitmapInfoHeader dibheader;
/*...*/
// Read header.
fread(&header, sizeof(BitmapHeader), 1, file);
// Read info header.
fread(&dibheader, sizeof(BitmapInfoHeader), 1, file);
but you really shouldn't do it that way. Why? Because the memory layout of structures may be padded to meet alignment constraints (yes, I know about packing pragmas), the type sizes of the compiler in use may not match the data sizes in the binary file, and, last but not least, the endianness may not match.
Always read binary data into an intermediary buffer, from which you extract the fields in a well-defined way, with exactly specified offsets and types.
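For the BMP headers that could look like the sketch below. The `le16`/`le32` and `parseBmpHeader` helpers are illustrative, not part of the original code; the offsets come from the BMP file format (10 = pixel data offset, 18 = width, 22 = height, 26 = planes, 28 = bits per pixel):

```cpp
#include <cstdint>
#include <cstddef>

// Extract little-endian values from a raw byte buffer at explicit offsets,
// independent of struct padding and host endianness.
static uint16_t le16(const unsigned char* p) {
    return (uint16_t)((uint16_t)p[0] | ((uint16_t)p[1] << 8));
}
static uint32_t le32(const unsigned char* p) {
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

struct BmpInfo {
    uint32_t dataOffset;      // where the pixel data starts in the file
    int32_t  width, height;   // height may be negative (top-down bitmap)
    uint16_t planes, bpp;
};

// Parse the fields LoadBMP needs from the first 54 bytes of a BMP file.
static BmpInfo parseBmpHeader(const unsigned char* buf) {
    BmpInfo info;
    info.dataOffset = le32(buf + 10);
    info.width  = (int32_t)le32(buf + 18);
    info.height = (int32_t)le32(buf + 22);
    info.planes = le16(buf + 26);
    info.bpp    = le16(buf + 28);
    return info;
}
```

Read the first 54 bytes with one fread into an `unsigned char` buffer, then pull the fields out of it; the in-memory layout of any struct never enters the picture.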
// Allocate memory for the image data.
data = (unsigned char*)malloc(dibheader.dataSize);
If this is C++, then use the new operator. If this is C, then don't cast the result from void * to the target pointer type; it's bad style and may mask useful compiler warnings.
// Verify memory allocation.
if (!data)
{
free(data);
If data is NULL there is nothing to free; the call is pointless (though free(NULL) is a well-defined no-op).
// Swap R and B because bitmaps are BGR and OpenGL uses RGB.
for (unsigned int i = 0; i < dibheader.dataSize; i += 3)
{
B = data[i]; // Backup Blue.
data[i] = data[i + 2]; // Place red in right place.
data[i + 2] = B; // Place blue in right place.
}
OpenGL does indeed support BGR ordering directly. The format parameter is, surprise, GL_BGR.
// Generate texture image.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, dibheader.width, dibheader.height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
Well, and this misses setting the pixel store parameters. Always set every pixel store parameter before doing pixel transfers; they may be left in some undesired state from a previous operation. Better safe than sorry.
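One pixel-store parameter in particular bites BMP loaders: GL_UNPACK_ALIGNMENT defaults to 4, so for a 3-bytes-per-pixel image whose width is not a multiple of 4, glTexImage2D reads more bytes per row than a tightly packed buffer contains. A sketch of the stride GL assumes (glRowStride is an illustrative helper, not a GL function):

```cpp
#include <cstddef>

// Bytes glTexImage2D reads per row under a given GL_UNPACK_ALIGNMENT.
// If the buffer holds only width*bytesPerPixel bytes per row but the
// alignment is 4, GL reads past the end of the buffer on the last row --
// exactly the kind of access violation described above.
static size_t glRowStride(size_t width, size_t bytesPerPixel, size_t alignment) {
    size_t raw = width * bytesPerPixel;
    return (raw + alignment - 1) / alignment * alignment;
}
```

With a 254-pixel-wide 24 bpp image, glRowStride(254, 3, 4) is 764 while the packed row is only 762 bytes; setting GL_UNPACK_ALIGNMENT to 1 makes the two agree.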

Related

Access Violation after creating GL_TEXTURE_2D_ARRAY

I'm getting an access violation on every GL call after this texture initialization (actually, the last GLCALL(glBindTexture(m_Target, bound)); also causes an access violation, so the code above it is probably what's causing it):
Texture2D::Texture2D(unsigned int format, unsigned int width, unsigned int height, unsigned int unit, unsigned int mimapLevels, unsigned int layers)
: Texture(GL_TEXTURE_2D_ARRAY, unit)
{
unsigned int internalFormat;
if (format == GL_DEPTH_COMPONENT)
{
internalFormat = GL_DEPTH_COMPONENT32;
}
else
{
internalFormat = format;
}
m_Format = format;
m_Width = width;
m_Height = height;
unsigned int bound = 0;
glGetIntegerv(GL_TEXTURE_BINDING_2D_ARRAY, (int*)&bound);
GLCALL(glGenTextures(1, &m_ID));
GLCALL(glActiveTexture(GL_TEXTURE0 + m_Unit));
GLCALL(glBindTexture(m_Target, m_ID));
GLCALL(glTexParameteri(m_Target, GL_TEXTURE_MIN_FILTER, GL_LINEAR));
GLCALL(glTexParameteri(m_Target, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
GLCALL(glTexStorage3D(m_Target, mimapLevels, internalFormat, width, height, layers));
for (size_t i = 0; i < layers; i++)
{
glTexSubImage3D(m_Target, 0, 0, 0, i, m_Width, m_Height, 1, m_Format, s_FormatTypeMap[internalFormat], NULL);
}
GLCALL(glBindTexture(m_Target, bound));
}
The OpenGL function pointers are initialized with glad at the beginning of the program:
if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress))
{
std::cout << "Failed to initialize GLAD" << std::endl;
return -1;
}
And this only happens with GL_TEXTURE_2D_ARRAY, even when this is the first thing in my code (after initialization, of course). Example code:
auto t = Texture2D(GL_DEPTH_COMPONENT, 1024, 1024, 10, 1, 4);
Any idea what may be causing it?
Thanks in advance!
You're passing a NULL for the last argument of glTexSubImage3D, but OpenGL does not allow that:
TexSubImage*D and TextureSubImage*D arguments width, height, depth, format, type, and data match the corresponding arguments to the corresponding TexImage*D command (where those arguments exist), meaning that they accept the same values, and have the same meanings. The exception is that a NULL data pointer does not represent unspecified image contents.
...and there's no text that allows a NULL pointer, therefore you cannot pass NULL.
It's unclear what you're trying to achieve with those glTexSubImage3D calls. Since you're using an immutable texture (glTexStorage3D), you don't need to do anything extra. If instead you want to clear your texture, you can use glClearTexSubImage, which does accept NULL for data to mean "clear with zeros".

Magick Pixel data garbled after minify

I need to read in images of arbitrary sizes and apply them to GL textures. I am trying to resize the images with ImageMagick to get them to fit inside a maximum 1024 dimension texture.
Here is my code:
Magick::Image image(filename);
int width = image.columns();
int height = image.rows();
cout << "Image dimensions: " << width << "x" << height << endl;
// resize it to fit a texture
while ( width>1024 || height>1024 ) {
try {
image.minify();
}
catch (exception &error) {
cout << "Error minifying: " << error.what() << " Skipping." << endl;
return;
}
width = image.columns();
height = image.rows();
cout << " -- minified to: " << width << "x" << height << endl;
}
// transform the pixels to something GL can use
Magick::Pixels view(image);
GLubyte *pixels = (GLubyte*)malloc( sizeof(GLubyte)*width*height*3 );
for ( ssize_t row=0; row<height; row++ ) {
Magick::PixelPacket *im_pixels = view.get(0,row,width,1);
for ( ssize_t col=0; col<width; col++ ) {
*(pixels+(row*width+col)*3+0) = (GLubyte)im_pixels[col].red;
*(pixels+(row*width+col)*3+1) = (GLubyte)im_pixels[col].green;
*(pixels+(row*width+col)*3+2) = (GLubyte)im_pixels[col].blue;
}
}
texPhoto = LoadTexture( pixels, width, height );
free(pixels);
The code for LoadTexure() looks like this:
GLuint LoadTexture(GLubyte* pixels, GLuint width, GLuint height) {
GLuint textureId;
glPixelStorei( GL_UNPACK_ALIGNMENT, 1 );
glGenTextures( 1, &textureId );
glBindTexture( GL_TEXTURE_2D, textureId );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, (unsigned int*)pixels );
return textureId;
}
All the textures work great except when they have had image.minify() applied to them. Once minified the pixels are basically just random noise. There must be something else going on that I'm not aware of. I am probably missing something in the ImageMagick docs about what I am supposed to do to get the pixel data after I minify it.
How do I properly get the pixel data after a call to minify()?
It turns out that the problem was in the libraries and has to do with my environment, a Raspberry Pi embedded system. Maybe a plain recompile of the sources would have been sufficient, but for my purposes I also decided to reduce the quantum size to 8 bits rather than ImageMagick's default of 16. I also chose a few other configure options for my scenario.
It basically boiled down to this:
apt-get remove libmagick++-dev
wget http://www.imagemagick.org/download/ImageMagick.tar.gz
tar xvfz ImageMagick.tar.gz
cd ImageMagick-6.8.7-2
./configure --with-quantum-depth=8 --disable-openmp --disable-largefile --without-freetype --without-x
make
make install
And then compile against these libraries instead. I also needed to make soft links in /usr/lib to the .so files.
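The garbling itself comes from the quantum depth: on a 16-bit-quantum build, casting a PixelPacket channel straight to GLubyte keeps only the low byte. An alternative to rebuilding is to scale each channel down, roughly what ImageMagick's own ScaleQuantumToChar does. A sketch (quantumToByte is an illustrative helper, assuming 16-bit quanta):

```cpp
#include <cstdint>

// Map a 16-bit quantum to an 8-bit byte by taking the high byte.
// A plain (GLubyte) cast keeps the LOW byte instead, which is why the
// pixels look like random noise on a quantum-depth-16 build.
static uint8_t quantumToByte(uint16_t q) {
    return (uint8_t)(q >> 8);
}
```

In the pixel-copy loop above, `(GLubyte)im_pixels[col].red` would become `quantumToByte(im_pixels[col].red)`, and the result is independent of the library's configured quantum depth (for 8-bit quanta the shift is simply unnecessary).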

DDS texture transparency rendered black Opengl

I am currently trying to render textured objects in OpenGL. Everything worked fine until I wanted to render a texture with transparency. Instead of rendering the object transparent, it just rendered completely black.
The method for loading the texture file is this:
// structures for reading and information variables
char magic[4];
unsigned char header[124];
unsigned int width, height, linearSize, mipMapCount, fourCC;
unsigned char* dataBuffer;
unsigned int bufferSize;
fstream file(path, ios::in|ios::binary);
// read magic and header
if (!file.read((char*)magic, sizeof(magic))){
cerr<< "File " << path << " not found!"<<endl;
return false;
}
if (magic[0]!='D' || magic[1]!='D' || magic[2]!='S' || magic[3]!=' '){
cerr<< "File does not comply with dds file format!"<<endl;
return false;
}
if (!file.read((char*)header, sizeof(header))){
cerr<< "Not able to read file information!"<<endl;
return false;
}
// derive information from header
height = *(int*)&(header[8]);
width = *(int*)&(header[12]);
linearSize = *(int*)&(header[16]);
mipMapCount = *(int*)&(header[24]);
fourCC = *(int*)&(header[80]);
// determine dataBuffer size
bufferSize = mipMapCount > 1 ? linearSize * 2 : linearSize;
dataBuffer = new unsigned char [bufferSize*2];
// read data and close file
if (file.read((char*)dataBuffer, bufferSize/1.5))
cout<<"Loading texture "<<path<<" successful"<<endl;
else{
cerr<<"Data of file "<<path<<" corrupted"<<endl;
return false;
}
file.close();
// check pixel format
unsigned int format;
switch(fourCC){
case FOURCC_DXT1:
format = GL_COMPRESSED_RGBA_S3TC_DXT1_EXT;
break;
case FOURCC_DXT3:
format = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT;
break;
case FOURCC_DXT5:
format = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT;
break;
default:
cerr << "Compression type not supported or corrupted!" << endl;
return false;
}
glGenTextures(1, &ID);
glBindTexture(GL_TEXTURE_2D, ID);
glPixelStorei(GL_UNPACK_ALIGNMENT,1);
unsigned int blockSize = (format == GL_COMPRESSED_RGBA_S3TC_DXT1_EXT) ? 8 : 16;
unsigned int offset = 0;
/* load the mipmaps */
for (unsigned int level = 0; level < mipMapCount && (width || height); ++level) {
unsigned int size = ((width+3)/4)*((height+3)/4)*blockSize;
glCompressedTexImage2D(GL_TEXTURE_2D, level, format, width, height,
0, size, dataBuffer + offset);
offset += size;
width /= 2;
height /= 2;
}
textureType = DDS_TEXTURE;
return true;
In the fragment shader I just set gl_FragColor = texture2D(myTextureSampler, UVcoords);
I hope that there is an easy explanation such as some code missing.
In the OpenGL initialization I enabled GL_BLEND and set a blend function.
Does anyone have an idea of what I did wrong?
Make sure the blend function is the correct function for what you are trying to accomplish. For what you've described that should be glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
You probably shouldn't set the blend function in your openGL initialization function but should wrap it around your draw calls like:
glEnable(GL_BLEND)
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
//gl draw functions (glDrawArrays,glDrawElements,etc..)
glDisable(GL_BLEND)
Are you clearing the 2D texture binding before you swap buffers? I.e.:
glBindTexture(GL_TEXTURE_2D, 0);

Getting incorrect width when decoding bitmap

I have an error in my source code that causes bitmap images to appear too wide. For example, it prints the width and the height; the height is perfect (256), and the width should also be 256, but the program says it is billions of pixels wide, and the value is different every time. Here is the source code.
#include "glob.h"
/* Image type - contains height, width, and data */
struct Image {
unsigned long sizeX;
unsigned long sizeY;
char *data;
};
typedef struct Image Image;
int ImageLoad(char *filename, Image *image) {
FILE *file;
unsigned long size; // size of the image in bytes.
unsigned long i; // standard counter.
unsigned short int planes; // number of planes in image (must be 1)
unsigned short int bpp; // number of bits per pixel (must be 24)
char temp; // temporary color storage for bgr-rgb conversion.
// make sure the file is there.
if ((file = fopen(filename, "rb"))==NULL){
printf("bitmap Not Found : %s\n",filename);
return 0;
}
// seek through the bmp header, up to the width/height:
fseek(file, 18, SEEK_CUR);
// read the width
if ((i = fread(&image->sizeX, 4, 1, file)) != 1) {
printf("Error reading width from %s.\n", filename);
return 0;
}
printf("Width of %s: %lu\n", filename, image->sizeX);
// read the height
if ((i = fread(&image->sizeY, 4, 1, file)) != 1) {
printf("Error reading height from %s.\n", filename);
return 0;
}
printf("Height of %s: %lu\n", filename, image->sizeY);
// calculate the size (assuming 24 bits or 3 bytes per pixel).
size = image->sizeX * image->sizeY * 3;
// read the planes
if ((fread(&planes, 2, 1, file)) != 1) {
printf("Error reading planes from %s.\n", filename);
return 0;
}
if (planes != 1) {
printf("Planes from %s is not 1: %u\n", filename, planes);
return 0;
}
// read the bpp
if ((i = fread(&bpp, 2, 1, file)) != 1) {
printf("Error reading bpp from %s.\n", filename);
return 0;
}
if (bpp != 24) {
printf("Bpp from %s is not 24: %u\n", filename, bpp);
return 0;
}
// seek past the rest of the bitmap header.
fseek(file, 24, SEEK_CUR);
// read the data.
image->data = (char *) malloc(size);
if (image->data == NULL) {
printf("Error allocating memory for color-corrected image data\n");
return 0;
}
if ((i = fread(image->data, size, 1, file)) != 1) {
printf("Error reading image data from %s.\n", filename);
return 0;
}
for (i=0; i<size; i+=3) { // reverse all of the colors. (bgr -> rgb)
temp = image->data[i];
image->data[i] = image->data[i+2];
image->data[i+2] = temp;
}
// we're done. (Return 1 for success; 0 is used for the error paths above.)
return 1;
}
// Load Bitmaps And Convert To Textures
void glob::LoadGLTextures() {
// Load Texture
Image *image1;
// allocate space for texture
image1 = (Image *) malloc(sizeof(Image));
if (image1 == NULL) {
printf("(image1 == NULL)\n");
exit(0);
}
ImageLoad("data/textures/NeHe.bmp", image1);
// Create Texture
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture); // 2d texture (x and y size)
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR); // scale linearly when image bigger than texture
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR); // scale linearly when image smaller than texture
// 2d texture, level of detail 0 (normal), 3 components (red, green, blue), x size from image, y size from image,
// border 0 (normal), rgb color data, unsigned byte data, and finally the data itself.
glTexImage2D(GL_TEXTURE_2D, 0, 3, image1->sizeX, image1->sizeY, 0, GL_RGB, GL_UNSIGNED_BYTE, image1->data);
};
glob.h is this:
#ifndef GLOB_H_INCLUDED
#define GLOB_H_INCLUDED
#include <iostream>
#include <stdlib.h>
#include <stdio.h> // Header file for standard file i/o.
#include <GL/glx.h> /* this includes the necessary X headers */
#include <GL/gl.h>
//#include <GL/glut.h> // Header File For The GLUT Library
//#include <GL/glu.h> // Header File For The GLu32 Library
#include <X11/X.h> /* X11 constant (e.g. TrueColor) */
#include <X11/keysym.h>
class glob {
bool Running;
GLuint texture; //make an array when we start using more then 1
Display *dpy;
Window win;
XEvent event;
GLboolean doubleBuffer;
GLboolean needRedraw;
GLfloat xAngle, yAngle, zAngle;
float camera_x, camera_y, camera_z;
public:
glob();
int OnExecute();
public:
int init(int argc, char **argv);
void LoadGLTextures();
void OnEvent();
void redraw(void);
};
#endif // GLOB_H_INCLUDED
Can anybody help me fix this problem?
Lots of things could be going wrong.
If it's a very old file, it could have a BITMAPCOREHEADER which has size fields that are only 2 bytes each.
Is your machine little endian? BMP files are stored little endian.
Note that the height may be negative (which implies it's a top-down bitmap instead of a bottom-up one). If you interpret a small negative number as an unsigned 32-bit int, you'll see values in the billions.
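A sketch of that effect (bmpHeight is an illustrative helper): the height field is four little-endian bytes holding a signed 32-bit value, and reinterpreting a top-down bitmap's negative height as unsigned yields a huge number:

```cpp
#include <cstdint>

// Decode the BMP height field: four little-endian bytes, signed.
static int32_t bmpHeight(const unsigned char* p) {
    uint32_t u = (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
                 ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    return (int32_t)u; // a top-down bitmap stores a negative height here
}
```

The bytes 00 FF FF FF decode to -256 as a signed height; read into an unsigned long and printed with %lu they come out as 4294967040, the same kind of "billions of pixels" value described in the question.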
Also, your seek to the actual pixel data assumes that it starts right after the bitmap header. This is common, but not required. The file header contains the offset of the actual pixel data. (Microsoft documentation calls this the "bitmap bits" or the "color data".)
I recommend doing a hex dump of the beginning of your file and step through it by hand to make sure all your offsets and assumptions are correct. Feel free to paste the beginning of a hex dump into your question.
Are you on Windows? Can you just call LoadImage?

Custom static library linking error with MinGW and QT Creator

I have made a Win32 static library (it doesn't actually contain Win32 code) in MSVC 2008, and I am currently trying to link to it in QT Creator. But whenever I compile, I get the error:
C:\Users\Snowball\Documents\QT Creator\Libraries\FontSystem\main.cpp:-1: error: undefined reference to `NGUI::CFont::CFont()'
The library is a font system that loads a PNG using FreeImage, "cuts" it up into individual symbols, then passes the image data to gluBuild2DMipMaps(), which creates an OpenGL texture out of it to be used later when drawing strings. I have ALL my class methods defined, and the whole class is part of a namespace called NGUI, so the font system won't be confused with another if for some reason two are in use. To link this library, I simply added the following line to my .pro file: LIBS += FontSystem.lib
and the only file in the application right now is this:
#include "fontsystem.h"
using namespace NGUI;
int main(int argc, char *argv[])
{
cout<< "Starting the FontSystem..."<< endl;
CFont *cFont = new CFont();
cout<< "FontSystem Started!"<< endl;
system("sleep 1");
return 0;
}
The file fontsystem.h looks like this:
#ifndef FONTSYSTEM_H
#define FONTSYSTEM_H
// Include the Basic C++ headers.
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <assert.h>
#include <limits>
using namespace std;
// Include the OpenGL and Image Headers.
#include <GL/gl.h>
#include <GL/glu.h>
#include "utilities.h"
// Macros.
#define DIM(x) (sizeof(x)/sizeof(*(x)))
/*
The Font System works by loading all the font images (max image size 32px^2) into memory and storing
the OpenGL texture ID's into an array that can be access at all times. The DrawString() functions will
search through the string for the specified character requested to draw and then it will draw a quad
and paint the texture on it. Since all the images are pre-loaded, no loading of the images at load time
is necessary; this is memory consuming but efficient for the CPU. ALL functions WILL return a character
string specifying errors or success. A function will work as long as it can, and when an error happens,
unless the error is fatal, the functions will NOT rollback changes! This ensures that during testing, a
very certain bug can be spotted.
*/
namespace NGUI // This namespace is used to ensure no confusion happens. This font system paints 2D fonts for GUIs.
{
class CFont
{
public:
CFont();
~CFont();
template<typename tChar> char* DrawString(tChar *apString, int aiSize, int aiX, int aiY);
template<typename tNum> char* DrawString(tNum anNumber, int aiSize, int aiX, int aiY);
private:
char* SetupFont(); // This function will load as many images as possible into memory.
GLuint miTextIDs[36];
int miDrawIDs[1024];
};
}
#endif // FONTSYSTEM_H
EDIT:
Here is the implementation file for fontsystem.h
#include "fontsystem.h"
namespace NGUI
{
CFont::CFont()
{
SetupFont();
}
CFont::~CFont() {}
template<typename tChar>
char* CFont::DrawString(tChar *apString, int aiSize, int aiX, int aiY)
{
// Search the string from most significant character to least significant.
int iSelectIndex = 0;
for(size_t i = 0; apString[i] != NULL; ++i)
{
iSelectIndex = apString[i] >= '0' && apString[i] <= '9' ? (apString[i] - '0') :
apString[i] >= 'A' && apString[i] <= 'Z' ? (apString[i] - 'A' + 10) :
apString[i] >= 'a' && apString[i] <= 'z' ? (apString[i] - 'a' + 10) :
apString[i] == ' ' ? 36 : // Special case: checks whether the current character is a space.
-1;
if(iSelectIndex == -1)
{
return "The String Is Corrupt! Aborting!";
}
// Add the current selected character to the drawing array.
miDrawIDs[i] = iSelectIndex;
}
// Go through and draw each and every character.
for(size_t i = 0; i < DIM(miDrawIDs); ++i)
{
// Paint each quad with the X,Y coordinates. After each quad has been successfully drawn,
// add the size to the X coordinate. NOTE: Each character is square!!!
if(miDrawIDs[i] != 36)
{
glBindTexture(GL_TEXTURE_2D, miDrawIDs[i]);
}
// The font color is always white.
glColor4f(1.0, 1.0, 1.0, 0.0); // The alpha argument in the function call is set to 0 to allow color only where image data is present.
glBegin(GL_QUADS);
glTexCoord2i(0, 0);
glVertex2i(aiX, aiY);
glTexCoord2i(1, 0);
glVertex2i(aiX + aiSize, aiY);
glTexCoord2i(1, 1);
glVertex2i(aiX + aiSize, aiY + aiSize);
glTexCoord2i(0, 1);
glVertex2i(aiX, aiY + aiSize);
glEnd();
// Now, increase the X position by the size.
aiX += aiSize;
}
return "Successful Drawing of String!";
}
template<typename tNum>
char* CFont::DrawString(tNum anNumber, int aiSize, int aiX, int aiY)
{
// Convert the supplied number to a character string via snprintf().
char *vTempString = new char[1024];
snprintf(vTempString, 1024, "%f", anNumber);
// Next, run DrawString().
return DrawString<char>(vTempString, aiSize, aiX, aiY);
}
char* CFont::SetupFont()
{
// First Load The PNG file holding the font.
FreeImage_Initialise(false);
FIBITMAP *spBitmap = FreeImage_Load(FIF_PNG, "Font.png", BMP_DEFAULT);
if(!spBitmap)
{
return "Was Unable To Open/Decode Bitmap!";
}
// Do an image sanity check.
if(!FreeImage_HasPixels(spBitmap))
{
return "The Image doesn't contain any pixel data! Aborting!";
}
// The Image will have the red and blue channel reversed, so we need to correct them.
SwapRedBlue32(spBitmap);
// Retrieve all the image data from FreeImage.
unsigned char *pData = FreeImage_GetBits(spBitmap);
int iWidth = FreeImage_GetWidth(spBitmap);
// Cutup the PNG.
int iFontElementSize = (32*32)*4; // The first two numbers are the dimensions of the element; the last number (4) is the number of color channels (red, green, blue, and alpha).
bool bDone = false; // This bit is only set when the entire image has been loaded.
unsigned char *pElemBuff = new unsigned char[iFontElementSize]; // The temporary element buffer.
int iDataSeek = 4; // Start with an offset of 4 because the first byte of image data starts there.
int iTexIdx = 0; // This is an offset specifying which texture to create/bind to.
// Create all 36 OpenGL texures. 0-9 and A-Z and finally space (' ')
glGenTextures(37, miTextIDs);
while(!bDone)
{
// Now load the an element into the buffer.
for(int i = 0, iXCount = 0, iYCount = 0;
i < iFontElementSize; ++i, ++iXCount)
{
if(iXCount >= (32*4))
{
iXCount = 0; // Reset the column offset.
++iYCount; // Move down 1 row.
iDataSeek += ((iWidth * 4) - (32*4)); // Set the data seek to the next corresponding piece of image data.
}
if(pData[iDataSeek] == NULL)
{
break;
}
pElemBuff[i] = pData[iDataSeek];
}
// Check to see if we are done loading to prevent memory corruption and leakage.
if(bDone || iTexIdx >= 37)
{
break;
}
// Create The OpenGL Texture with the current Element.
glBindTexture(GL_TEXTURE_2D, miTextIDs[iTexIdx]);
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, 32, 32, GL_RGBA, GL_UNSIGNED_BYTE, pElemBuff);
// Set the correct texture environment for the current texture.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
}
// Do a little house cleaning!
delete[] pElemBuff;
// Note: pData points into the FIBITMAP's storage; FreeImage_Unload below frees it, so it must not be deleted here.
FreeImage_Unload(spBitmap);
FreeImage_DeInitialise();
}
}
PLEASE NOTE: this code hasn't been tested yet, but it compiles fine (according to MSVC 2008).
I forgot to change system("sleep 1"); to system("PAUSE");. I had it invoking that command because I originally built this on Linux.
EDIT 2: I have updated the implementation code to reflect the file.