FBO textures get flipped/rotated - OpenGL

I am capturing a couple of images through FBOs. I then reuse these images, adding something to them (again using FBOs and shaders). Now, for some reason, the images get rotated, and I have no idea where it happens.
Below is some of the code the bug may be connected with. I can supply more code on request.
I save the images like this:
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
int bpp = 4; // Assuming a 32-bit display with a byte each for red, green, blue, and alpha.
ByteBuffer buffer = BufferUtils.createByteBuffer(SAVE_WIDTH * SAVE_HEIGHT * bpp);
glReadPixels(0, 0, SAVE_WIDTH, SAVE_HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

File file = new File("picture" + k + ".png"); // The file to save to.
String format = "png"; // Example: "PNG" or "JPG"
BufferedImage image = new BufferedImage(SAVE_WIDTH, SAVE_HEIGHT, BufferedImage.TYPE_INT_ARGB);

for (int x = 0; x < SAVE_WIDTH; x++)
    for (int y = 0; y < SAVE_HEIGHT; y++) {
        int i = (x + (SAVE_WIDTH * y)) * bpp;
        int r = buffer.get(i) & 0xFF;
        int g = buffer.get(i + 1) & 0xFF;
        int b = buffer.get(i + 2) & 0xFF;
        int a = buffer.get(i + 3) & 0xFF;
        image.setRGB(x, SAVE_HEIGHT - (y + 1), (a << 24) | (r << 16) | (g << 8) | b);
    }

try {
    ImageIO.write(image, format, file);
} catch (IOException e) {
    e.printStackTrace();
}
And I load them like this:
ByteBuffer buf = null;
File file = new File(filename);
if (file.exists()) {
    try {
        BufferedImage image = ImageIO.read(file);
        buf = Util.getImageDataFromImage(image);
    } catch (IOException ex) {
        Logger.getLogger(SkyBox.class.getName()).log(Level.SEVERE, null, ex);
    }
} else {
    int length = SAVE_WIDTH * SAVE_HEIGHT * 4;
    buf = ByteBuffer.allocateDirect(length);
    for (int i = 0; i < length; i++)
        buf.put((byte) 0xFF);
    buf.rewind();
}
// Create a new texture object in memory and bind it
glBindTexture(GL_TEXTURE_2D, pictureTextureId);
// All RGB bytes are aligned to each other and each component is 1 byte
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
// Upload the texture data and generate mip maps (for scaling)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, SAVE_WIDTH, SAVE_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, buf);
// Setup what to do when the texture has to be scaled
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
And getImageDataFromImage() looks like this:
public static ByteBuffer getImageDataFromImage(BufferedImage bufferedImage) {
    WritableRaster wr = bufferedImage.getRaster();
    DataBuffer db = wr.getDataBuffer();
    DataBufferByte dbb = (DataBufferByte) db;
    byte[] bytes = dbb.getData();
    ByteBuffer byteBuffer = ByteBuffer.allocateDirect(bytes.length);
    // Reverse each 4-byte group (e.g. ABGR -> RGBA).
    for (int i = 0; i < bytes.length; i += 4) {
        byteBuffer.put(bytes[i + 3]);
        byteBuffer.put(bytes[i + 2]);
        byteBuffer.put(bytes[i + 1]);
        byteBuffer.put(bytes[i]);
    }
    byteBuffer.flip();
    return byteBuffer;
}

Rotated, or flipped vertically? If they're flipped, that's because OpenGL and image file formats don't necessarily agree on the origin of the coordinate system. With OpenGL and the usual projection setups, the origin is in the lower left; most image file formats and I/O libraries assume the origin is in the upper left.
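If it's a vertical flip, the usual fix is to reverse the rows once, either right after glReadPixels or when loading. A minimal C++ sketch of the idea (the question's code is Java, but the loop is the same; assumes a tightly packed RGBA buffer):
#include <cstring>
#include <vector>

// Flip a tightly packed RGBA buffer (as returned by glReadPixels) vertically,
// converting between OpenGL's bottom-left origin and an image file's top-left origin.
void flipRows(unsigned char* pixels, int width, int height)
{
    const int rowBytes = width * 4;
    std::vector<unsigned char> tmp(rowBytes);
    for (int y = 0; y < height / 2; ++y) {
        unsigned char* top = pixels + y * rowBytes;
        unsigned char* bottom = pixels + (height - 1 - y) * rowBytes;
        std::memcpy(tmp.data(), top, rowBytes);
        std::memcpy(top, bottom, rowBytes);
        std::memcpy(bottom, tmp.data(), rowBytes);
    }
}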

Related

Extracting Pixels From SDL2 Surface Created With SDL_TTF

I'm working on a program that creates an SDL_Surface using the font from http://www.fontspace.com/work-ins-studio/variane-script. I set the background of the surface to be transparent, then extract the pixels from the surface and place them in part of an OpenGL texture.
It all works fine, except that the text ends up garbled (it should read "testing").
My question: did I mess up the math somehow and do this myself, or is this just the behaviour of SDL_TTF? And if it is just the behaviour of SDL_TTF, how do I work around it to get pixel data that I can use?
Here is the relevant code:
int main(int argc, char* args[]) {
    //other sdl and opengl overhead stuff here...
    TTF_Init();
    //shader setup here...
    TTF_Font *font;
    font = TTF_OpenFont("VarianeScript.ttf", 50);
    SDL_Surface* surface;
    SDL_Color color = { 255, 0, 0 };
    surface = TTF_RenderText_Solid(font, "testing", color);
    SDL_SetSurfaceAlphaMod(surface, 255);
    int surfaceWidth = surface->w;
    int surfaceHeight = surface->h;
    Uint8 red, green, blue, alpha;
    float* textImage = new float[(surfaceWidth * surfaceHeight) * 4];
    int countText = 0;
    SDL_LockSurface(surface);
    Uint8* p = (Uint8*)surface->pixels;
    for (int y = 0; y < surfaceHeight; ++y) {
        for (int x = 0; x < surfaceWidth; ++x) {
            Uint8 pixel = p[(y * surface->w) + x];
            SDL_GetRGBA(pixel, surface->format, &red, &green, &blue, &alpha);
            textImage[countText] = ((float)red / 255.0f);
            ++countText;
            textImage[countText] = ((float)green / 255.0f);
            ++countText;
            textImage[countText] = ((float)blue / 255.0f);
            ++countText;
            textImage[countText] = ((float)alpha / 255.0f);
            ++countText;
        }
    }
    SDL_UnlockSurface(surface);
    SDL_FreeSurface(surface);
    GLuint texture;
    float* image;
    int width = 1000, height = 1000;
    int textX = width - (int)(width / 1.5);
    int textY = height - (int)(height / 1.5);
    setupTexture(texture, shader, width, height, image, textImage, textX, textY, surfaceWidth, surfaceHeight);
    //etc...
Also, here is setupTexture (the important part starts around where I declare the startpos variables):
void setupTexture(GLuint &texture, Shader &shader, int &width, int &height, float* &image, float* text, int textX, int textY, int textW, int textH) {
    glGenTextures(1, &texture);
    image = new float[(width * height) * 3];
    for (int a = 0; a < (width * height) * 3; ) {
        if (a < ((width * height) * 3) / 2) {
            image[a] = 0.5f;
            ++a;
            image[a] = 1.0f;
            ++a;
            image[a] = 0.3f;
            ++a;
        }
        else {
            image[a] = 0.0f;
            ++a;
            image[a] = 0.5f;
            ++a;
            image[a] = 0.7f;
            ++a;
        }
    }
    int startpos1, startpos2;
    for (int y = 0; y < textH; ++y) {
        for (int x = 0; x < textW; ++x) {
            startpos1 = (((y + textY) * width) * 3) + ((x + textX) * 3);
            startpos2 = ((y * textW) * 4) + (x * 4);
            if (text[startpos2 + 3] != 0.0) {
                image[startpos1] = text[startpos2];
                image[startpos1 + 1] = text[startpos2 + 1];
                image[startpos1 + 2] = text[startpos2 + 2];
            }
        }
    }
    glActiveTexture(GL_TEXTURE0);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_FLOAT, image);
    glUniform1i(glGetUniformLocation(shader.shaderProgram, "texSampler"), 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
Your problem is in the way you extract pixels from the surface:
Uint8 pixel = p[(y * surface->w) + x];
You assume that each pixel takes one byte (this could be verified by inspecting surface->format->BytesPerPixel), and that each row is surface->w * 1 bytes long - but it isn't. Instead, each row is surface->pitch bytes long, so your code should be
Uint8 pixel = p[y * surface->pitch + x];
(that still assumes each pixel is 1 byte, but that's beside the point).
It is quite weird that you use floats to represent pixel data; it gains you nothing here aside from much slower loading.
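For reference, a minimal sketch of pitch-aware access that also copes with more than one byte per pixel (the memcpy assumes a little-endian machine, which matches typical desktop targets):
#include <SDL.h>
#include <cstring>

// Read one pixel from a locked surface, honoring pitch and BytesPerPixel.
static Uint32 getPixel(SDL_Surface* surface, int x, int y)
{
    const int bpp = surface->format->BytesPerPixel;
    const Uint8* p = (const Uint8*)surface->pixels + y * surface->pitch + x * bpp;
    Uint32 pixel = 0;
    std::memcpy(&pixel, p, bpp); // little-endian: low-order bytes land first
    return pixel;                // pass to SDL_GetRGBA() together with surface->format
}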

Can load a single texture but having problems loading into an array

I am having trouble loading textures into an array of GLuints, but the problem doesn't occur when loading a single texture. When I debug it, I see that textureIDs isn't assigned any values and holds only 0s even after setTexture(...) is called, while the problem doesn't happen with a single texture loaded into textureID. Either I'm missing something obvious or my understanding of OpenGL or C++ is lacking.
Relevant functions within this context
GLuint textureIDs[3];
GLuint textureID;
Ground Constructor
Ground::Ground(void) :
    textureId(0), groundTextures()
{
    setAllTexture();
}
draw: the Ground function that draws... well, the ground
void Ground::draw()
{
    glPushMatrix();
    //Ground
    glPushMatrix();
    glTranslatef(0, -1, 0); //all buildings and ground
    //ground
    glPushMatrix();
    glTranslatef(-5, 0, -20);
    glScalef(40, 0.2, 40);
    setTexture("Textures/rue2.bmp", textureId, true, true);
    drawRectangle(1.0f);
    glPopMatrix();
    glPopMatrix();
}
setTexture() sets textures:
void Ground::setTexture(const char* textureName, GLuint& textId, bool stretchX, bool stretchZ)
{
    if (textId == 0)
    {
        textId = (GLuint)createTexture(textureName);
    }
    if (textId != 0)
    {
        //enable texture coordinate generation
        glEnable(GL_NORMALIZE);
        glEnable(GL_DEPTH_TEST);
        glEnable(GL_COLOR_MATERIAL);
        glEnable(GL_TEXTURE_2D);
        glEnable(GL_BLEND); //Enable alpha blending
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); //Set the blend function
        glBindTexture(GL_TEXTURE_2D, textId);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, stretchX ? GL_REPEAT : GL_CLAMP);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, stretchZ ? GL_REPEAT : GL_CLAMP);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    }
}
setAllTexture: loads textures into the array of GLuints
int Ground::setAllTexture()
{
    //set up textures for all of them and bind them to ground textures.
    setTexture("Textures/asphalt.bmp", groundTextures[0], false, false);
    glBindTexture(GL_TEXTURE_2D, groundTextures[0]);
    setTexture("Textures/concreteFloor.bmp", groundTextures[1], false, false);
    glBindTexture(GL_TEXTURE_2D, groundTextures[1]);
    setTexture("Textures/dirtyGrass.bmp", groundTextures[2], false, false);
    glBindTexture(GL_TEXTURE_2D, groundTextures[2]);
    //check to see if all textures loaded properly
    for (unsigned int i = 0; i < 3; i++)
    {
        if (groundTextures[i] == 0)
            return false;
    }
    //if you're here, every texture was loaded correctly.
    return true;
}
ImageLoader: loads a 24-bit RGB bitmap
Image* loadBMP(const char* filename) {
    ifstream input;
    input.open(filename, ifstream::binary);
    assert(!input.fail() || !"Could not find file");
    char buffer[2];
    input.read(buffer, 2);
    assert(buffer[0] == 'B' && buffer[1] == 'M' || !"Not a bitmap file");
    input.ignore(8);
    int dataOffset = readInt(input);
    //Read the header
    int headerSize = readInt(input);
    int width;
    int height;
    switch(headerSize) {
        case 40:
            //V3
            width = readInt(input);
            height = readInt(input);
            input.ignore(2);
            assert(readShort(input) == 24 || !"Image is not 24 bits per pixel");
            assert(readShort(input) == 0 || !"Image is compressed");
            break;
        case 12:
            //OS/2 V1
            width = readShort(input);
            height = readShort(input);
            input.ignore(2);
            assert(readShort(input) == 24 || !"Image is not 24 bits per pixel");
            break;
        case 64:
            //OS/2 V2
            assert(!"Can't load OS/2 V2 bitmaps");
            break;
        case 108:
            //Windows V4
            assert(!"Can't load Windows V4 bitmaps");
            break;
        case 124:
            //Windows V5
            assert(!"Can't load Windows V5 bitmaps");
            break;
        default:
            assert(!"Unknown bitmap format");
    }
    //Read the data
    int bytesPerRow = ((width * 3 + 3) / 4) * 4 - (width * 3 % 4);
    int size = bytesPerRow * height;
    auto_array<char> pixels(new char[size]);
    input.seekg(dataOffset, ios_base::beg);
    input.read(pixels.get(), size);
    //Get the data into the right format
    auto_array<char> pixels2(new char[width * height * 3]);
    for(int y = 0; y < height; y++) {
        for(int x = 0; x < width; x++) {
            for(int c = 0; c < 3; c++) {
                pixels2[3 * (width * y + x) + c] =
                    pixels[bytesPerRow * y + 3 * x + (2 - c)];
            }
        }
    }
    input.close();
    return new Image(pixels2.release(), width, height);
}
createTexture: creates a texture and returns a GLuint
unsigned int createTexture(const char* imageName)
{
    Image* image = loadBMP(imageName);
    GLuint textureId = 0;
    glGenTextures(1, &textureId);
    glBindTexture(GL_TEXTURE_2D, textureId);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, image->width, image->height, 0, GL_RGB, GL_UNSIGNED_BYTE, image->pixels);
    delete image;
    return (unsigned int) textureId;
}

C++ TGA parsing: incorrect color/distortion with some resolutions

I'd like to get some help with my issue with .tga file parsing. I have this code, which I have used for a long time:
int fileLength = Input.tellg();
vector<char> tempData;
tempData.resize(fileLength);
Input.seekg(0);
Input.read(&tempData[0], fileLength);
Input.close();

// Load information about the tga, aka the header.
// Seek to the width.
w = byteToUnsignedShort(tempData[12], tempData[13]);
// Seek to the height.
h = byteToUnsignedShort(tempData[14], tempData[15]);
// Seek to the depth.
depth = unsigned(tempData[16]);
// Mode = components per pixel.
md = depth / 8;
// Total bytes = h * w * md.
t = h * w * md;

//Delete allocated data, if need to
clear();
//Allocate new storage
data.resize(t);

// Copy image data.
for (unsigned i = 0, s = 18; s < t + 18; s++, i++)
    data[i] = (unsigned char)tempData[s];

// Mode 3 = RGB, Mode 4 = RGBA
// TGA stores RGB(A) as BGR(A) so
// we need to swap red and blue.
if (md > 2)
{
    char aux;
    for (unsigned i = 0; i < t; i += md)
    {
        aux = data[i];
        data[i] = data[i + 2];
        data[i + 2] = aux;
    }
}
But it keeps failing occasionally for some image resolutions (mostly odd numbers and non-POT resolutions). It results in a distorted image (with diagonal patterns) or wrong colors. The last time I encountered it, a 9x9 24bpp image showed weird colors.
I'm on Windows (so, little-endian), rendering with OpenGL (I take the existence of an alpha channel into account when passing image data to glTexImage2D). I save my images with Photoshop, without setting the RLE flag. This code always reads the correct image resolution and color depth.
Example of an image causing trouble:
http://pastie.org/private/p81wbh5sb6coldspln6mw
After loading the problematic image, this code:
for (unsigned f = 0; f < imageData.w * imageData.h * imageData.depth; f += imageData.depth)
{
    if (f % (imageData.w * imageData.depth) == 0)
        writeLog << endl;
    writeLog << "[" << unsigned(imageData.data[f]) << "," << unsigned(imageData.data[f + 1]) << "," << unsigned(imageData.data[f + 2]) << "]" << flush;
}
outputs this:
[37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40]
[37,40,40][173,166,164][93,90,88][93,90,88][93,90,88][93,90,88][93,90,88][88,85,83][37,40,40]
[37,40,40][228,221,219][221,212,209][221,212,209][221,212,209][221,212,209][221,212,209][140,134,132][37,40,40]
[37,40,40][228,221,219][221,212,209][221,212,209][221,212,209][221,212,209][221,212,209][140,134,132][37,40,40]
[37,40,40][228,221,219][221,212,209][221,212,209][221,212,209][221,212,209][221,212,209][140,134,132][37,40,40]
[37,40,40][228,221,219][221,212,209][221,212,209][221,212,209][221,212,209][221,212,209][140,134,132][37,40,40]
[37,40,40][228,221,219][221,212,209][221,212,209][221,212,209][221,212,209][221,212,209][140,134,132][37,40,40]
[37,40,40][237,232,230][235,229,228][235,229,228][235,229,228][235,229,228][235,229,228][223,214,212][37,40,40]
[37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40]
So I guess it does read the correct data.
That brings us to OpenGL:
glGenTextures(1, &textureObject);
glBindTexture(GL_TEXTURE_2D, textureObject);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
GLenum in_comp_mode, comp_mode;
if (linear) //false for that image
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
else
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
//i don't use 1 or 2 - channel textures, so it's always 24 or 32bpp
if (imageData.depth == 24)
{
    in_tex_mode = GL_RGB8;
    tex_mode = GL_RGB;
}
else
{
    in_tex_mode = GL_RGBA8;
    tex_mode = GL_RGBA;
}
glTexImage2D(GL_TEXTURE_2D, 0, in_tex_mode, imageData.w, imageData.h, 0, tex_mode, GL_UNSIGNED_BYTE, &imageData.data[0]);
glBindTexture(GL_TEXTURE_2D, 0);
Texture compression code is omitted because it's not active for that texture.
This is probably a padding/alignment issue.
You're loading a TGA, which has no row padding, but passing it to GL, which by default expects rows of pixels to be padded to a multiple of 4 bytes.
Possible fixes for this are:
Tell GL how your texture is packed, using (for example) glPixelStorei(GL_UNPACK_ALIGNMENT, 1); (see the sketch after this list)
Change the dimensions of your texture, such that there will be no padding.
Change the loading of your texture, such that the padding is consistent with what GL expects.
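A minimal sketch of the first fix, reusing the names from the question's upload code:
// Tell GL the rows are tightly packed; the default unpack alignment is 4.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, in_tex_mode, imageData.w, imageData.h, 0, tex_mode, GL_UNSIGNED_BYTE, &imageData.data[0]);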
Most image formats save image data aligned (commonly to 4 bytes). Take a 1x1 image, for example: each row has one pixel, so if RGB is used, each row is 3 bytes, and it will be extended to 4 bytes for alignment, because that is what the CPU likes.
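In code, the padded row size works out to the following (width and bytesPerPixel here are illustrative names):
// Row size rounded up to a 4-byte boundary, e.g. a 1-pixel RGB row: 3 -> 4 bytes.
int rowBytes = width * bytesPerPixel;
int paddedRowBytes = (rowBytes + 3) & ~3;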

Android native app OpenGL ES white textures

I'm writing a native app that should only display a little triangle with a texture. But unfortunately, it only ever displays a white triangle.
My code is very simple.
First, loading a TGA image:
static const GLenum gl_format[4] = { GL_LUMINANCE, GL_LUMINANCE_ALPHA, GL_RGB, GL_RGBA };

unsigned int LoadTGATextureFromFile(const char* filename)
{
    unsigned int handle;
    unsigned char hdr[18];
    unsigned char file_id[256 + 1];
    int file;
    file = open(filename, O_RDONLY);
    if (file < 0)
    {
        Log(LOGLEVEL_ERROR, APPNAME, "Error: Failed to open tga file '%s' for reading\n", filename);
        return 0;
    }
    if (read(file, hdr, 18) != 18 || read(file, file_id, hdr[0]) != hdr[0])
    {
        Log(LOGLEVEL_ERROR, APPNAME, "Error: Unexpected EOF while reading header of '%s'\n", filename);
        close(file);
        return 0;
    }
    file_id[hdr[0]] = 0;
    if (hdr[1] != 0 || (hdr[2] != 2 && hdr[2] != 3) || (hdr[16] != 8 && hdr[16] != 16 && hdr[16] != 24 && hdr[16] != 32))
    {
        Log(LOGLEVEL_ERROR, APPNAME, "Error: File '%s' has invalid format\n", filename);
        close(file);
        return 0;
    }
    int width = *(short*)(hdr + 12);
    int height = *(short*)(hdr + 14);
    if ((width & (width - 1)) != 0 || (height & (height - 1)) != 0)
    {
        Log(LOGLEVEL_ERROR, APPNAME, "Error: File '%s' has invalid resolution %dx%d\n", filename, width, height);
        close(file);
        return 0;
    }
    int components = hdr[16] / 8;
    unsigned char* data = new unsigned char[width * height * components];
    if (read(file, data, width * height * components) != width * height * components)
    {
        Log(LOGLEVEL_ERROR, APPNAME, "Error: Unexpected EOF while reading image data of '%s'\n", filename);
        close(file);
        return 0;
    }
    char dummy;
    if (read(file, &dummy, 1) == 1)
        Log(LOGLEVEL_ERROR, APPNAME, "Warning: TGA file '%s' has overlength\n", filename);
    close(file);
    switch (components - 1)
    {
        char tmp;
    case 2:
        for (int i = 0; i < width * height; i += 3)
        {
            tmp = data[i];
            data[i] = data[i + 2];
            data[i + 2] = tmp;
        }
        break;
    case 3:
        for (int i = 0; i < width * height; i += 4)
        {
            tmp = data[i];
            data[i] = data[i + 2];
            data[i + 2] = tmp;
        }
        break;
    default:
        break;
    }
    glGenTextures(1, &handle);
    glBindTexture(GL_TEXTURE_2D, handle);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glBindTexture(GL_TEXTURE_2D, 0);
    delete[] data;
    Log(LOGLEVEL_ERROR, APPNAME, "'%s' successfully loaded [handle = %d, FILE_ID = \"%s\", width = %d, height = %d, depth = %d] :)\n", filename, handle, file_id, width, height, components * 8);
    return handle;
}
Loading the texture:
int texture = LoadTGATextureFromFile("/sdcard/test.tga");
Then to draw:
float tricoords[6] = { 0.0, 0.0, 1.0, 1.0, 0.0, 1.0 };
float texcoords[6] = { 0.0, 0.0, 1.0, 1.0, 0.0, 1.0 };
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, tricoords);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
I know that this code isn't optimized, but it's only for debugging.
The logcat of my app prints:
successfully loaded tga [handle = 1, FILE_ID = "", width = 64, height = 128, depth = 32] :)
But the texture stays white.
Just found the mistake: texture mipmapping was enabled for the loaded texture, but the mipmaps were never created.
Changing this line:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
to this disables mipmaps for the texture:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
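Alternatively, you can keep the mipmapped minification filter and actually create the mipmaps. A sketch, assuming the target supports OpenGL ES 1.1, where GL_GENERATE_MIPMAP asks the driver to build the chain at upload time:
glBindTexture(GL_TEXTURE_2D, handle);
// Must be set before glTexImage2D so the upload also fills all mipmap levels.
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);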

OpenGL textures appear just black

I am trying to apply a texture to a quad, but I only get a black box instead of the texture. I am using DevIL to load images from files and OpenGL does the rest.
Here is what I am doing so far:
The following class abstracts the DevIL representation for an image.
#include "Image.h"
Image::Image()
{
ilGenImages(1, &this->imageId);
}
Image::~Image()
{
ilDeleteImages(1, &this->imageId);
}
ILint Image::getWidth()
{
return this->width;
}
ILint Image::getHeight()
{
return this->height;
}
ILint Image::getDepth()
{
return this->depth;
}
ILint Image::getBpp()
{
return this->bpp;
}
ILint Image::getFormat()
{
return this->format;
}
ILubyte* Image::getData()
{
return ilGetData();
}
bool Image::loadFromFile(wchar_t *filename)
{
// Load the image from file.
ILboolean retval = ilLoadImage(filename);
if (!retval) {
ILenum error;
while ((error = ilGetError()) != IL_NO_ERROR) {
wcout << error << L" " << iluErrorString(error);
}
return false;
}
this->width = ilGetInteger(IL_IMAGE_WIDTH);
this->height = ilGetInteger(IL_IMAGE_HEIGHT);
this->depth = ilGetInteger(IL_IMAGE_DEPTH);
this->bpp = ilGetInteger(IL_IMAGE_BPP);
this->format = ilGetInteger(IL_IMAGE_FORMAT);
return true;
}
bool Image::convert()
{
ILboolean retval = ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);
if (!retval) {
ILenum error;
while ((error = ilGetError()) != IL_NO_ERROR) {
wcout << error << L" " << iluErrorString(error);
}
return false;
}
return true;
}
bool Image::scale(ILint width, ILint height, ILint depth)
{
ILboolean retval = iluScale(width, height, depth);
if (!retval) {
ILenum error;
while ((error = ilGetError()) != IL_NO_ERROR) {
wcout << error << L" " << iluErrorString(error);
}
return false;
}
return true;
}
void Image::bind()
{
ilBindImage(this->imageId);
}
This class abstracts the texture representation for OpenGL.
#include "Texture.h"
Texture::Texture(int width, int height)
{
glGenTextures(1, &this->textureId);
this->width = width;
this->height = height;
}
int Texture::getWidth()
{
return this->width;
}
int Texture::getHeight()
{
return this->height;
}
void Texture::initFilter()
{
// We will use linear interpolation for magnification filter.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// We will use linear interpolation for minifying filter.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
}
void Texture::unpack()
{
glPixelStoref(GL_UNPACK_ALIGNMENT, 1);
}
void Texture::bind()
{
glBindTexture(GL_TEXTURE_2D, this->textureId);
}
Texture::~Texture()
{
glDeleteTextures(1, &this->textureId);
}
The following class contains the texture loading process.
#include "TextureLoader.h"
void TextureLoader::initialize()
{
if (ilGetInteger(IL_VERSION_NUM) < IL_VERSION) {
debug("Wrong DevIL version detected.");
return;
}
ilInit();
ilutRenderer(ILUT_OPENGL);
}
Texture* TextureLoader::createTexture(wchar_t *filename, Color *color)
{
// Generate some space for an image and bind it.
Image *image = new Image();
image->bind();
bool retval = image->loadFromFile(filename);
if (!retval) {
debug("Could not load image from file.");
return 0;
}
retval = image->convert();
if (!retval) {
debug("Could not convert image from RGBA to unsigned byte");
}
int pWidth = getNextPowerOfTwo(image->getWidth());
int pHeight = getNextPowerOfTwo(image->getHeight());
int size = pWidth * pHeight;
retval = image->scale(pWidth, pHeight, image->getDepth());
if (!retval) {
debug("Could not scale image from (w: %i, h: %i) to (w: %i, h: %i) with depth %i.", image->getWidth(), image->getHeight(), pWidth, pHeight, image->getDepth());
return 0;
}
// Generate some space for a texture and bind it.
Texture *texture = new Texture(image->getWidth(), image->getHeight());
texture->bind();
// Set the interpolation filters.
texture->initFilter();
// Unpack pixels.
texture->unpack();
ILubyte *imageData = image->getData();
TextureLoader::setColorKey(imageData, size, new Color(0, 0, 0));
TextureLoader::colorize(imageData, size, new Color(255, 0, 0));
debug("bpp: %i", image->getBpp());
debug("width: %i", image->getWidth());
debug("height: %i", image->getHeight());
debug("format: %i", image->getFormat());
// Map image data to texture data.
glTexImage2D(GL_TEXTURE_2D, 0, image->getBpp(), image->getWidth(), image->getHeight(), 0, image->getFormat(), GL_UNSIGNED_BYTE, imageData);
delete image;
return texture;
}
void TextureLoader::setColorKey(ILubyte *imageData, int size, Color *color)
{
for (int i = 0; i < size * 4; i += 4)
{
if (imageData[i] == color->r && imageData[i + 1] == color->g && imageData[i + 2] == color->b)
{
imageData[i + 3] = 0;
}
}
}
void TextureLoader::colorize(ILubyte *imageData, int size, Color *color)
{
for (int i = 0; i < size * 4; i += 4)
{
int rr = (int(imageData[i]) * int(color->r)) >> 8;
int rg = (int(imageData[i + 1]) * int(color->g)) >> 8;
int rb = (int(imageData[i + 2]) * int(color->b)) >> 8;
int fak = int(imageData[i]) * 5 - 4 * 256 - 138;
if (fak > 0)
{
rr += fak;
rg += fak;
rb += fak;
}
rr = rr < 255 ? rr : 255;
rg = rg < 255 ? rg : 255;
rb = rb < 255 ? rb : 255;
imageData[i] = rr > 0 ? (GLubyte) rr : 1;
imageData[i + 1] = rg > 0 ? (GLubyte) rg : 1;
imageData[i + 2] = rb > 0 ? (GLubyte) rb : 1;
}
}
The last class does the drawing.
#include "Texturizer.h"
void Texturizer::draw(Texture *texture, float x, float y, float angle)
{
// Enable texturing.
glEnable(GL_TEXTURE_2D);
// Bind the texture for drawing.
texture->bind();
// Enable alpha blending.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
int width = texture->getWidth();
int height = texture->getHeight();
// Create centered dimension vectors.
b2Vec2 vertices[4];
vertices[0] = 0.5f * b2Vec2(- width, - height);
vertices[1] = 0.5f * b2Vec2(+ width, - height);
vertices[2] = 0.5f * b2Vec2(+ width, + height);
vertices[3] = 0.5f * b2Vec2(- width, + height);
b2Mat22 matrix = b2Mat22();
matrix.Set(angle);
glBegin(GL_QUADS);
for (int i = 0; i < 4; i++) {
float texCoordX = i == 0 || i == 3 ? 0.0f : 1.0f;
float texCoordY = i < 2 ? 0.0f : 1.0f;
glTexCoord2f(texCoordX, texCoordY);
// Rotate and move vectors.
b2Vec2 vector = b2Mul(matrix, vertices[i]) + meter2pixel(b2Vec2(x, y));
glVertex2f(vector.x, vector.y);
}
glEnd();
glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D);
}
Last but not least, the following method initializes OpenGL (and triggers the initialization of DevIL):
void GraphicsEngine::initialize(int argc, char **argv)
{
    // Initialize the window.
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
    glutInitWindowSize(WIDTH, HEIGHT);
    // Set shading model.
    glShadeModel(GL_SMOOTH);
    // Create the window.
    this->mainWindow = glutCreateWindow(TITLE);
    // Set keyboard methods.
    glutKeyboardFunc(&onKeyDownCallback);
    glutKeyboardUpFunc(&onKeyUpCallback);
    glutSpecialFunc(&onSpecialKeyDownCallback);
    glutSpecialUpFunc(&onSpecialKeyUpCallback);
    // Set mouse callbacks.
    glutMouseFunc(&onMouseButtonCallback);
#ifdef FREEGLUT
    glutMouseWheelFunc(&onMouseWheelCallback);
#endif
    glutMotionFunc(&onMouseMotionCallback);
    glutPassiveMotionFunc(&onMousePassiveMotionCallback);
    // Set display callbacks.
    glutDisplayFunc(&onDrawCallback);
    glutReshapeFunc(&onReshapeCallback);
    // Set a timer to control the frame rate.
    glutTimerFunc(FRAME_PERIOD, onTimerTickCallback, 0);
    // Set clear color.
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    Camera::getInstance()->subscribe(this);
    // Initialize texture loader.
    TextureLoader::initialize();
}
The image I am using already worked for another OpenGL/DevIL project, so it cannot be the source of the problem.
The texture is created inside of every class which represents a world object (it's a game...). The character is called Blobby and here are the most important parts of its implementation:
#include "Blobby.h"
Blobby::Blobby()
{
this->isJumping = false;
this->isRotating = false;
this->isWalking = false;
this->isDucking = false;
this->isStandingUp = false;
this->isOnGround = false;
this->isTouchingWall = false;
this->angle = 0;
this->direction = DIRECTION_UNKNOWN;
this->wallDirection = DIRECTION_UNKNOWN;
// Create a red blobby texture.
this->texture = TextureLoader::createTexture(L"D:/01.bmp", new Color(255, 0, 0));
ContactListener::getInstance()->subscribe(this);
}
void Blobby::draw()
{
GraphicsEngine::drawString(35, 40, "isOnGround = %s", this->isOnGround ? "true" : "false");
GraphicsEngine::drawString(35, 55, "inJumping = %s", this->isJumping ? "true" : "false");
GraphicsEngine::drawString(35, 70, "isRotating = %s", this->isRotating ? "true" : "false");
GraphicsEngine::drawString(35, 85, "isTouchingWall = %s (%i)", this->isTouchingWall ? "true" : "false", this->wallDirection);
Texturizer::draw(this->texture, this->getBody(0)->GetPosition().x, this->getBody(0)->GetPosition().y, this->getBody(0)->GetAngle());
AbstractEntity::draw(); // draws debug information... not important
}
The OpenGL timer callback calls a step method which ends here:
void Simulator::step()
{
    // Update physics.
    this->gameWorld->step();
    b2Vec2 p = Camera::convertWorldToScreen(meter2pixel(this->cameraBlobby->getBody(0)->GetPosition().x), 300.0f);
    if (p.x < 300) {
        Camera::getInstance()->setViewCenter(Camera::convertScreenToWorld(400 - (300 - int(p.x)), 300));
    } else if (p.x > 500) {
        Camera::getInstance()->setViewCenter(Camera::convertScreenToWorld(400 + (int(p.x) - 500), 300));
    }
    for (unsigned int i = 0; i < this->gameWorld->getEntityCount(); i++) {
        IEntity *entity = this->gameWorld->getEntity(i);
        entity->draw();
    }
}
IEntity is a pure virtual class (i.e. an interface), AbstractEntity implements this interface and adds global methods, and Blobby inherits from AbstractEntity and adds routines specific to this world object.
EDIT:
I have uploaded a more recent version of the code (the whole project incl. dependencies) here:
http://upload.visusnet.de/uploads/BlobbyWarriors-rev19.zip (~9.5 MB)
I'm not familiar with DevIL, but... are you providing the right diffuse color for your vertices? If lighting is enabled, are there lights pointing at the quad? Does the camera look at the front side of the quad?
EDIT:
You've got a bug in the code, though not in what you posted here; it's in the version in the archive you linked.
You call glColor3i(255, 255, 255), and it sets the diffuse color to (very nearly) black, as expected. glColor3i does not accept color values in the target (calculation or framebuffer) range; the possible values are scaled to the entire range of the int type. This means the maximum value (1.0 in float) is represented by MAX_INT (2,147,483,647), 0 is 0, and -1.0 is MIN_INT (-2,147,483,648). The 255 you provided represents about 0.000000118, which is very nearly zero.
I believe you intended one of the following (completely equivalent) forms: glColor3f(1.0, 1.0, 1.0), glColor3ub(255, 255, 255), or glColor3i(2147483647, 2147483647, 2147483647).
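To make the scaling concrete, here are the three equivalent calls side by side:
#include <climits>

// All three set full-intensity white; glColor3i scales over the whole int range.
glColor3f(1.0f, 1.0f, 1.0f);
glColor3ub(255, 255, 255);
glColor3i(INT_MAX, INT_MAX, INT_MAX); // 2147483647 maps to 1.0
// By contrast, glColor3i(255, 255, 255) yields 255 / 2147483647, roughly 1.2e-7: black.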
What is in the b2Mat22 matrix? Could it be that multiplying by this matrix causes your vertices to be drawn in clockwise order? I think in that case your square's back would be facing you, and the texture might be on the other (invisible) side.
I had an issue like this a long time ago; back then it was a problem with the texture dimensions not being a power of 2 (128x128, 512x512, etc.). I'm sure they've fixed that by now, but it might be something to try.
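For reference, a helper in the spirit of the getNextPowerOfTwo call used in TextureLoader::createTexture might look like this (a sketch; the project's actual implementation isn't shown):
int getNextPowerOfTwo(int n)
{
    // Smallest power of two that is >= n (assumes n >= 1).
    int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}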