Odd results using glTexImage2D - c++

I've been trying to figure out how glTexImage2D works and am seeing some odd results from some pretty clear-cut code. My code simply draws a rough circle into a 256*256 unsigned array and then sends that data out to become a texture. The displayed texture, however, turns out as variations of red and orange no matter what combinations I select inside my image creation loop:
unsigned* data = new unsigned[256*256];
for (int y = 0; y < 256; ++y)
    for (int x = 0; x < 256; ++x)
        if ((x - 100)*(x - 100) + (y - 156)*(y - 156) < 75*75)
            data[256*y + x] = ((156 << 24) | (256 << 16) | (156 << 8) | (200 << 0));
        else
            data[256*y + x] = 0; // I'd expect this to be transparent and the above to be slightly transparent and green, but it's red somehow.
glBindTexture(GL_TEXTURE_2D, texid);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid*)data);
OpenGL options:
glEnable(GL_TEXTURE_2D);
glShadeModel(GL_SMOOTH);
glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
glClearDepth(1.0f);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
//glBlendFunc(GL_SRC_ALPHA, GL_ONE);
//glEnable(GL_BLEND);
//glDisable(GL_CULL_FACE);
glGenTextures(1, &leaf[0]);
createLeaf(leaf[0]); // createLeaf(GLuint& texid) is posted entirely above
The rest of the code does nothing but display the texture on a single quad in a window. (x64 win7)
Edit: I tried Rickard's solution exactly and I'm still getting a purple circle.

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid*)data);
First the positive things. You use a sized internal format (GL_RGBA8, rather than GL_RGBA). This is very good; keep doing that. You have a clear understanding of the difference between the internal format (GL_RGBA8) and the pixel transfer format (GL_RGBA). This is also good.
The problem is this. You told OpenGL that your data was a stream of unsigned bytes. But it's not a stream of unsigned bytes; it's a stream of unsigned integers. That's how you declared data, that's how you filled data. So why are you lying to OpenGL?
The next problem is with your colors. This is one of your color values:
((156 << 24) | (256 << 16) | (156 << 8) | (200 << 0))
First, 256 is not a valid color. 256 in hex is 0x100, which is two bytes, not one.
The unsigned integer you would get from this is:
0x9D009CC8
If these are intended to be RGBA colors in that order, then the red is 0x9D, green is 0x00, blue is 0x9C, and alpha is 0xC8.
Now, because you're probably working on a little-endian computer, those 4 bytes are stored flipped, like this:
0xC89C009D
When you tell OpenGL to pretend that this is a byte array (which it is not), no little-endian conversion is applied: OpenGL walks the bytes in memory order, so the first byte it sees is 0xC8, and that becomes the red value. And so on.
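To see the flip concretely, here is a minimal sketch (not from the original post) that prints the in-memory byte order of that value. On a little-endian machine it prints C8 9C 00 9D, which is exactly the order OpenGL walks through when told the data is GL_UNSIGNED_BYTE:
#include <cstdint>
#include <cstdio>

int main() {
    uint32_t color = (156u << 24) | (256u << 16) | (156u << 8) | (200u << 0); // 0x9D009CC8
    const unsigned char* bytes = reinterpret_cast<const unsigned char*>(&color);
    for (int i = 0; i < 4; ++i)
        std::printf("%02X ", bytes[i]); // bytes in memory order, not value order
    std::printf("\n");
}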
You need to tell OpenGL what you're actually doing: you're storing four 8-bit unsigned values in a single unsigned 32-bit integer. To do this, use the following:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8, (GLvoid*)data);
The GL_UNSIGNED_INT_8_8_8_8 says that you're feeding OpenGL an array of unsigned 32-bit integers (which you are). The first 8 bits (the most significant) of each 32-bit integer are red, the second 8 are green, the third are blue, and the fourth are alpha.
So, to completely fix your code, you need this:
GLuint* data = new GLuint[256*256]; //Use OpenGL's types
for (int y = 0; y < 256; ++y)
    for (int x = 0; x < 256; ++x)
        if ((x - 100)*(x - 100) + (y - 156)*(y - 156) < 75*75)
            data[256*y + x] = ((0x9C << 24) | (0xFF << 16) | (0x9C << 8) | (0xC8 << 0));
        else
            data[256*y + x] = 0; // I'd expect this to be transparent and the above to be slightly transparent and green, but it's red somehow.
glBindTexture(GL_TEXTURE_2D, texid);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0); //Always set the base and max mipmap levels of a texture.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8, (GLvoid*)data);
// I'd expect this to be transparent and the above to be slightly transparent and green, but it's red somehow.
Alpha doesn't mean transparent; it means nothing at all unless you give it a meaning. Alpha only represents transparency if you use blending and set up a blending mode that causes a low alpha to make things transparent. Otherwise, it means nothing at all.
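For completeness, a minimal sketch of one way to give alpha that meaning with blending (the question already has something close to this commented out; GL_ONE_MINUS_SRC_ALPHA is the usual choice for ordinary transparency):
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // low alpha now means "more see-through"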

If I were to do the same thing as you, I would use an array of unsigned chars (four times the length) instead of an array of unsigned ints.
unsigned char* data = new unsigned char[256*256*4];
for (int y = 0; y < 256; ++y)
    for (int x = 0; x < 256; ++x)
        if ((x - 100)*(x - 100) + (y - 156)*(y - 156) < 75*75){
            data[(256*y + x)*4+0] = 156;
            data[(256*y + x)*4+1] = 255; // 255, not 256: an unsigned char can only hold 0-255
            data[(256*y + x)*4+2] = 156;
            data[(256*y + x)*4+3] = 200;
        }else{
            data[(256*y + x)*4+0] = 0;
            data[(256*y + x)*4+1] = 0;
            data[(256*y + x)*4+2] = 0;
            data[(256*y + x)*4+3] = 0;
        }
glBindTexture(GL_TEXTURE_2D, texid);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid*)data);
But your code looks right to me, and I'm not sure the code above will change anything. If it gives the same result, try changing GL_RGBA8 to just GL_RGBA. Also, what is the variable type of texid? I always call glBindTexture with a GLuint taken by reference (&texid), but if your texid is a pointer to a GLuint (GLuint *texid;) then I guess that part is ok. (Edit: I just realized I'm wrong about that last part; I was thinking of glGenTextures, not glBindTexture.)
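To make the texid point concrete, this is the usual pattern (a sketch of standard OpenGL usage, not code from the question):
GLuint texid;                        // a plain GLuint, not a pointer
glGenTextures(1, &texid);            // glGenTextures needs the address so it can write the new id
glBindTexture(GL_TEXTURE_2D, texid); // glBindTexture takes the id by value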

Related

OpenGL single channel viability to multiple channel

When I rasterize out a font, my code gives me a single channel of visibility for a texture. Currently, I just duplicate this out to 4 different channels and send that as a texture. Now this works, but I want to avoid unnecessary memory allocations and de-allocations on the CPU.
unsigned char *bitmap = new unsigned char[width*height]; // How this is populated is not the point.
bitmap now contains a 2D graphic.
It seems this guy also has the same problem: Opengl: Use single channel texture as alpha channel to display text
I do the same thing as a workaround for now, where I just multiply the array size by 4 and copy the data into it 4 times.
unsigned char* colormap = new unsigned char[width * height * 4];
int offset = 0;
for (int d = 0; d < width * height; d++)
{
    for (int i = 0; i < 4; i++)
    {
        colormap[offset++] = bitmap[d];
    }
}
When I multiply it out, I use:
glTexParameteri(gltype, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(gltype, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(gltype, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, colormap);
And get:
Which is what I want.
When I use only the single channel:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(gltype, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(gltype, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, bitmap);
And get:
It has no transparency, only red, etc., which makes it hard to colorize later.
Instead of doing what I feel are unnecessary allocations on the CPU side, I'd like to tell OpenGL: "Hey, you're getting just one channel. Multiply it out for all 4 color channels."
Is there a command for that?
In your shader, it's trivial enough to just broadcast the r component to all four channels:
vec4 vals = texture(tex, coords).rrrr;
If you don't want to modify your shader (perhaps because you need to use the same shader for 4-channel textures too), then you can apply a texture swizzle mask to the texture:
GLint swizzleMask[] = {GL_RED, GL_RED, GL_RED, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
When anything reads from the green, blue, or alpha components of this texture, it will get the value defined by the texture's red component.
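Putting the two together, a minimal sketch that reuses the width, height, and bitmap names from the question (tex is an assumed texture id; texture swizzling requires OpenGL 3.3 or ARB_texture_swizzle):
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // one-byte pixels mean rows are not 4-byte aligned
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, bitmap);
GLint swizzleMask[] = {GL_RED, GL_RED, GL_RED, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);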

OpenGL Texture corruption

I am rendering a simple pixel buffer in OpenGL. First, I create a quad, then I create a texture. It works correctly if there are no changes to the buffer. When I change my buffer and upload the new buffer into the texture with glTexSubImage2D or glTexImage2D, my texture's top section corrupts, as in the image.
I create my buffer like this.
int length = console->width * console->height * 3;
GLubyte buf[length];
for(int i = 0; i < length; i += 3) {
    buf[i] = 0;
    buf[i + 1] = 0;
    buf[i + 2] = 0;
}
console->buffer = buf;
I create texture like this.
glGenTextures(1, &console->textureID);
glBindTexture(GL_TEXTURE_2D, console->textureID);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, console->width, console->height, 0, GL_RGB, GL_UNSIGNED_BYTE, console->buffer);
tpUseShader(console); // -> calls glUseProgram(console->programID);
glUniform1i(glGetUniformLocation(console->programID, "texture"), 0);
I update texture like this.
glBindTexture(GL_TEXTURE_2D, console->textureID);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, console->width, console->height, GL_RGB, GL_UNSIGNED_BYTE, console->buffer);
For testing, I change my buffer like this in the render function:
if(console->buffer[6] == 255) {
    console->buffer[6] = 0;    // 6 is the second pixel's red value.
    console->buffer[10] = 255; // 10 is the third pixel's green value.
} else {
    console->buffer[6] = 255;
    console->buffer[10] = 0;
}
Then I call tpUseShader and render my quad.
How can i fix this problem?
I changed my console size to 10x10 and ran it again; this time I got the same results, but in the image you can see that the 3rd pixel from the bottom left is dark blue. When I print printf("3rd pixel: %d - %d - %d\n", console->buffer[12], console->buffer[13], console->buffer[14]); I get red: 0, green: 0, blue: 0. That means my buffer is normal.
I got the solution. As pleluron said in the comments on the question, buf was a local array on the stack, so console->buffer was left pointing at memory that was no longer valid after the function returned. I changed buf into a heap allocation stored directly in console->buffer, and it worked! Now my buffer initialization code is like this:
console->buffer = malloc(sizeof(GLubyte) * length);
for(int i = 0; i < length; i += 3) {
    console->buffer[i] = 0;
    console->buffer[i + 1] = 0;
    console->buffer[i + 2] = 0;
}

OpenGL changing color of generated texture

I'm creating a sheet of characters and symbols from a font file, which works fine, except on the generated sheet all the pixels are black (with varying alpha). I would prefer them to be white so I can apply color multiplication and have different colored text. I realize that I can simply invert the color in the fragment shader, but I want to reuse the same shader for all my GUI elements.
I'm following this tutorial: http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_Text_Rendering_02
Here's a snippet:
// Create map texture
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &map);
glBindTexture(GL_TEXTURE_2D, map);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mapWidth, mapHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Draw bitmaps onto map
for (uint i = start; i < end; i++) {
    charInfo curChar = character[i];
    if (FT_Load_Char(face, i, FT_LOAD_RENDER)) {
        cout << "Loading character " << (char)i << " failed!" << endl;
        continue;
    }
    glTexSubImage2D(GL_TEXTURE_2D, 0, curChar.mapX, 0, curChar.width, curChar.height, GL_ALPHA, GL_UNSIGNED_BYTE, glyph->bitmap.buffer);
}
The buffer of each glyph contains values of 0-255 for the alpha of the pixels. My question is, how do I generate white colored pixels instead of black? Is there a setting for this? (I've tried some blend modes but without success)
Since you create the texture with
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mapWidth, mapHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
you can either change the GL_RGBA to GL_RED (or GL_LUMINANCE for pre-3.0 OpenGL) or you can create the RGBA buffer and copy the glyph data there.
I.e., you have
glyph->bitmap.buffer
then you do
unsigned char* glyphRGBA = new unsigned char[curChar.width * curChar.height * 4];
for(int j = 0; j < curChar.height; j++)
    for(int i = 0; i < curChar.width; i++)
    {
        int ofs = j * curChar.width + i; // pixel index
        for(int k = 0; k < 3; k++)
            glyphRGBA[ofs * 4 + k] = YourTextColor[k]; // 4 bytes per pixel in the RGBA buffer
        // set alpha
        glyphRGBA[ofs * 4 + 3] = glyph->bitmap.buffer[ofs];
    }
In the code above, YourTextColor is an unsigned char[3] array with the RGB components of the text color. The glyphRGBA array can then be fed to glTexSubImage2D.
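For example, a sketch of the upload using the same curChar fields the question's loop already uses (curChar.mapX and friends are assumed from that code):
glTexSubImage2D(GL_TEXTURE_2D, 0, curChar.mapX, 0, curChar.width, curChar.height,
                GL_RGBA, GL_UNSIGNED_BYTE, glyphRGBA);
delete [] glyphRGBA; // the texture owns a copy now, so the CPU-side buffer can go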

Freetype glyphs wrap when loaded into openGL

I'm trying to load TrueType fonts through FreeType and display them using OpenGL, and this is what I'm getting:
As you can see it's mostly fine, but if you look closer each individual glyph seems to have some small incongruities around the borders. I noticed that these strange lines are actually carried-over pixels from the other side. Look at the 'T' and 'h' in particular, where you can see small bars corresponding to the opposite side of the texture. This happens with different fonts as well. Here is the code responsible for copying the glyph bitmap buffer into OpenGL:
void load(FT_GlyphSlot glyphSlot, double loadedHeight){
    this->loadedHeight = loadedHeight;
    int width = glyphSlot->bitmap.width;
    int height = glyphSlot->bitmap.rows;
    sizeRatios = Vector2D(width / loadedHeight, height / loadedHeight);
    offsetRatios = Vector2D(glyphSlot->bitmap_left / loadedHeight, glyphSlot->metrics.horiBearingY / loadedHeight);
    advanceRatios = Vector2D((glyphSlot->advance.x >> 6) / loadedHeight, (glyphSlot->advance.y >> 6) / loadedHeight);
    std::cout << width << ", " << height << std::endl;
    GLubyte * textureData = new GLubyte[width * height * 4];
    for(int y = 0; y < height; y++){
        for(int x = 0; x < width; x++){
            for(int i = 0; i < 4; i++){
                textureData[(x + y * width) * 4 + i] = glyphSlot->bitmap.buffer[x + width * y];
            }
        }
    }
    texture.bind();
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    texture.load(GL_TEXTURE_2D, 0, GL_RGBA, glyphSlot->bitmap.width, glyphSlot->bitmap.rows, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
    // glGenerateMipmap(GL_TEXTURE_2D);
    delete [] textureData;
}
The size of the font face is set elsewhere and passed into this method along with the glyph slot that I want to load. The texture object is just a class that creates a texture handle and keeps track of it; load just passes the parameters straight to glTexImage2D().
I've tried shifting the pixels by one using modulus rotation, and it worked vertically but not horizontally. I have also tried loading the texture by passing the buffer directly into load and changing the format to GL_RED as described here, but the problem doesn't go away, so I wonder whether it might even be a flaw in FreeType.
I wonder if there is some basic element of texture loading that I do not understand.
If you need some additional source code to understand what is wrong please ask.
GL_TEXTURE_WRAP_S/GL_TEXTURE_WRAP_T default to GL_REPEAT.
Use GL_CLAMP_TO_EDGE instead.
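A minimal sketch of the fix, applied while the glyph texture is bound (these are standard parameters, not code from the question):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
With GL_CLAMP_TO_EDGE, linear filtering near the borders no longer blends in texels from the opposite side of the texture, which is where the stray bars come from.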

c++ tga parsing incorrect_color/distortion with some resolutions

I'd like to get some help with an issue in my .tga file parsing. I have this code, which I've used for a long time:
int fileLength = Input.tellg();
vector<char> tempData;
tempData.resize(fileLength);
Input.seekg(0);
Input.read(&tempData[0], fileLength);
Input.close();
// Load information about the tga, aka the header.
// Seek to the width.
w = byteToUnsignedShort(tempData[12], tempData[13]);
// Seek to the height.
h = byteToUnsignedShort(tempData[14], tempData[15]);
// Seek to the depth.
depth = unsigned(tempData[16]);
// Mode = components per pixel.
md = depth / 8;
// Total bytes = h * w * md.
t = h * w * md;
// Delete allocated data, if needed.
clear();
// Allocate new storage.
data.resize(t);
// Copy image data.
for(unsigned i = 0, s = 18; s < t + 18; s++, i++)
    data[i] = (unsigned char)tempData[s];
// Mode 3 = RGB, Mode 4 = RGBA
// TGA stores RGB(A) as BGR(A) so
// we need to swap red and blue.
if(md > 2)
{
    char aux;
    for(unsigned i = 0; i < t; i += md)
    {
        aux = data[i];
        data[i] = data[i + 2];
        data[i + 2] = aux;
    }
}
But it keeps failing occasionally for some image resolutions (mostly odd-numbered and non-POT resolutions). It results in a distorted image (with diagonal patterns) or wrong colors. The last time I encountered it was with a 9x9 24bpp image showing weird colors.
I'm on Windows (so little-endian), rendering with OpenGL (I take the presence of an alpha channel into account when passing image data to glTexImage2D). I save my images with Photoshop without setting the RLE flag. This code always reads the correct image resolution and color depth.
Example of an image causing trouble:
http://pastie.org/private/p81wbh5sb6coldspln6mw
After loading the problematic image, this code:
for(unsigned f = 0; f < imageData.w * imageData.h * imageData.depth; f += imageData.depth)
{
    if(f % (imageData.w * imageData.depth) == 0)
        writeLog << endl;
    writeLog << "[" << unsigned(imageData.data[f]) << "," << unsigned(imageData.data[f + 1]) << "," << unsigned(imageData.data[f + 2]) << "]" << flush;
}
outputs this:
[37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40]
[37,40,40][173,166,164][93,90,88][93,90,88][93,90,88][93,90,88][93,90,88][88,85,83][37,40,40]
[37,40,40][228,221,219][221,212,209][221,212,209][221,212,209][221,212,209][221,212,209][140,134,132][37,40,40]
[37,40,40][228,221,219][221,212,209][221,212,209][221,212,209][221,212,209][221,212,209][140,134,132][37,40,40]
[37,40,40][228,221,219][221,212,209][221,212,209][221,212,209][221,212,209][221,212,209][140,134,132][37,40,40]
[37,40,40][228,221,219][221,212,209][221,212,209][221,212,209][221,212,209][221,212,209][140,134,132][37,40,40]
[37,40,40][228,221,219][221,212,209][221,212,209][221,212,209][221,212,209][221,212,209][140,134,132][37,40,40]
[37,40,40][237,232,230][235,229,228][235,229,228][235,229,228][235,229,228][235,229,228][223,214,212][37,40,40]
[37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40][37,40,40]
So I guess it does read the correct data.
That brings us to OpenGL:
glGenTextures(1, &textureObject);
glBindTexture(GL_TEXTURE_2D, textureObject);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
GLenum in_tex_mode, tex_mode;
if(linear) // false for that image
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
else
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// I don't use 1- or 2-channel textures, so it's always 24 or 32bpp
if(imageData.depth == 24)
{
    in_tex_mode = GL_RGB8;
    tex_mode = GL_RGB;
}
else
{
    in_tex_mode = GL_RGBA8;
    tex_mode = GL_RGBA;
}
glTexImage2D(GL_TEXTURE_2D, 0, in_tex_mode, imageData.w, imageData.h, 0, tex_mode, GL_UNSIGNED_BYTE, &imageData.data[0]);
glBindTexture(GL_TEXTURE_2D, 0);
Texture compression code is omitted because it's not active for that texture.
This is probably a padding/alignment issue.
You're loading a TGA, which has no row-padding, but passing it to GL which by default expects rows of pixels to be padded to a multiple of 4 bytes.
Possible fixes for this are:
Tell GL how your texture is packed, using (for example) glPixelStorei(GL_UNPACK_ALIGNMENT, 1), as shown in the sketch after this list.
Change the dimensions of your texture so that there is no padding.
Change the loading of your texture so that the padding is consistent with what GL expects.
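For example, the first fix is a one-liner before the upload (a sketch reusing the names from the question's code):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // TGA rows are tightly packed, so drop the default 4-byte row alignment
glTexImage2D(GL_TEXTURE_2D, 0, in_tex_mode, imageData.w, imageData.h, 0, tex_mode, GL_UNSIGNED_BYTE, &imageData.data[0]);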
Most image formats save image data aligned (commonly to 4 bytes).
For example, with a resolution of 1 row and 1 column, each row has one pixel, so if RGB is used each row has 3 bytes, and it will be extended to 4 bytes for alignment because the CPU prefers it that way.
English is not my native language, so please bear with my grammar.
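To make the alignment concrete, here is a small worked example (not part of the original answer) for the 9x9, 24bpp image from the question:
// Default GL_UNPACK_ALIGNMENT is 4.
int width = 9, bytesPerPixel = 3, alignment = 4;
int tightRow  = width * bytesPerPixel;                                // 27 bytes per row in the TGA
int paddedRow = ((tightRow + alignment - 1) / alignment) * alignment; // 28 bytes per row expected by GL
// Each successive row is therefore read 1 byte further off, which shows up as the diagonal distortion.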