GLUT texture artefacts - C++

I'm trying to render a texture to a plane using:
unsigned char image[HEIGHT][WIDTH][3];
...
GLuint textureId;
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexImage2D(GL_TEXTURE_2D,
             0,
             GL_RGB,
             WIDTH, HEIGHT,
             0,
             GL_RGB,
             GL_UNSIGNED_BYTE,
             image);
...
draw();
That code ran smoothly, but when I try to do the same with a dynamically allocated array, GLUT renders artefacts. Shortened code:
unsigned char ***image;
image = new unsigned char**[HEIGHT];
for (int i = 0; i < HEIGHT; i++)
{
    image[i] = new unsigned char*[WIDTH];
    for (int j = 0; j < WIDTH; j++)
    {
        image[i][j] = new unsigned char[3];
    }
}
...
GLuint textureId;
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexImage2D(GL_TEXTURE_2D,
             0,
             GL_RGB,
             WIDTH, HEIGHT,
             0,
             GL_RGB,
             GL_UNSIGNED_BYTE,
             image);
...
draw();
Both arrays have identical content (checked bit by bit).
full code:
main.cpp
http://pastebin.com/dzDbNgMa
TEXT_PLANE.hpp (implemented in the header, to ensure inlining):
http://pastebin.com/0HxcAnkW
I'm sorry for the mess in the code, but it's only a quick test bed.
I would be very grateful for any help.

What you're using as your texture is the WIDTH * HEIGHT * 3 bytes of memory starting at image.
For this, you need contiguous data like in the first example.
Your second example is not an array of arrays of arrays; it's an array of pointers to arrays of pointers. These pointers can point anywhere.
(An array is not a pointer, and a pointer is not an array.)
If you need dynamic allocation, use
unsigned char *image = new unsigned char[WIDTH * HEIGHT * 3];
and do your own indexing arithmetic; the components of the pixel at (row, column) are
image[3 * (row * WIDTH + column)]
image[3 * (row * WIDTH + column) + 1]
image[3 * (row * WIDTH + column) + 2]
(or
image[3 * (column * HEIGHT + row)], etc.
Pick one.)
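For illustration, here is a minimal sketch of the contiguous, dynamically allocated version, reusing the WIDTH, HEIGHT and textureId names from the question (the solid red fill is just a placeholder):
unsigned char *image = new unsigned char[WIDTH * HEIGHT * 3];

for (int row = 0; row < HEIGHT; row++)
{
    for (int column = 0; column < WIDTH; column++)
    {
        unsigned char *p = &image[3 * (row * WIDTH + column)];
        p[0] = 255; // R
        p[1] = 0;   // G
        p[2] = 0;   // B
    }
}

GLuint textureId;
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // 3-byte pixels: rows are not 4-byte aligned in general
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, WIDTH, HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, image);

delete[] image; // safe once glTexImage2D has copied the data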

Related

rgba arrays to OpenGL texture

For the gui for my game, I have a custom texture object that stores the rgba data for a texture. Each GUI element registered by my game adds to the final GUI texture, and then that texture is overlayed onto the framebuffer after post-processing.
I'm having trouble converting my Texture object to an openGL texture.
First I create a 1D int array that goes rgbargbargba... etc.
public int[] toIntArray(){
    int[] colors = new int[(width*height)*4];
    int i = 0;
    for(int y = 0; y < height; ++y){
        for(int x = 0; x < width; ++x){
            colors[i]   = r[x][y];
            colors[i+1] = g[x][y];
            colors[i+2] = b[x][y];
            colors[i+3] = a[x][y];
            i += 4;
        }
    }
    return colors;
}
Where r, g, b, and a are jagged int arrays from 0 to 255. Next I create the int buffer and the texture.
id = glGenTextures();
glBindTexture(GL_TEXTURE_2D, id);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
IntBuffer iBuffer = BufferUtils.createIntBuffer(((width * height)*4));
int[] data = toIntArray();
iBuffer.put(data);
iBuffer.rewind();
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_INT, iBuffer);
glBindTexture(GL_TEXTURE_2D, 0);
After that I add a 50x50 red square into the upper left of the texture, and bind the texture to the framebuffer shader and render the fullscreen rect that displays my framebuffer.
frameBuffer.unbind(window.getWidth(), window.getHeight());
postShaderProgram.bind();
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, guiManager.texture()); // this gets the texture id that was created
postShaderProgram.setUniform("gui_texture", 1);
mesh.render();
postShaderProgram.unbind();
And then in my fragment shader, I try displaying the GUI:
#version 330
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D texFramebuffer;
uniform sampler2D gui_texture;
void main()
{
    outColor = texture(gui_texture, Texcoord);
}
But all it outputs is a black window!
I added a red 50x50 rectangle into the upper left corner and verified that it exists, but for some reason it isn't showing in the final output.
That gives me reason to believe that I'm not converting my texture into an opengl texture with glTexImage2D correctly.
Can you see anything I'm doing wrong?
Update 1:
Here I saw them doing a similar thing using a float array, so I tried converting my 0-255 to a 0-1 float array and passing it as the image data like so:
public float[] toFloatArray(){
    float[] colors = new float[(width*height)*4];
    int i = 0;
    for(int y = 0; y < height; ++y){
        for(int x = 0; x < width; ++x){
            colors[i]   = ((r[x][y] * 1.0f) / 255);
            colors[i+1] = ((g[x][y] * 1.0f) / 255);
            colors[i+2] = ((b[x][y] * 1.0f) / 255);
            colors[i+3] = ((a[x][y] * 1.0f) / 255);
            i += 4;
        }
    }
    return colors;
}
...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_FLOAT, toFloatArray());
And it works!
I'm going to leave the question open however as I want to learn why the int buffer wasn't working :)
When you specified GL_UNSIGNED_INT as the type of the "host" data, OpenGL expected 32 bits allocated for each color component. Since OpenGL maps the output colors in the default framebuffer to the range [0.0f, 1.0f], it takes your input color values (in the range [0, 255]) and divides them by the maximum value of an unsigned 32-bit integer (about 4.3 billion) to get the final color displayed on screen. As an exercise, using your original code, set the "clear" color of the screen to white, and you will see that a black rectangle really is being drawn on screen.
You have two options. The first is to convert the color values to the range implied by GL_UNSIGNED_INT, which means multiplying each color value by 2^24 (Math.pow(2, 24)) and trusting that the resulting integer overflow behaves correctly (since Java doesn't have unsigned integer types).
The other, far safer option, is to store each 0-255 value in a byte[] object (do not use char. char is 1 byte in C/C++/OpenGL, but is 2 bytes in Java) and specify the type of the elements as GL_UNSIGNED_BYTE.
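To make the normalization concrete, here is a 1x1 sketch in C (the same glTexImage2D entry point that LWJGL wraps):
// GL normalizes integer pixel data by the type's maximum value, so with
// GL_UNSIGNED_INT a channel of 255 becomes 255 / 4294967295 ~= 0.00000006
// (effectively black), whereas with GL_UNSIGNED_BYTE it becomes 255 / 255 = 1.0.
unsigned char bytePixel[4] = { 255, 0, 0, 255 }; // full red, fully opaque
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, bytePixel);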

OpenGL changing color of generated texture

I'm creating a sheet of characters and symbols from a font file, which works fine, except on the generated sheet all the pixels are black (with varying alpha). I would prefer them to be white so I can apply color multiplication and have different colored text. I realize that I can simply invert the color in the fragment shader, but I want to reuse the same shader for all my GUI elements.
I'm following this tutorial: http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_Text_Rendering_02
Here's a snippet:
// Create map texture
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &map);
glBindTexture(GL_TEXTURE_2D, map);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mapWidth, mapHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Draw bitmaps onto map
for (uint i = start; i < end; i++) {
    charInfo curChar = character[i];
    if (FT_Load_Char(face, i, FT_LOAD_RENDER)) {
        cout << "Loading character " << (char)i << " failed!" << endl;
        continue;
    }
    glTexSubImage2D(GL_TEXTURE_2D, 0, curChar.mapX, 0, curChar.width, curChar.height, GL_ALPHA, GL_UNSIGNED_BYTE, glyph->bitmap.buffer);
}
The buffer of each glyph contains values of 0-255 for the alpha of the pixels. My question is, how do I generate white colored pixels instead of black? Is there a setting for this? (I've tried some blend modes but without success)
Since you create the texture with
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mapWidth, mapHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
you can either change the GL_RGBA to GL_RED (or GL_LUMINANCE for pre-3.0 OpenGL) or you can create the RGBA buffer and copy the glyph data there.
I.e., you have
glyph->bitmap.buffer
then you do
unsigned char* glyphRGBA = new unsigned char[curChar.width * curChar.height * 4];
for(int j = 0; j < curChar.height; j++)
    for(int i = 0; i < curChar.width; i++)
    {
        int ofs = j * curChar.width + i;
        for(int k = 0; k < 3; k++)
            glyphRGBA[4 * ofs + k] = YourTextColor[k];
        // set alpha (the destination index is scaled by 4: one RGBA pixel per source byte)
        glyphRGBA[4 * ofs + 3] = glyph->bitmap.buffer[ofs];
    }
In the code above, YourTextColor is an unsigned char[3] array with the RGB components of the text color. The glyphRGBA array can then be fed to glTexSubImage2D.
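Presumably the expanded buffer then replaces the GL_ALPHA upload from the question, along these lines (a sketch, not tested against the rest of the code):
glTexSubImage2D(GL_TEXTURE_2D, 0, curChar.mapX, 0, curChar.width, curChar.height,
                GL_RGBA, GL_UNSIGNED_BYTE, glyphRGBA); // format now matches the 4-byte data
delete[] glyphRGBA;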

Uploading alternate rows of Pixel Data OpenGL

I am uploading an interlaced image to an OpenGL texture using glTexImage2D which of course uploads whole image. What I need is to upload only alternate rows, so on first texture odd rows and on second even rows.
I don't want to create another copy of the Pixel Data on CPU.
You can set GL_UNPACK_ROW_LENGTH to twice the actual row length. This will effectively skip every second row. If the size of your texture is width x height:
glPixelStorei(GL_UNPACK_ROW_LENGTH, 2 * width);
glBindTexture(GL_TEXTURE_2D, tex1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, width);
glBindTexture(GL_TEXTURE_2D, tex2);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
Instead of setting GL_UNPACK_SKIP_PIXELS to skip the first row, you can also increment the data pointer accordingly.
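A sketch of that pointer-offset variant, assuming the data buffer holds tightly packed 4-byte RGBA pixels:
// GL_UNPACK_ROW_LENGTH is still 2 * width; starting one source row in picks up the other field.
const unsigned char *second_field = (const unsigned char *)data + (size_t)width * 4;
glBindTexture(GL_TEXTURE_2D, tex2);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, second_field);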
There is an ancient SGI extension (GL_SGIX_interlace) for transferring interlaced pixel data, but it is probably not supported on your implementation.
An alternative you might consider is memory mapping a Pixel Buffer Object. You can fill this buffer over two passes and then use it as the source of image data in a call to glTexImage2D (...). You essentially do the de-interlacing yourself, but since this is done by mapping a buffer object's memory you are not making an unnecessary copy of the image on the CPU.
Pseudo code showing how to do this:
GLuint deinterlace_pbo;
glGenBuffers (1, &deinterlace_pbo);

// `GL_PIXEL_UNPACK_BUFFER`, when non-zero, is the source of memory for `glTexImage2D`
glBindBuffer (GL_PIXEL_UNPACK_BUFFER, deinterlace_pbo);

// Reserve memory for the de-interlaced image
glBufferData (GL_PIXEL_UNPACK_BUFFER, sizeof (pixel) * interlaced_rows * width * 2,
              NULL, GL_STATIC_DRAW);

// Returns a pointer to the GL-managed memory where you will write the image
void* pixel_data = glMapBuffer (GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);

// Odd rows first
for (int i = 0; i < interlaced_rows; i++) {
    for (int j = 0; j < width; j++) {
        // Fill in pixel_data for each pixel in row (i*2+1)
    }
}

// Even rows
for (int i = 0; i < interlaced_rows; i++) {
    for (int j = 0; j < width; j++) {
        // Fill in pixel_data for each pixel in row (i*2)
    }
}

glUnmapBuffer (GL_PIXEL_UNPACK_BUFFER);

// This will read the memory in the object bound to `GL_PIXEL_UNPACK_BUFFER`
glTexImage2D (..., NULL);
glBindBuffer (GL_PIXEL_UNPACK_BUFFER, 0);
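For concreteness, the elided upload could look like the call below. Note that while a buffer is bound to GL_PIXEL_UNPACK_BUFFER, the last parameter is a byte offset into that buffer rather than a client pointer (the RGBA format here is an assumption):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, interlaced_rows * 2, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, (const GLvoid *)0); // offset 0 into the bound PBO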

OpenGL fails when resizing buffer

I need to update an array of pixels to the screen every frame. It works initially, however when I try to resize the screen it glitches and eventually throws EXC_BAD_ACCESS 1. I already checked that the buffer is allocated to the correct size before every frame, however it does not seem to affect the result.
#include <stdio.h>
#include <stdlib.h>
#include <GLUT/GLUT.h>
unsigned char *buffer = NULL;
int width = 400, height = 400;
unsigned int screenTexture;
void Display()
{
    for (int y = 0; y < height; y+=4) {
        for (int x = 0; x < width; x++) {
            buffer[(x + y * width) * 3] = 255;
        }
    }
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    // This function results in EXC_BAD_ACCESS 1, although the buffer is always correctly allocated
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, buffer);
    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, width, height, 0, 0, 1);
    glMatrixMode(GL_MODELVIEW);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2i(0, 0);
    glTexCoord2f(1, 0); glVertex2i(width, 0);
    glTexCoord2f(1, 1); glVertex2i(width, height);
    glTexCoord2f(0, 1); glVertex2i(0, height);
    glEnd();
    glFlush();
    glutPostRedisplay();
}

void Resize(int w, int h)
{
    width = w;
    height = h;
    buffer = (unsigned char *)realloc(buffer, sizeof(unsigned char) * width * height * 3);
    if (!buffer) {
        printf("Error Reallocating buffer\n");
        exit(1);
    }
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE);
    glutInitWindowSize(width, height);
    glutCreateWindow("Rasterizer");
    glutDisplayFunc(Display);
    glutReshapeFunc(Resize);
    glGenTextures(1, &screenTexture);
    glBindTexture(GL_TEXTURE_2D, screenTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
    glDisable(GL_DEPTH_TEST);
    buffer = (unsigned char *)malloc(sizeof(unsigned char) * width * height * 3);
    glutMainLoop();
}
After resizing, the screen does not display properly either.
What is causing this problem? The code compiles and runs; you just have to link GLUT and OpenGL.
As #genpfault mentioned, OpenGL reads 4 bytes per pixel instead of your assumption of 3.
Instead of changing GL_UNPACK_ALIGNMENT, you can also change your code to the correct assumption of 4 bytes per pixel via a simple struct:
struct pixel {
    unsigned char r, g, b;
    unsigned char unused;
};
Then, instead of using the magic constant 3, you can use the much clearer sizeof(struct pixel). This makes it easier to read and to convey the intent of the code, and it doesn't result in any extra code (as the structure is "effectively" an array of 4 bytes).
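A sketch of how the struct might be used; note it pairs the 4-byte pixel with a GL_RGBA source format so that glTexImage2D's view of the data matches the in-memory layout (the question's width and height globals are assumed):
#include <vector>

struct pixel {
    unsigned char r, g, b;
    unsigned char unused;
};

// One sizeof(pixel)-sized entry per texel; start with a white canvas.
std::vector<pixel> canvas(width * height, pixel{255, 255, 255, 0});

// 4 bytes per pixel, so the default GL_UNPACK_ALIGNMENT of 4 is already satisfied.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, canvas.data());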
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, buffer);
^^^^^^
GL_UNPACK_ALIGNMENT defaults to 4, not 1. So OpenGL will read 4 bytes for every pixel, not the 3 that you're assuming.
Set GL_UNPACK_ALIGNMENT to 1 using glPixelStorei().
It sounds like you found something that works, but I don't think the problem was properly diagnosed. I believe the biggest issue is in the way you initialize your texture data here:
for (int y = 0; y < height; y+=4) {
    for (int x = 0; x < width; x++) {
        buffer[(x + y * width) * 3] = 255;
    }
}
This only sets data in every 4th row, and then only for every 3rd byte within those rows. To initialize all the data to white, you need to increment the row number (y) by 1 instead of 4, and set all 3 components inside the loop:
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        buffer[(x + y * width) * 3    ] = 255;
        buffer[(x + y * width) * 3 + 1] = 255;
        buffer[(x + y * width) * 3 + 2] = 255;
    }
}
You also need to set GL_UNPACK_ALIGNMENT to 1:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
This controls the row alignment (not the pixel alignment, as suggested in a couple other answers). The default value for GL_UNPACK_ALIGNMENT is 4. But with 3 bytes per pixel in the GL_RGB format you are using, the size of a row is only a multiple of 4 bytes if the number of pixels is a multiple of 4. So for tightly packed rows with 3 bytes/pixel, the value needs to be set to 1.
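To spell out the stride arithmetic with the question's variables (a sketch, not a complete Display function):
// With GL_RGB each pixel is 3 bytes, so a tightly packed row is width * 3 bytes.
// With the default GL_UNPACK_ALIGNMENT of 4, OpenGL assumes each row it reads starts
// on a 4-byte boundary, i.e. an effective stride of ((width * 3 + 3) / 4) * 4 bytes.
// For width = 401 that is 1204 bytes instead of 1203, so every row drifts by one byte
// and the final rows read past the end of the allocation.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // tell GL the rows are tightly packed
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, buffer);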

Renderbuffers larger than window size - OpenGL

I'm trying to draw to a renderbuffer (512x512) that's larger than the screen size (i.e., 320x480).
After doing a glReadPixels, the image looks correct, except once the dimensions of the image exceed that of the screen size- in this example, past 320 horizontal and 480 vertical. What causes this anomaly? Is there something I'm missing?
When the window size is >= the size of the renderbuffer, this code works absolutely fine.
Example image that was rendered to the buffer & glReadPixel'd:
http://img593.imageshack.us/img593/3220/rendertobroke.png
unsigned int canvasFrameBuffer;
bglGenFramebuffers(1, &canvasFrameBuffer);
bglBindFramebuffer(BGL_RENDERBUFFER, canvasFrameBuffer);
// Attach renderbuffer
unsigned int canvasRenderBuffer;
bglGenRenderbuffers(1, &canvasRenderBuffer);
bglBindRenderbuffer(BGL_RENDERBUFFER, canvasRenderBuffer);
bglRenderbufferStorage(BGL_RENDERBUFFER, BGL_RGBA4, width, height);
bglFramebufferRenderbuffer(BGL_FRAMEBUFFER, BGL_COLOR_ATTACHMENT0, BGL_RENDERBUFFER, canvasRenderBuffer);
bglViewport(0, 0, width, height);
Matrix::matrix_t identity, colorMatrix;
Matrix::LoadIdentity(&identity);
Matrix::LoadIdentity(&colorMatrix);
bglClearColor(1.0f, 1.0f, 1.0f, 1.0f);
bglClear(BGL_COLOR_BUFFER_BIT);
Vector::vector_t oldPos, oldScale;
Vector::Copy(&oldPos, &pos);
Vector::Mul(&pos, 0.0f);
Vector::Copy(&oldScale, &scale);
Vector::Load(&scale, 1, 1, 1);
int oldHAlign = halignment;
int oldVAlign = valignment;
halignment = Font::HALIGN_LEFT;
valignment = Font::VALIGN_BOTTOM;
float oldXRatio = vid.xratio;
float oldYRatio = vid.yratio;
vid.xratio = 1;
vid.yratio = 1;
Drawing::Set2D(this->size.x, this->size.y); // glOrtho and setup projection/modelview matrices
Draw(&identity, &colorMatrix);
Vector::Copy(&pos, &oldPos);
Vector::Copy(&scale, &oldScale);
halignment = oldHAlign;
valignment = oldVAlign;
vid.xratio = oldXRatio;
vid.yratio = oldYRatio;
byte *buffer = (byte*)Z_Malloc(width * height * 3, ZT_STATIC);
bglPixelStorei(BGL_PACK_ALIGNMENT, 1);
bglReadPixels(0, 0, width, height, BGL_RGB, BGL_UNSIGNED_BYTE, buffer);
byte *final = RGBtoLuminance(buffer, width, height);
SaveTGA("canvas.tga", final, width, height, 1);
Z_Free(buffer);
// unbind frame buffer
bglBindRenderbuffer(BGL_RENDERBUFFER, 0);
bglBindFramebuffer(BGL_FRAMEBUFFER, 0);
bglDeleteRenderbuffers(1, &canvasRenderBuffer);
bglDeleteFramebuffers(1, &canvasFrameBuffer);
bglViewport(0, 0, vid.width, vid.height);
Here's the answer.
Change this line:
bglBindFramebuffer(BGL_RENDERBUFFER, canvasFrameBuffer);
to this:
bglBindFramebuffer(BGL_FRAMEBUFFER, canvasFrameBuffer);
BGL_RENDERBUFFER is not a valid target for binding a framebuffer object, so the FBO was never actually bound and everything was rendered into the default (window-sized) framebuffer, which is why the output was clipped to the screen dimensions.