What's wrong with my usage of glBindTexture? - c++

I am rendering a chess board using 2 different textures, one for the black squares and one for the white squares. However, instead of each square getting its own texture, they all take on the last texture that I bound with glBindTexture(GL_TEXTURE_2D, id);.
This is my approach:
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
// square 0, 0 ( front left )
glBindTexture(GL_TEXTURE_2D, textureBlackSquare->texID);
glNormal3f(0, 1, 0);
glTexCoord2f(0, 0); glVertex3f(-8.0, 0.5, 8.0);
glTexCoord2f(1, 0); glVertex3f(-6.0, 0.5, 8.0);
glTexCoord2f(1, 1); glVertex3f(-6.0, 0.5, 6.0);
glTexCoord2f(0, 1); glVertex3f(-8.0, 0.5, 6.0);
glEnd();
glBegin(GL_QUADS);
// square 1, 0
glBindTexture(GL_TEXTURE_2D, textureWhiteSquare->texID);
glTexCoord2f(0, 0); glVertex3f(-6.0, 0.5, 8.0);
glTexCoord2f(1, 0); glVertex3f(-4.0, 0.5, 8.0);
glTexCoord2f(1, 1); glVertex3f(-4.0, 0.5, 6.0);
glTexCoord2f(0, 1); glVertex3f(-6.0, 0.5, 6.0);
glEnd();
When I run this code, both quads have the white texture bound. How do I get each quad to have its own texture?

You cannot call glBindTexture in the middle of glBegin/End. You can only call vertex functions within begin/end.
Also, why not make a single 8x8 checkerboard texture and render one quad to draw the whole board?
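For example, a minimal reordering of the question's own code that fixes this (same vertices and textures, only the bind calls moved outside glBegin/glEnd):
// Bind the black texture, then draw its quad.
glBindTexture(GL_TEXTURE_2D, textureBlackSquare->texID);
glBegin(GL_QUADS);
glNormal3f(0, 1, 0);
glTexCoord2f(0, 0); glVertex3f(-8.0, 0.5, 8.0);
glTexCoord2f(1, 0); glVertex3f(-6.0, 0.5, 8.0);
glTexCoord2f(1, 1); glVertex3f(-6.0, 0.5, 6.0);
glTexCoord2f(0, 1); glVertex3f(-8.0, 0.5, 6.0);
glEnd();
// Switch textures only between glEnd and the next glBegin.
glBindTexture(GL_TEXTURE_2D, textureWhiteSquare->texID);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex3f(-6.0, 0.5, 8.0);
glTexCoord2f(1, 0); glVertex3f(-4.0, 0.5, 8.0);
glTexCoord2f(1, 1); glVertex3f(-4.0, 0.5, 6.0);
glTexCoord2f(0, 1); glVertex3f(-6.0, 0.5, 6.0);
glEnd();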

From the documentation:
GL_INVALID_OPERATION is generated if glBindTexture is executed between
the execution of glBegin and the corresponding execution of glEnd.
You forgot to check for errors, and thus missed that your program is invalid.
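For illustration, a small error-checking sketch; the checkGL helper name is made up, but glGetError and gluErrorString (from <GL/glu.h>) are standard:
// Hypothetical helper: drain and print any pending OpenGL errors. Needs <cstdio> and <GL/glu.h>.
void checkGL(const char* where)
{
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
        fprintf(stderr, "GL error at %s: %s\n", where, (const char*)gluErrorString(err));
}
// Calling checkGL("chess board") after the drawing code above would report the invalid operation.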

You can't bind a texture within a glBegin-glEnd block. Also, you should avoid switching textures where possible, since switching the texture is among the most expensive things you can ask the GPU to do (a texture switch invalidates all texel fetch caches).
Instead, sort your scene objects by the texture they use and group them accordingly: first render all checkerboard quads that use the first texture (say, white), and after that all the quads that use the second texture (black).
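A rough sketch of that grouping, assuming the squares live in some container with their texture id and corner coordinates (the Square struct, squares vector, buildBoard helper and the *TexId names below are invented for illustration):
// Hypothetical per-square data; only the grouping idea matters. Needs <vector>.
struct Square { GLuint tex; float x0, z0, x1, z1; };
std::vector<Square> squares = buildBoard(); // assumed helper that fills in the 64 squares

// Bind each texture exactly once and draw every quad that uses it.
for (GLuint tex : { whiteTexId, blackTexId }) {
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
    for (const Square& s : squares) {
        if (s.tex != tex) continue;
        glNormal3f(0, 1, 0);
        glTexCoord2f(0, 0); glVertex3f(s.x0, 0.5f, s.z1);
        glTexCoord2f(1, 0); glVertex3f(s.x1, 0.5f, s.z1);
        glTexCoord2f(1, 1); glVertex3f(s.x1, 0.5f, s.z0);
        glTexCoord2f(0, 1); glVertex3f(s.x0, 0.5f, s.z0);
    }
    glEnd();
}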

Related

How come we are looking at negative z by default in OpenGL?

I have this code
glColor3f(1, 0, 0);// red quad
glBegin(GL_QUADS);
glVertex3f(-1, 0, -0.1);
glVertex3f(1, 0, -0.1);
glVertex3f(1, 1, -0.1);
glVertex3f(-1, 1, -0.1);
glEnd();
glColor3f(0, 1, 0); //green quad
glBegin(GL_QUADS);
glVertex3f(-1, 0, -0.2);
glVertex3f(1, 0, -0.2);
glVertex3f(1, 1, -0.2);
glVertex3f(-1, 1, -0.2);
glEnd();
glutSwapBuffers();
Using the default projection matrix, the quad that appears is the green one.
If we're looking toward negative z (from 1 to -1), shouldn't the green quad be behind the red quad?
All matrices in compatibility mode OpenGL start off as identity matrices; they don't apply any transformations.
In Normalized Device Coordinates, +Z is into the window; you're looking at +Z. Matrices and shaders can, of course, change this.
Also make sure that depth testing is enabled and you create your window with a depth buffer.
If the red quad is outside the frustum's near and far planes, it will not be visible because it gets clipped away.
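Since the code above uses glutSwapBuffers, a minimal GLUT-based sketch of those two points (depth buffer requested at window creation, depth test enabled, depth buffer cleared every frame) might look like this:
// Request a double-buffered RGBA window that also has a depth buffer.
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
glutCreateWindow("quads");
// Enable depth testing once during setup...
glEnable(GL_DEPTH_TEST);
// ...and clear the depth buffer together with the color buffer each frame.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);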

Low OpenGL alpha values cropping

I've injected a DLL into a game process to make an overlay interface, but the problem is that low alpha values are being "cropped" (not rendered at all).
I've tested several alpha values and it seems to fail if alpha is below 0.3.
To illustrate what happens, the image that I'm trying to render is:
and the game rendering the image is:
What exactly is happening here? Is it the current state of OpenGL? I'm new to the API, and I have no idea why this happens.
More information:
The texture is being created from a buffer with:
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, this->width, this->height, GL_BGRA_EXT, GL_UNSIGNED_BYTE, this->buffer);
I receive this buffer from Awesomium, and the values are right... I've checked the alpha values.
The rendering is done using this function (I've tried calling game's texture rendering function too but the same problem happens):
void DrawTextureExt(int texture, float x, float y, float width, float height)
{
    glPushAttrib(GL_ALL_ATTRIB_BITS);
    {
        glPushMatrix();
        {
            glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            // glBlendFunc(GL_ONE, GL_ONE); << tried this too.. ugly results
            glEnable(GL_TEXTURE_2D);
            glBindTexture(GL_TEXTURE_2D, texture);
            glTranslatef(x, y, 0.0);
            glRotatef(0, 0.0, 0.0, 1.0);
            glTranslatef(-x, -y, 0.0);
            glBegin(GL_QUADS);
            {
                glTexCoord2f(0, 0); glVertex2f(x, y);
                glTexCoord2f(0, 1); glVertex2f(x, y + height);
                glTexCoord2f(1, 1); glVertex2f(x + width, y + height);
                glTexCoord2f(1, 0); glVertex2f(x + width, y);
            }
            glEnd();
            glBindTexture(GL_TEXTURE_2D, 0);
        }
        glPopMatrix();
    }
    glPopAttrib();
}
Sounds to me like the program you're hooking into is using alpha testing at the end of its rendering function (which would make sense) and left alpha testing enabled, which you are then running into. Try disabling the alpha test first thing in your hook: glDisable(GL_ALPHA_TEST).
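A minimal sketch of that fix; since DrawTextureExt above already wraps everything in glPushAttrib(GL_ALL_ATTRIB_BITS)/glPopAttrib, one convenient place is right next to the blend setup:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDisable(GL_ALPHA_TEST); // the game left alpha testing enabled; it was discarding the low-alpha fragments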

OpenGL fill frustum with quad

I'm trying to write an OpenGL/GLSL app that will use GLSL for image processing. I've done some research and come to the conclusion that the right approach is to render to a framebuffer object and then retrieve the image from the gpu. Unfortunately I can't figure out how to set up the frustum and render the quadrilateral so that it fills it properly. Does anyone know how to do this?
You need to render with an orthographic projection matrix:
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0, width, height, 0, 0, 1);
glBegin(GL_QUADS);
glVertex2i(0, 0);
glVertex2i(width, 0);
glVertex2i(width, height);
glVertex2i(0, height);
glEnd();
glPopMatrix(); // restores the projection matrix
glMatrixMode(GL_MODELVIEW);
glPopMatrix(); // restores the modelview matrix
Width and height are the dimensions of your FBO. Of course they could both be 1 if you don't need to address particular parts of your FBO by drawing quads at pixel positions.

fastest way to set every pixel

I have programmed a little raytracer in C++ and want to show the raytraced image in a window.
I tried using a pixel buffer object in OpenGL, then mapping the buffer into memory and manipulating the pixels one by one,
but at fullscreen resolution (1920x1080) I only get 4 fps, without raytracing and without changing the pixel colors,
just the mapping and unmapping!
So I'm basically looking for the fastest way to display a raytraced image in a window.
I'm currently doing it this way:
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pbo);
glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, width * height * 4, 0, GL_STREAM_DRAW_ARB);
if (pixels = (uint*)glMapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY_ARB))
{
//modify pixels
glUnmapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB);
}
else
return;
//copy from pbo to texture
glBindTexture(GL_TEXTURE_2D, pbo_texture);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pbo);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0);
glEnable(GL_TEXTURE_2D);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
//draw image
glColor4f(1.0, 1.0, 1.0, 1.0);
glBindTexture(GL_TEXTURE_2D, pbo_texture);
glBegin(GL_QUADS);
glTexCoord2f(0.0, 1.0); glVertex3f(-1.0, -1.0, 0.0);
glTexCoord2f(1.0, 1.0); glVertex3f( 1.0, -1.0, 0.0);
glTexCoord2f(1.0, 0.0); glVertex3f( 1.0, 1.0, 0.0);
glTexCoord2f(0.0, 0.0); glVertex3f(-1.0, 1.0, 0.0);
glEnd();
glutSwapBuffers();
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
glBindTexture(GL_TEXTURE_2D, 0);
Check the memory traversal if you use loops. You should traverse your buffer in the right order, otherwise you may get a cache miss at each iteration. If you use nested loops, sometimes you only have to switch the x/y iteration order.
Also, don't read data back from graphics memory; it tends to be slow. Only write to the PBO.
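For example, assuming the mapped PBO is laid out row by row (width * height 32-bit texels, as in the code above), keeping y in the outer loop makes the writes sequential; computeColor below is a made-up stand-in for the raytracer:
// pixels points at the mapped PBO: width * height 32-bit texels, stored row by row.
for (int y = 0; y < height; ++y)        // rows in the outer loop
{
    uint* row = pixels + y * width;     // start of one contiguous row
    for (int x = 0; x < width; ++x)     // walk the row sequentially
        row[x] = computeColor(x, y);    // made-up raytracer output, packed as BGRA
}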
It looks like a synchronization issue. I'm not sure you need to map the PBO every frame. Check this link on OpenGL Pixel Buffer Objects (PBO); there's also a workaround for stalls there which could improve things.
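A rough sketch of that workaround with two PBOs used in alternation, so the texture upload reads last frame's buffer while the raytracer writes the new one; pbo[2] and fillPixels are assumptions layered on the code above:
static GLuint pbo[2]; // assumed: two PBOs, created and sized like the single one above
static int frame = 0;
int writeIdx = frame % 2;       // CPU fills this one now
int drawIdx = (frame + 1) % 2;  // GPU uploads last frame's data from this one
frame++;

// Upload the previously filled PBO to the texture (the very first frame uploads an empty buffer).
glBindTexture(GL_TEXTURE_2D, pbo_texture);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pbo[drawIdx]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0);

// Orphan and map the other PBO so the driver never has to stall on it.
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pbo[writeIdx]);
glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, width * height * 4, 0, GL_STREAM_DRAW_ARB);
if (uint* pixels = (uint*)glMapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY_ARB))
{
    fillPixels(pixels); // hypothetical: raytracer writes the new frame here
    glUnmapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB);
}
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);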

Blending in Different OpenGL versions

Below is a piece of code I use in a demo of how blending works:
glDisable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
glBegin(GL_QUADS);
glColor4f(1.0f, 0.0f, 0.0f, 0.5f);
glVertex3i(2, 0, 0);
glVertex3i(2, 6, 0);
glVertex3i(6, 6, 0);
glVertex3i(6, 0, 0);
glEnd();
glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA);
glBegin(GL_QUADS);
glColor4f(0.0, 1.0, 0.0, 0.5f);
glVertex3i(3, 2, -1);
glVertex3i(3, 8, -1);
glVertex3i(8, 8, -1);
glVertex3i(8, 2, -1);
glEnd();
The problem is: it shows what I want on my laptop, meaning the intersection of the two quads is blended, and the part of the green quad lying over the black background is also blended with the background, whose alpha is 0.0. However, on another PC only the red quad appears...
The OpenGL version on the laptop is 2.0, and the one on the PC is above 4.0. I want to know whether the problem is the OpenGL version or not.
BTW: I know the order I should follow when I want to draw a translucent and an opaque object; I only use this demo to show how much trouble there is if we do not follow it...
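One thing worth checking when GL_DST_ALPHA/GL_ONE_MINUS_DST_ALPHA blending behaves differently across machines is whether the default framebuffer actually has alpha bits, since destination-alpha blending needs them and the pixel format can differ per driver and window setup. A small, generic sketch:
GLint alphaBits = 0;
glGetIntegerv(GL_ALPHA_BITS, &alphaBits); // 0 means no destination alpha is stored at all
printf("framebuffer alpha bits: %d\n", alphaBits);
// With GLUT, for example, an alpha channel has to be requested explicitly:
// glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_ALPHA | GLUT_DEPTH);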