Fixing color rendering in OpenGL with glScalef

I'm using glScalef(-1.0, 1.0, 1.0) to flip my OpenGL image axis. However, this completely messes up the rendering of the objects and the colors. I've tried the following things to no avail:
glEnable(GL_NORMALIZE);
glEnable(GL_DEPTH_TEST);
glEnable(GL_COLOR_MATERIAL);
glCullFace(GL_BACK);
If I flip in x and y, glScalef(-1.0, -1.0, 1.0), then the colors are fine, but I don't want to flip both dimensions. Flipping x and z does not fix the colors.
Any ideas?

You changed the handedness of your coordinate space by doing this. Effectively, when it comes time to rasterize your triangles, the front becomes the back because the winding direction is reversed.
You should be aware of how the rasterizer determines the front/back face of a polygon. It uses the post-projected positions of your vertices and tests the winding direction. You flipped one axis in your post-projected coordinate system, which changes its handedness (more formally, its chirality). This produces a mirror image, and the winding of your vertices is backwards.
You can solve this with glFrontFace(GL_CW), or simply wind your vertices the other way by hand.
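As a rough illustration (not from the original answer), a minimal sketch of the glFrontFace approach, assuming the default counter-clockwise winding was used before the flip and that drawObjects() is a hypothetical scene-drawing call:
glPushMatrix();
glScalef(-1.0f, 1.0f, 1.0f); // mirror along x, which reverses handedness
glFrontFace(GL_CW);          // front faces are now the ones wound clockwise
drawObjects();               // hypothetical call that draws the scene
glFrontFace(GL_CCW);         // restore the default for any unmirrored drawing
glPopMatrix();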

Related

OpenGL can't draw some triangles

I drew a triangle {(0,0), (1,0), (0,1)}. Now I want to draw a second one, but for some reason not every triangle gets drawn. For example, the triangle {(1.5654, 1.2), (1.1, 1.4564), (1.5, 1.15)} is drawn normally, but the triangle {(1,1), (1,0), (0,1)} doesn't appear. Here is the code I use to draw:
glBegin(GL_TRIANGLES);
invers_sh.setAttributeValue(b_colorLoc, colors[0]);
glVertex2d(1.5654, 1.2);
invers_sh.setAttributeValue(b_colorLoc, colors[1]);
glVertex2d(1.1, 1.4564);
invers_sh.setAttributeValue(b_colorLoc, colors[2]);
glVertex2d(1.5, 1.15);
glEnd();
The first triangle uses the same code (but with different coordinates). I tried to combine both drawings into one glBegin/glEnd pair, with the same result. What am I doing wrong?
You need to draw all vertices in a clockwise or counter-clockwise order, depending on the front-face setting; you can google it for more details.
As pointed out in the other answer, you need to draw the vertices of all triangles in the same order, clockwise or counter-clockwise (the OpenGL default).
To correct the ordering, just swap the first and last vertex.
You can control this behaviour (called face culling) with OpenGL commands like glCullFace, glFrontFace, and glEnable/glDisable with GL_CULL_FACE.
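For illustration (not from the original answer), a minimal sketch that reorders the failing triangle {(1,1), (1,0), (0,1)} so it winds counter-clockwise like the one that works; the color-attribute calls are kept from the question's code:
glBegin(GL_TRIANGLES);
invers_sh.setAttributeValue(b_colorLoc, colors[0]);
glVertex2d(0.0, 1.0); // was the last vertex; swapped with the first
invers_sh.setAttributeValue(b_colorLoc, colors[1]);
glVertex2d(1.0, 0.0);
invers_sh.setAttributeValue(b_colorLoc, colors[2]);
glVertex2d(1.0, 1.0); // was the first vertex
glEnd();
// Alternatively, if both sides should always be visible:
// glDisable(GL_CULL_FACE);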

Drawing a primitive ( GL_QUADS ) on top of a 2D texture - no quad rendered, texture colour changed

I am trying to draw a 2D scene with a texture as background and then (as the program flows and does computations) draw different primitives on the "canvas". As a test case I wanted to draw a blue quad on top of the background image.
I have looked at several resources and SO questions to try to get the information I need to accomplish the task (e.g. this tutorial for first primitive rendering, the SOIL "example" for texture loading).
My understanding was that the texture would be drawn at Z=0, and the quad as well. The quad would thus "cover" a portion of the texture, i.e. be drawn on top of it, which is what I want. Instead, the result of my display function is my initial texture in black/blue colour, and not my texture (in its original colour) with a blue quad drawn on it. This is the display function code:
void display (void) {
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    // background render
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0f, 1024.0, 512.0, 0.0, 0.0, 1.f); // window size is 1024x512
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);
    glBegin(GL_QUADS);
    glTexCoord2d(0.0, 0.0); glVertex2d(0.0, 0.0);
    glTexCoord2d(1.0, 0.0); glVertex2d(1024.0, 0.0);
    glTexCoord2d(1.0, 1.0); glVertex2d(1024.0, 512.0);
    glTexCoord2d(0.0, 1.0); glVertex2d(0.0, 512.0);
    glEnd(); // here I get the texture properly displayed in window
    glDisable(GL_TEXTURE_2D);
    // foreground render
    glLoadIdentity();
    gluPerspective(60, (GLfloat)winWidth / (GLfloat)winHeight, 1.0, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glColor3f(0.0, 0.0, 1.0);
    glBegin(GL_QUADS);
    glVertex2d(400.0, 100.0);
    glVertex2d(400.0, 500.0);
    glVertex2d(700.0, 100.0);
    glVertex2d(700.0, 500.0);
    glEnd(); // now instead of a rendered blue quad I get my texture coloured in blue
    glutSwapBuffers();
}
I have already tried many modifications, but since I am just beginning with OpenGL and don't yet understand a lot of it, my attempts failed. For example, I tried pushing and popping matrices before and after drawing the quad, clearing the depth buffer, changing parameters in gluPerspective, etc.
How do I have to modify my code so it will render the quad properly on top of the background texture image of my 2D scene? Being a beginner, extra explanations of the modifications (as well as of mistakes in the present code) and of the principles in general will be greatly appreciated.
EDIT - after the answer by Reto Koradi:
I have tried to follow the instructions, and the modified code now looks like this:
// foreground render
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glOrtho(0.0f, 1024.0, 512.0, 0.0, 0.0, 1.f);
glColor3f(0.0, 0.0, 1.0);
glBegin(GL_QUADS); // same from here on
Now I can see the blue "quad", but it is not displayed properly; it looks something like this.
Besides that, the whole scene is flashing really quickly.
What do I have to change in my code so that the quad is displayed properly and the screen doesn't flash?
You are setting up a perspective transformation before rendering the blue quad:
glLoadIdentity();
gluPerspective(60, (GLfloat)winWidth / (GLfloat)winHeight, 1.0, 100.0);
The way gluPerspective() is defined, it sets up a transformation that looks from the origin down the negative z-axis, with the near and far values specifying the distance range that will be visible. With this transformation, z-values from -1.0 to -100.0 will be visible, which does not include your quad at z = 0.0.
If you want to draw your quad in 2D coordinate space, the easiest solution is to not use gluPerspective() at all. Just use a glOrtho() type transformation like you did for your initial drawing.
If you want perspective, you will need a GL_MODELVIEW transformation as well. You can start with a translation in the negative z-direction, within a range of 1.0 to 100.0. You may have to adjust your coordinates for the different coordinate system as well, or use additional transformations that also translate in xy-direction, and possibly scale.
The code also has the coordinates in the wrong order for drawing the blue quad. You either have to change the draw call to GL_TRIANGLE_STRIP (recommended because it at least gets you one step closer to using features that are not deprecated), or swap the order of the last two vertices:
glBegin(GL_QUADS);
glVertex2d(400.0, 100.0);
glVertex2d(400.0, 500.0);
glVertex2d(700.0, 500.0);
glVertex2d(700.0, 100.0);
glEnd();
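Putting the projection fix and the vertex-order fix together, a rough sketch of the whole foreground pass might look like this (an illustration assuming the same 1024x512 window as the background pass, not code from the original answer):
// foreground render in the same 2D coordinate space as the background
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 1024.0, 512.0, 0.0, -1.0, 1.0); // z = 0 lies safely inside the depth range
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glColor3f(0.0, 0.0, 1.0);
glBegin(GL_QUADS);
glVertex2d(400.0, 100.0);
glVertex2d(400.0, 500.0);
glVertex2d(700.0, 500.0); // corners listed in a consistent winding order
glVertex2d(700.0, 100.0);
glEnd();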

What could globally affect texture coordinate values in OpenGL?

I'm writing a plugin for an application called Autodesk MotionBuilder, which has an OpenGL renderer, and I'm trying to render textured geometry into the scene. I have a window with a 3D View embedded in it, and every time my window is rendered, this is (in a nutshell) what happens:
I tell the renderer that I'm about to draw into a region with a given size
I tell the renderer to draw the MotionBuilder scene in that region
I draw some additional stuff into and/or on top of the scene
The challenge here is that I'm inheriting some arbitrary OpenGL state from MotionBuilder's renderer, which varies depending on what it's drawing and what's present in the scene. I've been dealing with this fine so far, but there's one thing I can't figure out. The way that OpenGL interprets my UV coordinates seems to change based on whatever MotionBuilder is doing behind my back.
Here's my rendering code. If there's no textured geometry in the scene, meaning MotionBuilder hasn't yet fiddled with any texture-related attributes, it works as expected.
// Tell MotionBuilder's renderer to draw the scene
RenderScene();
// Clear whatever arbitrary state MotionBuilder left for us
InitializeAttributes(); // includes glPushAttrib(GL_ALL_ATTRIB_BITS)
InitializePerspective(); // projects into the scene / loads matrices
// Enable texturing, bind to our texture, and draw a triangle into the scene
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, mTexture);
glBegin(GL_TRIANGLES);
glColor4f(1.0, 1.0, 1.0, 0.5f);
glTexCoord2f(1.0, 0.0); glVertex3f(128.0, 0.0, 0.0);
glTexCoord2f(0.0, 1.0); glVertex3f( 0.0, 128.0, 0.0);
glTexCoord2f(0.0, 0.0); glVertex3f( 0.0, 0.0, 0.0);
glEnd();
// Clean up so we don't confound MotionBuilder's initial expectations
RestoreState(); // includes glPopAttrib()
Now, if I bring in some meshes with textures, something odd happens. My texture coordinates get scaled way up. Here's a before and after:
[before/after screenshots omitted; source: awforsythe.com]
As you can see from the close-up on the right, when MotionBuilder is asked to render a texture whose file it can't find, it instead loads this small question mark texture and tiles it across the geometry. My only hypothesis is that MotionBuilder is changing some global texture coordinate scalar so that, for example, glTexCoord2f(0.5, 1.0) will instead be interpreted as if it were (50.0, 100.0). Is there such a feature in OpenGL? Any idea what I need to modify in order to preserve my texture coordinates as I've entered them?
Since typing the above and after doing a bit of research, I have discovered that there's a GL_TEXTURE matrix that's used to this effect. Neat! And indeed, when I get the value of this matrix initially, it's the good ol' identity matrix:
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
When I check it again after MotionBuilder fudges up my texture coordinates:
16 0 0 0
0 16 0 0
0 0 1 0
0 0 0 1
How telling! But here's a slight problem: if I try to explicitly set the texture matrix before doing my own drawing, regardless of what MotionBuilder is doing, it seems like my texture coordinates have no effect and it simply samples the lower-left corner of the texture (0.0, 0.0) for every vertex.
Here's the attempted fix, placed after RenderScene in the code posted above:
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
I can verify that the value of GL_TEXTURE_MATRIX is now the identity matrix, but no matter what coordinates I specify in glTexCoord2f, it's always drawn as if the coordinates for each vertex were (0.0, 0.0):
[screenshot omitted; source: awforsythe.com]
Any idea what else could be affecting how OpenGL interprets my texture coordinates?
Aha! These calls:
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
...have to be made after GL_TEXTURE_2D is enabled.
...should be followed up by setting the matrix mode back to GL_MODELVIEW. It turns out, apparently, that some functions I was calling immediately after resetting the texture matrix (glViewport and/or gluPerspective?) affect the current matrix stack. So those calls were affecting the texture matrix, causing my texture coordinates to be transformed in unexpected ways.
I think I've got it now.
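For reference, a rough sketch of the reset sequence described above (my own illustration, reusing RenderScene() and mTexture from the question; it is not the exact plugin code):
RenderScene();              // MotionBuilder may leave a scaled GL_TEXTURE matrix behind
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, mTexture);
glMatrixMode(GL_TEXTURE);
glLoadIdentity();           // undo any texture-coordinate scaling
glMatrixMode(GL_MODELVIEW); // switch back before any viewport/projection setup
// ... set up the projection/modelview matrices and draw the textured triangle as before ...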

OpenGL cube not rendering properly

I have a problem when rendering cubes in OpenGL. I am drawing two cubes: one is a wire cube centered around the origin, while the other is offset from the origin and is solid. I have mapped some keys to rotate the objects by some degrees with respect to the origin, so the whole scene can rotate around the origin.
The problem is that when I render the scene and the wire cube is supposed to be in front of the solid cube, it does not display correctly.
In the image above, the colored cube is supposed to be behind the wire cube, i.e. the green wire cube should be on top.
Also, the cube is not behaving properly.
After I rotate it a little bit around the x-axis (the current horizontal line), the cube has missing faces and is not rendering correctly.
What am I doing wrong?
I have coded the following.
Note that rotateX, rotateY, rotateZ are mapped to keys and are my global rotation variables.
// The Initialize function, called once:
void Init() {
    glEnable(GL_TEXTURE_2D);
    glShadeModel(GL_SMOOTH);              // Enable Smooth Shading
    glClearColor(0.0f, 0.0f, 0.0f, 0.5f); // Black Background
    glClearDepth(1.0f);                   // Depth Buffer Setup
    glEnable(GL_DEPTH_TEST);              // Enables Depth Testing
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // Really Nice Perspective Calculations
    glEnable(GL_LIGHTING);
}
void draw() {
    // The main draw function
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45, 640/480.0, .5, 100);
    glMatrixMode(GL_MODELVIEW); // select the modelview matrix
    glLoadIdentity();
    gluLookAt(0, 0, 5,
              0, 0, 0,
              0, 1, 0);
    glRotatef(rotateX, 1, 0, 0);
    glRotatef(rotateY, 0, 1, 0);
    glRotatef(rotateZ, 0, 0, 1);
    drawScene(); // this just draws the main axis lines
    glutWireCube(1);
    glPopMatrix();
    glPushMatrix();
    glTranslatef(-2, 1, 0);
    drawNiceCube();
    glPopMatrix();
    glutSwapBuffers();
}
The code for drawNiceCube() just uses GL_QUADS, while the wire cube is drawn with the built-in glutWireCube from GLUT.
EDIT:
I have posted the full code at http://pastebin.com/p1kwPjEM, sorry if it is not well documented.
Did you also request a window with a depth buffer?
glutInitDisplayMode( ... | GLUT_DEPTH | ...);
Update:
Did you enable face culling somewhere?
glEnable(GL_CULL_FACE);
This may be caused by the clockwise winding order:
10.090 How does face culling work? Why doesn't it use the surface normal?
OpenGL face culling calculates the signed area of the filled primitive in window coordinate space. The signed area is positive when the window coordinates are in a counter-clockwise order and negative when clockwise. An app can use glFrontFace() to specify the ordering, counter-clockwise or clockwise, to be interpreted as a front-facing or back-facing primitive. An application can specify culling either front or back faces by calling glCullFace(). Finally, face culling must be enabled with a call to glEnable(GL_CULL_FACE).
OpenGL uses your primitive's window space projection to determine face culling for two reasons. To create interesting lighting effects, it's often desirable to specify normals that aren't orthogonal to the surface being approximated. If these normals were used for face culling, it might cause some primitives to be culled erroneously. Also, a dot-product culling scheme could require a matrix inversion, which isn't always possible (i.e., in the case where the matrix is singular), whereas the signed area in DC space is always defined.
However, some OpenGL implementations support the GL_EXT_cull_vertex extension. If this extension is present, an application may specify a homogeneous eye position in object space. Vertices are flagged as culled, based on the dot product of the current normal with a vector from the vertex to the eye. If all vertices of a primitive are culled, the primitive isn't rendered. In many circumstances, using this extension [...]
(The above is quoted from the OpenGL FAQ, question 10.090.)
datenwolf solved my problem. I quote him:
"#JonathanSimbahan: Parts of your code are redundant, but something is missing: You forgot to call Init(); after creating your GLUT window, hence depth testing and all the other state never get enabled. I for one suggest you don't use Init at all and move it's code into the drawing code, where it actually belongs."

Playing Card "3D" rotation in OpenGL - perspective?

(Mac OS X, Cocoa, C/C++, OpenGL, somewhat mathematically challenged.)
Project is a real-time (30fps) poker game, with a 2D view. The OpenGL view is set up for Orthographic projection using glOrtho; the camera is always looking directly down at the cards/table. To date, it's essentially been a 2D image composition engine.
My playing cards are pre-built textures (with alpha), rendered thusly:
glBegin(GL_TRIANGLE_STRIP);
glTexCoord2f(0.0, 0.0);
glVertex3d(startDraw_X, startDraw_Y, 0.0);
glTexCoord2f(1.0, 0.0);
glVertex3d(endDraw_X, startDraw_Y, 0.0);
glTexCoord2f(0.0, 1.0);
glVertex3d(startDraw_X, endDraw_Y, 0.0);
glTexCoord2f(1.0, 1.0);
glVertex3d(endDraw_X, endDraw_Y, 0.0);
glEnd();
This works well, and provides 1:1 pixel mapping, for high image quality.
QUESTION: how can I (cheaply, quickly) "flip" or distort my card using OpenGL? The rotation part isn't hard, but I'm having trouble with the perspective aspect.
Specifically, I have in mind the horizontal flipping action of UIViews in iPhone applications (e.g. Music). The UIView will appear to flip by shrinking horizontally, with the appearance of perspective (the "rear" coordinates of the view will shrink vertically.)
Thanks for any help or tips.
Instead of using glOrtho to set up an orthographic projection, use glFrustum to set up a perspective projection.
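For instance, a rough sketch of a perspective setup plus a rotation around the card's vertical axis (illustrative only; flipAngle and drawCard() are assumed names, and the frustum values would need tuning to your scene):
// perspective projection instead of glOrtho
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-1.0, 1.0, -0.75, 0.75, 1.0, 100.0); // 4:3 aspect, near plane at distance 1

// place the card in front of the camera and spin it around its vertical axis
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslated(0.0, 0.0, -5.0);        // push the card away from the eye
glRotated(flipAngle, 0.0, 1.0, 0.0); // flipAngle animates from 0 to 180 over the flip
drawCard();                          // hypothetical: the textured quad from the question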
Since your game is 2D, I assume it is a top-view camera, right?
In this case, you can set up your projection matrix as a perspective one, put your camera at the center of the table at a decent height (> 1.2 x the table width, depending on the FoV), and not change anything else.
You may have to place your cards (and other game items) a lot closer to the table than they currently are (since in ortho, a delta of 10 meters along Z isn't visible).
Mixing ortho and perspective is technically possible, but I don't quite see the advantage. However, you may have special assets in your game that require special treatment...
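As an illustration of that setup (not from the answer; tableWidth, windowAspect, and the 60-degree FoV are assumed values):
// top-down perspective camera above the center of the table
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, windowAspect, 0.1, 10.0 * tableWidth);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 1.5 * tableWidth,  // eye above the center of the table
          0.0, 0.0, 0.0,               // looking straight down at the center
          0.0, 1.0, 0.0);              // keep +Y as "up" on screen
// cards should sit only slightly above z = 0 so the perspective effect stays subtle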