I completely don't understand how OpenGL + GLUT works....
PLEASE explain why it does this? =(
I have this simple code:
void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glPushMatrix();
    ChessboardSurrogate.Draw(6,8,50, 0,0); // draw chessboard(6x8) in (0,0,0) edge per cell 50
    glPopMatrix();
    glPushMatrix();
    //glutSolidCube(100); //!!!!!
    glPopMatrix();
    glLoadIdentity();
    gluLookAt( 0.0, 0.0, -testSet::znear*2,
               0.0, 0.0, 0.0,
               0.0, 1.0, 0.0);
    glMultMatrixf(_data->m); // my transformation matrix
    glutSwapBuffers();
}
And I get the expected result (screenshot #1).
Then I uncomment glutSolidCube(100). I even push/pop the current matrix around it, and later overwrite it with the identity matrix anyway.... I thought I would see the same image, just with the cube added... BUT! I see THIS (screenshot #2). What the..... &*^##$% Why?
If I add this code
glRotatef(angleX,1.0,0.0,0.0);
glRotatef(angleY,0.0,1.0,0.0);
before glutSwapBuffers, then I see that the chessboard stays in place ..... (screenshot #3)
This is probably only half the answer, but why on earth are you setting your matrices AFTER drawing? What do you expect that to do?
So:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
gluLookAt( 0.0, 0.0, -testSet::znear*2,
0.0, 0.0, 0.0,
0.0, 1.0, 0.0);
glMultMatrixf(_data->m); // my transformation matrix
glPushMatrix();
ChessboardSurrogate.Draw(6,8,50, 0,0); // draw chessboard(6x8) in (0,0,0) edge per cell 50
glPopMatrix();
glPushMatrix();
//glutSolidCube(100); //!!!!!
glPopMatrix();
glutSwapBuffers();
Moreover, always make sure you're setting the right matrix:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt()...
Lastly, unless your Draw() method changes the modelview matrix, your glPushMatrix/glPopMatrix pair is useless and should be avoided (for performance and portability reasons, since the matrix stack is deprecated).
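For illustration, a push/pop pair only earns its keep when the bracketed code applies its own transform, e.g. (a sketch, not from the original code):
glPushMatrix();
glTranslatef(0.0f, 0.0f, 100.0f); // hypothetical per-object transform
glutSolidCube(100);
glPopMatrix();                    // restores the previous modelview matrix for whatever is drawn next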
I am using C++, OpenGL and GLUT. I am trying to make 5 houses that are rotated properly like this:
However, whenever I try to use glRotatef, I either can't get the proper coordinates or something is off somewhere in my code. Furthermore, I set the background color to white but it's still all black; how come? For now I have the houses set to white to work around this. Here is my code:
#include <GL/glut.h>
typedef int vert2D[2];
void initialize()
{
    glClearColor(1.0, 1.0, 1.0, 0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(10.0, 215.0, 0.0, 250.0);
    glMatrixMode(GL_MODELVIEW);
}
void drawHouse(vert2D* sq, vert2D* tri)
{
    glColor3f(1.0, 1.0, 1.0);
    glBegin(GL_LINE_LOOP);
    glVertex2iv(sq[0]);
    glVertex2iv(sq[1]);
    glVertex2iv(sq[2]);
    glVertex2iv(sq[3]);
    glEnd();
    glBegin(GL_LINE_LOOP);
    glVertex2iv(tri[0]);
    glVertex2iv(tri[1]);
    glVertex2iv(tri[2]);
    glEnd();
}
void render()
{
    vert2D sqPts[4] = { {115, 150}, {115, 125}, {100,125}, {100,150} };
    vert2D triPts[3] = { {120, 150}, {95,150}, {108,160} };
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    glMatrixMode(GL_MODELVIEW);
    drawHouse(sqPts, triPts);
    glPushMatrix();
    glTranslatef(1.0, 0.0, 0.0);
    glRotatef(-10.0, 0.0, 0.0, 1.0);
    drawHouse(sqPts, triPts);
    glTranslatef(1.0, 0.0, 0.0);
    glRotatef(-10.0, 0.0, 0.0, -1.0);
    drawHouse(sqPts, triPts);
    glPopMatrix();
    glPushMatrix();
    glTranslatef(-1.0, 0.0, 0.0);
    glRotatef(10.0, 0.0, 0.0, 1.0);
    drawHouse(sqPts, triPts);
    glTranslatef(-1.0, 0.0, 0.0);
    glRotatef(10.0, 0.0, 0.0, 1.0);
    drawHouse(sqPts, triPts);
    glPopMatrix();
    glFlush();
}
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowPosition(100, 100);
    glutInitWindowSize(640, 480);
    glutCreateWindow("TestMeOut");
    initialize();
    glutDisplayFunc(render);
    glutMainLoop();
}
Let's answer the simpler question of why your background is still black, first:
You simply never glClear(GL_COLOR_BUFFER_BIT) the color buffer. You tell OpenGL "hey, the next time I call glClear with (at least) the GL_COLOR_BUFFER_BIT, I want the color buffer to be cleared to white." but you never actually clear the buffer.
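A minimal sketch of the fix (only the glClear line is new; the rest mirrors your existing render()):
void render()
{
    glClear(GL_COLOR_BUFFER_BIT); // actually clears to the white set by glClearColor(1.0, 1.0, 1.0, 0)
    // ... draw the houses ...
    glFlush();
}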
Now, onto how we can draw the houses with their correct locations and orientations:
You should first start by defining your house's vertices in a sensible local coordinate system/frame that is suitable for transforming them in further steps. Currently, with how you define your house's vertices, it is hard to do any transformations on those (mainly because linear transformations like rotation are always relative to the coordinate system's origin).
So, let's change that. Let's define the origin (0, 0) for your house to be the center of the bottom/base line of the house. And let's also define that your house's quad has a side length of 10 "units":
vert2D sqPts[4] = {
{-5, 0}, // <- bottom left
{ 5, 0}, // <- bottom right
{ 5,10}, // <- top right
{-5,10} // <- top left
};
Now, for the roof of the house, we assume the same coordinate system (with (0, 0) being the center of the house's base/bottom line), so we start at Y=10:
vert2D triPts[3] = {
{-6, 10}, // <- left
{ 6, 10}, // <- right
{ 0, 12} // <- top
};
Next, we need to define where (0, 0) should be in our "world", so to speak. One definition could be: (0, 0) should be the center of the bottom of the viewport/screen and the viewport should have a length of 100 "units". Right now, we don't care about a correct aspect ratio when the viewport's width does not equal the viewport's height. This can be added later.
Starting from the clip space coordinate system, we can transform this clip space into our own "world space" by using these transformations:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glTranslatef(0.0, -1.0, 0.0); // <- move the origin down to the bottom of the viewport
glScalef(1.0 / 50.0, 1.0 / 50.0, 1.0); // <- "scale down" the clip space to cover more space in the viewport
Now, the above part is essentially what gluOrtho2D() does as well, but highlighting the actual coordinate system transformation steps is useful here.
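For reference, the world space set up above with glTranslatef/glScalef should be equivalent to this single call:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(-50.0, 50.0, 0.0, 100.0); // x: -50..50, y: 0..100, origin at the bottom center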
Now that we defined our house's local coordinate system and our "world" coordinate system, we can rotate and translate the world coordinate system such that the houses appear at their correct locations and orientations in our world.
In order to draw 5 houses, we just use a for-loop:
glMatrixMode(GL_MODELVIEW);
for (int i = -2; i <= 2; i++) { // <- 5 steps
glPushMatrix();
glRotatef(i * 20.0, 0.0, 0.0, 1.0);
glTranslatef(0.0, 50.0, 0.0);
drawHouse(sqPts, triPts);
glPopMatrix();
}
So, starting from our world coordinate system, we transform it by rotating the appropriate amount around its origin (0, 0) for the house with index i to have the correct rotation, and then translate the coordinate system by 50 units along its (now rotated) Y axis.
These two transformations will now result in a house to be drawn at the desired location. So, repeat that 5 times in total with differing rotation angles, and you're done.
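Putting it all together, a render() along these lines should produce the five houses (a sketch assuming the vertex arrays and drawHouse() from above; note that drawHouse() would then need a non-white glColor3f, since the background now clears to white):
void render()
{
    vert2D sqPts[4]  = { {-5, 0}, {5, 0}, {5, 10}, {-5, 10} };
    vert2D triPts[3] = { {-6, 10}, {6, 10}, {0, 12} };
    glClear(GL_COLOR_BUFFER_BIT);           // clear to the white set in initialize()
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(-50.0, 50.0, 0.0, 100.0);    // world: x -50..50, y 0..100
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    for (int i = -2; i <= 2; i++) {         // 5 houses
        glPushMatrix();
        glRotatef(i * 20.0, 0.0, 0.0, 1.0); // rotate about the world origin
        glTranslatef(0.0, 50.0, 0.0);       // move 50 units along the rotated Y axis
        drawHouse(sqPts, triPts);
        glPopMatrix();
    }
    glFlush();
}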
I'm trying to move a cube drawn with OpenGL in a QGLWidget. Here is part of the code:
void Hologram::paintGL() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawBox();
}
void Hologram::drawBox() {
    for (int i = 0; i < 6; i++) {
        glBegin(GL_QUADS);
        glNormal3fv(&n[i][0]);
        glVertex3fv(&v[faces[i][0]][0]);
        glVertex3fv(&v[faces[i][1]][0]);
        glVertex3fv(&v[faces[i][2]][0]);
        glVertex3fv(&v[faces[i][3]][0]);
        glEnd();
    }
    qDebug() << "drawing box";
}
void Hologram::initializeGL() {
    /* Enable a single OpenGL light. */
    glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
    glLightfv(GL_LIGHT0, GL_POSITION, light_position);
    glEnable(GL_LIGHT0);
    glEnable(GL_LIGHTING);
    /* Use depth buffering for hidden surface elimination. */
    glEnable(GL_DEPTH_TEST);
    /* Setup the view of the cube. */
    glMatrixMode(GL_PROJECTION);
    gluPerspective( /* field of view in degree */ 40.0,
                    /* aspect ratio */ 1.0,
                    /* Z near */ 1.0, /* Z far */ 10.0);
    glMatrixMode(GL_MODELVIEW);
    // qDebug() << "vx:" << vx << "vy:" << vy << "vz:" << vz;
    gluLookAt(0.0, 0.0, 5.0, // eye position
              0.0, 0.0, 0.0, // center
              0.0, 1.0, 0.); // up vector
    // Adjust cube position
    glTranslatef(0.0, 0.0, -1.0);
}
void Hologram::animate() {
    glTranslatef(0.0, 0.0, -0.1);
    updateGL();
}
I've implemented a timer that periodically calls animate() via a signal & slot connection. Shouldn't it add -0.1 to the z coordinate every time animate() is executed, and therefore move the cube in the z direction? Depending on the value I choose, the cube either doesn't move (for values approx. <-0.3) or I cannot see it at all (values approx. >0.4). First I thought it might either move very fast and disappear, or move so slowly that I couldn't see any changes. But playing around with the z value always ended up in one of the above-mentioned cases... Isn't that strange? What am I doing wrong?
Problem solved: The correct way to do it is
void Hologram::paintGL() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glTranslatef(0.0, 0.0, -dz);
    drawBox();
}
The strange behaviour described above was caused by the object being clipped against the near/far planes of the perspective projection set with gluPerspective()...
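For completeness, the other half could look something like this (a sketch assuming dz is a float member of Hologram; the names and values are illustrative, not from the original post):
void Hologram::animate() {
    dz += 0.1f;               // accumulate the offset instead of translating the current matrix
    if (dz > 9.0f) dz = 1.5f; // keep the cube between the near (1.0) and far (10.0) planes
    updateGL();               // triggers paintGL(), which applies glTranslatef(0.0, 0.0, -dz)
}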
I'm having problems in OpenGL getting my object (a planet) to rotate relative to the current camera rotation. It seems to work at first, but then after rotating a bit, the rotations are no longer correct/relative to the camera.
I'm calculating a delta (difference) in mouseX and mouseY movements on the screen. The rotation is stored in a Vector3D called 'planetRotation'.
Here is my code to calculate the rotation relative to the planetRotation:
Vector3D rotateAmount;
rotateAmount.x = deltaY;
rotateAmount.y = deltaX;
rotateAmount.z = 0.0;
glPushMatrix();
glLoadIdentity();
glRotatef(-planetRotation.z, 0.0, 0.0, 1.0);
glRotatef(-planetRotation.y, 0.0, 1.0, 0.0);
glRotatef(-planetRotation.x, 1.0, 0.0, 0.0);
GLfloat rotMatrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, rotMatrix);
glPopMatrix();
Vector3D transformedRot = vectorMultiplyWithMatrix(rotateAmount, rotMatrix);
planetRotation = vectorAdd(planetRotation, transformedRot);
In theory, what this does is set up a rotation in the 'rotateAmount' variable, then bring it into model space by multiplying this vector with the inverse model transform matrix (rotMatrix).
This transformed rotation is then added to the current rotation.
To render this is the transform being setup:
glPushMatrix();
glRotatef(planetRotation.x, 1.0, 0.0, 0.0);
glRotatef(planetRotation.y, 0.0, 1.0, 0.0);
glRotatef(planetRotation.z, 0.0, 0.0, 1.0);
//render stuff here
glPopMatrix();
The camera sort of wobbles around; the rotation I'm trying to perform doesn't seem to be relative to the current transform.
What am I doing wrong?
GAH! Don't do that:
glPushMatrix();
glLoadIdentity();
glRotatef(-planetRotation.z, 0.0, 0.0, 1.0);
glRotatef(-planetRotation.y, 0.0, 1.0, 0.0);
glRotatef(-planetRotation.x, 1.0, 0.0, 0.0);
GLfloat rotMatrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, rotMatrix);
glPopMatrix();
OpenGL is not a math library. There are proper linear algebra libraries for that kind of job.
As for your problem: a vector is not fit to store a rotation. You need at least a vector (the axis of rotation) plus the angle itself, or better yet a quaternion.
Also, rotations don't add. They are not commutative, whereas addition is a commutative operation. Rotations in fact compose by multiplication.
How to fix your code: rewrite it from scratch using the proper mathematical methods. For this, please read up on the topics of "Rotation matrices" and "Quaternions" (Wikipedia covers both).
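As an illustration only (not the poster's code), here is a sketch of the rotation-matrix approach: keep the accumulated orientation in a single column-major 4x4 matrix, multiply this frame's increment onto it, and hand the result to glMultMatrixf() when rendering. The helper names (orientation, multiplyMatrix, rotationMatrix, applyMouseRotation) are made up for the example:
#include <cmath>
#include <cstring>
static float orientation[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 }; // identity
// c = a * b, all 4x4 column-major (OpenGL convention)
static void multiplyMatrix(const float a[16], const float b[16], float c[16]) {
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k)
                s += a[k * 4 + row] * b[col * 4 + k];
            c[col * 4 + row] = s;
        }
}
// rotation of 'deg' degrees about the unit axis (x, y, z), column-major (same matrix as glRotatef)
static void rotationMatrix(float deg, float x, float y, float z, float m[16]) {
    const float r = deg * 3.14159265f / 180.0f, c = std::cos(r), s = std::sin(r), t = 1.0f - c;
    const float tmp[16] = {
        t*x*x + c,   t*x*y + s*z, t*x*z - s*y, 0,
        t*x*y - s*z, t*y*y + c,   t*y*z + s*x, 0,
        t*x*z + s*y, t*y*z - s*x, t*z*z + c,   0,
        0,           0,           0,           1 };
    std::memcpy(m, tmp, sizeof(tmp));
}
// called with the per-frame mouse deltas; pre-multiplying applies the increment about the fixed world axes
void applyMouseRotation(float deltaX, float deltaY) {
    float rx[16], ry[16], inc[16], result[16];
    rotationMatrix(deltaY, 1.0f, 0.0f, 0.0f, rx);
    rotationMatrix(deltaX, 0.0f, 1.0f, 0.0f, ry);
    multiplyMatrix(ry, rx, inc);              // this frame's increment
    multiplyMatrix(inc, orientation, result); // rotations compose by multiplication
    std::memcpy(orientation, result, sizeof(result));
}
// when rendering, replace the three glRotatef calls with: glMultMatrixf(orientation);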
In the Draw() function of a GLForm (OpenGL with Borland Builder), I first draw an image with an SDK function called capture_card::paintGL().
This function requires the following projection to be set up before it is called:
glViewport(0, 0, (GLsizei)newSize.width, (GLsizei)newSize.height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
On top of this painted image, I have to draw another layer which has already been written for another viewport and another glOrtho projection
(in the MainResizeGL(), called on "onResize" events):
glViewport(-width, -height, width * 2, height * 2);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(double(-width)*viewport_ratio, double(width)*viewport_ratio, double(-height), double(height), 1000.0, 100000.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
(and in the MainDraw(), called by an "OnTimer"):
glLoadIdentity();
glTranslatef(0.0, 0.0, -50000.0);
mpGLDrawScene->DrawScene(); // (this calls a doDrawScene(); I don't understand exactly how it draws: it goes through this->parent, etc.)
glFlush();
SwapBuffers(ghDC);
So I changed MainDraw() to the following:
// viewport and projection needed for the paintGL
glViewport(0, 0, (GLsizei)newSize.width, (GLsizei)newSize.height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
// call of the paintGL
if(capture_button_clicked) capture_card::paintGL();
// content of ResizeGL, to get back to the desired projection and to GL_MODELVIEW matrix mode
glViewport(-width, -height, width * 2, height * 2);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(double(-width)*viewport_ratio, double(width)*viewport_ratio, double(-height), double(height), 1000.0, 100000.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// original drawscene call
glLoadIdentity();
glTranslatef(0.0, 0.0, -50000.0);
mpGLDrawScene->DrawScene(); // (this calls a doDrawScene(); I don't understand exactly how it draws: it goes through this->parent, etc.)
glFlush();
SwapBuffers(ghDC);
The result is that I see the old project's "drawscene" items, but when I click the "capture_button" the paintGL output remains invisible and the drawn items turn into something like an alpha-channel canvas.
I tried adding glScalef(width, height, 1) after the paintGL call and changing the glTranslatef(0,0,50000); the result was that I saw a small patch of pixels with the colors of the paintGL output, but the overlay items then disappeared.
How can I get these two different viewports to superimpose (i.e. the drawscene on top of the paintGL)?
Thanks in advance,
cheers,
Arnaud.
Use the scissor test together with the viewport
glEnable(GL_SCISSOR_TEST);
glScissor(viewport.x, viewport.y, viewport.width, viewport.height);
glViewport(viewport.x, viewport.y, viewport.width, viewport.height);
and clear the depth buffer (only the depth buffer) between the different rendering stages.
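A sketch of how the two passes could be layered in one frame (drawVideoLayer() and drawOverlay() are placeholders for capture_card::paintGL() and mpGLDrawScene->DrawScene(); width, height, viewport_ratio and ghDC are taken from the question):
void MainDraw()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // pass 1: background video image, with its own viewport + projection
    glViewport(0, 0, (GLsizei)width, (GLsizei)height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    drawVideoLayer();                 // e.g. capture_card::paintGL()
    // between passes: clear only the depth buffer so pass 2 draws on top
    glClear(GL_DEPTH_BUFFER_BIT);
    // pass 2: overlay scene, restricted to the window rectangle by the scissor test
    glEnable(GL_SCISSOR_TEST);
    glScissor(0, 0, width, height);
    glViewport(-width, -height, width * 2, height * 2);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(double(-width)*viewport_ratio, double(width)*viewport_ratio,
            double(-height), double(height), 1000.0, 100000.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0, 0.0, -50000.0);
    drawOverlay();                    // e.g. mpGLDrawScene->DrawScene()
    glDisable(GL_SCISSOR_TEST);
    glFlush();
    SwapBuffers(ghDC);
}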
My instructor says they can be used interchangeably to affect the projection matrix. He argues that gluLookAt() is preferable because of its 'relative simplicity due to its ability to define the viewing angle'. Is this true? I've seen code examples using both gluLookAt() and glFrustum(), and I'm wondering why a programmer would mix them.
For example, this appears in the cube.c example of the Red Book:
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0, 1.0, 1.0);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
    // why isn't this a call to glFrustum?
    glScalef(1.0, 2.0, 1.0);
    glutWireCube(1.0);
    glFlush();
}
void reshape(int w, int h)
{
    glViewport(0, 0, (GLsizei) w, (GLsizei) h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-1.0, 1.0, -1.0, 1.0, 1.5, 20.0);
    // why isn't this a call to gluLookAt()?
    glMatrixMode(GL_MODELVIEW);
}
Both glFrustum and gluLookAt perform simple matrix multiplication. Check the man pages for equations for those matrices:
gluLookAt
glFrustum
Both of those can be replaced by a glMultMatrix* call.
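For example, here is a sketch of the equivalent glMultMatrixd call for glFrustum, using the matrix from its man page:
void frustumByHand(double l, double r, double b, double t, double n, double f)
{
    const double m[16] = {               // column-major, as OpenGL expects
        2*n/(r-l),   0,           0,               0,
        0,           2*n/(t-b),   0,               0,
        (r+l)/(r-l), (t+b)/(t-b), -(f+n)/(f-n),   -1,
        0,           0,           -2*f*n/(f-n),    0
    };
    glMultMatrixd(m);                    // should match calling glFrustum(l, r, b, t, n, f)
}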
The most important difference is that glFrustum is used most of the time to establish a perspective projection matrix (used internally by gluPerspective), and gluLookAt is a convenience method for specifying model-view matrices (usually: implementing a camera).
gluLookAt affects the ModelView matrix, whereas glFrustum/gluPerspective affect the Projection matrix. These are two entirely different matrices in the pipeline. gluLookAt is a utility method that amounts to a combination of glRotate and glTranslate calls to move the "camera" to the appropriate point in space and point it at the desired location, whereas glFrustum/gluPerspective define a perspective viewing volume. Check the OpenGL Red Book for more details.
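For instance, in the simple viewing setup of the Red Book example above, the gluLookAt call boils down to nothing more than a translation of the modelview matrix (a small sketch to illustrate the point):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);
// ...which here is equivalent to:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0, 0.0, -5.0); // no rotation needed: the eye already looks down the -Z axis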