I need to replace the gluLookAt() call with glRotatef() and glTranslatef() operations.
I managed to do that for gluLookAt(1, 1, 0, 0, 0, 0, 0, 1, 0); but I just guessed numbers until I got the same picture.
What is the math behind that? How can I take some arbitrary gluLookAt(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ) and change it into rotate and translate calls that give the same picture?
gluLookAt (...) sets up the axes and origin of your view-space.
It is far easier, in my opinion, to consider what each individual column represents than to turn this operation into a series of translations and rotations (even though fundamentally that is what happens).
To set up the orientation of your view-space, gluLookAt (...) basically computes the individual axes. At no point does it ever deal with Euler angles, as glRotatef (...) would require you to do. Given that the view-space gluLookAt (...) produces is orthogonal, computing the third axis from the other two is trivial.
The Z-axis (forward) is computed as the direction from the eye point (eyeX, eyeY, eyeZ) to the center point (centerX, centerY, centerZ)
The Y-axis (up) is given directly by (upX, upY, upZ)
The X-axis (right) is computed as the cross product of the up and forward vectors
These three axes are then stored in columns 1-3 of the matrix, X, Y, Z.
Last, since the viewing transform has position in addition to rotation, the camera's eye point is stored in the 4th column of your matrix.
Consider the diagram from Song Ho Ahn (안성호), OpenGL Transformation.
Honestly, it is more convenient just to set up the matrix yourself instead of dealing with Euler angles and glRotatef (...).
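A minimal sketch of doing that by hand (column-major float[16], as the fixed-function pipeline expects; the small vector helpers are only there to keep the example self-contained). It computes the axes exactly as described above and then builds the viewing matrix that gluLookAt (...) would multiply onto the current one; since the viewing transform is the inverse of the camera's own placement, the axes end up in the rows and the eye position is folded into the translation:

#include <cmath>

struct vec3 { float x, y, z; };

static vec3  sub(vec3 a, vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(vec3 a, vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static vec3  cross(vec3 a, vec3 b) { return { a.y * b.z - a.z * b.y,
                                              a.z * b.x - a.x * b.z,
                                              a.x * b.y - a.y * b.x }; }
static vec3  normalize(vec3 v)     { float l = std::sqrt(dot(v, v));
                                     return { v.x / l, v.y / l, v.z / l }; }

// Fills m[16] with the same matrix gluLookAt(eye, center, up) would apply.
void lookAt(vec3 eye, vec3 center, vec3 up, float m[16])
{
    vec3 f = normalize(sub(center, eye));   // forward
    vec3 s = normalize(cross(f, up));       // right
    vec3 u = cross(s, f);                   // corrected up

    m[0] = s.x;  m[4] = s.y;  m[8]  = s.z;  m[12] = -dot(s, eye);
    m[1] = u.x;  m[5] = u.y;  m[9]  = u.z;  m[13] = -dot(u, eye);
    m[2] = -f.x; m[6] = -f.y; m[10] = -f.z; m[14] =  dot(f, eye);
    m[3] = 0;    m[7] = 0;    m[11] = 0;    m[15] = 1;
}

// Usage: float m[16]; lookAt(eye, center, up, m); glMultMatrixf(m);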
I agree with Andon that finding the whole matrix is much easier. But if you still need to find the decomposition to glRotate and glTranslate, then you could follow these steps, at least for this specific setting.
We need to find a transformation that would convert the representation of a point in the xyz coordinate system into its representation in the x'y'z' coordinate system (different from transforming a point). This is the same as aligning the orange system with the blue one.
We can say two things about this transformation:
The y'z' plane (the plane containing the up vector and the z' axis) is transformed into the yz plane.
The z' axis is transformed into the z axis.
To make the transformation we do the following:
Translate the origin E to point C.
Rotate around the y' axis 90° clockwise to meet condition (2).
Rotate around the z=z' axis 45° clockwise to meet condition (1).
This can be summed up as:
glRotatef(45, 0, 0, 1);
glRotatef(90, 0, 1, 0);
glTranslatef(-1, -1, 0);
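One way to sanity-check a decomposition like this is to read both matrices back from OpenGL and compare them element by element. A rough sketch (it needs a current GL context, and it only passes if the decomposition is exactly right):

#include <cmath>
#include <cstdio>
#include <GL/glu.h>

// Prints any elements where the hand-made decomposition differs from gluLookAt.
void compareWithGluLookAt()
{
    GLfloat a[16], b[16];
    glMatrixMode(GL_MODELVIEW);

    glLoadIdentity();
    gluLookAt(1, 1, 0, 0, 0, 0, 0, 1, 0);
    glGetFloatv(GL_MODELVIEW_MATRIX, a);

    glLoadIdentity();
    glRotatef(45, 0, 0, 1);
    glRotatef(90, 0, 1, 0);
    glTranslatef(-1, -1, 0);
    glGetFloatv(GL_MODELVIEW_MATRIX, b);

    for (int i = 0; i < 16; ++i)
        if (std::fabs(a[i] - b[i]) > 1e-4f)
            std::printf("element %d differs: %f vs %f\n", i, a[i], b[i]);
}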
I have a generic OpenGL 3D world, centered on (0,0,0) at start. I implemented a standard trackball, based on this code. This implements rotations as small increments/transformations to the current modelview matrix,
// We need to apply the rotation as the last transformation.
// 1. Get the current matrix and save it.
// 2. Set the matrix to the identity matrix (clear it).
// 3. Apply the trackball rotation.
// 4. Pre-multiply it by the saved matrix.
glGetFloatv(GL_MODELVIEW_MATRIX, (GLfloat *)objectXform);
glLoadIdentity();
glRotatef(rot_angle, rotAxis.x, rotAxis.y, rotAxis.z);
glMultMatrixf((GLfloat *)objectXform);
This part works perfectly. But then I wanted to implement translations, and I am doing this also as small increments to the modelview matrix,
glTranslatef(-dx, -dy, 0.f);
This also works as expected (no matter how the world is rotated, the translation follows the mouse, i.e., the model moves along under the mouse).
The problem comes when I try to rotate after the translation: I want the rotation to be around the world center, but that will not happen after user translations. I tried to store the absolute translation and compensate for it, but obviously it does not work. I did it as follows:
// Translation part, store absolute translation
m_mouseInfo.m_fTotalTranslationX -= dx;
m_mouseInfo.m_fTotalTranslationY -= dy;
glTranslatef(-dx, -dy, 0.f);
...
// Rotation, try to apply the rotation around (0,0,0)
glGetFloatv(GL_MODELVIEW_MATRIX, (GLfloat *)objectXform);
glLoadIdentity();
// Try to compensate for the translation and do the rotation around (0,0,0), but this won't work
glTranslatef(m_mouseInfo.m_fTotalTranslationX, m_mouseInfo.m_fTotalTranslationY, 0.f);
glRotatef(rot_angle, rotAxis.x, rotAxis.y, rotAxis.z);
glTranslatef(-m_mouseInfo.m_fTotalTranslationX, -m_mouseInfo.m_fTotalTranslationY, 0.f);
glMultMatrixf((GLfloat *)objectXform);
How can I store the absolute translation to compensate for it when I apply the rotation and therefore rotate the scene around the origin?
Or, in other words, how can I just rotate the world around the origin when I have cumulative transformations?
To rotate around a point (x, y), first translate by (x, y), then rotate, then translate by -(x, y) (that is the call order; OpenGL applies the last-specified transform to the vertices first).
Now, if your world has been transformed by M (some matrix), then the origin of the world before that transformation is located at M^-1 (0, 0, 0).
Suppose your world transformation from the original is M, and you want to perform some rotation R, but that rotation should be around the original origin, while the rotation matrix R is expressed as a rotation around the point (0, 0, 0) (as is the convention).
Then R' = M R M^-1 will generate a new matrix R' that consists of rotating by R around the original origin. Then M' = R' M is the matrix that represents starting with nothing, then doing M, then doing R around the original origin.
If you are doing cumulative transformations on some model, simply keep track of the product of said transformations alongside modifying the scene.
Alternatively, store the original scene, and instead of doing cumulative transformations on it, always apply M to get the current scene.
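A rough sketch of that bookkeeping, using GLM as an assumed helper library (the names worldXform, applyTrackballRotation, etc. are made up): the cumulative transform M is kept in a variable, each trackball rotation is applied as R' = M R M^-1 followed by M' = R' M, and the result is uploaded to the fixed-function modelview stack.

#include <GL/gl.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

glm::mat4 worldXform(1.0f);   // cumulative M, starts as the identity

void applyTrackballRotation(float angleDeg, const glm::vec3& axis)
{
    // R is expressed as a rotation around (0, 0, 0)
    glm::mat4 R = glm::rotate(glm::mat4(1.0f), glm::radians(angleDeg), axis);
    // R' = M R M^-1, then M' = R' M: a rotation around the original world origin
    glm::mat4 Rprime = worldXform * R * glm::inverse(worldXform);
    worldXform = Rprime * worldXform;
}

void applyPan(float dx, float dy)
{
    // one possible pan: screen-aligned, applied on top of everything accumulated so far
    worldXform = glm::translate(glm::mat4(1.0f), glm::vec3(-dx, -dy, 0.0f)) * worldXform;
}

void uploadModelview()
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // any fixed viewing transform (e.g. gluLookAt) would go here, before the model transform
    glMultMatrixf(glm::value_ptr(worldXform));
}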
I want to rotate a 3d object in a roll pitch yaw fashion.
I store the pose of the object in a matrix, and perform roll, pitch and yaw rotations multiplying the pose matrix by one of the standard rotation matrices about x, y, or z axis.
Then I display it using these lines of code.
void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    gluLookAt(-5, 5, 2, 0, 0, 0, 0, 1, 0);

    glPushMatrix();
    ship.Draw();
    glPopMatrix();

    glutSwapBuffers();
}
void Spacecraft::Draw()
{
    glMultMatrixf(pose);
    model.Draw();
    return;
}
It works, but instead of rotating around its own axes the object rotates around the world axes...
Where am I wrong?
Rotations are always about the origin of the local coordinate system. So you first have to translate the coordinate system so that the center of the desired rotation happens to be at the local coordinate system origin. It's hard to tell where and which transformations to apply without seeing the rest of your code.
The rest of the drawing code, which is not shown, also includes the contents of the model.Draw() function. Most likely you've got a translation somewhere in there; that translation moves the object away from the coordinate system center. I think what you're missing (not about OpenGL, but about linear algebra in general) is that matrix multiplication is non-commutative, i.e. the order of operations matters, and that for column-major matrices the order of operations is last to first (right to left when written in normal notation). So the rotation must be the last thing you multiply onto the matrix, not the first.
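A minimal sketch of that ordering point, using GLM as an assumed matrix library (the question's own pose matrix type would behave the same way): with column vectors a vertex is transformed as v' = pose * v, so the right-hand factor is the one applied to the vertex first.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

void yaw(glm::mat4& pose, float angleDeg)
{
    glm::mat4 R = glm::rotate(glm::mat4(1.0f), glm::radians(angleDeg),
                              glm::vec3(0.0f, 1.0f, 0.0f));

    // pose = R * pose;   // R applied last: rotates about the fixed world y axis
    pose = pose * R;      // R applied first: rotates about the ship's own y axis
}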
I want to rotate my model matrix in the x, y, and z directions, but it rotates in an unexpected way.
I use Qt.
QMatrix4x4 mMatrix;
mMatrix.setToIdentity();
mMatrix.rotate(yAngle, QVector3D(0, 1, 0));
mMatrix.rotate(zAngle, QVector3D(0, 0, 1));
mMatrix.translate(cube->getPosition());
After the first rotation, the following rotations are performed around the basis (axes) of the new model matrix, while I want them to be performed around the origin.
I drew a little sketch so my problem might be clearer (black shows how it is right now, the green arrow shows how I want it to be):
I think you will get the desired result if you simply reverse the order of calls you make on the QMatrix4x4 class. To combine the two rotations:
QMatrix4x4 mMatrix;
mMatrix.setToIdentity();
mMatrix.rotate(zAngle, QVector3D(0, 0, 1));
mMatrix.rotate(yAngle, QVector3D(0, 1, 0));
The documentation is kind of lacking, but from looking at the QMatrix4x4 source code, I'm getting the impression that it applies matrix operations the way it's more or less standard with matrix libraries that are used for OpenGL, and the way the OpenGL fixed pipeline used to work.
This means that when you combine matrices, the new matrix is multiplied from the right. As a result, when the combined matrix is applied to vectors, the last specified sub-transformation is applied to the vectors first. Or putting it differently, you specify transformations in the reverse of the order you want them applied.
In your example, if you want to rotate around the y-axis first, then around the z-axis, you need to specify the z-rotation first, then the y-rotation.
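To make that concrete, here is a small sketch: the two rotate() calls above build mMatrix = Rz * Ry, and since a vertex is transformed as mMatrix * v, the y-rotation reaches the vertex first, then the z-rotation.

QMatrix4x4 Ry, Rz;
Ry.rotate(yAngle, QVector3D(0, 1, 0));
Rz.rotate(zAngle, QVector3D(0, 0, 1));

QMatrix4x4 combined = Rz * Ry;                    // same effect as the two calls above
QVector3D p = combined.map(QVector3D(1, 0, 0));   // Ry applied first, then Rz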
I figured it out by myself: each object needs to have its own rotation matrix / quaternion.
A new rotation around the world origin is done by creating a rotation matrix with the wanted rotation and right-multiplying it by the existing rotation matrix of the object.
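A hypothetical sketch of that approach with Qt (the member names cube->rotation and cube->getPosition() are assumptions): each cube stores its accumulated rotation separately, and each new world-origin rotation is multiplied in from the left.

QMatrix4x4 newRot;
newRot.rotate(zAngle, QVector3D(0, 0, 1));      // the new rotation, about a world axis
cube->rotation = newRot * cube->rotation;       // pre-multiply: rotates about the world origin

QMatrix4x4 mMatrix = cube->rotation;            // model matrix used for drawing
mMatrix.translate(cube->getPosition());         // then place the cube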
Well, if you want the green arrow thing to happen, then it means that you want a rotation around the x-axis, 90 degrees in the negative direction (or 270 degrees in the positive direction).
To make things simple, think of it like this:
Which of the axes there remains the same? x, right? Since x doesn't change, it looks like you're rotating around the x, and you are.
Now, point your thumb towards the direction that positive x is directed at and casually close the rest of your fingers like a cylinder. The direction that those rest of your fingers curling at is the direction of the rotation.
Since you want a rotation of 90 degrees in the opposite direction that those 4 fingers of yours are curling at, or 270 in the same direction... yeah.
(1, 0, 0) would be the direction vector pointing towards the positive x. While I don't know so much about the functions you're using, my guess is that the second call should be:
mMatrix.rotate(270, QVector3D(1, 0, 0));
Instead, or optionally -90 if it supports negative values.
I want to test whether any given point in the world is on a quad/plane. The quad/plane can be translated/rotated/scaled by arbitrary values, but it should still be possible to detect whether the given point is on it. I also need to get the location where the point would have been if no rotation/scale/translation had been applied to the quad.
For example, consider a quad at (0, 0, 0) with size 100x100, rotated at an angle of 45 degrees along the z axis. If my mouse location in the world is at (x, y, 0), I need to know whether that point falls on the quad in its current transformation. If yes, I need to know where that point would have been on the quad if no transformations had been applied to it. Any code sample would be of great help.
A ray-casting approach is probably simplest:
Use gluUnProject() to get the world-space direction of the ray to cast into the scene. The ray's origin is the camera position.
Put this ray into object space by transforming it by the inverse of your rectangle's transform. Note that you need to transform both the ray's origin point and direction vector.
Compute the intersection point between this ray and the XY plane with a standard ray-plane intersection test.
Check that the intersection point's x and y values are within your rectangle's bounds; if they are, then that's your desired result.
A math library such as GLM will be very helpful if you aren't confident about some of the math involved here, it has corresponding functions such as glm::unProject() as well as functions to invert matrices and do all the other transformations you'd need.
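A rough sketch of the steps above using GLM (the function name and parameters here are assumptions, and the mouse position is expected in GL window coordinates, origin at the bottom-left):

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Returns true if the mouse ray hits the quad; 'local' receives the hit point in the
// quad's untransformed (object) space.
bool pickQuad(const glm::vec2& mouse, const glm::vec4& viewport,
              const glm::mat4& view, const glm::mat4& proj,
              const glm::mat4& quadTransform,       // the quad's model matrix
              float quadW, float quadH,
              glm::vec3& local)
{
    // 1. Build the world-space ray by unprojecting the mouse at the near and far planes.
    glm::vec3 nearPt = glm::unProject(glm::vec3(mouse, 0.0f), view, proj, viewport);
    glm::vec3 farPt  = glm::unProject(glm::vec3(mouse, 1.0f), view, proj, viewport);

    // 2. Move the ray into the quad's object space (point with w = 1, direction with w = 0).
    glm::mat4 inv = glm::inverse(quadTransform);
    glm::vec3 o = glm::vec3(inv * glm::vec4(nearPt, 1.0f));
    glm::vec3 d = glm::normalize(glm::vec3(inv * glm::vec4(farPt - nearPt, 0.0f)));

    // 3. Intersect with the z = 0 plane the untransformed quad lies in.
    if (std::fabs(d.z) < 1e-6f) return false;       // ray parallel to the plane
    float t = -o.z / d.z;
    if (t < 0.0f) return false;                     // plane is behind the ray origin
    local = o + t * d;

    // 4. Bounds check in object space (adjust if your quad is centered on the origin).
    return local.x >= 0.0f && local.x <= quadW &&
           local.y >= 0.0f && local.y <= quadH;
}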
Original Question/Code
I am fine-tuning the rendering for a 3D object and attempting to implement a camera that follows the object using gluLookAt, because the object's center y position constantly increases once it reaches its maximum height. Below is the section of code where I set up the ModelView and Projection matrices:
float diam = std::max(_framesize, _maxNumRows);
float centerX = _framesize / 2.0f;
float centerY = _maxNumRows / 2.0f + _cameraOffset;
float centerZ = 0.0f;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(centerX - diam,
centerX + diam,
centerY - diam,
centerY + diam,
diam,
40 * diam);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0., 0., 2. * diam, centerX, centerY, centerZ, 0, 1.0, 0.0);
Currently the object displays very far away and appears to move further back into the screen (-z) and down (-y) until it eventually disappears.
What am I doing wrong? How can I get my surface to appear in the center of the screen, taking up the full view, and the camera moving with the object as it is updated?
Updated Code and Current Issue
This is my current code, which is now putting the object dead center and filling up my window.
float diam = std::max(_framesize, _maxNumRows);
float centerX = _framesize / 2.0f;
float centerY = _maxNumRows / 2.0f + _cameraOffset;
float centerZ = 0.0f;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(centerX - diam,
centerX,
centerY - diam,
centerY,
1.0,
1.0 + 4 * diam);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(centerX, _cameraOffset, diam, centerX, centerY, centerZ, 0, 1.0, 0.0);
I still have one problem: when the object being viewed starts moving, it does not stay perfectly centered. It appears to jitter up by a pixel and then down by two pixels when it updates. Eventually the object leaves the current view. How can I solve this jitter?
Your problem is with understanding what the projection, in your case glFrustum, does. I think the best way to explain glFrustum is with a picture (hand-drawn). You start off in a space called eye space; it's the space your vertices are in after they have been transformed by the modelview matrix. This space needs to be transformed into normalized device coordinate (NDC) space, which happens in a two-step process:
The Eye Space is transformed to Clip Space by the projection (matrix)
The perspective divide {X,Y,Z} = {x,y,z}/w is applied, taking it into Normalized Device Coordinate space.
The visible effect of this is that of a kind of "lens" for OpenGL. In the picture you can see a green highlighted area (technically a 3D volume) in eye space, which is the NDC space back-projected into it. The upper picture shows the effect of a symmetric frustum, i.e. left = -right, top = -bottom; the lower picture shows an asymmetric frustum, i.e. left ≠ -right, top ≠ -bottom.
Take note that applying such an asymmetry (via your center offset) will not turn, i.e. rotate, your frustum, but skew it. The "camera", however, will stay at the origin, still pointing down the -Z axis. Of course the center of the image projection will shift, but that's not what you want in your case.
Skewing the frustum like that has applications. Most importantly, it's the correct method to implement the different views of the left and right eye in a stereoscopic rendering setup.
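For illustration, the two cases side by side (the numbers are just arbitrary example values):

glFrustum(-1.0, 1.0, -1.0, 1.0, 1.0, 100.0);   // symmetric: left = -right, bottom = -top
glFrustum(-0.5, 1.5, -1.0, 1.0, 1.0, 100.0);   // asymmetric: same camera at the origin, skewed frustum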
The answer by Nicol Bolas pretty much tells you what you're doing wrong, so I'll skip that. You are looking for a solution rather than an explanation of what is wrong, so let's step right into it.
This is the code I use for the projection matrix:
glViewport(0, 0, mySize.x, mySize.y);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(fovy, (float)mySize.x/(float)mySize.y, nearPlane, farPlane);
Some words to describe it:
glViewport sets the size and position of the display area for OpenGL inside the window. I don't know why, but I always include this when updating the projection. If you use it like me, where mySize is a 2D vector specifying the window dimensions, the OpenGL render region will occupy the whole window. You should be familiar with the next two calls, so finally gluPerspective. The first parameter is your field of view on the Y axis. It specifies, in degrees, how much you will see, and I have never used anything other than 45. It can be used for zooming, though I prefer to leave that to the camera handling. The second parameter is the aspect ratio: it makes sure that if you render a square and your window sides aren't in a 1:1 ratio, it will still be a square. The third is the near clipping plane; geometry closer than this to the camera won't get rendered. The same goes for farPlane, except that it sets the maximum distance at which geometry gets rendered.
This is the code for the modelview matrix:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt( camera.GetEye().x,camera.GetEye().y,camera.GetEye().z,
camera.GetLookAt().x,camera.GetLookAt().y,camera.GetLookAt().z,
camera.GetUp().x,camera.GetUp().y,camera.GetUp().z);
And again, something you should know: you can keep the first two calls as they are, so we skip to gluLookAt. I have a camera class that handles all the movement, rotations, things like that. Eye, LookAt and Up are 3D vectors, and these three are really everything the camera is specified by. Eye is the position of the camera, where in space it is. LookAt is the position of the object you're looking at, or better, the point in 3D space at which you're looking, because it can really be anywhere, not just the center of an object. And if you are worried about what the Up vector is, it's really simple: it's a vector perpendicular to the vector (LookAt - Eye), but because there are infinitely many such vectors, you must specify one. If your camera is at (0, 0, 0), you are looking at (0, 0, -1) and you want to be standing on your legs, the up vector will be (0, 1, 0). If you'd like to stand on your head instead, use (0, -1, 0). If you don't get the idea, just write a comment.
As you don't have any camera class, you need to store these three vectors separately yourself. I believe you have something like the center of the 3D object you're moving; set that position as LookAt after every update. Also, in the initialization stage (when you're creating the 3D object), choose the camera position and the up vector. After every update to the object position, update the camera position the same way. If you move your object one unit up along the Y axis, do the same to the camera position. The up vector remains constant if you don't want to rotate the camera. And after every such update, call gluLookAt with the updated vectors.
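A tiny sketch of that update step, keeping the three vectors in plain floats (every name here is a placeholder, not code from the question):

// kept alongside the object; updated whenever the object moves by (dx, dy, dz)
float eye[3]    = { 0.0f, 0.0f, 10.0f };
float lookAt[3] = { 0.0f, 0.0f, 0.0f };
float up[3]     = { 0.0f, 1.0f, 0.0f };

void onObjectMoved(float dx, float dy, float dz, const float objectCenter[3])
{
    for (int i = 0; i < 3; ++i)
        lookAt[i] = objectCenter[i];              // always look at the object's center
    eye[0] += dx; eye[1] += dy; eye[2] += dz;     // move the eye by the same amount
    // 'up' stays constant unless the camera itself should roll
}

// later, in the render code:
// gluLookAt(eye[0], eye[1], eye[2], lookAt[0], lookAt[1], lookAt[2], up[0], up[1], up[2]);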
For the updated post:
I don't really get what's happening without a bigger frame of reference (but I don't need to know it anyway). There are a few things I'm curious about. If center is a 3D vector that stores your object's position, why are you setting the center of this object to be in the top right corner of your window? If it's the center, you should also have +diam in the 2nd and 4th parameters of glOrtho, and if things get bad by doing that, you are using wrong names for your variables or doing something wrong somewhere before this. You're setting the LookAt position right in your updated post, but I don't see why you are using those parameters for Eye. You should have something more like centerX, centerY, centerZ - diam as the first three parameters of gluLookAt. That gives you the camera at the same X and Y position as your object, looking at it along the Z axis from a distance of diam.
The perspective projection matrix generated by glFrustum defines a camera space (the space of vertices that it takes as input) with the camera at the origin. You are trying to create a perspective matrix with a camera that is not at the origin. glFrustum can't do that, so the way you're attempting to do it simply will not work.
There are ways to generate a perspective matrix where the camera is not at the origin. But there's really no point in doing that.
The way a moving camera is generally handled is by simply adding a transform from the world space to the camera space of the perspective projection. This just rotates and translates the world to be relative to the camera. That's the job of gluLookAt. But your parameters to that are wrong too.
The first three values are the world space location of the camera. The next three should be the world-space location that the camera should look at (the position of your object).
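A rough sketch of that in the question's own terms (keeping the camera diam units away from the object along +Z, as in the updated code above; whether +Z or -Z is the right side depends on your scene):

gluLookAt(centerX, centerY, centerZ + diam,   // eye: world-space camera position
          centerX, centerY, centerZ,          // center: the object being followed
          0.0, 1.0, 0.0);                     // up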