I'm writing a project in OpenGL and I've run into a problem determining the position of an object after translating and rotating the model-view matrix.
To visualize this, imagine how the Earth orbits the Sun; basically, I need to determine the position of the Earth at runtime.
I'll divide my code into a few steps. Let's assume we start at position (0, 0, 0) with a rotation of 0.
while (true)
{
    modelViewMatrix.PushMatrix();
    modelViewMatrix.Translate(1, 1, 0);   // 1
    modelViewMatrix.Rotate(k++, 0, 1, 0); // 2
    object.Draw();                        // 3
    modelViewMatrix.PopMatrix();
}
1 - At this point determining position is easy, it's (1, 1, 0)
2 - Now we rotate the object by a constantly incrementing angle to keep it moving around position (0, 0, 0)
3 - Drawing the object
Now, I know that modelViewMatrix stores information like rotation and position, but I don't know how to use it to find the actual position of my object after translating and rotating it.
Here's my attempt at drawing what I'm talking about; the red question mark (?) indicates an example position of the object I'm trying to find.
You should be able to create a Vec3 at (0, 0, 0) and transform it by your matrix; that will give you the position of your 'Earth'. Your object probably already has a position, though, so you should really use your matrix to transform the object's actual position rather than changing your entire model-view matrix just to draw the object there.
If you're curious how these matrices work, google "homogeneous transformation matrix" to read up on them.
OpenGL SuperBible, 4th Edition, page 164:
To apply a camera transformation, we take the camera’s actor transform and flip it so that
moving the camera backward is equivalent to moving the whole world forward. Similarly,
turning to the left is equivalent to rotating the whole world to the right.
I can't understand why.
Imagine yourself placed within a universe that also contains everything else. For your viewpoint to appear to move forward, you have two options:
You move yourself forward.
You move everything else in the universe in the opposite direction.
Because you define everything in OpenGL in terms of the viewer (you're ultimately rendering a 2D image of a particular viewpoint of the 3D world), it often makes more sense, both mathematically and programmatically, to take the second approach.
Mathematically, there is only one correct answer. By definition, after transforming a world-space position to eye-space by multiplying it by the view matrix, the resulting vector is interpreted relative to the origin, which is where the camera is conceptually located.
What the SuperBible states is mathematically just a negation of the translation in some direction, which is what you automatically get when using functions that compute a view matrix, such as gluLookAt() or glm::lookAt() (GLU is a library layered on legacy GL, but mathematically the two are identical).
Have a look at the API reference for gluLookAt(). You'll see that the first step is setting up an orthonormal basis for eye-space, which initially results in a 4x4 matrix encoding essentially only the upper 3x3 rotation. The second step is multiplying that matrix by a translation matrix. In terms of legacy functions, this can be expressed as
glMultMatrixf(M); // where M encodes the eye-space basis
glTranslated(-eyex, -eyey, -eyez);
As you can see, the vector (eyex, eyey, eyez), which specifies where the camera is located in world-space, is simply multiplied by -1. Now assume we don't rotate the camera at all, but that it is located at world-space position (5, 5, 5). The corresponding view matrix View would be
[1 0 0 -5
0 1 0 -5
0 0 1 -5
0 0 0 1]
Now take a world-space vertex position P = (0, 0, 0, 1) transformed by that matrix: P' = View * P. P' will then simply be P'=(-5, -5, -5, 1).
When thinking in world-space, the camera is at (5, 5, 5) and the vertex is at (0, 0, 0). When thinking in eye-space, the camera is at (0, 0, 0) and the vertex is at (-5, -5, -5).
So, in conclusion: conceptually it's a matter of how you look at things. You can either think of it as transforming the camera relative to the world, or as transforming the world relative to the camera.
Mathematically, and in terms of the OpenGL transformation pipeline, there is only one answer: the camera in eye-space (or view-space or camera-space) is always at the origin, and world-space positions transformed to eye-space are always relative to the camera's coordinate system.
EDIT: Just to clarify, although the transformation pipeline and the vector spaces involved are well defined, you can still use the world-space positions of everything, even the camera, for instance in a fragment shader for lighting computations. The important thing is never to mix entities from different spaces, e.g. don't compute something from a world-space position and an eye-space position, and so on.
EDIT2: Nowadays, in a time when we all use shaders *cough and roll-eyes*, you're pretty flexible, and theoretically you can pass any position you like to gl_Position in a vertex shader (or the geometry shader or tessellation stages). However, since the subsequent computations are fixed, i.e. clipping, perspective division, and the viewport transformation, the resulting position will simply be clipped if it's not inside [-gl_Position.w, gl_Position.w] in x, y, and z.
There is a lot to this if you really want to get it down. I suggest you read the entire article on the rendering pipeline in the official GL wiki.
I have some objects in a scene and I want to know how to get the world coordinates of the object after some rotations.
For example, I used this: X.matrix.multiplyByVector(X.cube.transform.matrix, 0, 0, 0); to get the world coordinates at the very beginning of the rendering process. The coordinates are (201.5, -54.5, 102.5).
Then I apply some rotations, apply the formula again, and it displays the same coordinates as before, even though the object (a cube in this example) is in another place in the scene.
Did you check the value of X.cube.transform.matrix before and after the rotation, to see if it gets modified?
Moreover, if your object is centered on (0, 0, 0) and you rotate the object around its center (0, 0, 0), the position of the center will still be the same...
In that case you could try with the cube's borders (e.g. a corner) instead.
Does it make sense?
Hope this helps
I am attempting to cast a ray from the center of the screen and check for collisions with objects.
When rendering, I use these calls to set up the camera:
GL11.glRotated(mPitch, 1, 0, 0);
GL11.glRotated(mYaw, 0, 1, 0);
GL11.glTranslated(mPositionX, mPositionY, mPositionZ);
I am having trouble creating the ray, however. This is the code I have so far:
ray.origin = new Vector(mPositionX, mPositionY, mPositionZ);
ray.direction = new Vector(?, ?, ?);
My question is: what should I put in place of the question marks? I.e., how can I create the ray direction from the pitch and yaw?
I answered a question not unlike yours just recently, so I suggest you read this: 3d coordinate from point and angles
This applies to your question as well, except that you don't want just a point but a ray. Remember that a point can be treated as a displacement-from-origin vector, and that a ray is defined as
r(t) = v*t + s
In your case, s is the camera position, and v is a point relative to the camera's position, treated as a direction. You can figure out the rest (or ask, if things are still unclear).
Trying to understand gluLookAt, especially the last 3 parameters.
Can someone please explain ?
gluLookAt(camera[0], camera[1], camera[2], /* look from camera XYZ */
0, 0, 0, /* look at the origin */
0, 1, 0); /* positive Y up vector */
What exactly does it mean by "positive Y up vector" ?
Is it possible to set the last three up-vector parameters to all 1s, e.g. (1, 1, 1)? And if so, what exactly would that mean?
Is it possible for the up vector to have values greater than 1, e.g. (2, 3, 4)?
Thanks.
Sketchup to the rescue!
Your image has an 'up' to it that can be separate from the world's up. The blue window in this image can be thought of as the 'near-plane' that your imagery is drawn on: your monitor, if you will. If all you supply is the eye-point and the at-point, that window is free to spin around. You need to give an extra 'up' direction to pin it down. OpenGL will normalize the vector that you supply if it isn't unit length. OpenGL will also project it down so that it forms a 90 degree angle with the 'z' vector defined by eye and at (unless you give an 'up' vector that is in exactly the same direction as the line from 'eye' to 'at'). Once 'in' (z) and 'up' (y) directions are defined, it's easy to calculate the 'right' or (x) direction from those two.
In this figure, the 'supplied' up vector is (0,1,0) if the blue axis is in the y direction. If you were to give (1,1,1), it would most likely rotate the image by 45 degrees because that's saying that the top of the blue window should be pointed toward that direction. Consequently the image of the guy would appear to be tipped (in the opposite direction).
The "up vector" of gluLookAt is just the way the camera is oriented. If you have a camera at a position, looking directly at an object, there is one source of freedom still remaining: rotation. Imagine a camera pointed directly at an object, fixed in place. But, your camera can still rotate, spinning your image. The way OpenGL locks this rotation in place is with the "up vector."
Imagine (0, 0, 0) is directly at your camera. The "up vector" is then merely a coordinate around your camera. Once you have the "up vector", OpenGL will spin the camera around until the top of the camera faces directly toward that coordinate.
If the "up vector" is at (0, 1, 0), your camera points normally: the up vector is directly above the camera, so the top of the camera is at the top, and the camera is oriented correctly. Move the up vector to (1, 1, 0), though, and in order to point the top of the camera at the "up vector", the camera will need to rotate by 45 degrees, rotating the entire image by 45 degrees.
This answer was not meant as an in-depth tutorial, but rather as a way to grasp the concept of the "up vector," to help better understand the other excellent answers to your question.
The first 3 parameters are the camera position.
The next 3 parameters are the target position.
The last 3 parameters represent the roll of the camera.
Very important: call gluLookAt after glMatrixMode(GL_MODELVIEW);.
Another hint: in conventions where coordinates are written as x, y, z with z as the up axis, always specifying (0, 0, 1) for the last 3 parameters avoids confusion, since (0, 1, 0) then reads as if the order were x, z, y. Whether this applies depends on your coordinate convention.
The last vector, also known as the camera's up vector, defines the orientation of the camera.
Imagine a stick attached to the top of a "real" camera; the stick's direction is the up vector.
By changing it from (0, 1, 0) you can roll the camera sideways.
I am trying to make a very simple object rotate around a fixed point in 3D space.
Basically, my object is created from a single D3DXVECTOR3, which indicates the current position of the object relative to a single constant point; let's just say (0, 0, 0).
I already calculate my angle based on the current in-game time of day.
But how can I apply that angle to the position so that it rotates?
Sorry, I'm pretty new to DirectX.
So are you trying to plot the sun or the moon?
If so then one assumes your celestial object is something like a sphere that has (0,0,0) as its center point.
Probably the easiest way to rotate it into position is to do something like the following
D3DXMATRIX matRot;
D3DXMATRIX matTrans;
D3DXMatrixRotationX( &matRot, angle );
D3DXMatrixTranslation( &matTrans, 0.0f, 0.0f, orbitRadius );
D3DXMATRIX matFinal = matTrans * matRot;
Then set that matrix as your world matrix.
What it does: it creates a rotation matrix that rotates the object by "angle" around the X axis (i.e. in the Y-Z plane). It then creates a matrix that pushes the object out to the appropriate place at angle 0 (orbitRadius may be better off as the 3rd parameter of the translation call, depending on where your zero point is). The final line multiplies these two matrices together. Matrix multiplication is non-commutative (i.e. M1 * M2 != M2 * M1). What the above does is move the object orbitRadius units along the Z axis and then rotate that around the point (0, 0, 0). You can think of rotating an object held in your hand: if orbitRadius is the distance from your elbow to your hand, then any rotation around your elbow (at (0, 0, 0)) will sweep an arc through the air.
I hope that helps, but I would really recommend doing some serious reading up on Linear Algebra. The more you know the easier questions like this will be to solve yourself :)