OpenGL: transform relative body coordinates to world coordinates - C++

I have an OpenGL C++ project I'm working on, where I draw a player and a ball in his right hand. At first I want the ball to follow the player's hand movement by setting the ball's position. After some time I need to make the ball travel using kinematics equations. I capture the player's right-hand transform matrix the moment I draw the player, using:
glGetFloatv(GL_MODELVIEW_MATRIX, ball->transformMatrix);
and then the moment I draw the ball I try to do:
glLoadMatrixf(ball->transformMatrix);
glTranslated(0, 0, 0);
glCallList(lists[BALL]);
This works perfectly and the ball is drawn in place.
The problem is when I need to move the ball:
x += velocityX*dt;
y += velocityY*dt + 0.5f*gravity*dt*dt;  // note the 1/2 factor on the acceleration term
z += velocityZ*dt;
velocityY += gravity*dt;
the movement is relative not to the player's transform but to the global one. I tried many ways to solve this, but none worked.
So how can I properly calculate the x, y, z of the ball so that I can translate it after it has left the player's hand?
Edit:
I've managed to retrieve the ball's coordinates by multiplying ball->transformMatrix with the vector {0, 0, 0, 1}, storing the result in a world vector, and after that calling (the moment I draw the ball):
glGetFloatv(GL_MODELVIEW_MATRIX, currentTransformMatrix);
then inverting currentTransformMatrix and multiplying the inverse with the world vector. The result is the ball's proper coordinates {x, y, z}.
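A minimal sketch of that back-transform, assuming GLM is available; ball->transformMatrix and currentTransformMatrix are the GLfloat[16] arrays captured with glGetFloatv above:

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Matrix captured the moment the hand was drawn, and the one active when the ball is drawn.
glm::mat4 handMatrix    = glm::make_mat4(ball->transformMatrix);
glm::mat4 currentMatrix = glm::make_mat4(currentTransformMatrix);

// Hand origin expressed in the space glGetFloatv captured.
glm::vec4 world = handMatrix * glm::vec4(0.0f, 0.0f, 0.0f, 1.0f);

// Bring that point into the frame the ball is currently drawn in.
glm::vec4 local = glm::inverse(currentMatrix) * world;
// local.x, local.y, local.z are the ball's coordinates {x, y, z}.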

You need to transform back into the player's hand space, i.e. invert the transformation from hand to world. Since OpenGL is not a math library, you need to either implement the matrix math yourself or use some 3rd-party linear math library.
EDIT due to comment:
What you need to transform into the ball's local coordinate system are the velocity and the acceleration. Both velocity and gravity are just direction vectors, so the transformation is independent of the ball's position in the world (well, gravity may change, but that's not an issue here). So you need the inverse transform from world space to ball space, which is independent of position. This is the very same problem as transforming normals (just the other way round), so you need the transposed inverse of the inverted matrix (explained in detail here: http://www.lighthouse3d.com/tutorials/glsl-tutorial/the-normal-matrix/ ). However, you already have the inverse of that inverse: it's the ball->world matrix itself. So you just have to transpose the modelview matrix, multiply the velocity and gravity vectors with it, and apply them in the ball's local space.
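A minimal sketch of that idea, assuming GLM and a rotation-only modelview (no scaling), so the transpose of its upper 3x3 block is its inverse; velocityX/velocityY/velocityZ and gravity are the variables from the question:

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

GLfloat mv[16];
glGetFloatv(GL_MODELVIEW_MATRIX, mv);   // the ball->world matrix

// The transpose of the upper 3x3 maps world-space directions into ball-local directions.
glm::mat3 worldToLocal = glm::transpose(glm::mat3(glm::make_mat4(mv)));

glm::vec3 localVelocity = worldToLocal * glm::vec3(velocityX, velocityY, velocityZ);
glm::vec3 localGravity  = worldToLocal * glm::vec3(0.0f, gravity, 0.0f);
// Integrate the kinematics with localVelocity/localGravity in the ball's local space.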

Related

How do I maintain the relative transformation between 2 objects after changing the transformation of either one, without a scenegraph?

Say I have 2 objects, a camera and a cube, both on the XZ plane; the cube has some arbitrary rotation, and the camera is facing the cube.
Now a transformation R is applied to the camera so that it has a new rotation and position.
I want to move the cube in front of the camera using a transformation R1, such that in the view it looks exactly as before R was applied, meaning the relative distance, rotation and scale between the 2 objects remain the same after both R and R1.
The following image gives the gist of the problem.
Assume that there's no scenegraph that we can use.
I've posed the problem mainly in 2D but I'm trying to solve it in 3D, so rotations can have all yaw, pitch and roll, and translations can be anywhere in 3D space.
EDIT:
I forgot to add what I have done so far.
I figured out how to maintain the relative distance between the camera and the cube: I can project the cube's position to get a screen point, then unproject that screen point with the new camera position to get the new world position.
However, for rotation I have tried the following:
I thought I could apply the same rotation as R in R1. This didn't work; it appears to work if the rotation happens around only one axis, but not if it happens around more than one axis.
I thought I could take the delta rotation between the camera and the cube, apply the camera's rotation to the cube and then multiply by the delta rotation; this also didn't work.
Let M and V be the model and view matrices before you move the camera, and M2 and V2 the matrices after you move the camera. To be clear: the model matrix transforms coordinates from object-local coordinates into world coordinates; the view matrix transforms from world coordinates into camera (eye) coordinates. Consequently V*M*p transforms the position p into eye space.
For the position on the screen to stay constant, we need V*M*p = V2*M2*p to be true for all p (that's assuming that the FOV doesn't change). Therefore V*M = V2*M2, or
M2 = inverse(V2)*V*M
If you apply the camera transformation on the right (V2 = V*R) then the above expression for M2 can be simplified:
M2 = inverse(R)*M
(that is you apply inverse(R) on the left of the model matrix to compensate).
Alternatively, ask yourself if you really need to keep the object's coordinates in the world reference frame. It may be easier not to apply the view matrix when rendering that object at all; that would effectively keep it relative to the camera at all times without any additional tweaks. That would have better numerical stability too.
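A minimal sketch of that compensation with GLM, using M, V and V2 exactly as defined above:

#include <glm/glm.hpp>

// V * M * p == V2 * M2 * p for all p  =>  M2 = inverse(V2) * V * M
glm::mat4 keepApparentPose(const glm::mat4& M, const glm::mat4& V, const glm::mat4& V2)
{
    return glm::inverse(V2) * V * M;
}

// If the camera transform was applied on the right (V2 = V * R),
// this reduces to M2 = inverse(R) * M.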

Rotating object along all 3 axes to map to the player's camera LookAt vector

I have a simple 3D LookAt vector, and I wish to rotate the player model (a simple cube) to show where the player/cube is looking.
For sideways camera movement I've managed to figure it out and do the following:
glTranslatef(position.x, position.y, position.z);
glRotatef(atan2(lookAt.z, lookAt.x) * 180 / PI, 0, 1, 0);
Now I know that to get up-down camera movement to map to the rendered cube model, I need to rotate the cube around its x and z axes as well, but I can't seem to figure out what formula to use for those two.
OpenGL rotates the whole coordinate system (the whole space, not only the cube), so after the first rotation you only need to rotate around the z axis.
// first rotation
glRotatef(-atan2(lookAt.z, lookAt.x) * 180 / PI, 0, 1, 0);
// second rotation
float d = sqrt(pow(lookAt.x,2) + pow(lookAt.z,2));
float pitch = atan2(lookAt.y, d);
glRotatef(pitch * 180 / PI, 0, 0, 1);
First and second rotation: I assume your model is looking along the x axis, and that lookAt is given relative to the model's position.
If you're familiar with matrix math, matrices are an easier way to think about it. If you're not familiar with matrices, this series explains how to use them to solve common game development problems: https://www.youtube.com/playlist?list=PLW3Zl3wyJwWNQjMz941uyOIq3Nw6bcDYC Getting good with matrices is a good idea if you want to be a 3D game programmer.
For your problem, you want to make a translation/rotation matrix which will transform the box to the proper place for you. You can make a translation matrix and a rotation matrix individually, and then at the end take the product of the two. I'll try to break that down.
The translation matrix is simple: if your position is (x, y, z), then your translation matrix T will be

1 0 0 x
0 1 0 y
0 0 1 z
0 0 0 1
To construct a rotation matrix, you need to rotate the standard basis vectors the way you want. Then when you create a matrix from those rotated basis vectors, the matrix will rotate other vectors in the same way. As an example, take the standard basis vectors

e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1)

Now rotate e1 and e2 around e3 by 90 degrees clockwise:

e1 -> (0, -1, 0), e2 -> (1, 0, 0), e3 -> (0, 0, 1)

Now put the rotated vectors into a matrix as its columns:

R =
 0  1  0
-1  0  0
 0  0  1

and you have R, a matrix that rotates things around e3 by 90 degrees.
In your case you want to rotate stuff such that it faces a vector that you provide. That makes things easy: we can calculate our basis vectors from that vector. If your vector is lookAt, then the first basis vector is simply f = normalize(lookAt), and we can solve for the other two basis vectors using cross products. You know that the character won't ever roll their view (right?), so we can use the global up vector as well. I'll call the global up vector u. In your case you're using y as the "up" dimension, so the global up vector will be u = (0, 1, 0).
Then:

s = cross(f, u)
s = normalize(s)
v = cross(s, f)
In the first line you do a cross product between the view vector and the up vector to get a vector orthogonal to both; this will serve as the third basis vector after it is normalized, which is the second line. In the third line another cross product generates the second basis vector. These three vectors represent what happens when the standard basis vectors are rotated the way you want them to be. Use them as the columns in a matrix like so:

    f.x  v.x  s.x
R = f.y  v.y  s.y
    f.z  v.z  s.z

Now the last step in the math is to make a final matrix that will do both translation and rotation, and this step is easy:

M = T * R
Then load that matrix into OpenGL with glLoadMatrix:
glLoadMatrixf(&M);
All of this gets explained in the video series I linked as well :)
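Here is a minimal C++ sketch of the whole construction, assuming a y-up world, a model that faces along +x by default, and that lookAt is a direction vector (position and lookAt follow the question's names; Vec3, cross and normalize are small helpers written out here):

#include <cmath>
#include <GL/gl.h>

struct Vec3 { float x, y, z; };

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static Vec3 normalize(const Vec3& a) {
    float len = std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
    return { a.x/len, a.y/len, a.z/len };
}

// Build M = T*R for a model at 'position' facing along 'lookAt', and load it.
void loadModelMatrix(const Vec3& position, const Vec3& lookAt)
{
    Vec3 f = normalize(lookAt);                    // forward (rotated x axis)
    Vec3 s = normalize(cross(f, Vec3{0, 1, 0}));   // side    (rotated z axis)
    Vec3 v = cross(s, f);                          // up      (rotated y axis)

    // OpenGL expects column-major storage: one basis vector per column, translation last.
    GLfloat m[16] = {
        f.x, f.y, f.z, 0.0f,
        v.x, v.y, v.z, 0.0f,
        s.x, s.y, s.z, 0.0f,
        position.x, position.y, position.z, 1.0f
    };
    glLoadMatrixf(m);
}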

OpenGL camera rotation

In OpenGL I'm trying to create a free-flight camera. My problem is the rotation around the Y axis. The camera should always rotate around the world Y axis, not around its local orientation. I have tried several matrix multiplications, but all without result. With
camMatrix = camMatrix * yrotMatrix
the camera rotates around its local axis, and with
camMatrix = yrotMatrix * camMatrix
the camera rotates around the world axis, but always around the origin. However, the rotation center should be the camera. Does anybody have an idea?
One of the more tricky aspects of 3D programming is getting complex transformations right.
In OpenGL, every point is transformed with the model/view matrix and then with the projection matrix.
The modelview matrix takes each point and transforms it to where it should be from the point of view of the camera. The projection matrix converts the point's coordinates so that the X and Y coordinates can be mapped to the window easily.
To get the modelview matrix right, you have to start with an identity matrix (one that doesn't change the vertices), then apply the transforms for the camera's position and orientation, then the transforms for the object's position and orientation, in reverse order.
Another thing you need to keep in mind is that rotations are always about an axis that passes through the origin (0,0,0). So when you apply a rotation transform for the camera, whether you are turning it (as you would turn your head) or orbiting it around the origin (as the Earth orbits the Sun) depends on whether you have previously applied a translation transform.
So if you want to both rotate and orbit the camera, you need to (see the sketch after this list):
Apply the rotation(s) to orient the camera
Apply translation(s) to position it
Apply rotation(s) to orbit the camera around the origin
(optionally) apply translation(s) so that, in its set orientation, the camera orbits a point other than (0,0,0).
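A minimal fixed-function sketch of that call order (cameraPitch, cameraYaw, cameraDistance, orbitAngle and pivot are hypothetical placeholders for your own values):

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(cameraPitch, 1.0f, 0.0f, 0.0f);    // 1. orient the camera (turn its "head")
glRotatef(cameraYaw,   0.0f, 1.0f, 0.0f);
glTranslatef(0.0f, 0.0f, -cameraDistance);   // 2. position it
glRotatef(orbitAngle,  0.0f, 1.0f, 0.0f);    // 3. orbit it around the origin
glTranslatef(-pivot.x, -pivot.y, -pivot.z);  // 4. (optional) orbit a point other than (0,0,0)
// ...then apply the object's own transforms and draw.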
Things can get more complex if you, say, want to point the camera at a point that is not (0,0,0) and also orbit that point at a set distance, while also being able to pitch or yaw the camera. See here for an example in WebGL. Look for GLViewerBase.prototype.display.
The Red Book covers transforms in much more detail.
Also note gluLookAt, which you can use to point the camera at something, without having to use rotations.
Rather than doing this using matrices, you might find it easier to create a camera class which stores a position and orthonormal n, u and v axes, and rotate them appropriately, e.g. see:
https://github.com/sgolodetz/hesperus2/blob/master/Shipwreck/MapEditor/GUI/Camera.java
and
https://github.com/sgolodetz/hesperus2/blob/master/Shipwreck/MapEditor/Math/MathUtil.java
Then you write things like:
if(m_keysDown[TURN_LEFT])
{
m_camera.rotate(new Vector3d(0,0,1), deltaAngle);
}
When it comes time to set the view for the camera, you do:
gl.glLoadIdentity();
glu.gluLookAt(m_position.x, m_position.y, m_position.z,
m_position.x + m_nVector.x, m_position.y + m_nVector.y, m_position.z + m_nVector.z,
m_vVector.x, m_vVector.y, m_vVector.z);
If you're wondering how to rotate about an arbitrary axis like (0,0,1), see MathUtil.rotate_about_axis in the above code.
If you don't want to transform based on the camera from the previous frame, my suggestion might be just to throw out the matrix compounding and recalc it every frame. I don't think there's a way to do what you want with a single matrix, as that stores the translation and rotation together.
I guess if you want just a pitch/yaw camera, store those two values as floats and rebuild the matrix from them every frame. Maybe something like this pseudocode:
onFrameUpdate() {
    newPos = camMatrix * (0, 0, speed)  // move forward along the camera's local axis
    yaw   += mouse_move_x               // horizontal mouse motion turns the camera
    pitch += mouse_move_y               // vertical mouse motion tilts it
    camMatrix = identity.translate(newPos)
    camMatrix = rotate(camMatrix, (0, 1, 0), yaw)
    camMatrix = rotate(camMatrix, (1, 0, 0), pitch)
}
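A concrete version of that pseudocode, as a sketch assuming GLM (mouseDX, mouseDY and speed are hypothetical per-frame inputs; the camera is assumed to look down its local -Z axis):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 camMatrix(1.0f);
float yaw = 0.0f, pitch = 0.0f;

void onFrameUpdate(float mouseDX, float mouseDY, float speed)
{
    // Move forward along the camera's current local forward axis.
    glm::vec3 newPos = glm::vec3(camMatrix * glm::vec4(0.0f, 0.0f, -speed, 1.0f));

    yaw   += mouseDX;   // horizontal mouse motion turns the camera
    pitch += mouseDY;   // vertical mouse motion tilts it

    // Rebuild the matrix from scratch every frame: translate, then yaw, then pitch.
    camMatrix = glm::translate(glm::mat4(1.0f), newPos);
    camMatrix = glm::rotate(camMatrix, glm::radians(yaw),   glm::vec3(0.0f, 1.0f, 0.0f));
    camMatrix = glm::rotate(camMatrix, glm::radians(pitch), glm::vec3(1.0f, 0.0f, 0.0f));
}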
the camera rotates around the world axis, but always around the origin. However, the rotation center should be the camera. Does anybody have an idea?
I assume the matrix is stored in memory this way (the numbers are element indices, as if the matrix were a linear 1D array):
0 1 2 3 //row 0
4 5 6 7 //row 1
8 9 10 11 //row 2
12 13 14 15 //row 3
Solution (see the sketch after these steps):
Store the last row of the camera matrix in a temporary variable.
Set the last row of the camera matrix to (0, 0, 0, 1).
Use camMatrix = yrotMatrix * camMatrix.
Restore the last row of the camera matrix from the temporary variable.
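A minimal C++ sketch of those four steps, treating both matrices as GLfloat[16] arrays in OpenGL's usual memory layout, where elements 12-14 hold the translation (the "last row" above):

#include <GL/gl.h>

void rotateAroundCameraPosition(GLfloat camMatrix[16], const GLfloat yrotMatrix[16])
{
    // 1. Store the translation part (elements 12..15) in a temporary.
    GLfloat t[4] = { camMatrix[12], camMatrix[13], camMatrix[14], camMatrix[15] };

    // 2. Set it to (0, 0, 0, 1) so the rotation happens about the camera itself.
    camMatrix[12] = 0.0f; camMatrix[13] = 0.0f; camMatrix[14] = 0.0f; camMatrix[15] = 1.0f;

    // 3. camMatrix = yrotMatrix * camMatrix (column-major multiply).
    GLfloat r[16];
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
        {
            r[col*4 + row] = 0.0f;
            for (int k = 0; k < 4; ++k)
                r[col*4 + row] += yrotMatrix[k*4 + row] * camMatrix[col*4 + k];
        }
    for (int i = 0; i < 16; ++i) camMatrix[i] = r[i];

    // 4. Restore the translation.
    camMatrix[12] = t[0]; camMatrix[13] = t[1]; camMatrix[14] = t[2]; camMatrix[15] = t[3];
}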

Glut Mouse Coordinates

I am trying to obtain the Cartesian coordinates of a point. For that I am using a simple function that returns window coordinates, something like (100, 200). I want to convert these to Cartesian coordinates; the problem is that the window size is variable, so I really don't know how to implement this.
Edit (from the OP):
I tried to use gluUnProject by doing something like this:
GLdouble modelMatrix[16];
glGetDoublev(GL_MODELVIEW_MATRIX, modelMatrix);
GLdouble projMatrix[16];
glGetDoublev(GL_PROJECTION_MATRIX, projMatrix);
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);  // the original snippet never filled in the viewport
GLdouble position[3];
gluUnProject(
    x,
    viewport[3] - y,   // window y runs top-down, OpenGL's runs bottom-up
    1,                 // winZ = 1 unprojects onto the far plane
    modelMatrix,
    projMatrix,
    viewport,
    &position[0],      // resulting world-space coordinates
    &position[1],
    &position[2]
);
cout << position[0];
However, the position I get seems completely random.
Mouse coordinates are already Cartesian coordinates, namely the coordinates of window space. I think what you're looking for is a transformation into the world space of your OpenGL scene. The 2D mouse coordinates in the window lack some information though: the depth.
So what you can actually obtain is a ray from the "camera" into the scene. For this you need to perform the back projection from screen space to world space. You take the projection matrix P and the view matrix V (what's generated by gluLookAt or similar), form the product P*V and invert it, i.e. (P*V)^-1 = V^-1 * P^-1. Note that inversion is not transposition. Inverting a matrix is done by Gauss-Jordan elimination, preferably with some pivoting; any textbook on linear algebra explains it.
Then you need two vectors to form a ray. For this you take two screen space positions with the same XY coordinates but differing depth (say, 0 and 1) and do the backprojection:
(P*V)^-1 * (x, y, {0, 1})
The difference of the resulting vectors gives you a ray direction.
The whole backprojection process has been wrapped up in gluUnProject. However, I recommend not using it in its GLU form; either look at its source code to learn from it, or use a modern-day substitute such as the one offered by GLM.
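For example, a minimal sketch with GLM's glm::unProject, assuming view and projection describe your current scene, viewport is a glm::vec4 of (x, y, width, height), and mouseX/mouseY are window coordinates with y already flipped to OpenGL's bottom-left origin:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Unproject the same window position at the near and far plane to get a picking ray.
glm::vec3 nearPoint = glm::unProject(glm::vec3(mouseX, mouseY, 0.0f), view, projection, viewport);
glm::vec3 farPoint  = glm::unProject(glm::vec3(mouseX, mouseY, 1.0f), view, projection, viewport);
glm::vec3 rayDir    = glm::normalize(farPoint - nearPoint);   // direction of the ray into the scene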

Relative rotation of OpenGL Camera

I am currently struggling to find a formula to rotate my OpenGL "camera" (I tried to do it via a scene rotation, but I have the same issue).
Basically my camera is at a given position, looking at a given point (both passed to gluLookAt), and I would like to rotate the camera upwards, for example, while still looking at the same point.
What would be the right process?
What input data should I use to decide the amount of movement: the change in 2D mouse coordinates, or the change in 3D unprojected mouse coordinates?
The trick is to see that a camera-rotation is the same as a scene rotation if you do it at the correct position. Move the camera into the point around which you want to rotate, then rotate the camera, then move back out by the same distance you moved in.
The amount by which you rotate depends on your application. Take Google Earth as an example: if you are close to the surface the rotation is small (in absolute terms); if you are far from the surface it is large.
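In fixed-function terms that trick is just a sandwich of transforms; a minimal sketch, where pivot is the point you are looking at and angle/axis are whatever rotation you derived from the mouse input:

// Rotate about an arbitrary pivot: move in, rotate, move back out.
glTranslatef(pivot.x, pivot.y, pivot.z);
glRotatef(angle, axis.x, axis.y, axis.z);
glTranslatef(-pivot.x, -pivot.y, -pivot.z);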
If you're creating an orbiting camera (orbiting around LookAt) for OpenGL, I suggest you build it from these data:
LookAtPosition- 3D vector
CamUp - 3D unit vector
RelativeCamPosition - 3D unit vector
CamDistance - decimal number
LookAtPosition is the point you'll be looking at. CamUp is the vector that points up from the camera. It's best to initialize the camera at no rotation, so that CamUp = [0,1,0]. Note that it's a unit vector, so its magnitude/size/length is always 1. RelativeCamPosition is again a unit vector. You get it by taking the LookAt-to-camera vector and dividing it by its magnitude, which you save in CamDistance. In the initialized state it might look like this:
LookAtPosition = [0,0,0]
CamUp = [0,1,0]
RelativeCamPosition = [1,0,0]
CamDistance = 10
You can now get camera position by
CamPosition = LookAtPosition + RelativeCamPosition * CamDistance
But you need to rotate that camera around, right? Well, there's a reason for unit vectors: they are easy to use in calculations. I believe you use angles for rotating, so you only need sine and cosine. The Rotate function might look like this:
Rotate(angleX, angleY){
    // angleX: absolute horizontal (yaw) angle; angleY: absolute vertical (pitch) angle
    RelativeCamPosition.x = sin(angleX)*cos(angleY);
    RelativeCamPosition.z = cos(angleX)*cos(angleY);
    RelativeCamPosition.y = sin(angleY);
}
where angleX and angleY are absolute (NOT relative) rotations in the horizontal and vertical directions. You should always use absolute rotations because floating-point errors can accumulate when adding deltas. Anyway, I just made those calculations on a scrap of paper, so I hope they're all right.
Edit: I've just noticed that this will only work if your initial state is as I wrote it, RelativeCamPosition = [1,0,0]. However, it shouldn't be hard to adjust the formulas so they work for an arbitrary initial state.
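A minimal C++ sketch tying those pieces together with gluLookAt, under the same assumptions (y-up, initial RelativeCamPosition along +x, angleX/angleY absolute and in radians):

#include <cmath>
#include <GL/glu.h>

struct Vec3 { float x, y, z; };

Vec3  LookAtPosition      = { 0.0f, 0.0f, 0.0f };
Vec3  CamUp               = { 0.0f, 1.0f, 0.0f };
Vec3  RelativeCamPosition = { 1.0f, 0.0f, 0.0f };
float CamDistance         = 10.0f;

void applyOrbitCamera(float angleX, float angleY)
{
    // Absolute yaw/pitch -> unit vector pointing from the look-at point towards the camera.
    RelativeCamPosition = { std::sin(angleX) * std::cos(angleY),
                            std::sin(angleY),
                            std::cos(angleX) * std::cos(angleY) };

    Vec3 camPos = { LookAtPosition.x + RelativeCamPosition.x * CamDistance,
                    LookAtPosition.y + RelativeCamPosition.y * CamDistance,
                    LookAtPosition.z + RelativeCamPosition.z * CamDistance };

    gluLookAt(camPos.x, camPos.y, camPos.z,
              LookAtPosition.x, LookAtPosition.y, LookAtPosition.z,
              CamUp.x, CamUp.y, CamUp.z);
}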