Rotate the camera around an object in spherical coordinates with gluLookAt - C++

So I'm trying to rotate the camera around a cube object, using the keyboard arrows to change the y angle and the x angle. I want a result like this: this video
Now I want to move the camera in a circular motion around the shape, so I used the parametric equation of a sphere to determine the camera's x, y and z coordinates on the surface of the sphere from the x angle and the y angle. This is the equation: sphere parametric equation
At the moment, the equations are as follows:
x = radius*cos(y_angle)*cos(x_angle)
y = radius*cos(y_angle)*sin(x_angle)
z = radius*sin(y_angle)
So the code will be:
gluLookAt(x, y, z, 0, 0, 0, 0, 1, 0); // x,y and z that we calculated above
Now it is supposed to work without problems, but the camera rotates in a strange way that is not as it should. I think this has something to do with the up vector in the
gluLookAt() function, because I'm using a constant up vector of
(0,1,0)
Anyway, I can't calculate the up vector correctly, so I want help in making a circular motion around the cube like the one shown in the video using gluLookAt. I hope I made it clear, because as you may have noticed, I can't explain it well.
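Putting the pieces of the question together, here is a minimal sketch of the setup described above (assuming a GLUT window, angles kept in radians, and a hypothetical drawCube() helper; the constant (0,1,0) up vector is exactly the part suspected of causing the strange rotation):
#include <GL/glut.h>
#include <cmath>

void drawCube();                      // assumed helper that draws the cube at the origin

static float x_angle = 0.0f;          // changed with the left/right arrow keys
static float y_angle = 0.0f;          // changed with the up/down arrow keys
static const float radius = 5.0f;

void display() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // Camera position on the sphere, from the parametric equations above
    float x = radius * std::cos(y_angle) * std::cos(x_angle);
    float y = radius * std::cos(y_angle) * std::sin(x_angle);
    float z = radius * std::sin(y_angle);

    gluLookAt(x, y, z,  0, 0, 0,  0, 1, 0);   // constant up vector (0,1,0)

    drawCube();
    glutSwapBuffers();
}

void specialKeys(int key, int, int) {         // registered with glutSpecialFunc
    const float step = 0.05f;                 // radians per key press
    if (key == GLUT_KEY_LEFT)  x_angle -= step;
    if (key == GLUT_KEY_RIGHT) x_angle += step;
    if (key == GLUT_KEY_UP)    y_angle += step;
    if (key == GLUT_KEY_DOWN)  y_angle -= step;
    glutPostRedisplay();
}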

Related

OpenGL camera rotation using gluLookAt

I am trying to use gluLookAt to implement an FPS-style camera in the OpenGL fixed-function pipeline. The mouse should rotate the camera in any given direction.
I store the position of the camera:
float xP;
float yP;
float zP;
I store the look at coordinates:
float xL;
float yL;
float zL;
The up vector is always set to (0,1,0)
I use this camera as follows: gluLookAt(xP,yP,zP, xL,yL,zL, 0,1,0);
I want my camera to be able to move in yaw and pitch, but not roll.
After every frame, I reset the coordinates of the mouse to the middle of the screen. From this I am able to get a change in both x and y.
How can I convert the change in x and y after each frame to appropriately change the lookat coordinates (xL, yL, zL) to rotate the camera?
Start with a set of vectors:
fwd = (0, 0, -1);
rht = (1, 0, 0);
up = (0, 1, 0);
Given that your x and y, taken from the mouse positions you mentioned, are small enough, you can take them directly as yaw and pitch rotations respectively. With the yaw value, rotate the rht and fwd vectors around the up vector, then rotate the fwd vector around rht by the pitch value. This way you'll have a new forward direction for your camera (the fwd vector), from which you can derive a new look-at point (L = P + fwd in your case).
You have to remember to restrict the pitch rotation so that the fwd and up vectors never become parallel. You can prevent that by recreating the up vector every time you do a pitch rotation - simply take the cross product of the rht and fwd vectors. A side note, though: this way up will not always be (0,1,0).
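A minimal sketch of that update step, using Rodrigues' rotation formula to rotate a vector about an arbitrary unit axis (the Vec3 type and helpers are illustrative, not from any particular library):
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float dot(Vec3 a, Vec3 b)      { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Rotate v by 'angle' radians around the unit axis (Rodrigues' formula).
Vec3 rotate(Vec3 v, Vec3 axis, float angle) {
    float c = std::cos(angle), s = std::sin(angle);
    return v * c + cross(axis, v) * s + axis * (dot(axis, v) * (1.0f - c));
}

Vec3 fwd = {0, 0, -1};
Vec3 rht = {1, 0, 0};
Vec3 up  = {0, 1, 0};

void applyMouseDelta(float yaw, float pitch) {
    // Yaw: rotate fwd and rht about the current up vector.
    fwd = rotate(fwd, up, yaw);
    rht = rotate(rht, up, yaw);
    // Pitch: rotate fwd about the (rotated) right vector.
    fwd = rotate(fwd, rht, pitch);
    // Recreate up so the basis stays orthogonal; as noted above,
    // it will no longer always be (0,1,0).
    up = cross(rht, fwd);
}
// The new look-at point is then L = P + fwd, which goes into gluLookAt().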

Compute an RPY (roll, pitch, yaw) from a 3D point on a sphere

I need a method to find a set of homogeneous transformation matrices that describe positions and orientations on a sphere.
The idea is that I have an object in the center of this sphere, which has a radius of dz. Since I know the 3D coordinates of the object, I know all the 3D coordinates of the sphere. Is it possible to determine the RPY of any point on the sphere such that the point always points toward the object in the center?
illustration:
At the origin of this sphere we have an object. The radius of the sphere is dz.
The red dot is a point on the sphere, with the vector from this point toward the object at the origin.
The position should be relatively easy to extract, as a sphere can be described by a function, but how do I determine the vector, or rotation matrix, such that it points toward the origin?
You could, using the center of the sphere as the origin, compute the unit vector along the line from the origin to the point on the surface of the sphere, and then multiply that unit vector by -1 to obtain the vector pointing from the surface point back toward the center of the sphere.
Example:
// Direction from a point on the sphere's surface back toward its center.
vec pointToCenter(Point edge, Point origin) {
    vec toEdge = edge - origin;                // vector from the center to the surface point
    vec unitVec = toEdge / vecLength(toEdge);  // normalize
    return unitVec * -1;                       // flip it to point at the center
}
Once you have the vector, you can convert it to Euler angles for the RPY; an example is here.
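Since the linked example is not reproduced here, this is a hedged sketch of that conversion, assuming a Z-up convention with yaw measured in the XY plane from +X; a direction alone does not constrain roll, so it is set to 0:
#include <cmath>

struct RPY { double roll, pitch, yaw; };

// Convert a unit direction vector into roll/pitch/yaw (assumed Z-up convention).
RPY directionToRPY(double dirX, double dirY, double dirZ) {
    RPY out;
    out.roll  = 0.0;                        // roll is arbitrary for a pure direction
    out.yaw   = std::atan2(dirY, dirX);     // heading in the XY plane
    out.pitch = std::asin(dirZ);            // elevation; the vector must be normalized
    return out;
}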
Off the top of my head, I would suggest using quaternions to define the rotation of any point at the origin, relative to the point you want on the surface of the sphere:
Pick the desired point on the sphere's surface, say the north pole for example
Translate that point to the origin (assuming the radius of the sphere is known), using 3D Pythagoras: x_comp^2 + y_comp^2 + z_comp^2 = hypotenuse^2
Create a rotation that points an axis at the original surface point. This will just be a scaled multiple of the x, y and z components making up the hypotenuse. I would just make it into unit components. Capture the resulting axis and rotation in a quaternion (q, x, y, z), where x, y, z are the components of your axis and q is the rotation about that axis. Hard-code q to one. You want to use quaternions because they will make your resulting rotation matrices easier to work with.
Translate the point back to the sphere's surface and negate the values of the components of your axis, to get (q, -x, -y, -z).
This will give you your point on the surface of the sphere, with an axis pointing back to the origin. With the north pole as an example, you would have a quaternion of (1, 0, -1, 0) at point (0, radius_length, 0) on the sphere's surface. See quatrotation.c in my GitHub repository below for the resulting rotation matrix.
I don't have time to write code for this, but I wrote a little tutorial with compilable code examples in a GitHub repository a while back, which should get you started:
https://github.com/brownwa/opengl
Do the mat_rotation tutorial first, then the quaternions one. It's doable in a weekend, or a day if you're focused.
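For reference, the standard way to build a quaternion from a unit axis and an angle, and the rotation matrix it corresponds to, can be sketched like this (a hand-rolled snippet, not taken from the repository linked above):
#include <cmath>

struct Quat { double w, x, y, z; };

// Standard axis-angle quaternion: (ax, ay, az) must be a unit axis, angle in radians.
Quat fromAxisAngle(double ax, double ay, double az, double angle) {
    double s = std::sin(angle / 2.0);
    return { std::cos(angle / 2.0), ax * s, ay * s, az * s };
}

// Row-major 3x3 rotation matrix of a unit quaternion.
void toMatrix(const Quat& q, double m[9]) {
    m[0] = 1 - 2 * (q.y * q.y + q.z * q.z);
    m[1] = 2 * (q.x * q.y - q.z * q.w);
    m[2] = 2 * (q.x * q.z + q.y * q.w);
    m[3] = 2 * (q.x * q.y + q.z * q.w);
    m[4] = 1 - 2 * (q.x * q.x + q.z * q.z);
    m[5] = 2 * (q.y * q.z - q.x * q.w);
    m[6] = 2 * (q.x * q.z - q.y * q.w);
    m[7] = 2 * (q.y * q.z + q.x * q.w);
    m[8] = 1 - 2 * (q.x * q.x + q.y * q.y);
}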

Euler camera, rotation around x axis in camera local system

I have a camera class that holds its orientation as Euler angles and its position. Something like this:
float m_x;
float m_y;
float m_z;
Vector4 m_pos;
And I want to move this camera freely through space.
When the user moves the mouse up or down, the camera must rotate around the x axis in its own coordinate system. But I want to store only these three angles and the position, nothing more.
So the algorithm looks like this:
Find the camera's local coordinate axes (u, v, n)
Rotate around the u axis by angle alpha
Find the angles around (1, 0, 0), (0, 1, 0), (0, 0, 1) that correspond to the rotation by angle alpha around the u axis
Add them to m_x, m_y, m_z
The question is: how can I calculate rotation angles in the default coordinate system (I mean about (1, 0, 0), (0, 1, 0) and (0, 0, 1)) that correspond to rotation angles in the local camera coordinate system?
Or maybe a better solution exists for this problem?
I'm answering the concise question from your comment:
how to calculate a rotation in one coordinate system that corresponds to a rotation in another coordinate system?
You can transform rotations between coordinate systems by applying a suitable transformation matrix. This in turn can be computed from the Euler angles; see the Wikipedia section on conversion formulae.
Depending on your application, you might or might not have to take translations into account as well. As I understand your question, you can concentrate on the rotation part of each transformation.
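A hedged sketch of that idea: let B be the 3x3 matrix whose columns are the camera's local axes u, v, n expressed in world coordinates. A rotation given in the local frame maps to the world frame by a change of basis, and Euler angles can then be read off the resulting matrix (the extraction below uses the Z·Y·X convention and ignores gimbal lock; other conventions need other formulas, see the Wikipedia section mentioned above):
#include <cmath>

// 3x3 matrices, row-major: m[row * 3 + col].
void multiply(const double a[9], const double b[9], double out[9]) {
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            out[r * 3 + c] = a[r * 3 + 0] * b[c] + a[r * 3 + 1] * b[3 + c] + a[r * 3 + 2] * b[6 + c];
}

void transpose(const double m[9], double out[9]) {
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            out[r * 3 + c] = m[c * 3 + r];
}

// B has the local axes u, v, n as its columns (in world coordinates).
// A rotation Rlocal expressed in the local frame becomes B * Rlocal * B^T
// in the world frame (B^T equals B^-1 because B is orthonormal).
void localToWorld(const double B[9], const double Rlocal[9], double Rworld[9]) {
    double Bt[9], tmp[9];
    transpose(B, Bt);
    multiply(Rlocal, Bt, tmp);
    multiply(B, tmp, Rworld);
}

// Euler angles from a rotation matrix, Z*Y*X (yaw, pitch, roll) convention.
// Gimbal lock at pitch = +/-90 degrees is not handled here.
void toEulerZYX(const double R[9], double& rotX, double& rotY, double& rotZ) {
    rotY = std::asin(-R[6]);        // pitch, about Y
    rotX = std::atan2(R[7], R[8]);  // roll, about X
    rotZ = std::atan2(R[3], R[0]);  // yaw, about Z
}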

OpenGL simultaneous translate and rotate around local axis

I am working on an application that has similar functionality to MotionBuilder in its viewport interactions. It has three buttons:
Button 1 rotates the viewport around X and Y depending on X/Y mouse drags.
Button 2 translates the viewport around X and Y depending on X/Y mouse drags.
Button 3 "zooms" the viewport by translating along Z.
The code is simple:
glTranslatef(posX,posY,posZ);
glRotatef(rotX, 1, 0, 0);
glRotatef(rotY, 0, 1, 0);
Now, the problem is that if I translate first, the translation will be correct but the rotation then follows the world axes. I've also tried rotating first:
glRotatef(rotX, 1, 0, 0);
glRotatef(rotY, 0, 1, 0);
glTranslatef(posX,posY,posZ);
^ the rotation works, but the translation follows the world axes.
My question is: how can I do both, so that I get the translation from the first code snippet and the rotation from the second?
EDIT
I drew this rather crude image to illustrate what I mean by world and local rotations/translations. I need the camera to rotate and translate around its local axes.
http://i45.tinypic.com/2lnu3rs.jpg
Ok, the image makes things a bit clearer.
If you were just talking about an object, then your first code snippet would be fine, but for the camera it's quite different.
Since there's technically no 'camera' object in OpenGL, what you're doing when building a camera is just moving everything by the inverse of how you're moving the camera. I.e. you don't move the camera up by +1 on the Y axis, you just move the world by -1 on the Y axis, which achieves the same visual effect of having a camera.
Imagine you have a camera at position (Cx, Cy, Cz), and it has x/y rotation angles (CRx, CRy). If this were just a regular object, and not the camera, you would transform this by:
glTranslate(Cx, Cy, Cz);
glRotate(CRx, 1, 0, 0);
glRotate(CRy, 0, 1, 0);
But because this is the camera, we need to do the inverse of this operation instead (we just want to move the world by (-Cx, -Cy, -Cz) to emulate moving a 'camera'). To invert the matrix, you just have to do the opposite of each individual transform, and apply them in reverse order.
glRotate(-CRy, 0, 1, 0);
glRotate(-CRx, 1, 0, 0);
glTranslate(-Cx, -Cy, -Cz);
I think this will give you the kind of camera you're mentioning in your image.
I suggest that you bite the bullet and implement a camera class that stores the current state of the camera (position, view direction, up vector, right vector) and manipulate that state according to your control scheme. Then you can set up the view matrix using gluLookAt(), and the order of operations becomes unimportant. Here is an example:
Let camPos be the current position of the camera, camView its view direction, camUp the up vector and camRight the right vector.
To translate the camera by moveDelta, simply add moveDelta to camPos. Rotation is a bit more difficult, but if you understand quaternions you'll be able to understand it quickly.
First you need to create a quaternion for each of your two rotations. I assume that your horizontal rotation is always about the positive Z axis (which points at the "ceiling", if you will). Let hRot be the quaternion representing the horizontal rotation. I further assume that you want to rotate the camera about its right axis for your vertical rotation (creating a pitch effect). For this, you must apply the horizontal rotation to the camera's current right vector. The result is the rotation axis for your vertical rotation vRot. The total rotation quaternion is then rQuat = hRot * vRot. You then apply rQuat to the camera's view direction, up, and right vectors.
Quat hRot(rotX, 0, 0, 1); // creates a quaternion that rotates by angle rotX about the positive Z axis
Vec3f vAxis = hRot * camRight; // applies hRot to the camera's right vector
Quat vRot(rotY, vAxis); // creates a quaternion that rotates by angle rotY about the rotated camera's right vector
Quat rQuat = hRot * vRot; // creates the total rotation
camUp = rQuat * camUp;
camRight = rQuat * camRight;
camView = rQuat * camView;
Hope this helps you solve your problem.
glRotate always works around the origin. If you do:
glPushMatrix();
glTranslated(x,y,z);
glRotated(theta,1,0,0);
glTranslated(-x,-y,-z);
drawObject();
glPopMatrix();
Then the 'object' is rotated around (x,y,z) instead of the origin, because you moved (x,y,z) to the origin, did the rotation, and then pushed (x,y,z) back where it started.
However, I don't think that's going to be enough to get the effect you're describing. If you always want transformations to be done with respect to the current frame of reference, then you need to keep track of the transformation matrix yourself. This is why people use quaternion-based cameras.
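As a hedged sketch of "keeping track of the matrix yourself" in the fixed-function pipeline, one option is to store the accumulated matrix and post-multiply every new local-frame transform onto it, here (ab)using the modelview stack purely as a matrix multiplier (the function name is illustrative):
#include <GL/gl.h>

// cam holds the accumulated transform (column-major, as OpenGL expects).
float cam[16] = { 1,0,0,0,  0,1,0,0,  0,0,1,0,  0,0,0,1 };

// cam = cam * R(angle, axis): post-multiplying makes the rotation act in the
// current local frame; pre-multiplying would make it act in world space.
void rotateLocal(float angleDeg, float ax, float ay, float az) {
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadMatrixf(cam);
    glRotatef(angleDeg, ax, ay, az);          // glRotatef right-multiplies the current matrix
    glGetFloatv(GL_MODELVIEW_MATRIX, cam);
    glPopMatrix();
}
For an object you would then draw with glMultMatrixf(cam); for a camera you would apply the inverse of cam instead, as described in the answer above.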

OpenGL rotation?

I'm writing a simple application based on OpenGL and Qt. It creates a primitive (a triangle, for example) whose rotation the user can change using one of three spinboxes
(coordinates: x, y, z). The problem is the local coordinate system. It changes and works properly for the Y and Z axes but not for X. The local X axis always coincides with the global X axis. So when I rotate my figure by 90° around Y, the X and Z axes become parallel, because the local Z changed (as I expected) and the local X didn't. Here is a fragment of my paintGL() function:
glRotatef(rot[0], 1.0, 0, 0);
glRotatef(rot[1], 0, 1.0, 0);
glRotatef(rot[2], 0, 0, 1.0);
You should use a rotation matrix, which lets you specify the rotation axis and the angle of rotation:
http://en.wikipedia.org/wiki/Rotation_matrix#Rotation_matrix_from_axis_and_angle
Generally it is not a good idea to perform rotations with Euler angles: the next problem you will run into is the 180° flip in certain angle combinations. The problem is broadly known as gimbal lock.
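For reference, a hedged sketch of that axis-angle rotation matrix (the same formula the link describes); the current local X axis can be obtained by applying the orientation accumulated so far to (1, 0, 0) and then passed in here, or directly to glRotatef, which also accepts an arbitrary axis:
#include <cmath>

// Row-major 3x3 rotation matrix from a unit axis (ux, uy, uz) and an angle in radians.
void axisAngleMatrix(float ux, float uy, float uz, float angle, float m[9]) {
    float c = std::cos(angle), s = std::sin(angle), t = 1.0f - c;
    m[0] = t * ux * ux + c;       m[1] = t * ux * uy - s * uz;  m[2] = t * ux * uz + s * uy;
    m[3] = t * ux * uy + s * uz;  m[4] = t * uy * uy + c;       m[5] = t * uy * uz - s * ux;
    m[6] = t * ux * uz - s * uy;  m[7] = t * uy * uz + s * ux;  m[8] = t * uz * uz + c;
}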