OpenGL rotations and translations - opengl

I am building a camera class to look around a scene. At the moment I have 3 cubes just spread around to get a good impression of what is going on. I have set the scroll wheel on my mouse to give me translation along the z-axis, and when I move the mouse left or right I detect this movement and rotate around the y-axis. This is just to see what happens and play around a bit. So I succeeded in making the camera rotate by rotating the cubes around the origin, but after I rotate by some angle, let's say 90 degrees, and then try to translate along the z-axis, to my surprise I find that my cubes are now going from left to right and not towards me or away from me. So what is going on here? It seems that the z-axis is rotated as well, and I guess the same goes for the x-axis. So nothing actually moved with regard to the origin; the whole coordinate system with all the objects was just rotated. Can anyone help me here, what is going on? How does the coordinate system work in OpenGL?

You are most likely confusing local and global rotations. The usual cheap remedy is to change (reverse) the order of some of your transformations. However, doing this blindly is trial & error and can be frustrating. It's better to understand the math first...
The old OpenGL API uses the MVP matrix, which is:
MVP = Model * View * Projection
where Model and View are already multiplied together. What you have is most likely the same. Now the problem is that Model is a direct matrix, but View is an inverse one.
So if you have some transform matrix representing your camera, then in order to use it to transform the scene back you need its inverse...
MVP = Model * Inverse(Camera) * Projection
Then you can use the same order of transformations for both Model and Camera, and also use their geometric properties like basis vectors etc. ... then stuff like camera-local movement or a follow camera becomes easy. Beware that some tutorials use the matrix transpose instead of a real matrix inverse. That is correct only if the matrix contains nothing but unit (or equally sized) orthogonal basis vectors and no offset, so no scale, skew, offset or projection; just rotation and equal scale along all axes!!!
That means that when you transform Model and View in the same way, the visible result is the opposite. So in old code it is usual to have something like this:
// view part of the matrix
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(view_c, 0, 0, 1); // ugly Euler angles
glRotatef(view_b, 0, 1, 0); // ugly Euler angles
glRotatef(view_a, 1, 0, 0); // ugly Euler angles
glTranslatef(view_pos.x, view_pos.y, view_pos.z); // set camera position
// model part of the matrix
for (int i = 0; i < objs; i++)
{
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glTranslatef(obj_pos[i].x, obj_pos[i].y, obj_pos[i].z); // set object position
    glRotatef(obj_a[i], 1, 0, 0); // ugly Euler angles
    glRotatef(obj_b[i], 0, 1, 0); // ugly Euler angles
    glRotatef(obj_c[i], 0, 0, 1); // ugly Euler angles
    // here render obj[i]
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
}
Note the order of the transforms is opposite (I just wrote this here in the editor, so it's not tested and the order may be opposite to the native GL notation ... I do not use Euler angles). The order must match your convention... To learn more about these (including examples) without using useless Euler angles see:
Understanding 4x4 homogenous transform matrices
Here is a 4D version of what your 3D camera class should look like (just shrink the matrices to 4x4 and keep just 3 rotations instead of 6):
reper4D
Pay attention to the difference between the local lrot_?? and global grot_?? functions. Also note that rotations are defined by a plane, not an axis vector, as an axis vector is just a human abstraction that does not really work except in 2D and 3D ... planes work from 2D up to ND.
PS. It is a good idea to keep the distortions (scale, skew) separated from the model and to keep the transform matrices representing coordinate systems orthonormal. It will ease up a lot of things later on once you get to do advanced math on them. Resulting in:
MVP = Model * Model_distortion * Inverse(Camera) * Projection
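If you are on the modern API (or just want to double-check the math), here is a minimal sketch of the same formula using GLM - an assumption on my part, any matrix library will do. GLM works with column vectors, so the multiplication is written in the reverse order of the row-style formula above:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// the camera is kept as a normal "direct" transform: where it sits and how it is oriented
glm::mat4 camera = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, 10.0f))
                 * glm::rotate(glm::mat4(1.0f), glm::radians(30.0f), glm::vec3(0, 1, 0));

glm::mat4 model      = glm::translate(glm::mat4(1.0f), glm::vec3(2.0f, 0.0f, 0.0f));
glm::mat4 projection = glm::perspective(glm::radians(60.0f), 4.0f / 3.0f, 0.1f, 100.0f);

// the View matrix is the inverse of the camera's own transform
glm::mat4 mvp = projection * glm::inverse(camera) * model;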

Related

Correct way to translate in a 3D world after rotating?

I have a basic cube generated in my 3D world. I can rotate correctly around the camera, but when I translate after rotating, the translations are not correct.
For example, if I rotate 90 degrees and translate along the Z axis, it moves as if I were translating along the X axis.
glLoadIdentity();
glRotatef(angle,0,1,0); //Rotate around the camera.
glTranslatef(movX,movY,movZ); //Translate after rotating around the camera.
glCallList(cubes[0]);
I need some help with this. Also, I tried translating before rotating, but the rotation is not at the camera. It is at the edge of the cube.
Keep in mind that in OpenGL the transformation is applied to the camera, not to the objects rendered, so you observe the inverse of the transformation you expected.
Also in OpenGL the Y and Z axes are flipped (Y is vertical), so you observe a horizontal translation instead of a vertical one.
Also, because the object is rotated through 90 degrees about Y, the X and Z axes replace each other (one of them is reversed).
willywonkadailyblah's answer is half correct. Because you are using the old OpenGL, you're using the old matrix stack. You are modifying the modelview matrix when you're doing your glRotatef and glTranslatef calls. The modelview matrix is actually the model's matrix and the camera's view matrix precombined (already multiplied together). These matrices are what determine where your object is in 3D space and where your viewing position/direction of the world is. So you can think of your calls as moving the camera, but it's probably easier to think of them as moving and rotating the world.
These rotate and translation calls are linear transformations. This has a precise definition, but for our purposes it means that you can represent the transformation as a matrix, and you multiply it with a point's coordinates to apply the transformation to that point. Now matrix multiplication is not commutative, meaning AB != BA. All this to say that rotating then translating is different from translating then rotating, which I think you know. But when you translate, rotate, and translate again, it might be a little more difficult to follow what you're actually doing; worse still if you throw some scaling in there. So I would suggest learning how linear transformations work and maintaining your own matrices for the objects and the camera if you're serious about learning OpenGL.
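As an illustration of that non-commutativity, here is a minimal, untested fixed-function sketch; drawCube() is a hypothetical helper that draws a cube at the local origin:

// rotate, then translate: the translation happens along the already
// rotated axes, so the cube is swung around the world origin
glLoadIdentity();
glRotatef(90.0f, 0.0f, 1.0f, 0.0f);
glTranslatef(0.0f, 0.0f, -5.0f);
drawCube();

// translate, then rotate: the cube stays 5 units down the -Z axis
// and merely spins in place around its own centre
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -5.0f);
glRotatef(90.0f, 0.0f, 1.0f, 0.0f);
drawCube();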
learnopengl.org is an excellent website, but it teaches Modern OpenGL, not what you're currently using. Still, the lessons on transformations and on coordinate systems are probably generally helpful, even without exact code for you to follow.

Rotating object along all 3 axes to map to the player's camera LookAt vector

I have a simple 3D LookAt vector, and I wish to rotate the player model (a simple cube) to show where the player/cube is looking at.
For sideways camera movement I've managed to figure it out and do the following:
glTranslatef(position.x, position.y, position.z);
glRotatef(atan2(lookAt.z, lookAt.x) * 180 / PI, 0, 1, 0);
Now I know that to get up-down camera movement to map to the rendered cube model, I need to rotate the cube around its x and z axes as well, but I can't seem to figure out what formula to use for those two.
OpenGL will rotate the whole coordinate system (the whole space, not only the cube), so after the first rotation you only need to rotate around the z axis.
// first rotation
glRotatef(-atan2(lookAt.z, lookAt.x) * 180 / PI, 0, 1, 0);
// second rotation
float d = sqrt(pow(lookAt.x,2) + pow(lookAt.z,2));
float pitch = atan2(lookAt.y, d);
glRotatef(pitch * 180 / PI, 0, 0, 1);
First and second rotation (shown in the original figures): I assume your model is looking along the x axis (the red arrow in the figure). I also assume lookAt is given relative to the position of the model.
If you're familiar with matrix math, matrices are an easier way to think about it. If you're not familiar with matrices, this series explains how to use them to solve common game development problems: https://www.youtube.com/playlist?list=PLW3Zl3wyJwWNQjMz941uyOIq3Nw6bcDYC Getting good with matrices is a good idea if you want to be a 3D game programmer.
For your problem, you want to make a translation/rotation matrix which will transform the box to the proper place for you. You can make a translation matrix and a rotation matrix individually, and then at the end take the product of the two. I'll try to break that down.
The translation matrix is simple: if your position is p = (px, py, pz), then your matrix will be

T = | 1 0 0 px |
    | 0 1 0 py |
    | 0 0 1 pz |
    | 0 0 0  1 |

To construct a rotation matrix, you need to rotate the standard basis vectors the way you want. Then when you create a matrix from those rotated basis vectors, the matrix will rotate other vectors in the same way. As an example of that, take the standard basis vectors:

x = (1, 0, 0),  y = (0, 1, 0),  z = (0, 0, 1)

Now I'm going to rotate x and z around the y axis by 90 degrees clockwise (viewed from above):

x -> (0, 0, 1),  y -> (0, 1, 0),  z -> (-1, 0, 0)

Now put them into a matrix, one rotated basis vector per column:

R = | 0 0 -1 |
    | 0 1  0 |
    | 1 0  0 |

and there you have it: R is a matrix that rotates things around the y axis by 90 degrees.

In your case you want to rotate stuff such that it faces a vector that you provide. That makes things easy; we can calculate our basis vectors from that vector. If your vector is lookAt, then the first basis vector is simply

f = lookAt / |lookAt|

and we can solve for the other two basis vectors using cross products. You know that the character won't ever roll their view (right?) so we can use the global up vector as well. I'll call the global up vector u. In your case you're using y as the "up" dimension, so the global up vector will be

u = (0, 1, 0)

Then:

s  = f x u
s  = s / |s|
u' = s x f

In the first line you do a cross product between the view vector and the up vector to get a vector orthogonal to both - this will serve as the third basis vector after it is normalized, which is the second line. In the third line another cross product generates the second basis vector. These three vectors represent what happens when the standard basis vectors are rotated the way you want them to be. Use them as the columns in a matrix like so:

R = | f.x  u'.x  s.x |
    | f.y  u'.y  s.y |
    | f.z  u'.z  s.z |

Now the last step in the math is to make a final matrix that will do both translation and rotation, and this step is easy (pad R out to 4x4 with a bottom-right 1):

M = T * R
Then load that matrix into OpenGL with glLoadMatrixf (note that OpenGL expects the 16 floats in column-major order, i.e. one column after another):
glLoadMatrixf(M); // M being a flat GLfloat[16]
All of this gets explained in the video series I linked as well :)
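Putting the math above into code, here is a minimal, untested plain-C sketch for the fixed-function pipeline (the function name and parameters are hypothetical; lookAt is assumed to be relative to the model position and not parallel to the up vector):

#include <math.h>
#include <GL/gl.h>

static void loadModelMatrix(float px, float py, float pz,   /* model position */
                            float lx, float ly, float lz)   /* lookAt, relative to position */
{
    /* f = normalized lookAt (first basis vector; the model's local +x faces it) */
    float len = sqrtf(lx*lx + ly*ly + lz*lz);
    float fx = lx/len, fy = ly/len, fz = lz/len;

    /* s = f x u with global up u = (0,1,0), then normalize (third basis vector) */
    float sx = -fz, sy = 0.0f, sz = fx;
    float slen = sqrtf(sx*sx + sy*sy + sz*sz);
    sx /= slen; sy /= slen; sz /= slen;

    /* u' = s x f (second basis vector) */
    float ux = sy*fz - sz*fy;
    float uy = sz*fx - sx*fz;
    float uz = sx*fy - sy*fx;

    /* column-major 4x4: columns are f, u', s and the translation (M = T * R) */
    GLfloat M[16] = {
        fx, fy, fz, 0.0f,
        ux, uy, uz, 0.0f,
        sx, sy, sz, 0.0f,
        px, py, pz, 1.0f
    };
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(M); /* or glMultMatrixf(M) to combine with an existing view */
}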

Rotation and translation of the Earth opengl c++

I am trying to get a sphere to rotate around another simulating the orbit of the Earth.
I am able to get the Earth to orbit around the sun; however, I can't get it to rotate around itself.
This is the code I have so far:
//sun
glMaterialAmbientAndDiffuse(GLMaterialEnums::FRONT,GLColor<GLfloat,4>(1.5f,1.0f,0.0f));
glTranslate(0.0f, 0.0f, 0.0f);
glRotate(15.0, 1.0, 0.0, 0.0);
drawEllipsoid(10.0, 1.0, 4, 4);
glPushMatrix();
//Earth
glMaterialAmbientAndDiffuse(GLMaterialEnums::FRONT,GLColor<GLfloat,4>(0.5f,10.5f,10.5f));
glRotate(orbit,Vrui::Vector(0,0,1));
glTranslate(105.0, 0.0, 0.0);
drawPlanetGrid(5, 1, 4, 4, 1);
glPopMatrix();
orbit += .1;
if (orbit > 360)
{
orbit = 0;
}
Could anyone help me move in the right direction? I also need to know how to get the Earth to orbit around the sun at a tilted angle.
Basically, you need to manage some model matrices. The sun's model matrix (if it is centered at (0,0,0)) has just a rotational part. The Earth, rotating around the sun, needs a model matrix which is first rotated and then translated to place it in its orbit around the sun. So when calculating a new frame you increase your rotation parameter, create the rotation and then apply the translation.
If you want to add a moon, you need another model matrix, which is accumulated. That is, the moon needs a separate rotation and translation (like the Earth), but you also have to account for the transformation of the Earth.
Make sure that you understand what a transformation matrix does. In this case the transformation matrix is just a coordinate transformation. You have your sun, Earth and moon each in a local frame; the model matrices achieve the transformation from the local coordinate system to the world coordinate system. The view matrix transforms world coordinates to eye coordinates, and then there is only the projection left for you.
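To make that concrete, here is a minimal, untested fixed-function sketch of the hierarchy using plain glRotatef/glTranslatef; drawSun(), drawEarth(), sunSpin and earthSpin are hypothetical stand-ins for your own drawing code and angles:

glPushMatrix();                             // sun's frame
    glRotatef(sunSpin, 0.0f, 0.0f, 1.0f);   // the sun's own rotation
    drawSun();
glPopMatrix();

glPushMatrix();                             // Earth's frame
    glRotatef(orbit, 0.0f, 0.0f, 1.0f);     // revolve around the sun
    glTranslatef(105.0f, 0.0f, 0.0f);       // move out to the orbit radius
    glRotatef(earthSpin, 0.0f, 0.0f, 1.0f); // spin around the Earth's own axis
    drawEarth();
    // a moon would nest here: another push / rotate / translate / pop,
    // so it accumulates (inherits) the Earth's transform
glPopMatrix();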
To solve this, you need to understand the idea of co-ordinate systems and how to use them within OpenGL.
A co-ordinate system is just a set of points that share the same XYZ axes. In each system the XYZ axes do not necessarily point in the same direction, so moving along positive X in one system could be moving along negative Y in another. To convert points from one system to another you use a transformation matrix.
A scene is made up of several co-ordinate systems:-
World space
Camera space (or view space)
Object space
Model space
So, your model (the Earth, say) has a transformation from its model space to object space - this is the rotation of the Earth around its vertical axis. Then it has a transformation from object space to world space - this is the translation around the sun plus the tilt. The final transformation is from world space to camera space.
So, you need three matrices to put your Earth model into the right place on screen. This may seem like a lot of processing, but the nice thing about these matrices is that they can be multiplied together to form a single object->camera space matrix.
Once you've set up the scene using the various co-ordinate systems and transformations, it should work.
You may want to work with cubes rather than spheres to start with as it's easier to follow what is happening to the vertices as they're being transformed.
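As a sketch of that three-matrix chain (using GLM purely for illustration; earthSpin, orbit, camPos and camTarget are hypothetical values you would supply):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// model space -> object space: the Earth spinning around its own axis
glm::mat4 modelToObject = glm::rotate(glm::mat4(1.0f), glm::radians(earthSpin), glm::vec3(0, 0, 1));

// object space -> world space: swing around the sun and move out to the orbit radius
glm::mat4 objectToWorld = glm::rotate(glm::mat4(1.0f), glm::radians(orbit), glm::vec3(0, 0, 1))
                        * glm::translate(glm::mat4(1.0f), glm::vec3(105.0f, 0.0f, 0.0f));

// world space -> camera space
glm::mat4 worldToCamera = glm::lookAt(camPos, camTarget, glm::vec3(0, 1, 0));

// the three multiply together into a single object->camera matrix
glm::mat4 objectToCamera = worldToCamera * objectToWorld * modelToObject;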

Relative rotation of OpenGL Camera

I am currently struggling to find a formula to rotate my OpenGL "camera" (I tried to do it via a scene rotation, but I have the same issue).
Basically my camera is at a given position, looking at a given point (both passed to gluLookAt), and I would like to rotate the camera upwards, for example, while still looking at the same point.
What would be the right process?
What input data should I use to decide the amount of movement? The change in the 2D mouse coordinates, or the change in the 3D unprojected mouse coordinates?
The trick is to see that a camera rotation is the same as a scene rotation if you do it at the correct position: move the camera into the point around which you want to rotate, then rotate the camera, then move back out by the same distance you moved in.
The amount by which you rotate depends on your application. Take Google Earth as an example: if you are close to the surface the rotation is small (in absolute terms); if you are far from the surface it is large.
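A minimal, untested fixed-function sketch of that "move in, rotate, move back out" idea, building the view as an orbit around a pivot point (pivot, dist, yaw and pitch are hypothetical variables):

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -dist);            // 3. move back out by the same distance
glRotatef(pitch, 1.0f, 0.0f, 0.0f);         // 2. rotate upwards around the pivot
glRotatef(yaw,   0.0f, 1.0f, 0.0f);         //    (and sideways, if you like)
glTranslatef(-pivot.x, -pivot.y, -pivot.z); // 1. move the pivot to the origin first
// draw the scene; the camera now orbits the pivot while still looking at it

Note that in fixed-function GL the call listed last is applied to the vertices first, which is why the "move in" step appears at the bottom.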
If you're creating an orbiting camera (orbiting around the LookAt point) for OpenGL, I suggest you build it from these data:
LookAtPosition - 3D vector
CamUp - 3D unit vector
RelativeCamPosition - 3D unit vector
CamDistance - decimal number
LookAtPosition is the point you'll be looking at. CamUp is the vector that points up from the camera. It's best to initialize the camera with no rotation, so that CamUp = [0,1,0]. Note that it's a unit vector, so its magnitude/size/length is always 1. RelativeCamPosition is again a unit vector. You get it by taking the LookAt-to-Camera vector and dividing it by its magnitude, which you save in CamDistance. In the initial state it might look like this:
LookAtPosition = [0,0,0]
CamUp = [0,1,0]
RelativeCamPosition = [1,0,0]
CamDistance = 10
You can now get the camera position with
CamPosition = LookAtPosition + RelativeCamPosition * CamDistance
But you need to rotate that camera around, right? Well, there's a reason for unit vectors - they are easy to use in calculations. I believe you use angles for rotating, so you only need sine and cosine. The Rotate function might look like this:
Rotate(angleX, angleY)
{
    RelativeCamPosition.x = sin(angleX) * cos(angleY);
    RelativeCamPosition.z = cos(angleX) * cos(angleY);
    RelativeCamPosition.y = sin(angleY);
}
where angleX and angleY are absolute (NOT relative) rotations in the horizontal and vertical directions. You should always use absolute rotations because floating point errors can accumulate when adding relative ones. Anyway, I just made those calculations on a scrap of paper, so I hope they're all right.
Edit: I've just noticed that this will only work if your initial state is as I wrote it, with RelativeCamPosition = [1,0,0]. However, it shouldn't be hard to adjust the formulas so they work for an arbitrary initial state.
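A minimal, untested sketch tying this together: recompute the camera position from the absolute angles and hand it to gluLookAt, which also needs an up vector - here the initial CamUp = [0,1,0] (the member names are hypothetical):

Rotate(angleX, angleY); // updates RelativeCamPosition as above

float camX = LookAtPosition.x + RelativeCamPosition.x * CamDistance;
float camY = LookAtPosition.y + RelativeCamPosition.y * CamDistance;
float camZ = LookAtPosition.z + RelativeCamPosition.z * CamDistance;

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(camX, camY, camZ,
          LookAtPosition.x, LookAtPosition.y, LookAtPosition.z,
          CamUp.x, CamUp.y, CamUp.z);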

How does zooming, panning and rotating work?

Using OpenGL I'm attempting to draw a primitive map of my campus.
Can anyone explain to me how panning, zooming and rotating are usually implemented?
For example, with panning and zooming, is that simply me adjusting my viewport? So I plot and draw all my lines that compose my map, and then as the user clicks and drags it adjusts my viewport?
For panning, does it shift the x/y values of my viewport and for zooming does it increase/decrease my viewport by some amount? What about for rotation?
For rotation, do I have to do affine transforms for each polyline that represents my campus map? Won't this be expensive to do on the fly on a decent sized map?
Or, is the viewport left the same and panning/zooming/rotation is done in some otherway?
For example, if you go to this link you'll see him describe panning and zooming exactly how I have above, by modifying the viewport.
Is this not correct?
They're achieved by applying a series of glTranslate and glRotate commands (representing the camera position and orientation) before drawing the scene. (Technically, you're rotating the whole scene!)
There are utility functions like gluLookAt which sort of abstract some of the details of this.
To simplify things, assume you have two vectors representing your camera: position and direction.
gluLookAt takes the position, destination, and up vector.
If you implement a vector class, destination = position + direction should give you the destination point.
Again, to make things simple, you can assume the up vector is always (0,1,0).
Then, before rendering anything in your scene, load the identity matrix and call gluLookAt
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt( position.x, position.y, position.z, destination.x, destination.y, destination.z, 0, 1, 0 );
Then start drawing your objects
You can let the user pan by changing the position slightly to the right or to the left. Rotation is a bit more complicated, as you have to rotate the direction vector - assuming that what you're rotating is the camera, not some object in the scene.
One problem is: if you only have a "forward" direction vector, how do you move sideways? Where are right and left?
My approach in this case is to just take the cross product of "direction" and (0,1,0).
Now you can move the camera to the left and to the right using something like:
position = position + right * amount; //amount < 0 moves to the left
You can move forward using the "direction vector", but IMO it's better to restrict movement to a horizontal plane, so get the forward vector the same way we got the right vector:
forward = cross( up, right )
To be honest, this is somewhat of a hackish approach.
The proper approach is to use a more "sophisticated" data structure to represent the "orientation" of the camera, not just the forward direction. However, since you're just starting out, it's good to take things one step at a time.
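A minimal, untested sketch of the cross-product idea above (vec3, cross() and normalize() are hypothetical little helpers; position and direction are the camera vectors from before):

vec3 up      = {0.0f, 1.0f, 0.0f};
vec3 right   = normalize(cross(direction, up)); // side vector for strafing
vec3 forward = normalize(cross(up, right));     // forward kept in the horizontal plane

position = position + right   * strafeAmount;   // strafeAmount < 0 moves to the left
position = position + forward * walkAmount;

vec3 destination = position + direction;
gluLookAt(position.x, position.y, position.z,
          destination.x, destination.y, destination.z,
          up.x, up.y, up.z);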
All of these "actions" can be achieved using model-view matrix transformation functions. You should read about glTranslatef (panning), glScalef (zoom) and glRotatef (rotation). You should also read some basic tutorial about OpenGL; you might find this link useful.
Generally there are three steps that are applied whenever you reference any point in 3D space within OpenGL.
Given a Local point
Local -> World Transform
World -> Camera Transform
Camera -> Screen Transform (usually a projection; depends on whether you're using perspective or orthographic)
Each of these transforms takes your 3D point and multiplies it by a matrix.
When you rotate the camera, it is generally the world -> camera transform that changes: the transform matrix gets multiplied by your rotation/pan/zoom affine transformation. Since all of your points are re-rendered each frame, the new matrix gets applied to your points, and that gives the appearance of a rotation.
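As a small illustration of those three steps (a sketch only; modelMatrix, viewMatrix and projectionMatrix are hypothetical GLM matrices you would maintain yourself):

#include <glm/glm.hpp>

glm::vec4 local(1.0f, 0.0f, 0.0f, 1.0f);    // a Local point (w = 1 for a position)

glm::vec4 world = modelMatrix      * local; // Local  -> World
glm::vec4 eye   = viewMatrix       * world; // World  -> Camera
glm::vec4 clip  = projectionMatrix * eye;   // Camera -> Screen (projection)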