How does zooming, panning and rotating work in OpenGL?

Using OpenGL I'm attempting to draw a primitive map of my campus.
Can anyone explain to me how panning, zooming and rotating are usually implemented?
For example, with panning and zooming, is that simply a matter of adjusting my viewport? So I plot and draw all the lines that compose my map, and then as the user clicks and drags, it adjusts my viewport?
For panning, does it shift the x/y values of my viewport, and for zooming, does it increase/decrease my viewport by some amount? What about rotation?
For rotation, do I have to apply affine transforms to each polyline that represents my campus map? Won't this be expensive to do on the fly on a decent-sized map?
Or is the viewport left the same, and panning/zooming/rotation done in some other way?
For example, if you go to this link you'll see him describe panning and zooming exactly as I have above, by modifying the viewport.
Is this not correct?

They're achieved by applying a series of glTranslatef and glRotatef calls (which represent the camera's position and orientation) before drawing the scene. (Technically, you're rotating the whole scene!)
There are utility functions like gluLookAt which abstract away some of these details.
To simplify things, assume you have two vectors representing your camera: position and direction.
gluLookAt takes the position, destination, and up vector.
If you implement a vector class, destination = position + direction should give you the destination point.
Again, to keep things simple, you can assume the up vector is always (0, 1, 0).
Then, before rendering anything in your scene, load the identity matrix and call gluLookAt:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt( position.x, position.y, position.z, destination.x, destination.y, destination.z, 0, 1, 0 );
Then start drawing your objects.
You can let the user pan by changing the position slightly to the right or to the left. Rotation is a bit more complicated, as you have to rotate the direction vector (assuming that what you're rotating is the camera, not some object in the scene).
One problem: if you only have a "forward" direction vector, how do you move sideways? Where are right and left?
My approach in this case is to just take the cross product of "direction" and (0,1,0).
Now you can move the camera to the left and to the right using something like:
position = position + right * amount; //amount < 0 moves to the left
You can move forward using the "direction vector", but IMO it's better to restrict movement to a horizontal plane, so get the forward vector the same way we got the right vector:
forward = cross( up, right )
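Putting those pieces together, here's a minimal sketch; the vec3 type and its operators are illustrative, not from any particular library:
// Minimal vector type for the sketch (illustrative, not a real library).
struct vec3 { double x, y, z; };

vec3 operator+(vec3 a, vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
vec3 operator*(vec3 a, double s) { return { a.x * s, a.y * s, a.z * s }; }
vec3 cross(vec3 a, vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Strafe and walk: position/direction are the camera vectors from above.
// (Normalize right/forward if direction isn't unit length.)
void moveCamera(vec3 &position, const vec3 &direction, double side, double ahead) {
    vec3 up = { 0.0, 1.0, 0.0 };
    vec3 right   = cross(direction, up); // side < 0 moves to the left
    vec3 forward = cross(up, right);     // movement stays in the horizontal plane
    position = position + right * side + forward * ahead;
}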
To be honest, this is somewhat of a hackish approach.
The proper approach is to use a more "sophisticated" data structure to represent the "orientation" of the camera, not just the forward direction. However, since you're just starting out, it's good to take things one step at a time.

All of these "actions" can be achieved using modelview matrix transformation functions. You should read about glTranslatef (panning), glScalef (zooming), and glRotatef (rotation). You should also read a basic tutorial about OpenGL; you might find this link useful.
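For the 2D map case in the question, a rough sketch of how those three calls combine each frame (panX, panY, zoom, angle are hypothetical state variables driven by mouse input, and drawMap is a placeholder):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(panX, panY, 0.0f);     // panning
glScalef(zoom, zoom, 1.0f);         // zooming (zoom > 1 magnifies)
glRotatef(angle, 0.0f, 0.0f, 1.0f); // rotation about the screen's z axis
drawMap();                          // then draw the map as usual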

Generally there are three steps applied whenever you reference any point in 3D space within OpenGL.
Given a local point:
Local -> World Transform
World -> Camera Transform
Camera -> Screen Transform (usually a projection; it depends on whether you're using perspective or orthographic)
Each of these transforms takes your 3D point and multiplies it by a matrix.
When you rotate the camera, you are generally changing the world -> camera transform by multiplying the transform matrix by your rotation/pan/zoom affine transformation. Since all of your points are re-rendered each frame, the new matrix gets applied to your points, and it gives the appearance of a rotation.
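In fixed-function OpenGL those three steps map onto the two matrix stacks; a hedged sketch, with the eye/center/object coordinates, aspect, and drawObject as placeholders:
// Camera -> Screen: the projection transform (perspective here)
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, aspect, 0.1, 100.0);

// World -> Camera and Local -> World share the MODELVIEW stack:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, 0, 1, 0); // world -> camera
glTranslatef(objX, objY, objZ); // local -> world, per object
glRotatef(objAngle, 0, 1, 0);
drawObject();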

Why does the camera face the negative end of the z-axis by default?

I am learning OpenGL from Scratchapixel, and here is a quote from the perspective projection matrix chapter:
Cameras point along the world coordinate system negative z-axis so that when a point is converted from world space to camera space (and then later from camera space to screen space), if the point is to left of the world coordinate system y-axis, it will also map to the left of the camera coordinate system y-axis. In other words, we need the x-axis of the camera coordinate system to point to the right when the world coordinate system x-axis also points to the right; and the only way you can get that configuration, is by having camera looking down the negative z-axis.
I think it has something to do with the mirror image? But this explanation just confused me... Why does the camera's coordinate system by default not coincide with the world coordinate system (like every other 3D object we create in OpenGL)? I mean, we will need to transform the camera with a transformation matrix anyway (whatever we want with the negative-z setup, we can simulate it)... so why bother?
It is totally arbitrary which direction you pick for z.
But your pick has a lot of deep impact.
One reason to stick with the GL -z convention is that the culling of faces will match GL constant names like GL_FRONT. I'd advise you to just roll with the tutorial.
Flipping the sign on just one axis also flips the "parity", so a front face becomes a back face and a znear depth test becomes zfar. So it is wise to pick a convention early on and stick with it.
By default, yes, it's a "right-handed" system (used in physics, for example). Your thumb is the X-axis, your index finger the Y-axis, and when you point them in the right directions, the Z-axis (middle finger) points toward you. Why was the Z-axis chosen to point into/out of the screen? Because then the X- and Y-axes lie on the screen, like in 2D graphics.
But in reality, OpenGL has no preferred coordinate system. You can tweak it as you like. For example, if you are making a maze game, you might want Y to go into/out of the screen (and Z upwards), so that you can move nicely in the XY plane. You modify your view/perspective matrices, and you get it.
What is this "camera" you're talking about? In OpenGL there is no such thing as a "camera". All you've got is a two stage transformation chain:
vertex position → viewspace position (by modelview transform)
viewspace position → clipspace position (by projection transform)
To see why by default OpenGL is "looking down" -z, we have to look at what happens if both transformation steps do "nothing", i.e. are the full identity transform.
In that case all vertex positions passed to OpenGL are unchanged: X maps to window width, Y maps to window height. All calculations in OpenGL by default (you can change this) have been chosen to adhere to the rules of a right-handed coordinate system, so if +X points right and +Y points up, then +Z must point "out of the screen" for the right-hand rule to be consistent.
And that's all there is to it. No camera. Just linear transformations and the choice of using right-handed coordinates.
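A minimal way to convince yourself of this (immediate mode, both matrices left at identity):
// With both transforms at identity, vertex coordinates are already
// clip coordinates: x and y in [-1, 1] map straight onto the window.
glMatrixMode(GL_PROJECTION); glLoadIdentity();
glMatrixMode(GL_MODELVIEW);  glLoadIdentity();
glBegin(GL_TRIANGLES);
    glVertex3f(-0.5f, -0.5f, 0.0f); // +X is right, +Y is up,
    glVertex3f( 0.5f, -0.5f, 0.0f); // so +Z points out of the screen
    glVertex3f( 0.0f,  0.5f, 0.0f); // (right-handed coordinates)
glEnd();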

Names for camera moves

I've got a 3D scene and want to offer an API to control the camera. The camera is currently described by its own position, a look-at point in the scene somewhere along the z axis of the camera frame of reference, an “up” vector describing the y axis of the camera frame of reference, and a field-of-view angle. I'd like to provide at least the following operations:
Two-dimensional operations (mouse drag or arrow keys)
Keep look-at point and rotate camera around that. This can also feel like rotating the object, with the look-at point describing its centre. I think that at some point I've heard this described as the camera “orbiting” around the centre of the scene.
Keep camera position, and rotate camera around that point. Colloquially I'd call this “looking around”. With a cinema camera this might perhaps be called pan and tilt, but in 3d modelling “panning” is usually something else, see below. Using aircraft principal directions, this would be a pitch-and-yaw movement of the camera.
Move camera position and look-at point in parallel. This can also feel like translating the object parallel to the view plane. As far as I know this is usually called “panning” in 3d modelling contexts.
One-dimensional operations (e.g. mouse wheel)
Keep look-at point and move camera closer to that, by a given factor. This is perhaps what most people would consider a “zoom” except for those who know about real cameras, see below.
Keep all positions, but change field-of-view angle. This is what a “real” zoom would be: changing the focal length of the lens but nothing else.
Move both look-at point and camera along the line connecting them, by a given distance. At first this feels very much like the first item above, but since it changes the look-at point, subsequent rotations will behave differently. I see this as complementing the last of the 2D operations above, since together they allow me to move camera and look-at point together in all three directions. The cinema cameraman might call this a "dolly" shot, but I guess a dolly might also be associated with the other translation moves parallel to the viewing plane.
Keep look-at point, but change camera distance from it and field-of-view angle in such a way that projected sizes in the plane of the look-at point remain unchanged. This would be a dolly zoom in cinematic contexts, but might also be used to adjust for the viewer's screen size and distance from screen, to make the field-of-view match the user's environment.
Rotate around z axis in camera frame of reference. Using aircraft principal directions, this would be a roll motion of the camera. But it could also feel like a rotation of the object within the image plane.
What would be a consistent, unambiguous, concise set of function names to describe all of the above operations? Perhaps something already established by some existing API? One possible collection is sketched below.
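There's no single established convention, but collecting the cinematic and 3D-modelling terms from the list above into one hypothetical interface might look like this (all names are suggestions, not an existing API):
// Hypothetical camera API; names taken from the discussion above.
class Camera {
public:
    // Two-dimensional operations
    void orbit(float yaw, float pitch);   // rotate camera around the look-at point
    void panTilt(float yaw, float pitch); // "look around": rotate around the camera position
    void pan(float dx, float dy);         // translate camera and look-at point in the view plane

    // One-dimensional operations
    void dollyIn(float factor);           // move camera toward the look-at point
    void zoom(float fovDelta);            // change the field-of-view angle only
    void dolly(float distance);           // move camera and look-at point along the view axis
    void dollyZoom(float distance);       // dolly while compensating the FOV ("vertigo" shot)
    void roll(float angle);               // rotate about the camera's z axis
};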

OpenGL gluLookAt why the up vector instead of an angle

I understand that in OpenGL, we use the gluLookAt to place the camera relative to the world and thus define the image that would be drawn.
The "eye" specifies the position of the camera and the "center" specifies the point the camera is pointing towards. Once we know these two, we can draw a straight line from the eye to the center. The camera plane would be normal to this line.
Since we know this plane, the up vector specifies the top of the camera - basically which direction in this plane is upwards. Thus the up vector is always orthogonal to the vector joining the center and the eye.
My question is why do we need the full vector, when we just want to know the angle of rotation about the line joining "center" and "eye". Is the reason behind this a computational advantage that I am not aware of? And what if someone specifies a wrong up vector, which is not orthogonal to the direction of sight?
The up vector you pass in is used to indicate the up direction, but it is used via a cross product. I think the purpose of gluLookAt is to simplify the calculations needed by the user, so for example you would just pass (0,1,0) to indicate that the up direction you desire is along the positive Y-axis.
If you want better efficiency you could avoid gluLookAt entirely and build the view matrix yourself.
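For reference, the basis gluLookAt constructs (as documented in the GLU man page) looks roughly like this; the vec3 helpers here are illustrative:
#include <cmath>

struct vec3 { double x, y, z; };
vec3 sub(vec3 a, vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
vec3 cross(vec3 a, vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
vec3 normalize(vec3 v) {
    double l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x / l, v.y / l, v.z / l };
}

void lookAtBasis(vec3 eye, vec3 center, vec3 up) {
    vec3 f = normalize(sub(center, eye)); // viewing direction
    vec3 s = normalize(cross(f, up));     // right vector: the only place 'up' enters
    vec3 u = cross(s, f);                 // true up, orthogonal by construction
    // s, u and -f become the rows of the view matrix's rotation part, so a
    // slightly "wrong" up vector is corrected automatically: only its
    // component perpendicular to f matters.
}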

How do I implement basic camera operations in OpenGL?

I'm trying to implement an application using OpenGL and I need to implement the basic camera movements: orbit, pan and zoom.
To make it a little clearer, I need Maya-like camera control. Due to the nature of the application, I can't use the good ol' "transform the scene to make it look like the camera moves". So I'm stuck using transform matrices, gluLookAt, and such.
Zoom I know is dead easy, I just have to hook to the depth component of the eye vector (gluLookAt), but I'm not quite sure how to implement the other two, pan and orbit. Has anyone ever done this?
I can't use the good ol' "transform the scene to make it look like the camera moves"
OpenGL has no camera. So you'll end up doing exactly this.
Zoom I know is dead easy, I just have to hook to the depth component of the eye vector (gluLookAt),
This is not a zoom, this is a dolly. Zooming means varying the limits of the projection volume, i.e. the extents of an orthographic projection, or the field of view of a perspective one.
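The difference in code, as a hedged sketch (eye, viewDir, fov, aspect, zNear and zFar are assumed application state, not standard names):
// Assumed application state (illustrative names):
static float eye[3], viewDir[3];                // camera position and unit view direction
static float fov = 60.0f, aspect = 4.0f / 3.0f; // projection parameters
static float zNear = 0.1f, zFar = 100.0f;

static void dolly(float amount) {   // moves the camera; parallax changes
    for (int i = 0; i < 3; ++i)
        eye[i] += viewDir[i] * amount;
}

static void zoom(float fovDelta) {  // moves nothing; only the FOV changes
    fov -= fovDelta;                // smaller FOV = apparent magnification
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(fov, aspect, zNear, zFar);
    glMatrixMode(GL_MODELVIEW);
}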
gluLookAt, which you've already run into, is your solution. First three arguments are the camera's position (x,y,z), next three are the camera's center (the point it's looking at), and the final three are the up vector (usually (0,1,0)), which defines the camera's y-z plane.*
It's pretty simple: you just call glLoadIdentity();, then gluLookAt(...), and then draw your scene as normal. Personally, I always do all the calculations on the CPU myself. I find that orbiting a point is an extremely common task. My template C/C++ code uses spherical coordinates and looks like:
#include <cmath>

// Degrees-to-radians helper assumed by the snippet below.
static double radians(double deg) { return deg * M_PI / 180.0; }

double camera_center[3] = { 0.0, 0.0, 0.0 }; // point the camera orbits
double camera_radius = 4.0;                  // distance from that point
double camera_rot[2] = { 0.0, 0.0 };         // azimuth and elevation, in degrees

// Spherical -> Cartesian: place the camera on a sphere around the center.
double camera_pos[3] = {
    camera_center[0] + camera_radius*cos(radians(camera_rot[0]))*cos(radians(camera_rot[1])),
    camera_center[1] + camera_radius*sin(radians(camera_rot[1])),
    camera_center[2] + camera_radius*sin(radians(camera_rot[0]))*cos(radians(camera_rot[1]))
};

gluLookAt(
    camera_pos[0], camera_pos[1], camera_pos[2],
    camera_center[0], camera_center[1], camera_center[2],
    0, 1, 0
);
Clearly you can adjust camera_radius, which will change the "zoom" of the camera, camera_rot, which will change the rotation of the camera about its axes, or camera_center, which will change the point about which the camera orbits.
*The only other tricky bit is learning exactly what all that means. To clarify, because the internet is lacking:
The position is the (x,y,z) position of the camera. Pretty straightforward.
The center is the (x,y,z) point the camera is focusing at. You're basically looking along an imaginary ray from the position to the center.
Now, your camera could still be looking in any direction around this vector (e.g., it could be upside down, but still looking along the same direction). The up vector is a vector, not a position. It, along with that imaginary vector from the position to the center, forms a plane. This is the camera's y-z plane.
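For the remaining operation, pan, one hedged sketch on top of the same state: slide camera_center along the camera's right and up axes, derived here from the same camera_rot angles (pan_dx and pan_dy are hypothetical mouse deltas):
// Pan: translate the orbit center in the view plane. Since camera_pos is
// recomputed from camera_center each frame, the whole rig moves with it.
double th = radians(camera_rot[0]), ph = radians(camera_rot[1]);
double right_v[3] = {  sin(th),          0.0,      -cos(th)         };
double up_v[3]    = { -cos(th)*sin(ph),  cos(ph),  -sin(th)*sin(ph) };
for (int i = 0; i < 3; ++i)
    camera_center[i] += right_v[i]*pan_dx + up_v[i]*pan_dy;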

OpenGL: Understanding transformation

I was trying to understand lesson 9 of NeHe's tutorial, which is about bitmaps being moved in 3D space.
The most interesting thing here is moving a 2D bitmap texture on a simple quad through 3D space while keeping it facing the screen (viewer) all the time. So the bitmap looks 3D but is 2D, facing the viewer no matter where it is in 3D space.
In lesson 9 a list of stars is generated, moving in a circle, which looks really nice. To avoid seeing a star from its side, the coder does some tricky coding to keep the star facing the viewer all the time.
The code for this is as follows (it is called for each star in a loop):
glLoadIdentity();
glTranslatef(0.0f,0.0f,zoom);                // move into the scene
glRotatef(tilt,1.0f,0.0f,0.0f);              // tilt the star field
glRotatef(star[loop].angle,0.0f,1.0f,0.0f);  // rotate to this star's angle
glTranslatef(star[loop].dist,0.0f,0.0f);     // move out to the star's distance
glRotatef(-star[loop].angle,0.0f,1.0f,0.0f); // undo the angle rotation (locally)
glRotatef(-tilt,1.0f,0.0f,0.0f);             // undo the tilt (locally)
After the lines above, the drawing of the star begins. If you look at the last two lines, you see that the transformations from lines 3 and 4 are simply cancelled (like an undo). These two lines at the end are what keep the star facing the viewer all the time. But I don't know why this works.
And I think this comes from my misunderstanding of how OpenGL really does transformations.
For me the last two lines just undo what was done before, which doesn't make sense to me. But it works.
So when I call glTranslatef, I know that the current modelview matrix gets multiplied by the translation built from the values provided to glTranslatef.
In other words, "glTranslatef(0.0f,0.0f,zoom);" would move the place where I'm going to draw my stars into the scene, if zoom is negative. OK.
But WHAT exactly is moved here? Is the viewer moved "away", or is there some sort of object coordinate system which gets moved into the scene with glTranslatef? What's happening here?
Then glRotatef: what is rotated here? Again a coordinate system? The viewer itself?
In the real world, I would place the star somewhere in 3D space, then rotate it in world space around my world's origin, then do the movement as the star moves to the origin and starts at the edge again, then I would rotate the star itself so it faces the viewer. And I guess this is what is done here. But how do I rotate first around the world's origin, and then around the star itself? To me it looks like OpenGL is switching between a world coordinate system and an object coordinate system, which, as you see, doesn't really happen.
I don't need to add the rest of the code, because it's pretty standard: simple GL initialization for 3D drawing, the rotating stuff, and then simple drawing of quads with the star texture using blending. That's it.
Could somebody explain what I'm misunderstanding here?
Another way of thinking about the GL matrix stack is to walk up it, backwards, from your draw call. In your case, since your draw is the last line, let's step up through the code:
1) First, the star is rotated by -tilt around the X axis, with respect to the origin.
2) The star is rotated by -star[loop].angle around the Y axis, with respect to the origin.
3) The star is moved by star[loop].dist down the X axis.
4) The star is rotated by star[loop].angle around the Y axis, with respect to the origin. Since the star is not at the origin any more due to step 3, this rotation both moves the center of the star, AND rotates it locally.
5) The star is rotated by tilt around the X axis, with respect to the origin. (Same note as 4)
6) The star is moved down the Z axis by zoom units.
The trick here is difficult to convey in text, but try to picture the sequence of moves. While steps 2 and 4 may seem to invert each other, the move in between them changes the nature of the rotation. The key point is that the rotations are defined around the origin. Moving the star changes the effect of a rotation.
This leads to a typical use of stacking matrices when you want to rotate something in-place. First you move it to the origin, then you rotate it, then you move it back. What you have here is pretty much the same concept.
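In code, that in-place rotation pattern looks like this (remember to read it bottom-up from the draw call; cx/cy/cz, angle and drawObject are placeholders):
glTranslatef( cx,  cy,  cz);        // 3) move the object back where it was
glRotatef(angle, 0.0f, 1.0f, 0.0f); // 2) rotate while its center sits at the origin
glTranslatef(-cx, -cy, -cz);        // 1) bring the object's center to the origin
drawObject();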
I find that using two hands to visualize matrices is useful. Keep one hand to represent the origin, and the second (usually the right, if you're in a right-handed coordinate system like OpenGL) to represent the object. I splay my fingers like the XYZ axes so I can visualize the rotation locally as well as around the origin. Starting like this, the sequence of rotations around the origin, and linear moves, should be easier to picture.
The second question you asked pertains to how the camera matrix behaves in a typical OpenGL setup. First, understand the concept of screen-space coordinates (also called normalized device coordinates). This is the space that is actually displayed: X and Y are the axes of your screen, and Z is depth. The space usually ranges from -1 to 1. Moving an object down Z effectively moves it away.
The camera (or perspective) matrix is typically responsible for converting 'world' space into this screen space. This matrix defines the 'viewer', but in the end it is just another matrix. The matrix is always applied 'last', so if you are reading the transforms upward as I described before, the camera is usually at the very top, just as you are seeing here. In this case you could think of that last transform (translate by zoom) as a very simple camera matrix that moves the camera back by zoom units.
Good luck. :)
The glTranslatef in the middle is affected by the rotations: it moves the star along axis x' to distance dist, and axis x' is at that time rotated by (tilt + angle) compared to the original x axis.
In OpenGL you have object coordinates which are multiplied by (a stack of) transformation matrices. So you are moving the objects. If you want to "move a camera", you have to multiply by the inverse matrix of the camera's position and orientation:
ProjectedCoords = CameraMatrix^-1 . ObjectMatrix . ObjectCoord
I also found this very confusing, but I just played around with some of the NeHe code to get a better understanding of glTranslatef() and glRotatef().
My current understanding is that glRotatef() actually rotates the coordinate system, such that glRotatef(90.0f, 0.0f, 0.0f, 1.0f) will cause the x-axis to be where the y-axis was previously. After this rotation, glTranslatef(1.0f, 0.0f, 0.0f) will move an object upwards on the screen.
Thus, glTranslatef() moves objects in accordance with the current rotation of the coordinate system. Therefore, the order of glTranslatef and glRotatef is important in tutorial 9.
In technical terms my description might not be perfect, but it works for me.
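A quick experiment that makes the order dependence visible (drawQuad is a placeholder for any small shape drawn at the origin):
glLoadIdentity();
glRotatef(90.0f, 0.0f, 0.0f, 1.0f); // rotate the coordinate system first...
glTranslatef(1.0f, 0.0f, 0.0f);     // ...so this "x" move goes up the screen
drawQuad();

glLoadIdentity();
glTranslatef(1.0f, 0.0f, 0.0f);     // translate first: the quad moves right...
glRotatef(90.0f, 0.0f, 0.0f, 1.0f); // ...and this just spins it in place
drawQuad();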