A vector can be rotated and scaled, since it has direction and magnitude. But what does that mean for a point? A point can only be translated. Yet Wikipedia says: "For example, the matrix
R = [ cos θ   -sin θ ]
    [ sin θ    cos θ ]

rotates points in the xy-Cartesian plane counter-clockwise through an angle θ about the origin of the Cartesian coordinate system."
Also, what does it mean by "since matrix multiplication has no effect on the zero vector (the coordinates of the origin), rotation matrices can only be used to describe rotations about the origin of the coordinate system"? Does this mean I cannot perform a rotation around any point other than the origin?
Absolutely. To rotate about a point other than the origin, you have to create a matrix that translates your vertices from your rotation center to the origin, rotates, then translates back from the origin to your rotation center.
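In the legacy OpenGL style that appears later in this thread, that composite looks like the following sketch; the rotation center (cx, cy, cz) and the choice of the z axis are placeholder assumptions. Because OpenGL post-multiplies the current matrix, the call issued last is applied to the vertices first:

glTranslatef(cx, cy, cz);    //3. translate back from the origin to the rotation center
glRotatef(angle, 0, 0, 1);   //2. rotate about the origin (here: about the z axis)
glTranslatef(-cx, -cy, -cz); //1. translate the rotation center to the origin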
When describing transformations, Wikipedia and other sites often refer to the effect on "points"; however, this implicitly applies to every point in the coordinate system (with explicit exceptions, like the origin under rotation). These transformations - typically rotating, translating, and scaling - apply to the entire frame of reference and any derivative frames of reference. The word "point" is used in the mathematical sense: a choice of coordinates within a coordinate system. It doesn't imply anything about a point in the graphical sense, like a "plotted" or "drawn" point (although graphing a point is just a visualization of this concept, so the distinction is moot).
While it's true that a rotation has no effect on the origin, you are free to translate the origin itself, or equivalently to translate your models relative to the origin. Once you have applied the rotation, you can reverse the translation to restore the original origin.
I have a basic cube generated in my 3D world. I can rotate correctly around the camera, but when I translate after rotating, the translations are not correct.
For example, if I rotate 90 degrees and translate into the Z axis, it would move as if translating in the X axis.
glLoadIdentity();
glRotatef(angle,0,1,0); //Rotate around the camera.
glTranslatef(movX,movY,movZ); //Translate after rotating around the camera.
glCallList(cubes[0]); //Draw the cube from its display list.
I need some help with this. Also, I tried translating before rotating, but the rotation is not at the camera. It is at the edge of the cube.
Keep in mind that in OpenGL the transformation is applied to the camera, not to the objects rendered; so you observe the inverse of the transformation you expected.
Also in OpenGL the Y and Z axes are flipped (Y is vertical), so you observe a horizontal translation instead of a vertical one.
Also, because the object is rotated though 90 degrees about Y, the X and Z axes replace each other (one of them is reversed).
willywonkadailyblah's answer is half correct. Because you are using the old OpenGL, you're using the old matrix stack. Your glRotatef and glTranslatef calls modify the modelview matrix, which is the model's matrix and the camera's view matrix precombined (already multiplied together). These matrices determine where your object is in 3D space and what your viewing position and direction in the world are. So you can think of your calls as moving the camera, but it's probably easier to think of them as moving and rotating the world.
These rotate and translate calls are affine transformations (rotation alone is linear; translation is not, which is why OpenGL represents both as 4x4 matrices in homogeneous coordinates). For our purposes this means you can represent each transformation as a matrix, and you multiply it with a point's coordinates to apply the transformation to that point. Now, matrix multiplication is not commutative, meaning AB != BA. All this to say that rotating and then translating is different from translating and then rotating, which I think you know. But when you translate, rotate, and translate again, it might be a little more difficult to follow what you're actually doing, and worse still if you throw some scaling in there. So I would suggest learning how these transformations work and maintaining your own matrices for the objects and camera if you're serious about learning OpenGL.
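A minimal sketch of the non-commutativity, using GLM (the math library learnopengl.org also uses); the specific angle, axis, and offset are arbitrary examples:

#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    glm::mat4 R = glm::rotate(glm::mat4(1.0f), glm::radians(90.0f), glm::vec3(0, 1, 0));
    glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(0, 0, -5));
    glm::vec4 p(1, 0, 0, 1);
    glm::vec4 a = R * T * p; //translate first, then rotate: approximately (-5, 0, -1)
    glm::vec4 b = T * R * p; //rotate first, then translate: approximately (0, 0, -6)
    std::printf("R*T*p = (%.1f, %.1f, %.1f)\n", a.x, a.y, a.z);
    std::printf("T*R*p = (%.1f, %.1f, %.1f)\n", b.x, b.y, b.z);
}

The same point ends up in two very different places depending on the order of the two matrices.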
learnopengl.org is an excellent website, but it teaches you modern OpenGL, not what you're currently using. Still, the lessons on transformations and on coordinate systems are probably helpful in general, even without exact code for you to follow.
Assume I have a camera mounted on a rail, so that I can move it back and forth to take photos of my scene.
Can I assume the rotation matrix is equal to zero?
It depends on the coordinate system you choose. Assuming it's aligned with your camera orientation (e.g. the negative z-axis points in the camera's viewing direction and the positive y-axis points upwards) and you only move your camera without rotating it, then the rotation matrix used to transform between these coordinate systems is the identity matrix.
A zero matrix makes no sense: it would map every point to the origin.
If you assume no rotation, then the rotation matrix is a 3x3 identity matrix, not zero.
Also, this may or may not be a good assumption depending on how accurate you want to be. Even if the camera is moving on a rail, there will be some small rotation.
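In other words, a sketch assuming the usual [R | t] extrinsics convention, with t the translation along the rail:

[R | t] = [ 1 0 0 tx ]
          [ 0 1 0 ty ]
          [ 0 0 1 tz ]

The 3x3 block stays the identity; only t changes between shots (up to the small rotations mentioned above).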
I am learning OpenGL from Scratchapixel, and here is a quote from the perspective projection matrix chapter:
Cameras point along the world coordinate system negative z-axis so that when a point is converted from world space to camera space (and then later from camera space to screen space), if the point is to the left of the world coordinate system y-axis, it will also map to the left of the camera coordinate system y-axis. In other words, we need the x-axis of the camera coordinate system to point to the right when the world coordinate system x-axis also points to the right; and the only way you can get that configuration is by having the camera looking down the negative z-axis.
I think it has something to do with the mirror image? But this explanation just confused me... why does the camera's coordinate system by default not coincide with the world coordinate system (like every other 3D object we create in OpenGL)? I mean, we need to transform the camera with a transformation matrix anyway (whatever we want from the negative-z setup, we can simulate it)... why bother?
It is totally arbitrary what to pick for the z direction, but your pick has deep consequences.
One reason to stick with the GL -z way is that the culling of faces will match GL constant names like GL_FRONT. I'd advise just to roll with the tutorial.
Flipping the sign on just one axis also flips the "parity": a front face becomes a back face, and a znear depth test becomes a zfar one. So it is wise to pick one convention early on and stick with it.
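For instance, the face-culling state is defined around this convention; a minimal sketch (these values are the GL defaults):

glEnable(GL_CULL_FACE); //face culling is off by default; turn it on
glCullFace(GL_BACK);    //cull back faces (the default)
glFrontFace(GL_CCW);    //counter-clockwise winding in window space is "front" (the default)

Flip the sign of one axis and the winding of every triangle reverses, so front and back faces effectively swap.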
By default, yes, it's a "right hand" system (used in physics, for example). Your thumb is the x-axis, your index finger the y-axis, and when you point those in the right directions, the z-axis (your middle finger) points towards you. Why was the z-axis chosen to point out of the screen? Because then the x- and y-axes lie on the screen, like in 2D graphics.
But in reality, OpenGL has no preferred coordinate system. You can tweak it as you like. For example, if you are making a maze game, you might want y to go into/out of the screen (and z upwards), so that you can move nicely in the xy plane. You modify your view/perspective matrices, and you get it.
What is this "camera" you're talking about? In OpenGL there is no such thing as a "camera". All you've got is a two stage transformation chain:
vertex position → viewspace position (by modelview transform)
viewspace position → clipspace position (by projection transform)
To see why by default OpenGL is "looking down" -z, we have to look at what happens if both transformation steps do "nothing", i.e. are the full identity transform.
In that case all vertex positions passed to OpenGL are unchanged: x maps across the window width, y maps across the window height. All calculations in OpenGL have by default (you can change this) been chosen to adhere to the rules of a right-handed coordinate system, so if +x points right and +y points up, then +z must point "out of the screen" for the right-hand rule to be consistent.
And that's all there is to it. No camera. Just linear transformations and the choice of using right-handed coordinates.
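A sketch of that two-stage chain using GLM; the eye position, field of view, and aspect ratio below are arbitrary placeholders, and glm::lookAt with these arguments produces exactly a view looking down -z:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec4 toClipSpace(glm::vec3 vertexPosition)
{
    //vertex position -> viewspace position (the "modelview" transform):
    //eye at +5 on the z axis, looking down -z at the origin, +y up
    glm::mat4 view = glm::lookAt(glm::vec3(0, 0, 5), glm::vec3(0, 0, 0), glm::vec3(0, 1, 0));
    //viewspace position -> clipspace position (the projection transform)
    glm::mat4 projection = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);
    return projection * view * glm::vec4(vertexPosition, 1.0f);
}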
I've got a 3D scene and want to offer an API to control the camera. The camera is currently described by its own position, a look-at point in the scene somewhere along the z axis of the camera frame of reference, an “up” vector describing the y axis of the camera frame of reference, and a field-of-view angle. I'd like to provide at least the following operations:
Two-dimensional operations (mouse drag or arrow keys)
Keep look-at point and rotate camera around that. This can also feel like rotating the object, with the look-at point describing its centre. I think that at some point I've heard this described as the camera “orbiting” around the centre of the scene.
Keep camera position, and rotate camera around that point. Colloquially I'd call this “looking around”. With a cinema camera this might perhaps be called pan and tilt, but in 3d modelling “panning” is usually something else, see below. Using aircraft principal directions, this would be a pitch-and-yaw movement of the camera.
Move camera position and look-at point in parallel. This can also feel like translating the object parallel to the view plane. As far as I know this is usually called “panning” in 3d modelling contexts.
One-dimensional operations (e.g. mouse wheel)
Keep look-at point and move camera closer to that, by a given factor. This is perhaps what most people would consider a "zoom", except for those who know about real cameras; see below.
Keep all positions, but change field-of-view angle. This is what a “real” zoom would be: changing the focal length of the lens but nothing else.
Move both look-at point and camera along the line connecting them, by a given distance. At first this feels very much like the first item above, but since it changes the look-at point, subsequent rotations will behave differently. I see this as complementing the last item of the 2D operations above, since together they allow me to move camera and look-at point together in all three directions. A cinema cameraman might call this a "dolly" shot, but I guess a dolly might also be associated with the other translation moves parallel to the viewing plane.
Keep look-at point, but change camera distance from it and field-of-view angle in such a way that projected sizes in the plane of the look-at point remain unchanged. This would be a dolly zoom in cinematic contexts, but might also be used to adjust for the viewer's screen size and distance from screen, to make the field-of-view match the user's environment.
Rotate around z axis in camera frame of reference. Using aircraft principal directions, this would be a roll motion of the camera. But it could also feel like a rotation of the object within the image plane.
What would be a consistent, unambiguous, concise set of function names to describe all of the above operations? Perhaps something already established by some existing API?
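For illustration, one hypothetical naming scheme assembled from the terms used above; every name and signature here is an assumption, not an established API:

//hypothetical camera interface; all names and parameters are illustrative
struct CameraController {
    void orbit(float yaw, float pitch);      //rotate camera around the look-at point
    void lookAround(float yaw, float pitch); //pitch-and-yaw around the camera position
    void pan(float dx, float dy);            //move camera and look-at point parallel to the view plane
    void dollyIn(float factor);              //move camera toward the look-at point by a factor
    void zoom(float fovDelta);               //change field-of-view angle only (the "real" zoom)
    void dolly(float distance);              //move camera and look-at point along the view axis
    void dollyZoom(float distance);          //change distance and FOV together, keeping projected size
    void roll(float angle);                  //rotate around the camera's z axis
};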
I have a Plane class that holds n for the normal and q for a point on the plane. I also have another point p that lies on that plane. How do I go about rounding p to the nearest unit on that plane? Like snapping a cursor to a 3D grid, but the grid can be on a rotated plane.
Image to explain:
Red is the current point. Green is the rounded point that I'm trying to get.
Probably the easiest way to achieve this is by taking the plane to define a rotated and shifted coordinate system. This allows you to construct the matrices for transforming a point in global coordinates into plane coordinates and back. Once you have this, you can simply transform the point into plane coordinates, perform the rounding/projection in a trivial manner, and convert back to world coordinates.
Of course, the problem is underspecified the way you pose the question: the transformation you need has six degrees of freedom, your plane equation only yields three constraints. So you need to add some more information: the location of the origin within the plane, and the rotation of your grid around the plane normal.
Personally, I would start by deriving a plane description in parametric form:
xVec = alpha*direction1 + beta*direction2 + x0
Of course, such a description contains nine variables (three vectors), but you can normalize the two direction vectors and constrain them to be orthogonal, which reduces the number of degrees of freedom back to six.
The two normalized direction vectors, together with the normalized normal, are the base vectors of the rotated coordinate system, so you can simply construct the rotation matrix by putting these three vectors together. To get the inverse rotation, simply transpose the resulting matrix. Add the translation / inverse translation on the appropriate side of the rotation, and you are done.
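A minimal sketch of that procedure, using GLM for the vector and matrix types; the function name and parameters are illustrative, and d1, d2 are assumed to be the two orthonormal in-plane directions with n the unit normal:

#include <cmath>
#include <glm/glm.hpp>

//Snap point p to the unit grid spanned by d1/d2 on the plane through q.
glm::vec3 snapToPlaneGrid(glm::vec3 p, glm::vec3 q,
                          glm::vec3 d1, glm::vec3 d2, glm::vec3 n)
{
    glm::mat3 basis(d1, d2, n);                  //columns are the base vectors: plane -> world rotation
    glm::mat3 invBasis = glm::transpose(basis);  //orthonormal, so the inverse is the transpose
    glm::vec3 local = invBasis * (p - q);        //translate, then rotate into plane coordinates
    local.x = std::round(local.x);               //trivial rounding in plane coordinates
    local.y = std::round(local.y);
    local.z = 0.0f;                              //and projection onto the plane itself
    return basis * local + q;                    //rotate and translate back to world coordinates
}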