OpenGL: rotate tangent to a circle

I have an object right now which I have moving in a circle around the vertical (Y) axis. I want to rotate this object so it is always aligned with the tangent of the circle, how do I do this? Not sure what combination of sin/cosine/tan to use as the first argument of glRotatef...
Thanks!

The first argument of glRotatef is the angle, in degrees (so 0 is no rotation, 180 is flipped end for end, and 360 is rotated all the way back to the original orientation).
You probably could have answered this yourself through trial and error in less time than it took to ask the question.
Note that if you choose the center of rotation to be the center of the circle instead of the center of the object, you won't need a separate translation step.
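A minimal sketch of that idea with the fixed-function pipeline (angleDegrees, radius, and drawObject are illustrative names, and the extra 90 degrees depends on which way your model's "forward" axis points):
glPushMatrix();
glRotatef(angleDegrees, 0.0f, 1.0f, 0.0f);  // spin around the circle's center (the Y axis)
glTranslatef(radius, 0.0f, 0.0f);           // move out to the circle; local X now points radially outward
glRotatef(90.0f, 0.0f, 1.0f, 0.0f);         // turn the model so its forward axis lies along the tangent
drawObject();                               // your draw call goes here
glPopMatrix();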

Related

How do I convert from center origin to bottom origin?

I will start by apologizing: I highly doubt I will have any of the correct terminology; unfortunately, after a few hours of raw testing and mashing my head against the wall, I can't figure this out.
I am working with an engine that orients its models using a bottom-aligned system, meaning that on the z axis (in a z-up system) the origin is at z - radius; in other words, if the model is sitting at 0,0,0, all the tris are in positive-z space.
I am integrating Bullet, which is a center-aligned system, meaning that the object's origin is at its center of mass (in this case a simple AABB cube).
The problem I have is that the yaw, pitch, roll, and origin I pass into the renderer are offset by radius in the +z direction. The biggest issue comes when the pitch or roll becomes something other than 0: because Bullet's system is center-aligned, it applies pitch and roll around the center, while the renderer rotates around the bottom. So there is a clear difference in where the model and the bounding box line up.
So is there an algorithm to convert between these two forms of orientation?
OK I figured out my own issue.
So for anyone who stumbles upon this, I'll explain what exactly is happening and how to fix it.
Simply put, my question was how to convert from world space (the x, y, z planes) to local space (the relative x, y, z planes). Take an arrow and face it in the direction 0x 0y 0z while its origin is in positive space, say 1x 0y 0z: your arrow is facing in world space, meaning that if you were to move forward you would be moving along the x plane, left along the y plane, and up along the z plane.
Now if that arrow is rotated along its yaw by some degrees so that it is pointing at 1x 1y 0z, then when you move forward you are no longer moving along just the x plane. This is what's called moving in local space, meaning you are moving along the planes that are relative to the yaw and pitch of your node (object, model, etc.).
So in my case I have Bullet, which works in "world space", and my renderer, which works in "model space"; to convert between the two I just need the right transformation matrix.
Here's the link to a source I found that does this conversion and explains the relevant math fairly clearly: http://www.codinglabs.net/article_world_view_projection_matrix.aspx
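For the concrete case above (a cube simulated by Bullet, drawn by a bottom-aligned renderer), the conversion boils down to rotating the local "center to bottom" offset by the body's orientation and adding it to the body's world position; the yaw/pitch/roll stay the same. A rough sketch using Bullet's own types, where halfHeight (the distance from the cube's center to its bottom face) is an assumed parameter:
#include <btBulletDynamicsCommon.h>

// Returns the bottom-aligned origin the renderer expects, given Bullet's
// center-of-mass transform (z-up convention assumed).
btVector3 bottomOriginFromBullet(const btTransform& bodyXform, btScalar halfHeight)
{
    const btVector3 localBottomOffset(0, 0, -halfHeight);  // from the center down to the bottom face
    return bodyXform.getOrigin() + bodyXform.getBasis() * localBottomOffset;
}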
Thanks everyone.
chasester

DirectX 11 rotation around center of world?

I've been playing around for a while now with DirectX and can't figure out how to rotate something without it rotating all the way around (0,0,0). The farther I get away from the center of my world, the bigger the circles it makes during its rotation.
Don't forget the matrix transformations apply to the coordinate system. Let's say you want to move your object upwards on an xy plane by 20 units, and rotate it by 90 degrees. If you rotate by 90 degrees first, you'll be rotating the entire plane by 90 degrees. This means 90 degrees is the new "upwards" when you translate up the y axis.
So, we translate first, so that our object's center is at 0,0. Now when we rotate, we will be rotating around the center of the object. Of course, don't forget to translate back, or clear the matrix somehow.
The order does matter when doing matrix transformations, as I'm sure you know. Usually, you should translate, scale, then rotate.
If you need a rotation α around a point p located at (x₀,y₀,z₀), you create the matrix:
T(-x₀,-y₀,-z₀) * R(α) * T(x₀,y₀,z₀)
T means translation and R means rotation. Also, depending on your convention (row or column vectors), you may have to reverse the order of the operations.
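Here is a quick sketch of that composition with DirectXMath, which uses row vectors (v' = v * M), so the matrices multiply in the order written above; the pivot and angle are whatever your scene needs:
#include <DirectXMath.h>
using namespace DirectX;

// Rotate about the Y axis around an arbitrary pivot point.
XMMATRIX RotateAroundPoint(FXMVECTOR pivot, float angleRadians)
{
    XMMATRIX toOrigin   = XMMatrixTranslationFromVector(XMVectorNegate(pivot)); // T(-p)
    XMMATRIX rotation   = XMMatrixRotationY(angleRadians);                      // R(alpha)
    XMMATRIX fromOrigin = XMMatrixTranslationFromVector(pivot);                 // T(p)
    return toOrigin * rotation * fromOrigin;  // with row vectors this applies T(-p) first
}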

OpenGL: Understanding transformation

I was trying to understand lesson 9 from NeHe's tutorials, which is about bitmaps being moved in 3D space.
The most interesting thing here is moving a 2D bitmap texture on a simple quad through 3D space while keeping it facing the screen (the viewer) all the time. So the bitmap looks 3D but stays 2D, facing the viewer no matter where it is in the 3D space.
In lesson 9 a list of stars is generated, moving in a circle, which looks really nice. To avoid seeing a star from its side, the coder does some tricky coding to keep each star facing the viewer all the time.
The code for this is as follows (it is called for each star in a loop):
glLoadIdentity();                              // reset the modelview matrix
glTranslatef(0.0f,0.0f,zoom);                  // move the drawing position along Z (into the scene for negative zoom)
glRotatef(tilt,1.0f,0.0f,0.0f);                // tilt the view around the X axis
glRotatef(star[loop].angle,0.0f,1.0f,0.0f);    // rotate around the Y axis to the star's place on its orbit
glTranslatef(star[loop].dist,0.0f,0.0f);       // move out along X to the star's distance from the center
glRotatef(-star[loop].angle,0.0f,1.0f,0.0f);   // cancel the orbit rotation so the quad faces the viewer again
glRotatef(-tilt,1.0f,0.0f,0.0f);               // cancel the tilt for the same reason
After the lines above, the drawing of the star begins. If you check the last two lines, you see that the transformations from lines 3 and 4 are just cancelled (like an undo). These two lines at the end are what make it possible to keep the star facing the viewer all the time. But I don't know why this works.
I think this comes from my misunderstanding of how OpenGL really does the transformations.
For me the last two lines are just undoing what was done before, which, to me, doesn't make sense. But it works.
So when I call glTranslatef, I know that the current view matrix gets multiplied by a translation matrix built from the values provided to glTranslatef.
In other words, "glTranslatef(0.0f,0.0f,zoom);" would move the place where I'm going to draw my stars into the scene if zoom is negative. OK.
But WHAT exactly is moved here? Is the viewer moved "away", or is there some sort of object coordinate system which gets moved into the scene with glTranslatef? What is happening here?
Then glRotatef: what is rotated here? Again a coordinate system, or the viewer itself?
In the real world, I would place the star somewhere in 3D space, then rotate it in world space around the world's origin, then do the moving as the star moves towards the origin and starts at the edge again, and then I would rotate the star itself so it faces the viewer. And I guess this is what is done here. But how do I rotate first around the world's origin and then around the star itself? To me it looks like OpenGL is switching between a world coordinate system and an object coordinate system, which, as you can see, doesn't really happen.
I don't need to add the rest of the code, because it's pretty standard: simple GL initialization for 3D drawing, the rotating stuff, and then the simple drawing of QUADS with the star texture using blending. That's it.
Could somebody explain what I'm misunderstanding here?
Another way of thinking about the GL matrix stack is to walk up it, backwards, from your draw call. In your case, since your draw is the last line, let's step up the code:
1) First, the star is rotated by -tilt around the X axis, with respect to the origin.
2) The star is rotated by -star[loop].angle around the Y axis, with respect to the origin.
3) The star is moved by star[loop].dist down the X axis.
4) The star is rotated by star[loop].angle around the Y axis, with respect to the origin. Since the star is not at the origin any more due to step 3, this rotation both moves the center of the star, AND rotates it locally.
5) The star is rotated by tilt around the X axis, with respect to the origin. (Same note as 4)
6) The star is moved down the Z axis by zoom units.
The trick here is difficult to type in text, but try and picture the sequence of moves. While steps 2 and 4 may seem like they invert each other, the move in between them changes the nature of the rotation. The key phrase is that the rotations are defined around the Origin. Moving the star changes the effect of the rotation.
This leads to a typical use of stacking matrices when you want to rotate something in-place. First you move it to the origin, then you rotate it, then you move it back. What you have here is pretty much the same concept.
I find that using two hands to visualize matrices is useful. Keep one hand to represent the origin, and the second (usually the right, if you're in a right-handed coordinate system like OpenGL) to represent the object. I splay my fingers like the XYZ axes so I can visualize the rotation locally as well as around the origin. Starting like this, the sequence of rotations around the origin, and linear moves, should be easier to picture.
The second question you asked pertains to how the camera matrix behaves in a typical OpenGL setup. First, understand the concept of screen-space coordinates (similarly, device-coordinates). This is the space that is actually displayed. X and Y are the vectors of your screen, and Z is depth. The space is usually in the range -1 to 1. Moving an object down Z effectively moves the object away.
The Camera (or Perspective Matrix) is typically responsible for converting 'World' space into this screen space. This matrix defines the 'viewer', but in the end it is just another matrix. The matrix is always applied 'last', so if you are reading the transforms upward as I described before, the camera is usually at the very top, just as you are seeing. In this case you could think of that last transform (translate by zoom) as a very simple camera matrix, that moves the camera back by zoom units.
Good luck. :)
The glTranslatef in the middle is affected by the rotation: it moves the star along the axis x' to distance dist, and at that moment axis x' is rotated by (tilt + angle) compared to the original x axis.
In OpenGL you have object coordinates, which are multiplied by (a stack of) projection matrices. So you are moving the objects. If you want to "move the camera", you have to multiply by the inverse of the matrix describing the camera's position and axes:
ProjectedCoords = CameraMatrix^-1 . ObjectMatrix . ObjectCoord
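In fixed-function terms that means the "camera" is just an inverse transform applied before everything else. A rough sketch, where camX/camY/camZ and camYaw are illustrative camera parameters:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(-camYaw, 0.0f, 1.0f, 0.0f);  // undo the camera's rotation
glTranslatef(-camX, -camY, -camZ);     // undo the camera's position
// ...the object's own transforms and draw calls follow, e.g.:
glTranslatef(objX, objY, objZ);
drawObject();                          // your draw call goes here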
I also found this very confusing but I just played around with some of the NeHe code to get a better understanding of glTranslatef() and glRotatef().
My current understanding is that glRotatef() actually rotates the coordinate system, such that glRotatef(90.0f, 0.0f, 0.0f, 1.0f) will cause the x-axis to be where the y-axis was previously. After this rotation, glTranslatef(1.0f, 0.0f, 0.0f) will move an object upwards on the screen.
Thus, glTranslatef() moves objects in accordance with the current rotation of the coordinate system. Therefore, the order of glTranslatef and glRotatef is important in tutorial 9.
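A tiny sketch of that observation (drawQuad stands in for whatever you draw):
glLoadIdentity();
glRotatef(90.0f, 0.0f, 0.0f, 1.0f);  // the x axis now points where the y axis used to
glTranslatef(1.0f, 0.0f, 0.0f);      // so this moves the quad "up" on screen, not to the right
drawQuad();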
In technical terms my description might not be perfect, but it works for me.

OpenGL rotate around a spot

I want to rotate a gluSphere around a fixed point in a circular motion, like a planet going around the sun.
Would it be best to use glRotatef or glTranslate? If so, in which order should I call them?
You'll have to do a little of both:
Make sure the gluSphere is "facing" the fixed point, so that translating forward with respect to the sphere puts you closer to the center of its orbit
glTranslatef the gluSphere forward to the point around which you want it to rotate
glRotatef the direction you want the sphere to orbit
glTranslatef backwards just as far as you went forward
That way, your sphere stays the same distance from the center, but gets translated "around" in a nice orbit.
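Put together, the sequence above looks roughly like this (the center coordinates, angle, and distance are illustrative, and quadric comes from gluNewQuadric()):
glPushMatrix();
glTranslatef(centerX, centerY, centerZ);    // move to the point to orbit around
glRotatef(orbitAngle, 0.0f, 1.0f, 0.0f);    // rotate about that point
glTranslatef(-orbitDistance, 0.0f, 0.0f);   // step back out to the sphere's distance
gluSphere(quadric, sphereRadius, 32, 32);   // draw the orbiting sphere
glPopMatrix();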
Translate away from the center and then rotate all the way
glRotatef will multiply the current matrix by a rotation matrix. This can (given the right vector) do what you are attempting.
glTranslatef will multiply the current matrix by a translation matrix, which would effectively "move" the object, not rotate it, so it will not be what you want.

3d Camera Position given some points

Heyo,
I'm currently working on a project where I need to place the camera such that the full motion of a character would be viewable without moving the camera. I have the position where the character starts, as well as the maximum distance that the character will travel in all three directions (X,Y, & Z). I also have the field of view (which is 90 degrees).
Is there an equation that'll figure out where I need to place the camera so it won't have to move to see the full motion?
Note: this is using OpenGL.
Clarification: The camera should be "in front" of the character that's in motion, not above.
It'll also be moving along a ground plane.
If you make a bounding sphere of the points, all you need to do is keep the camera at a distance greater than or equal to the radius of the bounding sphere / sin(FOV/2).
For example, if you have a bounding sphere with radius Radius, and a specified Field of View FOV, your camera just needs to be at a point "Dist" away, pointing towards the center of the bounding sphere.
The equation for calculating the distance is:
Dist = Radius / sin( FOV/2 );
This will work in 3D, for a camera at any orientation.
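As a quick sketch (FOV in radians; the viewing direction is left up to you):
#include <cmath>

// Minimum distance from the sphere's center at which the whole sphere fits in view.
double cameraDistance(double sphereRadius, double fovRadians)
{
    return sphereRadius / std::sin(fovRadians / 2.0);
}
// For the 90 degree FOV in the question: dist = radius / sin(45 deg), about 1.41 * radius.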
Simply having the maximum range of (X, Y, Z) is not on its own sufficient, because the viewing volume is essentially pyramid shaped, with the apex of the pyramid at the eye position.
For the sake of argument, let's assume that all movement is in the (X, Z) plane (i.e. the ground), and the eye is directly above the origin 10m along the Y axis.
Assuming a square viewport, with your 90˚ field of view you'd be able to see ±10m along both the X and Z axes, but only for objects that are on the ground (Y = 0). As soon as they come off the ground, your view is reduced: if an object is 1m off the ground, your (X, Z) extent is only ±9m.
Clearly a real camera could be placed anywhere in the scene, facing any direction. Even the "roll" angle of the camera could change how much is visible. There are actually infinitely many such camera positions, so you will need to constrain your criteria somewhat.
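A small sketch of the pyramid argument above: with a 90 degree FOV, tan(FOV/2) = 1, so the visible half-extent shrinks linearly with the object's height above the ground (the eye is assumed to look straight down from eyeHeight):
#include <cmath>

// Half-extent visible along the X (or Z) axis for an object at a given height.
double visibleHalfExtent(double eyeHeight, double objectHeight, double fovRadians)
{
    return (eyeHeight - objectHeight) * std::tan(fovRadians / 2.0);
}
// eyeHeight = 10 m, objectHeight = 1 m, FOV = 90 deg  ->  +/-9 m, as described above.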
Take the line segment from the start point to the end point. Construct a plane orthogonal to this line segment through its midpoint. Then position the camera somewhere in this plane, at a distance of more than the following from the intersection point of the plane and the line, looking at the intersection point. The up vector of the camera must lie in the plane, and the horizontal field of view must be 90 degrees.
distance = sqrt(dx^2 + dy^2 + dz^2) / 2
These camera positions will all have the start point and the end point on the left and right borders of the viewport, vertically centered.
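A bare-bones sketch of the numbers involved (which perpendicular direction you offset the camera along within the plane is up to you, as noted above):
#include <cmath>

struct Vec3 { double x, y, z; };

// Midpoint of the start-end segment, where the camera should look.
Vec3 midpoint(const Vec3& a, const Vec3& b)
{
    return { (a.x + b.x) / 2.0, (a.y + b.y) / 2.0, (a.z + b.z) / 2.0 };
}

// Minimum camera distance from the midpoint for a 90 degree horizontal FOV.
double minDistance(const Vec3& a, const Vec3& b)
{
    double dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz) / 2.0;
}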
Another solution might be to write a function that takes the startpoint, the endpoint, and the desired position of both points on the screen. Then just solve the projection equation for the camera transformation.
It depends. For example, if the object is gonna move in a plane, you can just place the camera outside a ball circumscribing its movement area (this depends on the fact that the FOV is 90 degrees, which is a fortunate angle).
If the object is gonna move in 3D, it's much more difficult. It would help if you'd specify the region where the object moves (cube vs. ball...) and the direction you want to see it from.