When I first load my object I calculate the initial AABB with the maximum and minimum (x,y,z) points. But this is in object space and the object moves around the world and more importantly, rotates.
How do I recalculate the new AABB every time the object is translated/rotated? This happens basically in every frame. Is it going to be a very intensive operation to recalculate the new AABB every frame? If so, what would be the alternative?
I know AABBs will make my collision detection less accurate, but it's easier to implement the collision detection code than OBBs and I want to take this one step at a time.
Here's my current code after some insight from the answers below:
typedef struct sAxisAlignedBoundingBox {
Vector3D bounds[8];
Vector3D max, min;
} AxisAlignedBoundingBox;
void drawAxisAlignedBoundingBox(AxisAlignedBoundingBox box) {
glPushAttrib(GL_LIGHTING_BIT | GL_POLYGON_BIT);
glEnable(GL_COLOR_MATERIAL);
glDisable(GL_LIGHTING);
glColor3f(1.0f, 1.0f, 0.0f);
glBegin(GL_LINE_LOOP);
glVertex3f(box.bounds[0].x, box.bounds[0].y, box.bounds[0].z);
glVertex3f(box.bounds[1].x, box.bounds[1].y, box.bounds[1].z);
glVertex3f(box.bounds[2].x, box.bounds[2].y, box.bounds[2].z);
glVertex3f(box.bounds[3].x, box.bounds[3].y, box.bounds[3].z);
glEnd();
glBegin(GL_LINE_LOOP);
glVertex3f(box.bounds[4].x, box.bounds[4].y, box.bounds[4].z);
glVertex3f(box.bounds[5].x, box.bounds[5].y, box.bounds[5].z);
glVertex3f(box.bounds[6].x, box.bounds[6].y, box.bounds[6].z);
glVertex3f(box.bounds[7].x, box.bounds[7].y, box.bounds[7].z);
glEnd();
glBegin(GL_LINE_LOOP);
glVertex3f(box.bounds[0].x, box.bounds[0].y, box.bounds[0].z);
glVertex3f(box.bounds[5].x, box.bounds[5].y, box.bounds[5].z);
glVertex3f(box.bounds[6].x, box.bounds[6].y, box.bounds[6].z);
glVertex3f(box.bounds[1].x, box.bounds[1].y, box.bounds[1].z);
glEnd();
glBegin(GL_LINE_LOOP);
glVertex3f(box.bounds[4].x, box.bounds[4].y, box.bounds[4].z);
glVertex3f(box.bounds[7].x, box.bounds[7].y, box.bounds[7].z);
glVertex3f(box.bounds[2].x, box.bounds[2].y, box.bounds[2].z);
glVertex3f(box.bounds[3].x, box.bounds[3].y, box.bounds[3].z);
glEnd();
glPopAttrib();
}
void calculateAxisAlignedBoundingBox(GLMmodel *model, float matrix[16]) {
AxisAlignedBoundingBox box;
float dimensions[3];
// This will give me the absolute dimensions of the object
glmDimensions(model, dimensions);
// This calculates the max and min points in object space
box.max.x = dimensions[0] / 2.0f, box.min.x = -1.0f * box.max.x;
box.max.y = dimensions[1] / 2.0f, box.min.y = -1.0f * box.max.y;
box.max.z = dimensions[2] / 2.0f, box.min.z = -1.0f * box.max.z;
// These calculations are probably the culprit but I don't know what I'm doing wrong
box.max.x = matrix[0] * box.max.x + matrix[4] * box.max.y + matrix[8] * box.max.z + matrix[12];
box.max.y = matrix[1] * box.max.x + matrix[5] * box.max.y + matrix[9] * box.max.z + matrix[13];
box.max.z = matrix[2] * box.max.x + matrix[6] * box.max.y + matrix[10] * box.max.z + matrix[14];
box.min.x = matrix[0] * box.min.x + matrix[4] * box.min.y + matrix[8] * box.min.z + matrix[12];
box.min.y = matrix[1] * box.min.x + matrix[5] * box.min.y + matrix[9] * box.min.z + matrix[13];
box.min.z = matrix[2] * box.min.x + matrix[6] * box.min.y + matrix[10] * box.min.z + matrix[14];
/* NOTE: If I remove the above calculations and do something like this:
box.max = box.max + objPlayer.position;
box.min = box.min + objPlayer.position;
The bounding box will move correctly when I move the player, the same does not
happen with the calculations above. It makes sense and it's very simple to move
the box like this. The only problem is when I rotate the player, the box should
be adapted and increased/decreased in size to properly fit the object as a AABB.
*/
box.bounds[0] = Vector3D(box.max.x, box.max.y, box.min.z);
box.bounds[1] = Vector3D(box.min.x, box.max.y, box.min.z);
box.bounds[2] = Vector3D(box.min.x, box.min.y, box.min.z);
box.bounds[3] = Vector3D(box.max.x, box.min.y, box.min.z);
box.bounds[4] = Vector3D(box.max.x, box.min.y, box.max.z);
box.bounds[5] = Vector3D(box.max.x, box.max.y, box.max.z);
box.bounds[6] = Vector3D(box.min.x, box.max.y, box.max.z);
box.bounds[7] = Vector3D(box.min.x, box.min.y, box.max.z);
// This draw call is for testing purposes only
drawAxisAlignedBoundingBox(box);
}
void drawObjectPlayer(void) {
static float mvMatrix[16];
if(SceneCamera.GetActiveCameraMode() == CAMERA_MODE_THIRD_PERSON) {
objPlayer.position = SceneCamera.GetPlayerPosition();
objPlayer.rotation = SceneCamera.GetRotationAngles();
objPlayer.position.y += -PLAYER_EYE_HEIGHT + 0.875f;
/* Only one of the two code blocks below should be active at the same time.
Neither of them is working as expected; the bounding box is all
messed up with either code. */
// Attempt #1
glPushMatrix();
glTranslatef(objPlayer.position.x, objPlayer.position.y, objPlayer.position.z);
glRotatef(objPlayer.rotation.y + 180.0f, 0.0f, 1.0f, 0.0f);
glCallList(gameDisplayLists.player);
glGetFloatv(GL_MODELVIEW_MATRIX, mvMatrix);
glPopMatrix();
// Attempt #2
glPushMatrix();
glLoadIdentity();
glTranslatef(objPlayer.position.x, objPlayer.position.y, objPlayer.position.z);
glRotatef(objPlayer.rotation.y + 180.0f, 0.0f, 1.0f, 0.0f);
glGetFloatv(GL_MODELVIEW_MATRIX, mvMatrix);
glPopMatrix();
calculateAxisAlignedBoundingBox(objPlayer.model, mvMatrix);
}
}
But it doesn't work as it should... What am I doing wrong?
Simply recompute the AABB of the transformed AABB. This means transforming 8 vertices (8 vertex-matrix multiplications) and 8 vertex-vertex comparisons.
So at initialisation, you compute your AABB in model space: for each x,y,z of each vertex of the model, you check against xmin, xmax, ymin, ymax, etc.
For each frame, you generate a new transformation matrix. In OpenGL this is done with glLoadIdentity followed by glTranslate/glRotate/glScale (if using the old API). This is the Model Matrix, as lmmilewski said.
You compute this transformation matrix a second time (outside OpenGL, for instance using glm). You also can get OpenGL's resulting matrix using glGet.
You multiply each of your AABB's eight vertices by this matrix. Use glm for matrix-vector multiplication. You'll get your transformed AABB (in world space). It is most probably rotated (not axis-aligned anymore).
Now your algorithm probably only works with axis-aligned stuff, hence your question. So now you approximate the new bounding box of the transformed model by taking the bounding box of the transformed bounding box:
For each x,y,z of each vertex of the new AABB, you check against xmin, xmax, ymin, ymax, etc. This gives you a world-space AABB that you can use in your clipping algorithm.
This is not optimal (AABB-wise). You'll get lots of empty space, but performance-wise it's much, much better than recomputing the AABB of the whole mesh.
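For illustration, a minimal sketch of that recompute step using glm (the names model, localMin and localMax are placeholders for your model matrix and object-space extremes, not identifiers from the question's code):
#include <glm/glm.hpp>
// Transform the 8 object-space corners and take the componentwise min/max
// of the results to get the new world-space AABB.
void recomputeAABB(const glm::mat4 &model,
                   const glm::vec3 &localMin, const glm::vec3 &localMax,
                   glm::vec3 &worldMin, glm::vec3 &worldMax)
{
    worldMin = glm::vec3( 1e30f);
    worldMax = glm::vec3(-1e30f);
    for (int i = 0; i < 8; ++i) {
        // Pick min or max per axis from the corner index's bits.
        glm::vec3 corner((i & 1) ? localMax.x : localMin.x,
                         (i & 2) ? localMax.y : localMin.y,
                         (i & 4) ? localMax.z : localMin.z);
        glm::vec3 p = glm::vec3(model * glm::vec4(corner, 1.0f));
        worldMin = glm::min(worldMin, p);
        worldMax = glm::max(worldMax, p);
    }
}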
As for the transformation matrix, in drawObjectPlayer:
glLoadIdentity();
glTranslatef(objPlayer.position.x, objPlayer.position.y, objPlayer.position.z);
glRotatef(objPlayer.rotation.y + 180.0f, 0.0f, 1.0f, 0.0f);
glGetFloatv(GL_MODELVIEW_MATRIX, mvMatrix);
// Now you've got your OWN Model Matrix (don't trust the GL_MODELVIEW_MATRIX flag : this is a workaround, and I know what I'm doing ^^ )
glLoadIdentity(); // Reset the matrix so that you won't make the transformations twice
gluLookAt( whatever you wrote here earlier )
glTranslatef(objPlayer.position.x, objPlayer.position.y, objPlayer.position.z);
glRotatef(objPlayer.rotation.y + 180.0f, 0.0f, 1.0f, 0.0f);
// Now OpenGL is happy, he's got his MODELVIEW matrix correct (gluLookAt is the VIEW part; Translate/Rotate is the MODEL part)
glCallList(gameDisplayLists.player); // Transformed correctly
I can't explain it further than that... as said in the comments, you have to do it twice. You wouldn't have these problems and ugly workarounds in OpenGL 3, btw, because you'd be fully responsible for your own matrices. Equivalent in OpenGL 2:
glm::mat4 ViewMatrix = glm::lookAt(...);
glm::mat4 ModelMatrix = glm::rotate() * glm::translate(...);
// Use ModelMatrix for whatever you want
glm::mat4 ModelViewMatrix = ViewMatrix * ModelMatrix;
glLoadMatrixf( &ModelViewMatrix[0][0] ); // In OpenGL 3 you would use a uniform instead
Much cleaner, right?
Yep, you can transform the eight corner vertices and do min/max on the results, but there is a faster way, as described by Jim Arvo in his chapter of Graphics Gems (1990).
Performance-wise, Arvo's method is roughly equivalent to two transforms instead of eight and basically goes as follows (this transforms box A into box B):
// Split the transform into a translation vector (T) and a 3x3 rotation (M).
B = zero-volume AABB at T
for each element (i,j) of M:
a = M[i][j] * A.min[j]
b = M[i][j] * A.max[j]
B.min[i] += a < b ? a : b
B.max[i] += a < b ? b : a
return B
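As a rough C translation of that pseudocode (the AABB struct and the M/T parameter layout below are assumptions for illustration, not code from the question):
typedef struct { float min[3], max[3]; } AABB;
/* Transform box A by the 3x3 rotation M (indexed M[row][col]) and translation T. */
AABB transformAABB(AABB A, const float M[3][3], const float T[3])
{
    AABB B;
    for (int i = 0; i < 3; ++i) {
        B.min[i] = B.max[i] = T[i];          /* start as a zero-volume box at T */
        for (int j = 0; j < 3; ++j) {
            float a = M[i][j] * A.min[j];
            float b = M[i][j] * A.max[j];
            B.min[i] += (a < b) ? a : b;
            B.max[i] += (a < b) ? b : a;
        }
    }
    return B;
}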
One variation of Arvo's method uses a center / extent representation rather than min / max, which is described by Christer Ericson in Real-Time Collision Detection.
Complete C code for the Graphics Gems article can be found here.
To do that you have to loop over every vertex, calculate its position in the world (multiply by the modelview matrix) and find the minimum / maximum vertex coordinates within every object (just like when you computed it for the first time).
You can scale your AABB a bit so that you don't have to recalculate it - it is enough to enlarge it by a factor of sqrt(2): your rotated object then always fits in the AABB.
There is also the question of which direction you rotate in. If it's always the same one, you can enlarge the AABB only in that direction.
Optionally, you can use bounding spheres instead of AABBs. Then you don't care about rotation, and scaling is not a problem.
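For example, a minimal sketch of the bounding-sphere idea, reusing the Vector3D type and box variable from the question's code (only the center needs translating each frame; the radius never changes under rotation):
// Compute a rotation-invariant bounding sphere from the object-space AABB.
Vector3D center((box.min.x + box.max.x) * 0.5f,
                (box.min.y + box.max.y) * 0.5f,
                (box.min.z + box.max.z) * 0.5f);
float dx = box.max.x - center.x;
float dy = box.max.y - center.y;
float dz = box.max.z - center.z;
float radius = sqrtf(dx * dx + dy * dy + dz * dz);   // sqrtf needs <math.h>
// Each frame: sphere center = center + objPlayer.position; radius stays the same.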
To quote a previous response on AABBs here on Stack Overflow:
"Sadly yes, if your character rotates you need to recalculate your AABB . . .
Skurmedel
The respondent's suggestion, and mine, is to implement oriented bounding boxes once you have AABBs working. Also note that you can make AABBs of portions of a mesh to fudge collision detection with greater accuracy than one enormous box for each object.
Why not use your GPU? Today I implemented a solution to this problem by rendering a couple of frames.
Temporarily place your camera above the object, pointing down at it. Render only your object, without lights or anything. Use an orthographic projection too.
Then read the frame buffer. Rows and columns of black pixels mean the model isn't there; hit a white pixel and you've hit one of the model's AABB borders.
I know this isn't a solution for all the cases, but with some prior knowledge, this is very efficient.
For rendering off screen see here.
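A rough readback sketch of that idea (assuming the object was rendered in white on a black background into a w x h orthographic viewport; all names here are illustrative, not from the answer):
#include <stdlib.h>
unsigned char *pixels = (unsigned char *)malloc(w * h * 3);
glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels);
int minX = w, maxX = -1, minY = h, maxY = -1;
for (int y = 0; y < h; ++y)
    for (int x = 0; x < w; ++x)
        if (pixels[(y * w + x) * 3] != 0) {   /* found a lit pixel */
            if (x < minX) minX = x;
            if (x > maxX) maxX = x;
            if (y < minY) minY = y;
            if (y > maxY) maxY = y;
        }
free(pixels);
/* minX..maxX and minY..maxY bound the object's footprint in this view;
   map them back through the orthographic projection to get two AABB axes,
   then repeat from a second direction to get the third. */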
I'm trying to draw only a sector/part of a circle, but currently I always get a full circle.
I use this to draw a circle:
glColor3f (0.25, 1.0, 0.25);
GLfloat angle, raioX=0.3f, raioY=0.3f;
GLfloat circle_points = 100.0f;
glBegin(GL_LINE_LOOP);
for (int i = 0; i < circle_points; i++) {
angle = 2*PI*i/circle_points;
glVertex2f(0.5+cos(angle)*raioX, 0.5+sin(angle)*raioY);
}
glEnd();
Assuming you want a sector as illustrated in the following diagram:
You will need to re-write your code this way:
glBegin (GL_LINE_LOOP);
glVertex2f (0.5f, 0.5f);
for (int i = 0; i < circle_points; i++) {
angle = 2*PI*i/circle_points;
glVertex2f (0.5+cos(angle)*raioX, 0.5+sin(angle)*raioY);
}
glEnd ();
The only thing I changed was the addition of the point (0.5, 0.5) at the center of your circle. Without that point, you wind up drawing a segment instead of a sector.
As BDL points out, your original code drew a full circle. Your angle for 1/4 of a circle should be Pi/2 rather than 2*Pi. So at minimum, you would also need to re-write this line:
angle = PI * 0.5f * i / circle_points;
BDL's answer shows a more efficient approach to this. Though it draws an arc, which may or may not be what you want. Either way, you have enough code now to draw all three things in the diagram above.
The code you will see frequently using a cos() and sin() call for each point is correct, but very inefficient. Those are fairly expensive functions, and it's easy to write the code so that they are only needed once.
The idea is that you obtain each point from the previous point by rotating it by the angle increment. The rotation itself can be performed by a 2x2 transformation matrix. This reduces the calculation of each point to a few additions and multiplications.
The code will then look something like this:
// Calculate angle increment from point to point, and its cos/sin.
float angInc = 0.5f * PI / (circle_points - 1.0f);
float cosInc = cos(angInc);
float sinInc = sin(angInc);
// Start with vector (1.0f, 0.0f), ...
float xc = 1.0f;
float yc = 0.0f;
// ... and then rotate it by angInc for each point.
glBegin(GL_LINE_LOOP);
for (int i = 0; i < circle_points; i++) {
glVertex2f(0.5f + xc, 0.5f + yc);
float xcNew = cosInc * xc - sinInc * yc;
yc = sinInc * xc + cosInc * yc;
xc = xcNew;
}
glEnd();
As a subtle detail, note that if you want to draw a quarter circle with circle_points points, including the start and end point, you need to divide the angle range by circle_points - 1 to obtain the angle increment. It's the thing with the number of fence posts and number of gaps between them...
This will draw a circle segment. Andon already elaborated on the difference between a segment and a sector.
The above shares code with my own answer here: https://stackoverflow.com/a/25321141/3530129, which shows how to draw a circle with modern OpenGL.
When drawing a fraction of a circle, one needs to limit the angle range in which the points are placed. circle_points then defines how many subparts this circle arc is divided into. In addition (and as pointed out by @Andon M. Coleman), using a GL_LINE_LOOP might not be the correct choice, since it will always close the line from the last point back to the first.
Your code could be modified like this:
glColor3f (0.25, 1.0, 0.25);
GLfloat angle, raioX=0.3f, raioY=0.3f;
GLfloat circle_points = 100;
GLfloat circle_angle = PI / 2.0f;
glBegin(GL_LINE_STRIP);
for (int i = 0; i <= circle_points; i++) {
GLfloat current_angle = circle_angle*i/circle_points;
glVertex2f(0.5+cos(current_angle)*raioX, 0.5+sin(current_angle)*raioY);
}
glEnd();
I'm creating some random vectors/directions in a loop as a dome shape like this:
void generateDome(glm::vec3 direction)
{
for(int i=0;i<1000;++i)
{
float xDir = randomByRange(-1.0f, 1.0f);
float yDir = randomByRange(0.0f, 1.0f);
float zDir = randomByRange(-1.0f, 1.0f);
auto vec = glm::vec3(xDir, yDir, zDir);
vec = glm::normalize(vec);
...
//some transformation with direction-vector
}
...
}
This creates vectors as a dome-shape in +y direction (0,1,0):
Now I want to rotate the vec-Vector by a given direction-Vector like (1,0,0).
This should rotate the "dome" to the x-direction like this:
How can I achieve this? (preferably with glm)
A rotation is generally defined using some sort of offset (axis-angle, quaternion, euler angles, etc) from a starting position. What you are looking for would be more accurately described (in my opinion) as a re-orientation. Luckily this isn't too hard to do. What you need is a change-of-basis matrix.
First, lets just define what we're working with in code:
using glm::vec3;
using glm::mat3;
vec3 direction; // points in the direction of the new Y axis
vec3 vec; // This is a randomly generated point that we will
// eventually transform using our base-change matrix
To calculate the matrix, you need to create unit vectors for each of the new axes. From the example above it becomes apparent that you want the vector provided to become the new Y-axis:
vec3 new_y = glm::normalize(direction);
Now, calculating the X and Z axes will be a tad more complicated. We know that they must be orthogonal to each other and to the Y axis calculated above. The most logical way to construct the Z axis is to assume that the rotation is taking place in the plane defined by the old Y axis and the new Y axis. By using the cross-product we can calculate this plane's normal vector, and use that for the Z axis:
vec3 new_z = glm::normalize(glm::cross(new_y, vec3(0, 1, 0)));
Technically the normalization isn't necessary here since both input vectors are already normalized, but for the sake of clarity, I've left it. Also note that there is a special case when the input vector is colinear with the Y-axis, in which case the cross product above is undefined. The easiest way to fix this is to treat it as a special case. Instead of what we have so far, we'd use:
if (direction.x == 0 && direction.z == 0)
{
if (direction.y < 0) // rotate 180 degrees
vec = vec3(-vec.x, -vec.y, vec.z);
// else if direction.y >= 0, leave `vec` as it is.
}
else
{
vec3 new_y = glm::normalize(direction);
vec3 new_z = glm::normalize(glm::cross(new_y, vec3(0, 1, 0)));
// code below will go here.
}
For the X-axis, we can cross our new Y-axis with our new Z-axis. This yields a vector perpendicular to both of the others axes:
vec3 new_x = glm::normalize(glm::cross(new_y, new_z));
Again, the normalization in this case is not really necessary, but if y or z were not already unit vectors, it would be.
Finally, we combine the new axis vectors into a basis-change matrix:
mat3 transform = mat3(new_x, new_y, new_z);
Multiplying a point vector (vec3 vec) by this yields a new point at the same position, but relative to the new basis vectors (axes):
vec = transform * vec;
Do this last step for each of your randomly generated points and you're done! No need to calculate angles of rotation or anything like that.
As a side note, your method of generating random unit vectors will be biased towards directions away from the axes. This is because the probability of a particular direction being chosen is proportional to the distance to the furthest point possible in a given direction. For the axes, this is 1.0. For directions like eg. (1, 1, 1), this distance is sqrt(3). This can be fixed by discarding any vectors which lie outside the unit sphere:
glm::vec3 vec;
do
{
float xDir = randomByRange(-1.0f, 1.0f);
float yDir = randomByRange(0.0f, 1.0f);
float zDir = randomByRange(-1.0f, 1.0f);
vec = glm::vec3(xDir, yDir, zDir);
} while (glm::length(vec) > 1.0f); // you could also use glm::length2 instead, and avoid a costly sqrt().
vec = glm::normalize(vec);
This would ensure that all directions have equal probability, at the cost that if you're extremely unlucky, the points picked may lie outside the unit sphere over and over again, and it may take a long time to generate one that's inside. If that's a problem, it could be modified to limit the iterations: while (++i < 4 && ...) or by increasing the radius at which a point is accepted every iteration. When it is >= sqrt(3), all possible points would be considered valid, so the loop would end. Both of these methods would result in a slight biasing away from the axes, but in almost any real situation, it would not be detectable.
Putting all the code above together, combined with your code, we get:
void generateDome(glm::vec3 direction)
{
// Calculate change-of-basis matrix
glm::mat3 transform;
if (direction.x == 0 && direction.z == 0)
{
if (direction.y < 0) // rotate 180 degrees
transform = glm::mat3(glm::vec3(-1.0f, 0.0f, 0.0f),
glm::vec3( 0.0f, -1.0f, 0.0f),
glm::vec3( 0.0f, 0.0f, 1.0f));
// else if direction.y >= 0, leave transform as the identity matrix.
}
else
{
vec3 new_y = glm::normalize(direction);
vec3 new_z = glm::normalize(glm::cross(new_y, vec3(0, 1, 0)));
vec3 new_x = glm::normalize(glm::cross(new_y, new_z));
transform = mat3(new_x, new_y, new_z);
}
// Use the matrix to transform random direction vectors
vec3 point;
for(int i=0;i<1000;++i)
{
int k = 4; // maximum number of direction vectors to guess when looking for one inside the unit sphere.
do
{
point.x = randomByRange(-1.0f, 1.0f);
point.y = randomByRange(0.0f, 1.0f);
point.z = randomByRange(-1.0f, 1.0f);
} while (--k > 0 && glm::length2(point) > 1.0f);
point = glm::normalize(point);
point = transform * point;
// ...
}
// ...
}
You need to create a rotation matrix. For that you first need an identity matrix, which you can create like this:
glm::mat4 rotationMat(1); // Creates an identity matrix
Now you can rotate the vector space with
rotationMat = glm::rotate(rotationMat, 45.0f, glm::vec3(0.0, 0.0, 1.0));
This will rotate the vector space by 45.0 degrees around the z-axis (as shown in your screenshot). Now you're almost done. To rotate your vec you can write
vec = glm::vec3(rotationMat * glm::vec4(vec, 1.0));
Note: Because you have a 4x4 matrix, you need a vec4 to multiply it with the matrix. Generally it is a good idea to always use vec4 when working with OpenGL, because vectors in smaller dimensions will be converted to homogeneous vertex coordinates anyway.
EDIT: You can also try to use GTX Extensions (Experimental) by including <glm/gtx/rotate_vector.hpp>
EDIT 2: When you want to rotate the dome "towards" a given direction, you can get your rotation axis using the cross product between that direction and the "up" vector of the dome. Let's say you want to rotate the dome "towards" (1.0, 1.0, 1.0) and the "up" direction is (0.0, 1.0, 0.0); then use:
glm::vec3 cross = glm::cross(up, direction);
rotationMat = glm::rotate(rotationMat, 45.0f, cross);
to get your rotation matrix. The cross product returns a vector that is orthogonal to "up" and "direction", and that's the one you want to rotate around. Hope this helps.
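If you want the actual angle between the two directions instead of the hard-coded 45 degrees, a small sketch (assuming direction is not parallel to up; this glm API takes degrees, matching the calls above):
glm::vec3 up(0.0f, 1.0f, 0.0f);
glm::vec3 dir = glm::normalize(direction);
// Clamp the dot product to avoid NaN from acosf on tiny floating-point overshoot.
float angleDeg = glm::degrees(acosf(glm::clamp(glm::dot(up, dir), -1.0f, 1.0f)));
glm::vec3 axis = glm::normalize(glm::cross(up, dir));
glm::mat4 rotationMat = glm::rotate(glm::mat4(1.0f), angleDeg, axis);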
I'm trying to display a triangle on the screen and move around with keyboard+mouse, but the closer the object is to the edge of the screen, the more it stretches.
Here's the relevant code:
fieldOfView = 45;
x += mouseSpeed * deltaTime * deltaMouseX;
y += mouseSpeed * deltaTime * deltaMouseY;
position = glm::vec3(0,0,5);
forward = glm::vec3(cos(y) * sin (x),
sin(y),
cos(y) * cos(x));
right = glm::vec3(sin(x - 3.14f/2.0f),
0,
cos(x - 3.14f/2.0f));
up = glm::cross(right,forward);
projectionMatrix = glm::perspective(fieldOfView, 4.0f / 3.0f, 0.1f, 100.0f);
viewMatrix = glm::lookAt(position,position + forward, up);
this is updated every frame. In my vertex shader:
gl_Position = projection * view * model * vec4(vert,1)
where vert is my object coordinates, projection is my projectionMatrix, and view is my viewMatrix. I have a feeling the problem is with my view matrix, but I can't find anything wrong with it. Let me know if you need more code.
projectionMatrix = glm::perspective(fieldOfView, 4.0f / 3.0f, 0.1f, 100.0f);
The fieldOfView parameter specifies the field of view angle, in degrees, in the y direction (it describes the angular extent of the scene imaged by the camera). For a given camera-subject distance, longer lenses (smaller FOV) magnify the subject more. For a given subject magnification (and thus different camera-subject distances), longer lenses (smaller FOV) appear to compress distance, while wider lenses (larger FOV) appear to expand the distance between objects.
Another result of using a wide-angle lens is a greater apparent perspective distortion when the camera is not aligned perpendicularly to the subject: parallel lines converge at the same rate as with a normal lens, but converge more due to the wider total field. For example, buildings appear to be falling backwards much more severely when the camera is pointed upward from ground level than they would if photographed with a normal lens at the same distance from the subject, because more of the subject building is visible in the wide-angle shot. So the wider your FOV, the more distortion you see.
How is yours initialized?
A normal FOV angle is 45.
It does look at the target when I move the target to the right, and keeps looking at it until it goes 180 degrees from the -z axis, where it decides to go the other way.
Matrix4x4 camera::GetViewMat()
{
Matrix4x4 oRotate, oView;
oView.SetIdentity();
Vector3 lookAtDir = m_targetPosition - m_camPosition;
Vector3 lookAtHorizontal = Vector3(lookAtDir.GetX(), 0.0f, lookAtDir.GetZ());
lookAtHorizontal.Normalize();
float angle = acosf(Vector3(0.0f, 0.0f, -1.0f).Dot(lookAtHorizontal));
Quaternions horizontalOrient(angle, Vector3(0.0f, 1.0f, 0.0f));
ori = horizontalOrient;
ori.Conjugate();
oRotate = ori.ToMatrix();
Vector3 inverseTranslate = Vector3(-m_camPosition.GetX(), -m_camPosition.GetY(), -m_camPosition.GetZ());
oRotate.Transform(inverseTranslate);
oRotate.Set(0, 3, inverseTranslate.GetX());
oRotate.Set(1, 3, inverseTranslate.GetY());
oRotate.Set(2, 3, inverseTranslate.GetZ());
oView = oRotate;
return oView;
}
As promised, a bit of code showing the way I'd make a camera look at a specific point in space at all times.
First of all, we'd need a method to construct a quaternion from an angle and an axis, I happen to have that on pastebin, the angle input is in radians:
http://pastebin.com/vLcx4Qqh
Make sure you don't input the axis (0,0,0), which wouldn't make any sense whatsoever.
Now the actual update method: we need to get the quaternion rotating the camera from its default orientation to pointing towards the target point. PLEASE note I just wrote this off the top of my head; it probably needs a little debugging and may need a little optimization, but this should at least give you a push in the right direction.
void camera::update()
{
// First get the direction from the camera's position to the target point
vec3 lookAtDir = m_targetPoint - m_position;
// I'm going to divide the vector into two 'components', the Y axis rotation
// and the Up/Down rotation, like a regular camera would work.
// First to calculate the rotation around the Y axis, so we zero out the y
// component:
vec3 lookAtHorizontal = vec3(lookAtDir.x, 0.0f, lookAtDir.z).normalize();
// Get the quaternion from 'default' direction to the horizontal direction
// In this case, 'default' direction is along the -z axis, like most OpenGL
// programs. Make sure the projection matrix works according to this.
float angle = acos(vec3(0.0f, 0.0f, -1.0f).dot(lookAtHorizontal));
quaternion horizontalOrient(angle, vec3(0.0f, 1.0f, 0.0f));
// Since we already stripped the Y component, we can simply get the up/down
// rotation from it as well.
angle = acos(lookAtDir.normalize().dot(lookAtHorizontal));
if(angle) horizontalOrient *= quaternion(angle, lookAtDir.cross(lookAtHorizontal));
// ...
m_orientation = horizontalOrient;
}
Now to actually take m_orientation and m_position and get the world -> camera matrix
// First inverse each element (-position and inverse the quaternion),
// the position is rotated since the position within a matrix is 'added' last
// to the output vector, so it needs to account for rotation of the space.
mat3 rotationMatrix = m_orientation.inverse().toMatrix();
vec3 inverseTranslate = rotationMatrix * -m_position; // Note the minus
mat4 matrix = mat4(rotationMatrix); // the 3x3 matrix is expanded to 4x4; the last entry (bottom right of the matrix) is a 1.0f like an identity matrix would be.
// This bit is row-major in my case, you just need to set the translation of the matrix.
matrix[3] = inverseTranslate.x;
matrix[7] = inverseTranslate.y;
matrix[11] = inverseTranslate.z;
EDIT I think it should be obvious but just for completeness, .dot() takes the dot product of the vectors, .cross() takes the cross product, the object executing the method is vector A, and the parameter of the method is vector B.
I want to know how to draw a spiral.
I wrote this code:
void RenderScene(void)
{
glClear(GL_COLOR_BUFFER_BIT);
GLfloat x,y,z = -50,angle;
glBegin(GL_POINTS);
for(angle = 0; angle < 360; angle += 1)
{
x = 50 * cos(angle);
y = 50 * sin(angle);
glVertex3f(x,y,z);
z+=1;
}
glEnd();
glutSwapBuffers();
}
If I don't include the z term I get a perfect circle, but when I include z I only get 3 dots. What might have happened?
I set the viewport using glViewport(0,0,w,h). To include z, should I do anything to set up the viewport in the z direction?
You see points because you are drawing points with glBegin(GL_POINTS).
Try replacing it by glBegin(GL_LINE_STRIP).
NOTE: when you saw the circle you were also drawing only points, but they were drawn close enough together to appear as a connected circle.
Also, you may not have set up the depth buffer to accept values in the range z = [-50, 310] that you use. These should be provided as the zNear and zFar clipping planes in your gluPerspective, glOrtho() or glFrustum() call.
NOTE: this would explain why, with the z values included, you only see a few points: the other points are clipped because they are outside the z-buffer range.
UPDATE AFTER YOU HAVE SHOWN YOUR CODE:
glOrtho(-100*aspectratio,100*aspectratio,-100,100,1,-1); would only allow z-values in the [-1, 1] range, which is why only the three points with z = -1, z = 0 and z = 1 will be drawn (thus 3 points).
Finally, you're probably viewing the spiral from the top, looking directly in the direction of the rotation axis. If you are not using a perspective projection (but an isometric one), the spiral will still show up as a circle. You might want to change your view with gluLookAt().
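For instance, a purely illustrative gluLookAt call that views the spiral from the side rather than down its axis (the eye and center values are guesses for a spiral of radius 50 spanning z = -50..310, not taken from your code):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(250.0, 150.0, 130.0,   /* eye: off to the side of the spiral      */
          0.0, 0.0, 130.0,       /* center: middle of the spiral's z extent */
          0.0, 1.0, 0.0);        /* up vector                               */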
EXAMPLE OF SETTING UP PERSPECTIVE
The following code is taken from the excellent OpenGL tutorials by NeHe:
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION); // Select The Projection Matrix
glLoadIdentity(); // Reset The Projection Matrix
// Calculate The Aspect Ratio Of The Window
gluPerspective(45.0f,(GLfloat)width/(GLfloat)height,0.1f,100.0f);
glMatrixMode(GL_MODELVIEW); // Select The Modelview Matrix
glLoadIdentity(); // Reset The Modelview Matrix
Then, your draw loop would look something like this:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear The Screen And The Depth Buffer
glLoadIdentity();
glTranslatef(-1.5f,0.0f,-6.0f); // Move Left 1.5 Units And Into The Screen 6.0
glBegin(GL_TRIANGLES); // Drawing Using Triangles
glVertex3f( 0.0f, 1.0f, 0.0f); // Top
glVertex3f(-1.0f,-1.0f, 0.0f); // Bottom Left
glVertex3f( 1.0f,-1.0f, 0.0f); // Bottom Right
glEnd();
Of course, you should adapt this example code to your needs.
catchmeifyoutry provides a perfectly capable method, but it will not draw a spatially accurate 3D spiral, as any render call using a GL_LINE primitive type will rasterize to a fixed pixel width. This means that as you change your perspective / view, the lines will not change width. To accomplish this, use a geometry shader in combination with GL_LINE_STRIP_ADJACENCY to create 3D geometry that can be rasterized like any other 3D geometry. (This does require that you use the post-fixed-function pipeline, however.)
I recommend you try catchmeifyoutry's method first, as it will be much simpler. If you are not satisfied, try the method I described. You can use the following post as guidance:
http://prideout.net/blog/?tag=opengl-tron
Here is my Spiral function in C. The points are saved into a list which can be easily drawn by OpenGL (e.g. connect adjacent points in list with GL_LINES).
cx,cy ... spiral centre x and y coordinates
r ... max spiral radius
num_segments ... number of segments the spiral will have
SOME_LIST* UniformSpiralPoints(float cx, float cy, float r, int num_segments)
{
SOME_LIST *sl = newSomeList();
int i;
for(i = 0; i < num_segments; i++)
{
float theta = 2.0f * 3.1415926f * i / num_segments; //the current angle
float x = (r/num_segments)*i * cosf(theta); //the x component
float y = (r/num_segments)*i * sinf(theta); //the y component
//add (x + cx, y + cy) to list sl
}
return sl;
}
An example image with r = 1, num_segments = 1024:
P.S. There is a difference between using cos(double) and cosf(float). You use a float variable with the double-precision cos function.