Have a 3D object look at another 3D object via a matrix? - c++

What I have currently causes my 3D object to become flat, but it is looking at my target.
Vector4 up;
newMatrix.SetIdentity();
up.set_x(0);
up.set_y(1);
up.set_z(0);
Vector4 zaxis = player_->transform().GetTranslation() - spider_->transform().GetTranslation();
zaxis.Normalise();
Vector4 xaxis = CrossProduct(up, zaxis);
xaxis.Normalise();
Vector4 yaxis = CrossProduct(zaxis, xaxis);
newMatrix.set_m(0, 0, xaxis.x()); newMatrix.set_m(0, 1, xaxis.y()); newMatrix.set_m(0, 2, xaxis.z());
newMatrix.set_m(1, 0, yaxis.x()); newMatrix.set_m(1, 1, yaxis.y()); newMatrix.set_m(1, 2, yaxis.z());
newMatrix.set_m(2, 0, zaxis.x()); newMatrix.set_m(2, 1, zaxis.y()); newMatrix.set_m(2, 2, zaxis.z());
Excuse the method for putting values into the matrix, I'm working with what my framework gives me.
Vector4 Game::CrossProduct(Vector4 v1, Vector4 v2)
{
    Vector4 crossProduct;
    crossProduct.set_x((v1.y() * v2.z()) - (v2.y() * v2.z()));
    crossProduct.set_y((v1.z() * v2.x()) - (v1.z() * v2.x()));
    crossProduct.set_z((v1.x() * v2.y()) - (v1.x() * v2.y()));
    return crossProduct;
}
What am I doing wrong here?
Note that I have already tried adding the fourth row, with the 1 in the corner, just in case; it made no change.

You get a problem when (0,1,0) is nearly parallel to the direction you want to look at. Then the cross product fails, leaving one or two basis vectors zero, which can produce a 2D appearance. But that would happen only if your objects are offset from each other only along the y axis. To avoid it, test the dot product between the up vector and the view direction, and if the absolute value of the result is bigger than 0.7, use (1,0,0) instead (as up or right or whatever...).
Also, as we know nothing about your notation, we cannot confirm that your setting of the matrix is correct (it may or may not be; it could be transposed, etc.). For more info take a look at:
Understanding 4x4 homogenous transform matrices
But most likely your matrix also holds the perspective transform, and by setting its cells you are overriding it completely, leading to no perspective and hence 2D output. In that case you should multiply the original perspective matrix with your new matrix and use the result.
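On top of that, note that the CrossProduct in the question repeats the same operand on both sides of each subtraction, so it cannot return a correct perpendicular. As a sketch only (the Vector4 accessors are assumed from the question's framework, and DotProduct is a hypothetical helper analogous to CrossProduct), a correct component-wise cross product plus the up-vector fallback described above could look like this:
#include <cmath> // for std::fabs

Vector4 Game::CrossProduct(Vector4 v1, Vector4 v2)
{
    Vector4 r;
    r.set_x((v1.y() * v2.z()) - (v1.z() * v2.y()));
    r.set_y((v1.z() * v2.x()) - (v1.x() * v2.z()));
    r.set_z((v1.x() * v2.y()) - (v1.y() * v2.x()));
    return r;
}

// Basis construction with a fallback when zaxis is nearly parallel to (0,1,0).
up.set_x(0); up.set_y(1); up.set_z(0);
if (std::fabs(DotProduct(up, zaxis)) > 0.7f) // DotProduct is assumed, not from the question
{
    up.set_x(1); up.set_y(0); up.set_z(0);
}
Vector4 xaxis = CrossProduct(up, zaxis); xaxis.Normalise();
Vector4 yaxis = CrossProduct(zaxis, xaxis);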

Related

OpenGL - Local Up and Right From Matrix 4x4?

I have a transform matrix 4x4 built in the following way:
glm::mat4 matrix;
glm::quat orientation = toQuaternion(rotation);
matrix*= glm::mat4_cast(orientation);
matrix= glm::translate(matrix, position);
matrix= glm::scale(matrix, scale);
orientation = toQuaternion(localRotation);
matrix*= glm::mat4_cast(orientation);
matrix= glm::scale(matrix, localScale);
where rotation, position, scale, localRotation, and localScale are all vec3s. As per the advice of many different questions on many different forums, it should be possible to fish the local directions out of the resulting matrix like so:
right = glm::vec3(matrix[0][0], matrix[0][1], matrix[0][2]);
up = glm::vec3(matrix[1][0], matrix[1][1], matrix[1][2]);
forward = glm::vec3(matrix[2][0], matrix[2][1], matrix[2][2]);
where all these directions are normalized. I then use these directions to get a view matrix like so:
glm::lookAt(position, position + forward, up);
It all works great - EXCEPT: when I'm flying around a scene, the right and up vectors are totally erroneous. The forward direction is always exactly as it should be. When I'm at 0, 0, -1 looking at 0,0,0, all the directions are correct. But when I look in different directions (except for the inverse of looking at the origin from 0,0,-1), right and up are inconsistent. They aren't world vectors, nor are they local vectors. They seem to be almost random. Where am I going wrong? How can I get consistent local up and local right vectors from the 4x4 transform matrix?
matrix is a local-to-world transformation, whereas a view matrix is world-to-local, which means that in this case matrix is the inverse of the view matrix. The code you currently have only works for a view matrix. Simply exchange the rows and columns:
right = glm::vec3(matrix[0][0], matrix[1][0], matrix[2][0]);
up = glm::vec3(matrix[0][1], matrix[1][1], matrix[2][1]);
forward = glm::vec3(matrix[0][2], matrix[1][2], matrix[2][2]);
A possible reason that it works when you are at (0, 0, -1) looking at (0, 0, 0) is that the rotation/scaling part of the view matrix then looks like:
-1 0 0
0 1 0
0 0 -1
The corresponding part of matrix is the inverse of the above, which happens to be identical, because this particular matrix is symmetric and orthonormal. Try it with another direction.
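If you'd rather not index the transposed elements by hand, a small sketch with GLM's matrix_access header does the same row extraction (normalizing in case any scale is baked into the matrix):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_access.hpp>

// Per the answer above: for a local-to-world matrix, take the rows of the
// upper-left 3x3 block.
glm::vec3 right   = glm::normalize(glm::vec3(glm::row(matrix, 0)));
glm::vec3 up      = glm::normalize(glm::vec3(glm::row(matrix, 1)));
glm::vec3 forward = glm::normalize(glm::vec3(glm::row(matrix, 2)));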

Rotating an Object Around an Axis

I have a circular shape object, which I want to rotate like a fan along its own axis.
I can change the rotation in any direction i.e. dx, dy, dz using my transformation matrix.
The following is the code:
Matrix4f matrix = new Matrix4f();
matrix.setIdentity();
Matrix4f.translate(translation, matrix, matrix);
Matrix4f.rotate((float) Math.toRadians(rx), new Vector3f(1,0,0), matrix, matrix);
Matrix4f.rotate((float) Math.toRadians(ry), new Vector3f(0,1,0), matrix, matrix);
Matrix4f.rotate((float) Math.toRadians(rz), new Vector3f(0,0,1), matrix, matrix);
Matrix4f.scale(new Vector3f(scale,scale,scale), matrix, matrix);
My vertex code:
vec4 worldPosition = transformationMatrix * vec4(position,1.0);
vec4 positionRelativeToCam = viewMatrix*worldPosition;
gl_Position = projectionMatrix *positionRelativeToCam;
Main Game Loop:
Object.increaseRotation(dxf,dyf,dzf);
But it's not rotating along its own axis. What am I missing here?
I want something like this. Please help.
You should get rid of Euler angles for this.
Object/mesh geometry
You need to be aware of how your object is oriented in its local space. For example, let's assume this:
So in this case the main rotation is around axis z. If your mesh is defined so that the rotation axis is not aligned to any of the axes (x, y or z), or the center point is not (0,0,0), then that will cause you problems. The remedy is either to change your mesh geometry, or to create a special constant transform matrix M0 that transforms all vertices from the mesh LCS (local coordinate system) to a different one that is axis-aligned and in which the center of rotation is at zero along the axis of rotation.
In the latter case any operation on object matrix M would be done like this:
M'=M.M0.operation.Inverse(M0)
or in reverse/inverse order (depending on your matrix/vertex multiplication and row/column order conventions). If your mesh is already centered and axis-aligned, then just do this instead:
M'=M.operation
The operation is the transform matrix of the change increment (for example a rotation matrix). M is the object's current transform matrix (see Object transform matrix below) and M' is its new version after the operation is applied.
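As a minimal GLM sketch of that pivot pattern (rotationCenter and angle are hypothetical; with GLM's column-vector convention the chain reads M' = M · M0^-1 · operation · M0, one of the reverse/inverse variants mentioned above):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// M0: mesh LCS -> rotation-aligned space (moves the hypothetical rotation
// center to the origin so the rotation axis passes through (0,0,0)).
glm::mat4 M0 = glm::translate(glm::mat4(1.0f), -rotationCenter);

// The incremental operation: a small rotation around the z axis (angle in radians).
glm::mat4 operation = glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0.0f, 0.0f, 1.0f));

// Rotate in the aligned space, return to mesh space, then apply the object's matrix M.
M = M * glm::inverse(M0) * operation * M0;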
Object transform matrix
You need a single transform matrix for each object you have. It holds the position and orientation of your object's LCS so it can be converted to the world/scene GCS (global coordinate system) or to its parent object's LCS.
rotating your object around its local axis of rotation
As mentioned in Understanding 4x4 homogenous transform matrices, for standard OpenGL matrix conventions you need to do this:
M'=M*rotation_matrix
Where M is the current object transform matrix and M' is the new version of it after rotation. This is the thing you have different: you are using Euler angles rx, ry, rz instead of accumulating the rotations incrementally. You cannot do this with Euler angles in any sane and robust way, even though many modern games and apps still try hard to do it (and have been failing for years).
So what to do to get rid of Euler angles:
You must have a persistent/global/static matrix M per object
instead of a local instance per render, so you need to initialize it just once instead of clearing it on a per-frame basis.
On each animation update, apply the operation you need,
so:
M*=rotation_around_z(angspeed*dt);
Where angspeed is your fan speed in [rad/second] or [deg/second] and dt is the time elapsed in [seconds]. For example, if you do this in a timer, then dt is the timer interval. For variable timing you can measure the elapsed time (it is platform dependent; I usually use PerformanceTimers or RDTSC).
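A minimal GLM sketch of this incremental update (M persists across frames; angspeed must be in radians per second for glm::rotate):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Initialized once per object, never reset per frame.
static glm::mat4 M(1.0f);

// Per animation update: accumulate a small rotation around the local z axis.
M = M * glm::rotate(glm::mat4(1.0f), angspeed * dt, glm::vec3(0.0f, 0.0f, 1.0f));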
You can stack more operations on top of each other (for example, your fan could also turn back and forth around the y axis to cover more area).
For direct object control (by keyboard, mouse or joystick) just add things like:
if (keys.get( 38)) { redraw=true; M*=translate_z(-pos_speed*dt); }
if (keys.get( 40)) { redraw=true; M*=translate_z(+pos_speed*dt); }
if (keys.get( 37)) { redraw=true; M*=rotation_around_y(-turn_speed*dt); }
if (keys.get( 39)) { redraw=true; M*=rotation_around_y(+turn_speed*dt); }
Where keys is my key map holding the on/off state for every key on the keyboard (so I can use several keys at once). This code just controls the object with the arrow keys. For more info on the subject see this related QA:
Computer Graphics: Moving in the world
Preserve accuracy
With incremental changes there is a risk of losing precision due to floating point errors. So add a counter to your matrix class which counts how many times it has been changed (how many incremental operations have been applied), and when some constant count is hit (for example 128 operations), normalize your matrix.
To do that you need to ensure the orthonormality of your matrix: each axis vector X, Y, Z must be perpendicular to the other two and have unit length. I do it like this:
Choose a main axis, which will keep its direction unchanged. I choose the Z axis, as that is usually the main axis in my meshes (viewing direction, rotation axis, etc.), so just make this vector unit length: Z = Z/|Z|
Exploit the cross product to compute the other two axes, so X = (+/-) Z x Y and Y = (+/-) Z x X, and normalize them too: X = X/|X| and Y = Y/|Y|. The (+/-) is there because I do not know your coordinate system conventions and the cross product can produce a vector opposite to your original direction; if the direction comes out opposite, change the multiplication order or negate the result (this is decided at coding time, not at runtime!).
Here is an example in C++ of how my orthonormal normalization is done:
void reper::orto(int test)
{
    double x[3], y[3], z[3];
    if ((cnt >= _reper_max_cnt) || (test)) // cnt is the operations counter; test forces normalization regardless of it
    {
        use_rep();          // you can ignore this
        _rep = 1; _inv = 0; // you can ignore this
        axisx_get(x);
        axisy_get(y);
        axisz_get(z);
        vector_one(z, z);    // z = z/|z|
        vector_mul(x, y, z); // x = y x z -> x is perpendicular to y,z
        vector_one(x, x);    // x = x/|x|
        vector_mul(y, z, x); // y = z x x -> y is perpendicular to z,x
        vector_one(y, y);    // y = y/|y|
        axisx_set(x);
        axisy_set(y);
        axisz_set(z);
        cnt = 0;
    }
}
Where axis?_get/set(a) just gets/sets a as an axis from/to your matrix. The vector_one(a,b) returns a = b/|b| and vector_mul(a,b,c) returns a = b x c.
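The same renormalization can be sketched with GLM, assuming the basis vectors live in the matrix columns (GLM's convention) and keeping Z fixed; flip the cross-product operand order if your handedness convention gives opposite signs:
#include <glm/glm.hpp>

void orthonormalize(glm::mat4 &M)
{
    glm::vec3 z = glm::normalize(glm::vec3(M[2]));                // main axis, direction kept
    glm::vec3 x = glm::normalize(glm::cross(glm::vec3(M[1]), z)); // x = y cross z
    glm::vec3 y = glm::cross(z, x);                               // already unit: z and x are orthonormal
    M[0] = glm::vec4(x, 0.0f);
    M[1] = glm::vec4(y, 0.0f);
    M[2] = glm::vec4(z, 0.0f);
}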

C++ opengl get new vertex position after glTranslatef

I have a plane in my 3d space and I want to move it somewhere else, so I use glTranslate to do so.
The plane's vertex data is: (0,0,0), (1,0,0), (1,1,0) and (0,1,0).
I translate the object to the position of (2,0,0) through the use of glTranslatef(2.0, 0.0, 0.0).
After the translation the point data is unchanged, so if I want to collide with my plane, its visual position is not its actual position.
Is there a way to get the point data from the MODELVIEW_MATRIX or at least a way to find out what the new values are after the glTranslate?
Don't respond with just "add 2.0 to the actual values to move it", because what if I then use glRotate etc.? I still want the points' locations.
If you really don't want to maintain your own transformation matrix, you can get the current modelview matrix with:
GLfloat mat[16];
glGetFloatv(GL_MODELVIEW_MATRIX, mat);
You can then apply this matrix to your vertices with a standard matrix multiplication. Keep in mind that the matrix is arranged in column-major order. With an input vector xIn, the transformed vector xOut is:
xOut[0] = mat[0] * xIn[0] + mat[4] * xIn[1] + mat[8] * xIn[2] + mat[12];
xOut[1] = mat[1] * xIn[0] + mat[5] * xIn[1] + mat[9] * xIn[2] + mat[13];
xOut[2] = mat[2] * xIn[0] + mat[6] * xIn[1] + mat[10] * xIn[2] + mat[14];
Keeping track of the current transformation matrix in your own code is really a better approach, IMHO. Aside from eliminating glGet() calls, which can be harmful to performance, it gets you on a path to using modern OpenGL (Core Profile), where the matrix stack and all related calls do not exist anymore.
You can create a matrix from your translation and rotation, and then use that matrix to transform the coordinates.
There are many libraries to help you create such a matrix and transform coordinates.
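For example, a minimal sketch with GLM (one such library), reproducing the glTranslatef(2.0, 0.0, 0.0) from the question on the CPU:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Same transform as glTranslatef(2.0, 0.0, 0.0).
glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(2.0f, 0.0f, 0.0f));

// One of the plane's vertices; w = 1 so the translation applies.
glm::vec4 v(1.0f, 1.0f, 0.0f, 1.0f);
glm::vec3 worldPos = glm::vec3(model * v); // (3, 1, 0)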

3D Geometry: Finding viewing boundaries from camera position, min and max length of view, angle it is pointing and angle of view

I am writing software to determine the viewable locations of a camera in 3D. I have currently implemented the parts that find the minimum and maximum length of view based on the camera's and lens's intrinsic characteristics.
I now need to work out, given that the camera is placed at X,Y,Z and is pointing in a direction (two angles, one around the horizontal and one around the vertical axis), what the boundaries it can see are (knowing the viewing angle). The output I would like is four 3D locations, forming a rectangle, that mark the minimum position: top left, top right, bottom left and bottom right. The same is also required for the maximum positions.
Can anyone help with the geometry to find these points?
Some code I have:
QVector3D CameraPerspective::GetUnitVectorOfCameraAngle()
{
    QVector3D inital(0, 1, 0);
    QMatrix4x4 rotation_matrix;
    // rotate around z axis
    rotation_matrix.rotate(_angle_around_z, 0, 0, 1);
    // rotate around x axis
    rotation_matrix.rotate(_angle_around_x, 1, 0, 0);
    inital = inital * rotation_matrix;
    return inital;
}

Coordinate CameraPerspective::GetFurthestPointInFront()
{
    QVector3D camera_angle_vector = GetUnitVectorOfCameraAngle();
    camera_angle_vector.normalize();
    QVector3D furthest_point_infront = camera_angle_vector * _camera_information._maximum_distance_mm;
    return Coordinate(furthest_point_infront + _position_of_this);
}
Thanks
A complete answer with code would probably be way too long for SO; I hope this will be enough. In the following we work in homogeneous coordinates.
I have currently implement parts to find the minimum and maximum length of view based on the camera and lenses intrinsic characteristics.
That isn't enough to fully define your camera. You also need a field of view angle and the width/height ratio.
With all this information (near plane + far plane + fov + ratio) you can build a 4x4 matrix known as the perspective matrix. Google for it or check here for some references. This matrix maps the pyramidal region of space which your camera "sees" (usually simply called the frustum) to the [-1,1]x[-1,1]x[-1,1] cube. Call it P.
Now you need a 4x4 camera matrix which transforms points in world space to points in camera space. Since you know the camera position and the camera orientation, this can be constructed easily (there is no room here to fully explain how transformation matrices in homogeneous coordinates work; google for it). Call this matrix C.
Now consider the matrix A = P * C.
This matrix transforms points in world coordinates to points in the perspective space. Your camera will "see" those points if they are inside the [-1,1]x[-1,1]x[-1,1] cube. But you can invert this matrix in order to map points inside the cube to points in world space. So in order to obtain the 8 points you need in world space you can simply do:
y = A^(-1) * x
Where x =
[-1,-1,-1, 1] left - bottom - near
[-1,-1, 1, 1] left - bottom - far
etc.
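A sketch of that inversion with GLM (fov, ratio, the near/far distances and the camera pose eye/center/up are assumed inputs):
#include <array>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 P = glm::perspective(fov, ratio, near_dist, far_dist); // perspective matrix
glm::mat4 C = glm::lookAt(eye, center, up);                      // camera (view) matrix
glm::mat4 invA = glm::inverse(P * C);

// Map the 8 corners of the [-1,1]^3 cube back to world space.
std::array<glm::vec3, 8> corners;
int i = 0;
for (float z : { -1.0f, 1.0f })          // near, far
    for (float y : { -1.0f, 1.0f })      // bottom, top
        for (float x : { -1.0f, 1.0f })  // left, right
        {
            glm::vec4 w = invA * glm::vec4(x, y, z, 1.0f);
            corners[i++] = glm::vec3(w) / w.w; // perspective divide
        }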

3d coordinate from point and angles

I'm working on a simple OpenGL world, and so far I've got a bunch of cubes randomly placed about and it's pretty fun to go zooming about. However I'm ready to move on. I would like to drop blocks in front of my camera, but I'm having trouble with the 3d angles. I'm used to 2d stuff, where to find an end point we simply do something along the lines of:
endy = y + (sin(theta)*power);
endx = x + (cos(theta)*power);
However when I add the third dimension I'm not sure what to do! It seems to me that the power of the second dimensional plane would be determined by the z axis's cos(theta)*power, but I'm not positive. If that is correct, it seems to me I'd do something like this:
endz = z + (sin(xtheta)*power);
power2 = cos(xtheta) * power;
endx = x + (cos(ytheta) * power2);
endy = y + (sin(ytheta) * power2);
(where xtheta is the up/down theta and ytheta is the left/right theta)
Am I even close to the right track here? How do I find an end point given a current point and two angles?
Working with Euler angles doesn't go well in 3D environments; there are several issues and corner cases in which they simply don't work. And you actually don't even have to use them.
What you should do is exploit the fact that transformation matrices are nothing other than coordinate system bases written down in a comprehensible form. So you have your modelview matrix MV. It consists of a model space transformation, followed by a view transformation (column-major matrices multiply right to left):
MV = V * M
So what we want to know is in which way the "camera" lies within the world. That is given to you by the inverse view matrix V^-1. You can of course invert the view matrix using the Gauss-Jordan method, but most of the time your view matrix will consist of a 3×3 rotation matrix R with a translation vector column P added:
R P
0 1
Recall that
(M * N)^-1 = N^-1 * M^-1
and also
(M * N)^T = N^T * M^T
so it seems there is some kind of relationship between transposition and inversion: both reverse the order of a product. Not every matrix's transpose is its inverse, but there are some where the transpose of a matrix is its inverse, namely the so-called orthonormal matrices. Rotations are orthonormal. So
R^-1 = R^T
neat! This allows us to find the inverse of the view matrix as follows (I suggest you try to prove it as an exercise):
V    = / R   P \
       \ 0   1 /
V^-1 = / R^T   -R^T·P \
       \ 0       1    /
So how does this help us to place a new object in the scene at some distance from the camera? Well, V is the transformation from world space into camera space, so V^-1 transforms from camera to world space. Given a point in camera space, you can thus transform it back to world space. Say you wanted to place something at the center of the view at distance d. In camera space that would be the point (0, 0, -d, 1). Multiply that with V^-1:
V^-1 * (0, 0, -d, 1)^T = -d * R_3 - R^T * P
where R_3 is the third row of R (the camera's backward direction in world coordinates, so -d * R_3 points d units in front of the camera) and -R^T * P is the camera's world position.
Which is exactly what you want. In your OpenGL program you somewhere have your view matrix V, probably not properly named yet, but anyway it is there. Say you use old OpenGL-1 and GLU's gluLookAt:
void display(void)
{
    /* setup viewport, clear, set projection, etc. */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(...);
    /* the modelview matrix now holds the View transform */
At this point we can extract the modelview matrix
GLfloat view[16];
glGetFloatv(GL_MODELVIEW_MATRIX, view);
Now view is in column-major order. If we were to use it directly we could only directly address the columns. But remember that the transpose of a rotation is its inverse, so we actually want the 3rd row vector. So let's assume you keep view around, so that in your event handler (outside display) you can do the following:
GLfloat z_row[3];
z_row[0] = view[2];
z_row[1] = view[6];
z_row[2] = view[10];
We also want the camera's world position. Note that the translation column view[12..14] is not the position itself; it holds t = -R·eye, so the position must be recovered as eye = -R^T·t:
GLfloat t[3] = { view[12], view[13], view[14] };
GLfloat eye[3] = {
    -(view[0]*t[0] + view[1]*t[1] + view[2] *t[2]),
    -(view[4]*t[0] + view[5]*t[1] + view[6] *t[2]),
    -(view[8]*t[0] + view[9]*t[1] + view[10]*t[2]),
};
Now we can calculate the new object's position at distance d; z_row is the camera's backward direction in world coordinates, so we step against it:
GLfloat new_object_pos[3] = {
    eye[0] - z_row[0]*d,
    eye[1] - z_row[1]*d,
    eye[2] - z_row[2]*d,
};
There you are. As you can see, nowhere did you have to work with angles or trigonometry; it's just straight linear algebra.
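For completeness, the same placement can be sketched in one line with GLM, letting the library do the inversion (V is any view matrix, d the desired distance):
#include <glm/glm.hpp>

// World-space point at distance d in front of the camera described by V.
glm::vec3 new_object_pos = glm::vec3(glm::inverse(V) * glm::vec4(0.0f, 0.0f, -d, 1.0f));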
Well, I was close. After some testing, I found the correct formula for my implementation; it looks like this:
endy = cam.get_pos().y - (sin(toRad(180-cam.get_rot().x))*power1);
power2 = cos(toRad(180-cam.get_rot().x))*power1;
endx = cam.get_pos().x - (sin(toRad(180-cam.get_rot().y))*power2);
endz = cam.get_pos().z - (cos(toRad(180-cam.get_rot().y))*power2);
This takes my camera's position and rotation angles and gets the corresponding points. Works like a charm =]