Going off notes on the internet, I've managed to construct a quaternion class properly. My problem is putting it to practical use and using it for rotations in raw OpenGL.
In my input events I have the following:
Quaternion<float> qPrev = qRot;
qRot = qRot.UnitQuaternion(); // #1.
Quaternion<float> axis(5., 0., 1., 0.);
qRot = qPrev*axis;
qRot *= qPrev.conjugate();
//#2. qRot = qRot.UnitQuaternion();
If I use #1 to make the rotation result a unit quaternion, it rotates fine for a few seconds, speeds up, and then vanishes completely.
If I use #2 for a unit result the box "wobbles" and never rotates.
Alternatively, I've used this based off other implementations:
Quaternion<float> qPrev = qRot;
Quaternion<float> axis(-5./100, 0., 1., 0.); // #3.
axis = axis.UnitQuaternion();
qRot = qPrev*axis;
// #4. qRot *= qPrev.conjugate();
Where #3 makes the most sense to me: take the unit of a non-unit quaternion and multiply it with the (initially identity) orientation, keeping everything in unit quaternions.
Where #4 tried multiplying by the conjugate, based on my understanding of the rotation equation.
All of these produce a small wobble, and #1 is the closest I got: the box rotates, then speeds up, then vanishes.
My understanding is that I have an axis around which I wish to rotate, where w = how much (the angle in radians). I simply multiply that by the orientation and then multiply the result by the negative of the orientation, i.e. the conjugate.
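To make that concrete, here is a minimal sketch of that sandwich product, reusing the Quaternion<float> class from above (I'm assuming the constructor order is (w, x, y, z), as in my snippets; px, py, pz are just placeholder components of the vector being rotated):
Quaternion<float> q = qRot.UnitQuaternion();      // current orientation, kept at unit length
Quaternion<float> v(0.f, px, py, pz);             // vector embedded as a pure quaternion (w = 0)
Quaternion<float> vRot = q * v * q.conjugate();   // sandwich product: v' = q v q*
// vRot.x, vRot.y, vRot.z now hold the rotated vector; vRot.w stays ~0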
Rendering code (this could be the culprit):
// Apply some transformations
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0, 0, -100.f);
glRotatef(qRot.x, 1., 0, 0);
glRotatef(qRot.y, 0, 1., 0);
glRotatef(qRot.z, 0, 0, 1.);
// Draw the cube
glDrawArrays(GL_TRIANGLES, 0, 36);
// Draw some text on top of our OpenGL object
window.pushGLStates();
I appreciate the help
Edit: Many thanks to @ybungalobill for helping out.
What I wanted from my quaternion is a very specific application of quaternion math. I wanted an axis of rotation, while in my example I was building the quaternion directly from raw values. That is not how the rotation is derived; I needed more steps:
void FromAxisAngle(_Tp theta, _Tp x, _Tp y, _Tp z)
{
    // Angle should be in radians
    this->w = cos(theta/2);
    // Axes
    this->x = x * sin(theta/2);
    this->y = y * sin(theta/2);
    this->z = z * sin(theta/2);
    // Normalize
    *this = this->UnitQuaternion();
}
Again, getting the rotation matrix is a very specific application with very specific steps. Quaternions don't do this 'out of the box'.
Some minor changes to the rotation logic as well:
Quaternion<float> qPrev = qRot;
Quaternion<float> axis;
axis.FromAxisAngle(-5./100, 1., 0., 0.);
qRot = qPrev*axis;
So now I've created a mathematically correct quaternion FROM my rotation axis. Then multiply.
Finally, the last thing I had to do was create a matrix from my quaternion that OpenGL could use. So there is more math to do in order to get it into a form that actually rotates things:
void GetMatrix(_Tp matrix[16])
{
    Quaternion<_Tp> temp(this->UnitQuaternion());
    _Tp qw = temp.w;
    _Tp qx = temp.x;
    _Tp qy = temp.y;
    _Tp qz = temp.z;

    matrix[0]  = 1.0f - 2.0f*qy*qy - 2.0f*qz*qz;
    matrix[1]  = 2.0f*qx*qy - 2.0f*qz*qw;
    matrix[2]  = 2.0f*qx*qz + 2.0f*qy*qw;
    matrix[3]  = 0;
    matrix[4]  = 2.0f*qx*qy + 2.0f*qz*qw;
    matrix[5]  = 1.0f - 2.0f*qx*qx - 2.0f*qz*qz;
    matrix[6]  = 2.0f*qy*qz - 2.0f*qx*qw;
    matrix[7]  = 0;
    matrix[8]  = 2.0f*qx*qz - 2.0f*qy*qw;
    matrix[9]  = 2.0f*qy*qz + 2.0f*qx*qw;
    matrix[10] = 1.0f - 2.0f*qx*qx - 2.0f*qy*qy;
    matrix[11] = 0;
    matrix[12] = 0;
    matrix[13] = 0;
    matrix[14] = 0;
    matrix[15] = 1;
}
Using glMultMatrix the cube rotates.
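For reference, a rough sketch of the render path I ended up with, replacing the three glRotatef calls from the rendering code above (hedged: glMultTransposeMatrixf would be the call to use instead if the array were filled row-major):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.f, 0.f, -100.f);

GLfloat rot[16];
qRot.GetMatrix(rot);   // quaternion -> 4x4 matrix via the GetMatrix member above
glMultMatrixf(rot);    // apply the orientation to the modelview matrix

glDrawArrays(GL_TRIANGLES, 0, 36);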
glRotatef(qRot.x, 1., 0, 0);
glRotatef(qRot.y, 0, 1., 0);
glRotatef(qRot.z, 0, 0, 1.);
That's not how you apply a quaternion rotation to the GL state. You have to convert it to a matrix following the formula from here, and then call glMultMatrix or glMultTransposeMatrix. After that, approach #3 will work as expected.
Code that converts any non-zero quaternion to a matrix, from Stannum libs:
template<class T>
mat<T,3,3> mat_rotation(const quat<T> &x)
{
    T s = 2/norm2(x); // cheap renormalization even of non-unit quaternions
    T wx = x.w*x.x, wy = x.w*x.y, wz = x.w*x.z;
    T xx = x.x*x.x, xy = x.x*x.y, xz = x.x*x.z;
    T yy = x.y*x.y, yz = x.y*x.z;
    T zz = x.z*x.z;
    return mat<T,3,3>(
        1 - s*(yy+zz), s*(xy-wz),     s*(xz+wy),
        s*(xy+wz),     1 - s*(xx+zz), s*(yz-wx),
        s*(xz-wy),     s*(yz+wx),     1 - s*(xx+yy)
    );
}
Related
I'm trying to create a 3D viewer for a parallax barrier display, but I'm stuck with camera movements. You can see a parallax barrier display at: displayblocks.org
Multiple views are needed for this effect; this tutorial provides code for calculating the interViewpointDistance depending on the display properties, and thus for selecting the head position.
Here are the parts of the code involved in the matrix creation:
for (y = 0; y < viewsCountY; y++) {
    for (x = 0; x <= viewsCountX; x++) {
        viewMatrix = glm::mat4(1.0f);

        // selection of the head position
        float cameraX = (float(x - int(viewsCountX / 2))) * interViewpointDistance;
        float cameraY = (float(y - int(viewsCountY / 2))) * interViewpointDistance;
        camera.Position = glm::vec3(camera.Position.x + cameraX, camera.Position.y + cameraY, camera.Position.z);

        // Move the apex of the frustum to the origin.
        viewMatrix = glm::translate(viewMatrix, -camera.Position);

        projectionMatrix = get_off_Axis_Projection_Matrix();

        // render stuff
        // (...)
        // glfwSwapBuffers();
    }
}
The following code is the projection matrix function. I use the generalized perspective projection from Robert Kooima's paper.
glm::mat4 get_off_Axis_Projection_Matrix() {
    glm::vec3 Pe = camera.Position;

    // screen corner coordinates (world-space points)
    glm::vec3 Pa = glm::vec3(-screenSizeX, -screenSizeY, 0.0); // lower left
    glm::vec3 Pb = glm::vec3( screenSizeX, -screenSizeY, 0.0); // lower right
    glm::vec3 Pc = glm::vec3(-screenSizeX,  screenSizeY, 0.0); // upper left

    // Compute an orthonormal basis for the screen.
    glm::vec3 Vr = Pb - Pa;
    Vr = glm::normalize(Vr);
    glm::vec3 Vu = Pc - Pa;
    Vu = glm::normalize(Vu);
    glm::vec3 Vn = glm::cross(Vr, Vu);
    Vn = glm::normalize(Vn);

    // Compute the screen corner vectors.
    glm::vec3 Va = Pa - Pe;
    glm::vec3 Vb = Pb - Pe;
    glm::vec3 Vc = Pc - Pe;

    // Find the distance from the eye to the screen plane.
    float d = -glm::dot(Va, Vn);

    // Find the extent of the perpendicular projection.
    float left   = glm::dot(Va, Vr) * const_near / d;
    float right  = glm::dot(Vr, Vb) * const_near / d;
    float bottom = glm::dot(Vu, Va) * const_near / d;
    float top    = glm::dot(Vu, Vc) * const_near / d;

    // Load the perpendicular projection.
    return glm::frustum(left, right, bottom, top, const_near, const_far + d);
}
These two methods work, and I can see that my multiple views are projected correctly.
But I can't manage to make a camera that works normally, like in an FPS, with tilt and pan.
This code, for example, gives me the "head tracking" effect (but with the mouse); it was handy for testing the projections, but it is not what I'm looking for.
float cameraX = (mouseX - windowWidth / 2) / (windowWidth * headDisplacementFactor);
float cameraY = (mouseY - windowHeight / 2) / (windowHeight * headDisplacementFactor);
camera.Position = glm::vec3(cameraX, cameraY, 60.0f);
viewMatrix = glm::translate(viewMatrix, -camera.Position);
My camera class works if the viewMatrix is created with lookAt. But with the off-axis projection, using lookAt rotates the scene, and the correspondence between the near plane and the screen plane is lost.
I may need to translate/rotate the screen corner coordinates Pa, Pb, Pc used to create the frustum, but I don't know how.
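Something along these lines is what I have in mind, but I'm not sure it is correct. It is only a sketch: camera.Orientation is a hypothetical glm::quat holding the pan/tilt that my camera class does not have yet.
// rotate the screen corners with the camera before building the frustum
// (glm::mat3_cast comes from <glm/gtc/quaternion.hpp>)
glm::mat3 R = glm::mat3_cast(camera.Orientation);
glm::vec3 Pa_rot = R * Pa;
glm::vec3 Pb_rot = R * Pb;
glm::vec3 Pc_rot = R * Pc;
// ...then use Pa_rot / Pb_rot / Pc_rot inside get_off_Axis_Projection_Matrix()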
I'm trying to use what many people seem to consider a good approach: I call gluUnProject twice with different z-values and then calculate the direction vector for the ray from those two points.
I read this question and tried to use the structure there for my own code:
glGetFloat(GL_MODELVIEW_MATRIX, modelBuffer);
glGetFloat(GL_PROJECTION_MATRIX, projBuffer);
glGetInteger(GL_VIEWPORT, viewBuffer);
gluUnProject(mouseX, mouseY, 0.0f, modelBuffer, projBuffer, viewBuffer, startBuffer);
gluUnProject(mouseX, mouseY, 1.0f, modelBuffer, projBuffer, viewBuffer, endBuffer);
start = vecmath.vector(startBuffer.get(0), startBuffer.get(1), startBuffer.get(2));
end = vecmath.vector(endBuffer.get(0), endBuffer.get(1), endBuffer.get(2));
direction = vecmath.vector(end.x()-start.x(), end.y()-start.y(), end.z()-start.z());
But this only returns the Homogeneous Clip Coordinates (I believe), since they only range from -1 to 1 on every axis.
How to actually get coordinates from which I can create a ray?
EDIT: This is how I construct the matrices:
Matrix projectionMatrix = vecmath.perspectiveMatrix(60f, aspect, 0.1f, 100f);

//The matrix of the camera = viewMatrix
setTransformation(vecmath.lookatMatrix(eye, center, up));

//And every object sets a modelMatrix in its display method
Matrix modelMatrix = parentMatrix.mult(vecmath.translationMatrix(translation));
modelMatrix = modelMatrix.mult(vecmath.rotationMatrix(1, 0, 1, angle));
EDIT 2:
This is how the function looks right now:
private void calcMouseInWorldPosition(float mouseX, float mouseY, Matrix proj, Matrix view) {
    Vector start = vecmath.vector(0, 0, 0);
    Vector end = vecmath.vector(0, 0, 0);

    FloatBuffer modelBuffer = BufferUtils.createFloatBuffer(16);
    modelBuffer.put(view.asArray());
    modelBuffer.rewind();
    FloatBuffer projBuffer = BufferUtils.createFloatBuffer(16);
    projBuffer.put(proj.asArray());
    projBuffer.rewind();
    FloatBuffer startBuffer = BufferUtils.createFloatBuffer(16);
    FloatBuffer endBuffer = BufferUtils.createFloatBuffer(16);
    IntBuffer viewBuffer = BufferUtils.createIntBuffer(16);

    // The two calls for the projection and modelview matrices are disabled here,
    // as I use my own matrices in this case
    // glGetFloat(GL_MODELVIEW_MATRIX, modelBuffer);
    // glGetFloat(GL_PROJECTION_MATRIX, projBuffer);
    glGetInteger(GL_VIEWPORT, viewBuffer);

    // I know this is really ugly and bad, but I know that the height and width are always 600
    // and this is just for testing purposes
    mouseY = 600 - mouseY;

    gluUnProject(mouseX, mouseY, 0.0f, modelBuffer, projBuffer, viewBuffer, startBuffer);
    gluUnProject(mouseX, mouseY, 1.0f, modelBuffer, projBuffer, viewBuffer, endBuffer);

    start = vecmath.vector(startBuffer.get(0), startBuffer.get(1), startBuffer.get(2));
    end = vecmath.vector(endBuffer.get(0), endBuffer.get(1), endBuffer.get(2));

    direction = vecmath.vector(end.x() - start.x(), end.y() - start.y(), end.z() - start.z());
}
I'm trying to use my own projection and view matrix, but this only seems to give weirder results.
With the glGet... calls I get this for a click in the upper-right corner:
start: (0.97333336, -0.98, -1.0)
end: (0.97333336, -0.98, 1.0)
When I use my own stuff I get this for the same position:
start: (-2.4399707, -0.55425626, -14.202201)
end: (-2.4399707, -0.55425626, -16.198204)
Now I actually need a modelView matrix instead of just the view matrix, but I don't know how I am supposed to get it, since it is altered and created anew in every display call of every object.
But is this really the problem? In this tutorial he says "Normally, to get into clip space from eye space we multiply the vector by a projection matrix. We can go backwards by multiplying by the inverse of this matrix." and in the next step he multiplies again by the inverse of the view matrix, so I thought this is what I should actually do?
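Written out in GLM-style C++ just to show the math (my actual code is Java/vecmath; nx, ny and camPos are placeholders for the normalized mouse position and the camera position), my understanding of that chain is:
glm::vec4 ndc(nx, ny, -1.0f, 1.0f);            // mouse position in normalized device coords, on the near plane
glm::vec4 eye = glm::inverse(proj) * ndc;      // clip/NDC -> eye space
eye /= eye.w;                                  // undo the perspective divide
glm::vec4 world = glm::inverse(view) * eye;    // eye space -> world space
glm::vec3 rayOrigin = camPos;                  // camera position in world space
glm::vec3 rayDir = glm::normalize(glm::vec3(world) - rayOrigin);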
EDIT 3:
Here I tried what user42813 suggested:
Matrix view = cam.getTransformation();
view = view.invertRigid();
mouseY = height - mouseY - 1;
//Here I only use these values, because the Z and W values would be 0
//following your suggestion, so there is no use adding them here
float tempX = view.get(0, 0) * mouseX + view.get(1, 0) * mouseY;
float tempY = view.get(0, 1) * mouseX + view.get(1, 1) * mouseY;
float tempZ = view.get(0, 2) * mouseX + view.get(1, 2) * mouseY;
origin = vecmath.vector(tempX, tempY, tempZ);
direction = cam.getDirection();
But now the direction and origin values are always the same:
origin: (-0.04557252, -0.0020000197, -0.9989586)
direction: (-0.04557252, -0.0020000197, -0.9989586)
OK, I finally managed to work this out; maybe this will help someone.
I found a formula for this and applied it to the coordinates I was getting, which ranged from -1 to 1:
float tempX = (float) (start.x() * 0.1f * Math.tan(Math.PI * 60f / 360));
float tempY = (float) (start.y() * 0.1f * Math.tan(Math.PI * 60f / 360) * height / width);
float tempZ = -0.1f;

// create a new vector with these x, y, z values
direction = vecmath.vector(tempX, tempY, tempZ);
// multiply this new vector with the inverted viewMatrix
direction = view.transformDirection(direction);
// set the origin to the position values of the matrix (the right column)
origin = view.getPosition();
I don't really use deprecated OpenGL, but I'll share my thoughts.
First, it would be helpful if you showed us how you build your view matrix.
Second, the view matrix you have is in the local space of the camera.
Now, typically you would multiply your mouseX and (screenHeight - mouseY - 1) by the view matrix (I think by the inverse of that matrix, sorry, not sure!). Then you have the mouse coordinates in camera space. Then you add the forward vector to the vector created by the mouse, and you have it. It would look something like this:
float mouseCoord[] = { mouseX, screen_height - mouseY - 1, 0, 0 }; /* 0, 0 because we are multiplying by a 4x4 matrix */
mouseCoord = multiply( ViewMatrix /*Or: inverse(ViewMatrix)*/, mouseCoord );
float ray[] = add( mouseCoord, forwardVector );
I'm writing a simple physics component in my C++ engine, using the GLM math library, and any rotations I apply are done in world space, i.e. each rotation is applied along the global X, Y, and Z axes, no matter which way the object is facing. I am applying a torque to my object, and using that to calculate a rotation amount for each axis.
I add the torque via a call to the AddTorque function, which uses the object's transform to apply it in a relative direction. For example, for pitching the object (rotation around the X axis):
AddTorque(m_transform[0] * 50.0f);
The calculation code itself, where the rotation is not local/relative (m_acceleration, m_velocity, m_torque, etc. are all glm::vec3, and the transform is a glm::mat4):
void PhysicsComponent::Update(float a_deltaTime)
{
    ///////////
    // rotation
    glm::mat4 transform = m_parent->GetTransform();
    glm::vec3 rotVec = glm::vec3(0, 0, 0);

    m_angularAcceleration = m_torque / m_momentOfInertia;
    m_angularVelocity += m_angularAcceleration * a_deltaTime;
    rotVec += m_angularVelocity * a_deltaTime;

    transform = glm::rotate(transform, rotVec.x, glm::vec3(1, 0, 0));
    transform = glm::rotate(transform, rotVec.y, glm::vec3(0, 1, 0));
    transform = glm::rotate(transform, rotVec.z, glm::vec3(0, 0, 1));

    ///////////
    // position
    glm::vec3 position = transform[3].xyz;

    m_acceleration = m_force / m_mass;
    m_velocity += m_acceleration * a_deltaTime;
    position += m_velocity * a_deltaTime;

    transform[3].xyz = position;
    m_parent->SetTransform(transform);

    m_force.x = 0;
    m_force.y = 0;
    m_force.z = 0;
    m_torque.x = 0;
    m_torque.y = 0;
    m_torque.z = 0;
    rotVec = glm::vec3(0, 0, 0);
}
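For reference, my current understanding of composition order under GLM's column-vector convention (v' = M * v), sketched with the names from Update() above in case that is where I am going wrong:
// glm::rotate(m, angle, axis) returns m * R, i.e. it post-multiplies
glm::mat4 R = glm::rotate(glm::mat4(1.0f), rotVec.x, glm::vec3(1, 0, 0));

glm::mat4 localRotated = transform * R;   // rotation about the object's local X axis
glm::mat4 worldRotated = R * transform;   // rotation about the global X axis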
So I'm trying to figure out how to manually create a camera class that creates a local frame for camera transformations. I've created a player object based on the OpenGL SuperBible's GLFrame class.
I have keyboard keys mapped to the MoveUp, MoveRight and MoveForward functions, and the horizontal and vertical mouse movements are mapped to the xAngle variable and the RotateLocalY function. This is done to create an FPS-style camera.
The problem, however, is in RotateLocalY. Translation works fine, and so does the vertical mouse movement, but the horizontal movement scales all my objects down or up in a weird way. Besides the scaling, the rotation also seems to restrict itself to 180 degrees and rotates around the world origin (0,0,0) instead of my player's local position.
I figured the scaling had something to do with normalizing vectors, but the GLFrame class (which I used for reference) never normalizes any vectors and it works just fine. Normalizing most of my vectors only solved the scaling; all the other problems were still there, so I'm guessing a single piece of code is causing all of these problems.
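By "normalizing" I mean something like the following after each rotation (written out by hand so it does not depend on any particular m3d helper):
// renormalize forward after the rotation
float len = sqrtf(forward[0]*forward[0] + forward[1]*forward[1] + forward[2]*forward[2]);
forward[0] /= len; forward[1] /= len; forward[2] /= len;

// rebuild an up vector that stays perpendicular to forward
M3DVector3f right;
m3dCrossProduct(right, up, forward);   // same argument order as in MoveRight()
m3dCrossProduct(up, forward, right);   // up = forward x right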
I can't seem to figure out where the problem lies, so I'll post all the appropriate code here and a screenshot to show the scaling.
Player object
Player::Player()
{
    location[0] = 0.0f; location[1] = 0.0f; location[2] = 0.0f;
    up[0] = 0.0f; up[1] = 1.0f; up[2] = 0.0f;
    forward[0] = 0.0f; forward[1] = 0.0f; forward[2] = -1.0f;
}

// Does all the camera transformation. Should be called before scene rendering!
void Player::ApplyTransform()
{
    M3DMatrix44f cameraMatrix;
    this->getTransformationMatrix(cameraMatrix);

    glRotatef(xAngle, 1.0f, 0.0f, 0.0f);
    glMultMatrixf(cameraMatrix);
}

void Player::MoveForward(GLfloat delta)
{
    location[0] += forward[0] * delta;
    location[1] += forward[1] * delta;
    location[2] += forward[2] * delta;
}

void Player::MoveUp(GLfloat delta)
{
    location[0] += up[0] * delta;
    location[1] += up[1] * delta;
    location[2] += up[2] * delta;
}

void Player::MoveRight(GLfloat delta)
{
    // Get X axis vector first via cross product
    M3DVector3f xAxis;
    m3dCrossProduct(xAxis, up, forward);

    location[0] += xAxis[0] * delta;
    location[1] += xAxis[1] * delta;
    location[2] += xAxis[2] * delta;
}

void Player::RotateLocalY(GLfloat angle)
{
    // Calculate a rotation matrix first
    M3DMatrix44f rotationMatrix;
    // Rotate around the up vector
    m3dRotationMatrix44(rotationMatrix, angle, up[0], up[1], up[2]); // Use up vector to get correct rotations even with multiple rotations used.

    // Get new forward vector out of the rotation matrix
    M3DVector3f newForward;
    newForward[0] = rotationMatrix[0] * forward[0] + rotationMatrix[4] * forward[1] + rotationMatrix[8] * forward[2];
    newForward[1] = rotationMatrix[1] * forward[1] + rotationMatrix[5] * forward[1] + rotationMatrix[9] * forward[2];
    newForward[2] = rotationMatrix[2] * forward[2] + rotationMatrix[6] * forward[1] + rotationMatrix[10] * forward[2];

    m3dCopyVector3(forward, newForward);
}

void Player::getTransformationMatrix(M3DMatrix44f matrix)
{
    // Get Z axis (Z axis is reversed with camera transformations)
    M3DVector3f zAxis;
    zAxis[0] = -forward[0];
    zAxis[1] = -forward[1];
    zAxis[2] = -forward[2];

    // Get X axis
    M3DVector3f xAxis;
    m3dCrossProduct(xAxis, up, zAxis);

    // Fill in X column in transformation matrix
    m3dSetMatrixColumn44(matrix, xAxis, 0); // first column
    matrix[3] = 0.0f; // Set 4th value to 0

    // Fill in the Y column
    m3dSetMatrixColumn44(matrix, up, 1); // 2nd column
    matrix[7] = 0.0f;

    // Fill in the Z column
    m3dSetMatrixColumn44(matrix, zAxis, 2); // 3rd column
    matrix[11] = 0.0f;

    // Do the translation
    M3DVector3f negativeLocation; // Required for camera transform (right handed OpenGL system. Looking down negative Z axis)
    negativeLocation[0] = -location[0];
    negativeLocation[1] = -location[1];
    negativeLocation[2] = -location[2];

    m3dSetMatrixColumn44(matrix, negativeLocation, 3); // 4th column
    matrix[15] = 1.0f;
}
Player object header
class Player
{
public:
    //////////////////////////////////////
    // Variables
    M3DVector3f location;
    M3DVector3f up;
    M3DVector3f forward;
    GLfloat xAngle; // Used for FPS divided X angle rotation (can't combine yaw and pitch since we'll also get a Roll which we don't want for FPS)

    /////////////////////////////////////
    // Functions
    Player();
    void ApplyTransform();
    void MoveForward(GLfloat delta);
    void MoveUp(GLfloat delta);
    void MoveRight(GLfloat delta);
    void RotateLocalY(GLfloat angle); // Only need rotation on local axis for FPS camera style. Then a translation on world X axis. (done in apply transform)

private:
    void getTransformationMatrix(M3DMatrix44f matrix);
};
Applying transformations
// Clear screen
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
// Apply camera transforms
player.ApplyTransform();
// Set up lights
...
// Use shaders
...
// Render the scene
RenderScene();
// Do post rendering operations
glutSwapBuffers();
And the mouse handling:
float mouseSensitivity = 500.0f;
float horizontal = (width / 2) - mouseX;
float vertical = (height / 2) - mouseY;
horizontal /= mouseSensitivity;
vertical /= (mouseSensitivity / 25);
player.xAngle += -vertical;
player.RotateLocalY(horizontal);
glutWarpPointer((width / 2), (height / 2));
Honestly, I think you are taking a way too complicated approach to your problem. There are many ways to create a camera. My favorite is using an R3 vector and a quaternion, but you could also work with an R3 vector and two floats (pitch and yaw).
The setup with two angles is simple:
glLoadIdentity();
glTranslatef(-pos[0], -pos[1], -pos[2]);
glRotatef(-yaw, 0.0f, 0.0f, 1.0f);
glRotatef(-pitch, 0.0f, 1.0f, 0.0f);
The tricky part now is moving the camera. You must do something along the lines of:
float ds = speed * dt;
position += transform_y(pitch, transform_z(yaw, Vector3(ds, 0, 0)));
How to do the transforms I would have to look up, but you could do it using a rotation matrix.
Rotation is trivial, just add or subtract from the pitch and yaw values.
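If it helps, the two transforms written out by hand look roughly like this. It is only a sketch with a stand-in vector type, not a drop-in for whatever math library you use:
#include <cmath>

struct Vec3 { float x, y, z; };   // stand-in for your vector type

// rotate v about the Z axis by 'yaw' radians
Vec3 transform_z(float yaw, Vec3 v)
{
    return { v.x * std::cos(yaw) - v.y * std::sin(yaw),
             v.x * std::sin(yaw) + v.y * std::cos(yaw),
             v.z };
}

// rotate v about the Y axis by 'pitch' radians
Vec3 transform_y(float pitch, Vec3 v)
{
    return { v.x * std::cos(pitch) + v.z * std::sin(pitch),
             v.y,
             -v.x * std::sin(pitch) + v.z * std::cos(pitch) };
}

// usage, matching the pseudo-code above:
// position += transform_y(pitch, transform_z(yaw, Vec3{ds, 0, 0}));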
I like using a quaternion for the orientation because it is general, and thus you have a camera (or any entity, really) that is independent of any movement scheme. In this case you have a camera that looks like so:
class Camera
{
public:
    // lots of stuff omitted

    void setup();
    void move_local(Vector3f value);
    void rotate(float dy, float dz);

private:
    mx::Vector3f position;
    mx::Quaternionf orientation;
};
Then the setup code shamelessly uses gluLookAt; you could build a transformation matrix out of it, but I never got that to work right.
void Camera::setup()
{
    // projection related stuff

    mx::Vector3f eye = position;
    mx::Vector3f forward = mx::transform(orientation, mx::Vector3f(1, 0, 0));
    mx::Vector3f center = eye + forward;
    mx::Vector3f up = mx::transform(orientation, mx::Vector3f(0, 0, 1));

    gluLookAt(eye(0), eye(1), eye(2), center(0), center(1), center(2), up(0), up(1), up(2));
}
Moving the camera in local frame is also simple:
void Camera::move_local(Vector3f value)
{
    position += mx::transform(orientation, value);
}
The rotation is also straightforward.
void Camera::rotate(float dy, float dz)
{
    mx::Quaternionf o = orientation;
    o = mx::axis_angle_to_quaternion(dz, mx::Vector3f(0, 0, 1)) * o;
    o = o * mx::axis_angle_to_quaternion(dy, mx::Vector3f(0, 1, 0));
    orientation = o;
}
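Putting it together, a frame of your update loop might drive it roughly like this (a sketch only; key_down, mouse_dx and mouse_dy are placeholders for your own input handling):
void update_camera(Camera &camera, float dt)
{
    const float speed = 10.0f;   // units per second, pick whatever fits your scene

    // translate in the camera's local frame
    if (key_down('W')) camera.move_local(mx::Vector3f( speed * dt, 0, 0));
    if (key_down('S')) camera.move_local(mx::Vector3f(-speed * dt, 0, 0));

    // mouse deltas drive the rotation (vertical, then horizontal)
    camera.rotate(mouse_dy * 0.005f, mouse_dx * 0.005f);

    // load the view transform before rendering the scene
    camera.setup();
}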
(Shameless plug):
If you are asking what math library I use, it is mathex. I wrote it...
I'm new to C++ 3D, so I may just be missing something obvious, but how do I convert from 3D to 2D and (for a given z location) from 2D to 3D?
You map 3D to 2D via projection. You map 2D to 3D by inserting the appropriate value in the Z element of the vector.
It is a matter of casting a ray from the screen onto a plane which is parallel to x-y and is at the required z location. You then need to find out where on the plane the ray is colliding.
Here's one example, considering that screen_x and screen_y range over [0, 1], where 0 is the left-most or top-most coordinate and 1 is the right-most or bottom-most, respectively:
Vector3 point_of_contact(-1.0f, -1.0f, -1.0f);

Matrix4 view_matrix = camera->getViewMatrix();
Matrix4 proj_matrix = camera->getProjectionMatrix();
Matrix4 inv_view_proj_matrix = (proj_matrix * view_matrix).inverse();

float nx = (2.0f * screen_x) - 1.0f;
float ny = 1.0f - (2.0f * screen_y);
Vector3 near_point(nx, ny, -1.0f);
Vector3 mid_point(nx, ny, 0.0f);

// Get ray origin and ray target on near plane in world space
Vector3 ray_origin, ray_target;
ray_origin = inv_view_proj_matrix * near_point;
ray_target = inv_view_proj_matrix * mid_point;

Vector3 ray_direction = ray_target - ray_origin;
ray_direction.normalise();

// Check for collision with the plane z = z_pos
Vector3 plane_normal(0.0f, 0.0f, 1.0f);
float denominator = plane_normal.dotProduct(ray_direction);
if (fabs(denominator) >= std::numeric_limits<float>::epsilon())
{
    float numerator = plane_normal.dotProduct(ray_origin) - z_pos;
    float distance = -(numerator / denominator);
    if (distance > 0)
    {
        point_of_contact = ray_origin + (ray_direction * distance);
    }
}

return point_of_contact;
Disclaimer notice: this solution was taken from bits and pieces of the Ogre3D graphics library.
The simplest way is to do a divide by z. Therefore ...
screenX = projectionX / projectionZ;
screenY = projectionY / projectionZ;
That does perspective projection based on distance. The thing is, it is often better to use homogeneous coordinates, as this simplifies matrix transformation (everything becomes a multiply). Equally, this is what D3D and OpenGL use. Understanding how to use non-homogeneous coordinates (i.e. an (x, y, z) coordinate triple) will be very helpful for things like shader optimisations, however.
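For illustration, the homogeneous version of the same idea might look like this with GLM (aspect and the world* values are placeholders):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 proj = glm::perspective(glm::radians(60.0f), aspect, 0.1f, 100.0f);
glm::vec4 clip = proj * glm::vec4(worldX, worldY, worldZ, 1.0f);   // into clip space

float screenX = clip.x / clip.w;   // normalized device coordinates in [-1, 1]
float screenY = clip.y / clip.w;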
One lame solution:
^ y
|
|
| /z
| /
+/--------->x
angle is the angle between the Ox and Oz axes.
#include <cmath>

typedef struct {
    double x, y, z;
} Point3D;

typedef struct {
    double x, y;
} Point2D;

const double angle = M_PI/4; // can be changed

Point2D* projection(Point3D& point) {
    Point2D* p = new Point2D();
    p->x = point.x + point.z * sin(angle);
    p->y = point.y + point.z * cos(angle);
    return p;
}
However there are lots of tutorials on this on the net... Have you googled for it?