According to this site: http://www.directxtutorial.com/Lesson.aspx?lessonid=9-4-5
D3DXMATRIX* D3DXMatrixLookAtLH(D3DXMATRIX* pOut,
CONST D3DXVECTOR3* pEye,
CONST D3DXVECTOR3* pAt,
CONST D3DXVECTOR3* pUp);
D3DXMATRIX* pOut,
// We know this one. It is the pointer to the matrix we are going to fill.
CONST D3DXVECTOR3* pEye,
// This parameter is a pointer to a vector which contains the exact position
// of the camera. Considering our example above, we want to fill this struct
// with (100, 100, 100).
CONST D3DXVECTOR3* pAt,
// This vector contains the exact location the camera should look at.
// Our example is looking at (0, 0, 0), so we will fill this struct
// with those values.
CONST D3DXVECTOR3* pUp,
// This vector contains the direction of "up" for the camera. In other
// words, which direction the top of the screen will face. Usually, game
// programmers use the y-axis as the "up" direction. To set the camera
// this way, you simply need to fill this struct with (0, 1, 0), or
// 1.0f on the y-axis and 0.0f on the other two.
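Putting the parameters together, a call matching the example values above would look something like this (a minimal sketch, assuming the usual d3dx9 headers):

D3DXMATRIX view;
D3DXVECTOR3 eye(100.0f, 100.0f, 100.0f); // pEye: camera position
D3DXVECTOR3 at(0.0f, 0.0f, 0.0f);        // pAt: point the camera looks at
D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);        // pUp: y-axis as "up"
D3DXMatrixLookAtLH(&view, &eye, &at, &up);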
I would assume that all I have to do is change the x coordinate of pEye and pAt (plus/minus) to move left and right. However, when I do this, funky things happen. Is there anything I am doing wrong? Below is my code!
void world_view::start_cam() {
    vPosition = D3DXVECTOR3( console_editor.window_w/2, console_editor.window_h/2, console_editor.window_h/2 );
    vLookAt   = D3DXVECTOR3( console_editor.window_w/2, console_editor.window_h/2, 0.0f );
    vUp       = D3DXVECTOR3( 0.0f, -1.0f, 0.0f );
    fov = D3DXToRadian(90); // the horizontal field of view
    aspectRatio = (FLOAT)console_editor.window_w / (FLOAT)console_editor.window_h; // aspect ratio
    zNear = 1.0f;
    zFar  = console_editor.window_h/2 + 10;
}
void world_view::MoveLeft(float units) {
}

void world_view::MoveRight(float units) {
    D3DXVECTOR3 vTemp = struct_world_view.vPosition;
    vTemp.x = vTemp.x + units;
    //vTemp.y = vTemp.y + units;
    //vTemp.z = vTemp.z + units;
    struct_world_view.vPosition = vTemp;

    D3DXVECTOR3 vlTemp = struct_world_view.vLookAt;
    vlTemp.x + units;
    //vlTemp.y + units;
    //vlTemp.z + units;
    struct_world_view.vLookAt = vlTemp;
}
void world_view::UpdateCamera() {
    D3DXMatrixLookAtLH(&struct_world_view.cam,
                       &struct_world_view.vPosition,
                       &struct_world_view.vLookAt,
                       &struct_world_view.vUp);
    D3DXMatrixPerspectiveFovLH(&struct_world_view.cam_lens,
                               struct_world_view.fov,
                               struct_world_view.aspectRatio,
                               struct_world_view.zNear,
                               struct_world_view.zFar);
}
[Screenshots: Before and After moving right]
This is an example of Before and After moving right. Any clarification would greatly help!
Move both the position of the camera and where it's looking. Also note that in your MoveRight the line vlTemp.x + units; computes a sum and then discards it (there is no assignment), so vLookAt never actually moves; that is why the view appears to swing as you strafe:
void world_view::MoveRight(float units) {
    struct_world_view.vLookAt.x += units;
    struct_world_view.vPosition.x += units;
}
Edit: also, if you only want to be able to move right, you may want to check for negative units.
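If you keep a separate MoveLeft, a minimal counterpart (a sketch based on the code above) is to delegate with the sign flipped:

void world_view::MoveLeft(float units) {
    MoveRight(-units); // moving left is moving right by a negative amount
}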
Related
I have been developing a 3d chessboard and I have been stuck trying to drag the pieces for a few days now.
Once I select an object using my ray-caster, I start my dragging function, which calculates the difference between the current location of the mouse (in world coordinates) and its previous location; I then translate my object by that difference.
I debug my ray-caster by drawing lines so I am sure those coordinates are accurate.
Translating my object based on the ray-caster coordinates only moves the object a fraction of the distance it should actually go.
Am I missing a step?
-Calvin
I believe my issue is in this line of code:
glm::vec3 World_Delta = Current_World_Location - World_Start_Location;
If I multiply the equation by 20 the object starts to move more like I would expect it to, but it is never completely accurate.
Below is some relevant code:
RAY-CASTING:
void CastRay(int mouse_x, int mouse_y) {
    int Object_Selected = -1;
    float Closest_Object = -1;
    // getWorldCoordinates calls glm::unProject
    nearPoint = Input_Math.getWorldCoordinates(glm::vec3(mouse_x, Window_Input_Info.getScreenHeight()-mouse_y, 0.0f));
    farPoint  = Input_Math.getWorldCoordinates(glm::vec3(mouse_x, Window_Input_Info.getScreenHeight()-mouse_y, 1.0f));
    glm::vec3 direction = Input_Math.normalize(farPoint - nearPoint);
    // getObjectStack() retrieves all objects in the current scene
    std::vector<LoadOBJ> objectList = Object_Input_Info.getObjectStack();
    for (int i = 0; i < objectList.size(); i++) {
        std::vector<glm::vec3> Vertices = objectList[i].getVertices();
        for (int j = 0; j < Vertices.size(); j++) {
            if ( ( j + 1 ) % 3 == 0 ) {
                glm::vec3 face_normal = Input_Math.normalize(Input_Math.CrossProduct(Vertices[j-1] - Vertices[j-2], Vertices[j] - Vertices[j-2]));
                float nDotL = glm::dot(direction, face_normal);
                if (nDotL <= 0.0f) { //if nDotL == 0 { Perpendicular } else if nDotL < 0 { SameDirection } else { OppositeDirection }
                    float distance = glm::dot(face_normal, (Vertices[j-2] - nearPoint)) / nDotL;
                    glm::vec3 p = nearPoint + distance * direction;
                    glm::vec3 n1 = Input_Math.CrossProduct(Vertices[j-1] - Vertices[j-2], p - Vertices[j-2]);
                    glm::vec3 n2 = Input_Math.CrossProduct(Vertices[j] - Vertices[j-1], p - Vertices[j-1]);
                    glm::vec3 n3 = Input_Math.CrossProduct(Vertices[j-2] - Vertices[j], p - Vertices[j]);
                    if (glm::dot(face_normal, n1) >= 0.0f && glm::dot(face_normal, n2) >= 0.0f && glm::dot(face_normal, n3) >= 0.0f) {
                        if (p.z > Closest_Object) {
                            // I create this "drag plane" to be used by my dragging function.
                            Drag_Plane[0] = glm::vec3(Vertices[j-2].x, Vertices[j-2].y, p.z);
                            Drag_Plane[1] = glm::vec3(Vertices[j-1].x, Vertices[j-1].y, p.z);
                            Drag_Plane[2] = glm::vec3(Vertices[j].x,   Vertices[j].y,   p.z);
                            // This is the object we selected in the scene
                            Object_Selected = i;
                            // These are the coordinates where the ray intersected the object
                            World_Start_Location = p;
                        }
                    }
                }
            }
        }
    }
    if (Object_Selected >= 0) { // If an object was intersected by the ray
        // selectObject -> simply sets the boolean "dragging" to true
        selectObject(Object_Selected, mouse_x, mouse_y);
    }
}
DRAGGING:
void DragObject(int mouse_x, int mouse_y) {
    if (dragging) {
        // Find the coordinates where the ray intersects the "drag plane" set by the original object intersection
        farPoint  = Input_Math.getWorldCoordinates(glm::vec3(mouse_x, Window_Input_Info.getScreenHeight()-mouse_y, 1.0f));
        nearPoint = Input_Math.getWorldCoordinates(glm::vec3(mouse_x, Window_Input_Info.getScreenHeight()-mouse_y, 0.0f));
        glm::vec3 direction = Input_Math.normalize(farPoint - nearPoint);
        glm::vec3 face_normal = Input_Math.normalize(Input_Math.CrossProduct(Drag_Plane[1] - Drag_Plane[0], Drag_Plane[2] - Drag_Plane[0]));
        float nDotL = glm::dot(direction, face_normal);
        float distance = glm::dot(face_normal, (Drag_Plane[0] - nearPoint)) / nDotL;
        glm::vec3 Current_World_Location = nearPoint + distance * direction;
        // Calculate the difference between the current mouse location and its previous location
        glm::vec3 World_Delta = Current_World_Location - World_Start_Location;
        // Set the "start location" to the current location for the next loop
        World_Start_Location = Current_World_Location;
        // Get the current object
        Object_Input_Info = Object_Input_Info.getObject(currentObject);
        // Adds a translation matrix to the stack
        Object_Input_Info.TranslateVertices(World_Delta.x, World_Delta.y, World_Delta.z);
        // Calculates the new vertices
        Object_Input_Info.Load_Data();
        // Puts the new object back
        Object_Input_Info.Update_Object_Stack(currentObject);
    }
}
I have already faced problems similar to what you're reporting.
Instead of keeping track of the translation during mouse movement, you can do the following:
In your mouse button callback, store a 'Delta' vector from the mouse position in world coordinates (P_mouse) to your object position (P_object). It would be something like:
Delta = P_object - P_mouse;
For every call of your mouse motion callback, you just need to update the object position by:
P_object = P_mouse + Delta;
Notice that Delta is constant during the whole dragging process.
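A minimal sketch of this scheme using glm (the callback names and the global Delta are illustrative, not from your code):

#include <glm/glm.hpp>

glm::vec3 Delta; // offset from the mouse to the object, fixed for the whole drag

// Mouse button callback: capture the offset once, when the drag starts
void onDragStart(const glm::vec3& P_mouse, const glm::vec3& P_object) {
    Delta = P_object - P_mouse;
}

// Mouse motion callback: the object follows the mouse at that constant offset
glm::vec3 onDragMove(const glm::vec3& P_mouse) {
    return P_mouse + Delta; // new object position
}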
So I've got a class Label that inherits from osg::Geode, which I draw in world space in OpenSceneGraph. After displaying each frame, I then want to read the screen space coordinates of
each Label, so I can find out how much they overlap in the screen space. To this end, I created a class ScreenSpace which should calculate this (the interesting function is calc_screen_coords.)
I wrote a small subroutine that dumps each frame with some extra information, including the ScreenSpace box which represents what the program thinks the screen space coordinates are:
Now in the above picture, there seems to be no problem; but if I rotate it to the other side (with my mouse), then it looks quite different:
And that is what I don't understand.
Is my world to screen space calculation wrong?
Or am I getting the wrong BoundingBox from the Drawable?
Or maybe it has something to do with the setAutoRotateToScreen(true) directive that I give the osgText::Text object?
Is there a better way to do this? Should I try to use a Billboard instead? How would I do that though? (I tried and it totally didn't work for me — I must be missing something...)
Here is the source code for calculating the screen space coordinates of a Label:
struct Pixel {
    // elided methods...
    int x;
    int y;
};
// Forward declarations:
pair<Pixel, Pixel> calc_screen_coords(const osg::BoundingBox& box, const osg::Camera* cam);
void rearrange(Pixel& left, Pixel& right);
class ScreenSpace {
public:
    ScreenSpace(const Label* label, const osg::Camera* cam)
    {
        BoundingBox box = label->getDrawable(0)->computeBound();
        tie(bottom_left_, upper_right_) = calc_screen_coords(box, cam);
        rearrange(bottom_left_, upper_right_);
    }
    // elided methods...
private:
    Pixel bottom_left_;
    Pixel upper_right_;
};
pair<Pixel, Pixel> calc_screen_coords(const osg::BoundingBox& box, const osg::Camera* cam)
{
    Vec4d vec (box.xMin(), box.yMin(), box.zMin(), 1.0);
    Vec4d veq (box.xMax(), box.yMax(), box.zMax(), 1.0);
    Matrixd transmat
        = cam->getViewMatrix()
        * cam->getProjectionMatrix()
        * cam->getViewport()->computeWindowMatrix();
    vec = vec * transmat;
    vec = vec / vec.w();
    veq = veq * transmat;
    veq = veq / veq.w();
    return make_pair(
        Pixel(static_cast<int>(vec.x()), static_cast<int>(vec.y())),
        Pixel(static_cast<int>(veq.x()), static_cast<int>(veq.y()))
    );
}
inline void swap(int& v, int& w)
{
    int temp = v;
    v = w;
    w = temp;
}

inline void rearrange(Pixel& left, Pixel& right)
{
    if (left.x > right.x) {
        swap(left.x, right.x);
    }
    if (left.y > right.y) {
        swap(left.y, right.y);
    }
}
And here is the construction of Label (I tried to abridge it a little):
// Forward declaration:
Geometry* createLeader(straph::Point pos, double height, Color color);
class Label : public osg::Geode {
public:
    Label(font, fontSize, text, color, position, height, margin, bgcolor, leaderColor)
    {
        osgText::Text* txt = new osgText::Text;
        txt->setFont(font);
        txt->setColor(color.vec4());
        txt->setCharacterSize(fontSize);
        txt->setText(text);
        // Set display properties and height
        txt->setAlignment(osgText::TextBase::CENTER_BOTTOM);
        txt->setAutoRotateToScreen(true);
        txt->setPosition(toVec3(position, height));
        // Create bounding box and leader
        typedef osgText::TextBase::DrawModeMask DMM;
        unsigned drawMode = DMM::TEXT | DMM::BOUNDINGBOX;
        drawMode |= DMM::FILLEDBOUNDINGBOX;
        txt->setBoundingBoxColor(bgcolor.vec4());
        txt->setBoundingBoxMargin(margin);
        txt->setDrawMode(drawMode);
        this->addDrawable(txt);
        Geometry* leader = createLeader(position, height, leaderColor);
        this->addDrawable(leader);
    }
    // elided methods and data members...
};
Geometry* createLeader(straph::Point pos, double height, Color color)
{
    Geometry* leader = new Geometry();
    Vec3Array* array = new Vec3Array();
    array->push_back(Vec3(pos.x, pos.y, height));
    array->push_back(Vec3(pos.x, pos.y, 0.0f));
    Vec4Array* colors = new Vec4Array(1);
    (*colors)[0] = color.vec4();
    leader->setColorArray(colors);
    leader->setColorBinding(Geometry::BIND_OVERALL);
    leader->setVertexArray(array);
    leader->addPrimitiveSet(new DrawArrays(PrimitiveSet::LINES, 0, 2));
    LineWidth* lineWidth = new osg::LineWidth();
    lineWidth->setWidth(2.0f);
    leader->getOrCreateStateSet()->setAttributeAndModes(lineWidth, osg::StateAttribute::ON);
    return leader;
}
Any pointers or help?
I found a solution that works for me, but is also unsatisfying, so if you have a better solution, I'm all ears.
Basically, I take different points of the Label that I know will be at certain places, and combine them to get the screen-space box. For the left and right sides, I take the bounds of the regular bounding box, and for the top and bottom, I calculate it from the center of the bounding box and the position of the label.
ScreenSpace::ScreenSpace(const Label* label, const osg::Camera* cam)
{
    const Matrixd transmat
        = cam->getViewMatrix()
        * cam->getProjectionMatrix()
        * cam->getViewport()->computeWindowMatrix();

    auto topixel = [&](Vec3 v) -> Pixel {
        Vec4 vec(v.x(), v.y(), v.z(), 1.0);
        vec = vec * transmat;
        vec = vec / vec.w();
        return Pixel(static_cast<int>(vec.x()), static_cast<int>(vec.y()));
    };

    // Get left/right coordinates
    vector<int> xs; xs.reserve(8);
    vector<int> ys; ys.reserve(8);
    BoundingBox box = label->getDrawable(0)->computeBound();
    for (int i = 0; i < 8; i++) {
        Pixel p = topixel(box.corner(i));
        xs.push_back(p.x);
        ys.push_back(p.y);
    }
    int xmin = *min_element(xs.begin(), xs.end());
    int xmax = *max_element(xs.begin(), xs.end());

    // Get up/down coordinates
    int ymin = topixel(dynamic_cast<const osgText::Text*>(label->getDrawable(0))->getPosition()).y;
    int center = topixel(box.center()).y;
    int ymax = center + (center - ymin);

    bottom_left_ = Pixel(xmin, ymin);
    upper_right_ = Pixel(xmax, ymax);
    z_ = distance_from_camera(label, cam);
}
I'm working on a program with IK and have run into what I had at first thought was a trivial problem but have since had trouble solving it.
Background:
Everything is in 3d space.
I'm using 3d Vectors and Quaternions to represent transforms.
I have a limb which we will call V1.
I want to rotate it onto V2.
I was getting the angle between V1 and V2.
Then the axis for rotation by V1 cross V2.
Then making a Quaternion from the axis and angle.
I then take the limb's current orientation and multiply it by the axis-angle quaternion.
This I believe is my desired local space for the limb.
This limb is attached to a series of other links. To get the world space, I traverse up the chain, combining each parent's local space with the child's local space, until I reach the root.
This seems to work fine if the vector I am rotating to is contained within the XY plane, or if the body the limb is attached to hasn't been modified. If anything has been modified (for example, rotating the root node), then on the first iteration the vector rotates very close to the desired vector, but after that it begins to spin all over the place and never reaches the goal.
I've gone through all the math line by line and it appears to be correct. I'm not sure if there is something I do not know about or am simply overlooking. Is my logic sound? Or am I unaware of something? Any help is greatly appreciated!
Quaternion::Quaternion(const Vector& axis, const float angle)
{
    float sin_half_angle = sinf( angle / 2 );
    v.set_x( axis.get_x() * sin_half_angle );
    v.set_y( axis.get_y() * sin_half_angle );
    v.set_z( axis.get_z() * sin_half_angle );
    w = cosf( angle / 2 );
}

Quaternion Quaternion::operator* (const Quaternion& quat) const
{
    Quaternion result;
    Vector v1( this->v );
    Vector v2( quat.v );
    float s1 = this->w;
    float s2 = quat.w;
    result.w = s1 * s2 - v1.Dot(v2);
    result.v = v2 * s1 + v1 * s2 + v1.Cross(v2);
    result.Normalize();
    return result;
}

Vector Quaternion::operator* (const Vector& vec) const
{
    Quaternion quat_vec(vec.get_x(), vec.get_y(), vec.get_z(), 0.0f);
    Quaternion rotation( *this );
    Quaternion rotated_vec = rotation * ( quat_vec * rotation.Conjugate() );
    return rotated_vec.v;
}

Quaternion Quaternion::Conjugate()
{
    Quaternion result( *this );
    result.v = result.v * -1.0f;
    return result;
}
Transform Transform::operator*(const Transform& tran)
{
    return Transform( mOrient * tran.getOrient(), mTrans + ( mOrient * tran.getTrans() ) );
}
Transform Joint::GetWorldSpace()
{
    Transform world = local_space;
    Joint* par = GetParent();
    while ( par )
    {
        world = par->GetLocalSpace() * world;
        par = par->GetParent();
    }
    return world;
}
void RotLimb()
{
    Vector end_effector_worldspace_pos = end_effector->GetWorldSpace().get_Translation();
    Vector parent_worldspace_pos = parent->GetWorldSpace().get_Translation();
    Vector parent_To_end_effector = ( end_effector_worldspace_pos - parent_worldspace_pos ).Normalize();
    Vector parent_To_goal = ( goal_pos - parent_worldspace_pos ).Normalize();
    float dot = parent_To_end_effector.Dot( parent_To_goal );
    Vector rot_axis(0.0f, 0.0f, 1.0f);
    float angle = 0.0f;
    if (1.0f - fabs(dot) > EPSILON)
    {
        //angle = parent_To_end_effector.Angle( parent_To_goal );
        rot_axis = parent_To_end_effector.Cross( parent_To_goal ).Normalize();
        parent->RotateJoint( rot_axis, acos(dot) );
    }
}
void Joint::Rotate( const Vector& axis, const float rotation )
{
    mLocalSpace = mLocalSpace * Quaternion( axis, rotation );
}
You are correct when you write in a comment that the axis should be computed in the local coordinate frame of the joint:
I'm wondering if this issue is occurring because I'm doing the calculations to get the axis and angle in world space for the joint, but then applying it to the local space.
The rotation of the axis from the world frame to the joint frame will look something like this:
rot_axis = parent->GetWorldSpace().Inverse().get_Rotation() * rot_axis
There can be other issues to debug, but it's the only logical error I can see in the code that you have posted.
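In context, the fix would slot into RotLimb roughly like this (a sketch; it assumes Transform exposes Inverse() and get_Rotation() as used in the line above):

if (1.0f - fabs(dot) > EPSILON)
{
    rot_axis = parent_To_end_effector.Cross( parent_To_goal ).Normalize();
    // Bring the world-space rotation axis into the joint's local frame first
    rot_axis = parent->GetWorldSpace().Inverse().get_Rotation() * rot_axis;
    parent->RotateJoint( rot_axis, acos(dot) );
}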
I have a very rudimentary camera which generates 3 vectors for use with gluLookAt(...). The problem is I'm not sure if this is correct; I adapted code from something my lecturer showed us (I think he got it from somewhere).
This actually works until you spin the mouse round in circles, at which point the camera starts to rotate around the z-axis. That shouldn't happen, as the mouse coords are only attached to the pitch and yaw, not the roll.
Camera
// Camera.hpp
#ifndef MOOT_CAMERA_INCLUDE_HPP
#define MOOT_CAMERA_INCLUDE_HPP
#include <GL/gl.h>
#include <GL/glu.h>
#include <boost/utility.hpp>
#include <Moot/Platform.hpp>
#include <Moot/Vector3D.hpp>
namespace Moot
{
    class Camera : public boost::noncopyable
    {
    protected:
        Vec3f m_position, m_up, m_right, m_forward, m_viewPoint;
        uint16_t m_height, m_width;

    public:
        Camera()
        {
            m_forward = Vec3f(0.0f, 0.0f, -1.0f);
            m_right   = Vec3f(1.0f, 0.0f, 0.0f);
            m_up      = Vec3f(0.0f, 1.0f, 0.0f);
        }

        void setup(uint16_t setHeight, uint16_t setWidth)
        {
            m_height = setHeight;
            m_width  = setWidth;
        }

        void move(float distance)
        {
            m_position += (m_forward * distance);
        }

        void addPitch(float setPitch)
        {
            m_forward = (m_forward * cos(setPitch)) + (m_up * sin(setPitch));
            m_forward.setNormal();
            // Cross Product
            m_up = (m_forward / m_right) * -1;
        }

        void addYaw(float setYaw)
        {
            m_forward = (m_forward * cos(setYaw)) - (m_right * sin(setYaw));
            m_forward.setNormal();
            // Cross Product
            m_right = m_forward / m_up;
        }

        void addRoll(float setRoll)
        {
            m_right = (m_right * cos(setRoll)) + (m_up * sin(setRoll));
            m_right.setNormal();
            // Cross Product
            m_up = (m_forward / m_right) * -1;
        }

        virtual void apply() = 0;
    }; // Camera
} // Moot

#endif
Snippet from update cycle
// Mouse movement
m_camera.addPitch((float)input().mouseDeltaY() * 0.001);
m_camera.addYaw((float)input().mouseDeltaX() * 0.001);
apply() is pure virtual in the camera class and defined in an inherited class; it is called from the draw function of the game loop.
void apply()
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(40.0, (GLdouble)m_width / (GLdouble)m_height, 0.5, 20.0);
    m_viewPoint = m_position + m_forward;
    gluLookAt( m_position.getX(),  m_position.getY(),  m_position.getZ(),
               m_viewPoint.getX(), m_viewPoint.getY(), m_viewPoint.getZ(),
               m_up.getX(),        m_up.getY(),        m_up.getZ() );
}
Don't accumulate the transforms in your vectors, store the angles and generate the vectors on-the-fly.
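For example (a sketch only; the trig conventions, and treating your operator/ as the cross product, are assumptions based on the class above):

void Camera::rebuildBasis(float yaw, float pitch)
{
    // Regenerate the basis from the two stored angles every frame instead of
    // incrementally rotating the stored vectors. Assumed conventions: yaw about
    // the world y-axis, pitch about the camera x-axis, and at yaw = pitch = 0
    // the camera looks down -z.
    m_forward = Vec3f(cosf(pitch) * sinf(yaw), sinf(pitch), -cosf(pitch) * cosf(yaw));
    m_right   = Vec3f(cosf(yaw), 0.0f, sinf(yaw));
    m_up      = m_right / m_forward; // cross product, in this class's notation
}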
EDIT: Floating-point stability. Compare the output of a and b:
#include <iostream>
using namespace std;

int main()
{
    const float small = 0.00001f;
    const unsigned int times = 100000;

    float a = 0.0f;
    for( unsigned int i = 0; i < times; ++i )
    {
        a += small;
    }
    cout << a << endl;

    float b = 0.0f;
    b = small * times;
    cout << b << endl;

    return 0;
}
Output:
1.00099
1
I am not sure where to start, as you are posting only small snippets, not enough to fully reproduce the problem.
In your methods you update all parameters, and your parameters depend on previous values. I am not sure what exactly you call, because you posted only these two calls:
m_camera.addPitch((float)input().mouseDeltaY() * 0.001);
m_camera.addYaw((float)input().mouseDeltaX() * 0.001);
You should somehow break that circle by adding new parameters, and the output should depend on the input (for example, m_position shouldn't depend on m_forward).
You should also initialize all variables in the constructor; I see you are initializing only m_forward, m_right and m_up (by the way, use an initialization list).
You might want to reconsider your approach in favor of using quaternion rotations as described in this paper. This has the advantage of representing all of your accumulated rotations as a single rotation about a single vector (only need to keep track of a single quaternion) which you can apply to the canonical orientation vectors (up, norm and right) describing the camera orientation. Furthermore, since you're using C++, you can use the Boost quaternion class to manage the math of most of it.
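A minimal sketch of that bookkeeping with boost::math::quaternion (the helper names and the accumulation policy are illustrative):

#include <boost/math/quaternion.hpp>
#include <cmath>

typedef boost::math::quaternion<float> Quat;

// Unit quaternion for a rotation of 'angle' radians about the unit axis (ax, ay, az)
Quat fromAxisAngle(float ax, float ay, float az, float angle)
{
    float s = std::sin(angle * 0.5f);
    return Quat(std::cos(angle * 0.5f), ax * s, ay * s, az * s);
}

// Rotate the vector (x, y, z) by the unit quaternion q: v' = q * v * conj(q)
void rotate(const Quat& q, float& x, float& y, float& z)
{
    Quat p(0.0f, x, y, z);
    Quat r = q * p * boost::math::conj(q);
    x = r.R_component_2();
    y = r.R_component_3();
    z = r.R_component_4();
}

// Per frame: fold this frame's pitch/yaw deltas into one accumulated rotation,
// then apply it to the canonical up/forward/right vectors:
//   total = fromAxisAngle(0, 1, 0, yawDelta) * fromAxisAngle(1, 0, 0, pitchDelta) * total;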
I have a camera object that I put together from reading on the net. It handles moving forward and backward, strafing left and right, and even looking around with the mouse. But when I move in any direction and try to look around at the same time, it jumps all over the place; when I don't move and look around, it's fine.
I'm hoping someone can help me work out why I can't move and look around at the same time.
main.h
#include "SDL/SDL.h"
#include "SDL/SDL_opengl.h"
#include <cmath>
#define CAMERASPEED 0.03f // The Camera Speed
struct tVector3 // Extended 3D vector struct
{
    tVector3() {} // Struct constructor
    tVector3(float new_x, float new_y, float new_z) // Init constructor
    { x = new_x; y = new_y; z = new_z; }

    // overload + operator
    tVector3 operator+(tVector3 vVector) { return tVector3(vVector.x + x, vVector.y + y, vVector.z + z); }
    // overload - operator
    tVector3 operator-(tVector3 vVector) { return tVector3(x - vVector.x, y - vVector.y, z - vVector.z); }
    // overload * operator
    tVector3 operator*(float number) { return tVector3(x * number, y * number, z * number); }
    // overload / operator
    tVector3 operator/(float number) { return tVector3(x / number, y / number, z / number); }

    float x, y, z; // 3D vector coordinates
};
class CCamera
{
public:
    tVector3 mPos;
    tVector3 mView;
    tVector3 mUp;

    void Strafe_Camera(float speed);
    void Move_Camera(float speed);
    void Rotate_View(float speed);
    void Position_Camera(float pos_x, float pos_y, float pos_z,
                         float view_x, float view_y, float view_z,
                         float up_x, float up_y, float up_z);
};
void Draw_Grid();
camera.cpp
#include "main.h"
void CCamera::Position_Camera(float pos_x, float pos_y, float pos_z,
                              float view_x, float view_y, float view_z,
                              float up_x, float up_y, float up_z)
{
    mPos  = tVector3(pos_x, pos_y, pos_z);
    mView = tVector3(view_x, view_y, view_z);
    mUp   = tVector3(up_x, up_y, up_z);
}

void CCamera::Move_Camera(float speed)
{
    tVector3 vVector = mView - mPos;
    mPos.x  = mPos.x  + vVector.x * speed;
    mPos.z  = mPos.z  + vVector.z * speed;
    mView.x = mView.x + vVector.x * speed;
    mView.z = mView.z + vVector.z * speed;
}

void CCamera::Strafe_Camera(float speed)
{
    tVector3 vVector = mView - mPos;
    tVector3 vOrthoVector;
    vOrthoVector.x = -vVector.z;
    vOrthoVector.z =  vVector.x;
    mPos.x  = mPos.x  + vOrthoVector.x * speed;
    mPos.z  = mPos.z  + vOrthoVector.z * speed;
    mView.x = mView.x + vOrthoVector.x * speed;
    mView.z = mView.z + vOrthoVector.z * speed;
}

void CCamera::Rotate_View(float speed)
{
    tVector3 vVector = mView - mPos;
    tVector3 vOrthoVector;
    vOrthoVector.x = -vVector.z;
    vOrthoVector.z =  vVector.x;
    mView.z = (float)(mPos.z + sin(speed)*vVector.x + cos(speed)*vVector.z);
    mView.x = (float)(mPos.x + cos(speed)*vVector.x - sin(speed)*vVector.z);
}
and the mouse motion code:
void processEvents()
{
    int mid_x = screen_width >> 1;
    int mid_y = screen_height >> 1;
    int mpx = event.motion.x;
    int mpy = event.motion.y;
    float angle_y = 0.0f;
    float angle_z = 0.0f;
    while(SDL_PollEvent(&event))
    {
        switch(event.type)
        {
        case SDL_MOUSEMOTION:
            if( (mpx == mid_x) && (mpy == mid_y) ) return;
            // Get the direction from the mouse cursor, set a reasonable maneuvering speed
            angle_y = (float)( (mid_x - mpx) ) / 1000; //1000
            angle_z = (float)( (mid_y - mpy) ) / 1000; //1000
            // The higher the value is the faster the camera looks around.
            objCamera.mView.y += angle_z * 2;
            // limit the rotation around the x-axis
            if((objCamera.mView.y - objCamera.mPos.y) > 8) objCamera.mView.y = objCamera.mPos.y + 8;
            if((objCamera.mView.y - objCamera.mPos.y) < -8) objCamera.mView.y = objCamera.mPos.y - 8;
            objCamera.Rotate_View(-angle_y);
            SDL_WarpMouse(mid_x, mid_y);
            break;
        case SDL_KEYUP:
            objKeyb.handleKeyboardEvent(event, true);
            break;
        case SDL_KEYDOWN:
            objKeyb.handleKeyboardEvent(event, false);
            break;
        case SDL_QUIT:
            quit = true;
            break;
        case SDL_VIDEORESIZE:
            screen = SDL_SetVideoMode( event.resize.w, event.resize.h, screen_bpp,
                                       SDL_OPENGL | SDL_HWSURFACE | SDL_RESIZABLE | SDL_GL_DOUBLEBUFFER | SDL_HWPALETTE );
            screen_width = event.resize.w;
            screen_height = event.resize.h;
            init_opengl();
            std::cout << "Resized to width: " << event.resize.w << " height: " << event.resize.h << std::endl;
            break;
        default:
            break;
        }
    }
}
I'm not entirely sure what you are doing above.
Personally I would just use a simple 4x4 matrix. Any implementation will do. To rotate, you simply need to use the change in mouse x and y as Euler inputs for rotation around the y and x axes. There is lots of code available all over the internet that will do this for you.
Some matrix libraries won't provide you with a "MoveForward()" function. If this is the case, it's OK; moving forward is pretty easy. The third column (or row, if you are using row-major matrices) is your forward vector. Extract it. Normalise it (it really should be normalised anyway, so this step may not be needed). Multiply it by how much you wish to move forward and then add it to the position (the fourth column/row).
Now here is the odd part. A view matrix is a special type of matrix. The matrix above defines the view space. If you multiply your current model matrix by this matrix you will not get the answer you expect, because you wish to transform the world such that the camera is at the origin. As such you need to, effectively, undo the camera transformation to re-orient things to the view defined above. To do this you multiply your model matrix by the inverse of the camera matrix; that inverse is the view matrix.
You now have an object defined in the correct view space.
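As a sketch of the move-forward step described above (row-major as in the code below; the GetRow/SetRow accessors are hypothetical stand-ins for whatever your matrix class provides):

void MoveForward(MathsLib::Matrix4x4& camera, float step)
{
    // Row 2 of a row-major camera matrix is the forward (z) basis vector
    MathsLib::Vector4 forward = camera.GetRow(2);
    forward.Normalise(); // should already be unit length, so this may be redundant
    // Row 3 is the position: translate along forward by 'step'
    camera.SetRow(3, camera.GetRow(3) + forward * step);
}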
This is my very simple camera class. It does not handle the functionality you describe but hopefully will give you a few ideas on how to set up the class (be warned, I use row-major, i.e. DirectX-style, matrices).
BaseCamera.h:
#ifndef BASE_CAMERA_H_
#define BASE_CAMERA_H_
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
#include "Maths/Vector4.h"
#include "Maths/Matrix4x4.h"
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
class BaseCamera
{
protected:
    bool mDirty;
    MathsLib::Matrix4x4 mCameraMat;
    MathsLib::Matrix4x4 mViewMat;

public:
    BaseCamera();
    BaseCamera( const BaseCamera& camera );
    BaseCamera( const MathsLib::Vector4& vPos, const MathsLib::Vector4& vLookAt );
    BaseCamera( const MathsLib::Matrix4x4& matCamera );

    bool IsDirty() const;
    void SetDirty();

    MathsLib::Matrix4x4& GetOrientationMatrix();
    const MathsLib::Matrix4x4& GetOrientationMatrix() const;

    MathsLib::Matrix4x4& GetViewMatrix();
};
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
inline MathsLib::Matrix4x4& BaseCamera::GetOrientationMatrix()
{
    return mCameraMat;
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
inline const MathsLib::Matrix4x4& BaseCamera::GetOrientationMatrix() const
{
    return mCameraMat;
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
inline bool BaseCamera::IsDirty() const
{
    return mDirty;
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
inline void BaseCamera::SetDirty()
{
    mDirty = true;
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
#endif
BaseCamera.cpp:
#include "Render/stdafx.h"
#include "BaseCamera.h"
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
BaseCamera::BaseCamera() :
    mDirty( true )
{
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
BaseCamera::BaseCamera( const BaseCamera& camera ) :
    mDirty( camera.mDirty ),
    mCameraMat( camera.mCameraMat ),
    mViewMat( camera.mViewMat )
{
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
BaseCamera::BaseCamera( const MathsLib::Vector4& vPos, const MathsLib::Vector4& vLookAt ) :
    mDirty( true )
{
    MathsLib::Vector4 vDir = (vLookAt - vPos).Normalise();
    MathsLib::Vector4 vLat = MathsLib::CrossProduct( MathsLib::Vector4( 0.0f, 1.0f, 0.0f ), vDir ).Normalise();
    MathsLib::Vector4 vUp  = MathsLib::CrossProduct( vDir, vLat ); //.Normalise();
    mCameraMat.Set( vLat, vUp, vDir, vPos );
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
BaseCamera::BaseCamera( const MathsLib::Matrix4x4& matCamera ) :
    mDirty( true ),
    mCameraMat( matCamera )
{
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
MathsLib::Matrix4x4& BaseCamera::GetViewMatrix()
{
    if ( IsDirty() )
    {
        mViewMat = mCameraMat.Inverse();
        mDirty = false;
    }
    return mViewMat;
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
I agree with Goz. You need to use homogeneous 4x4 matrices if you want to represent affine transformations such as rotate + translate.
Assuming row-major representation, if there is no scaling or shearing your 4x4 matrix represents the following:
Rows 0 to 2: the three basis vectors of your local co-ordinate system (i.e. x, y, z)
Row 3: the current translation from the origin
So to move along your local x vector, as Goz says, because you can assume it's a unit vector if there is no scale/shear, you just multiply it by the move step (+ve or -ve) and then add the resultant vector onto row 3 of the matrix.
So taking a simple example of starting at the origin with your local frame set to the world frame, your matrix would look something like this:
1 0 0 0 <--- x unit vector
0 1 0 0 <--- y unit vector
0 0 1 0 <--- z unit vector
0 0 0 1 <--- translation vector
In terms of the way most game cameras work, the axes map like this:
x axis <=> Camera Pan Left/Right
y axis <=> Camera Pan Up/Down
z axis <=> Camera Zoom In/Out
So if I rotate my entire frame of reference to, say, look at a new point LookAt, then, as Goz does in his BaseCamera overloaded constructor, you construct a new local co-ordinate system and set it into your matrix (all mCameraMat.Set( vLat, vUp, vDir, vPos ) typically does is set those four rows of the matrix, i.e. vLat is row 0, vUp row 1, vDir row 2 and vPos row 3).
Then zooming in/out would just be row 3 += row 2 * stepval, so the current position is translated along the z basis rather than replaced.
Again, as Goz rightly points out, you then need to transform this back into world space, and this is done by multiplying by the inverse of the view matrix.
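In the same vein (again with hypothetical GetRow/SetRow accessors standing in for whatever your matrix class provides), pan and zoom are just translations along the matching basis row:

void PanAndZoom(MathsLib::Matrix4x4& camera, float panStep, float zoomStep)
{
    // Pan right: translate the position (row 3) along the x basis (row 0)
    camera.SetRow(3, camera.GetRow(3) + camera.GetRow(0) * panStep);
    // Zoom in: translate along the z basis (row 2); note this adds to the
    // current position rather than replacing it
    camera.SetRow(3, camera.GetRow(3) + camera.GetRow(2) * zoomStep);
}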