Changing FOV (field of view) while moving forward - C++

I have a situation: I want to smoothly change the FOV while moving forward. In the camera settings in the parent class I set the default value:
FollowCamera->FieldOfView = 90.0f;
Then in the MoveForward function I set it like this:
void AStarMotoVehicle::MoveForward(float Axis)
{
    if ((Controller != NULL) && (Axis != 0.0f) && (bDead != true))
    {
        // Angle of direction of camera on Yaw
        const FRotator Rotation = Controller->GetControlRotation();
        const FRotator YawRotation(0, Rotation.Yaw, 0);
        const FVector Direction = FRotationMatrix(YawRotation).GetUnitAxis(EAxis::X);
        AddMovementInput(Direction, Axis);
        FollowCamera->FieldOfView = 140.0f;
    }
}
It does work: when I move forward the FOV changes to 140, but the change is rough and happens instantly. I want it to go smoothly from 90 to 140.
Can you help me with it?

FMath::FInterpTo will do this for you:
https://docs.unrealengine.com/en-US/API/Runtime/Core/Math/FMath/FInterpTo/index.html
Each time MoveForward is called, the FOV will be interpolated toward 140.
To slow down or speed up the transition, decrease or increase the InterpSpeed value.
With your code:
void AStarMotoVehicle::MoveForward(float Axis)
{
    if ((Controller != NULL) && (Axis != 0.0f) && (bDead != true))
    {
        // Angle of direction of camera on Yaw
        const FRotator Rotation = Controller->GetControlRotation();
        const FRotator YawRotation(0, Rotation.Yaw, 0);
        const FVector Direction = FRotationMatrix(YawRotation).GetUnitAxis(EAxis::X);
        AddMovementInput(Direction, Axis);

        // Could check that World is valid before dereferencing it
        UWorld* World = GetWorld();
        const float CurrentFOV = FollowCamera->FieldOfView;
        const float InterpSpeed = 2.0f;
        // FInterpTo expects the frame's delta time, not the total elapsed time
        FollowCamera->FieldOfView = FMath::FInterpTo(CurrentFOV,
                                                     140.0f,
                                                     World->GetDeltaSeconds(),
                                                     InterpSpeed);
    }
}
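If you also want the FOV to ease back to 90 when the input stops, one option (a sketch that is not part of the original answer; it assumes a hypothetical float member TargetFOV and that the actor ticks) is to only pick the target in MoveForward and do the interpolation every frame in Tick:
// Sketch only: TargetFOV is a hypothetical float member initialized to 90.0f.
void AStarMotoVehicle::Tick(float DeltaTime)
{
    Super::Tick(DeltaTime);
    FollowCamera->FieldOfView =
        FMath::FInterpTo(FollowCamera->FieldOfView, TargetFOV, DeltaTime, 2.0f);
}

void AStarMotoVehicle::MoveForward(float Axis)
{
    // ... existing movement code ...
    TargetFOV = (Axis != 0.0f) ? 140.0f : 90.0f; // widen while moving, relax otherwise
}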

Related

Unreal Engine C++ / Calculate movement angle from a projectile

I want to calculate the movement angle, NOT the rotation, of a moving Actor (specifically, a bullet). First off, I've got the location of the Actor and the location one frame before, and I calculate the distance between them with FVector::Dist. After that I get the height difference between the current and last location, then take the arctangent of these two values. I put all of these variables into a raycast.
When I shoot the Actor forwards it looks good (screenshot: Straight), but when I shoot the bullet up, the raycast gets messed up (screenshot: UP).
CODE:
void ABullet::Tick(float DeltaTime)
{
    Super::Tick(DeltaTime);
    if (Counter == true)
    {
        FVector TraceLoc = GetActorLocation();
        FRotator TraceRot = GetActorRotation() + FRotator(0.f, 90.f, 0.f);
        float Distance = FVector::Dist(LastActorLoc, TraceLoc);
        heightdiff = TraceLoc.Z - LastActorLoc.Z;
        TraceRot.Pitch = (atan(heightdiff / Distance)) * (180.f / 3.141592f);
        UE_LOG(LogTemp, Warning, TEXT("Out: %f"), TraceRot.Pitch);
        FVector Start = TraceLoc;
        FVector End = Start + (TraceRot.Vector() * Distance);
        LastActorLoc = TraceLoc;
        LastActorRot = TraceRot;
        FCollisionQueryParams TraceParamsBullet;
        bool bHitBullet = GetWorld()->LineTraceSingleByChannel(TraceHitResultBullet, Start, End, ECC_Visibility, TraceParamsBullet);
        DrawDebugLine(GetWorld(), Start, End, FColor::Red, false, 15.f, 30.f);
    }
    if (Counter == false)
    {
        Counter = true;
        LastActorLoc = GetActorLocation();
        LastActorRot = GetActorRotation() + FRotator(0.f, 90.f, 0.f);
    }
}
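For what it's worth, the elevation angle being assembled by hand above can also be obtained directly from the frame-to-frame position delta; a minimal sketch (not an accepted answer, just an illustration using the same LastActorLoc member):
// Sketch: derive the direction of travel from the per-frame position delta.
FVector TraceLoc = GetActorLocation();
FVector Delta = TraceLoc - LastActorLoc;
if (!Delta.IsNearlyZero())
{
    FVector Dir = Delta.GetSafeNormal();   // actual movement direction in 3D
    FRotator MoveRot = Dir.Rotation();     // pitch/yaw of that direction
    FVector End = TraceLoc + Dir * Delta.Size();
    // trace from TraceLoc to End as before
}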

OpenGL: how to move the camera upward and then come back down onto the grid

I want to implement conventional FPS controls. By pressing the spacebar the camera should move upward and then come back down onto the grid, simulating a jump. Right now I am only able to move the camera upward by some distance, but I don't know how to make the camera come back down.
This is my camera class:
Camera::Camera(glm::vec3 cameraPosition, glm::vec3 cameraFront, glm::vec3 cameraUp)
{
    position = glm::vec3(cameraPosition);
    front = glm::vec3(cameraFront);
    up = glm::vec3(cameraUp);
    // Set to predefined defaults
    yawAngle = -90.0f;
    pitchAngle = 0.0f;
    fieldOfViewAngle = 45.0f;
}

Camera::~Camera() {}

void Camera::panCamera(float yaw)
{
    yawAngle += yaw;
    updateCamera();
}

void Camera::tiltCamera(float pitch)
{
    pitchAngle += pitch;
    // Ensure pitch is in bounds for both + and - values to prevent irregular behavior
    if (pitchAngle > 89.0f)  pitchAngle = 89.0f;
    if (pitchAngle < -89.0f) pitchAngle = -89.0f;
    updateCamera();
}

void Camera::zoomCamera(float zoom)
{
    if (fieldOfViewAngle >= MIN_ZOOM && fieldOfViewAngle <= MAX_ZOOM)
        fieldOfViewAngle -= zoom * 0.1;
    // Limit zoom values to prevent irregular behavior
    if (fieldOfViewAngle <= MIN_ZOOM) fieldOfViewAngle = MIN_ZOOM;
    if (fieldOfViewAngle >= MAX_ZOOM) fieldOfViewAngle = MAX_ZOOM;
}

glm::mat4 Camera::calculateViewMatrix()
{
    return glm::lookAt(position, position + front, up);
}

void Camera::updateCamera()
{
    front.x = cos(glm::radians(yawAngle)) * cos(glm::radians(pitchAngle));
    front.y = sin(glm::radians(pitchAngle));
    front.z = sin(glm::radians(yawAngle)) * cos(glm::radians(pitchAngle));
    front = glm::normalize(front);
}

void Camera::moveForward(float speed)  { position += speed * front; }
void Camera::moveBackward(float speed) { position -= speed * front; }
void Camera::moveLeft(float speed)  { position -= glm::normalize(glm::cross(front, up)) * speed; }
void Camera::moveRight(float speed) { position += glm::normalize(glm::cross(front, up)) * speed; }
void Camera::moveUpward(float speed) { position.y += speed; }
This is how I implement it in main.cpp:
if (glfwGetKey(window, GLFW_KEY_SPACE) == GLFW_PRESS)
{
camera.moveUpward(cameraSpeed);
}
Can someone help me?
You can simulate a jump by applying an acceleration to the y-position. You'll need to add 3 new attributes to your camera class: acceleration (vec3), velocity (vec3), and onGround (boolean).
void Camera::jump() {
    // Apply an upwards acceleration of 10 units. You can experiment with
    // this value or make it physically correct and calculate the
    // best value, but this requires the rest of your program to
    // be in well-defined units.
    this->onGround = false;
    this->acceleration.y = 10.0f;
}

void Camera::update() {
    // Your other update code

    // The acceleration should decrease with gravity.
    // Eventually it'll become negative.
    this->acceleration.y += GRAVITY; // Define 'GRAVITY' as a negative constant.

    // Only update if we're not on the ground.
    if (!this->onGround) {
        // Add the acceleration to the velocity and the velocity to
        // the position.
        this->velocity.y += this->acceleration.y;
        this->position.y += this->velocity.y;
        if (this->position.y <= 0) {
            // When we're back on the ground, reset the variables.
            this->onGround = true;
            this->acceleration.y = 0;
            this->velocity.y = 0;
            this->position.y = 0;
        }
    }
}
And in your event handling you'll call the jump method.
if (glfwGetKey(window, GLFW_KEY_SPACE) == GLFW_PRESS)
{
camera.jump();
}
However, this code probably shouldn't be on the camera, but instead on a Physics object of some sort. But it'll work.
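One caveat with the snippet above: it integrates once per call, so the jump height depends on the frame rate, and the jump impulse is applied to the acceleration rather than the velocity. A delta-time variant (a sketch, assuming the frame time in seconds is passed in and GRAVITY is expressed in units per second squared) keeps the arc consistent:
void Camera::jump() {
    if (this->onGround) {
        this->onGround = false;
        this->velocity.y = 5.0f; // initial upward speed, tune to taste
    }
}

void Camera::update(float dt) {
    if (!this->onGround) {
        // Integrate velocity from gravity, then position from velocity.
        this->velocity.y += GRAVITY * dt;
        this->position.y += this->velocity.y * dt;
        if (this->position.y <= 0.0f) {
            this->onGround = true;
            this->velocity.y = 0.0f;
            this->position.y = 0.0f;
        }
    }
}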

Ray transformation in a Ray - OBB intersection test

I've implemented an algorithm that tests for a ray - AABB intersection and it works fine. But when I try to transform the ray into the AABB's local space (making this a ray - OBB test), I can't get correct results. I've studied several forums and other resources, but I'm still missing something. (Some sources suggest applying the inverted transformation to the ray origin and its end, and only then calculating the direction; others suggest applying it to the origin and the direction.) Can someone point me in the right direction (no pun intended)?
Here are the two functions responsible for the math:
1) Calculating inverses and other things to perform tests
bool Ray::intersectsMesh(const Mesh& mesh, const Transformation& transform) {
    float largestNearIntersection = std::numeric_limits<float>::min();
    float smallestFarIntersection = std::numeric_limits<float>::max();

    glm::mat4 modelTransformMatrix = transform.modelMatrix();
    Box boundingBox = mesh.boundingBox();

    glm::mat4 inverse = glm::inverse(transform.modelMatrix());
    glm::vec4 newOrigin = inverse * glm::vec4(mOrigin, 1.0);
    newOrigin /= newOrigin.w;
    mOrigin = newOrigin;
    mDirection = glm::normalize(inverse * glm::vec4(mDirection, 0.0));

    glm::vec3 xAxis = glm::vec3(glm::column(modelTransformMatrix, 0));
    glm::vec3 yAxis = glm::vec3(glm::column(modelTransformMatrix, 1));
    glm::vec3 zAxis = glm::vec3(glm::column(modelTransformMatrix, 2));
    glm::vec3 OBBTranslation = glm::vec3(glm::column(modelTransformMatrix, 3));

    printf("trans x %f y %f z %f\n", OBBTranslation.x, OBBTranslation.y, OBBTranslation.z);

    glm::vec3 delta = OBBTranslation - mOrigin;
    bool earlyFalseReturn = false;

    calculateIntersectionDistances(xAxis, delta, boundingBox.min.x, boundingBox.max.x, &largestNearIntersection, &smallestFarIntersection, &earlyFalseReturn);
    if (smallestFarIntersection < largestNearIntersection || earlyFalseReturn) { return false; }

    calculateIntersectionDistances(yAxis, delta, boundingBox.min.y, boundingBox.max.y, &largestNearIntersection, &smallestFarIntersection, &earlyFalseReturn);
    if (smallestFarIntersection < largestNearIntersection || earlyFalseReturn) { return false; }

    calculateIntersectionDistances(zAxis, delta, boundingBox.min.z, boundingBox.max.z, &largestNearIntersection, &smallestFarIntersection, &earlyFalseReturn);
    if (smallestFarIntersection < largestNearIntersection || earlyFalseReturn) { return false; }

    return true;
}
2) Helper function (probably not needed here, as it relates only to the AABB test and works fine)
void Ray::calculateIntersectionDistances(const glm::vec3& axis,
                                         const glm::vec3& delta,
                                         float minPointOnAxis,
                                         float maxPointOnAxis,
                                         float *largestNearIntersection,
                                         float *smallestFarIntersection,
                                         bool *earlyFalseReturn)
{
    float dividend = glm::dot(axis, delta);
    float denominator = glm::dot(mDirection, axis);

    if (fabs(denominator) > 0.001f) {
        float t1 = (dividend + minPointOnAxis) / denominator;
        float t2 = (dividend + maxPointOnAxis) / denominator;
        if (t1 > t2) { std::swap(t1, t2); }
        *smallestFarIntersection = std::min(t2, *smallestFarIntersection);
        *largestNearIntersection = std::max(t1, *largestNearIntersection);
    } else if (-dividend + minPointOnAxis > 0.0 || -dividend + maxPointOnAxis < 0.0) {
        *earlyFalseReturn = true;
    }
}
As it turned out, the ray's world -> model transformation was correct. The bug was in the intersection test. I had to completely replace the intersection code, because I wasn't able to identify the bug in the old code, unfortunately.
Ray transformation code:
glm::mat4 inverse = glm::inverse(transform.modelMatrix());
glm::vec4 start = inverse * glm::vec4(mOrigin, 1.0);
glm::vec4 direction = inverse * glm::vec4(mDirection, 0.0);
direction = glm::normalize(direction);
And the Ray - AABB test was stolen from here
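Since the linked ray - AABB test isn't reproduced above, here is a minimal slab-method sketch of the kind of local-space test it performs (a hypothetical free function using glm; not the exact code from the link):
#include <algorithm>
#include <cmath>
#include <limits>
#include <utility>
#include <glm/glm.hpp>

// Slab test: 'origin' and 'dir' are already in the box's local space, 'dir' normalized.
bool intersectsAABB(const glm::vec3& origin, const glm::vec3& dir,
                    const glm::vec3& boxMin, const glm::vec3& boxMax)
{
    float tNear = -std::numeric_limits<float>::max();
    float tFar  =  std::numeric_limits<float>::max();
    for (int i = 0; i < 3; ++i) {
        if (std::fabs(dir[i]) < 1e-6f) {
            // Ray parallel to this slab: miss if the origin lies outside it.
            if (origin[i] < boxMin[i] || origin[i] > boxMax[i]) return false;
        } else {
            float t1 = (boxMin[i] - origin[i]) / dir[i];
            float t2 = (boxMax[i] - origin[i]) / dir[i];
            if (t1 > t2) std::swap(t1, t2);
            tNear = std::max(tNear, t1);
            tFar  = std::min(tFar, t2);
            if (tNear > tFar || tFar < 0.0f) return false;
        }
    }
    return true; // closest hit is at max(tNear, 0)
}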

Weird Raytracing Artifacts

I am trying to create a ray tracer using Qt, but I have some really weird artifacts going on.
Before I implemented shading, I just had 4 spheres, 3 triangles and 2 bounded planes in my scene. They all showed up as expected and in the expected colors; however, on my planes I would see dots the same color as the background. These dots stayed static relative to my view position, so if I moved the camera around the dots moved around as well. They only affected the planes and triangles and never appeared on the spheres.
Once I implemented shading the issue got worse. The dots now also appear on spheres facing the light source, i.e. on any part affected by the diffuse term.
Also, my one plane of pure blue (RGB 0,0,255) has gone straight black. Since I have two planes I switched their colors and again the blue one went black, so it's a color issue and not a plane issue.
If anyone has any suggestions as to what the problem could be or wants to see any particular code let me know.
#include "plane.h"
#include "intersection.h"
#include <math.h>
#include <iostream>
Plane::Plane(QVector3D bottomLeftVertex, QVector3D topRightVertex, QVector3D normal, QVector3D point, Material *material)
{
    minCoords_.setX(qMin(bottomLeftVertex.x(), topRightVertex.x()));
    minCoords_.setY(qMin(bottomLeftVertex.y(), topRightVertex.y()));
    minCoords_.setZ(qMin(bottomLeftVertex.z(), topRightVertex.z()));
    maxCoords_.setX(qMax(bottomLeftVertex.x(), topRightVertex.x()));
    maxCoords_.setY(qMax(bottomLeftVertex.y(), topRightVertex.y()));
    maxCoords_.setZ(qMax(bottomLeftVertex.z(), topRightVertex.z()));
    normal_ = normal;
    normal_.normalize();
    point_ = point;
    material_ = material;
}

Plane::~Plane()
{
}

void Plane::intersect(QVector3D rayOrigin, QVector3D rayDirection, Intersection* result)
{
    if (normal_ == QVector3D(0, 0, 0)) // plane is degenerate
    {
        cout << "degenerate plane" << endl;
        return;
    }

    float minT;
    // t = -Normal*(Origin-Point) / Normal*direction
    float numerator = (-1) * QVector3D::dotProduct(normal_, (rayOrigin - point_));
    float denominator = QVector3D::dotProduct(normal_, rayDirection);
    if (fabs(denominator) < 0.0000001) // plane orthogonal to view
    {
        return;
    }

    minT = numerator / denominator;
    if (minT < 0.0)
    {
        return;
    }

    QVector3D intersectPoint = rayOrigin + (rayDirection * minT);

    // check inside plane dimensions
    if (intersectPoint.x() < minCoords_.x() || intersectPoint.x() > maxCoords_.x() ||
        intersectPoint.y() < minCoords_.y() || intersectPoint.y() > maxCoords_.y() ||
        intersectPoint.z() < minCoords_.z() || intersectPoint.z() > maxCoords_.z())
    {
        return;
    }

    // only update if closest object
    if (result->distance_ > minT)
    {
        result->hit_ = true;
        result->intersectPoint_ = intersectPoint;
        result->normalAtIntersect_ = normal_;
        result->distance_ = minT;
        result->material_ = material_;
    }
}
QVector3D MainWindow::traceRay(QVector3D rayOrigin, QVector3D rayDirection, int depth)
{
    if (depth > maxDepth)
    {
        return backgroundColour;
    }

    Intersection* rayResult = new Intersection();
    foreach (Shape* shape, shapeList)
    {
        shape->intersect(rayOrigin, rayDirection, rayResult);
    }

    if (rayResult->hit_ == false)
    {
        return backgroundColour;
    }
    else
    {
        QVector3D intensity = QVector3D(0, 0, 0);
        QVector3D shadowRay = pointLight - rayResult->intersectPoint_;
        shadowRay.normalize();

        Intersection* shadowResult = new Intersection();
        foreach (Shape* shape, shapeList)
        {
            shape->intersect(rayResult->intersectPoint_, shadowRay, shadowResult);
        }

        if (shadowResult->hit_ == true)
        {
            intensity += shadowResult->material_->diffuse_ * intensityAmbient;
        }
        else
        {
            intensity += rayResult->material_->ambient_ * intensityAmbient;
            // Diffuse
            intensity += rayResult->material_->diffuse_ * intensityLight * qMax(QVector3D::dotProduct(rayResult->normalAtIntersect_, shadowRay), 0.0f);
            // Specular
            QVector3D R = ((2 * (QVector3D::dotProduct(rayResult->normalAtIntersect_, shadowRay)) * rayResult->normalAtIntersect_) - shadowRay);
            R.normalize();
            QVector3D V = rayOrigin - rayResult->intersectPoint_;
            V.normalize();
            intensity += rayResult->material_->specular_ * intensityLight * pow(qMax(QVector3D::dotProduct(R, V), 0.0f), rayResult->material_->specularExponent_);
        }
        return intensity;
    }
}
So I figured out my issue: it comes down to floating-point precision. Any check against 0.0 would intermittently fail because of float precision, so I had to add an epsilon to all of my checks, e.g. testing for < 0.001 instead of < 0.0.
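Concretely, a minimal sketch of that epsilon guard (the exact value and placement are assumptions, not the poster's verbatim fix):
const float EPSILON = 0.001f;

// In Plane::intersect (and the other shapes), instead of rejecting only t < 0:
if (minT < EPSILON) // treat hits at (or extremely near) the ray origin as misses
{
    return;
}

// Equivalently, the shadow-ray origin can be nudged off the surface before tracing:
QVector3D shadowOrigin = rayResult->intersectPoint_ + rayResult->normalAtIntersect_ * EPSILON;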

How to fix weird camera rotation while moving the camera with SDL and OpenGL in C++

I have a camera object that I put together from reading on the net. It handles moving forward and backward, strafing left and right, and even looking around with the mouse. But when I move in any direction and try to look around at the same time, the view jumps all over the place; when I don't move and just look around, it's fine.
I'm hoping someone can help me work out why I can't move and look around at the same time.
main.h
#include "SDL/SDL.h"
#include "SDL/SDL_opengl.h"
#include <cmath>
#define CAMERASPEED 0.03f // The Camera Speed
struct tVector3 // Extended 3D Vector Struct
{
tVector3() {} // Struct Constructor
tVector3 (float new_x, float new_y, float new_z) // Init Constructor
{ x = new_x; y = new_y; z = new_z; }
// overload + operator
tVector3 operator+(tVector3 vVector) {return tVector3(vVector.x+x, vVector.y+y, vVector.z+z);}
// overload - operator
tVector3 operator-(tVector3 vVector) {return tVector3(x-vVector.x, y-vVector.y, z-vVector.z);}
// overload * operator
tVector3 operator*(float number) {return tVector3(x*number, y*number, z*number);}
// overload / operator
tVector3 operator/(float number) {return tVector3(x/number, y/number, z/number);}
float x, y, z; // 3D vector coordinates
};
class CCamera
{
public:
tVector3 mPos;
tVector3 mView;
tVector3 mUp;
void Strafe_Camera(float speed);
void Move_Camera(float speed);
void Rotate_View(float speed);
void Position_Camera(float pos_x, float pos_y,float pos_z,
float view_x, float view_y, float view_z,
float up_x, float up_y, float up_z);
};
void Draw_Grid();
camera.cpp
#include "main.h"
void CCamera::Position_Camera(float pos_x, float pos_y, float pos_z,
float view_x, float view_y, float view_z,
float up_x, float up_y, float up_z)
{
mPos = tVector3(pos_x, pos_y, pos_z);
mView = tVector3(view_x, view_y, view_z);
mUp = tVector3(up_x, up_y, up_z);
}
void CCamera::Move_Camera(float speed)
{
tVector3 vVector = mView - mPos;
mPos.x = mPos.x + vVector.x * speed;
mPos.z = mPos.z + vVector.z * speed;
mView.x = mView.x + vVector.x * speed;
mView.z = mView.z + vVector.z * speed;
}
void CCamera::Strafe_Camera(float speed)
{
tVector3 vVector = mView - mPos;
tVector3 vOrthoVector;
vOrthoVector.x = -vVector.z;
vOrthoVector.z = vVector.x;
mPos.x = mPos.x + vOrthoVector.x * speed;
mPos.z = mPos.z + vOrthoVector.z * speed;
mView.x = mView.x + vOrthoVector.x * speed;
mView.z = mView.z + vOrthoVector.z * speed;
}
void CCamera::Rotate_View(float speed)
{
tVector3 vVector = mView - mPos;
tVector3 vOrthoVector;
vOrthoVector.x = -vVector.z;
vOrthoVector.z = vVector.x;
mView.z = (float)(mPos.z + sin(speed)*vVector.x + cos(speed)*vVector.z);
mView.x = (float)(mPos.x + cos(speed)*vVector.x - sin(speed)*vVector.z);
}
and the mousemotion code
void processEvents()
{
    int mid_x = screen_width  >> 1;
    int mid_y = screen_height >> 1;
    int mpx = event.motion.x;
    int mpy = event.motion.y;
    float angle_y = 0.0f;
    float angle_z = 0.0f;

    while (SDL_PollEvent(&event))
    {
        switch (event.type)
        {
        case SDL_MOUSEMOTION:
            if ((mpx == mid_x) && (mpy == mid_y)) return;
            // Get the direction from the mouse cursor, set a reasonable maneuvering speed
            angle_y = (float)((mid_x - mpx)) / 1000; // 1000
            angle_z = (float)((mid_y - mpy)) / 1000; // 1000
            // The higher the value is the faster the camera looks around.
            objCamera.mView.y += angle_z * 2;
            // limit the rotation around the x-axis
            if ((objCamera.mView.y - objCamera.mPos.y) > 8)  objCamera.mView.y = objCamera.mPos.y + 8;
            if ((objCamera.mView.y - objCamera.mPos.y) < -8) objCamera.mView.y = objCamera.mPos.y - 8;
            objCamera.Rotate_View(-angle_y);
            SDL_WarpMouse(mid_x, mid_y);
            break;
        case SDL_KEYUP:
            objKeyb.handleKeyboardEvent(event, true);
            break;
        case SDL_KEYDOWN:
            objKeyb.handleKeyboardEvent(event, false);
            break;
        case SDL_QUIT:
            quit = true;
            break;
        case SDL_VIDEORESIZE:
            screen = SDL_SetVideoMode(event.resize.w, event.resize.h, screen_bpp, SDL_OPENGL | SDL_HWSURFACE | SDL_RESIZABLE | SDL_GL_DOUBLEBUFFER | SDL_HWPALETTE);
            screen_width  = event.resize.w;
            screen_height = event.resize.h;
            init_opengl();
            std::cout << "Resized to width: " << event.resize.w << " height: " << event.resize.h << std::endl;
            break;
        default:
            break;
        }
    }
}
I'm not entirely sure what you are doing above.
Personally I would just use a simple 4x4 matrix. Any implementation will do. To rotate, you simply need to use the change in mouse x and y as Euler inputs for rotation around the y and x axes. There is lots of code available all over the internet that will do this for you.
Some matrix libraries won't provide you with a "MoveForward()" function. If that's the case, it's OK: moving forward is pretty easy. The third column (or row, if you are using row-major matrices) is your forward vector. Extract it. Normalise it (it really should be normalised already, so this step may not be needed). Multiply it by how much you wish to move forward and then add it to the position (the 4th column/row).
Now here is the odd part. A view matrix is a special type of matrix. The matrix above defines the camera's placement in the world, so if you multiply your current model matrix by it you will not get the answer you expect, because you want to transform everything such that the camera sits at the origin. You effectively need to undo the camera transformation to re-orient things into the view defined above. To do this you multiply your model matrix by the inverse of the camera matrix (that inverse is the view matrix).
You now have an object defined in the correct view space.
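To make those two steps concrete, here is a small sketch with glm (column-major, so the forward basis vector is column 2 and the translation is column 3; glm is an assumption, not what the class below uses):
#include <glm/glm.hpp>

// cameraMat holds the camera's orientation and position in world space.
void moveForward(glm::mat4& cameraMat, float step)
{
    glm::vec3 forward = glm::normalize(glm::vec3(cameraMat[2])); // local z basis vector
    cameraMat[3] += glm::vec4(forward * step, 0.0f);             // translate along it
}

// The view matrix is the inverse of the camera matrix.
glm::mat4 viewMatrix(const glm::mat4& cameraMat)
{
    return glm::inverse(cameraMat);
}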
This is my very simple camera class. It doesn't handle the functionality you describe, but hopefully it will give you a few ideas on how to set up the class (be warned: I use row-major, i.e. DirectX-style, matrices).
BaseCamera.h:
#ifndef BASE_CAMERA_H_
#define BASE_CAMERA_H_
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
#include "Maths/Vector4.h"
#include "Maths/Matrix4x4.h"
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
class BaseCamera
{
protected:
bool mDirty;
MathsLib::Matrix4x4 mCameraMat;
MathsLib::Matrix4x4 mViewMat;
public:
BaseCamera();
BaseCamera( const BaseCamera& camera );
BaseCamera( const MathsLib::Vector4& vPos, const MathsLib::Vector4& vLookAt );
BaseCamera( const MathsLib::Matrix4x4& matCamera );
bool IsDirty() const;
void SetDirty();
MathsLib::Matrix4x4& GetOrientationMatrix();
const MathsLib::Matrix4x4& GetOrientationMatrix() const;
MathsLib::Matrix4x4& GetViewMatrix();
};
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
inline MathsLib::Matrix4x4& BaseCamera::GetOrientationMatrix()
{
return mCameraMat;
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
inline const MathsLib::Matrix4x4& BaseCamera::GetOrientationMatrix() const
{
return mCameraMat;
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
inline bool BaseCamera::IsDirty() const
{
return mDirty;
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
inline void BaseCamera::SetDirty()
{
mDirty = true;
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
#endif
BaseCamera.cpp:
#include "Render/stdafx.h"
#include "BaseCamera.h"
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
BaseCamera::BaseCamera() :
mDirty( true )
{
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
BaseCamera::BaseCamera( const BaseCamera& camera ) :
mDirty( camera.mDirty ),
mCameraMat( camera.mCameraMat ),
mViewMat( camera.mViewMat )
{
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
BaseCamera::BaseCamera( const MathsLib::Vector4& vPos, const MathsLib::Vector4& vLookAt ) :
mDirty( true )
{
MathsLib::Vector4 vDir = (vLookAt - vPos).Normalise();
MathsLib::Vector4 vLat = MathsLib::CrossProduct( MathsLib::Vector4( 0.0f, 1.0f, 0.0f ), vDir ).Normalise();
MathsLib::Vector4 vUp = MathsLib::CrossProduct( vDir, vLat );//.Normalise();
mCameraMat.Set( vLat, vUp, vDir, vPos );
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
BaseCamera::BaseCamera( const MathsLib::Matrix4x4& matCamera ) :
mDirty( true ),
mCameraMat( matCamera )
{
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
MathsLib::Matrix4x4& BaseCamera::GetViewMatrix()
{
if ( IsDirty() )
{
mViewMat = mCameraMat.Inverse();
mDirty = false;
}
return mViewMat;
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
I agree with Goz. You need to use homogeneous 4x4 matrices if you want to represent affine transformations such as rotate + translate.
Assuming a row-major representation, and no scaling or shearing, your 4x4 matrix represents the following:
Rows 0 to 2: the three basis vectors of your local co-ordinate system (i.e. x, y, z)
Row 3: the current translation from the origin
So to move along your local x vector, as Goz says: because you can assume it's a unit vector if there is no scale/shear, you just multiply it by the move step (+ve or -ve) and then add the resultant vector onto row 3 (the translation row) of the matrix.
So, taking a simple example of starting at the origin with your local frame aligned to the world frame, your matrix would look something like this:
1 0 0 0 <--- x unit vector
0 1 0 0 <--- y unit vector
0 0 1 0 <--- z unit vector
0 0 0 1 <--- translation vector
In terms of how most game cameras work, the axes map like this:
x axis <=> Camera Pan Left/Right
y axis <=> Camera Pan Up/Down
z axis <=> Camera Zoom In/Out
So if I rotate my entire frame of reference to, say, look at a new point LookAt, then, as Goz does in his BaseCamera overloaded constructor, you construct a new local co-ordinate system and set it into your matrix (all mCameraMat.Set( vLat, vUp, vDir, vPos ) typically does is set those four rows of the matrix, i.e. vLat would be row 0, vUp row 1, vDir row 2 and vPos row 3).
Then zooming in/out (moving along your local z axis) just becomes row 3 += row 2 * stepval.
Again, as Goz rightly points out, you then need to transform everything back into view space for rendering, and that is done by multiplying by the inverse of this camera matrix.
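As a tiny row-major sketch of that move/zoom update (plain arrays and a hypothetical helper, just to make the row arithmetic concrete):
// Row-major 4x4: rows 0-2 are the x/y/z basis vectors, row 3 is the translation.
struct Mat4 { float m[4][4]; };

// Move the frame along its local z axis (row 2) by 'step' units: row 3 += row 2 * step.
void moveAlongLocalZ(Mat4& mat, float step)
{
    for (int i = 0; i < 3; ++i)
        mat.m[3][i] += mat.m[2][i] * step;
}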