I'm using Direct3D 11 to write an application, and my camera's target vector is determined by the variables xdelta, ydelta, and zdelta.
I need to pan my view in X and Y as I move the mouse across the screen with the RMB and LMB pressed.
I figured that I need to add the mouse-movement delta in VIEW space, so that the pan is in X and Y relative to my view, not world X and Y.
However, being new to this, I'm not sure how to convert my VIEW coordinates back to WORLD coordinates.
I hope that I am following all of the formatting rules, as I don't post here enough to remember all of them exactly.
Any help would be appreciated. Below is my snippet of code. Perhaps there is also a better way.
if ( m_Input->RMBPressed() == true && m_Input->LMBPressed() == true ) // Pan View
{
    if ( curX != mouseX || curY != mouseY )
    {
        // Get target coordinates in view space (third row of the view matrix)
        D3DXVECTOR3 targetView( mView(2,0), mView(2,1), mView(2,2) );
        // Add the mouse XY delta vector to the view-space target
        D3DXVECTOR3 delta( (float)curX - (float)mouseX,
                           (float)curY - (float)mouseY, zdelta );
        targetView += delta;
        // Convert back to world coordinates (this is the missing step)
    }
}
I'm trying a different approach, which I believe is the correct one, but it still doesn't appear to be working correctly.
I get the delta in X and Y from the screen as my mouse moves and store it in the variables "xdelta" and "ydelta".
I then create a transformation matrix
D3DXMATRIX mTrans;
I then populate the values in the matrix
// Rotation
mTrans(0,0) = delta.x;
mTrans(1,1) = delta.y;
// Translation
mTrans(0,3) = delta.x;
mTrans(1,3) = delta.y;
Now, to get the corresponding view coordinates, I think I should multiply it by the view matrix, which I call mView.
mTrans = mTrans * mView;
Now, adding these transformed values to my current target X and Y, which is stored in the variables "target_x" and "target_y", should move my target vector relative to my view's X and Y axes (i.e., orthogonal to my current view):
target_x += mTrans(0,3);
target_y += mTrans(1,3);
But it doesn't. It moves my target along the world X and Y axes, not my view X and Y axes.
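For reference, the usual way to map a direction given in view space back into world space with D3DX is to transform it by the inverse of the view matrix. The sketch below is illustrative only, not the original poster's code; it reuses the variable names from the snippets above and assumes the row-vector D3DX conventions:
// A direction expressed in view space maps back to world space through the
// inverse of the view matrix. D3DXVec3TransformNormal applies only the 3x3
// part of the matrix, which is what a direction (as opposed to a point) needs.
D3DXMATRIX mViewInv;
D3DXMatrixInverse(&mViewInv, NULL, &mView);

D3DXVECTOR3 deltaView((float)(curX - mouseX), (float)(mouseY - curY), 0.0f);
D3DXVECTOR3 deltaWorld;
D3DXVec3TransformNormal(&deltaWorld, &deltaView, &mViewInv);

// Note: a view-space XY pan generally has a world Z component as well.
target_x += deltaWorld.x;
target_y += deltaWorld.y;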
MORE SNIPPETS
I do use the D3DXMatrixLookAtLH function, but I'm trying to calculate the change in the target location based on my mouse movements, to get a new target to feed into that function. I added some code snippets:
if ( m_Input->RMBPressed() == true && m_Input->LMBPressed() == true ) // Pan View
{
    if ( curX != mouseX || curY != mouseY )
    {
        // Get mouse XY delta vector
        D3DXVECTOR3 delta( (float)curX - (float)mouseX, (float)curY - (float)mouseY, zdelta );
        // Create transformation matrix, initialized to identity so the
        // untouched elements are well defined
        D3DXMATRIX mTemp;
        D3DXMatrixIdentity( &mTemp );
        // Rotation
        mTemp(0,0) = delta.x;
        mTemp(1,1) = delta.y;
        mTemp(2,2) = delta.z;
        // Translation
        mTemp(0,3) = delta.x;
        mTemp(1,3) = delta.y;
        mTemp(2,3) = delta.z;
        // Convert to view coordinates
        mTemp = mTemp * mView;
        xdelta += mTemp(0,3);
        ydelta += mTemp(1,3);
        // Convert spherical to Cartesian coordinates: mPhi measured from +y
        // and mTheta measured counterclockwise from z.
        float height = mRadius * cosf(mPhi);
        float sideRadius = mRadius * sinf(mPhi);
        float x = xdelta + sideRadius * cosf(mTheta);
        float z = zdelta + sideRadius * sinf(mTheta);
        float y = ydelta + height;
        // Build the view matrix.
        D3DXVECTOR3 pos(x, y, z);
        D3DXVECTOR3 target(xdelta, ydelta, zdelta);
        D3DXVECTOR3 up(0.0f, 1.0f, 0.0f); // Y axis is up. Looking down Z.
        D3DXMatrixLookAtLH(&mView, &pos, &target, &up);
First of all, thank you for all of your information. It is helping me slowly understand DirectX matrices.
Am I correct in the logic below?
Assume my mouse change is 5.0 in X and 5.0 in Y on the screen; that is my delta. Z would always be 0.0.
I could find the view coordinates as follows.
D3DXVECTOR3 up(0.0f, 1.0f, 0.0f); // Y Axis is up. Looking down Z.
D3DXVECTOR3 delta ( 5.0f, 5.0f, 0.0f );
D3DXVECTOR3 origin ( 0.0f, 0.0f, 0.0f );
D3DXMATRIX mTemp;
D3DXMatrixIdentity (&mTemp);
D3DXMatrixLookAtLH (&mTemp, &delta, &origin, &up );
I should now have the view coordinates of the XY delta stored in the mTemp view matrix?
If so, is the best way to proceed to add the XY delta view coordinates to the mView XY coordinates and then translate back to world coordinates, to get the actual world XY values I have to set the target to?
I'm at a loss as to how to achieve this. It's really not all that clear to me, and all the books that I purchased on the subject are not clear either.
You can calculate world coordinates from local coordinates by multiplying your local coords by the world matrix.
If you want your camera to move, then just define its current position using a 3D vector, and then use D3DXMatrixLookAtLH to calculate the view matrix from your current position.
Check out this tutorial for more details: http://www.braynzarsoft.net/index.php?p=D3D11WVP
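To tie that back to the pan question: below is a minimal sketch of the usual view-space pan, assuming hypothetical eye/target vectors and a panSpeed tuning factor that are not in the original code. The camera's right and up axes sit in the first two columns of a D3DXMatrixLookAtLH-style view matrix, and moving both the eye and the target along them pans the view without changing the viewing direction.
// D3DXMATRIX indexing is (row, col); column 0 holds the camera's right
// axis and column 1 its up axis, both in world space.
D3DXVECTOR3 right( mView(0,0), mView(1,0), mView(2,0) );
D3DXVECTOR3 camUp( mView(0,1), mView(1,1), mView(2,1) );

// panSpeed is a hypothetical tuning factor; screen Y grows downward,
// hence the flipped sign on dy.
float dx = (float)(curX - mouseX) * panSpeed;
float dy = (float)(mouseY - curY) * panSpeed;

// Shift eye and target together so the viewing direction is unchanged.
D3DXVECTOR3 pan = right * dx + camUp * dy;
eye += pan;
target += pan;

D3DXVECTOR3 worldUp(0.0f, 1.0f, 0.0f);
D3DXMatrixLookAtLH(&mView, &eye, &target, &worldUp);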
I am creating a 2D C++ game engine from scratch, minus making calls to the OS directly. For that I am using SFML, but essentially only to draw to the screen and collect input, so I am not looking for help with anything related to SFML.
Right now, I can pan the camera up and down, and the sprites translate from world coordinates to screen coordinates correctly. The sprites also, for the most part, translate correctly relative to camera rotation. The problem comes in when a rendered game object has a higher y world coordinate than the camera: when this happens, the sprite appears to be reflected across the x-axis.
I will note that this does not happen if I comment out the rotation code shown below.
//shape's position has been set relative to camera position first
Vector2 screenCenter(windowWidth / 2, windowHeight / 2);
Vector2 shapePosition = shape->GetPosition ();
//create vector from center screen to shape position
Vector2 relativeVector = shapePosition - screenCenter;
float distance = relativeVector.Magnitude ();
if ( distance == 0 ) { return; }
float angle = Vector2::AngleInRads ( Vector2 ( 1, 0 ), relativeVector );
//rotation of camera in radians
float targetRotation = camera.GetFollowTarget ()->GetTransform ().GetRotation() * (M_PI / 180);
//combine rotation of camera and relative vector
float adjustedRotation = angle + targetRotation;
//convert rotation into a unit vector
Vector2 newPos ( cos ( adjustedRotation ), sin ( adjustedRotation ) );
//extend unit vector to be the same distance away from the camera
newPos *= distance;
//return vector origin to screen center
newPos += screenCenter;
shape->SetPosition ( newPos );
Below are some screenshots.
You can consider the blue square as my camera's focus.
The purple circle is at world coordinates (0,0).
The arrow points up in world space and, as you can see, is rendered incorrectly while the camera is below it.
It would be hard to show the rotation in action so you'll have to take my word for it. The rotation works as intended at any position of the camera aside from what I've described.
View while camera is at origin
View while camera is at the mid point of arrow
View while camera is above arrow
Please let me know if there's anything else I can provide that would be helpful.
It dawned on me to google this problem as a math problem rather than a programming problem.
The solution to my problem can be found here: Solution. (Presumably the earlier version failed because the angle computed between (1,0) and the relative vector is unsigned, so the sign of the rotation was lost for points above the camera; rotating the point directly, as below, avoids this.)
Vector2 shapePos = shape->GetPosition ();
//subtract screen center from point
shapePos.x -= windowWidth/2;
shapePos.y -= windowHeight/2;
float angleInRadians = camera.GetFollowTarget ()->GetTransform ().GetRotation () * ( M_PI / 180 );
//Get coordinates after rotation
float x = ( shapePos.x * cos ( angleInRadians ) ) - ( shapePos.y * sin ( angleInRadians ) );
float y = ( shapePos.x * sin ( angleInRadians ) ) + ( shapePos.y * cos ( angleInRadians ) );
//add screen center back to point
Vector2 newPos ( x + windowWidth/2, y + windowHeight/2 );
shape->SetPosition ( newPos );
I'm trying to implement a picking ray via instructions from this website.
Right now I basically only want to be able to click on the ground to order my little figure to walk towards this point.
Since my ground plane is flat, non-rotated and non-translated, I have to find the x and z coordinates of my picking ray where y hits 0.
So far so good, this is what I've come up with:
//some constants
float HEIGHT = 768.f;
float LENGTH = 1024.f;
float fovy = 45.f;
float nearClip = 0.1f;
//mouse position on screen
float x = MouseX;
float y = HEIGHT - MouseY;
//GetView() returns the viewing direction, not the lookAt point.
glm::vec3 view = cam->GetView();
view = glm::normalize(view); // glm::normalize returns the result; it does not modify its argument
glm::vec3 h = glm::cross(view, glm::vec3(0,1,0) ); //cameraUp
h = glm::normalize(h);
glm::vec3 v = glm::cross(h, view);
v = glm::normalize(v);
// convert fovy to radians
float rad = fovy * 3.14159265f / 180.f;
float vLength = tan(rad/2) * nearClip; //nearClippingPlaneDistance
float hLength = vLength * (LENGTH/HEIGHT);
v *= vLength;
h *= hLength;
// translate mouse coordinates so that the origin lies in the center
// of the view port
x -= LENGTH / 2.f;
y -= HEIGHT / 2.f;
// scale mouse coordinates so that half the view port width and height
// becomes 1
x /= (LENGTH/2.f);
y /= (HEIGHT/2.f);
glm::vec3 cameraPos = cam->GetPosition();
// linear combination to compute intersection of picking ray with
// view port plane
glm::vec3 pos = cameraPos + (view*nearClip) + (h*x) + (v*y);
// compute direction of picking ray by subtracting intersection point
// with camera position
glm::vec3 dir = pos - cameraPos;
//Get intersection between ray and the ground plane
pos -= (dir * (pos.y/dir.y));
At this point I'd expect pos to be the point where my picking ray hits my ground plane.
When I try it, however, I get something like this:
(The mouse cursor wasn't recorded)
It's hard to see since the ground has no texture, but the camera is tilted, like in most RTS games.
My pitiful attempt to model a remotely human-looking being in Blender marks the point where the intersection happened according to my calculation.
So it seems that the transformation between view and dir got messed up somewhere, and my ray ended up pointing in the wrong direction.
The gap between the calculated position and the actual position increases the farther I move my mouse away from the center of the screen.
I've found out that:
HEIGHT and LENGTH aren't accurate. Since Windows reserves a few pixels for borders, it would be more accurate to use 1006x728 as the window resolution. I guess that could account for small discrepancies.
If I increase fovy from 45 to about 78 I get a fairly accurate ray. So maybe there's something wrong with what I use as fovy. I'm explicitly calling glm::perspective(45.f, 1.38f, 0.1f, 500.f) (fovy, aspect ratio, fNear, fFar respectively).
So here's where I am lost. What do I have to do in order to get an accurate ray?
PS: I know that there are functions and libraries that have this implemented, but I try to stay away from these things for learning purposes.
Here's working code that does cursor to 3D conversion using depth buffer info:
glGetIntegerv(GL_VIEWPORT, @fViewport);
glGetDoublev(GL_PROJECTION_MATRIX, @fProjection);
glGetDoublev(GL_MODELVIEW_MATRIX, @fModelview);
//fViewport already contains viewport offsets
PosX := X;
PosY := ScreenY - Y; //In OpenGL the Y axis is inverted and starts from the bottom
glReadPixels(PosX, PosY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, @vz);
gluUnProject(PosX, PosY, vz, fModelview, fProjection, fViewport, @wx, @wy, @wz);
XYZ.X := wx;
XYZ.Y := wy;
XYZ.Z := wz;
If you only need the ray/plane intersection test, this is the second part, without the depth buffer:
gluUnProject(PosX, PosY, 0, fModelview, fProjection, fViewport, @x1, @y1, @z1); //Near
gluUnProject(PosX, PosY, 1, fModelview, fProjection, fViewport, @x2, @y2, @z2); //Far
//No intersection
Result := False;
XYZ.X := 0;
XYZ.Y := 0;
XYZ.Z := aZ;
if z2 < z1 then
  SwapFloat(z1, z2);
if (z1 <> z2) and InRange(aZ, z1, z2) then
begin
  D := 1 - (aZ - z1) / (z2 - z1);
  XYZ.X := Lerp(x1, x2, D);
  XYZ.Y := Lerp(y1, y2, D);
  Result := True;
end;
I find it rather different from what you are doing, but maybe that will make more sense.
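For readers working in C++ with glm, as the question itself does, here is a rough, untested translation of the same near/far unproject idea; the function and parameter names are illustrative, not from either post:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::unProject

// Build a ray from the near and far planes, then intersect it with the
// ground plane y == 0.
glm::vec3 groundIntersection(float mouseX, float mouseY,
                             float winW, float winH,
                             const glm::mat4& view, const glm::mat4& proj)
{
    glm::vec4 viewport(0.0f, 0.0f, winW, winH);
    // OpenGL window coordinates start at the bottom-left corner.
    glm::vec3 winNear(mouseX, winH - mouseY, 0.0f);
    glm::vec3 winFar (mouseX, winH - mouseY, 1.0f);

    glm::vec3 p0 = glm::unProject(winNear, view, proj, viewport);
    glm::vec3 p1 = glm::unProject(winFar,  view, proj, viewport);

    glm::vec3 dir = p1 - p0;
    float t = -p0.y / dir.y; // parameter where the ray reaches y == 0
    return p0 + t * dir;     // the caller should check dir.y != 0 first
}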
I'm having a little trouble trying to compare rotated 2D quad coordinates to rotated x and y coordinates. I'm trying to determine whether the mouse was clicked inside the quad.
1) The rots are objects of this class (note: the << operator is overloaded for use by the rotate-coords function):
class Vector{
private:
std::vector <float> Vertices;
public:
Vector(float, float);
float GetVertice(unsigned int);
void SetVertice(unsigned int, float);
std::vector<float> operator <<(double);
};
Vector::Vector(float X,float Y){
Vertices.push_back(X);
Vertices.push_back(Y);
}
float Vector::GetVertice(unsigned int Index){
return Vertices.at(Index);
}
void Vector::SetVertice(unsigned int Index,float NewVertice){
Vertices.at(Index) = NewVertice;
}
//Return rotated coords:D
std::vector <float> Vector::operator <<(double Angle){
std::vector<float> Temp;
Temp.push_back(Vertices.at(0) * cos(Angle) - Vertices.at(1) * sin(Angle));
Temp.push_back(Vertices.at(0) * sin(Angle) + Vertices.at(1) * cos(Angle));
return Temp;
}
2) Comparison and rotation of the coordinates (THE NEW VERSION)
Vector Rot1(x,y),Rot3(x,y);
double Angle;
std::vector <float> result1,result3;
Rot3.SetVertice(0,NewQuads.at(Index).GetXpos() + NewQuads.at(Index).GetWidth());
Rot3.SetVertice(1,NewQuads.at(Index).GetYpos() + NewQuads.at(Index).GetHeight());
Angle = NewQuads.at(Index).GetRotation();
result1 = Rot1 << Angle; // Rotate the mouse x and y
result3 = Rot3 << Angle; // Rotate the Quad x and y
//.at(0) = x and .at(1)=y
if(result1.at(0) >= result3.at(0) - NewQuads.at(Index).GetWidth() && result1.at(0) <= result3.at(0) ){
if(result1.at(1) >= result3.at(1) - NewQuads.at(Index).GetHeight() && result1.at(1) <= result3.at(1) ){
When I run this it works perfectly at angle 0, but when the quad is rotated, it fails.
And by failing I mean the activation area seems to just disappear.
Am I doing the rotation of the coordinates correctly? Or is it the comparison?
If it's the comparison, how would you do it properly? I have tried changing the ifs, but without any luck...
EDIT
The drawing of the quad (happens before the testing):
void Quad::Render()
{
if(!CheckIfOutOfScreen()){
glPushMatrix();
glLoadIdentity();
glTranslatef(Xpos ,Ypos ,0.f);
glRotatef(Rotation,0.f,0.f,1.f); // same rotation is used for the testing later...
glBegin(GL_QUADS);
glVertex2f(Zwidth,Zheight);
glVertex2f(Width,Zheight);
glVertex2f(Width,Height);
glVertex2f(Zwidth,Height);
glEnd();
if(State != NOT_ACTIVE)
RenderShapeTools();
glPopMatrix();
}
}
Basically, I'm trying to test whether the mouse was clicked inside this quad:
Image
There is more than one way to achieve what you want, but from the image you posted I assume you want to draw to a surface the same size as your screen (or window), using only 2D graphics.
As you know in 3D graphics we talk about 3 coordinate references. The first is the coordinate reference of the object or model to be drawn, the second is the coordinate reference of the camera or view and the third is the coordinate reference of the screen.
In OpenGL the first two coordinate references are established through the MODELVIEW matrix and the third is achieved by the PROJECTION matrix and the viewport transformation.
In your case you want to rotate a quad and place it somewhere on the screen. Your quad has its own model coordinates. Let's assume that for this specific 2D quad the origin is at the center of the quad and it has dimensions of 5 by 5. Also let's assume that if we look at the center of the quad, the X axis points RIGHT, the Y axis points UP, and the Z axis points towards the viewer.
The unrotated coordinates of the quad will be (from bottom left, clockwise): (-2.5,-2.5,0), (-2.5,2.5,0), (2.5,2.5,0), (2.5,-2.5,0)
Now we want camera and projection matrices and a viewport that simulate a 2D surface with known dimensions.
//Assume WinW contains the window width and WinH contains the windows height
glViewport(0,0,WinW,WinH);//Set the viewport to the whole window
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
glOrtho (0, WinW, WinH, 0, 0, 1);//Set the projection matrix to perform a 2D orthogonal projection
glMatrixMode (GL_MODELVIEW);
glLoadIdentity ();//Set the camera matrix to be the Identity matrix
You are now ready to draw your quad on this 2D surface with dimensions WinW by WinH. In this context, if you just draw your quad using its current vertices, it will be drawn with its center at the bottom left of the window and each side measuring 5 pixels, so you will actually see only a quarter of the quad. If you want to rotate and move it you will do something like this:
//Prepare matrices as shown above
//Viewport coordinates range from bottom left (0,0) to top right (WinW,WinH)
float dX = CenterOfQuadInViewportCoordinatesX, dY = CenterOfQuadInViewportCoordinatesY;
float rotA = QuadRotationAngleAroundZAxisInDegrees;
float verticesX[4] = {-2.5,-2.5,2.5,2.5};
float verticesY[4] = {-2.5,2.5,2.5,-2.5};
//Remember that rotate is done first and translation second
glTranslatef(dX,dY,0);//Move the quad to the desired location in the viewport
glRotate(rotA, 0,0,1);//Rotate the quad around it's origin
glBegin(GL_QUADS);
glVertex2f(verticesX[0], verticesY[0]);
glVertex2f(verticesX[1], veriticesY[1]);
glVertex2f(verticesX[2], veriticesY[2]);
glVertex2f(verticesX[3], veriticesY[3]);
glEnd();
Now you want to know whether the click of the mouse was within the rendered quad.
Whereas the viewport coordinates start from the bottom left the window coordinates start from the top left. So when you get the mouse coordinates you have to translate them to viewport coordinates in the following way:
float mouseViewportX = mouseX, mouseViewportY = WinH - mouseY - 1;
Once you have the mouse location in viewport coordinates, you need to transform it to model coordinates as follows (please double-check the calculations, since I generally use my own matrix library for this and don't calculate by hand):
//Translate the mouse location to model coordinates reference
mouseViewportX -= dX, mouseViewportY -= dY;
//Unrotate the mouse location
float invRotARad = -rotA*DEG_TO_RAD;
float sinRA = sin(invRotARad), cosRA = cos(invRotARad);
float mouseInModelX = cosRA*mouseViewportX - sinRA*mouseViewportY;
float mouseInModelY = sinRA*mouseViewportX + cosRA*mouseViewportY;
And now you can finally check if the mouse falls within the quad - as you can see this is done in quad coordinates:
bool mouseInQuad = mouseInModelX > verticesX[0] && mouseInModelX < verticesX[2] &&
                   mouseInModelY > verticesY[0] && mouseInModelY < verticesY[1];
Hope I didn't make too many mistakes and this puts you on the right track. If you want to deal with more complex cases and 3D, then you should have a look at gluUnProject (maybe you will want to implement your own), and for even more complex scenes you may need stencil or depth buffers.
I'm currently struggling with a simple task: given a mouse position on the screen, calculate the new position of the selected object, determined by intersecting the mouse-click ray with the camera-aligned plane that passes through the selected object.
The math involved is not that tricky but still I can't seem to find the error.
QVector3D cameraPosition = Rotation.rotatedVector(translation);
QVector3D cameraDirection = Dir;
QVector3D objectPosition = objectTranslation;
QVector3D up = Rotation.rotatedVector(QVector3D(0,1,0));
QVector3D Right = QVector3D::crossProduct(Dir, up);
As you can see, I'm using Qt to represent my data. First of all, I rotate my translation vector by the camera rotation to obtain cameraPosition; otherwise I wouldn't get the camera position in world coordinates. After that I calculate the Up and Right vectors. To calculate the ray-plane intersection I'm using this as a reference: http://softsurfer.com/Archive/algorithm_0104/algorithm_0104B.htm#Line-Plane Intersection
Afterwards I normalize the screen coordinates:
float screen_x = 2*(float(pos.x())/width()-0.5);
float screen_y = 2*((float(height()-1-pos.y())/height())-0.5);
screen_x*= (1.0f/height())/(1.0f/width());
Finally, the actual computation:
QVector3D P0 = cameraPosition;
QVector3D n = cameraDirection;
QVector3D V0 = objectPosition;
QVector3D u = (screen_x*Right+screen_y*up)*0.5+cameraDirection;
float s = QVector3D::dotProduct(n, V0-P0) / QVector3D::dotProduct(n, u);
objectTranslation = P0+s*u;
I guess the problem lies within the calculation of u, or something beyond me. I get the direction of the camera by evaluating the ModelView transformation matrix and taking out the third row:
GLdouble modelview[16];
glGetDoublev( GL_MODELVIEW_MATRIX, modelview );
QMatrix4x4 mv = QMatrix4x4(modelview);
Dir = QVector3D(mv.row(2).x(), mv.row(2).y(), mv.row(2).z()).normalized();
I'm creating the view matrix for my camera using its current orientation (quaternion) and its current position.
void Camera::updateViewMatrix()
{
view = glm::gtx::quaternion::toMat4(orientation);
// Include rotation (Free Look Camera)
view[3][0] = -glm::dot(glm::vec3(view[0][0], view[0][1], view[0][2]), position);
view[3][1] = -glm::dot(glm::vec3(view[1][0], view[1][1], view[1][2]), position);
view[3][2] = -glm::dot(glm::vec3(view[2][0], view[2][1], view[2][2]), position);
// Ignore rotation (FPS Camera)
//view[3][0] = -position.x;
//view[3][1] = -position.y;
//view[3][2] = -position.z;
view[3][3] = 1.0f;
}
There is a problem with this, in that I do not believe the quaternion-to-matrix calculation is giving the correct answer. Translating the camera works as expected, but rotating it causes incorrect behavior.
I am rotating the camera using the difference between the current mouse position and the centre of the screen (resetting the mouse position each frame):
int xPos;
int yPos;
glfwGetMousePos(&xPos, &yPos);
int centreX = 800 / 2;
int centreY = 600 / 2;
rotate(xPos - centreX, yPos - centreY);
// Reset mouse position for next frame
glfwSetMousePos(800 / 2, 600 / 2);
The rotation takes place in this method
void Camera::rotate(float yawDegrees, float pitchDegrees)
{
// Apply rotation speed to the rotation
yawDegrees *= lookSensitivity;
pitchDegrees *= lookSensitivity;
if (isLookInverted)
{
pitchDegrees = -pitchDegrees;
}
pitchAccum += pitchDegrees;
// Stop the camera from looking any higher than 90 degrees
if (pitchAccum > 90.0f)
{
//pitchDegrees = 90.0f - (pitchAccum - pitchDegrees);
pitchAccum = 90.0f;
}
// Stop the camera from looking any lower than 90 degrees
if (pitchAccum < -90.0f)
{
//pitchDegrees = -90.0f - (pitchAccum - pitchDegrees);
pitchAccum = -90.0f;
}
yawAccum += yawDegrees;
if (yawAccum > 360.0f)
{
yawAccum -= 360.0f;
}
if (yawAccum < -360.0f)
{
yawAccum += 360.0f;
}
float yaw = yawDegrees * DEG2RAD;
float pitch = pitchDegrees * DEG2RAD;
glm::quat rotation;
// Rotate the camera about the world Y axis (if mouse has moved in any x direction)
rotation = glm::gtx::quaternion::angleAxis(yaw, 0.0f, 1.0f, 0.0f);
// Concatenate quaternions
orientation = orientation * rotation;
// Rotate the camera about the world X axis (if mouse has moved in any y direction)
rotation = glm::gtx::quaternion::angleAxis(pitch, 1.0f, 0.0f, 0.0f);
// Concatenate quaternions
orientation = orientation * rotation;
}
Am I concatenating the quaternions correctly for the desired orientation?
There is also a problem with the pitch accumulation, in that it restricts my view to roughly ±5 degrees rather than ±90. What could be the cause of that?
EDIT:
I have solved the problem with the pitch accumulation so that its range is [-90, 90]. It turns out that glm uses degrees, not radians, for angleAxis, and the order of multiplication for the quaternion concatenation was incorrect.
// Rotate the camera about the world Y axis
// N.B. 'angleAxis' method takes angle in degrees (not in radians)
rotation = glm::gtx::quaternion::angleAxis(yawDegrees, 0.0f, 1.0f, 0.0f);
// Concatenate quaternions ('*' operator concatenates)
// C#: Quaternion.Concatenate(ref rotation, ref orientation)
orientation = orientation * rotation;
// Rotate the camera about the world X axis
rotation = glm::gtx::quaternion::angleAxis(pitchDegrees, 1.0f, 0.0f, 0.0f);
// Concatenate quaternions ('*' operator concatenates)
// C#: Quaternion.Concatenate(ref orientation, ref rotation)
orientation = rotation * orientation;
The problem that remains is that the view matrix rotation appears to rotate the drawn object rather than letting the camera look around like a normal FPS camera.
I have uploaded a video to YouTube to demonstrate the problem. I move the mouse around to change the camera's orientation but the triangle appears to rotate instead.
YouTube video demonstrating camera orientation problem
EDIT 2:
void Camera::rotate(float yawDegrees, float pitchDegrees)
{
// Apply rotation speed to the rotation
yawDegrees *= lookSensitivity;
pitchDegrees *= lookSensitivity;
if (isLookInverted)
{
pitchDegrees = -pitchDegrees;
}
pitchAccum += pitchDegrees;
// Stop the camera from looking any higher than 90 degrees
if (pitchAccum > 90.0f)
{
pitchDegrees = 90.0f - (pitchAccum - pitchDegrees);
pitchAccum = 90.0f;
}
// Stop the camera from looking any lower than 90 degrees
else if (pitchAccum < -90.0f)
{
pitchDegrees = -90.0f - (pitchAccum - pitchDegrees);
pitchAccum = -90.0f;
}
// 'pitchAccum' range is [-90, 90]
//printf("pitchAccum %f \n", pitchAccum);
yawAccum += yawDegrees;
if (yawAccum > 360.0f)
{
yawAccum -= 360.0f;
}
else if (yawAccum < -360.0f)
{
yawAccum += 360.0f;
}
orientation =
glm::gtx::quaternion::angleAxis(pitchAccum, 1.0f, 0.0f, 0.0f) *
glm::gtx::quaternion::angleAxis(yawAccum, 0.0f, 1.0f, 0.0f);
}
EDIT3:
The following multiplication order allows the camera to rotate around its own axis but face the wrong direction:
glm::mat4 translation;
translation = glm::translate(translation, position);
view = glm::gtx::quaternion::toMat4(orientation) * translation;
EDIT4:
The following will work (applying the translation matrix based on the position after the rotation):
// Rotation
view = glm::gtx::quaternion::toMat4(orientation);
// Translation
glm::mat4 translation;
translation = glm::translate(translation, -position);
view *= translation;
I can't get the dot product with each orientation axis to work though
// Rotation
view = glm::gtx::quaternion::toMat4(orientation);
glm::vec3 p(
glm::dot(glm::vec3(view[0][0], view[0][1], view[0][2]), position),
glm::dot(glm::vec3(view[1][0], view[1][1], view[1][2]), position),
glm::dot(glm::vec3(view[2][0], view[2][1], view[2][2]), position)
);
// Translation
glm::mat4 translation;
translation = glm::translate(translation, -p);
view *= translation;
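A hedged guess about why that last dot-product version misbehaves: glm matrices are column-major, so view[0] is the first column of the matrix, not the first row. The working view *= translation form effectively stores -R * position in the last column, which corresponds to dotting position against the rows of the rotation part, roughly:
// Row-wise dot products (glm indexing is view[column][row]):
glm::vec3 p(
    glm::dot(glm::vec3(view[0][0], view[1][0], view[2][0]), position), // row 0
    glm::dot(glm::vec3(view[0][1], view[1][1], view[2][1]), position), // row 1
    glm::dot(glm::vec3(view[0][2], view[1][2], view[2][2]), position)  // row 2
);
view[3][0] = -p.x;
view[3][1] = -p.y;
view[3][2] = -p.z;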
In order to give you a definite answer, I think that we would need the code that shows how you're actually supplying the view matrix and vertices to OpenGL. However, the symptom sounds pretty typical of incorrect matrix order.
Consider some variables:
V represents the inverse of the current orientation of the camera (the quaternion).
T represents the translation matrix holding the position of the camera. This should be an identity matrix with negation of the camera's position going down the fourth column (assuming that we're right-multiplying column vectors).
U represents the inverse of the change in orientation.
p represents a vertex in world space.
Note: all of the matrices are inverse matrices because the transformations will be applied to the vertex, not the camera, but the end result is the same.
By default the OpenGL camera is at the origin looking down the negative-z axis. When the view isn't changing (U==I), then the vertex's transformation from world coordinates to camera coordinates should be: p'=TVp. You first orient the camera (by rotating the world in the opposite direction) and then translate the camera into position (by shifting the world in the opposite direction).
Now there are a few places to put U. If we put U to the right of V, then we get the behavior of a first-person view. When you move the mouse up, whatever is currently in view rotates downward around the camera. When you move the mouse right, whatever is in view rotates to the left around the camera.
If we put U between T and V, then the camera turns relative to the world's axes instead of the camera's. This is strange behavior. If V happens to turn the camera off to the side, then moving the mouse up and down will make the world seem to 'roll' instead of 'pitch' or 'yaw'.
If we put U left of T, then the camera rotates around the world's axes around the world's origin. This can be even stranger because it makes the camera fly through world faster the farther the camera is from the origin. However, because the rotation is around the origin, if the camera happens to be looking at the origin, objects there will just appear to be turning around. This is sort of what you're seeing because of the dot-products that you're taking to rotate the camera's position.
You check to make sure that pitchAccum stays within [-90,90], but you've commented out the portion that would make use of that fact. This seems odd to me.
The way that you left-multiply pitch but right-multiply yaw makes it so that your quaternions aren't doing much for you. They're just holding your Euler angles. Unless orientation changes are coming in from other places, you could simply say that orientation = glm::gtx::quaternion::angleAxis(pitchAccum*DEG2RAD, 1.0f, 0.0f, 0.0f) * glm::gtx::quaternion::angleAxis(yawAccum*DEG2RAD, 0.0f, 1.0f, 0.0f); and overwrite the old orientation completely.
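A minimal sketch of that suggestion, reusing the member names from the question (look inversion omitted for brevity, and assuming the older glm angleAxis overload that takes degrees, as used throughout this post):
void Camera::rotate(float yawDegrees, float pitchDegrees)
{
    // Accumulate the Euler angles; the quaternion is rebuilt from scratch
    // each frame instead of being incrementally concatenated.
    pitchAccum = glm::clamp(pitchAccum + pitchDegrees * lookSensitivity, -90.0f, 90.0f);
    yawAccum   = fmodf(yawAccum + yawDegrees * lookSensitivity, 360.0f); // fmodf from <cmath>

    // Pitch on the left (applied in camera space), yaw on the right
    // (applied about the world Y axis): the first-person ordering.
    orientation =
        glm::gtx::quaternion::angleAxis(pitchAccum, 1.0f, 0.0f, 0.0f) *
        glm::gtx::quaternion::angleAxis(yawAccum,   0.0f, 1.0f, 0.0f);
}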
From what I understand of this tutorial, there might be a reason why the pitch angle is restricted to 90 degrees.
Regardless of whether we use quaternions or a look-at matrix, in the end we give an initial orientation to the camera. With quaternions, this is the initial value of the orientation; with lookAt, it is the initial value of the up vector.
If the direction facing towards the camera is parallel to this initial vector, then the cross product of these will be zero, which means the camera might have any orientation if pitch is 90 or -90 degrees.
In the internal implementation of toMat4(orientation), this would result in one of your x_dir/y_dir/z_dir vectors being a zero vector, which would mean that you can have any orientation. This is also discussed in this book, which says that if the Y angle is 90 degrees, a degree of freedom is lost (Edward Angel and Dave Shreiner, Interactive Computer Graphics: A Top-Down Approach with WebGL, Seventh Edition, Addison-Wesley 2015); this is known as gimbal lock.
I can see that you are aware of this problem, but in your code the pitch angle is still set to 90 degrees if it overflows 90, leaving your camera in an invalid state. You should consider something like this instead:
if (pitchAccum > 89.999f && pitchAccum <= 90.0f)
{
pitchAccum = 90.001f;
}
else if (pitchAccum < -89.999f && pitchAccum >= -90.0f)
{
pitchAccum = -90.001f;
}
if (pitchAccum >= 360.0f)
{
pitchAccum = 0.0f;
}
else if (pitchAccum <= -360.0f)
{
pitchAccum = 0.0f;
}
Or you can define another custom action of your choice when pitchAccum is 90 degrees.