Bullet Physics to PhysX: Get Orientation - C++

For a game that I am developing, I moved from Bullet Physics to NVIDIA PhysX. The problem I have is the following: I used to translate the orientation quaternion of a Bullet rigid body into Front, Right and Up vectors for each object on the screen. This was working fine, but after moving to PhysX I noticed that the object's transform contains only one quaternion (presumably representing rotation) and no separate orientation quaternion. Here is the code I was using with Bullet to get the three vectors, translated to PhysX:
physx::PxQuat ori = mBody->getGlobalPose().q;
auto Orientation = glm::quat(ori.x, ori.y, ori.z, ori.w);
glm::quat qF = Orientation * glm::quat(0, 0, 0, 1) * glm::conjugate(Orientation);
glm::quat qUp = Orientation * glm::quat(0, 0, 1, 0) * glm::conjugate(Orientation);
front = { qF.x, qF.y, qF.z };
up = { qUp.x, qUp.y, qUp.z };
right = glm::normalize(glm::cross(front, up));
This apparently doesn't work with PhysX. Is there another way to retrieve the three vectors from the rigid body?
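For reference, here is a minimal sketch of one way to do it in PhysX, assuming the same mBody and headers as the code above (this is not the original poster's solution). One common pitfall is that glm::quat's constructor takes the scalar part first (w, x, y, z), while PxQuat stores (x, y, z, w); alternatively, PxQuat can rotate vectors directly:
physx::PxQuat ori = mBody->getGlobalPose().q;
// Option A: let PhysX rotate the body's local axes into world space.
physx::PxVec3 pxFront = ori.rotate(physx::PxVec3(0, 0, 1));
physx::PxVec3 pxUp = ori.rotate(physx::PxVec3(0, 1, 0));
// Option B: convert to GLM, keeping in mind that glm::quat takes w first.
glm::quat orientation(ori.w, ori.x, ori.y, ori.z);
glm::vec3 front = orientation * glm::vec3(0, 0, 1); // rotate local forward (+Z here) into world space
glm::vec3 up = orientation * glm::vec3(0, 1, 0); // rotate local up (+Y here) into world space
glm::vec3 right = glm::normalize(glm::cross(front, up));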


Changing coordinate system causes clockwise rotations

The task
I need to convert my coordinate system to +X forward, +Y right and +Z up (left-handed, like the one used by Unreal Engine). The crucial part is that I want my camera to face along its forward axis, again as it does in Unreal Engine.
The problem
I have managed to achieve both things: my camera now faces its forward direction and the world coordinates are the same as in UE. However, I've stumbled upon a big issue. For each object:
pitch rotation (around RightVector) is now clockwise
yaw rotation (around UpVector) is now clockwise
roll rotation (around ForwardVector) is counter-clockwise (as it was before)
I need these rotations to be all counter-clockwise (as per standard) and keep the camera facing the forward vector.
My current solution attempt
My current setup relies on rotating the projection and camera view matrices (the model matrix is called the entity matrix in my code):
#define GLM_FORCE_LEFT_HANDED // it's defined somewhere else but so you're aware
// Transform::VectorForward = (1, 0, 0) // these are used below
// Transform::VectorRight = (0, 1, 0)
// Transform::VectorUp = (0, 0, 1)
// The entity matrix update function which updates the model matrix for each object
virtual void UpdateEntityMatrix()
{
    auto m = glm::translate(glm::mat4(1.f), m_Transform.Location);
    m = glm::rotate(m, glm::radians(m_Transform.Rotation[2]), Transform::VectorUp);      // yaw rotation
    m = glm::rotate(m, glm::radians(m_Transform.Rotation[1]), Transform::VectorRight);   // pitch rotation
    m = glm::rotate(m, glm::radians(m_Transform.Rotation[0]), Transform::VectorForward); // roll rotation
    m_EntityMatrix = glm::scale(m, m_Transform.Scale);
}
// The entity matrix update in my camera class (child of entity) which also updates the view matrix
void Camera::UpdateEntityMatrix()
{
    SceneEntity::UpdateEntityMatrix();
    auto m = glm::translate(glm::mat4(1.f), m_Transform.Location);
    m = glm::rotate(m, glm::radians(180.f), GetRightVector()); // NOTE: I'm getting the current Forward/Right/Up vectors of the entity
    m = glm::rotate(m, glm::radians(m_Transform.Rotation[2]), GetUpVector());      // yaw rotation
    m = glm::rotate(m, glm::radians(m_Transform.Rotation[1]), GetRightVector());   // pitch rotation
    m = glm::rotate(m, glm::radians(m_Transform.Rotation[0]), GetForwardVector()); // roll rotation
    m = glm::scale(m, m_Transform.Scale);
    m_ViewMatrix = glm::inverse(m);
    m_ViewProjectionMatrix = m_ProjectionMatrix * m_ViewMatrix;
}
void PerspectiveCamera::UpdateProjectionMatrix()
{
    m_ProjectionMatrix = glm::perspective(glm::radians(m_FOV), m_Width / m_Height, m_NearClip, m_FarClip);
    // we want the camera to face +x:
    m_ProjectionMatrix = glm::rotate(m_ProjectionMatrix, glm::radians(-90.f), Transform::VectorUp);
    m_ProjectionMatrix = glm::rotate(m_ProjectionMatrix, glm::radians(90.f), Transform::VectorRight);
}
This is my result so far (the camera visualization shows the rotation of the currently used camera).
What else I tried
I tried modifying the model matrix (note the negated angles):
virtual void UpdateEntityMatrix()
{
    auto m = glm::translate(glm::mat4(1.f), m_Transform.Location);
    m = glm::rotate(m, glm::radians(-m_Transform.Rotation[2]), Transform::VectorUp);     // yaw rotation
    m = glm::rotate(m, glm::radians(-m_Transform.Rotation[1]), Transform::VectorRight);  // pitch rotation
    m = glm::rotate(m, glm::radians(m_Transform.Rotation[0]), Transform::VectorForward); // roll rotation
    m_EntityMatrix = glm::scale(m, m_Transform.Scale);
}
But it makes my camera not face forward anymore (the view follows the camera's rotation but is mirrored on the Y and Z axes).
I tried to fix it by applying the same change when calculating the camera view matrix, but it didn't help (still mirrored, or other issues appeared).
On top of that, I tried experimenting with the glm::lookAt functions to create the view matrix, but never got it working.
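For what it's worth, here is a minimal sketch of the lookAt approach, assuming GetForwardVector() and GetUpVector() return the entity's rotated +X and +Z axes (this is only an illustration, not the final solution; with GLM_FORCE_LEFT_HANDED defined, glm::lookAt resolves to its left-handed variant):
// Build the view matrix directly from the camera's basis vectors,
// instead of rotating the projection matrix by -90/90 degrees.
glm::vec3 eye = m_Transform.Location;
glm::vec3 forward = GetForwardVector(); // rotated +X axis
glm::vec3 up = GetUpVector(); // rotated +Z axis
m_ViewMatrix = glm::lookAt(eye, eye + forward, up);
m_ViewProjectionMatrix = m_ProjectionMatrix * m_ViewMatrix;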
Update: I've found a solution
I think my problem was actually how I defined a counter-clockwise rotation. It turns out Unreal's rotations are not consistently counter-clockwise. I think it's best to define a counter-clockwise rotation as seen when looking along the direction the axis arrow points - as if you were standing behind it.
Conclusion
Unless somebody finds an error in the system I defined, I'll stick with it. The code is contained in the "My current solution attempt" section above. It seems I was correct from the get-go.
However, the answer provided by @paddy does solve my original issue - converting clockwise to counter-clockwise. Upon further attempts, I was able to correctly replicate Unreal's system while keeping my camera facing forward.

OpenGL ray casting (picking): account for object's transform

For picking objects, I've implemented a ray casting algorithm similar to what's described here. After converting the mouse click to a ray (with origin and direction) the next task is to intersect this ray with all triangles in the scene to determine hit points for each mesh.
I have also implemented the triangle intersection test algorithm based on the one described here. My question is, how should we account for the objects' transforms when performing the intersection? Obviously, I don't want to apply the transformation matrix to all vertices and then do the intersection test (too slow).
EDIT:
Here is the UnProject implementation I'm using (I'm using OpenTK, by the way). I compared the results; they match what gluUnProject gives me:
private Vector3d UnProject(Vector3d screen)
{
    int[] viewport = new int[4];
    OpenTK.Graphics.OpenGL.GL.GetInteger(OpenTK.Graphics.OpenGL.GetPName.Viewport, viewport);
    Vector4d pos = new Vector4d();
    // Map x and y from window coordinates to the range -1 to 1
    pos.X = (screen.X - (float)viewport[0]) / (float)viewport[2] * 2.0f - 1.0f;
    pos.Y = 1 - (screen.Y - (float)viewport[1]) / (float)viewport[3] * 2.0f;
    pos.Z = screen.Z * 2.0f - 1.0f;
    pos.W = 1.0f;
    Vector4d pos2 = Vector4d.Transform(pos, Matrix4d.Invert(GetModelViewMatrix() * GetProjectionMatrix()));
    Vector3d pos_out = new Vector3d(pos2.X, pos2.Y, pos2.Z);
    return pos_out / pos2.W;
}
Then I'm using this function to create a ray (with origin and direction):
private Ray ScreenPointToRay(Point mouseLocation)
{
    Vector3d near = UnProject(new Vector3d(mouseLocation.X, mouseLocation.Y, 0));
    Vector3d far = UnProject(new Vector3d(mouseLocation.X, mouseLocation.Y, 1));
    Vector3d origin = near;
    Vector3d direction = (far - near).Normalized();
    return new Ray(origin, direction);
}
You can apply the reverse transformation of each object to the ray instead.
I don't know if this is the best or most efficient approach, but I recently implemented something similar, as follows:
In world space, the origin of the ray is the camera position. In order to get the direction of the ray, I assumed the user had clicked on the near plane of the camera and thus applied the 'reverse transformation' - from screen space to world space - to the screen-space position (mouseClick.x, viewportHeight - mouseClick.y, 0), and then subtracted the origin of the ray, i.e. the camera position, from the now transformed mouse click position.
In my case, there was no object-specific transformation, meaning I was done once I had my ray in world space. However, transforming origin & direction with the inverse model matrix would have been easy enough after that.
You mentioned that you tried to apply the reverse transformation, but that it didn't work - maybe there's a bug in there? I used GLM - i.e. glm::unProject - for this.
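To make the "reverse transformation" idea concrete, here is a small C++/GLM sketch (hypothetical variable names; the same math applies to the OpenTK code above):
// Transform a world-space ray into an object's local space so the triangle
// intersection test can run against the untransformed (local-space) vertices.
glm::mat4 invModel = glm::inverse(modelMatrix); // the object's model matrix
glm::vec3 localOrigin = glm::vec3(invModel * glm::vec4(rayOrigin, 1.0f)); // point: w = 1
glm::vec3 localDir = glm::normalize(glm::vec3(invModel * glm::vec4(rayDirection, 0.0f))); // direction: w = 0
// Intersect (localOrigin, localDir) against the mesh's local-space triangles;
// hit points can be transformed back to world space with modelMatrix if needed.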

Understanding Assimp's camera output

I'm writing a program to display a 3D scene using Assimp 3.0.
My workflow is:
Blender 2.71 exports an FBX file.
Assimp reads the FBX file.
The camera attributes I get from aiCamera are strange.
I have a camera in Blender with:
(Blender's coordinate system)
location : (0, -5, 0)
rotation : (90, 0, 0)
This should be a simple front view camera.
Since Assimp rotates all models -90 degrees about the x-axis, I suppose Assimp will change this camera to
(OpenGL's coordinate system (x : right) (y : up) (z : out of screen))
position : (0, -5, 0)
up : (0, 0, 1)
lookAt : (0, 1, 0)
But in the aiCamera struct I got:
mPosition : (5, 0, 0)
mUp : (0, 1, 0)
mLookAt : (0, 0, 1)
What's the correct way to use aiCamera?
An aiCamera lives in the aiNode graph. To quote the documentation for aiCamera and aiNode:
aiCamera: Cameras have a representation in the node graph [...]. This means, any values such as the look-at vector are not absolute, they're relative to the coordinate system defined by the node which corresponds to the camera.
aiNode: Cameras and lights are assigned to a specific node name - if there are multiple nodes with this name, they're assigned to each of them.
So somewhere in your node graph there is a node with the same name as your camera. This node contains the homogeneous transformation matrix corresponding to your camera's coordinate system. The product T*v transforms a homogeneous vector v from the camera coordinate system to the world coordinate system (denoting the root coordinate system as the world system and assuming the parent of the camera is the root).
mPosition, mUp, and mLookAt are given in coordinates of the camera coordinate system, so they must be transformed to the world coordinate system. It is important to differentiate between mPosition, which is a point in space, and mUp and mLookAt, which are direction vectors. The transformation matrix is composed of a rotation matrix R and a translation vector t:
T = | R  t |
    | 0  1 |
mPosition in world coordinates is calculated as mPositionWorld = T*mPosition, while the direction vectors are calculated as mLookAtWorld = R*mLookAt and mUpWorld = R*mUp
In C++ the transformation matrix can be found as follows (assuming the aiScene 'scene' has been loaded):
// find the camera's mLookAt
aiCamera** cameraList = scene->mCameras;
aiCamera* camera = cameraList[0]; // using the first camera as an example
aiVector3D lookAt = camera->mLookAt;
// find the transformation matrix corresponding to the camera node
aiNode* rootNode = scene->mRootNode;
aiNode* cameraNode = rootNode->FindNode(camera->mName);
aiMatrix4x4 cameraTransformationMatrix = cameraNode->mTransformation;
The rest of the calculations can then be done using Assimp's linear algebra functions.
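As a rough continuation of the snippet above (and assuming, as before, that the camera node's parent is the root), the world-space values could be computed with Assimp's own types like this:
// aiMatrix4x4 * aiVector3D applies rotation and translation (a point transform),
// so the direction vectors are multiplied by the rotational 3x3 part only.
aiMatrix4x4 T = cameraTransformationMatrix;
aiMatrix3x3 R(T); // upper-left 3x3 (rotation) part of T
aiVector3D positionWorld = T * camera->mPosition;
aiVector3D lookAtWorld = R * camera->mLookAt;
aiVector3D upWorld = R * camera->mUp;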

Combining(?) Quaternions Accurately from Keyboard/Mouse and other sources

I would like to combine mouse and keyboard inputs with the Oculus Rift to create a smooth experience for the user. The goals are:
Positional movement 100% controlled by the keyboard relative to the direction the person is facing.
Orientation controlled 100% by HMD devices like the Oculus Rift.
Mouse orbit capabilities that add to the orientation of the person using the Oculus Rift. For example, if I am looking left, I can still move my mouse to "move" even further left.
Now, I have 100% working code for when someone doesn't have an Oculus Rift; I just don't know how to combine the Rift's orientation (and the other elements it provides) with my already working code.
Anyway, here is my working code for controlling the keyboard and mouse without the Oculus Rift:
Note that all of this code assumes the camera uses a perspective projection:
/*
Variables
*/
glm::vec3 DirectionOfWhereCameraIsFacing;
glm::vec3 CenterOfWhatIsBeingLookedAt;
glm::vec3 PositionOfEyesOfPerson;
glm::vec3 CameraAxis;
glm::vec3 DirectionOfUpForPerson;
glm::quat CameraQuatPitch;
float Pitch;
float Yaw;
float Roll;
float MouseDampingRate;
float PhysicalMovementDampingRate;
glm::quat CameraQuatYaw;
glm::quat CameraQuatRoll;
glm::quat CameraQuatBothPitchAndYaw;
glm::vec3 CameraPositionDelta;
/*
Inside display update function.
*/
DirectionOfWhereCameraIsFacing = glm::normalize(CenterOfWhatIsBeingLookedAt - PositionOfEyesOfPerson);
CameraAxis = glm::cross(DirectionOfWhereCameraIsFacing, DirectionOfUpForPerson);
CameraQuatPitch = glm::angleAxis(Pitch, CameraAxis);
CameraQuatYaw = glm::angleAxis(Yaw, DirectionOfUpForPerson);
CameraQuatRoll = glm::angleAxis(Roll, CameraAxis);
CameraQuatBothPitchAndYaw = glm::cross(CameraQuatPitch, CameraQuatYaw);
CameraQuatBothPitchAndYaw = glm::normalize(CameraQuatBothPitchAndYaw);
DirectionOfWhereCameraIsFacing = glm::rotate(CameraQuatBothPitchAndYaw, DirectionOfWhereCameraIsFacing);
PositionOfEyesOfPerson += CameraPositionDelta;
CenterOfWhatIsBeingLookedAt = PositionOfEyesOfPerson + DirectionOfWhereCameraIsFacing * 1.0f;
Yaw *= MouseDampingRate;
Pitch *= MouseDampingRate;
CameraPositionDelta = CameraPositionDelta * PhysicalMovementDampingRate;
View = glm::lookAt(PositionOfEyesOfPerson, CenterOfWhatIsBeingLookedAt, DirectionOfUpForPerson);
ProjectionViewMatrix = Projection * View;
The Oculus Rift provides orientation data via its SDK, which can be accessed like so:
/*
Variables
*/
ovrMatrix4f OculusRiftProjection;
glm::mat4 Projection;
OVR::Quatf OculusRiftOrientation;
glm::quat CurrentOrientation;
/*
Partial Code for retrieving projection and orientation data from Oculus SDK
*/
OculusRiftProjection = ovrMatrix4f_Projection(MainEyeRenderDesc[l_Eye].Desc.Fov, 10.0f, 6000.0f, true);
for (int o = 0; o < 4; o++) {
    for (int i = 0; i < 4; i++) {
        Projection[o][i] = OculusRiftProjection.M[o][i];
    }
}
Projection = glm::transpose(Projection);
OculusRiftOrientation = PredictedPose.Orientation.Conj();
CurrentOrientation.w = OculusRiftOrientation.w;
CurrentOrientation.x = OculusRiftOrientation.x;
CurrentOrientation.y = OculusRiftOrientation.y;
CurrentOrientation.z = OculusRiftOrientation.z;
CurrentOrientation = glm::normalize(CurrentOrientation);
After that last line, the GLM-based quaternion "CurrentOrientation" holds the correct information which, if plugged straight into an existing MVP matrix structure and sent to OpenGL, will let you move your head around in the environment without issue.
Now, my problem is how to combine the two parts successfully.
When I have tried this in the past, the rotation gets stuck in place (when you turn your head left, you keep rotating left instead of rotating only by the amount you actually turned), and I can no longer accurately determine the direction the person is facing, which my position controls rely on.
So at that point, since I can no longer determine what is "forward", my position controls essentially become useless...
How can I successfully achieve my goals?
I've done some work on this by maintaining a 'camera' matrix which represents the position and orientation of the player, and then, during rendering, composing that with the most recent orientation data collected from the headset.
I have a single interaction class which is designed to pull input from a variety of sources, including keyboard and joystick (as well as a spacemouse, or a Razer Hydra).
You'll probably find it easier to maintain the state as a single combined matrix like I do, rather than trying to compose a lookat matrix every frame.
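A minimal sketch of that idea (hypothetical names: 'player' is the player's pose as a matrix, 'hmdOrientation' is the latest headset quaternion):
// Keep the player's pose as a single matrix and compose the headset pose at render time.
glm::mat4 player = glm::translate(glm::mat4(1.0f), playerPosition) * glm::mat4_cast(playerOrientation);
glm::mat4 headPose = glm::mat4_cast(hmdOrientation); // most recent HMD sample
glm::mat4 view = glm::inverse(player * headPose); // world -> eye (modelview)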
If you look at my Rift.cpp base class for developing my examples, you'll see that I capture keyboard input and accumulate it in the CameraControl instance, so that during the later applyInteraction call we can apply the movement indicated by the keyboard along with the other inputs:
void RiftApp::onKey(int key, int scancode, int action, int mods) {
    ...
    // Allow the camera controller to intercept the input
    if (CameraControl::instance().onKey(player, key, scancode, action, mods)) {
        return;
    }
    ...
}
In my per-frame update code I query any other enabled devices and apply all the inputs to the matrix. Then I update the modelview matrix with the inverse of the player matrix:
void RiftApp::update() {
    ...
    CameraControl::instance().applyInteraction(player);
    gl::Stacks::modelview().top() = glm::inverse(player);
    ...
}
Finally, in my rendering code I have the following, which applies the headset orientation:
void RiftApp::draw() {
    gl::MatrixStack & mv = gl::Stacks::modelview();
    gl::MatrixStack & pr = gl::Stacks::projection();
    for_each_eye([&](ovrEyeType eye) {
        gl::Stacks::with_push(pr, mv, [&]{
            ovrPosef renderPose = ovrHmd_BeginEyeRender(hmd, eye);
            // Set up the per-eye modelview matrix
            {
                // Apply the head pose
                glm::mat4 m = Rift::fromOvr(renderPose);
                mv.preMultiply(glm::inverse(m));
                // Apply the per-eye offset
                glm::vec3 eyeOffset = Rift::fromOvr(erd.ViewAdjust);
                mv.preMultiply(glm::translate(glm::mat4(), eyeOffset));
            }
            // Render the scene to an offscreen buffer
            frameBuffers[eye].activate();
            renderScene();
            frameBuffers[eye].deactivate();
            ovrHmd_EndEyeRender(hmd, eye, renderPose, &eyeTextures[eye].Texture);
        });
        GL_CHECK_ERROR;
    });
    ...
}

First Person Camera movement issues

I'm implementing a first-person camera using the GLM library, which provides some useful functions for calculating perspective and 'lookAt' matrices. I'm also using OpenGL, but that shouldn't make a difference in this code.
Basically, what I'm experiencing is that I can look around, much like in a regular FPS, and move around. But the movement is constrained to the three world axes in such a way that if I rotate the camera, I still move in the same directions as if I had not rotated it... Let me illustrate (in 2D, to simplify things).
In this image, you can see four camera positions.
Those marked with a one are before movement; those marked with a two are after movement.
The red triangles represent a camera that is oriented straight forward along the z axis. The blue triangles represent a camera that has been rotated to look backward along the x axis (to the left).
When I press the 'forward movement key', the camera moves forward along the z axis in both cases, disregarding the camera orientation.
What I want is a more FPS-like behaviour, where pressing forward moves me in the direction the camera is facing. I thought that with the arguments I pass to glm::lookAt, this would be achieved. Apparently not.
What is wrong with my calculations?
// Calculate the camera's orientation
float angleHori = -mMouseSpeed * Mouse::x; // Note that (0, 0) is the center of the screen
float angleVert = -mMouseSpeed * Mouse::y;
glm::vec3 dir(
    cos(angleVert) * sin(angleHori),
    sin(angleVert),
    cos(angleVert) * cos(angleHori)
);
glm::vec3 right(
    sin(angleHori - M_PI / 2.0f),
    0.0f,
    cos(angleHori - M_PI / 2.0f)
);
glm::vec3 up = glm::cross(right, dir);
// Calculate projection and view matrix
glm::mat4 projMatrix = glm::perspective(mFOV, mViewPortSizeX / (float)mViewPortSizeY, mZNear, mZFar);
glm::mat4 viewMatrix = glm::lookAt(mPosition, mPosition + dir, up);
glm::lookAt takes three parameters: eye, centre and up. The first two are positions while the last is a direction vector. If you're planning on using this function, it's better that you maintain only these three parameters consistently.
Coming to the issue with the calculation: I see that the position variable is unchanged throughout the code. All that changes is the look-at point, i.e. the centre. The right thing to do is to first do position += dir, which will move the camera (position) along the direction pointed to by dir. Now, to update the centre, the second parameter can be left as-is: position + dir. This works because position was already updated to the new position, and from there we have a point farther along dir to look at.
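In code, that suggestion amounts to something like the following sketch, reusing the question's variables (mSpeed stands in for whatever movement step is used):
// Move the eye along the current view direction, then look out from the new position.
mPosition += dir * mSpeed;
glm::mat4 viewMatrix = glm::lookAt(mPosition, mPosition + dir, up);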
The issue was actually in another method. When moving the camera, I needed to do this:
void Camera::moveX(char s)
{
    mPosition += s * mSpeed * mRight;
}
void Camera::moveY(char s)
{
    mPosition += s * mSpeed * mUp;
}
void Camera::moveZ(char s)
{
    mPosition += s * mSpeed * mDirection;
}
This makes the camera move along its own (rotated) axes instead of the world axes.
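For completeness, a sketch of how those member vectors could be kept in sync with the angles computed earlier (assuming mDirection, mRight and mUp correspond to dir, right and up from the question's code):
// Recompute the camera's local axes whenever the view angles change, so the
// move functions above translate along the rotated axes rather than the world axes.
mDirection = glm::vec3(
    cos(angleVert) * sin(angleHori),
    sin(angleVert),
    cos(angleVert) * cos(angleHori));
mRight = glm::vec3(
    sin(angleHori - M_PI / 2.0f),
    0.0f,
    cos(angleHori - M_PI / 2.0f));
mUp = glm::cross(mRight, mDirection);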