Cannot both rotate and translate my scene - direct3d - c++

I have drawn a cube onto the screen and I want to both rotate and translate the scene:
// Translation
XMStoreFloat4x4( &m_constantBufferData.model, XMMatrixTranspose( XMMatrixTranslation( placement->GetPosX(), placement->GetPosY(), placement->GetPosZ() ) ) );
// Rotation
XMStoreFloat4x4( &m_constantBufferData.model, XMMatrixTranspose(XMMatrixRotationX( placement->GetRotX() ) ) );
XMStoreFloat4x4( &m_constantBufferData.model, XMMatrixTranspose(XMMatrixRotationY( placement->GetRotY() ) ) );
XMStoreFloat4x4( &m_constantBufferData.model, XMMatrixTranspose(XMMatrixRotationZ( placement->GetRotZ() ) ) );
The problem is, only the translation is working... Do I have to set something up before doing the rotations too?
I have used the default Windows Phone Direct3D C++ project in Visual Studio 2012.
I passed in a few more variables and, thanks to IntelliSense, found out there was an XMMatrixTranslation function.
I added my positioning to this matrix and also hooked up the rotation to some custom variables.
The cube will move (translate), but I am guessing I need to save this movement somehow and THEN do the rotation.
Anything I can add to this to help solve the issue?

You are overwriting the contents of m_constantBufferData.model every time. You need to call XMMatrixMultiply on the four matrices to combine the transformations into a single matrix, then store the final result. For example:
// Rotation
XMMATRIX m = XMMatrixRotationX( placement->GetRotX() );
m = XMMatrixMultiply( m, XMMatrixRotationY( placement->GetRotY() ) );
m = XMMatrixMultiply( m, XMMatrixRotationZ( placement->GetRotZ() ) );
// Translation
m = XMMatrixMultiply(m, XMMatrixTranslation( placement->GetPosX(), placement->GetPosY(), placement->GetPosZ() ) );
XMStoreFloat4x4( &m_constantBufferData.model, XMMatrixTranspose(m) );
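Note that XMMATRIX also overloads operator*, so the same chain can be written in one expression (same Placement accessors as above):
XMMATRIX m = XMMatrixRotationX( placement->GetRotX() )
           * XMMatrixRotationY( placement->GetRotY() )
           * XMMatrixRotationZ( placement->GetRotZ() )
           * XMMatrixTranslation( placement->GetPosX(), placement->GetPosY(), placement->GetPosZ() );
XMStoreFloat4x4( &m_constantBufferData.model, XMMatrixTranspose( m ) );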

Related

Orient object along surface normal

When the user clicks on a surface I would like to place an object at this position and orient it along the surface normal.
After the user performs a click, I read the depth of three neighboring pixels from the buffer, unproject the pixels from screen coordinates to object space and then compute the surface normal from these points in object space:
glReadPixels(mouseX, mouseY, ..., &depthCenter);
pointCenter = gluUnProject(mouseX, mouseY, depthCenter, ...);
glReadPixels(mouseX, mouseY - 1, ..., &depthUp);
pointUp = gluUnProject(mouseX, mouseY - 1, depthUp, ...);
glReadPixels(mouseX - 1, mouseY, ..., &depthLeft);
pointLeft = gluUnProject(mouseX - 1, mouseY, depthLeft, ...);
centerUpVec = norm( pointCenter - pointUp );
centerLeftVec = norm( pointCenter - pointLeft );
normalVec = norm( centerUpVec.cross(centerLeftVec) );
I know that computing the normal from just three pixels is problematic (e.g. at edges, or if the three points have vastly different depths), but for my initial tests on a flat surface this should suffice.
Finally, in order to orient the object along the computed normal vector I create a rotation matrix from the normal and the up vector:
upVec = vec(0.0f, 1.0f, 0.0f);
xAxis = norm( upVec.cross(normalVec) );
yAxis = norm( normalVec.cross(xAxis) );
// set orientation of model matrix
modelMat(0,0) = xAxis(0);
modelMat(1,0) = yAxis(0);
modelMat(2,0) = normalVec(0);
modelMat(0,1) = xAxis(1);
modelMat(1,1) = yAxis(1);
modelMat(2,1) = normalVec(1);
modelMat(0,2) = xAxis(2);
modelMat(1,2) = yAxis(2);
modelMat(2,2) = normalVec(2);
// set position of model matrix by using the previously computed center-point
modelMat(0,3) = pointCenter(0);
modelMat(1,3) = pointCenter(1);
modelMat(2,3) = pointCenter(2);
For testing purposes I'm placing an object on a flat surface after each click. This works well in most cases, when my camera is looking down along the up vector.
However, once I rotate my camera, the placed objects are oriented arbitrarily, and I can't figure out why!
Ok, I just found a small, stupid bug in my code that was unrelated to the actual problem. Therefore, the approach stated in the question above is working correctly.
In order to avoid some pitfalls, one could of course just use a math library, such as Eigen, in order to compute the rotation between the up vector and the surface normal:
upVec = Eigen::Vector3f(0.0f, 1.0f, 0.0f);
Eigen::Quaternion<float> rotationQuat;
rotationQuat.setFromTwoVectors(upVec, normalVec);
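A short sketch of how the resulting quaternion could then be applied, assuming pointCenter is held as an Eigen::Vector3f (Eigen's Transform API does the matrix bookkeeping from the question for you):
#include <Eigen/Geometry>
// Build the model matrix from the click point and the computed rotation.
Eigen::Affine3f modelMat = Eigen::Affine3f::Identity();
modelMat.translate( pointCenter ); // position on the clicked surface
modelMat.rotate( rotationQuat );   // orient the up vector along the normal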

Kinect Scaling in C++

I am experimenting with the Kinect, but I am having some problems with scaling. The code below is from the kinect-kcb, and although the face tracking works for the 'mesh', I am having problems returning the scaling value for my own classes. The code returns a correct rotation and translation, which function perfectly, but the scale only ever returns 1 for a long period (despite the mesh clearly changing size) and then slowly gets smaller, 0.98..., etc., which are clearly not correct scaling values.
float scale;
float rotation[ 3 ];
float translation[ 3 ];
hr = mResult->Get3DPose( &scale, rotation, translation );
if ( SUCCEEDED( hr ) ) {
    Vec3f r( rotation[ 0 ], rotation[ 1 ], rotation[ 2 ] );
    Vec3f t( translation[ 0 ], translation[ 1 ], translation[ 2 ] );
    face.mPoseMatrix.translate( t );
    face.mPoseMatrix.rotate( r );
    face.mPoseMatrix.translate( -t );
    face.mPoseMatrix.translate( t );
    face.mPoseMatrix.scale( Vec3f::one() * scale );
}
This scale value is used repeatedly throughout the code, but does not seem to change often enough (example functions, not in order):
hr = mModel->Get3DShape( shapeUnits, numShapeUnits, animationUnits, numAnimationUnits, scale, rotation, translation, pts, numVertices );
hr = mModel->GetProjectedShape( &mConfigColor, mSensorData.ZoomFactor, viewOffset, shapeUnits, numShapeUnits, animationUnits,
    numAnimationUnits, scale, rotation, translation, pts, numVertices );
The Kinect has a function FaceModel.Scale(); however, this only returns a constant value, which I assume is the initial scaling value for the 3D model. I assumed the scaling value above would then change as the user moved closer to and further from the camera.
The method IFTResult::Get3DPose, among other things, gives you the face scale value. If it is equal to 1.0, then the face scale is equal to that of the loaded 3D model (so there is nothing to do).
If, when reloading the 3D model, the scale value is not equal to 1.0, then you need to apply it to the model.
Have you tried outputting some debug info of what IFTResult::Get3DPose assigns to pScale?
It's also possible that the system is failing to track; you can check this with IFTResult::GetStatus.
It may be that what you are after is the magnitude of the face rectangle, which would scale with the proximity of the image subject.
Here's a relevant Code Project link.
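A minimal sketch of that debugging suggestion, assuming the IFTResult interface used in the question:
// Check that tracking succeeded before trusting the pose values.
HRESULT trackStatus = mResult->GetStatus();
if ( SUCCEEDED( trackStatus ) ) {
    float scale;
    float rotation[ 3 ];
    float translation[ 3 ];
    if ( SUCCEEDED( mResult->Get3DPose( &scale, rotation, translation ) ) ) {
        // Log the raw scale to see how (or whether) it changes per frame.
        char buf[ 64 ];
        sprintf_s( buf, "pScale = %f\n", scale );
        OutputDebugStringA( buf );
    }
}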

c++ quaternion clarification

I'm working on a flight simulator. I've read a tutorial about quaternions (this one: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-17-quaternions/), so they are very new to me.
From what I understand, a quaternion should rotate an object using a direction vector so that the object's orientation matches that direction, and rotating the quaternion should not move the vector, but just make the object turn around itself. Am I right?
If yes, this is my code :
void Plane::Update()
{
    m_matrix = GLM_GTX_quaternion::toMat4( GLM_GTX_quaternion::angleAxis( radians( m_angle ), normalize( m_planeDirection ) ) );
}
When my plane model is pointing along the x vector, the plane rotates correctly around the x vector when the angle is equal to 0, but if I change the angle, it no longer rotates correctly. So how can I find the angle?
Yes, you are correct: a quaternion rotates an object around a directional vector. You should also use the glm::quat typedef when working with quaternions.
#include <glm/gtc/quaternion.hpp>
//...
glm::mat4 m = glm::mat4_cast(glm::angleAxis(glm::radians(m_angle), glm::normalize(m_planeDirection)));
The glm::rotate function also works with quaternions:
glm::mat4 m = glm::mat4_cast(glm::rotate(glm::quat(), 45.0f, glm::vec3(1.0f)));
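If the goal is to make the model face m_planeDirection, one option (a sketch, assuming the untransformed model points along +X and that GLM's experimental GTX extensions are enabled) is to compute the rotation between the two vectors instead of picking an angle by hand:
#include <glm/gtx/quaternion.hpp> // glm::rotation, glm::toMat4
// Assumption: the untransformed model's nose points along +X.
glm::vec3 modelForward( 1.0f, 0.0f, 0.0f );
glm::quat q = glm::rotation( modelForward, glm::normalize( m_planeDirection ) );
glm::mat4 m = glm::toMat4( q );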

My matrix multiplication is not working how I expect

What am I doing wrong with the following code? The translation for the orbit rotation never occurs; everything just ends up rotating on the original axis.
void Renderer::UpdateAsteroid( Placement^ asteroidRotation, Placement^ sceneOrientation )
{
    XMMATRIX selfRotation;
    // Rotate the asteroid
    selfRotation = XMMatrixRotationX( asteroidRotation->GetRotX() );
    selfRotation = XMMatrixMultiply( selfRotation, XMMatrixRotationY( asteroidRotation->GetRotY() ) );
    selfRotation = XMMatrixMultiply( selfRotation, XMMatrixRotationZ( asteroidRotation->GetRotZ() ) );
    XMMATRIX translation;
    // Move the asteroid to new location
    translation = XMMatrixTranslation( sceneOrientation->GetPosX(), sceneOrientation->GetPosY(), sceneOrientation->GetPosZ() );
    XMMATRIX orbitRotation;
    // Rotate from moved origin
    orbitRotation = XMMatrixRotationZ( sceneOrientation->GetRotZ() );
    XMMATRIX outputMatrix = selfRotation * translation * orbitRotation;
    // Store
    XMStoreFloat4x4( &m_constantBufferData.model, XMMatrixTranspose( outputMatrix ) );
}
But it does not seem to work... it just ends up rotating around the same point.
Matrix multiplication is not commutative, i.e. changing the order of the operands changes the result.
Typically you will want to calculate all of your matrices first and then multiply them together. Try this order first:
result = scale * selfRotation * translation * orbitRotation
If that's not what you want, just move the matrices around until you find the right order.
Note that DirectXMath also has an XMMatrixRotationRollPitchYaw() function to get the rotation matrix in one call.
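A sketch of that suggested order using the question's own names (the scale matrix is omitted because the question never builds one; XMMatrixRotationRollPitchYaw takes the X, Y and Z angles as pitch, yaw and roll):
// Self-rotation in a single call instead of three XMMatrixMultiply calls
XMMATRIX selfRotation = XMMatrixRotationRollPitchYaw(
    asteroidRotation->GetRotX(),    // pitch (X)
    asteroidRotation->GetRotY(),    // yaw   (Y)
    asteroidRotation->GetRotZ() );  // roll  (Z)
XMMATRIX translation = XMMatrixTranslation(
    sceneOrientation->GetPosX(), sceneOrientation->GetPosY(), sceneOrientation->GetPosZ() );
XMMATRIX orbitRotation = XMMatrixRotationZ( sceneOrientation->GetRotZ() );
// Spin in place first, then move out, then orbit around the origin
XMMATRIX outputMatrix = selfRotation * translation * orbitRotation;
XMStoreFloat4x4( &m_constantBufferData.model, XMMatrixTranspose( outputMatrix ) );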

MVP matrix not working outside of shader?

Odd problem here, I've been converting my current project from Qt's native matrix/vector classes to Eigen's, but I've come across an issue that I can't work out.
I calculate the MVP for the shader thus:
DiagonalMatrix< double, 4 > diag( Vector4d( 1.0, 1.0, -1.0, 1.0 ) );
scrMatrix_.noalias() = projMatrix_ * diag * camMatrix_.inverse();
The diag matrix inverts the Z axis because all my maths sees the camera's aim vector pointing into the screen, but OpenGL does the opposite. Anyway, this works: the OpenGL side of the viewports appears and operates fine.
The other side of my viewport output is 2D overlay painting via Qt's paintEvent() system, grid labelling for example. So I use the same matrix to find the 3D location in the camera's clip space:
Vector4d outVec( scrMatrix_ * ( Vector4d() << inVec, 1.0 ).finished() );
Except I get totally wrong results:
inVec: 0 0 10
outVec: 11.9406 -7.20796
In this example I expected something more like outVec: 0.55 -0.15. My GLSL vertex shader performs the calculation like this:
gl_Position = scrMatrix_ * transform * vec4( inVec, 1.0 );
In the examples above transform is the identity, so I can't see any difference between the two projections, and yet the outcomes are totally different! I know this is a long shot, but can anyone see where I'm going wrong?
Update:
I reimplemented the old (working) Qt code for comparison purposes:
QVector3D qvec( vector( 0 ), vector( 1 ), vector( 2 ) );
QMatrix4x4 qmat( Affine3d( scrMatrix_ ).data() );
QPointF pnt = ( qvec * qmat ).toPointF() / 2.0;
Vs:
Vector4d vec( scrMatrix_ * ( Vector4d() << vector, 1.0 ).finished() );
QPointF pnt = QPointF( vec( 0 ), vec( 1 ) ) / 2.0;
To me they are identical, but only the Qt version works!
Well, I sussed it out: you need to divide the XYZ components of the resulting vector by the W component, i.e. perform the perspective divide (the clue is in the name...).
It's amazing how much Qt and OpenGL do in the background for you.
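For reference, a minimal sketch of that fix applied to the Eigen version from the question (same variable names):
// Project, then divide by W to get normalized device coordinates.
Eigen::Vector4d vec( scrMatrix_ * ( Eigen::Vector4d() << vector, 1.0 ).finished() );
const double w = vec( 3 );
vec /= w; // the perspective divide that Qt's QVector3D * QMatrix4x4 performs implicitly
QPointF pnt = QPointF( vec( 0 ), vec( 1 ) ) / 2.0;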