FBX SDK importer and LH coordinates system in OpenGL - opengl

We are developing a renderer using OpenGL. One of its features is the ability to import FBX models. The importer module uses the FBX SDK (2017).
Now, as we all know, OpenGL is a right-handed coordinate system: forward runs from positive to negative Z, right is right, and up is up. In our application the requirement is to have a positive forward vector, similar to DirectX.
At the application level we set this by scaling the Z of the projection matrix by -1.
Using GLM math:
glm::mat4 persp = glm::perspectiveLH(glm::radians(fov), width / height, mNearPlane, mFarPlane);
which is the same as doing this:
glm::mat4 persp = glm::perspective(glm::radians(fov), width / height, mNearPlane, mFarPlane);
persp = glm::scale(persp, glm::vec3(1.0f, 1.0f, -1.0f));
So far so good. The funny part comes when we import FBX models.
If we use
FbxAxisSystem::OpenGL.ConvertScene(pSceneFbx);
and then transform the vertices with the node's global transform, which is calculated like this:
FbxSystemUnit fbxUnit = node->GetScene()->GetGlobalSettings().GetSystemUnit();
FbxMatrix globalTransform = node->EvaluateGlobalTransform();
glm::dvec4 c0 = glm::make_vec4((double*)globalTransform.GetColumn(0).Buffer());
glm::dvec4 c1 = glm::make_vec4((double*)globalTransform.GetColumn(1).Buffer());
glm::dvec4 c2 = glm::make_vec4((double*)globalTransform.GetColumn(2).Buffer());
glm::dvec4 c3 = glm::make_vec4((double*)globalTransform.GetColumn(3).Buffer());
The geometry faces come out inverted (relative to OpenGL's default CCW winding order).
If using DirectX converter:
FbxAxisSystem::DirectX.ConvertScene(pSceneFbx);
The model is both inverted and flipped upside-down.
(Screenshots comparing the OpenGL-converted and DirectX-converted models omitted.)
What we found solves this problem is negating the Z of the third column of that matrix, and also rotating it 180 degrees around the Z axis. Otherwise the front of the model would be its back (yes, it sounds tricky, but it makes sense given the difference between the OpenGL and DirectX coordinate systems).
So, the whole "conversion" matrix now looks like this:
FbxSystemUnit fbxUnit = node->GetScene()->GetGlobalSettings().GetSystemUnit();
FbxMatrix globalTransform = node->EvaluateGlobalTransform();

glm::dvec4 c0 = glm::make_vec4((double*)globalTransform.GetColumn(0).Buffer());
glm::dvec4 c1 = glm::make_vec4((double*)globalTransform.GetColumn(1).Buffer());
glm::dvec4 c2 = glm::make_vec4((double*)globalTransform.GetColumn(2).Buffer());
glm::dvec4 c3 = glm::make_vec4((double*)globalTransform.GetColumn(3).Buffer());

glm::mat4 mat =
    glm::mat4(1,  0,  0, 0,
              0,  1,  0, 0,
              0,  0, -1, 0,   // flip Z to get the correct face direction (CCW)
              0,  0,  0, 1);

// Now the faces point the right way, but the model itself needs to be
// rotated because the camera looks at it from the wrong direction.
mat = glm::rotate(mat, glm::radians(180.0f), glm::vec3(0.0f, 0.0f, 1.0f));

glm::mat4 convertMatr = glm::mat4(c0, c1, c2, c3) * mat;
Then transforming the FBX model's vertices with that matrix gives the desired result.
By the way, how do we know the result is the desired one? We compare it to the Unity3D game engine.
Now, this is the first time I've been required to perform such a hack, and it feels like a very nasty one. Especially for skinned meshes, where the conversion matrix also has to be applied to the bone matrices, the pose matrix and so on...
The question: in this case, when we need to keep CCW winding order and have positive forward in OpenGL, is that the only way to get the geometry transformations right?
(The 3D model source is 3ds Max, exported with Y-up.)

Most of the programming I've done in OpenGL has actually used a left-handed system, even though OpenGL's conventions are right-handed and DirectX's are left-handed. The thing to understand about OpenGL versus DirectX is that OpenGL does not have a built-in camera object. You have to create and supply your own camera, and by doing so you can set your coordinate system to be either right-handed or left-handed depending on how you set up your stored matrices.

The other major difference is conceptual: with DirectX, when you move or rotate the camera, it is natural to think of the camera itself as moving, whereas in OpenGL the camera is fixed and it is the entire scene that moves relative to it. So the simplest way to handle the conversion is to understand the operations of multiple matrix calculations and to know the order of your MVP (Model - View - Projection) matrices. The conversion from right-handed to left-handed should be pre-calculated after opening and loading the model file, and then stored into your Model matrix. Once you have the appropriate Model matrix, the rest of the MVP calculations should be accurate.
These would be the basic steps:
Open the model file, extract the needed data and load the contents into application-specific custom data structures.
If necessary, convert from one coordinate system to the other and save back into those structures.
Use the custom structures to create your MVPs and send them to the graphics card in a batch process using the desired shaders.
Render the objects to the screen each frame.
Here are a few very good reads on converting between one coordinate system and another:
Geometrictools
techart3d
The easiest case when converting between right-handed and left-handed systems: as long as X is left/right and Y is up/down, all you need to do is negate every Z coordinate. But this is only valid for translations, not rotations. Now, if the other application uses a right-handed system where the up vector is Z and the in/out vector is Y, and you are using a left-handed system, then you first need to swap every Y with Z and then negate the new Z after the swap. Again, this holds only for translations, not rotations.
Here is the basic formula for conversions:
Let M0 be the given 4x4 transform matrix.
Let M1 be an additional 4x4 affine transform matrix that maps from the left handed coordinate system that you want to the right handed coordinate system that you have currently.
Then the total transform = inv(M1) * M0 * M1,
which can be found here: MathWorks (MATLAB). The math is the same converting in either direction.

Related

OpenGl rotations and translations

I am building a camera class to look around a scene. At the moment I have 3 cubes just spread around to get a good impression of what is going on. I have set the scroll button on my mouse to give me translation along the z-axis, and when I move the mouse left or right I detect this movement and rotate around the y-axis. This is just to see what happens and to play around a bit. So I succeeded in making the camera rotate by rotating the cubes around the origin, but after I rotate by some angle, let's say 90 degrees, and try to translate along the z-axis, to my surprise I find that my cubes are now going from left to right and not towards me or away from me. So what is going on here? It seems that the z-axis is rotated too; I guess the same goes for the x-axis. So it seems that nothing actually moved with respect to the origin, but the whole coordinate system with all the objects was just rotated. Can anyone help me here? What is going on? How does the coordinate system work in OpenGL?
You are most likely confusing local and global rotations. The usual cheap remedy is to change (reverse) the order of some of your transformations, but doing this blindly is trial and error and can be frustrating. It's better to understand the math first...
The old OpenGL API uses an MVP matrix which is:
MVP = Model * View * Projection
where Model and View are already multiplied together. What you have is most likely the same. Now the problem is that Model is a direct matrix, but View is an inverse one.
So if you have some transform matrix representing your camera, then in order to use it to transform back you need its inverse:
MVP = Model * Inverse(Camera) * Projection
Then you can use the same order of transformations for both Model and Camera, and also use their geometric properties like basis vectors, etc. Then stuff like camera-local movement or camera-follow becomes easy. Beware: some tutorials use a transpose instead of a real matrix inverse. That is correct only if the matrix contains nothing but equal-sized orthogonal basis vectors and no offset, so no scale, skew, offset or projection, just rotation and equal scale along all axes!
That means that when you rotate Model and View the same way, the results are opposite. So in old code it is usual to have something like this:
// view part of the matrix
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(view_c, 0, 0, 1); // ugly Euler angles
glRotatef(view_b, 0, 1, 0); // ugly Euler angles
glRotatef(view_a, 1, 0, 0); // ugly Euler angles
glTranslatef(view_pos.x, view_pos.y, view_pos.z); // set camera position
// model part of the matrix
for (i = 0; i < objs; i++)
{
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glTranslatef(obj_pos[i].x, obj_pos[i].y, obj_pos[i].z); // set object position
    glRotatef(obj_a[i], 1, 0, 0); // ugly Euler angles
    glRotatef(obj_b[i], 0, 1, 0); // ugly Euler angles
    glRotatef(obj_c[i], 0, 0, 1); // ugly Euler angles
    // render obj[i] here
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
}
Note the order of transforms is opposite (I just wrote this here in the editor, so it is untested and may be opposite to the native GL notation; I do not use Euler angles). The order must match your convention. To learn more about these (including examples) without the troublesome Euler angles, see:
Understanding 4x4 homogenous transform matrices
Here is the 4D version of what your 3D camera class should look like (just shrink the matrices to 4x4 and keep 3 rotations instead of 6):
reper4D
Pay attention to the difference between the local lrot_?? and the global grot_?? functions. Also note that rotations are defined by a plane, not by an axis vector; the axis vector is just a human abstraction that only really works in 2D and 3D, while planes work from 2D up to ND.
PS. It's a good idea to keep the distortions (scale, skew) separated from the model and to keep the transform matrices that represent coordinate systems orthonormal. It will ease up a lot of things later on, once you get to do advanced math on them. Resulting in:
MVP = Model * Model_distortion * Inverse(Camera) * Projection

OpenGL where to process vertices

I want to use Java and OpenGL (through LWJGL) to render a 3D object. I also use GLFW to set up windows, contexts, etc.
I have a custom class to represent a unit sphere, which stores coordinates of every vertex as well as an array of integers to represent the triangle mesh. It also stores the translation, scale factor and rotation (so that any sphere can be built from the unit sphere model).
The vertices are in world coordinates, e.g. (1, 0, 0) with scale factor 5, translation (0, 10, 0) and rotation (0, 0, 0).
I also have a custom camera object which stores the position of the camera and the orientation (in radians) to each axis (yaw, roll, pitch). It also stores the distance to the projection plane to allow a changeable FOV.
I know that in order to display the sphere, I need to apply a series of transformations to all vertices. My question is, where should I apply each transformation?
OpenGL screen coordinates range from (-1,-1) to (1, 1).
My current solution (which I would like to verify is optimal) is as follows:
In CPU:
Apply model transform on the sphere (so some vertex is now [5, 10, 0]). My model transform is in the following order: scale, rotation [z,y,x] then translation. Buffer these into vbo/vao and load into shader along with camera position, rotation and distance to PP.
In vertex shader:
apply camera transform on world coordinates (build the transformation matrix here or load it in from CPU too?)
calculate 2D screen coordinates by similar triangles
calculate OpenGL coordinates by ratios of screen resolution
pass resulting vertex coordinates into the fragment shader
Have I understood the process correctly? Should I rearrange any of these steps? I am reading guides online, but they aren't always specific enough: most already store the vertices (in Java) in the OpenGL coordinate system rather than in world coordinates like me. Some say that the camera is fixed at the origin in OpenGL, though I assume I need to apply the camera transform for that to hold and to display shapes properly.
Thanks

DirectX 11 Moving my square

I am learning DirectX 11 and have gotten myself to a point where I have a square displayed. My 4 vertices are defined as:
VertexPos vertices[] =
{
    XMFLOAT3(-0.2f,  0.2f, 0.2f),
    XMFLOAT3( 0.2f,  0.2f, 0.2f),
    XMFLOAT3(-0.2f, -0.2f, 0.2f),
    XMFLOAT3( 0.2f, -0.2f, 0.2f)
};
They then go through the necessary stages to render to the screen (like the hello world of DirectX programming). I have combined in some code from a DirectInput demo and would like to be able to move the square around using the arrow keys. So far I have:
void SquareMove::Update(float dt)
{
    keyboardDevice_->GetDeviceState(sizeof(keyboardKeys_), (LPVOID)&keyboardKeys_);
    // Button down event.
    if (KEYDOWN(prevKeyboardKeys_, DIK_DOWN) && !KEYDOWN(keyboardKeys_, DIK_DOWN))
    {
        PostQuitMessage(0);
    }
(That was just to test that, e.g., my down-arrow callback works.) I am now at a loss as to how to actually implement the steps needed to move my square. I understand it has to do with D3DXMatrixTranslation, but I am struggling to see how it all fits together to perform the necessary operations. Thanks
I am not quite sure whether you actually understand the effect of the translation matrix or not. There are many great tutorials to guide you through the implementation details, so I will just share what was interesting to me: the way I first understood rendering.
In short, you have to learn to visualize the effect of the mathematics on your object. Your square's vertices are currently in what is called "model" or "local" space, i.e. they only give information on the shape and size of the model relative to its own origin. Now you have to position it in your "world". To do so you have to move ("translate") each of the vertices of your object to its new place while keeping the size and shape you defined in model space: translate each of them by the same length and in the same direction, in other words by the same vector, applying the same calculation to each vertex.
In rendering, transformation of an object is achieved through matrices. Each matrix has values at certain positions which, when multiplied with the coordinates of the object, change it in some way. Applying them to the object's coordinates in sequence (via multiplication) applies the transformations in sequence: an object may first be rotated (this rotates it around the center of your world, i.e. around (0, 0, 0)), then translated, then rotated again... and that is just a multiplication of 3 matrices.
The specific implementation may be found at plenty of places: http://www.directxtutorial.com/Lesson.aspx?lessonid=9-4-5
Just for completeness this is how I managed to do it. In the render function I added the following
XMMATRIX m_Translation = XMMatrixTranslation(0.0f, fY, 0.0f);
XMMATRIX mvp = world * vpMatrix_ * m_Translation;
(Previously it was just world * vpMatrix_, the view-projection matrix.)
where fY is controlled via button-down events:
if (KEYDOWN(prevKeyboardKeys_, DIK_DOWN) && !KEYDOWN(keyboardKeys_, DIK_DOWN))
{
    fY -= 0.1f;
}
And when running the application my object moves!

OpenGL/GLM - Switching coordinate system direction

I am converting some old software to support OpenGL. DirectX and OpenGL have different coordinate systems (OpenGL is right-handed, DirectX is left-handed). I know that with the old fixed-function pipeline, I would use:
glScalef(1.0f, 1.0f, -1.0f);
This time around, I am working with GLM and shaders and need a compatible solution. I have tried multiplying my camera matrix by a scaling vector with no luck.
Here is my camera set up:
// Calculate the direction, right and up vectors
direction = glm::vec3(cos(anglePitch) * sin(angleYaw), sin(anglePitch), cos(anglePitch) * cos(angleYaw));
right = glm::vec3(sin(angleYaw - 3.14f/2.0f), 0, cos(angleYaw - 3.14f/2.0f));
up = glm::cross(right, direction);
// Update our camera matrix, projection matrix and combine them into my view matrix
cameraMatrix = glm::lookAt(position, position+direction, up);
projectionMatrix = glm::perspective(50.0f, 4.0f / 3.0f, 0.1f, 1000.f);
viewMatrix = projectionMatrix * cameraMatrix;
I have tried a number of things including reversing the vectors and reversing the z coordinate in the shader. I have also tried multiplying by the inverse of the various matrices and vectors and multiplying the camera matrix by a scaling vector.
Don't think about the handedness that much. It's true, they use different conventions, but you can just choose not to use them and it boils down to almost the same thing in both APIs. My advice is to use the exact same matrices and setups in both APIs except for these two things:
All you should need to do to port from DX to GL is:
Reverse the cull-face winding: DirectX treats clockwise triangles as front-facing by default, while OpenGL's default front face is counter-clockwise.
Adjust for the different depth range: DirectX uses a depth range of 0 (near) to 1 (far), while GL uses a signed range from -1 (near) to 1 (far). You can do this as a last step in the projection matrix.
DX9 also has issues with pixel-coordinate offsets, but that's something else entirely and it's no longer an issue with DX10 onward.
From what you describe, the winding is probably your problem, since you are using the GLM functions to generate matrices that should be alright for OpenGL.

Rotating a spaceship model for a space simulator/game

I've been working on a space sim for some time now.
At first I was using my own 3D engine with a software rasterizer,
but I gave up on it when the time came to implement textures.
Now I've started again, this time using OpenGL (with SDL) to render the 3D models.
But now I've hit another brick wall:
I can't figure out how to make proper rotations.
Being a space simulator, I want controls similar to a flight sim,
using
glRotatef(angleX, 1.0f, 0.0f, 0.0f);
glRotatef(angleY, 0.0f, 1.0f, 0.0f);
glRotatef(angleZ, 0.0f, 0.0f, 1.0f);
or similar,
which does not work properly if I first rotate the model (spaceship) 90 degrees to the left and then rotate it "up".
Instead it rolls.
Here's an image that illustrates my problem:
Image Link
I tried several tricks to counter this, but somehow I feel I'm missing something.
It doesn't help either that simulator-style rotation examples are almost impossible to find.
So I'm searching for examples, links and the theory of rotating a 3D model (like a spaceship or airplane).
Should I be using 3 vectors (left, up, forward) for orientation? I'm also going to have to calculate things like acceleration from thrusters, which changes with the rotation (orientation?) and, from the model's perspective, points in a fixed direction like the rocket engines.
I'm not very good at math, and trying to visualize a solution just gives me a headache.
I'm not sure I entirely understand the situation, but it sounds like you might be describing gimbal lock. You might want to look at using quaternions to represent your rotations.
Getting this right certainly can be challenging. The problem I think you are facing is that you are using the same transformation matrices for rotations regardless of how the 'ship' is already oriented. But you don't want to rotate your ship based on how it would turn when it's facing forward, you want to rotate based on how it's facing now. To do that, you should transform your controlled turn matrices the same way you transform your ship.
For instance, say we've got three matrices, each representing the kinds of turns we want to do.
float theta = 10.0*(pi/180.0)
matrix<float> roll  = [[  cos(theta), sin(theta), 0],
                       [ -sin(theta), cos(theta), 0],
                       [           0,          0, 1]]
matrix<float> pitch = [[1,           0,          0],
                       [0,  cos(theta), sin(theta)],
                       [0, -sin(theta), cos(theta)]]
matrix<float> yaw   = [[ cos(theta), 0, sin(theta)],
                       [          0, 1,          0],
                       [-sin(theta), 0, cos(theta)]]
matrix<float> orientation = [[1, 0, 0],
                             [0, 1, 0],
                             [0, 0, 1]]
each of which represents 10 degrees of rotation about one of the three flight-attitude axes (roll about the forward Z axis, pitch about the right X axis, yaw about the up Y axis). We also have a matrix for your ship's orientation, initially just straight ahead. You will transform your ship's vertices by that orientation matrix to display it.
Then, to get your orientation after a turn, you need just a bit of cleverness: first transform the attitude-control matrices into the player's coordinates, and then apply that to the orientation to get a new orientation. Something like:
function do_roll(float amount):
    matrix<float> local_roll = amount * (roll * orientation)
    orientation = orientation * local_roll

function do_pitch(float amount):
    matrix<float> local_pitch = amount * (pitch * orientation)
    orientation = orientation * local_pitch

function do_yaw(float amount):
    matrix<float> local_yaw = amount * (yaw * orientation)
    orientation = orientation * local_yaw
so that each time you want to rotate in one way or another, you just call one of those functions.
What you're going to want to use here is quaternions. They eliminate the strange behaviors you are experiencing. Think of them as a matrix on steroids with similar functionality. You CAN use matrices better than you are (in your code above) by using whatever OpenGL functionality lets you create a rotation matrix about a particular axis vector, but quaternions will store your rotations for future modification. For example: you start with an identity quaternion and rotate it about a particular axis vector. The quaternion then gets translated into a world matrix for your object, but you keep the quaternion stored within your object. The next time you need to perform a rotation, just modify that quaternion further instead of trying to keep track of X, Y and Z axis rotation degrees, etc.
My experience is with DirectX (sorry, no OpenGL experience here), though I did once run into your problem when I was attempting to rotate beach balls that were bouncing around a room and rotating as they hit floors, walls, and each other.
Google has a bunch of options on "OpenGL Quaternion", but this, in particular, appears to be a good source:
http://gpwiki.org/index.php/OpenGL:Tutorials:Using_Quaternions_to_represent_rotation
As you may have guessed by now, Quaternions are great for handling cameras within your environment. Here's a great tutorial:
http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=Quaternion_Camera_Class
You should study 3D mathematics so that you can gain a deeper understanding of how to control rotations. If you don't know the theory, it can be hard to even copy and paste correctly. Texts such as the 3D Math Primer (Amazon) and relevant sites like http://gamemath.com will greatly aid you in your project (and all future ones).
I understand you may not like math now, but learning the relevant arithmetic will be the best solution to your issue.
Quaternions may help, but a simpler solution may be to observe a strict order of rotations. It sounds like you're rotating around y, and then rotating around x. You must always rotate x first, then y, then z. Not that there's anything particularly special about that order, just that if you do it that way, rotations tend to work a little bit closer to how you expect them to work.
Edit: to clarify a little bit, you also shouldn't cumulatively rotate across time in the game. Each frame you should start your model out in identity position, and then rotate, x, y, then z, to that frame's new position.
General rotations are difficult. Physicists tend to use a set of so-called Euler angles to describe them. In this method a general rotation is described by three angles taken about three axes in fixed succession. But the three axes are not necessarily the X-, Y- and Z-axes of the original frame. They are often the Z-, Y- and Z-axes of the original frame (yes, it really is completely general), or two axes from the original frame followed by an axis in an intermediate frame. Many choices are available, and making sure that you follow the same convention all the way through can be a real hassle.