OpenGL/GLM - Switching coordinate system direction - c++

I am converting some old software to support OpenGL. DirectX and OpenGL use different coordinate system conventions (OpenGL is right-handed, DirectX is left-handed). I know that in the old fixed-pipeline functionality, I would use:
glScalef(1.0f, 1.0f, -1.0f);
This time around, I am working with GLM and shaders and need a compatible solution. I have tried multiplying my camera matrix by a scaling vector with no luck.
Here is my camera set up:
// Calculate the direction, right and up vectors
direction = glm::vec3(cos(anglePitch) * sin(angleYaw), sin(anglePitch), cos(anglePitch) * cos(angleYaw));
right = glm::vec3(sin(angleYaw - 3.14f/2.0f), 0, cos(angleYaw - 3.14f/2.0f));
up = glm::cross(right, direction);
// Update our camera matrix, projection matrix and combine them into my view matrix
cameraMatrix = glm::lookAt(position, position+direction, up);
projectionMatrix = glm::perspective(50.0f, 4.0f / 3.0f, 0.1f, 1000.f);
viewMatrix = projectionMatrix * cameraMatrix;
I have tried a number of things including reversing the vectors and reversing the z coordinate in the shader. I have also tried multiplying by the inverse of the various matrices and vectors and multiplying the camera matrix by a scaling vector.

Don't think about the handedness that much. It's true, they use different conventions, but you can just choose not to use them and it boils down to almost the same thing in both APIs. My advice is to use the exact same matrices and setups in both APIs except for these two things:
All you should need to do to port from DX to GL is:
Reverse the cull-face winding: DX treats clockwise triangles as front-facing by default and culls counter-clockwise ones, while GL keeps counter-clockwise triangles as front-facing.
Adjust for the different depth range: DX maps depth to 0 (near) through 1 (far), while GL uses -1 (near) through 1 (far). You can do this as a last step, applied on top of the projection matrix.
DX9 also has a half-pixel offset issue in its pixel-coordinate mapping, but that's something else entirely and it's no longer an issue from DX10 onward.
From what you describe, the winding is probably your problem, since you are using the GLM functions to generate matrices that should be alright for OpenGL.


OpenGL vs DirectX, local vs global?

I have read multiple articles and posts about pre/post multiplications, column/row major, DirectX vs OpenGL, and I might be more confused than at the beginning.
So, let's say we write those instructions (pseudocode) in OpenGL:
rotate(...)
translate(...)
From what I understand, OpenGL will do v' = R * T * v, effectively transforming the local coordinates frame of the vector.
But in DirectX, if we do the transformations (pseudocode) in the same order,
rotate(...)
translate(...)
the result will not be the same, right? Since DirectX pre-multiplies, the result will be v' = v * R * T, thus transforming the vector using global coordinates.
So, am I correct when I say that OpenGL being post-multiplication and DirectX being pre-multiplication is like saying that OpenGL moves in local coordinates while DirectX moves in global coordinates?
Thank you.
Your best bet is to read Matrices, Handedness, Pre and Post Multiplication, Row vs Column Major, and Notations.
OpenGL code is often using a right-handed coordinate system, column-major matrices, column vectors, and post-multiplication.
Direct3D code is often using a left-handed coordinate system, row-major matrices, row vectors, and pre-multiplication.
XNA Game Studio's math library (and therefore MonoGame, Unity, etc.) uses a right-handed coordinate system, row-major matrices, row vectors, and pre-multiplication.
The DirectXMath library uses row-major matrices, row vectors, and pre-multiplication, but leaves it up to you to choose to use a left-handed or right-handed coordinate system where it matters. This was even true of the older now deprecated D3DXMath.
In either system, you are still doing the same object coordinate -> world coordinate -> eye coordinate -> clip coordinate transformations.
So with OpenGL GLM you might do:
using namespace glm;
mat4 myTranslationMatrix = translate(mat4(1.0f), vec3(10.0f, 0.0f, 0.0f));
mat4 myRotationMatrix = rotate(mat4(1.0f), radians(90.0f), vec3(0, 1, 0));
mat4 myScaleMatrix = scale(mat4(1.0f), vec3(2.0f, 2.0f, 2.0f));
mat4 myModelMatrix = myTranslationMatrix * myRotationMatrix * myScaleMatrix;
vec4 myTransformedVector = myModelMatrix * myOriginalVector;
In DirectXMath you'd do:
using namespace DirectX;
XMMATRIX myTranslationMatrix = XMMatrixTranslation(10.0f, 0.0f, 0.0f);
XMMATRIX myRotationMatrix = XMMatrixRotationY(XMConvertToRadians(90.f));
XMMATRIX myScaleMatrix = XMMatrixScaling(2.0f, 2.0f, 2.0f);
XMMATRIX myModelMatrix = myScaleMatrix * myRotationMatrix * myTranslationMatrix;
XMVECTOR myTransformedVector = XMVector4Transform( myOriginalVector, myModelMatrix );
And you will get the same transformation as a result.
If you are new to DirectXMath, then you should take a look at the SimpleMath wrapper in the DirectX Tool Kit which hides some of the strict SIMD-friendly alignment requirements with C++ constructors and operators. Because SimpleMath is based on XNA Game Studio C# math design, it assumes right-handed view coordinates but you can easily mix it with 'native' DirectXMath as well to use left-handed view coordinates if desired.
Most of these decisions were arbitrary, but there was sound design reasoning behind them. OpenGL's math library was trying to match the normal mathematical convention of post-multiplication, which led to it adopting column-major matrices. In the early days of Direct3D, the team felt that the reversed concatenation order was confusing, so they flipped all the conventions.
Many years later, the XNA Game Studio team felt that the traditional Direct3D concatenation order was intuitive, but that having 'forward' be negative z was confusing, so they switched to right-handed coordinates. Many of the more modern Direct3D samples therefore use right-handed view systems, but you'll still see a mix of both left-handed and right-handed viewing setups in Direct3D samples. So at this point, we really have "OpenGL style", "classic Direct3D style", and "modern Direct3D style".
Note that these conventions really mattered back when things were done with fixed-function hardware but with programmable shader pipelines what matters is that you are consistent. In fact, the HLSL shader defaults to expecting matrices to be in column-major form so you'll often see DirectXMath matrices transposed as they are copied into constant buffers.
There is no such thing as premultiplication in OpenGL.
The reason why you think OpenGL reverses the multiplications is that it stores matrices in column-major layout: the memory sequence a, b, c, d is interpreted as the matrix
a c
b d
(columns are filled first), which is the transpose of what row-major layout would give.
The mathematical rule for multiplying transposed matrices is:
A^T * B^T = (B*A)^T
So if you want to compute v' = v * R * T (row vectors, row-major matrices) using column vectors and column-major matrices, you have to write
(v')^T = T^T * R^T * v^T
or in your pseudo-code:
translate(...)
rotate(...)
Alternatively you could store your rotation and translation matrices column-major too.

What does the glFrustum function do?

According to MSDN, we are multiplying the current matrix with the perspective matrix. What matrix are we talking about here? Also from MSDN:
"The glFrustum function multiplies the current matrix by this matrix, with the result replacing the current matrix. That is, if M is the current matrix and F is the frustum perspective matrix, then glFrustum replaces M with M • F."
Now, from my current knowledge of grade 12 calculus (in which I am currently), M is a vector and M dot product F returns a scalar, so how can one replace a vector with a scalar?
I'm also not sure what "clipping" planes are and how they can be referenced via one float value.
Please phrase your answer in terms of the parameters, and also conceptually. I'd really appreciate that.
I'm trying to learn OpenGL via this tutorial: http://www.youtube.com/watch?v=WdGF7Bw6SUg. It is not very good, because it either explains nothing or assumes one already knows OpenGL, so I'd really appreciate a link to a better tutorial. I really appreciate your time and responses! I'm sure I can struggle through and figure things out.
You misunderstand what it's saying. M is a matrix, so M • F is also a matrix: the • here is matrix multiplication, not a dot product. glFrustum() constructs a perspective matrix. See this article for an explanation of how it is constructed and when you want to use glFrustum() vs. gluPerspective():
glFrustum() and gluPerspective() both produce perspective projection matrices that you can use to transform from eye coordinate space to clip coordinate space. The primary difference between the two is that glFrustum() is more general and allows off-axis projections, while gluPerspective() only produces symmetrical (on-axis) projections. Indeed, you can use glFrustum() to implement gluPerspective().
Clipping planes are planes that cut out sections of the world so they don't have to be rendered. The frustum describes where the planes are in space. Its sides define the view volume.
glFrustum generates a perspective projection matrix.
This matrix maps a portion of the space (the "frustum") to your screen. Many caveats apply (normalized device coordinates, perspective divide, etc), but that's the idea.
The part that puzzles you is a relic of the deprecated OpenGL API. You can safely ignore it, because usually you apply glFrustum() to an identity projection matrix, so the multiplication mentioned in the doc has no effect.
A perhaps more comprehensible way to do things is the following:
// Projection matrix : 45° Field of View, 4:3 ratio, display range : 0.1 unit <-> 100 units
glm::mat4 Projection = glm::perspective(45.0f, 4.0f / 3.0f, 0.1f, 100.0f);
// Or, for an ortho camera :
//glm::mat4 Projection = glm::ortho(-10.0f,10.0f,-10.0f,10.0f,0.0f,100.0f); // In world coordinates
// Camera matrix
glm::mat4 View = glm::lookAt(
glm::vec3(4,3,3), // Camera is at (4,3,3), in World Space
glm::vec3(0,0,0), // and looks at the origin
glm::vec3(0,1,0) // Head is up (set to 0,-1,0 to look upside-down)
);
// Model matrix : an identity matrix (model will be at the origin)
glm::mat4 Model = glm::mat4(1.0f);
// Our ModelViewProjection : multiplication of our 3 matrices
glm::mat4 MVP = Projection * View * Model; // Remember, matrix multiplication is the other way around
Here, I used glm::perspective (the equivalent of old-style gluPerspective()), but you can use glFrustum (or glm::frustum) too.
As for the arguments of the function, they are best explained by this part of the doc :
(left, bottom, -nearVal) and (right, top, -nearVal) specify the points on
the near clipping plane that are mapped to the lower left and upper
right corners of the window, assuming that the eye is located at
(0, 0, 0)
If you're unfamiliar with transformation matrices, I wrote a tutorial here; the "Projection Matrix" section especially should help you.

OpenGL/GLSL/GLM - Skybox rotates as if in 3rd person

I have just gotten into implementing skyboxes and am doing so with OpenGL/GLSL and GLM as my math library. I assume the problem is matrix-related, and I haven't been able to find an implementation that uses the GLM library:
The model for the skybox loads just fine, the camera however circles it as if it is rotating around it in 3d third person camera.
For my skybox matrix, I am updating it every time my camera updates. Because I use glm::lookAt, it is essentially created the same way as my view matrix, except that I use (0, 0, 0) for the position.
Here is my view matrix creation. It works fine in rendering of objects and geometry:
direction = glm::vec3(cos(anglePitch) * sin(angleYaw), sin(anglePitch), cos(anglePitch) * cos(angleYaw));
right = glm::vec3(sin(angleYaw - 3.14f/2.0f), 0, cos(angleYaw - 3.14f/2.0f));
up = glm::cross(right, direction);
glm::mat4 viewMatrix = glm::lookAt(position, position+direction, up);
Similarly, my sky matrix is created in the same way with only one change:
glm::vec3 position = glm::vec3(0.0f, 0.0f, 0.0f);
glm::mat4 skyView = glm::lookAt(position, position + direction, up);
I know a skybox does not apply translation and only considers rotations so I am not sure what the issue is. Is there an easier way to do this?
Visual aids:
Straight on without any movement yet
When I rotate the camera:
My question is this: how do I set up the correct matrix for rendering a skybox using glm::lookAt?
Aesthete is right: the skybox/skydome is just another object, which means you do not change the projection matrix!
Your render loop should look something like this:
clear the screen/buffers
set the camera
set the modelview matrix to identity, then translate it to the camera position; you can read that position directly from the view matrix (if my memory serves, at array positions 12, 13, 14). To obtain the matrix, see https://stackoverflow.com/a/18039707/2521214
draw the skybox/skydome (do not cross your z_far plane, or disable the depth test)
optionally clear the Z-buffer, or re-enable the depth test
draw your scene stuff (do not forget to set the modelview matrix for each model you draw)
Of course, you can instead temporarily set your camera position to (0, 0, 0) and leave the modelview matrix at identity; it is sometimes the more precise approach, but do not forget to set the camera position back after the skybox draw.
hope it helps.

How to adapt DirectX-style world/view/projection matrices to OpenGL?

I have an application which uses DirectX, and hence a left-handed world-coordinate system, a view which looks down positive z, and a projection which maps into the unit cube but with z from 0..1.
I'm porting this application to OpenGL, which has different conventions. In particular, it's right-handed, the view looks down negative z, and the projection maps into the cube with all coordinates ranging from -1 to 1.
What is the easiest way to adapt the DirectX matrices to OpenGL? The world-space change seems to be easy, just by flipping the x-axis, but I'm not entirely sure how to change the view/projection. I can modify all of the matrices individually before multiplying them together and sending them off to my shader. If possible, I would like to stick with the DirectX based coordinate system as far down the pipeline as possible (i.e. I don't want to change where I place my objects, how I compute my view frustum, etc. -- ideally, I'll only apply some modification to the matrices right before handing them over to my shaders.)
The answer is: There's no need to flip anything, only to tweak the depth range. OpenGL has -1..1, and DX has 0..1. You can use the DX matrices just as before, and only multiply a scale/bias matrix at the end on top of the projection matrix to scale the depth values from 0..1 to -1..1.
Not tested or anything, just off the top of my head:
oglProj = transpose(dxProj)
glScalef(1.f, 1.f, -1.f);     // invert z
glScalef(1.f, 1.f, 2.f);      // depth from [0,1] to [0,2]
glTranslatef(0.f, 0.f, -1.f); // depth from [0,2] to [-1,1]
oglView = transpose(dxView)
glScalef(1.f, 1.f, -1.f);     // invert z
oglModel = transpose(dxModel)
glScalef(1.f, 1.f, -1.f);     // invert z
oglModelView = oglView * oglModel
There is an age-old extension for OpenGL exactly for that: ARB_transpose_matrix. It transposes matrices on the GPU and should be available on every video card.

Creating a tiled world with OpenGL

I'm planning to create a tiled world with OpenGL, with slightly rotated tiles and houses and building in the world will be made of models.
Can anybody suggest me what projection(Orthogonal, Perspective) should I use, and how to setup the View matrix(using OpenGL)?
If you can't figure what style of world I'm planning to create, look at this game:
http://www.youtube.com/watch?v=i6eYtLjFu-Y&feature=PlayList&p=00E63EDCF757EADF&index=2
Using an orthographic vs. perspective projection is entirely an art-style choice. The Pokemon series you're talking about is orthographic -- in fact, it's entirely layered 2D sprites (no 3D involved).
OpenGL has no VIEW matrix. It has a MODELVIEW matrix and a PROJECTION matrix. For Pokemon-style levels, I suggest using simple glOrtho for the projection.
Let's assume your world is in XY space (coordinates for tiles, cameras, and other objects are of the form [x, y, 0]). If a single tile is sized 1x1, then something like glOrtho(0, 12, 0, 9, -10, 10) would be a good projection matrix (12 tiles wide, 9 tall, with Z = 0 as the ground plane).
For MODELVIEW, you can start by loading identity, glTranslate() by the tile position, and then glTranslate() by the negative of the camera position, before you draw your geometry. If you want to be able to rotate the camera, you glRotate() by the negative (inverse) of the camera rotation between the two Translate()s. In the end, you end up with the following matrix chain:
output = Projection × (CameraTranslation⁻¹ × CameraRotation⁻¹ × ModelLocation × ModelRotation) × input
The parts in parentheses are MODELVIEW, and the "⁻¹" means "inverse", which really is negation for a translation and transpose for a rotation.
If you want to rotate your models, too, you generally do that first of all (before the first glTranslate()).
Finally, I suggest the OpenGL forums (www.opengl.org) or the OpenGL subforums of www.gamedev.net might be a better place to ask this question :-)
The projection used by that video game looks Oblique to me. There are many different projections, not just perspective and orthographic. See here for a list of the most common ones: http://en.wikipedia.org/wiki/File:Graphical_projection_comparison.png
You definitely want perspective, with a fixed rotation around the X-axis only, around 45-60 degrees or thereabouts. If you don't care about setting up the projection code yourself, the gluPerspective function from the GLU library is handy.
Assuming OpenGL 2.1:
glMatrixMode(GL_PROJECTION); //clip matrix
glLoadIdentity();
gluPerspective(90.0, (double)width / height, 1.0, 20.0);
glMatrixMode(GL_MODELVIEW); //world/object matrix
glLoadIdentity();
glRotatef(45.0f, 1.0f, 0.0f, 0.0f);
/* render */
The last two parameters to gluPerspective are the distances to the near and far clipping planes. Their values depend on the scale you use for the environment.