I'm updating an old OpenGL project, replacing all the (deprecated) glMatrix() functions with matrices and quaternions, and I'm having trouble getting the rotation working.
My drawing code looks like this:
//these two are supposedly working
mat4 mProjection = perspective(FOV, aspectRatio, near, far);
mat4 mView = lookAt(cameraPosition, cameraCenter, headsUp);
mat4 mModel = mat4(1.0f);
mat4 mMVP = mProjection * mView * mModel;
What I'm trying to do now is to apply rotation to an object around a specific point (like the object's center).
I tried:
mat4 mModelRotation = rotate(mModel, object->RotationY(), vec3(0.0, 1.0, 0.0)); //RotationY being an angle in degrees
mat4 mMVP = mProjection * mView * mModel * mModelRotation;
But this causes the object to rotate around one of its edges, not its center.
I'd like to know how I can apply quaternions to rotate the object around any point I pass as a parameter, for example.
I'm inexperienced with matrices, since I avoided them while I could still use the glMatrix() functions, so I don't understand much about the relation between them and spatial position, and trying to move to quaternions is looking even more complicated.
I've read about the logic of quaternions and how to use them, technically, but I don't understand where their values come from.
For example:
//axis is a unit vector
local_rotation.w = cosf( fAngle/2)
local_rotation.x = axis.x * sinf( fAngle/2 )
local_rotation.y = axis.y * sinf( fAngle/2 )
local_rotation.z = axis.z * sinf( fAngle/2 )
total = local_rotation * total
I read this, and I have no clue what these values are. Axis is a unit vector... of what? fAngle, I assume, is the angle I want to rotate by, but since quaternions use an arbitrary axis, how do I get the value for each of the X, Y, Z components, and how do I specify it in the quaternion?
So, I'm looking for any practical example/tutorial of a Quaternion, so I can understand what's going on.
The only information I have when I want to rotate an object is the axis I want to rotate around (x, y OR z, not all of them, though the final result may combine them), and a value in degrees.
I'm not much of a math person, so any tutorial that doesn't use shortcuts is highly appreciated.
OK, let's say that you have a model ML (the set of points of a model) and a point P around which to rotate ML.
All rotations are referred to the origin, so you need to move the set of points ML so that P becomes the origin, rotate all the points, and then move them back.
How do you do this? Simple: for each point ML(k) (a point in the set) you do:
ML(k) - P --> with this you move the points, using P as the origin
then rotate:
ROT * (ML(k) - P)
and finally, you move it back:
ROT * (ML(k) - P) + P
With quaternions, you replace the matrix multiplication with the quaternion sandwich product, multiplying by q on the left and by its conjugate q⁻¹ on the right:
q * (ML(k) - P) * q⁻¹ + P
That should work.
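Here is a minimal GLM-based sketch of that recipe (function and variable names are my own). glm::angleAxis builds the quaternion from exactly the axis/angle formula quoted in the question:
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Rotate 'point' (ML(k)) around 'center' (P) by 'degrees' about a unit 'axis'.
glm::vec3 rotateAroundPoint(const glm::vec3& point, const glm::vec3& center,
                            const glm::vec3& axis, float degrees)
{
    // w = cos(a/2), xyz = axis * sin(a/2) -- the formula from the question
    glm::quat q = glm::angleAxis(glm::radians(degrees), glm::normalize(axis));
    // GLM's operator* performs the q * v * q^-1 sandwich for you
    return q * (point - center) + center;
}
For the model-matrix version of the same idea, you could build mModel as translate(P) * mat4_cast(q) * translate(-P), so the object rotates around its center P instead of the world origin.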
I have a transformation matrix, which is a combination of three other transformation matrices.
glm::mat4 Matrix1 = position * rotation * scaling;
glm::mat4 Matrix2 = position * rotation * scaling;
glm::mat4 Matrix3 = position * rotation * scaling;
glm::mat4 transMatrix = Matrix1 * Matrix2 * Matrix3;
If sometime later I just want to remove the effect of Matrix1 from transMatrix, how can I do that?
In short you may simply multiply by the inverse of Matrix1:
glm::mat4 Matrix2And3 = glm::inverse(Matrix1) * transMatrix;
Order of operations is important: if you wanted to remove Matrix3, you would need transMatrix * inverse(Matrix3) instead. If it was Matrix2, you would need to remove Matrix1 (or Matrix3) first and then Matrix2.
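A quick sketch of both cases with GLM (the variable names are my own):
// Remove the rightmost factor: multiply by its inverse on the right.
glm::mat4 Matrix1And2 = transMatrix * glm::inverse(Matrix3);
// Remove the middle factor: peel off Matrix1, drop Matrix2, reapply Matrix1.
glm::mat4 Matrix1And3 = Matrix1 * glm::inverse(Matrix2) * glm::inverse(Matrix1) * transMatrix;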
However, matrix inversion should be avoided where possible, since it is relatively expensive, and in your situation it is avoidable.
What you call Matrix is actually just a 3D Pose: Position + Rotation + Size.
Assuming you are using uniform scaling (Size = float), the mat3 component of your Pose is an orthogonal matrix. These types of matrices have a special property:
Inverse(O) == Transpose(O)
Calculating the transpose of a matrix is a lot simpler than the inverse. This means you can do the following to achieve the same result, but a lot faster:
glm::mat4 inv = glm::mat4(glm::transpose(glm::mat3(Matrix1)));
inv[3] = inv * glm::vec4(-position1, 1.0f); // the inverse translation is -transpose(R) * position1, not just -position1
mat4 Matrix2And3 = inv * transMatrix;
If you want to go even further, I recommend creating a Pose class and providing cast operators to mat4 and mat3, to get full performance with ease.
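A minimal sketch of such a Pose class, under my own naming and assuming uniform scale (the cast operators let a Pose drop straight into mat4/mat3 expressions):
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

struct Pose {
    glm::vec3 position;
    glm::quat rotation;
    float     size = 1.0f;

    // Upper-left 3x3 block: rotation times uniform scale.
    operator glm::mat3() const { return glm::mat3_cast(rotation) * size; }

    // Full 4x4 transform: translate * rotate * scale.
    operator glm::mat4() const {
        glm::mat4 m = glm::mat4(glm::mat3_cast(rotation) * size);
        m[3] = glm::vec4(position, 1.0f);
        return m;
    }

    // Inverse without a general matrix inversion: conjugate the rotation,
    // reciprocate the scale, and transform the negated position.
    Pose inverse() const {
        Pose inv;
        inv.rotation = glm::conjugate(rotation);
        inv.size     = 1.0f / size;
        inv.position = -(inv.rotation * position) * inv.size;
        return inv;
    }
};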
I'm developing a game that consists of two stages; one of these has an orthographic projection, and the other has a perspective projection.
Currently when we go between modes we fade to black, and then come back in the new camera mode.
How would I go about smoothly transitioning between the two?
There are probably a handful of ways of accomplishing this, the two I found that seemed like they would work the best were:
Lerping all the matrix elements from one matrix to the other. Apparently this works pretty well, all things considered. I don't believe this transition will appear linear, though; you could give it an easing function instead of interpolating linearly (a sketch of this follows the list).
A dolly zoom on the perspective matrix going to/from a near 0 field of view. You would pop from the orthographic matrix to the near 0 perspective matrix and lerp the fov out to your target, and probably be heavily tweaking the near/far planes as you go. In reverse you would lerp to 0 and then pop to the orthographic matrix. The idea behind this being that things appear flatter with a lower fov and that a fov of 0 is essentially an orthographic projection. This is more complex but can also be tweaked a whole lot more.
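A minimal sketch of the first option, assuming GLM (the helper name is my own): element-wise interpolation between the two projection matrices, with a smoothstep easing so the motion doesn't read as strictly linear.
#include <glm/glm.hpp>

glm::mat4 blendProjections(const glm::mat4& from, const glm::mat4& to, float t)
{
    float eased = t * t * (3.0f - 2.0f * t); // smoothstep easing curve
    glm::mat4 out;
    for (int col = 0; col < 4; ++col)
        out[col] = glm::mix(from[col], to[col], eased); // per-column vec4 lerp
    return out;
}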
If you have access to a programmable pipeline (a.k.a. shaders), you can do the transition in the vertex shader. I have found that this works very well and does not introduce artifacts. Here's a GLSL code snippet:
#version 150
uniform mat4 uModelMatrix;
uniform mat4 uViewMatrix;
uniform mat4 uProjectionMatrix;
uniform float uNearClipPlane = 1.0;
uniform vec2 uPerspToOrtho = vec2( 0.0 );
in vec4 inPosition;
void main( void )
{
    // Calculate the view-space position.
    vec4 view = uViewMatrix * uModelMatrix * inPosition;
    // Scale x & y to 'undo' the perspective projection.
    view.x = mix( view.x, view.x * ( -view.z / uNearClipPlane ), uPerspToOrtho.x );
    view.y = mix( view.y, view.y * ( -view.z / uNearClipPlane ), uPerspToOrtho.y );
    // Output the clip-space coordinate.
    gl_Position = uProjectionMatrix * view;
}
In the code, uPerspToOrtho is a vec2 (e.g. a float2 in HLSL) containing values in the range [0..1]. When set to 0, your coordinates use perspective projection (assuming your projection matrix is a perspective one). When set to 1, your coordinates behave as if projected by an orthographic projection matrix. You can control this separately for the X- and Y-axes.
'uNearClipPlane' is the near plane distance, which is the value you used to create the perspective projection matrix.
When converting this to HLSL, you may need to use view.z instead of -view.z, but I could be wrong.
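On the application side, driving the transition is just a matter of animating the uniform over time. A sketch in C++/OpenGL, where program is the linked shader program and t runs from 0 to 1 over the transition (both names are my own):
GLint loc = glGetUniformLocation( program, "uPerspToOrtho" );
glUniform2f( loc, t, t ); // t = 0: perspective, t = 1: orthographic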
I hope you find this useful.
Edit: instead of passing in the near clip plane distance, you can also extract it from the projection matrix. For OpenGL, this is how:
float zNear = 2.0 * uProjectionMatrix[3][2] / ( 2.0 * uProjectionMatrix[2][2] - 2.0 );
Edit 2: you can optimize the code by doing the scaling on x and y at the same time:
view.xy = mix( view.xy, view.xy * ( -view.z / uNearClipPlane ), uPerspToOrtho.xy );
To get rid of the division, you could multiply by the inverse near plane distance:
uniform float uInvNearClipPlane; // = 1.0 / zNear
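The scaling line would then become:
view.xy = mix( view.xy, view.xy * ( -view.z * uInvNearClipPlane ), uPerspToOrtho.xy );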
I managed to do this without the explicit use of matrices. I used Java, so the syntax is different but comparable. One of the things I used was this mix() function. It returns value1 when factor is 1 and value2 when factor is 0, with a linear transition for every value in between.
private double mix(double value1, double value2, double factor)
{
    return (value1 * factor) + (value2 * (1 - factor));
}
When I call this function, I use value1 for perspective and value2 for orthographic, like so: mix(focalLength / voxel.z, orthoZoom, factor)
When determining your focal length and orthographic zoom factor, it is helpful to know that anything at distance focalLength / orthoZoom away from the camera will project to the same point throughout the transition: at that distance the perspective scale focalLength / z equals orthoZoom, so both projections agree there.
Hope this helps. You can download my program to see how it looks at https://github.com/npetrangelo/3rd-Dimension/releases.
I have been trying to get variance shadow mapping to work in my WebGL application, but I seem to be having an issue that I could use some help with. In short, my shadows seem to vary over a much smaller distance than in the examples I have seen. I.e. the shadow range is from 0 to 500 units, but the shadow is black 5 units away and almost non-existent 10 units away. The examples I am following are based on these two links:
VSM from Florian Boesch
VSM from Fabian Sanglard
In both of those examples, the authors are using a spot light perspective projection to map the variance values to a floating-point texture. In my engine, I have so far tried to use the same logic, except I am using a directional light and an orthographic projection. I tried both techniques and the result seems to always be the same for me. I'm not sure if it's because I'm using an orthographic matrix to do the projection - I suspect it might be. Here is a picture of the problem:
Notice how the box is only a few units away from the circle, but the shadow is much darker even though the shadow camera's range is 0.1 to 500 units.
In the light shadow pass my code looks like this:
// viewMatrix is a uniform of the inverse world matrix of the camera
// vWorldPosition is the varying vec4 of the vertex position x world matrix
vec3 lightPos = (viewMatrix * vWorldPosition).xyz;
float depth = clamp(length(lightPos) / 40.0, 0.0, 1.0);
float moment1 = depth;
float moment2 = depth * depth;
// Adjusting moments (this is sort of bias per pixel) using partial derivative
float dx = dFdx(depth);
float dy = dFdy(depth);
moment2 += 0.25 * (dx * dx + dy * dy); // moment2 already holds depth * depth, so only the derivative term is added here
gl_FragColor = vec4(moment1, moment2, 0.0, 1.0);
Then in my shadow pass:
// lightViewMatrix is the light camera's inverse world matrix
// vertWorldPosition is the attribute position x world matrix
vec3 lightViewPos = (lightViewMatrix * vertWorldPosition).xyz;
float lightDepth2 = clamp(length(lightViewPos) / 40.0, 0.0, 1.0);
float illuminated = vsm( shadowMap[i], shadowCoord.xy, lightDepth2, shadowBias[i] );
shadowColor = shadowColor * illuminated;
Firstly, should I be doing anything differently with orthographic projection? (It's probably not this, but I don't know what else it might be, as it happens with both techniques above.) If not, what might I do to get a more even spread of the shadow?
Many thanks
I am trying to rotate a point, say (20, 6, 30), around a point (10, 6, 10) at a radius of 2, and I have failed so far trying to do it.
I know that to rotate a point around the origin you just multiply the rotation matrix with the world matrix, and that to rotate a point around itself you translate the point to the origin, rotate, and translate back, but I'm not sure how to approach this problem.
I could slap together some C++ code if you like (stay away from D3DX, as it is deprecated), but I think figuring things out for yourself is a big part of programming. Here is the math behind rotating 3D point v2 around 3D point v1. Hope it helps:
1.) Compute the difference vector by subtracting v1 from v2. Store it in v3.
2.) Convert v3 to spherical coordinates, a notation defined by radius, yaw, and pitch.
3.) Change the values of theta (yaw) and phi (pitch) as required.
4.) Convert v3 back into Cartesian (x, y, z) coordinates and add the coordinates of v1. That's where v2's new position is (a code sketch follows the notes below).
Note 1 - In physics, the meanings of theta and phi are swapped, so theta is pitch and phi is yaw. In mathematics, theta is yaw and phi is pitch.
Note 2 - yaw, pitch and roll: [illustration of the three rotation axes omitted]
Note 3 - Wikipedia on D3DX: "In 2012, Microsoft announced that D3DX would be deprecated in the Windows 8 SDK, along with other development frameworks such as XNA. The mathematical constructs of D3DX, like vectors and matrices, would be consolidated with XNAMath into a new library: DirectXMath."
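A minimal sketch of the four steps above in C++ with GLM (names are my own, using the mathematics convention: theta = yaw, phi = pitch measured from the +Y axis; angles in radians; assumes v2 != v1):
#include <glm/glm.hpp>
#include <cmath>

glm::vec3 rotateAbout(glm::vec3 v2, glm::vec3 v1, float deltaYaw, float deltaPitch)
{
    glm::vec3 d = v2 - v1;                      // 1) difference vector
    float radius = glm::length(d);              // 2) convert to spherical coordinates
    float yaw    = std::atan2(d.z, d.x);
    float pitch  = std::acos(d.y / radius);
    yaw   += deltaYaw;                          // 3) adjust the angles
    pitch += deltaPitch;
    return v1 + glm::vec3(radius * std::sin(pitch) * std::cos(yaw),  // 4) back to Cartesian,
                          radius * std::cos(pitch),                  //    offset by v1
                          radius * std::sin(pitch) * std::sin(yaw));
}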
Using a radius doesn't make sense if you are rotating an object coordinate around an arbitrary origin coordinate, because the radius is going to be constant during the rotation. To use a desired radius during the rotation, make sure the object coordinate and the origin coordinate are the desired radius apart.
Also note that you need a rotation axis to rotate around a point when using Euler angles (vs. spherical coordinates or quaternions), because otherwise it's undefined in which direction(s) you will be rotating.
If you are willing to do matrix math, here is how I do it (using pseudocode, because I only know OpenGL):
vec3 translate_around_point
(
    vec3 object_pos,
    vec3 origin_pos,
    vec3 rotation_axis,
    float rotation
)
{
    // get the difference
    vec3 difference = object_pos - origin_pos;
    // rotate the difference on the rotation axis
    mat4 model = rotate(mat4(1.0), rotation, rotation_axis);
    vec3 trans = vec3(model * vec4(difference, 1.0));
    // add the rotated translation back to the origin pos
    return origin_pos + trans;
}
Your code using this function would look like this (radius is 22.360679626464844):
vec3 object_pos(20.0, 6.0, 30.0);
vec3 orig_pos(10.0, 6.0, 10.0);
vec3 axis(0.0, 1.0, 0.0); // rotation axis is 'up' in world space
float angle = 90.0;
object_pos = translate_around_point(object_pos, orig_pos, axis, angle);
Also, if you were to run it with the object position at a greater height or with a tilted axis of rotation, the origin position may no longer appear to be at the center of the rotation. This is OK, because if the rotation doesn't rotate around part of an axis (x, y, or z), then it won't affect the translation on that part.
These days I am reading the Learning Modern 3D Graphics Programming book by Jason L. McKesson. Basically it is a book about OpenGL 3.3, and I am now at chapter 4, which is about orthographic and perspective projection.
At the end of the chapter, under the "Further Study" section, he suggests trying a few things, like implementing a variable eye point (at the beginning he used (0, 0, 0) in camera space for simplicity) and an arbitrary perspective plane location.
He says I am going to need to offset the X, Y camera-space positions of the vertices by E_x and E_y respectively.
I cannot understand this passage: how am I supposed to implement a variable eye point by modifying only the X, Y offsets?
Edit: could it be something like this?
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
smooth out vec4 theColor;
uniform vec2 offset;
uniform vec2 E;
uniform float zNear;
uniform float zFar;
uniform float frustumScale;
void main()
{
    vec4 cameraPos = position + vec4(offset.x, offset.y, 0.0, 0.0);
    vec4 clipPos;
    clipPos.xy = cameraPos.xy * frustumScale + vec4(E.x, E.y, 0.0, 0.0);
    clipPos.z = cameraPos.z * (zNear + zFar) / (zNear - zFar);
    clipPos.z += 2 * zNear * zFar / (zNear - zFar);
    clipPos.w = cameraPos.z / (-E.z);
    gl_Position = clipPos;
    theColor = color;
}
Edit2: thanks Boris, your picture helped a lot :) especially because:
it makes clear what you previously stated about thinking of E as the projection plane position rather than the eye point position
it underlines that the size of the projection plane must always be [-1, 1], a passage I read in the book without fully understanding what it meant
Just out of curiosity, why do you mention multiplying after subtracting? Is it for the same reason the book gives, that is, aspect ratio? Everything logically pushes me to do exactly the opposite, that is, first the translation (-2) and then the multiplication (/5). Or maybe by the term "scaling" the book refers to the reshape function?
Here, we are interested in computing a transformation from Camera Coordinates (CC) to Normalized Device Coordinates (NDC).
Think of E as the position of the projection plane in Camera Coordinates, instead of the position of the eye point relative to the projection plane. In Camera Coordinates, the eye point is by definition located at the origin, at least in my interpretation of what "Camera Coordinates" means: a coordinate frame centered on the point you look at the scene from. (You can mathematically define a perspective transformation centered anywhere, but then your input space is not the camera space, imho. This is what the World->Camera transformation is for, as you will see in chapter 6.)
Summary:
you are in camera space, hence your eye point is located at (0,0,0)
you are looking toward the negative Z-axis
your projection plane is parallel to the xOy plane, with a size of [-1, 1] in both directions
This is the picture here (each tick is 0.5 unit):
In this picture, you can see that the projection plane (the bottom side of the gray trapezoid) is centered at (0, 0, -1), with a size of [-1, 1] in both the X and Y directions.
Now, what is asked is, instead of choosing (0, 0, -1) for the center of this plane, to choose an arbitrary (E.x, E.y, E.z) position (assume E.z is negative). But the plane still has to be parallel to the xOy plane and have the same size.
You can see that the dimension E.xy plays a very different role than E.z, which is why E.xy will be involved in a subtraction, while E.z will be involved in a division. This is easy to see with an example:
assume zNear = -E.z (not necessarily the case, but you can in fact always change frustumScale to have an equivalent perspective satisfying this)
consider the point E (which is the center of the projection plane).
What is its coordinate in NDC space? It is (0, 0, -1) by definition. What you've done is subtract E.xy, but divide by -E.z.
Your code got this idea, but still some things are wrong:
First, you defined uniform vec2 E; instead of uniform vec3 E; (just a typo, not a big deal)
The line clipPos.xy = ...; is vec2 arithmetic. Hence, you can only multiply by scalar values (i.e., a float), or add/subtract vec2 values. vec4(E.x, E.y, 0.0, 0.0) is therefore of the incorrect type; you should use E.xy instead, which has the correct type vec2.
You should in fact subtract E.xy instead of adding it. This is easy to see in my example above.
Finally, things are more subtle ;-)
I made a picture to illustrate the modifications:
Each tick is 1 unit in this picture. Top left is your Camera Coordinate space, with zNear, zFar, and two possible projection planes displayed. In blue is the one used in the explanation and shader so far, and in red is the one you now want to use. The colored areas correspond to what should be visible on your final screen, i.e. what should end up in the cube [-1, 1]^3 of NDC space. Hence, if you use the blue projection plane, you want to obtain the space at the top right, and if you use the red projection plane, you want to obtain the space at the bottom. To do this, you can observe that you need to perform the scaling and translation in NDC space, i.e. after the perspective division! (I think what is written in the book is either incorrect, or interprets the question differently.)
Hence you want to do, in Euclidean coordinates (i.e., not homogeneous coordinates, so without the W coordinate):
clipPosEuclideanRed.xy = clipPosEuclideanBlue.xy * (-E.z) - E.xy;
clipPosEuclideanRed.z = clipPosEuclideanBlue.z;
However, because you are in homogeneous coordinates, these values are in fact:
clipPosEuclidean.xyz = clipPos.xyz / clipPos.w; // with clipPos.w = -cameraPos.z;
Hence, you have to compensate by writing:
clipPosRed.xy = clipPosBlue.xy * (-E.z) - E.xy * (-cameraPos.z);
clipPosRed.z = clipPosBlue.z;
So my solution to this problem would be to add only one line:
void main()
{
    vec4 cameraPos = position + vec4(offset.x, offset.y, 0.0, 0.0);
    vec4 clipPos;
    clipPos.xy = cameraPos.xy * frustumScale;
    // only add this line
    clipPos.xy = -clipPos.xy * E.z + E.xy * cameraPos.z;
    clipPos.z = cameraPos.z * (zNear + zFar) / (zNear - zFar);
    clipPos.z += 2 * zNear * zFar / (zNear - zFar);
    clipPos.w = -cameraPos.z;
    gl_Position = clipPos;
    theColor = color;
}