Is there a standard interpolation method for interpolation between skeletal animation keyframes? Right now I'm using glm::slerp().
Are there interpolation methods other than slerp and lerp?
According to the glm docs, glm::mix(quat1, quat2, a) does spherical linear interpolation of two quaternions, and glm::slerp(quat1, quat2, a) does "short path spherical linear interpolation" of two quaternions. What's the difference?
When in doubt, look at source code. The only difference is this part (aside from different x/y/z naming):
// If cosTheta < 0, the interpolation will take the long way around the sphere.
// To fix this, one quat must be negated.
if (cosTheta < T(0))
{
    z = -y;
    cosTheta = -cosTheta;
}
Basically they're the same, except that slerp ensures the interpolation takes the shorter path around the sphere, while mix doesn't care and may take the opposite, longer path.
There are many other interpolation methods; probably the most advanced is the Bézier curve (used by most animation software), but it requires quite a lot more memory and computational power.
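For the original keyframe question, the usual approach is exactly what you're doing: slerp the rotations (and lerp the translations) between two keyframes. A minimal sketch with glm, where the parameter names and the function name are illustrative rather than taken from any particular engine:

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Blend one bone between two keyframes; t is normalized time in [0, 1].
glm::mat4 blendKeyframes(const glm::vec3& pos0, const glm::quat& rot0,
                         const glm::vec3& pos1, const glm::quat& rot1,
                         float t)
{
    glm::vec3 pos = glm::mix(pos0, pos1, t);     // plain lerp for translation
    glm::quat rot = glm::slerp(rot0, rot1, t);   // shortest-path slerp for rotation
    return glm::translate(glm::mat4(1.0f), pos) * glm::mat4_cast(rot);
}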
I have a little "2 1/2-D" Engine I'm working on that targets multiple platforms, but currently I am upgrading my DirectX 11 version.
One of the things that is really important for my engine to do is to be able to adjust the horizon point so that the perspective below the horizon moves into the distance at a different angle than the perspective above the horizon.
In a normal 3D environment this would typically be accomplished by tilting the camera up above the horizon. However, my engine makes heavy use of 2D sprites, and tilting the camera in the traditional sense would also tilt the sprites, something I don't want to do (it ruins the 16-bit-arcade style of the effect).
I had this working at one point by manually doing the perspective divide in the CPU using a center-point that was off-center, but I'd like to do this with a special projection matrix if possible. Right now I'm using a stock matrix that uses FOV, Near-Plane, and Far-Plane arguments.
Any ideas? Is this even possible with a matrix? Isn't the perspective divide automatic in the DX11 pipeline? How do I control when or how the perspective divide is performed? Am I correct in assuming that the perspective divide cannot be accomplished with a matrix alone? It requires each vertex to be manually divided by Z, correct?
What you are looking for is an off-center perspective projection matrix: instead of an FOV and aspect ratio, you provide left/right/top/bottom as tan(angle). The result is more or less the same as with a symmetric projection matrix, with the addition of two extra non-zero values.
You are also right that the GPU is hard-wired to perform the w divide, and it is not a good idea to do it in the vertex shader: it will mess with perspective correction of the texture coordinates and with clipping (neither of which may be a big deal in the sprite special case).
You can find an example of such a matrix here: https://msdn.microsoft.com/en-us/library/windows/desktop/bb205353(v=vs.85).aspx
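For reference, that page describes D3DXMatrixPerspectiveOffCenterLH; a rough sketch of the same construction, filling a row-major, left-handed matrix from the near-plane bounds (treat the exact layout as something to verify against the docs rather than as gospel):

// Off-center perspective projection, row-major, left-handed (D3D convention).
// l, r, b, t are the view-volume bounds at the near plane zn; zf is the far plane.
void perspectiveOffCenterLH(float m[16],
                            float l, float r, float b, float t,
                            float zn, float zf)
{
    m[0]  = 2*zn/(r-l);   m[1]  = 0;            m[2]  = 0;              m[3]  = 0;
    m[4]  = 0;            m[5]  = 2*zn/(t-b);   m[6]  = 0;              m[7]  = 0;
    m[8]  = (l+r)/(l-r);  m[9]  = (t+b)/(b-t);  m[10] = zf/(zf-zn);     m[11] = 1;
    m[12] = 0;            m[13] = 0;            m[14] = zn*zf/(zn-zf);  m[15] = 0;
}

Shifting t and b by the same offset (so the frustum is no longer vertically symmetric) is what moves the horizon/vanishing point up or down, while the GPU still performs the w divide for you.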
Whether you use the fixed-function or a programmable shader pipeline, a common vertex pipeline consists of this matrix multiplication (either custom coded or behind the scenes):
Projection * Modelview * Position
Lots of tutorials note that items such as an object's rotation should go into the Modelview matrix.
I created a standard rotation matrix function based on degrees and then added to the degrees parameter the proper multiple of 90 to account for the screen's autorotation orientation. Works.
For different screen sizes (different pixel widths and heights of screen), I could also factor a Scale multiplier in there so that a Modelview matrix might incorporate a lot of these.
But what I've settled on is much more verbose matrix math, and since I'm new to this stuff, I'd appreciate feedback on whether this is smart.
I simply add independent matrices for the screensize scaling as well as the screen orientation, in addition to object manipulation such as scale and rotation. I end up with this:
Projection * ScreenRotation * ScreenScale * Translate * Rotate * Scale * Position
The order of some of these is interchangeable; Rotate and Scale could be switched, I find.
This gives me more fine-tuned control and segregation of code so I can concentrate on just an object's rotation without thinking of the screen's orientation at the same time, for example.
Is this a common or acceptable strategy for organizing matrix math? It seems to work fine, but are there any pitfalls to such verbosity?
The main issue with such verbosity is that it wastes precious computation cycles if performed on the GPU. Each matrix would be supplied as a uniform, forcing the GPU to redo the whole chain of multiplications for each and every vertex, even though the result is constant across the whole draw call. The nice thing about matrices is that a single matrix can hold an entire chain of transformations, and the transformation can then be applied with a single matrix-vector multiplication.
The typical stanza
Projection · Modelview · Position
of using two matrices comes from the fact that one usually needs the intermediary result of Modelview · Position for some calculations (lighting in eye space, for instance). In theory you could contract the whole thing down to
ProjectionViewModel · Position
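In code that simply means doing the multiplication once on the CPU and uploading a single matrix; a minimal sketch with glm and OpenGL, where the uniform name "mvp" is illustrative:

#include <GL/gl.h>                 // or your extension loader of choice
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Combine the whole chain once per object on the CPU and upload one matrix.
// Call with the shader program already bound (glUseProgram).
void uploadMVP(GLuint program, const glm::mat4& projection, const glm::mat4& modelview)
{
    glm::mat4 mvp = projection * modelview;   // one multiplication per draw, not per vertex
    glUniformMatrix4fv(glGetUniformLocation(program, "mvp"),
                       1, GL_FALSE, glm::value_ptr(mvp));
}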
Now you're proposing this matrix expression
Projection * ScreenRotation * ScreenScale * Translate * Rotate * Scale * Position
Ugh… this whole thing is the pinnacle of inflexibility. You want flexibility? This thing is rigid: what if you want to apply some nonuniform scaling onto already-rotated geometry? The order of operations in matrix math matters and you cannot freely mix them. Assume you're drawing a sphere:
Rotate(45, 0, 0, 1) · Scale(1,2,1) · SphereVertex
looks totally different than
Scale(1,2,1) · Rotate(45, 0, 0, 1) · SphereVertex
Screen scale and rotation can, and should be, applied directly in the Projection matrix; there is no need for extra matrices. The key understanding is that you can compose every linear transformation chain into a single matrix. And for practical reasons you want to apply screen pixel aspect scaling as the last step in the chain, and screen rotation as the second-to-last step.
So you can build your projection matrix not in the shader, but in your display routine's frame setup code. Assuming you're using my linmath.h, it would look like the following:
mat4x4 projection;
mat4x4_set_identity(projection);
mat4x4_mul_scale_aniso(projection, …);
mat4x4_mul_rotate_Z(projection, …);
if(using_perspective)
    mat4x4_mul_frustum(projection, …);
else
    mat4x4_mul_ortho(projection, …);
The resulting matrix projection you'd then set as the projection matrix uniform.
What's the correct/best way of constraining a 3D rotation (using Euler angles and/or quaternions)?
It seems like there's something wrong with my way of doing it. I'm applying the rotations to bones in a skeletal hierarchy for animation, and the bones sometimes visibly "jump" into the wrong orientation, and the individual Euler components are wrapping around to the opposite end of their ranges.
I'm using Euler angles to represent the current orientation, converting to quaternions to do rotations, and clamping each Euler angle axis independently. Here's C++ pseudo-code showing basically what I'm doing:
Euler min = ...;
Euler max = ...;
Quat rotation = ...;
Euler eCurrent = ...;
// do rotation
Quat qCurrent = eCurrent.toQuat();
qCurrent = qCurrent * rotation;
eCurrent = qCurrent.toEuler();
// constrain
for (unsigned int i = 0; i < 3; i++)
    eCurrent[i] = clamp(eCurrent[i], min[i], max[i]);
One problem with Euler angles is that there are multiple ways to represent the same rotation, so you can easily create a sequence of rotations that are smooth, but the angles representing that rotation may jump around. If the angles jump in and out of the constrained range, then you will see effects like you are describing.
Imagine that only the X rotation was involved, and you had constrained the X rotation to be between 0 and 180 degrees. Also imagine that your function that converts the quaternion to Euler angles gave angles from -180 to 180 degrees.
You then have this sequence of rotations:
True rotation    After conversion    After constraint
179              179                 179
180              180                 180
181              -179                0
You can see that even though the rotation is changing smoothly, the result will suddenly jump from one side to the other because the conversion function forces the result to be represented in a certain range.
When you are converting the quaternion to Euler angles, find the angles that are closest to the previous result. For example:
eCurrent = closestAngles(qCurrent.toEuler(),eCurrent);
eConstrained = clampAngles(eCurrent,min,max);
Remember the eCurrent values for next time, and apply the eConstrained rotations to your skeleton.
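The closestAngles helper is left to you; a minimal sketch of one way it might work, assuming angles in degrees and reusing the Euler type from the question (purely illustrative):

// Shift 'angle' by multiples of 360 degrees so it ends up as close as
// possible to 'previous', removing the -180/+180 wrap-around jumps.
float closestAngle(float angle, float previous)
{
    while (angle - previous > 180.0f)  angle -= 360.0f;
    while (previous - angle > 180.0f)  angle += 360.0f;
    return angle;
}

// Apply the same unwrapping to each Euler component.
Euler closestAngles(const Euler& current, const Euler& previous)
{
    Euler result;
    for (unsigned int i = 0; i < 3; i++)
        result[i] = closestAngle(current[i], previous[i]);
    return result;
}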
The problem here is that the constraints you are applying have no relation to the rotation being applied. From a conceptual point of view, this is what you are trying to achieve:
assume a bone is in an unconstrained state.
apply rotation
has the bone exceeded its constraints? If yes, rotate it back to where it no longer violates them
Your code that clamps the Euler rotations is the part where you rotate the bone back. However, this code ignores the original bone rotation, so you will see odd behavior such as the snapping you are seeing.
A simple way to work with this is to do this instead:
assume a bone is in an unconstrained state
apply rotation
test if bone exceeded constraints
if yes, we need to find where the constraint stops the movement:
1. halve the rotation and apply it in reverse
2. is the bone still exceeding the constraints? If yes, go to step 1
3. if no, halve the rotation and apply it in the forward direction, then go to step 2
keep doing that until you are within some tolerance of your constraining angles
Now this will work, but because your rotation quaternion is applied to all angles at once, the rotation will stop when any one of those constraints is met, even if there is freedom somewhere else.
If instead you apply the rotations independently of each other, then you will be able to reliably use your clamping (or the above technique) to honor the constraints, and also rotate as closely to your target as you can.
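A minimal sketch of that halving search, written as a binary search over how much of the rotation can still be applied; glm is assumed, and exceedsConstraints() is a hypothetical test you would supply:

#include <glm/gtc/quaternion.hpp>

// Hypothetical: returns true if 'q' violates the joint's limits.
bool exceedsConstraints(const glm::quat& q);

// Apply as much of 'rotation' to 'current' as the constraints allow,
// by binary-searching the fraction of the rotation that is still legal.
glm::quat applyConstrained(glm::quat current, glm::quat rotation)
{
    if (!exceedsConstraints(current * rotation))
        return current * rotation;             // fully legal, nothing to do

    const glm::quat identity(1.0f, 0.0f, 0.0f, 0.0f);
    float lo = 0.0f, hi = 1.0f;                // legal / illegal fractions of the rotation
    for (int i = 0; i < 16; ++i)               // 16 halvings gives plenty of tolerance
    {
        float mid = 0.5f * (lo + hi);
        glm::quat partial = glm::slerp(identity, rotation, mid);
        if (exceedsConstraints(current * partial))
            hi = mid;                          // went too far, back off
        else
            lo = mid;                          // still legal, push further
    }
    return current * glm::slerp(identity, rotation, lo);
}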
In my OpenGL application I need to use ArcBall rotation to rotate objects using mouse.
I realized that I have to go with quaternions after reading this article - http://www.gamedev.net/page/resources/_/technical/math-and-physics/quaternion-powers-r1095
And I found an easy to use implementation at here - http://www.codeproject.com/KB/openGL/virtualtrackball.aspx
But my problem is that I also need to animate my object between two saved states.
That is -
State(1) = (Position X1, Position Y1, Position Z1, Rotation 1);
State(2) = (Position X2, Position Y2, Position Z2, Rotation 2);
*These 'Rotations' are rotation matrices
And the animation is done in n steps.
Something like shown in this video - http://www.youtube.com/watch?v=rrUCBOlJdt4
If I were using 3 separate angles for the three axes (roll, pitch, yaw), I could easily interpolate the angles.
But since ArcBall uses a rotation matrix, how can I interpolate the rotations between State 1 and State 2?
Any suggestions?
Use quaternions (SLERP). Neither rotation matrices nor Euler angles are appropriate for interpolation.
See 45:05 here (David Sachs, Google Tech Talk).
See also Interpolating between rotation matrices
Use either a matrix or a quaternion for the rotation representation internally. With roll/pitch/yaw you could interpolate the angles in theory, but this will not work every time - read up on gimbal lock.
With quaternions the rotation interpolation is easy - just interpolate the individual coordinates (just as you would with r/p/y angles), normalizing when necessary. You'd then have to adjust your rendering function to work with quaternions (but I assume you already did that since you mention quaternions yourself).
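That component-wise approach is often called nlerp (normalized lerp); a minimal sketch with glm, assuming the two saved rotations have already been converted to quaternions:

#include <glm/gtc/quaternion.hpp>

// Component-wise interpolation followed by normalization ("nlerp").
// Negating one input when the dot product is negative keeps the blend
// on the shorter arc, the same trick slerp uses.
glm::quat nlerp(glm::quat a, glm::quat b, float t)
{
    if (glm::dot(a, b) < 0.0f)
        b = -b;
    glm::quat q(a.w + (b.w - a.w) * t,
                a.x + (b.x - a.x) * t,
                a.y + (b.y - a.y) * t,
                a.z + (b.z - a.z) * t);
    return glm::normalize(q);
}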
With matrices the interpolation is not so nice; I struggled with it several years back and all I can remember is that I finally decided to go with quaternions. So I can't advise on this, sorry.
I have a virtual landscape with the ability to walk around in first-person. I want to be able to walk up any slope if it is 45 degrees or less. As far as I know, this involves translating your current position out x units then finding the distance between the translated point and the ground. If that distance is x units or more, the user can walk there. If not, the user cannot. I have no idea how to find the distance between one point and the nearest point in the negative y direction. I have programmed this in Java3D, but I do not know how to program this in OpenGL.
Barking this problem at OpenGL is barking up the wrong tree: OpenGL's sole purpose is drawing nice pictures to the screen. It's not a math library!
Depending on your demands there are several solutions. This is how I'd tackle the problem: the normals you calculate for proper shading give you the slope at each point. Say your heightmap (= terrain) is in the XY plane and your gravity vector is g = -Z; then the normal force is terrain_normal(x,y) · g. The normal force is what "pushes" your feet against the ground. Without sufficient normal force, there's not enough friction to convey your muscles' force into movement across the ground. If you look at the normal force formula you can see that the more the angle between g and terrain_normal(x,y) deviates, the smaller the normal force.
So in your program you could simply test whether the normal force exceeds some threshold; more correctly, you'd project the exerted friction force onto the terrain and use that as the acceleration vector.
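For the simple threshold variant, a sketch of the 45-degree test, assuming a normalized, upward-pointing terrain normal and Z as the up axis (glm used purely for illustration):

#include <glm/glm.hpp>

// True if the slope at this point is 45 degrees or less, i.e. walkable.
// 'normal' must be the normalized terrain normal.
bool isWalkable(const glm::vec3& normal)
{
    const glm::vec3 up(0.0f, 0.0f, 1.0f);   // matches the heightmap-in-XY convention above
    const float cos45 = 0.70710678f;        // cos(45°); flatter ground gives a larger dot product
    return glm::dot(normal, up) >= cos45;
}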
If you just have a regular triangulated heightmap you can use barycentric coordinates to interpolate Z values at a given (X,Y) position.
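And a minimal sketch of that barycentric lookup, assuming you already know which triangle (a, b, c) of the heightmap contains the query point (again glm, purely illustrative):

#include <glm/glm.hpp>

// Interpolate the Z (height) of point p = (x, y) inside the triangle
// with vertices a, b, c, using 2D barycentric coordinates.
float heightAt(const glm::vec2& p,
               const glm::vec3& a, const glm::vec3& b, const glm::vec3& c)
{
    glm::vec2 v0(b.x - a.x, b.y - a.y);
    glm::vec2 v1(c.x - a.x, c.y - a.y);
    glm::vec2 v2(p.x - a.x, p.y - a.y);

    float denom = v0.x * v1.y - v1.x * v0.y;          // twice the triangle's signed area
    float w1 = (v2.x * v1.y - v1.x * v2.y) / denom;   // weight of b
    float w2 = (v0.x * v2.y - v2.x * v0.y) / denom;   // weight of c
    float w0 = 1.0f - w1 - w2;                        // weight of a

    return w0 * a.z + w1 * b.z + w2 * c.z;
}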