Calculate rotation matrix to transform one vector to another? - c++

There are several questions about this on Stack Overflow, but most use quaternions and I am looking for something that does not. This question is all over the web, yet it is surprisingly hard to find a straightforward code example. I am looking for a C/C++ or GLSL/Metal solution for the rotation matrix that transforms one vector into another. I found this page, which seems to answer the question: http://www.j3d.org/matrix_faq/matrfaq_latest.html#Q38
However, it is not working for me in practice. With this I am getting the correct transformations at the ninety-degree angles (i.e. transforming (0,0,1) to (0,1,0)), but in between it twists in the wrong direction. Here is the code I am using -- does anyone see what is wrong with it?
In this case I am always transforming from a fixed start normal of (0,0,1).
// Matrix to rotate from (0,0,1) to the specified direction,
// built from the axis-angle (Rodrigues) formula.
float4x4 directionMatrix(float3 direction) {
    float3 normal = float3(0.0, 0.0, 1.0);
    float3 V = normalize(cross(normal, direction)); // rotation axis
    float phi = acos(dot(normal, direction));       // rotation angle
    float rcos = cos(phi);
    float rsin = sin(phi);
    float4x4 M;
    M[0].x = rcos + V.x * V.x * (1.0 - rcos);
    M[1].x = V.z * rsin + V.y * V.x * (1.0 - rcos);
    M[2].x = -V.y * rsin + V.z * V.x * (1.0 - rcos);
    M[3].x = 0.0;
    M[0].y = -V.z * rsin + V.x * V.y * (1.0 - rcos);
    M[1].y = rcos + V.y * V.y * (1.0 - rcos);
    M[2].y = -V.x * rsin + V.z * V.y * (1.0 - rcos);
    M[3].y = 0.0;
    M[0].z = V.y * rsin + V.x * V.z * (1.0 - rcos);
    M[1].z = -V.x * rsin + V.y * V.z * (1.0 - rcos);
    M[2].z = rcos + V.z * V.z * (1.0 - rcos);
    M[3].z = 0.0;
    M[0].w = 0.0;
    M[1].w = 0.0;
    M[2].w = 0.0;
    M[3].w = 1.0;
    return M;
}
I tried transposing this, in case the original article used the opposite row/column-major convention from mine, but it did not help.
EDIT: The problem here turned out to be that this code works for a left-handed coordinate system, while mine is right-handed. So to get this to work you just have to multiply phi by -1. I also edited the code to add normalization of the cross vector, as suggested in the comments.
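For reference, a minimal C++ sketch of the same construction using GLM (my translation, not the original Metal code). Note that glm::rotate already builds the standard right-handed rotation, so the angle is used as-is here; the multiply-by-minus-one fix applies to the FAQ matrix as transcribed above.
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Matrix rotating (0,0,1) onto `direction` in a right-handed system.
// Degenerate when direction is (anti)parallel to (0,0,1): the cross
// product vanishes, so a real implementation should special-case that.
glm::mat4 directionMatrix(glm::vec3 direction) {
    const glm::vec3 normal(0.0f, 0.0f, 1.0f);
    direction = glm::normalize(direction);
    glm::vec3 axis = glm::normalize(glm::cross(normal, direction));
    float phi = std::acos(glm::clamp(glm::dot(normal, direction), -1.0f, 1.0f));
    return glm::rotate(glm::mat4(1.0f), phi, axis);
}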

Related

GLSL: Sample from previous output and not texture2D

I'm trying to write a custom shader where I first blur the texture and then run Sobel edge finding.
I've got Sobel running OK via the following script:
vec4 colorSobel = texture2D(texture, uv);
// coef and secondary are the Sobel kernel weights, defined elsewhere.
float bottomLeftIntensity  = texture2D(texture, uv + vec2(-0.0015625,  0.0020833)).r;
float topRightIntensity    = texture2D(texture, uv + vec2( 0.0015625, -0.0020833)).r;
float topLeftIntensity     = texture2D(texture, uv + vec2(-0.0015625, -0.0020833)).r;
float bottomRightIntensity = texture2D(texture, uv + vec2( 0.0015625,  0.0020833)).r;
float leftIntensity        = texture2D(texture, uv + vec2(-0.0015625, 0)).r;
float rightIntensity       = texture2D(texture, uv + vec2( 0.0015625, 0)).r;
float bottomIntensity      = texture2D(texture, uv + vec2(0,  0.0020833)).r;
float topIntensity         = texture2D(texture, uv + vec2(0, -0.0020833)).r;
float h = -secondary * topLeftIntensity - coef * topIntensity - secondary * topRightIntensity
        + secondary * bottomLeftIntensity + coef * bottomIntensity + secondary * bottomRightIntensity;
float v = -secondary * bottomLeftIntensity - coef * leftIntensity - secondary * topLeftIntensity
        + secondary * bottomRightIntensity + coef * rightIntensity + secondary * topRightIntensity;
float mag = length(vec2(h, v));
// alpha values removed atm, so both branches are currently identical
if (mag < 0.5) {
    colorSobel.rgb *= (1.0 - 1.0);
    colorSobel.r += 0.0 * 1.0;
    colorSobel.g += 0.0 * 1.0;
    colorSobel.b += 0.0 * 1.0;
    colorSobel.rgb += 1.0 * mag;
} else {
    colorSobel.rgb *= (1.0 - 1.0);
    colorSobel.r += 0.0 * 1.0;
    colorSobel.g += 0.0 * 1.0;
    colorSobel.b += 0.0 * 1.0;
    colorSobel.rgb += 1.0 * mag;
}
gl_FragColor = colorSobel;
However, I know it works by sampling the source texture via texture2D.
If I first manipulate the output via a simple script such as this one, which reduces the colours:
vec4 bg = texture2D(texture, uv);
gl_FragColor = vec4(bg.rgb, 1.0);
gl_FragColor.r = floor(gl_FragColor.r * 0.5) / 0.5;
gl_FragColor.g = floor(gl_FragColor.g * 0.5) / 0.5;
gl_FragColor.b = floor(gl_FragColor.b * 0.5) / 0.5;
The output still samples from the texture and ignores the first paint.
Is there a way to sample from the color output rather than using texture2D?
The reason I'm asking is that I'm chaining my shaders at runtime depending on user interaction.
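For what it's worth, the usual way to feed one pass's output into the next is to render each intermediate pass into an offscreen framebuffer and bind its colour attachment as the input texture of the following pass ("ping-pong" rendering); a fragment shader cannot read the colour already written to the target it is currently rendering into. A rough C++/OpenGL sketch of the idea -- fbo, tex, passes, sceneTexture and drawFullscreenQuad() are hypothetical names, and all the FBO/texture setup is omitted:
// Hypothetical state: two FBOs with colour textures attached, the scene
// texture, the chained shader programs, and a fullscreen-quad helper.
extern GLuint fbo[2], tex[2];
extern GLuint sceneTexture;
extern std::vector<GLuint> passes;
void drawFullscreenQuad();

void runShaderChain() {
    int dst = 0;
    glBindTexture(GL_TEXTURE_2D, sceneTexture); // first pass samples the scene
    for (size_t i = 0; i < passes.size(); ++i) {
        bool last = (i + 1 == passes.size());
        // Intermediate passes render into an FBO; the last pass to the screen.
        glBindFramebuffer(GL_FRAMEBUFFER, last ? 0 : fbo[dst]);
        glUseProgram(passes[i]);
        drawFullscreenQuad(); // texture2D() in the shader reads the bound texture
        glBindTexture(GL_TEXTURE_2D, tex[dst]); // next pass samples this output
        dst = 1 - dst; // ping-pong so we never read and write the same target
    }
}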

Most accurate way to calculate view frustum corners in world space

I'm trying to calculate the view frustum corners in world space. I've implemented it using the FOV, the width/height of the near and far planes, and some vector math.
However, a lot of examples simply state that you can multiply an NDC corner like (1,1,1) by the inverse viewProjection matrix. But when I do this I get somewhat different results. This is the code I'm using right now to test things:
float nearHeight = 2 * tan(mFOV / 2) * mNear;
float nearWidth = mNear * mRatio;
float farHeight = 2 * tan(mFOV / 2) * mFar;
float farWidth = mFar * mRatio;
glm::vec3 fc = mPos + mFront * mFar;
glm::vec3 nc = mPos + mFront * mNear;
mFrustum.frustumCorners[0] = fc + (mUp * farHeight / 2.0f) - (mRight * farWidth / 2.0f);
mFrustum.frustumCorners[1] = fc + (mUp * farHeight / 2.0f) + (mRight * farWidth / 2.0f);
mFrustum.frustumCorners[2] = fc - (mUp * farHeight / 2.0f) - (mRight * farWidth / 2.0f);
mFrustum.frustumCorners[3] = fc - (mUp * farHeight / 2.0f) + (mRight * farWidth / 2.0f);
mFrustum.frustumCorners[4] = nc + (mUp * nearHeight / 2.0f) - (mRight * nearWidth / 2.0f);
mFrustum.frustumCorners[5] = nc + (mUp * nearHeight / 2.0f) + (mRight * nearWidth / 2.0f);
mFrustum.frustumCorners[6] = nc - (mUp * nearHeight / 2.0f) - (mRight * nearWidth / 2.0f);
mFrustum.frustumCorners[7] = nc - (mUp * nearHeight / 2.0f) + (mRight * nearWidth / 2.0f);
glm::vec4 test(1.0f, 1.0f, 1.0f,1.0f);
glm::vec4 test2(-1.0f, -1.0f, -1.0f, 1.0f);
glm::mat4 testingMatrix = glm::inverse(mProjectionMatrix * getViewMatrix());
test = testingMatrix*test;
test2 = testingMatrix*test2;
test2.x /= test2.w;
test2.y /= test2.w;
test2.z /= test2.w;
test.x /= test.w;
test.y /= test.w;
test.z /= test.w;
Now both of these approaches give an accurate z value for [near, far] = [1, 10000], but the x values are off by quite a bit while the y values are pretty much the same. I was just wondering which way is the most accurate one?
Inverse viewProjection
Regular calculation
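For what it's worth, the x discrepancy is consistent with how the widths are computed above: for a symmetric frustum, a plane's width is its height times the aspect ratio, not the plane's distance times the aspect ratio. A sketch of that correction, reusing the member names from the question:
// Width of each frustum plane is its height times the aspect ratio.
float nearHeight = 2.0f * tan(mFOV / 2.0f) * mNear;
float nearWidth = nearHeight * mRatio;
float farHeight = 2.0f * tan(mFOV / 2.0f) * mFar;
float farWidth = farHeight * mRatio;
With that fixed, both routes should agree to within floating-point error; the inverse view-projection route has the advantage that it can never drift out of sync with the matrix actually used for rendering.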

Projection in OpenGL (bug with z-axis)

I don't know why, but it seems my z-axis is bugged (it seems to be doubling the value or something).
This is supposed to be a cube
https://scontent-mad1-1.xx.fbcdn.net/hphotos-xtp1/v/t34.0-12/11998230_879538255460372_61658668_n.jpg?oh=dd08fee0a66e37bf8f3aae2fba107fa1&oe=55F6170C
However, it seems to be "bigger in depth" than it should, which seems wrong.
My pr_Matrix:
mat4 mat4::prespective(float fov, float aspectRatio, float near, float far) {
    mat4 result;
    float yScale = 1.0f / tan(toRadians(fov / 2.0f));
    float xScale = yScale / aspectRatio;
    float frustumLength = far - near;
    result.elements[0 + 0 * 4] = xScale;
    result.elements[1 + 1 * 4] = yScale;
    result.elements[2 + 2 * 4] = -(far + near) / frustumLength;
    result.elements[3 + 2 * 4] = -1.0f;
    result.elements[2 + 3 * 4] = -(2.0f * far * near) / frustumLength;
    return result;
}
My ml_Matrix:
maths::mat4 ProjectionMatrix() { // return by value; a reference to a local would dangle
    maths::mat4 m_ProjM = maths::mat4::identity();
    m_ProjM *= maths::mat4::translation(m_Position);
    m_ProjM *= maths::mat4::rotation(m_Rotation.x, maths::vec3(1, 0, 0));
    m_ProjM *= maths::mat4::rotation(m_Rotation.y, maths::vec3(0, 1, 0));
    m_ProjM *= maths::mat4::rotation(m_Rotation.z, maths::vec3(0, 0, 1));
    maths::mat4 scale_matrix = maths::mat4(m_Scale);
    scale_matrix.elements[3 + 3 * 4] = 1.0f;
    m_ProjM *= scale_matrix;
    return m_ProjM;
}
My vw_matrix (camera):
void update() {
    maths::mat4 newMatrix = maths::mat4::identity();
    newMatrix *= maths::mat4::rotation(m_Pitch, maths::vec3(1, 0, 0));
    newMatrix *= maths::mat4::rotation(m_Yaw, maths::vec3(0, 1, 0));
    newMatrix *= maths::mat4::translation(maths::vec3(-m_Pos.x, -m_Pos.y, -m_Pos.z));
    m_ViewMatrix = newMatrix;
}
My matrix multiplication in the GLSL code:
vec4 worldPosition = ml_matrix * vec4(position, 1.0);
vec4 positionRelativeToCamera = vw_matrix * worldPosition;
gl_Position = pr_matrix * positionRelativeToCamera;
Edit: I think I got it! (I will double-check the maths once I get time.) Most tutorials (and so the source of my code) use a mat4::prespective(float fov, float aspectRatio, float near, float far); what they don't say is that "fov" means "fovy" (vertical field of view). The results now replicate the "usual fov in games" with this simple change:
float xScale = 1.0f / tan(toRadians(fov/2.0f));
float yScale = xScale * aspectRatio;
Thanks for pointing it out as a FOV problem, Henk De Boer.
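For reference, the exact relationship between the two conventions for a symmetric frustum, with aspect = width / height, is tan(fovy/2) = tan(fovx/2) / aspect -- which is exactly why scaling by aspectRatio above reproduces the standard matrix. A small C++ sketch (the helper name is mine):
#include <cmath>

// Convert a horizontal field of view to the equivalent vertical one,
// for a symmetric frustum with aspect = width / height.
float fovyFromFovx(float fovxRadians, float aspect) {
    return 2.0f * std::atan(std::tan(fovxRadians / 2.0f) / aspect);
}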
This is simply a result of FOV; the FOV you supply greatly alters the way an image looks, and it's a matter of personal taste what you choose. To get a natural-feeling scene anything between 45 and 60 is good, but you can push this to make the scene feel more immediate or the action more fast paced. The image originally attached here (no longer available) showed the exact same geometry twice, with the dock to the right protruding much further into the viewport at a 90-degree FOV.
My guess would be that in declaring:
mat4 result;
you're invoking a default constructor that initialises the matrix to identity.
Then, in the subsequent code that sets up the matrix, you need a call to:
result.elements[3 + 3 * 4] = 0.0f;
Also, for what it's worth, in my code the lines:
result.elements[3 + 2 * 4] = -1.0f;
result.elements[2 + 3 * 4] = -(2.0f * far * near) / frustumLength;
are not negated. TBH, I'm not sure if that's anything to worry about.
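Putting that together, a sketch of the corrected function (my reconstruction, not the poster's final code), keeping the question's elements[row + col * 4] column-major layout and assuming the default constructor really does produce identity:
mat4 mat4::prespective(float fov, float aspectRatio, float near, float far) {
    mat4 result; // identity
    float yScale = 1.0f / tan(toRadians(fov / 2.0f));
    float xScale = yScale / aspectRatio;
    float frustumLength = far - near;
    result.elements[0 + 0 * 4] = xScale;
    result.elements[1 + 1 * 4] = yScale;
    result.elements[2 + 2 * 4] = -(far + near) / frustumLength;
    result.elements[3 + 2 * 4] = -1.0f;
    result.elements[2 + 3 * 4] = -(2.0f * far * near) / frustumLength;
    result.elements[3 + 3 * 4] = 0.0f; // overwrite the identity's trailing 1
    return result;
}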

Rotate quad made in geometry shader

I'm drawing a quad using a geometry shader, but I can't figure out how to rotate it by an angle.
void main(void)
{
    float scaleX = 2.0f / u_resolution.x;
    float scaleY = 2.0f / u_resolution.y;
    float nx = (u_position.x * scaleX) - 1.0f;
    float ny = -(u_position.y * scaleY) + 1.0f;
    float nw = u_size.x * scaleX;
    float nh = u_size.y * scaleY;
    gl_Position = vec4(nx + nw, ny, 0.0, 1.0);
    texcoord = vec2(1.0, 0.0);
    EmitVertex();
    gl_Position = vec4(nx, ny, 0.0, 1.0);
    texcoord = vec2(0.0, 0.0);
    EmitVertex();
    gl_Position = vec4(nx + nw, ny - nh, 0.0, 1.0);
    texcoord = vec2(1.0, 1.0);
    EmitVertex();
    gl_Position = vec4(nx, ny - nh, 0.0, 1.0);
    texcoord = vec2(0.0, 1.0);
    EmitVertex();
    EndPrimitive();
}
Should I use a rotation matrix or sin and cos? I'm not too great at math.
You don't have to use matrices, but you do need those sine and cosine functions. Here is a way to rotate a 3D position about an arbitrary axis by some angle (converted from degrees to radians in the code below):
// This is the 3D position that we want to rotate:
vec3 p = position.xyz;
// Specify the axis to rotate about:
float x = 0.0;
float y = 0.0;
float z = 1.0;
// Specify the angle in radians:
float angle = 90.0 * 3.14159265 / 180.0; // 90 degrees, CCW
vec3 q;
q.x = p.x * (x*x * (1.0 - cos(angle)) + cos(angle))
    + p.y * (x*y * (1.0 - cos(angle)) + z * sin(angle))
    + p.z * (x*z * (1.0 - cos(angle)) - y * sin(angle));
q.y = p.x * (y*x * (1.0 - cos(angle)) - z * sin(angle))
    + p.y * (y*y * (1.0 - cos(angle)) + cos(angle))
    + p.z * (y*z * (1.0 - cos(angle)) + x * sin(angle));
q.z = p.x * (z*x * (1.0 - cos(angle)) + y * sin(angle))
    + p.y * (z*y * (1.0 - cos(angle)) - x * sin(angle))
    + p.z * (z*z * (1.0 - cos(angle)) + cos(angle));
gl_Position = vec4(q, 1.0);
If you know that you are rotating about some standard x-, y-, or z-axis, you can simplify the "algorithm" a lot by defining it explicitly for that standard axis.
Notice how we rotate about the z-axis in the above code. For example, rotation about the x-axis would be (x,y,z) = (1,0,0). You could set the variables to anything, but the axis should be a unit vector, or the formula no longer produces a pure rotation.
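For instance, plugging the z-axis (x,y,z) = (0,0,1) into the formula above collapses it to the familiar 2D rotation. A small C++ sketch of the same arithmetic (the function name is mine):
#include <cmath>

// Rotation about the z-axis, specialised from the general axis-angle
// formula above with (x, y, z) = (0, 0, 1); z passes through unchanged.
void rotateAboutZ(float angle, const float p[3], float q[3]) {
    q[0] =  p[0] * std::cos(angle) + p[1] * std::sin(angle);
    q[1] = -p[0] * std::sin(angle) + p[1] * std::cos(angle);
    q[2] =  p[2];
}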
Then again, you might as well use matrices:
vec3 n = vec3(0.0, 0.0, 1.0); // the axis to rotate about
// Specify the rotation transformation matrix:
mat3 m = mat3(
    n.x*n.x * (1.0f - cos(angle)) + cos(angle),       // column 1 of row 1
    n.x*n.y * (1.0f - cos(angle)) + n.z * sin(angle), // column 2 of row 1
    n.x*n.z * (1.0f - cos(angle)) - n.y * sin(angle), // column 3 of row 1
    n.y*n.x * (1.0f - cos(angle)) - n.z * sin(angle), // column 1 of row 2
    n.y*n.y * (1.0f - cos(angle)) + cos(angle),       // ...
    n.y*n.z * (1.0f - cos(angle)) + n.x * sin(angle), // ...
    n.z*n.x * (1.0f - cos(angle)) + n.y * sin(angle), // column 1 of row 3
    n.z*n.y * (1.0f - cos(angle)) - n.x * sin(angle), // ...
    n.z*n.z * (1.0f - cos(angle)) + cos(angle)        // ...
);
// Apply the rotation to our 3D position:
vec3 q = m * p;
gl_Position = vec4(q, 1.0);
Notice how the elements of the matrix are laid out such that we first complete the first column, then the second, and so on: the matrix is in column-major order. This matters when you transfer a matrix written in mathematical notation into a data type in your code. You basically need to transpose it (flip the elements across the diagonal) in order to use it in your code. Also, we are multiplying a matrix on the left with a column vector on the right.
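In C++ terms, column-major storage means element (row r, column c) of a 4x4 matrix kept in a flat array lives at index r + c * 4 -- the same elements[row + col * 4] layout as the projection-matrix code in the previous question. A one-line sketch (names are illustrative):
// Column-major 4x4 matrix in a flat array: (row r, column c) -> r + c * 4.
float at(const float m[16], int r, int c) { return m[r + c * 4]; }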
If you need a 4-by-4 homogeneous matrix, then you would simply add an extra column and a row into the above matrix, such that both the new rightmost column and bottommost row would consist of [0 0 0 1]:
vec4 p = position.xyzw; // new dimension
vec3 n = ...;           // same
mat4 m = mat4(          // new dimension
    ...
    0.0,
    ...
    0.0,
    ...
    0.0,
    0.0, 0.0, 0.0, 1.0
);
vec4 q = m * p;
gl_Position = q;
Again, notice the order of the multiplication when applying the rotation, it is important because it affects the end result. What happens in the multiplication is basically that a new vector is formed by calculating the dot-product of the position vector and each column in the matrix; each coordinate in the resulting vector is the dot-product of the original vector and a single column in the matrix (see the first code example.)
The expression
q.x = p.x * (x*x * (1.0 - cos(angle)) + cos(angle))
    + p.y * (x*y * (1.0 - cos(angle)) + z * sin(angle))
    + p.z * (x*z * (1.0 - cos(angle)) - y * sin(angle));
is the same as:
q.x = dot(p, m[0]);
One could even compose the matrix with itself: m = m*m; with the 90-degree angle used above, that would result in a 180-degree counterclockwise rotation matrix.

Optimize Normalmap rotation of a vector

I use a quaternion to rotate the normal vector of a mesh into the direction of the normal-map vector. So I create a quaternion from the angle between the normal of the mesh and (0,0,1). Shortened from the methods in libgdx, creation of the quaternion looks like this:
vec4 createQuaternion(vec3 normal) {
    float l_ang = acos(clamp(normal.z, -CONST_ONE, CONST_ONE)) / CONST_TWO;
    float l_sin = sin(l_ang);
    return normalize(vec4(-normal.y * l_sin, normal.x * l_sin, CONST_ZERO, cos(l_ang)));
}
With this quaternion I can rotate the normal of the mesh in the direction of the normal map. For that I use a method I've seen in the libgdx Quaternion (where v is the normal from the normal map and m is the quaternion):
vec3 rotateNormal(vec3 v, vec4 m) {
    vec4 tmp1 = vec4(v, CONST_ZERO);
    vec4 tmp2 = vec4(m);
    // conjugate
    tmp2.x = -tmp2.x;
    tmp2.y = -tmp2.y;
    tmp2.z = -tmp2.z;
    tmp2 = mulLeft(tmp1, tmp2);
    tmp1 = mulLeft(m, tmp2);
    v.x = tmp1.x;
    v.y = tmp1.y;
    v.z = tmp1.z;
    return v;
}
The mulLeft method looks like this:
vec4 mulLeft(vec4 q, vec4 a) {
    float newX = q.w * a.x + q.x * a.w + q.y * a.z - q.z * a.y;
    float newY = q.w * a.y + q.y * a.w + q.z * a.x - q.x * a.z;
    float newZ = q.w * a.z + q.z * a.w + q.x * a.y - q.y * a.x;
    float newW = q.w * a.w - q.x * a.x - q.y * a.y - q.z * a.z;
    a.x = newX;
    a.y = newY;
    a.z = newZ;
    a.w = newW;
    return a;
}
To use it in the shader I just call:
normal = rotateNormal(normalMap, createQuaternion(normalMesh));
and it works as expected.
The only thing I am concerned about is that I can imagine there is a shorter way to write this, especially the mulLeft method. Is there?
What do you think about the whole approach of turning the vector by a quaternion instead of using NTB?
NTB looks like a bit too much calculation while animating the mesh.
Edit:
This is a test with my source code.
This is a test with Tenfour's suggestion.
Edit2:
I shortened the whole thing to:
vec3 rotateNormal(vec3 normalMap, vec3 normal) {
    // create quaternion from cross(normal, vec3(0,0,1))
    float l_ang = acos(clamp(normal.z, -CONST_ONE, CONST_ONE)) / CONST_TWO;
    float l_sin = sin(l_ang);
    vec4 quat = normalize(vec4(-normal.y * l_sin, normal.x * l_sin, CONST_ZERO, cos(l_ang)));
    // shortened function to double mulQuat the normalMap on the quaternion
    return vec3(
        quat.x*quat.x*normalMap.x - quat.y*quat.y*normalMap.x - quat.z*quat.z*normalMap.x
            + quat.w*quat.w*normalMap.x - CONST_TWO*quat.z*quat.w*normalMap.y
            + CONST_TWO*quat.y*quat.w*normalMap.z + CONST_TWO*quat.x*(quat.y*normalMap.y + quat.z*normalMap.z),
        -(quat.x*quat.x*normalMap.y) - quat.z*quat.z*normalMap.y + (quat.y*quat.y + quat.w*quat.w)*normalMap.y
            + CONST_TWO*quat.z*(quat.w*normalMap.x + quat.y*normalMap.z)
            + CONST_TWO*quat.x*(quat.y*normalMap.x - quat.w*normalMap.z),
        quat.y*(-CONST_TWO*quat.w*normalMap.x + CONST_TWO*quat.z*normalMap.y)
            + CONST_TWO*quat.x*(quat.z*normalMap.x + quat.w*normalMap.y)
            - quat.x*quat.x*normalMap.z - quat.y*quat.y*normalMap.z + (quat.z*quat.z + quat.w*quat.w)*normalMap.z
    );
}
Input is the normal from the normal map and the normal of the mesh.
I am sure there's something better than that. What could that be?
I found this
vec3 rotate_vector(vec4 quat, vec3 vec)
{
    return vec + 2.0 * cross(cross(vec, quat.xyz) + quat.w * vec, quat.xyz);
}
here. Does that work? Not sure how much less underlying math there is, but built-in GLSL functions can often take advantage of GPU optimizations.
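If its convention matches yours, it should slot straight into the code above; note the argument order is quaternion first, so the earlier call would become normal = rotate_vector(createQuaternion(normalMesh), normalMap). It is also far less arithmetic: two cross products, a multiply-add and an add, all of which map well to GPU vector hardware.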