Rotating a Group of Vectors - opengl

I am trying to rotate a group of sampled vectors so that they line up with the normal of a triangle.
If this worked correctly, the randomly sampled hemisphere would align with the triangle.
Currently I generate the samples around the Z-axis and then attempt to rotate them all onto the triangle's normal,
but the result seems to be "just off".
glm::quat getQuat(glm::vec3 v1, glm::vec3 v2)
{
    glm::quat myQuat;
    float dot = glm::dot(v1, v2);
    if (dot != 1)
    {
        glm::vec3 aa = glm::normalize(glm::cross(v1, v2));
        float w = sqrt(glm::length(v1)*glm::length(v1) * glm::length(v2)*glm::length(v2)) + dot;
        myQuat.x = aa.x;
        myQuat.y = aa.y;
        myQuat.z = aa.z;
        myQuat.w = w;
    }
    return myQuat;
}
I pulled this from the bottom of this page: http://lolengine.net/blog/2013/09/18/beautiful-maths-quaternion-from-vectors
Then I do:
glm::vec3 zaxis = glm::normalize( glm::vec3(0, 0, 1) ); // hardcoded, but this is the original sample axis used for the test
glm::vec3 n1 = glm::normalize( glm::cross((p2 - p1), (p3 - p1)) ); //normal
glm::quat myQuat = glm::normalize(getQuat(zaxis, n1));
glm::mat4 rotmat = glm::toMat4(myQuat); //make a rotation matrix
glm::vec4 n3 = rotmat * glm::vec4(n2,1); // current vector I am trying to rotate

Construct a 4x4 transform matrix instead of using quaternions.
Do not forget that OpenGL uses column-major matrices, so for double m[16];
the X axis vector is in m[ 0], m[ 1], m[ 2]
the Y axis vector is in m[ 4], m[ 5], m[ 6]
the Z axis vector is in m[ 8], m[ 9], m[10]
and the position is in m[12], m[13], m[14]
Here LCS means local coordinate system (your triangle, object, or whatever) and GCS means global coordinate system (the world, or whatever).
All the X, Y, Z vectors should be normalized to unit vectors, otherwise scaling will occur.
Construction:
set the Z axis vector to your triangle normal
set the position (LCS origin) to the midpoint of your triangle (the average of its vertices)
now you just need the X and Y axes, which is easy
let X = any triangle vertex - triangle midpoint
or X = the difference of any 2 vertices of the triangle
The only condition that must be met for X is that it must lie in the triangle plane.
Now let Y = Z x X; the cross product creates a vector perpendicular to both X and Z (which also lies in the triangle plane) and keeps the basis right-handed.
Now put all of this into the matrix and load it into OpenGL as the ModelView matrix or whatever you need.
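For illustration, a minimal sketch of that construction with GLM, assuming p1, p2, p3 are the triangle vertices (the helper name buildTriangleBasis is my own; the layout matches the column-major double m[16] described above):
#include <glm/glm.hpp>

// Hypothetical helper: builds the column-major matrix described above,
// with Z = triangle normal and the origin at the triangle midpoint.
void buildTriangleBasis(const glm::vec3 &p1, const glm::vec3 &p2,
                        const glm::vec3 &p3, double m[16])
{
    glm::vec3 Z = glm::normalize(glm::cross(p2 - p1, p3 - p1)); // triangle normal
    glm::vec3 X = glm::normalize(p2 - p1);                      // any edge lies in the plane
    glm::vec3 Y = glm::cross(Z, X);                             // perpendicular to X and Z
    glm::vec3 O = (p1 + p2 + p3) / 3.0f;                        // midpoint (LCS origin)

    m[ 0] = X.x; m[ 1] = X.y; m[ 2] = X.z; m[ 3] = 0.0;
    m[ 4] = Y.x; m[ 5] = Y.y; m[ 6] = Y.z; m[ 7] = 0.0;
    m[ 8] = Z.x; m[ 9] = Z.y; m[10] = Z.z; m[11] = 0.0;
    m[12] = O.x; m[13] = O.y; m[14] = O.z; m[15] = 1.0;
}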

Related

How can I define vertices of a plane that is always parallel to the camera?

I am trying to draw a rectangle (basically a plane) that is always parallel to the camera. I want to restrict the plane to a certain size (let's say height = 2 and width = 2 units). However, I do not understand how to set the positions of the vertices so that the rectangle is always parallel to the camera.
First I am calculating camera normal (direction) using:
glm::normalize(mPosition - mTargetPos); // normal
and then I am using point-normal equation to define the plane:
normal = (A, B, C)
point = (a, b, c) // this point will serve as a center to the plane
A(x−a)+B(y−b)+C(z−c) = 0
Question: How can I define vertices of the plane?
Take some normalized vector UpDir for the up direction (it can be UpDir=(0,1,0) or UpDir=(0,0,1) depending on your coordinate system, or it can be computed somehow).
Compute the cross product SideDir of the normal and the UpDir.
Now you can use SideDir and UpDir as a basis for your plane's coordinate system, and compute the four vertices of the rectangle as point+width*SideDir+height*UpDir, point+width*SideDir-height*UpDir, point-width*SideDir-height*UpDir, and point-width*SideDir+height*UpDir.
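A minimal GLM sketch of this approach, assuming mPosition and mTargetPos come from the question's camera setup and UpDir = (0,1,0) (the helper name billboardCorners is my own):
#include <glm/glm.hpp>

// Hypothetical helper: computes the 4 corners of a camera-facing rectangle
// centred at 'point', following the SideDir/UpDir construction above.
void billboardCorners(const glm::vec3 &mPosition, const glm::vec3 &mTargetPos,
                      const glm::vec3 &point, float width, float height,
                      glm::vec3 corners[4])
{
    glm::vec3 normal  = glm::normalize(mPosition - mTargetPos);      // camera normal
    glm::vec3 upDir   = glm::vec3(0.0f, 1.0f, 0.0f);                 // assumed up direction
    glm::vec3 sideDir = glm::normalize(glm::cross(upDir, normal));
    glm::vec3 up      = glm::normalize(glm::cross(normal, sideDir)); // re-orthogonalized up

    corners[0] = point + width * sideDir + height * up;
    corners[1] = point + width * sideDir - height * up;
    corners[2] = point - width * sideDir - height * up;
    corners[3] = point - width * sideDir + height * up;
}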
I recommend defining the points in view space and finally transforming them by the inverse view matrix.
In view space the points are parallel to the view if they have the same z coordinate. The z coordinate has to be negative, and its magnitude has to be greater than the distance to the near plane and less than the distance to the far plane:
near < -z < far
Compute the view matrix (view_mat) and define the points in view space:
glm::mat4 view_mat = glm::lookAt(mPosition, mTargetPos, mUp);
float z = -5.0f; // example value; any z with near < -z < far works
glm::vec3 pt1View(x1, y1, z);
glm::vec3 pt2View(x2, y2, z);
// [...]
Transform the points from view space to world space:
glm::mat4 inverse_view_mat = glm::inverse(view_mat);
glm::vec3 pt1World = glm::vec3(inverse_view_mat * glm::vec4(pt1View, 1.0f));
glm::vec3 pt2World = glm::vec3(inverse_view_mat * glm::vec4(pt2View, 1.0f));
// [...]

OpenGL ray tracing using inverse transformations

I have a pipeline that uses model, view and projection matrices to render a triangle mesh.
I am trying to implement a ray tracer that will pick out the object I'm clicking on by projecting the ray origin and direction by the inverse of the transformations.
When I just had a model (no view or projection) in the vertex shader I had
Vector4f ray_origin = model.inverse() * Vector4f(xworld, yworld, 0, 1);
Vector4f ray_direction = model.inverse() * Vector4f(0, 0, -1, 0);
and everything worked perfectly. However, I added a view and projection matrix and then changed the code to be
Vector4f ray_origin = model.inverse() * view.inverse() * projection.inverse() * Vector4f(xworld, yworld, 0, 1);
Vector4f ray_direction = model.inverse() * view.inverse() * projection.inverse() * Vector4f(0, 0, -1, 0);
and nothing is working anymore. What am I doing wrong?
If you use perspective projection, then I recommend defining the ray by a point on the near plane and another one on the far plane, in normalized device space. The z coordinate of the near plane is -1 and the z coordinate of the far plane is 1. The x and y coordinates have to be the "click" position on the screen in the range [-1, 1]. The coordinate of the bottom left is (-1, -1) and the coordinate of the top right is (1, 1). The window or mouse coordinates can be mapped linearly to the NDC x and y coordinates:
float x_ndc = 2.0 * mouse_x/window_width - 1.0;
float y_ndc = 1.0 - 2.0 * mouse_y/window_height; // flipped
Vector4f p_near_ndc = Vector4f(x_ndc, y_ndc, -1, 1); // z near = -1
Vector4f p_far_ndc = Vector4f(x_ndc, y_ndc, 1, 1); // z far = 1
A point in normalized device space can be transformed to model space by the inverse projection matrix, then the inverse view matrix and finally the inverse model matrix:
Vector4f p_near_h = model.inverse() * view.inverse() * projection.inverse() * p_near_ndc;
Vector4f p_far_h = model.inverse() * view.inverse() * projection.inverse() * p_far_ndc;
After this, the points are in homogeneous coordinates, which can be converted to Cartesian coordinates by a perspective divide:
Vector3f p0 = p_near_h.head<3>() / p_near_h.w();
Vector3f p1 = p_far_h.head<3>() / p_far_h.w();
The "ray" in model space, defined by point r and a normalized direction d finally is:
Vector3f r = p0;
Vector3f d = (p1 - p0).normalized()
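Putting the steps together, a minimal sketch of a helper that builds the model-space ray from a mouse click (the function name computeRayModelSpace is my own; it uses the same Eigen types as the snippets above):
#include <Eigen/Dense>
using namespace Eigen;

// Builds a model-space picking ray (origin r, normalized direction d)
// from window coordinates, following the NDC near/far point approach above.
void computeRayModelSpace(float mouse_x, float mouse_y,
                          float window_width, float window_height,
                          const Matrix4f &model, const Matrix4f &view,
                          const Matrix4f &projection,
                          Vector3f &r, Vector3f &d)
{
    float x_ndc = 2.0f * mouse_x / window_width - 1.0f;
    float y_ndc = 1.0f - 2.0f * mouse_y / window_height; // flipped

    Vector4f p_near_ndc(x_ndc, y_ndc, -1.0f, 1.0f); // on the near plane
    Vector4f p_far_ndc (x_ndc, y_ndc,  1.0f, 1.0f); // on the far plane

    Matrix4f inv = (projection * view * model).inverse(); // NDC -> model space
    Vector4f p_near_h = inv * p_near_ndc;
    Vector4f p_far_h  = inv * p_far_ndc;

    Vector3f p0 = p_near_h.head<3>() / p_near_h.w(); // perspective divide
    Vector3f p1 = p_far_h.head<3>()  / p_far_h.w();

    r = p0;
    d = (p1 - p0).normalized();
}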

How do you find the Y position of a point between four vertices? HLSL

Let's say there is a grid terrain for a game composed of tiles made of two triangles - made from four vertices. How would we find the Y (up) position of a point between the four vertices?
I have tried this:
float diffZ1 = lerp(heights[0], heights[2], zOffset);
float diffZ2 = lerp(heights[1], heights[3], zOffset);
float yPosition = lerp(diffZ1, diffZ2, xOffset);
Where xOffset/zOffset is the x/z offset from the first vertex of the tile, expressed as a fraction of the tile size (0 to 1). This works for flat surfaces, but not so well on bumpy terrain.
I expect this has something to do with the terrain being made from triangles, so the above may only work on flat planes. I'm not sure, but does anybody know what's going wrong?
This may better explain what's going on here:
In the code above "heights[]" is an array of the Y coordinate of surrounding vertices v0-3.
Triangle 1 is made of vertex 0, 2 and 1.
Triangle 2 is made of vertex 1, 2 and 3.
I wish to find the Y coordinate of p1 when its x and z coordinates lie between v0-v3.
So I have tried determining which triangle the point is between through this function:
bool PointInTriangle(float3 pt, float3 pa, float3 pb, float3 pc)
{
    // Compute vectors
    float2 v0 = pc.xz - pa.xz;
    float2 v1 = pb.xz - pa.xz;
    float2 v2 = pt.xz - pa.xz;
    // Compute dot products
    float dot00 = dot(v0, v0);
    float dot01 = dot(v0, v1);
    float dot02 = dot(v0, v2);
    float dot11 = dot(v1, v1);
    float dot12 = dot(v1, v2);
    // Compute barycentric coordinates
    float invDenom = 1.0f / (dot00 * dot11 - dot01 * dot01);
    float u = (dot11 * dot02 - dot01 * dot12) * invDenom;
    float v = (dot00 * dot12 - dot01 * dot02) * invDenom;
    // Check if point is in triangle
    return (u >= 0.0f) && (v >= 0.0f) && (u + v <= 1.0f);
}
This isn't giving me the results I expected.
I am then trying to find the y coordinate of point p1 inside each triangle:
// Position of point p1
float3 pos = input[0].PosI;
// Calculate point and normal for each triangle
float3 p1 = tile[0];
float3 n1 = (tile[2] - p1) * (tile[1] - p1); // <-- Error, cross needed
//         = cross(tile[2] - p1, tile[1] - p1);
float3 p2 = tile[3];
float3 n2 = (tile[2] - p2) * (tile[1] - p2); // <-- Error
//         = cross(tile[2] - p2, tile[1] - p2);
float newY = 0.0f;
// Determine triangle & get y coordinate inside correct triangle
if (PointInTriangle(pos, tile[0], tile[1], tile[2]))
{
    newY = p1.y - ((pos.x - p1.x) * n1.x + (pos.z - p1.z) * n1.z) / n1.y;
}
else if (PointInTriangle(input[0].PosI, tile[3], tile[2], tile[1]))
{
    newY = p2.y - ((pos.x - p2.x) * n2.x + (pos.z - p2.z) * n2.z) / n2.y;
}
Using the following to find the correct triangle:
if((1.0f - xOffset) <= zOffset)
inTri1 = true;
And correcting the code above to use the correct cross function seems to have solved the problem.
Because your 4 vertices may not be on a plane, you should consider each triangle separately. First find the triangle that the point resides in, and then use the following StackOverflow discussion to solve for the Z value (note the different naming of the axes). I personally like DanielKO's answer much better, but the accepted answer should work too:
Linear interpolation of three 3D points in 3D space
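For illustration, a small C++ sketch of that plane-based interpolation (the HLSL in the question uses the same formula; the function name heightOnTriangle and the use of GLM here are my own assumptions):
#include <glm/glm.hpp>

// Height (Y) of the triangle's plane at horizontal position (x, z);
// assumes the triangle is not vertical (n.y != 0).
float heightOnTriangle(glm::vec3 a, glm::vec3 b, glm::vec3 c, float x, float z)
{
    glm::vec3 n = glm::cross(b - a, c - a); // plane normal
    // Plane equation n.x*(X - a.x) + n.y*(Y - a.y) + n.z*(Z - a.z) = 0, solved for Y:
    return a.y - ((x - a.x) * n.x + (z - a.z) * n.z) / n.y;
}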
EDIT: For the 2nd part of your problem (finding the triangle that the point is in):
Because the projection of your tiles onto the xz plane (as you define your coordinates) are perfect squares, finding the triangle that the point resides in is a very simple operation. Here I'll use the terms left-right to refer to the x axis (from lower to higher values of x) and bottom-top to refer to the z axis (from lower to higher values of z).
Each tile can only be split in one of two ways. Either (A) via a diagonal line from the bottom-left corner to the top-right corner, or (B) via a diagonal line from the bottom-right corner to the top-left corner.
For any tile that's split as A:
Check if x' > z', where x' is the distance from the left edge of the tile to the point, and z' is the distance from the bottom edge of the tile to the point. If x' > z' then your point is in the bottom-right triangle; otherwise it's in the upper-left triangle.
For any tile that's split as B: Check if x" > z', where x" is the distance from the right edge of your tile to the point, and z' is the distance from the bottom edge of the tile to the point. If x" > z' then your point is in the lower-left triangle; otherwise it's in the upper-right triangle.
(Minor note: Above I assume your tiles aren't rotated in the xz plane; i.e. that they are aligned with the axes. If that's not correct, simply rotate them to align them with the axes before doing the above checks.)
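A small C++ sketch of those two checks, assuming square tiles aligned with the axes (the function and enum names are my own; the question's shader would use the equivalent HLSL):
// Which triangle of a square, axis-aligned tile contains the point (x, z)?
// (tileX, tileZ) is the tile's left/bottom corner, tileSize its side length.
enum class Tri { BottomRight, TopLeft, BottomLeft, TopRight };

// Split A: diagonal from the bottom-left to the top-right corner.
Tri whichTriangleSplitA(float x, float z, float tileX, float tileZ)
{
    float xP = x - tileX; // distance from the left edge
    float zP = z - tileZ; // distance from the bottom edge
    return (xP > zP) ? Tri::BottomRight : Tri::TopLeft;
}

// Split B: diagonal from the bottom-right to the top-left corner.
Tri whichTriangleSplitB(float x, float z, float tileX, float tileZ, float tileSize)
{
    float xPP = (tileX + tileSize) - x; // distance from the right edge
    float zP  = z - tileZ;              // distance from the bottom edge
    return (xPP > zP) ? Tri::BottomLeft : Tri::TopRight;
}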

How to rotate a vector by a given direction

I'm creating some random vectors/directions in a loop as a dome shape like this:
void generateDome(glm::vec3 direction)
{
    for (int i = 0; i < 1000; ++i)
    {
        float xDir = randomByRange(-1.0f, 1.0f);
        float yDir = randomByRange(0.0f, 1.0f);
        float zDir = randomByRange(-1.0f, 1.0f);
        auto vec = glm::vec3(xDir, yDir, zDir);
        vec = glm::normalize(vec);
        ...
        //some transformation with direction-vector
    }
    ...
}
This creates vectors as a dome-shape in +y direction (0,1,0):
Now I want to rotate the vec vector by a given direction vector like (1,0,0).
This should rotate the "dome" to the x-direction like this:
How can I achieve this? (preferably with glm)
A rotation is generally defined using some sort of offset (axis-angle, quaternion, euler angles, etc) from a starting position. What you are looking for would be more accurately described (in my opinion) as a re-orientation. Luckily this isn't too hard to do. What you need is a change-of-basis matrix.
First, lets just define what we're working with in code:
using glm::vec3;
using glm::mat3;
vec3 direction; // points in the direction of the new Y axis
vec3 vec; // This is a randomly generated point that we will
// eventually transform using our base-change matrix
To calculate the matrix, you need to create unit vectors for each of the new axes. From the example above it becomes apparent that you want the vector provided to become the new Y-axis:
vec3 new_y = glm::normalize(direction);
Now, calculating the X and Z axes will be a tad more complicated. We know that they must be orthogonal to each other and to the Y axis calculated above. The most logical way to construct the Z axis is to assume that the rotation is taking place in the plane defined by the old Y axis and the new Y axis. By using the cross-product we can calculate this plane's normal vector, and use that for the Z axis:
vec3 new_z = glm::normalize(glm::cross(new_y, vec3(0, 1, 0)));
Technically the normalization isn't necessary here since both input vectors are already normalized, but for the sake of clarity, I've left it. Also note that there is a special case when the input vector is colinear with the Y-axis, in which case the cross product above is undefined. The easiest way to fix this is to treat it as a special case. Instead of what we have so far, we'd use:
if (direction.x == 0 && direction.z == 0)
{
if (direction.y < 0) // rotate 180 degrees
vec = vec3(-vec.x, -vec.y, vec.z);
// else if direction.y >= 0, leave `vec` as it is.
}
else
{
vec3 new_y = glm::normalize(direction);
vec3 new_z = glm::normalize(glm::cross(new_y, vec3(0, 1, 0)));
// code below will go here.
}
For the X-axis, we can cross our new Y-axis with our new Z-axis. This yields a vector perpendicular to both of the others axes:
vec3 new_x = glm::normalize(glm::cross(new_y, new_z));
Again, the normalization in this case is not really necessary, but if y or z were not already unit vectors, it would be.
Finally, we combine the new axis vectors into a basis-change matrix:
mat3 transform = mat3(new_x, new_y, new_z);
Multiplying a point vector (vec3 vec) by this yields a new point at the same position, but relative to the new basis vectors (axes):
vec = transform * vec;
Do this last step for each of your randomly generated points and you're done! No need to calculate angles of rotation or anything like that.
As a side note, your method of generating random unit vectors will be biased towards directions away from the axes. This is because the probability of a particular direction being chosen grows with the distance to the furthest point possible in that direction (proportionally to its cube, since the samples are uniform in a box). For the axes, this distance is 1.0. For directions like e.g. (1, 1, 1), this distance is sqrt(3). This can be fixed by discarding any vectors which lie outside the unit sphere:
glm::vec3 vec;
do
{
float xDir = randomByRange(-1.0f, 1.0f);
float yDir = randomByRange(0.0f, 1.0f);
float zDir = randomByRange(-1.0f, 1.0f);
vec = glm::vec3(xDir, yDir, zDir);
} while (glm::length(vec) > 1.0f); // you could also use glm::length2 instead, and avoid a costly sqrt().
vec = glm::normalize(vec);
This would ensure that all directions have equal probability, at the cost that if you're extremely unlucky, the points picked may lie outside the unit sphere over and over again, and it may take a long time to generate one that's inside. If that's a problem, it could be modified to limit the iterations: while (++i < 4 && ...) or by increasing the radius at which a point is accepted every iteration. When it is >= sqrt(3), all possible points would be considered valid, so the loop would end. Both of these methods would result in a slight biasing away from the axes, but in almost any real situation, it would not be detectable.
Putting all the code above together, combined with your code, we get:
void generateDome(glm::vec3 direction)
{
    // Calculate change-of-basis matrix
    glm::mat3 transform(1.0f); // start as the identity matrix
    if (direction.x == 0 && direction.z == 0)
    {
        if (direction.y < 0) // rotate 180 degrees
            transform = glm::mat3(glm::vec3(-1.0f,  0.0f, 0.0f),
                                  glm::vec3( 0.0f, -1.0f, 0.0f),
                                  glm::vec3( 0.0f,  0.0f, 1.0f));
        // else if direction.y >= 0, leave transform as the identity matrix.
    }
    else
    {
        glm::vec3 new_y = glm::normalize(direction);
        glm::vec3 new_z = glm::normalize(glm::cross(new_y, glm::vec3(0, 1, 0)));
        glm::vec3 new_x = glm::normalize(glm::cross(new_y, new_z));
        transform = glm::mat3(new_x, new_y, new_z);
    }
    // Use the matrix to transform random direction vectors
    glm::vec3 point;
    for (int i = 0; i < 1000; ++i)
    {
        int k = 4; // maximum number of direction vectors to guess when looking for one inside the unit sphere.
        do
        {
            point.x = randomByRange(-1.0f, 1.0f);
            point.y = randomByRange(0.0f, 1.0f);
            point.z = randomByRange(-1.0f, 1.0f);
        } while (--k > 0 && glm::length2(point) > 1.0f); // glm::length2 needs <glm/gtx/norm.hpp>
        point = glm::normalize(point);
        point = transform * point;
        // ...
    }
    // ...
}
You need to create a rotation matrix. For that you first need an identity matrix; create it like this:
glm::mat4 rotationMat(1); // Creates an identity matrix
Now you can rotate the vector space with
rotationMat = glm::rotate(rotationMat, 45.0f, glm::vec3(0.0, 0.0, 1.0));
This will rotate the vector space by 45.0 degrees around the z-axis (as shown in your screenshot). Note that recent GLM versions expect the angle in radians, so you may need glm::radians(45.0f) instead. Now you're almost done. To rotate your vec you can write
vec = glm::vec3(rotationMat * glm::vec4(vec, 1.0));
Note: because you have a 4x4 matrix you need a vec4 to multiply with it. In general it is a good idea to always use vec4 when working with OpenGL, because vectors in smaller dimensions will be converted to homogeneous coordinates anyway.
EDIT: You can also try to use GTX Extensions (Experimental) by including <glm/gtx/rotate_vector.hpp>
EDIT 2: When you want to rotate the dome "towards" a given direction, you can get your rotation axis by taking the cross product of the direction and the "up" vector of the dome. Let's say you want to rotate the dome "towards" (1.0, 1.0, 1.0) and the "up" direction is (0.0, 1.0, 0.0); then use:
glm::vec3 cross = glm::cross(up, direction);
glm::rotate(rotationMat, 45.0f, cross);
to get your rotation matrix. The cross product returns a vector that is orthogonal to "up" and "direction", and that's the one you want to rotate around. Hope this will help.
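A minimal sketch of that idea, computing the actual angle from the dot product instead of the hard-coded 45 degrees (the helper name rotateTowards is my own; it assumes a GLM version that takes the angle in radians):
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Rotation matrix that rotates the dome's "up" vector onto "direction".
glm::mat4 rotateTowards(glm::vec3 up, glm::vec3 direction)
{
    up = glm::normalize(up);
    direction = glm::normalize(direction);

    float cosAngle = glm::clamp(glm::dot(up, direction), -1.0f, 1.0f);
    glm::vec3 axis = glm::cross(up, direction);
    if (glm::length(axis) < 1e-6f)
    {
        // up and direction are (anti-)parallel; pick any axis perpendicular to up
        axis = glm::abs(up.x) < 0.9f ? glm::cross(up, glm::vec3(1, 0, 0))
                                     : glm::cross(up, glm::vec3(0, 1, 0));
    }

    float angle = std::acos(cosAngle); // radians
    return glm::rotate(glm::mat4(1.0f), angle, glm::normalize(axis));
}

// Usage: vec = glm::vec3(rotateTowards(glm::vec3(0,1,0), direction) * glm::vec4(vec, 1.0f));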

How to rotate a vector to a place that is aligned with Z axis?

I want to rotate a vector onto the Z axis so that it points in the negative Z direction. So if the vector is (1,1,1), my result should be (0,0,-sqrt(3)).
My idea is two steps. The first step is to rotate my vector around the X axis into the XZ plane. The second step is to rotate the vector in the XZ plane around the Y axis onto the Z axis.
Here is my code:
GLfloat p[4] = {1,1,1,0}; //my vector, in homogeneous coordinates
GLfloat r[4]; //result vector to test
float theta1 = ((double)180/PI)*asin(p[1]/sqrt(p[0]*p[0]+p[1]*p[1]+p[2]*p[2]));
//angle theta1 between the vector and XZ plane, is this right ??? I doubt it !!!
float theta2 = ((double)180/PI)*atan(p[0]/p[2]);
//angle theta2 between the vector's projection in XZ plane and Z axis
GLfloat m[16];
glMatrixMode(GL_MODELVIEW); // get the rotation matrix in model-view matrix
glPushMatrix();
glLoadIdentity();
glRotatef(theta1, 1,0,0); //rotate to the XZ plane
glRotatef(180-theta2,0,1,0); //rotate to the Z axis
glGetFloatv(GL_MODELVIEW_MATRIX, m); // m is column-major.
glPopMatrix();
// use the matrix multiply my vector and get the result vector r[4]
//my expectation is (0,0,-sqrt(3))
r[0] = p[0]*m[0]+p[1]*m[4]+p[2]*m[8]+p[3]*m[12];
r[1] = p[0]*m[1]+p[1]*m[5]+p[2]*m[9]+p[3]*m[13];
r[2] = p[0]*m[2]+p[1]*m[6]+p[2]*m[10]+p[3]*m[14];
r[3] = p[0]*m[3]+p[1]*m[7]+p[2]*m[11]+p[3]*m[15];
However, the result in r is not what I expect, so I think I made a mistake somewhere above. Could anyone give me a hint about that?
To rotate one vector so it faces another:
normalise both vectors
take their dot product to get the cosine of the rotation angle
take their cross product to find an orthogonal rotation vector
rotate around that new vector by the angle found in #2
Step 2 can be omitted if you remember that | A x B | = sin(theta) if A and B are both normalised.
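A minimal GLM sketch of those steps, applied to the example above (rotating (1,1,1) onto the negative Z axis); the function name alignTo is my own, and the degenerate case of parallel vectors is not handled:
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Rotation matrix that rotates vector 'from' onto vector 'to',
// following the dot/cross recipe above.
glm::mat4 alignTo(glm::vec3 from, glm::vec3 to)
{
    from = glm::normalize(from);                             // step 1: normalise
    to   = glm::normalize(to);
    float cosTheta = glm::dot(from, to);                     // step 2: cosine of the angle
    glm::vec3 axis = glm::normalize(glm::cross(from, to));   // step 3: rotation axis
    float angle    = std::acos(glm::clamp(cosTheta, -1.0f, 1.0f));
    return glm::rotate(glm::mat4(1.0f), angle, axis);        // step 4: rotate (angle in radians)
}

// Example from the question: rotate (1,1,1) so it points down the negative Z axis.
// glm::vec4 r = alignTo(glm::vec3(1, 1, 1), glm::vec3(0, 0, -1)) * glm::vec4(1, 1, 1, 0);
// r is now approximately (0, 0, -sqrt(3), 0).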