I have a ray with an origin (x,y,z) and a direction (dx, dy, dz) given in homogeneous eye space coordinates:
p = (x,y,z,1) + t * (dx, dy, dz, 0)
What I need to calculate is a positive value of t such that, for a given pixel distance n, the resulting point lies n pixels away from the screen projection of (x,y,z). How can I achieve this?
Projecting a point onto a plane means intersecting that plane with the ray through the point along the view direction. Let's call that view direction vector v.
Put the origin of your ray (call it O = (x,y,z)) on the line from the camera perpendicular to the near plane of projection, and call its projection P. A second point (which you expressed as S = O + t·d) projects onto the near plane at a point T. You need the t that makes the distance PT = n.
The cross product c = v×d gives you a distance in the near plane: remember |v×d| = |v||d|·sin(a). If both v and d are normalized, that magnitude is the sine of the angle between v and d.
If d is not normalized (call it dnn, with |dnn| = distance(O,S)), then the distance between O and S after projection onto the near plane is k = |v×dnn| = distance(O,S)·|v×d| = t·|v×d|. Setting the required value n equal to k gives t = n / |v×d|, with both v and d normalized.
Using pixels instead of values in the near plane is just a matter of scaling properly with window sizes.
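A minimal sketch of that last step with GLM (assuming v and d are already normalized eye-space vectors, and that the pixel distance n has already been converted into near-plane units as described above):

#include <glm/glm.hpp>

// Illustrative only: v = view direction, d = ray direction (both normalized),
// nNear = desired offset expressed in near-plane units rather than pixels.
float solveT(const glm::vec3& v, const glm::vec3& d, float nNear)
{
    float s = glm::length(glm::cross(v, d)); // sine of the angle between v and d
    return nNear / s;                        // t = n / |v x d|
}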
I am trying to draw a rectangle (basically a plane) that is always parallel to the camera. I want to restrict the plane to a certain size (let's say height = 2 and width = 2 units). However, I do not understand how to set the positions of the vertices so that the rectangle is always parallel to the camera.
First I am calculating camera normal (direction) using:
glm::vec3 normal = glm::normalize(mPosition - mTargetPos); // camera normal (direction)
and then I am using point-normal equation to define the plane:
normal = (A, B, C)
point = (a, b, c) // this point will serve as a center to the plane
A(x−a)+B(y−b)+C(z−c) = 0
Question: How can I define vertices of the plane?
Take some normalized vector UpDir for the up direction (it can be UpDir = (0,1,0) or UpDir = (0,0,1) depending on your coordinate system, or it can be computed somehow).
Compute the cross product SideDir of the normal and UpDir.
Now you can use SideDir and UpDir as a basis for your plane's coordinate system, and compute the four vertices of the rectangle as point + width*SideDir + height*UpDir, point + width*SideDir - height*UpDir, point - width*SideDir - height*UpDir and point - width*SideDir + height*UpDir (see the sketch after these steps).
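For example, a sketch of these steps with GLM (the names and the fixed UpDir are assumptions; halfWidth/halfHeight would be 1.0 for a 2x2 rectangle, and the second cross product re-orthogonalizes the up vector so the rectangle is exactly perpendicular to the camera direction):

#include <glm/glm.hpp>

// point = rectangle centre, normal = camera direction (both from the question).
void planeCorners(const glm::vec3& point, const glm::vec3& normal,
                  float halfWidth, float halfHeight, glm::vec3 out[4])
{
    glm::vec3 upDir(0.0f, 1.0f, 0.0f);                               // assumed up direction
    glm::vec3 sideDir = glm::normalize(glm::cross(normal, upDir));
    glm::vec3 up      = glm::normalize(glm::cross(sideDir, normal)); // re-orthogonalized up

    out[0] = point + halfWidth * sideDir + halfHeight * up;
    out[1] = point + halfWidth * sideDir - halfHeight * up;
    out[2] = point - halfWidth * sideDir - halfHeight * up;
    out[3] = point - halfWidth * sideDir + halfHeight * up;
}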
I recommend defining the points in view space, then finally transforming them by the inverse view matrix.
In view space the points are parallel to the view if they all have the same z coordinate. The z coordinate has to be negative, and its magnitude has to be greater than the distance to the near plane and less than the distance to the far plane:
near < -z < far
Compute the view matrix (view_mat) and define the points in view space:
glm::mat4 view_mat = glm::lookAt(mPosition, mTargetPos, mUp);
float z = -4.0f; // example value: any z with near < -z < far works
glm::vec3 pt1View(x1, y1, z);
glm::vec3 pt2View(x2, y2, z);
// [...]
Transform the points from view space to world space:
glm::mat4 inverse_view_mat = glm::inverse(view_mat);
glm::vec3 pt1World = glm::vec3(inverse_view_mat * glm::vec4(pt1View, 1.0f));
glm::vec3 pt2World = glm::vec3(inverse_view_mat * glm::vec4(pt2View, 1.0f));
// [...]
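For the 2x2 rectangle from the question, the view-space corners could simply be placed symmetrically around the view axis (a sketch; halfW and halfH are assumptions standing in for the x1, y1, ... above):

float halfW = 1.0f, halfH = 1.0f;   // 2x2 units
glm::vec3 ptView[4] = {
    glm::vec3(-halfW, -halfH, z), glm::vec3( halfW, -halfH, z),
    glm::vec3( halfW,  halfH, z), glm::vec3(-halfW,  halfH, z)
};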
I have a texture that I am wrapping around a sphere similar to this image on Wikipedia.
https://upload.wikimedia.org/wikipedia/commons/0/04/UVMapping.png
What I am trying to achieve is to take a UV coordinate from the texture, let's say (0.8, 0.8), which would roughly be around Russia in the above example.
From this UV, somehow calculate the rotation I would need to apply to the sphere to have that UV centred on the sphere.
Could someone point me in the right direction of the math equation I would need to calculate this?
Edit - it was pointed out that I am actually looking for the rotation of the sphere so that the UV is centred towards the camera. So, starting with a rotation of (0,0,0), my camera is pointed at the UV (0, 0.5).
This particular type of spherical mapping has the very convenient property that the UV coordinates are linearly proportional to the corresponding polar coordinates.
Let's assume for convenience that:
UV (0.5, 0.5) corresponds to the Greenwich Meridian line / Equator - i.e. (0° N, 0° E)
The mesh is initially axis-aligned
The texture is centered at spherical coordinates (θ, φ) = (π/2, 0) - i.e. the X-axis
Using the boundary conditions:
U = 0 -> φ = -π
U = 1 -> φ = +π
V = 1 -> θ = 0
V = 0 -> θ = π
We can deduce the required equations, and the corresponding direction vector r in the sphere's local space:
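Since the mapping is linear, that gives (assuming a Y-up convention for the sphere's local axes):

φ = 2π · (U − 0.5)
θ = π · (1 − V)
r = (sin θ · cos φ, cos θ, sin θ · sin φ)

so that (θ, φ) = (π/2, 0) does indeed give r = (1, 0, 0), the X-axis.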
Assuming the sphere has rotation matrix R and is centered at c, and letting d be the desired camera distance from the centre, simply use lookAt with:
Position: c + d * (R * r)
Direction: -(R * r)
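A minimal GLM sketch of that last step (the function name is illustrative; R, c, r and the camera distance d are as above):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 cameraForUV(const glm::mat3& R, const glm::vec3& c,
                      const glm::vec3& r, float d)
{
    glm::vec3 worldDir = R * r;                     // direction from the centre to the UV point
    glm::vec3 eye      = c + d * worldDir;          // camera position
    return glm::lookAt(eye, c, glm::vec3(0, 1, 0)); // looks along -(R * r), back at the centre
}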
I need a method to find a set of homogeneous transformation matrices that describe positions and orientations on a sphere.
The idea is that I have an object at the center of this sphere, which has a radius of dz. Since I know the 3D coordinate of the object, I know all the 3D coordinates on the sphere. Is it possible to determine the RPY of any point on the sphere such that the point always points toward the object at the center?
Illustration: at the origin of this sphere we have an object; the radius of the sphere is dz.
The red dot is a point on the sphere, and we want the vector from this point toward the object at the origin.
The position should be relatively easy to extract, since the sphere can be described by a function, but how do I determine the vector, or the rotation matrix, such that it points toward the origin?
You could, using the center of the sphere as the origin, compute the unit vector from the origin to the point on the surface of the sphere, and then multiply that unit vector by -1 to obtain the vector pointing toward the center of the sphere from that surface point.
Example:
glm::vec3 pointToCenter(const glm::vec3& edge, const glm::vec3& origin) {
    glm::vec3 outward = edge - origin;           // vector from the center to the surface point
    glm::vec3 unitVec = glm::normalize(outward); // normalize it
    return -unitVec;                             // flip it to point back at the center
}
Once you have the vector, you can convert it to Euler angles for the RPY.
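For example, using one common Z-up convention (an assumption here; RPY conventions vary, and a single direction vector does not constrain roll):

#include <cmath>
#include <glm/glm.hpp>

// dir is the unit vector returned by pointToCenter(); roll is left at zero.
void directionToRPY(const glm::vec3& dir, float& roll, float& pitch, float& yaw)
{
    yaw   = std::atan2(dir.y, dir.x);
    pitch = std::atan2(-dir.z, std::sqrt(dir.x * dir.x + dir.y * dir.y));
    roll  = 0.0f;
}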
Off the top of my head, I would suggest using quaternions to define the rotation of any point at the origin, relative to the point you want on the surface of the sphere:
Pick the desired point on the sphere's surface, say the north pole for example.
Translate that point to the origin (assuming the radius of the sphere is known), using 3D Pythagoras: x_comp^2 + y_comp^2 + z_comp^2 = hypotenuse^2
Create a rotation that points an axis at the original surface point. This will just be a scaled multiple of the x, y and z components making up the hypotenuse; I would just make them into unit components. Capture the resulting axis and rotation in a quaternion (q, x, y, z), where x, y, z are the components of your axis and q is the rotation about that axis. Hard-code q to one. You want to use quaternions because they will make your resulting rotation matrices easier to work with.
Translate the point back to the sphere's surface and negate the values of the components of your axis, to get (q, -x, -y, -z).
This will give you your point on the surface of the sphere, with an axis pointing back to the origin. With the north pole as an example, you would have a quaternion of (1, 0, -1, 0) at the point (0, radius_length, 0) on the sphere's surface. See quatrotation.c in my GitHub repository below for the resulting rotation matrix.
I don't have time to write code for this, but I wrote a little tutorial with compilable code examples in a GitHub repository a while back, which should get you started:
https://github.com/brownwa/opengl
Do the mat_rotation tutorial first, then the quaternions one. It's doable in a weekend, or in a day if you're focused.
I am trying to understand the math behind the transformation from world coordinates to view coordinates.
This is the formula to calculate the matrix in view coordinates (formula image omitted):
and here is an example that should normally be correct (image omitted),
where b = the width of the viewport and h = the height of the viewport.
But I just don't know how to calculate the R matrix. How do you get Ux, Uy, Uz, Vx, Vy, etc.? u, v and n form the coordinate system fixed to the camera, and the camera is at position (X0, Y0, Z0).
The matrix T is applied first. It translates some world coordinate P by minus the camera coordinate (call it C), giving the relative coordinate of P (call this Q) with respect to the camera (Q = P - C), in the world axes orientation.
The matrix R is then applied to Q. It performs a rotation to obtain the coordinates of Q in the camera's axes.
u is the horizontal view axis
v is the vertical view axis
n is the view direction axis
(all three should be normalized)
Multiplying R with Q:
multiplying with the first line of R gives DOT(Q, u). This returns the component of Q projected onto u, which is the horizontal view coordinate.
the second line gives DOT(Q, v), which similar to above gives the vertical view coordinate.
the third line gives DOT(Q, n), which is the depth view coordinate.
BTW These are NOT screen/viewport coordinates! They are just the coordinates in the camera/view frame. To get the perspective-corrected coordinate another matrix (the projection matrix) needs to be applied.
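A minimal sketch of the whole transform with GLM (assuming u, v, n are the normalized camera axes and C is the camera position, as above; the function name is illustrative):

#include <glm/glm.hpp>

glm::vec3 worldToView(const glm::vec3& u, const glm::vec3& v, const glm::vec3& n,
                      const glm::vec3& C, const glm::vec3& P)
{
    glm::vec3 Q = P - C;              // the translation T: P relative to the camera
    return glm::vec3(glm::dot(Q, u),  // horizontal view coordinate
                     glm::dot(Q, v),  // vertical view coordinate
                     glm::dot(Q, n)); // depth view coordinate
}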
I'm currently working on a game which renders a textured sphere (representing Earth) and cubes representing player models (which will be implemented later).
When a user clicks a point on the sphere, the cube is translated from the origin (0,0,0) (which is also the center of the sphere) to the point on the surface of the sphere.
The problem is that I want the cube to rotate so as to sit with its base flat on the sphere's surface (as opposed to just translating the cube).
What is the best way to calculate the rotation matrices about each axis in order to achieve this effect?
This is the same calculation as you'd perform to make a "lookat" matrix.
In this form, you would use the normalised point on the sphere as one axis (often used as the 'Z' axis), and then make the other two as perpendicular vectors to that. Typically, to do that you choose some arbitrary 'up' axis, which must not be parallel to your first axis, and then use two cross products: first you cross 'Z' and 'Up' to make an 'X' axis, and then you cross the 'X' and 'Z' axes to make a 'Y' axis.
The X, Y, and Z axes (normalised) form a rotation matrix which will orient the cube to the surface normal of the sphere. Then just translate it to the surface point.
The basic idea in GL is this:
float x_axis[3];
float y_axis[3];
float z_axis[3]; // This is the point on sphere, normalised

// 'up' is an arbitrary world up vector (e.g. {0,1,0}) that must not be
// parallel to z_axis; cross() and normalise() are assumed 3-vector helpers.
x_axis = cross(z_axis, up);
normalise(x_axis);
y_axis = cross(z_axis, x_axis);

DrawSphere();

// Column-major layout: the three axes are the rotation columns, and the last
// column translates the cube to just outside the sphere's surface.
float mat[16] = {
    x_axis[0], x_axis[1], x_axis[2], 0,
    y_axis[0], y_axis[1], y_axis[2], 0,
    z_axis[0], z_axis[1], z_axis[2], 0,
    (sphereRad + cubeSize) * z_axis[0], (sphereRad + cubeSize) * z_axis[1], (sphereRad + cubeSize) * z_axis[2], 1 };

glMultMatrixf(mat);
DrawCube();
Where z_axis[] is the normalised point on the sphere, x_axis[] is the normalised cross-product of that vector with the arbitrary 'up' vector, and y_axis[] is the normalised cross-product of the other two axes. sphereRad and cubeSize are the sizes of the sphere and cube - I'm assuming both shapes are centred on their local coordinate origin.
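A GLM equivalent of the same matrix, in case a fixed-function matrix stack is not being used (a sketch; 'up' is the arbitrary up vector mentioned above):

#include <glm/glm.hpp>

glm::mat4 cubeTransform(const glm::vec3& zAxis, const glm::vec3& up,
                        float sphereRad, float cubeSize)
{
    glm::vec3 x = glm::normalize(glm::cross(zAxis, up));
    glm::vec3 y = glm::cross(zAxis, x); // already unit length
    return glm::mat4(glm::vec4(x, 0.0f),
                     glm::vec4(y, 0.0f),
                     glm::vec4(zAxis, 0.0f),
                     glm::vec4((sphereRad + cubeSize) * zAxis, 1.0f));
}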