I understand the basic concept of how to unproject:
let mut z: f32 = 0.0;
unsafe {
    gl::ReadPixels(x as i32, y as i32, 1, 1, gl::DEPTH_COMPONENT, gl::FLOAT,
                   &mut z as *mut f32 as *mut std::ffi::c_void);
}
// window position to screen position
let screen_position = self.to_screen_position(x, y);
// screen position to world position
let world_position = self.projection_matrix().invert() *
Vector4::new(screen_position.x, screen_position.y, z, 1.0);
But this doesn't handle the W coordinate properly: when I render things from world space to screen space, they end up with W != 1 because of the perspective transformation (https://www.opengl.org/sdk/docs/man2/xhtml/gluPerspective.xml). When I transform back from screen space to world space (assuming W = 1), the objects end up in the wrong position.
As I understand it, W is a scaling factor for all the other coordinates. If this is the case, doesn't it mean screen vectors (0, 0, -1, 1) and (0, 0, -2, 2) will map to the same window coordinates, and that unprojecting doesn't necessarily produce unique results without further work?
Thanks!
Because of the perspective transformation, you can't really ignore W.
I would suggest looking at the source code for the gluUnProject function here: http://www.opengl.org/wiki/GluProject_and_gluUnProject_code. You'll see that what it does is:
1. Calculate the projection * modelview matrix and invert it.
2. Multiply the inverted matrix by a vector made from the window position (in the code, winZ = 0 corresponds to the near plane of your perspective projection, winZ = 1 to the far plane; W is always 1).
3. Divide the calculated vector's X, Y and Z by its W.
Note that if you do it like this, the result's W should be ignored (i.e. assumed to be 1).
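Putting those steps together, a minimal sketch with glm (the function name, the viewport parameters and the use of glm are my assumptions, not the linked code):

#include <glm/glm.hpp>

// Window position + depth -> world position, gluUnProject-style.
// winZ = 0 maps to the near plane, winZ = 1 to the far plane.
glm::vec3 unproject(float winX, float winY, float winZ,
                    const glm::mat4& projection, const glm::mat4& view,
                    float viewportW, float viewportH)
{
    glm::mat4 inv = glm::inverse(projection * view);
    // Window coordinates -> normalized device coordinates in [-1, 1], with W = 1.
    glm::vec4 ndc(2.0f * winX / viewportW - 1.0f,
                  2.0f * winY / viewportH - 1.0f,
                  2.0f * winZ - 1.0f,
                  1.0f);
    glm::vec4 world = inv * ndc;
    // The divide by W undoes the perspective transformation; afterwards W is ignored.
    return glm::vec3(world) / world.w;
}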
You can also look here to see how the transformations in OpenGL work - if you're not familiar with this, I'd suggest reading about Clip coordinates and Normalized Device coordinates.
I'm trying to convert a 3D point to its screen position.
This is the code that I use:
glm::vec2 screenPosition(const glm::vec3 & _coord) const {
    glm::vec4 coord = glm::vec4(_coord, 1);
    coord = getProjection() * getView() * coord;
    coord.x /= coord.w;
    coord.y /= coord.w;
    coord.z /= coord.w;
    coord.x = (coord.x + 1) * width * 0.5;
    coord.y = (coord.y + 1) * height * 0.5;
    return glm::vec2(coord.x, coord.y);
}
I'm not 100% sure about the code, but I do not know how to discard the points that are behind the camera.
Can someone help me?
Thanks
If coord.z (after the division by coord.w) is not in the interval [-1, 1], the point should be discarded. A value outside that interval indicates that the point is not in the camera frustum, which also covers the case where the point is behind the camera. For DirectX and Vulkan the interval is [0, 1].
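Applied to the question's code, a minimal sketch could look like this (getProjection(), getView(), width and height are from the question; the bool return and out parameter are my additions):

// Returns false when the point is outside the frustum and should be discarded.
bool screenPosition(const glm::vec3& _coord, glm::vec2& out) const {
    glm::vec4 coord = getProjection() * getView() * glm::vec4(_coord, 1);
    coord /= coord.w; // perspective divide
    if (coord.z < -1.0f || coord.z > 1.0f)
        return false; // outside the frustum, including behind the camera
    out = glm::vec2((coord.x + 1) * width * 0.5f,
                    (coord.y + 1) * height * 0.5f);
    return true;
}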
The correct way to do the culling / clipping of primitives is to do it in clip space, before the perspective divide.
The default OpenGL clip convention is -w <= x,y,z <= w, and all points which do not fulfill this condition can be discarded (culled). Note that discarding points only works for point primitives; if you deal with more complex primitives (lines, triangles), you need to do actual clipping.
In the most general case, you will be using a perspective projection, and the clip-space w value will vary per vertex - it can even be 0 - so trying to do the discard in NDC will lead to a division by zero in such cases.
If you only want to deal with clipping points behind the camera, you can discard everything with w <= 0, but usually clipping against the near plane as well makes much more sense (and also avoids some numerical issues when getting very close to the camera): z < -w.
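As a minimal sketch (assuming glm; the function name is mine), the point test in clip space could look like:

#include <glm/glm.hpp>

// True if a clip-space point survives culling: strictly in front of the
// camera plane (w > 0) and inside -w <= x,y,z <= w. No divide needed.
bool pointSurvivesClipping(const glm::vec4& clip) {
    if (clip.w <= 0.0f)
        return false; // at or behind the camera plane
    return -clip.w <= clip.x && clip.x <= clip.w &&
           -clip.w <= clip.y && clip.y <= clip.w &&
           -clip.w <= clip.z && clip.z <= clip.w;
}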
I'd like to stress a few details here. The clip condition -w <= x,y,z <= w implies that points which lie truly behind the camera (w < 0) must be rejected, but the w = 0 case is still a bit weird, because the homogeneous point (0,0,0,0) would still satisfy the above clip condition (and yield no useful result in the perspective divide). However, OpenGL (and GPUs) do not clip against the plane the camera lies in (w = 0), but against a view volume, and they require you to set up a near plane which is in front of the camera. In such a scenario, even if w = 0 can occur, it is guaranteed that w = 0 and z = 0 never hold simultaneously, so the (0,0,0,0) case is never hit. However, this does not prevent people from actually feeding (0,0,0,0) into gl_Position, and you can assume that real-world implementations will not only reject the w < 0 case which is directly mandated by the above clip condition, but will reject/clip anything with w <= 0. Note that clipping a primitive where one vertex has clip-space coordinates of (0,0,0,0) will still result in nonsense, but then you're explicitly asking for that.
For an orthogonal projection, there is actually no way to clip points behind the camera, because conceptually the camera is infinitely far away. You might still set up an imagined "camera position" via your view matrix and a view volume via the projection matrix, and you can still cull/clip against the near plane there (z < -w). Note that for practical purposes, an orthogonal projection will yield w = 1, so the additional w <= 0 check required in the perspective case is irrelevant.
I'm trying to implement textures for spheres in my ray tracer. I managed to get something working, but I am unsure about its correctness. Below is the code for getting the texture coordinates. For now, the texture is random and is generated at runtime.
virtual void GetTextureCoord(Vect hitPoint, int hres, int vres, int& x, int& y) {
    float theta = acos(hitPoint.getVectY());
    float phi = atan2(hitPoint.getVectX(), hitPoint.getVectZ());
    if (phi < 0.0) {
        phi += TWO_PI;
    }
    float u = phi * INV_TWO_PI;
    float v = 1 - theta * INV_PI;
    y = (int) ((hres - 1) * u);
    x = (int) ((vres - 1) * v);
}
This is how the spheres look now:
I had to normalize the coordinates of the hit point to get the spheres to look like that. Otherwise they would look like:
Was normalising the hit point coordinates the right approach, or is something else broken in my code? Thank you!
Instead of normalising the hit point, I tried translating it to the world origin (as if the sphere center was there) and obtained the following result:
I'm using a 256x256 resolution texture by the way.
It's unclear what you mean by "normalizing" the hit point since there's nothing that normalizes it in the code you posted, but you mentioned that your hit point is in world space.
Also, you didn't say what texture mapping you're trying to implement, but I assume you want your U and V texture coordinates to represent latitude and longitude on the sphere's surface.
Your first problem is that converting Cartesian to spherical coordinates requires that the sphere is centered at the origin in the Cartesian space, which isn't true in world space. If the hit point is in world space, you have to subtract the sphere's world-space center point to get the effective hit point in local coordinates. (You figured this part out already and updated the question with a new image.)
Your second problem is that the way you're calculating theta requires that the sphere have a radius of 1, which isn't true even after you move the sphere's center to the origin. Remember your trigonometry: the argument to acos is the ratio of a triangle's side to its hypotenuse, and must be in the range [-1, +1]. In this case your Y-coordinate is the side, and the sphere's radius is the hypotenuse. So you have to divide by the sphere's radius when calling acos. It's also a good idea to clamp the value to the [-1, +1] range in case floating-point rounding error puts it slightly outside.
(In principle you'd also have to divide the X and Z coordinates by the radius, but you're only using those for an inverse tangent, and dividing them both by the radius won't change their quotient and thus won't change phi.)
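A minimal sketch of GetTextureCoord with both fixes applied (center and radius are assumed member names for the sphere's world-space parameters; everything else is from the question):

virtual void GetTextureCoord(Vect hitPoint, int hres, int vres, int& x, int& y) {
    // Fix 1: shift the sphere's center to the origin.
    float px = hitPoint.getVectX() - center.getVectX();
    float py = hitPoint.getVectY() - center.getVectY();
    float pz = hitPoint.getVectZ() - center.getVectZ();
    // Fix 2: divide by the radius, clamping against rounding error.
    float cosTheta = py / radius;
    if (cosTheta > 1.0f) cosTheta = 1.0f;
    if (cosTheta < -1.0f) cosTheta = -1.0f;
    float theta = acos(cosTheta);
    float phi = atan2(px, pz); // dividing px and pz by the radius would not change the quotient
    if (phi < 0.0) {
        phi += TWO_PI;
    }
    float u = phi * INV_TWO_PI;
    float v = 1 - theta * INV_PI;
    y = (int) ((hres - 1) * u);
    x = (int) ((vres - 1) * v);
}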
Right now your sphere intersection and texture-coordinate functions are operating in world space, but you'll probably find it useful later to implement transformation matrices, which let you transform things from one coordinate space to another. Then you can change your sphere functions to operate in a local coordinate space where the center is the origin and the radius is 1, and give each object an associated transformation matrix that maps the local coordinate space to the world coordinate space. This will simplify your ray/sphere intersection code, and let you remove the origin subtraction and radius division from GetTextureCoord (since they're always (0, 0, 0) and 1 respectively).
To intersect a ray with an object, you'd use the object's transformation matrix to transform the ray into the object's local coordinate space, do the intersection (and compute texture coordinates) there, and then transform the result (e.g. hit point and surface normal) back to world space.
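For example, a sketch of that round trip with glm (worldFromLocal, rayOrigin, rayDir and localHit are hypothetical names):

glm::mat4 localFromWorld = glm::inverse(worldFromLocal);
// Points transform with w = 1, directions with w = 0.
glm::vec3 localOrigin = glm::vec3(localFromWorld * glm::vec4(rayOrigin, 1.0f));
glm::vec3 localDir    = glm::vec3(localFromWorld * glm::vec4(rayDir, 0.0f));
// ...intersect the unit sphere at the origin and compute texture coordinates here...
glm::vec3 worldHit = glm::vec3(worldFromLocal * glm::vec4(localHit, 1.0f));
// Note: surface normals need the inverse-transpose if the matrix has non-uniform scale.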
I am writing software to determine the viewable locations of a camera in 3D. I have currently implemented the parts that find the minimum and maximum length of view based on the camera's and lens's intrinsic characteristics.
I now need to work out, given that the camera is placed at (X, Y, Z) and is pointing in a direction (two angles, one around the horizontal and one around the vertical axis), where the boundaries of what the camera can see are (knowing the viewing angle). The output I would like is four 3D locations, making a rectangle that shows the minimum position: top left, top right, bottom left and bottom right. The same is also required for the maximum positions.
Can anyone help with the geometry to find these points?
Some code I have:
QVector3D CameraPerspective::GetUnitVectorOfCameraAngle()
{
    QVector3D initial(0, 1, 0);
    QMatrix4x4 rotation_matrix;
    // rotate around the z axis
    rotation_matrix.rotate(_angle_around_z, 0, 0, 1);
    // rotate around the x axis
    rotation_matrix.rotate(_angle_around_x, 1, 0, 0);
    initial = initial * rotation_matrix;
    return initial;
}

Coordinate CameraPerspective::GetFurthestPointInFront()
{
    QVector3D camera_angle_vector = GetUnitVectorOfCameraAngle();
    camera_angle_vector.normalize();
    QVector3D furthest_point_infront = camera_angle_vector * _camera_information._maximum_distance_mm;
    return Coordinate(furthest_point_infront + _position_of_this);
}
Thanks
A complete answer with code would probably be way too long for SO; I hope this will be enough. In the following we work in homogeneous coordinates.
I have currently implemented the parts that find the minimum and maximum length of view based on the camera's and lens's intrinsic characteristics.
That isn't enough to fully define your camera. You also need a field of view angle and the width/height ratio.
With all this information (near plane + far plane + fov + ratio), you can build a 4x4 matrix known as the perspective matrix. Google for it or check here for some references. This matrix maps the pyramidal region of space which your camera "sees" (usually simply called the frustum) to the [-1,1]x[-1,1]x[-1,1] cube. Call it P.
Now you need a 4x4 camera matrix which transforms points in world space to points in camera space. Since you know the camera position and the camera orientation, this can be constructed easily (there is no room here to fully explain how transformation matrices in homogeneous coordinates work; google for it). Call this matrix C.
Now consider the matrix A = P * C.
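As a sketch, both matrices can be built with glm (all parameter names here are assumptions):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 P = glm::perspective(glm::radians(fovY), widthToHeightRatio, nearPlane, farPlane);
glm::mat4 C = glm::lookAt(cameraPos, cameraPos + viewDirection, upVector);
glm::mat4 A = P * C;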
This matrix transforms points in world coordinates to points in perspective space. Your camera will "see" those points if they are inside the [-1,1]x[-1,1]x[-1,1] cube. But you can invert this matrix in order to map points inside the cube back to points in world space. So in order to obtain the 8 points you need in world space, you can simply do:
y = A^(-1) * x
where x is one of the cube's corners:

[-1, -1, -1, 1]   (left, bottom, near)
[-1, -1,  1, 1]   (left, bottom, far)
etc.
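A minimal sketch of that last step, assuming glm and the matrix A from above (remember the homogeneous divide by w after multiplying):

#include <vector>

glm::mat4 invA = glm::inverse(A); // A^(-1)
std::vector<glm::vec3> corners;
for (float z : {-1.0f, 1.0f})            // near, far
    for (float y : {-1.0f, 1.0f})        // bottom, top
        for (float x : {-1.0f, 1.0f}) {  // left, right
            glm::vec4 p = invA * glm::vec4(x, y, z, 1.0f);
            corners.push_back(glm::vec3(p) / p.w); // homogeneous divide
        }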
In my hobbyist shader-based (non-FFP) GL (3.2+ core) "engine", everything in world-space and model-space is by design "left-handed" (and to stay that way), so X-axis goes from -1 ("left") to 1 ("right"), Y from -1 ("bottom") to 1 ("top") and Z from -1 ("near") to 1 ("far").
Now, by default in OpenGL the NDC space works the same, but the clip space doesn't: from what I gather, there z extends from 1 ("near") to -1 ("far").
At the same time I want to ideally keep using the "kinda-sorta unofficial quasi-standard" matrix functions for lookat and perspective, currently defined as:
func (me *Mat4) Lookat(eyePos, lookTarget, upVec *Vec3) {
    l := lookTarget.Sub(eyePos)
    l.Normalize()
    s := l.Cross(upVec)
    s.Normalize()
    u := s.Cross(l)
    me[0], me[4], me[8], me[12] = s.X, u.X, -l.X, -eyePos.X
    me[1], me[5], me[9], me[13] = s.Y, u.Y, -l.Y, -eyePos.Y
    me[2], me[6], me[10], me[14] = s.Z, u.Z, -l.Z, -eyePos.Z
    me[3], me[7], me[11], me[15] = 0, 0, 0, 1
}

// a: aspect ratio. n: near-plane. f: far-plane.
func (me *Mat4) Perspective(fovY, a, n, f float64) {
    s := 1 / math.Tan(DegToRad(fovY)/2) // scaling
    me[0], me[4], me[8], me[12] = s/a, 0, 0, 0
    me[1], me[5], me[9], me[13] = 0, s, 0, 0
    me[2], me[6], me[10], me[14] = 0, 0, (f+n)/(n-f), (2*f*n)/(n-f)
    me[3], me[7], me[11], me[15] = 0, 0, -1, 0
}
So, for the lookat part, to have my world-space camera (positive-Z) work with lookat (negative-Z), I use this pseudocode:
// world-space position:
camPos := cam.Pos
// normalized direction-vector, up/right/forward are 1 not -1:
camTarget := cam.Dir
// lookat-target:
camTarget.Add(&camPos)
// invert both Z:
camPos.Z, camTarget.Z = -camPos.Z, -camTarget.Z
// compute lookat-matrix:
cam.mat.Lookat(&camPos, &camTarget, &Vec3{0, 1, 0})
That works well. Moving the camera in all 6 degrees of freedom produces correct on-screen movement and correct new camera world-space coords.
But geometry is still inverted on the Z-axis. When I position two boxes, A at (-2, 1, -2) to appear near-left and B at (2, 1, 2) to appear far-right, then A appears far-left and B appears near-right. Z is still inverted here.
Now, these nodes have their own world-space coordinates, from which they update their own model-to-world matrices. I shouldn't invert posZ there, as they form a hierarchy of sub-nodes multiplying with their parents' transforms and all that. They're still in model or world space, which as per my decree is to remain left-handed.
Their world-to-camera calculation happens on the CPU at my end, not in a vertex shader, which just gets a single final (mvp/clip-space) matrix.
When that happens -- multiplication of world-space-object-matrix with clip-space lookat-and-projection matrix -- at that point I need to somehow invert Z.
What's the best way to do this? Or, more generally speaking, what's a common way that works? Do I have to modify the projection to accept left-handed but output-to-GL right-handed? If so, how? And then wouldn't I also have to modify lookat? Is there a smart way to do all this without having to modify the somewhat-standard lookat/projection matrices while also keeping model-transform-matrices in left-handed coords?
In Perspective, changing me[11] from -1 to 1 should invert the z axis the way you're describing. If that isn't correct, try negating me[10] instead. Of course, because the z axis is inverted, the directions of your rotations will be affected as well. If I recall right, rotations around the y axis, and possibly the x axis, will be inverted. If this is the case you should be able to just negate those rotations to counteract it.
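Equivalently (a sketch of the same idea; this reformulation and the glm names are mine, not from the code above): negating that third column of the projection matrix is the same as right-multiplying it by a Z-flip scale, which converts the left-handed input to OpenGL's right-handed clip convention in one place:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Scale Z by -1 once, between the projection and the view transform.
glm::mat4 flipZ = glm::scale(glm::mat4(1.0f), glm::vec3(1.0f, 1.0f, -1.0f));
glm::mat4 clipFromWorld = projection * flipZ * view; // same effect as the sign tweaks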
(This is all in ortho mode, origin is in the top left corner, x is positive to the right, y is positive down the y axis)
I have a rectangle in world space, which can have a rotation m_rotation (in degrees).
I can work with the rectangle fine, it rotates, scales, everything you could want it to do.
The part that I am getting really confused on is calculating the rectangle's world coordinates from its local coordinates.
I've been trying to use the formula:
x' = x*cos(t) - y*sin(t)
y' = x*sin(t) + y*cos(t)
where (x, y) are the original points,
(x', y') are the rotated coordinates,
and t is the angle measured in radians
from the x-axis. The rotation is
counter-clockwise as written.
-credits duffymo
I tried implementing the formula like this:
//GLfloat Ax = getLocalVertices()[BOTTOM_LEFT].x * cosf(DEG_TO_RAD( m_orientation )) - getLocalVertices()[BOTTOM_LEFT].y * sinf(DEG_TO_RAD( m_orientation ));
//GLfloat Ay = getLocalVertices()[BOTTOM_LEFT].x * sinf(DEG_TO_RAD( m_orientation )) + getLocalVertices()[BOTTOM_LEFT].y * cosf(DEG_TO_RAD( m_orientation ));
//Vector3D BL = Vector3D(Ax,Ay,0);
I create a vector to the translated point and store it in the rectangle's world_vertice member variable. That's fine. However, in my main draw loop, I draw a line from (0,0,0) to the vector BL, and it seems as if the line is going in a circle from the point on the rectangle (the rectangle's bottom left corner) around the origin of the world coordinate system.
Basically, as m_orientation gets bigger it draws a huge circle around the (0,0,0) world coordinate system origin. edit: when m_orientation = 360, it gets set back to 0.
I feel like I am doing this part wrong:
and t is the angle measured in radians
from the x-axis.
Possibly I am not supposed to use m_orientation (the rectangle's rotation angle) in this formula?
Thanks!
Edit: the reason I am doing this is collision detection. I need to know where the coordinates of the rectangles (soon to be rigid bodies) lie in world coordinate space.
What you are doing is a rotation (a special linear transformation) of a vector by an angle Q in 2D. It keeps the vector's length and changes its direction around the origin.
(A linear transformation is additive: L(m + n) = L(m) + L(n) for vectors m, n; and homogeneous: L(k·m) = k·L(m) for a vector m and a scalar k.) So:
You split your vector into two pieces, so that m·[1, 0] + n·[0, 1] = your vector.
Then, as you see in the image, the rotation is applied to these two pieces, after which your vector takes the form:
m·[cosQ, sinQ] + n·[-sinQ, cosQ] = [m·cosQ - n·sinQ, m·sinQ + n·cosQ]
You can also look at the Wikipedia article on rotation.
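As a quick numeric check of the formula (my own example): rotating the vector with m = 2, n = 1 by Q = 90 degrees should give (-1, 2).

#include <cmath>
#include <cstdio>

int main() {
    float m = 2.0f, n = 1.0f;          // vector = m*[1, 0] + n*[0, 1]
    float Q = 3.14159265f / 2.0f;      // 90 degrees in radians
    float xr = m * std::cos(Q) - n * std::sin(Q);
    float yr = m * std::sin(Q) + n * std::cos(Q);
    std::printf("(%f, %f)\n", xr, yr); // prints approximately (-1, 2)
    return 0;
}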
If you are trying to obtain the eye coordinates corresponding to your object coordinates, you should multiply your object coordinates by the model-view matrix in OpenGL.
For M the model-view matrix and [x y z w]^T your object coordinates:
M · [x y z w]^T = eye coordinates of [x y z w]^T
This seems to be overcomplicating things somewhat: typically you would store an object's world position and orientation separately from its own set of local coordinates. Rotating the object is done in model space and therefore the position is unchanged. The world position of each coordinate is the same whether you do a rotation or not - add the world position to the local position to translate the local coordinates to world space.
Any rotation occurs around a specific origin, and the typical sin/cos formula presumes (0,0) is your origin. If the coordinate system in use doesn't currently have (0,0) as the origin, you must translate it to one that does, perform the rotation, then transform back. Usually model space is defined so that (0,0) is the origin for the model, making this step trivial.
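As a minimal sketch of that rotate-then-translate order (the names Vec2 and localToWorld are illustrative, not from the question's code):

#include <cmath>

struct Vec2 { float x, y; };

// Rotate a local-space corner around the model origin (0, 0),
// then translate it by the object's world position.
Vec2 localToWorld(Vec2 local, Vec2 worldPos, float degrees) {
    float t = degrees * 3.14159265f / 180.0f; // degrees -> radians
    Vec2 rotated = { local.x * std::cos(t) - local.y * std::sin(t),
                     local.x * std::sin(t) + local.y * std::cos(t) };
    return { rotated.x + worldPos.x, rotated.y + worldPos.y };
}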