I have a texture that I am wrapping around a sphere similar to this image on Wikipedia.
https://upload.wikimedia.org/wikipedia/commons/0/04/UVMapping.png
What I am trying to achieve is to take a UV coordinate from the texture, let's say (0.8, 0.8), which would roughly be around Russia in the above example.
From this UV, somehow calculate the rotation I would need to apply to the sphere to have that UV centred on the sphere.
Could someone point me in the right direction of the math equation I would need to calculate this?
Edit - it was pointed out that I am actually looking for the rotation of the sphere so that the UV is centred towards the camera. So, starting with a rotation of (0, 0, 0), my camera is pointed at the UV (0, 0.5).
Thanks
This particular type of spherical mapping has the very convenient property that the UV coordinates are linearly proportional to the corresponding polar coordinates.
Let's assume for convenience that:
UV (0.5, 0.5) corresponds to the Greenwich Meridian line / Equator - i.e. (0° N, 0° E)
The mesh is initially axis-aligned
The texture is centered at spherical coordinates (θ, φ) = (π/2, 0) - i.e. the X-axis
A diagram to demonstrate:
Using the boundary conditions:
U = 0 -> φ = -π
U = 1 -> φ = +π
V = 1 -> θ = 0
V = 0 -> θ = π
We can deduce the required equations:

φ = 2π (U - 1/2)
θ = π (1 - V)

and the corresponding direction vector r in the sphere's local space (θ measured from the +Z axis):

r = (sin θ cos φ, sin θ sin φ, cos θ)
Assuming the sphere has rotation matrix R and is centered at c, simply use lookAt with:
Position: c + d * (R * r), where d is the desired camera distance
Direction: -(R * r)
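The mapping can be sketched in C++. This is a minimal sketch under the boundary conditions above, i.e. φ = 2π(U - ½) and θ = π(1 - V) with θ measured from the +Z axis; the Vec3 struct is just a placeholder for whatever math type you use (e.g. osg::Vec3 or glm::vec3):

```cpp
#include <cmath>

// Placeholder 3-vector; substitute your math library's type.
struct Vec3 { double x, y, z; };

// Convert a UV coordinate on the equirectangular texture into the unit
// direction r in the sphere's local space:
//   phi   = 2*pi*(U - 0.5)   (longitude)
//   theta = pi*(1 - V)       (polar angle from +Z)
Vec3 uvToDirection(double u, double v) {
    const double PI = 3.14159265358979323846;
    double phi   = 2.0 * PI * (u - 0.5);
    double theta = PI * (1.0 - v);
    return { std::sin(theta) * std::cos(phi),
             std::sin(theta) * std::sin(phi),
             std::cos(theta) };
}
```

R * r then rotates this local direction into world space, and the camera is placed at c + d * (R * r) looking along -(R * r).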
I was trying to place a sphere in 3D space from a user-selected point in 2D screen space. For this I am trying to calculate the 3D point from the 2D point using the technique below, but it is not giving the correct result.
mousePosition.x = ((clickPos.clientX - window.left) / control.width) * 2 - 1;
mousePosition.y = -((clickPos.clientY - window.top) / control.height) * 2 + 1;
Then I am multiplying mousePosition with the inverse of the MVP matrix, but I am getting random numbers as the result.
For calculating the MVP matrix:
osg::Matrix mvp = _camera->getViewMatrix() * _camera->getProjectionMatrix();
How can I proceed? Thanks.
Under the assumption that the mouse position is normalized to the range [-1, 1] for x and y, the following code will give you two points in world coordinates projected from your mouse coords: nearPoint is the point in 3D lying on the camera frustum's near plane, farPoint on the frustum's far plane.
Then you can compute the line passing through these points and intersect it with your plane.
// compute the matrix to unproject the mouse coords (in homogeneous space)
osg::Matrix VP = _camera->getViewMatrix() * _camera->getProjectionMatrix();
osg::Matrix inverseVP;
inverseVP.invert(VP);
// compute the world-space near and far points
// (note: use the mouse y for the second component, not x twice)
osg::Vec3 nearPoint(mousePosition.x, mousePosition.y, -1.0f);
osg::Vec3 farPoint(mousePosition.x, mousePosition.y, 1.0f);
osg::Vec3 nearPointWorld = nearPoint * inverseVP;
osg::Vec3 farPointWorld = farPoint * inverseVP;
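The line-plane intersection mentioned above can then be sketched as follows. This is a minimal, self-contained version with a placeholder V3 type (in OSG you would use osg::Vec3 and its operators); the plane is assumed to be given by a point p0 and a normal n:

```cpp
#include <cmath>

// Placeholder 3-vector for the sketch; in OSG use osg::Vec3 directly.
struct V3 { double x, y, z; };
static V3     sub(V3 a, V3 b)      { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3     add(V3 a, V3 b)      { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static V3     mul(V3 a, double s)  { return {a.x * s, a.y * s, a.z * s}; }
static double dot(V3 a, V3 b)      { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Intersect the line through nearW/farW with the plane through p0 with
// normal n. Returns false if the line is parallel to the plane.
bool intersectPlane(V3 nearW, V3 farW, V3 p0, V3 n, V3& hit) {
    V3 d = sub(farW, nearW);                      // line direction
    double denom = dot(n, d);
    if (std::fabs(denom) < 1e-12) return false;   // parallel to plane
    double t = dot(n, sub(p0, nearW)) / denom;    // line parameter at the hit
    hit = add(nearW, mul(d, t));
    return true;
}
```

Feed nearPointWorld and farPointWorld from the unprojection above in as nearW and farW, and the resulting hit is the 3D point under the mouse on your plane.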
The target is to draw a shape, let's say a triangle, pixel-perfect (vertices shall be specified in pixels) and to be able to transform it in the third dimension.
I've tried it with an orthogonal projection matrix and everything works fine, but the shape doesn't have any depth: if I rotate it around the Y axis it looks as if I had just scaled it along the X axis (because an orthogonal projection obviously behaves like this). Now I want to try it with a perspective projection. But with this projection the coordinate system changes completely, so I can no longer specify my triangle's vertices in pixels. Also, if the size of my window changes, the size of the shape changes too (because of the changed coordinate system).
Is there any way to change the coordinate system of the perspective projection so that I can specify my vertices as if I were using the orthogonal projection? Or does anyone have an idea how to achieve the target described in the first sentence?
The projection matrix describes the mapping from the 3D points of a scene to the 2D points of the viewport. It transforms from eye space to clip space, and the coordinates in clip space are transformed to normalized device coordinates (NDC) by dividing by the w component of the clip coordinates. The NDC range from (-1, -1, -1) to (1, 1, 1).
With a perspective projection, the projection matrix describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to 2D points of the viewport. The eye-space coordinates in the camera frustum (a truncated pyramid) are mapped to a cube (the normalized device coordinates).
Perspective Projection Matrix:
r = right, l = left, b = bottom, t = top, n = near, f = far
2*n/(r-l)    0            0             0
0            2*n/(t-b)    0             0
(r+l)/(r-l)  (t+b)/(t-b)  -(f+n)/(f-n)  -1
0            0            -2*f*n/(f-n)  0
where:
aspect = w / h
tanFov = tan( fov_y * 0.5 );
prjMat[0][0] = 2*n/(r-l) = 1.0 / (tanFov * aspect)
prjMat[1][1] = 2*n/(t-b) = 1.0 / tanFov
I assume that the view matrix is the identity matrix, and thus the view space coordinates are equal to the world coordinates.
If you want to draw a polygon whose vertex coordinates translate 1:1 into pixels, then you have to draw the polygon in a plane parallel to the viewport; i.e., all points have to be drawn at the same depth. The depth has to be chosen such that transforming a point in normalized device coordinates by the inverse projection matrix gives the vertex coordinates in pixels. Note that the homogeneous coordinates given by the transformation with the inverse projection matrix have to be divided by their w component to get Cartesian coordinates.
This means that the depth of the plane depends on the field of view angle of the projection:
Assuming you set up a perspective projection like this:
float vp_w = .... // width of the viewport in pixel
float vp_h = .... // height of the viewport in pixel
float fov_y = ..... // field of view angle (y axis) of the view port in degrees < 180°
gluPerspective( fov_y, vp_w / vp_h, 1.0, vp_h*2.0f );
Then the depthZ of the plane with a 1:1 relation between vertex coordinates and pixels is calculated like this:
float angRad = fov_y * PI / 180.0;
float depthZ = -vp_h / (2.0 * tan( angRad / 2.0 ));
Note, the center point of the projection onto the viewport is (0, 0), so the bottom-left corner point of the plane is (-vp_w/2, -vp_h/2, depthZ) and the top-right corner point is (vp_w/2, vp_h/2, depthZ). Ensure that the near plane of the perspective projection is less than -depthZ and the far plane is greater than -depthZ.
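As a sanity check, the calculation can be verified numerically: with the prjMat[0][0] and prjMat[1][1] terms from above, a point at the top-right pixel corner (vp_w/2, vp_h/2, depthZ) should land exactly at NDC (1, 1). A minimal sketch:

```cpp
#include <cmath>

// Project an eye-space point to NDC using only the prjMat[0][0] and
// prjMat[1][1] terms of the perspective matrix plus the -1 in the w column.
struct Ndc { double x, y; };

Ndc projectToNdc(double x, double y, double z,
                 double fov_y_deg, double vp_w, double vp_h) {
    const double PI = 3.14159265358979323846;
    double tanFov = std::tan(fov_y_deg * PI / 180.0 * 0.5);
    double aspect = vp_w / vp_h;
    double clipX = x / (tanFov * aspect);  // prjMat[0][0] * x
    double clipY = y / tanFov;             // prjMat[1][1] * y
    double clipW = -z;                     // w_clip = -z_eye
    return { clipX / clipW, clipY / clipW };
}
```

For example, with an 800x600 viewport and fov_y = 90°, depthZ = -600 / (2 * tan(45°)) = -300, and the corner point (400, 300, -300) projects to NDC (1, 1).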
See further:
Both depth buffer and triangle face orientation are reversed in OpenGL
Transform the modelMatrix
I have a ray with an origin (x,y,z) and a direction (dx, dy, dz) given in homogeneous eye space coordinates:
p = (x,y,z,1) + t * (dx, dy, dz, 0)
What I need to calculate is a positive value of t that, for a given pixel distance n, results in a point n pixels away from the screen projection of (x, y, z). How can I achieve this?
Regards
Projecting a point onto a plane means obtaining the intersection with that plane of the ray through the point in the direction of the view. Let's say the view direction is the vector v.
Put the origin of your ray (let's call it O = (x, y, z)) on the line from the camera perpendicular to the near plane of projection, and call its projection P. Then a second point (which you expressed as S = O + t·d) will project onto the near plane at a point T. You need the t that makes the distance PT = n.
If you take the cross product c = v×d you get a distance in the near plane. Remember |v×d| = |v||d|·sin(a); if both v and d are normalized, that magnitude is the sine of the angle between v and d.
If d is not normalized (call it dnn), i.e. |dnn| = distance(O, S), then the distance between O and S after projection onto the near plane is k = |v×dnn| = distance(O, S)·|v×d| = t·|v×d|. Setting the required value n equal to k gives t = n / |v×d|, with both v and d normalized.
Using pixels instead of values in the near plane is just a matter of scaling properly with window sizes.
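A minimal numeric sketch of that last formula, assuming n has already been converted from pixels into near-plane units and that v and d are unit vectors (the Vec3 type here is a stand-in for your math library's vector):

```cpp
#include <cmath>

// Solve n = t * |v x d| for t, with v (view direction) and d (ray
// direction) normalized and n already scaled into near-plane units.
struct Vec3 { double x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static double length(Vec3 a) { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }

// Caller must ensure d is not parallel to v (the sine would be zero).
double solveT(Vec3 v, Vec3 d, double n) {
    double s = length(cross(v, d));   // sin of the angle between v and d
    return n / s;
}
```

For instance, when d is perpendicular to v the sine is 1 and t = n; as d approaches the view direction the sine shrinks and t grows accordingly.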
I am trying to understand the math behind the transformation from world coordinates to view coordinates.
This is the formula to calculate the matrix in view coordinates:
and here is an example that should normally be correct...:
where b = the width of the viewport and h = the height of the viewport
But I just don't know how to calculate the R matrix. How do you get Ux, Uy, Uz, Vx, Vy, etc.? u, v, and n form the coordinate system fixed to the camera, and the camera is at position (X0, Y0, Z0).
The matrix T is applied first. It translates some world coordinate P by minus the camera coordinate (call it C), giving the relative coordinate of P (call this Q) with respect to the camera (Q = P - C), in the world axes orientation.
The matrix R is then applied to Q. It performs a rotation to obtain the coordinates of Q in the camera's axes.
u is the horizontal view axis
v is the vertical view axis
n is the view direction axis
(all three should be normalized)
Multiplying R with Q :
multiplying with the first line of R gives DOT(Q, u). This returns the component of Q projected onto u, which is the horizontal view coordinate.
the second line gives DOT(Q, v), which similar to above gives the vertical view coordinate.
the third line gives DOT(Q, n), which is the depth view coordinate.
A diagram:
BTW These are NOT screen/viewport coordinates! They are just the coordinates in the camera/view frame. To get the perspective-corrected coordinate another matrix (the projection matrix) needs to be applied.
I checked the result of the fwidth GLSL function by coloring it red on a plane around the camera.
The result is a bizarre pattern. I thought it would be a circular gradient on the plane, extending around the camera as a function of distance: pixels further away should uniformly represent larger UV differences between adjacent pixels.
Why isn't fwidth(UV) a simple gradient as a function of distance from the camera? I don't understand how it would work properly if it isn't, because I want to anti-alias pixels as a function of the amplitude of the UV coordinates between them.
float width = fwidth(i.uv)*.2;
return float4(width,0,0,1)*(2*i.color);
UVs that are close = black, and far = red.
Result:
The above pattern from fwidth is axis-aligned and has one axis of symmetry. It couldn't anti-alias a two-axis checkerboard, an n-axis texture of Perlin noise, or a radial checkerboard:
float2 xy0 = float2(i.uv.x , i.uv.z) + float2(-0.5, -0.5);
float c0 = length(xy0); //sqrt of xx+yy, polar coordinate radius math
float r0 = atan2(i.uv.x-.5,i.uv.z-.5);//angle polar coordinate
float ww = round(sin(c0 * freq) * sin(r0 * 50) * 0.5 + 0.5);
Axis independent aliasing pattern:
The mipmapping and filtering parameters are determined by the partial derivatives of the texture coordinates in screen space, not by the distance (in fact, as soon as the fragment stage kicks in, there's no such thing as distance anymore).
I suggest you replace the fwidth visualization with a procedurally generated checkerboard (i.e. (mod(uv.s * k, 1) > 0.5)*(mod(uv.t * k, 1) < 0.5), where k is a scaling parameter); you'll see that the "density" of the checkerboard (and the aliasing artifacts) is highest where you've got the most red in your picture.
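The same effect can be reproduced on the CPU with finite differences. The sketch below assumes a hypothetical setup (camera at height 1 above a ground plane y = 0 with uv = (x, z), looking down -Z with a 90° vertical FOV over a 400x400 viewport) and approximates fwidth(uv) as the sum of the absolute per-pixel UV deltas; it shows the value tracking the screen-space UV derivative rather than a simple distance falloff:

```cpp
#include <cmath>

// CPU approximation of fwidth(uv) for a ground plane.
// Hypothetical setup: camera at (0, 1, 0) looking down -Z, 90° vertical FOV,
// 400x400 viewport, plane y = 0 textured with uv = (x, z).
struct UV { double u, v; };

static const int W = 400, H = 400;

// UV of the plane point seen through pixel (px, py); only rays below the
// horizon (ndcY < 0) hit the plane.
static bool pixelUV(int px, int py, UV& uv) {
    double ndcX = (px + 0.5) / (W * 0.5) - 1.0;
    double ndcY = (py + 0.5) / (H * 0.5) - 1.0;
    if (ndcY >= 0.0) return false;   // at or above the horizon
    double t = -1.0 / ndcY;          // ray d = (ndcX, ndcY, -1), camera height 1
    uv.u = t * ndcX;                 // hit.x
    uv.v = -t;                       // hit.z
    return true;
}

// fwidth(uv) ~ |d(uv)/dx| + |d(uv)/dy|, via one-pixel finite differences.
static double fwidthUV(int px, int py) {
    UV c, rx, ry;
    if (!pixelUV(px, py, c) || !pixelUV(px + 1, py, rx) || !pixelUV(px, py + 1, ry))
        return -1.0;
    return std::fabs(rx.u - c.u) + std::fabs(ry.u - c.u)
         + std::fabs(rx.v - c.v) + std::fabs(ry.v - c.v);
}
```

fwidthUV(200, 190) (a row close to the horizon) comes out orders of magnitude larger than fwidthUV(200, 50), even though both pixels sample the same plane; that derivative pattern, not the distance, is what mipmapping and any fwidth-based anti-aliasing react to.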