Distance to points on a cube and calculating a plane given a normal and a position [closed] - c++

I have two questions that I have not been able to find good answers to on Google.
My first question - generating planes
I am trying to calculate the 4 vertices for a finite plane based on a provided normal, a position, and a radius. How can I do this? An example of some pseudo-code or a description of an algorithm to produce the 4 vertices of a finite plane would be much appreciated.
Furthermore, it would be useful to know how to rotate a plane with an arbitrary normal to align with another plane, such that their normals are the same, and their vertices are aligned.
My second question - distance to points on a cube
How do I calculate the distance to a point on the surface of a cube, given a vector from the centre of the cube?
This is quite hard to explain, so my Google searches on it have been hard to phrase well.
Basically, I have a cube with side length s and a vector v from the centre of the cube, and I want to know the distance from the centre of the cube to the point on the surface that the vector points to. Is there a generalised formula for this distance?
An answer to either of these would be appreciated, but a solution to the cube distance problem is the one that would be more convenient at this moment.
Thanks.
Edit:
When I say "finite plane", what I mean is a quad. Forgive the bad terminology, but I prefer to call it a plane because I am calculating the quad based on a plane; the quad's vertices are just 4 points on the surface of that plane.
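One way to build such a quad (a rough sketch of a standard approach, not from this thread; all names are illustrative): pick any direction not parallel to the normal, use cross products to build an orthonormal basis (u, v) spanning the plane, and offset the centre along ±u and ±v.

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Corners of a quad centred at `pos`, lying in the plane with unit normal
// `n`, each corner at distance `radius` from the centre.
void quadFromPlane(Vec3 n, Vec3 pos, float radius, Vec3 out[4]) {
    // Any helper direction not parallel to n will do.
    Vec3 helper = (std::fabs(n.x) < 0.9f) ? Vec3{1, 0, 0} : Vec3{0, 1, 0};
    Vec3 u = normalize(cross(n, helper));
    Vec3 v = cross(n, u);  // already unit length: n and u are orthonormal

    float d = radius / std::sqrt(2.0f);  // offset so corners sit at `radius`
    float su[4] = { +1, -1, -1, +1 };    // corner signs along u
    float sv[4] = { +1, +1, -1, -1 };    // corner signs along v
    for (int i = 0; i < 4; ++i) {
        out[i] = { pos.x + d * (su[i] * u.x + sv[i] * v.x),
                   pos.y + d * (su[i] * u.y + sv[i] * v.y),
                   pos.z + d * (su[i] * u.z + sv[i] * v.z) };
    }
}

Since the quad is generated from the plane, aligning one such quad with another reduces to regenerating it with the target plane's normal and position; alternatively, rotate its vertices by the rotation that takes one unit normal onto the other (a from-to quaternion or Rodrigues' rotation formula).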

Second Question:
Say your vector is v = (x, y, z), and let h = s/2 be the cube's half-extent (with side length s, each face lies at distance s/2 from the centre).
The vector hits the cube surface at the point where the largest coordinate in absolute value equals h, or mathematically:
(x, y, z) * (h/m)
where
m = max{ |x| , |y| , |z| }
The distance is:
|| (x, y, z) * (h/m) || = sqrt(x^2 + y^2 + z^2) * (h / max{ |x| , |y| , |z| })
We can also formulate the answer in norms:
distance = (s/2) * ||v||_2 / ||v||_inf
(These are the l2 norm and the l-infinity norm.)
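A minimal sketch of this formula in C++ (the function name is mine; the result is undefined for the zero vector, where m would be 0):

#include <algorithm>
#include <cmath>

// Distance from the centre of an axis-aligned cube of side length s to the
// point on its surface hit by the direction v = (x, y, z):
// distance = (s/2) * ||v||_2 / ||v||_inf
float distanceToCubeSurface(float x, float y, float z, float s) {
    float m   = std::max({ std::fabs(x), std::fabs(y), std::fabs(z) }); // ||v||_inf
    float len = std::sqrt(x * x + y * y + z * z);                       // ||v||_2
    return (s * 0.5f) * len / m;
}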

Related

What's the relationship between the barycentric coordinates of triangle in clip space and the one in screen space [closed]

Suppose I have a triangle, say (PC0, PC1, PC2), whose barycentric coordinates are ((1,0), (0,1), (0,0)); they are all in clip space.
Now I want to calculate the interpolated barycentric coordinates in screen space. How can I do that correctly? I have seen something about perspective-correct interpolation, but I can't find a good mapping relationship between the two.
The conversion from screen-space barycentric (b0,b1,b2) to clip-space barycentric (B0,B1,B2) is given by:
(B0, B1, B2) = (b0/w0, b1/w1, b2/w2) / (b0/w0 + b1/w1 + b2/w2)
Where (w0,w1,w2) are the w-coordinates of the three vertices of the triangle, as set by the vertex shader. (See How exactly does OpenGL do perspectively correct linear interpolation?).
The inverse transformation is given by:
(b0, b1, b2) = (B0*w0, B1*w1, B2*w2) / (B0*w0 + B1*w1 + B2*w2)
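A small sketch of both conversions, assuming the clip-space w of each vertex is available (function names are illustrative):

#include <array>

// Screen-space barycentrics (b0, b1, b2) -> clip-space (B0, B1, B2).
std::array<float, 3> screenToClipBary(float b0, float b1, float b2,
                                      float w0, float w1, float w2) {
    float B0 = b0 / w0, B1 = b1 / w1, B2 = b2 / w2;
    float sum = B0 + B1 + B2;
    return { B0 / sum, B1 / sum, B2 / sum };
}

// Clip-space barycentrics (B0, B1, B2) -> screen-space (b0, b1, b2).
std::array<float, 3> clipToScreenBary(float B0, float B1, float B2,
                                      float w0, float w1, float w2) {
    float b0 = B0 * w0, b1 = B1 * w1, b2 = B2 * w2;
    float sum = b0 + b1 + b2;
    return { b0 / sum, b1 / sum, b2 / sum };
}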

Creating accurate 3d information using opengl? [closed]

I am interested in generating a 360 degree rendition of a real terrain model (a triangulated terrain) using OpenGL, so that I can extract accurate 3D information in the way of depth, orientation (azimuth) and angle of elevation. That is, for each pixel I want to end up with accurate information about the angle of elevation, azimuth and depth as measured from the camera position. The 360 degree view would be 'stitched together' after the camera is rotated around. My question is: how accurate would the information be?
If I had a camera width of 100 pixels, a horizontal field of view of 45 degrees, and rotated 8 times around, would each orientation (1/10th of a degree) have the right depth and angle of elevation?
If this is not accurate due to projection, is there a way to adjust for any deviations?
Just as an illustration, the figure below shows a panorama I created (not with OpenGL). The image has 3600 columns (one per 1/10th of a degree in azimuth, where each column covers the same angular width), depth (in meters) and the elevation (not the angle of elevation). This was computed programmatically, without OpenGL.

Field of view for uncentered, distorted image [closed]

Consider the following diagram and equations representing a pinhole camera:
Suppose the image size is W times H pixels, and that there is no nonlinear distortion. To compute the field of view I proceed as in the picture below:
where \tilde{H} is the image height measured in image-plane units (not in pixel coordinates), and s_y is the height of a pixel in those same units.
In an exercise I'm told to account for the fact that the principal point might not be in the image center.
How could this happen, and how do we correct the FOV in this case?
Moreover, suppose the image was distorted as follows, before being projected on the pixel coordinates:
How do we account for the distortion in the FOV? How is it even defined?
The principal point may not be centered in the image for a variety of reasons, for example, the lens may be slightly decentered due to the mechanics of the mount, or the image may have been cropped.
To compute the FOV with a decentered principal point, you just redo your computation separately for the angles to the left and right of the focal axis (for the horizontal FOV; above and below for the vertical), and add the two angles up.
The FOV is defined in exactly the same way, as the angle between the light rays that project to the left and right extrema of the image row containing the principal point. To compute it you first need to undistort those pixel coordinates. For ordinary photographic lenses, where the barrel term dominates the distortion, the result is a slightly larger FOV than what you compute ignoring the distortion. Note also that, due to the nonlinearity of the distortion, the horizontal, vertical and diagonal FOVs are not simply related through the image aspect ratio once the distortion is taken into account.
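A sketch of the decentered computation for the horizontal case, assuming pinhole intrinsics fx (focal length in pixels) and cx (principal point x coordinate in pixels), with distortion already removed (parameter names are mine):

#include <cmath>

// Horizontal FOV in radians; the half-angles on either side of the optical
// axis are computed separately and summed.
double horizontalFov(double fx, double cx, int imageWidth) {
    double left  = std::atan(cx / fx);                 // angle to left edge
    double right = std::atan((imageWidth - cx) / fx);  // angle to right edge
    return left + right;
}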

Why taking cross product of tangent vectors gives the normal vector? [closed]

I have a depth image, say z = f(x, y) with (x, y) in pixels. I want to calculate the normal vector at each pixel to create the normal map.
I've been trying the approach at Calculate surface normals from depth image using neighboring pixels cross product. In this approach, the "input" to the cross product is two tangent vectors built from the gradients dz/dx and dz/dy. These two tangent vectors span the tangent plane at the point (x, y, f(x, y)), and the cross product then gives the normal vector to this plane.
However, I don't understand why the normal vector to this tangent plane of the surface (x, y, f(x, y)) is also the normal vector in world coordinates that I'm trying to find. Is there an assumption here? How can this approach be used to find the normal vector at each pixel?
That is almost the definition of the normal to a surface: the normal is locally perpendicular to the tangent plane, and the cross product of two non-collinear vectors is perpendicular to both of them. That is why the cross product of two non-collinear vectors lying in the tangent plane is perpendicular to that tangent plane, and thus points along the normal direction.
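A sketch of that technique, assuming a row-major depth buffer with unit pixel spacing, so the normals come out in the depth image's own coordinate frame (names are mine):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Tangent vectors tx = (1, 0, dz/dx) and ty = (0, 1, dz/dy); their cross
// product tx x ty = (-dz/dx, -dz/dy, 1), normalized, is the surface normal.
std::vector<Vec3> normalMap(const std::vector<float>& depth, int w, int h) {
    std::vector<Vec3> normals(depth.size());  // borders stay zero for brevity
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            // Central differences approximate the two gradients.
            float dzdx = (depth[y * w + x + 1] - depth[y * w + x - 1]) * 0.5f;
            float dzdy = (depth[(y + 1) * w + x] - depth[(y - 1) * w + x]) * 0.5f;
            float len = std::sqrt(dzdx * dzdx + dzdy * dzdy + 1.0f);
            normals[y * w + x] = { -dzdx / len, -dzdy / len, 1.0f / len };
        }
    }
    return normals;
}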

How can I convert mouse Position from SFML to OpenGL coordinate? [closed]

I want to move a box so that it follows the mouse position, but I don't know how to convert the position I get from sf::Mouse::getPosition() into OpenGL coordinates.
If you can, use the gluUnProject function from the GLU library. Otherwise, you will need to reimplement it by computing the inverses of both the modelview and projection matrices, then applying them in reverse order (i.e. reverse projection, then reverse modelview) to your screen point. You may have to add an extra step to convert the window canvas coordinates back to projection screen coordinates (that step depends on your projection setup).
I provided a sample programme using SDL and gluUnProject in that answer.
Note that:
the modelview inverse can be computed trivially by successively applying the opposite transformations in the reverse order.
For instance, if you built your modelview from the identity by applying a translation <x,y,z> and then a rotation <a,b,c>, all you need to do is apply the <-a,-b,-c> rotation first and then the <-x,-y,-z> translation to get the inverse modelview.
For the projection inverse matrix, the red book appendix F - pointer courtesy of that gamedev.net page (though the link is broken there) - gives a solution.
This will only provide you with the matrices to unproject a point from the homogeneous OpenGL projection space, so you first need to pick a point in that space. That point may be chosen from the screen coordinates, first transformed back into the projection space. In my example, this involves flipping the coordinates with regard to the canvas dimensions (though things could be different with another projection setup) and then extending them to 3D by adding a carefully chosen z component.
That said, in the example programme of the other question, the goal was to cast a ray through the projected pixel into the scene, figure out the distance from that line to points in the scene, and pick the closest one. You might be able to avoid the whole unproject business by noticing that the mouse always moves in the camera projection plane. Hence the translation vector for the object will necessarily be composed of the X and Y unit vectors of the camera (I am assuming that Z is the axis perpendicular to the screen, as usual in OpenGL), both scaled by a factor depending on the distance of the object to the camera.
You will get something like this:
+--------+   object translation plane
|       /
|      /
|     /
|    /
+----+       screen plane
|   /
|  /
| /
|/
+            camera eye position
You can get the scaling factor from the intercept theorem, and the camera X and Y vectors from the rotation part of the modelview matrix (its first two rows give the camera's right and up vectors in world space).
The final translation vector should be something along the lines of:
T = f * (dx * X + dy * Y)
where f is the scaling factor, X and Y the camera vectors, and <dx,dy> the mouse coordinates delta vector in the projection space.
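A sketch of that translation using GLM, assuming a symmetric perspective projection with vertical field of view fovY and aspect ratio aspect (all names here are illustrative):

#include <cmath>
#include <glm/glm.hpp>

// World-space translation for a mouse delta (dxNdc, dyNdc) given in
// normalized device coordinate units.
glm::vec3 mouseDeltaToWorld(const glm::mat4& view, float fovY, float aspect,
                            float dxNdc, float dyNdc, const glm::vec3& objectPos)
{
    // Camera right/up in world space: rows 0 and 1 of the view rotation.
    glm::vec3 right  (view[0][0], view[1][0], view[2][0]);
    glm::vec3 up     (view[0][1], view[1][1], view[2][1]);
    glm::vec3 forward(-view[0][2], -view[1][2], -view[2][2]);
    glm::vec3 eye = glm::vec3(glm::inverse(view)[3]);  // camera position

    // Intercept theorem: at depth d, the visible half-extent is d*tan(fov/2).
    float d  = glm::dot(objectPos - eye, forward);
    float fy = d * std::tan(fovY * 0.5f);
    float fx = fy * aspect;

    return right * (dxNdc * fx) + up * (dyNdc * fy);
}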
You know your window resolution, and the mouse position relative to the window. From there you can determine a normalized coordinate in [0,1]. From this coordinate, you can then project a ray into your scene, and using the inverse of your projection*view matrix, can turn this into a world-space ray.
Then it is up to you to intersect the world space ray against your scene objects (via collision detection) to determine the "clicked on" objects (note that there may be more than one due to depth; usually you want the closest hit). This all depends on how you have organized your scene's spatial information and this is all made faster if you have some spatial partitioning structures (e.g. octree or BSP) for quick culling and simplified bounding boxes (e.g. AABBs or spheres) on your "scene objects" for a fast broad phase.
I would say more, but "the coordinate in OpenGL" is highly underspecified. Usually, you are not only interested in the coordinate, but also the "scene object" it meaningfully belongs to, and a whole bunch of other properties.
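As a rough sketch of the ray construction described in the previous paragraphs, assuming GLM for the matrix math (the y flip assumes SFML's top-left window origin; all names are illustrative):

#include <glm/glm.hpp>

struct Ray { glm::vec3 origin, dir; };

// Window coordinates -> NDC -> world-space ray, via the inverse of
// projection * view.
Ray mouseRay(float mouseX, float mouseY, float winW, float winH,
             const glm::mat4& projection, const glm::mat4& view)
{
    float x = 2.0f * mouseX / winW - 1.0f;  // [0,W] -> [-1,1]
    float y = 1.0f - 2.0f * mouseY / winH;  // flip: OpenGL's y points up

    glm::mat4 invVP = glm::inverse(projection * view);

    // Unproject one point on the near plane (z = -1) and one on the far
    // plane (z = +1), then divide by w.
    glm::vec4 nearPt = invVP * glm::vec4(x, y, -1.0f, 1.0f);
    glm::vec4 farPt  = invVP * glm::vec4(x, y,  1.0f, 1.0f);
    nearPt /= nearPt.w;
    farPt  /= farPt.w;

    return { glm::vec3(nearPt), glm::normalize(glm::vec3(farPt - nearPt)) };
}

The resulting ray can then be intersected against the scene objects' bounding volumes for picking, as described above.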