Mesh and cone intersection algorithm - C++

I am looking for an efficient algorithm for intersecting a mesh (a set of triangles) with a cone (given by an origin, a direction, and an angle from that direction). More precisely, I want to find the intersection point that is closest to the cone's origin. For now, all I can think of is to intersect the mesh with several rays cast from the cone's origin and take the closest hit. (Of course, a spatial structure would be built over the mesh to reject unnecessary intersection tests.)
Also, I found the following algorithm with a brief description:
"Cone to mesh intersection is computed on the GPU by drawing the cone geometry with the mesh and reading the minimum depth value marking the intersection point".
Unfortunately, its implementation isn't obvious to me.
So can anyone suggest something more efficient than what I have, or explain in more detail how it can be done on the GPU using OpenGL?

On the GPU I would do it like this:

1. Set the view to the cone's origin, directed outwards along the cone's axis, covering the biggest circle slice. For an infinite cone, use the max Z value of the mesh vertices in the view coordinate system.
2. Clear the buffers.
3. Draw the mesh, but in the fragment shader keep only the pixels intersecting the cone, i.e. those satisfying |fragment.xy - screen_middle| <= tan(cone_ang/2) * fragment.z (a C++ sketch of this test follows below).
4. Read the z-buffer: from the valid (filled) fragments, select the one closest to the cone's origin.

[notes]
If your gfx engine can also handle output values from your fragment shader, then you can skip bullet 4 and do the min-distance search inside bullet 3 instead of rendering; that will speed up the process considerably (you need just a single xyz vector).
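
As a hedged illustration of the test in step 3, written in plain C++ rather than GLSL (the names and the view-space convention are my assumptions): with the cone apex at the origin of view space and the cone axis along +z, a point lies inside the cone exactly when its radial distance from the axis is at most tan(cone_ang/2) times its depth.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Inside-cone test in view space: apex at the origin, axis along +z.
// halfAngle is the cone's angle from its axis (cone_ang / 2 above).
bool insideCone(const Vec3& p, float halfAngle)
{
    if (p.z < 0.0f) return false;                    // behind the apex
    float radial = std::sqrt(p.x * p.x + p.y * p.y); // distance from the axis
    return radial <= std::tan(halfAngle) * p.z;      // within the cone's slope
}
```

The fragment shader would run this per pixel and discard fragments that fail the test; the z-buffer minimum of the surviving fragments is then the intersection point closest to the cone's origin.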

Related

Modifying a texture on a mesh at a given world coordinate

I'm making an editor in which I want to build a terrain map. I want to use the mouse to increase/decrease terrain altitude to create mountains and lakes.
Technically I have a heightmap I want to modify at a certain texcoord that I pick out with my mouse. To do this I first go from screen coordinates to a world position - I have done that. The next step, going from the world position to the right texture coordinate, puzzles me though. How do I do that?
If you are using a simple heightmap that you use as a displacement map in, let's say, the y direction, the base mesh lies in the xz plane (y = 0).
You can discard the y coordinate from the world coordinate that you have calculated, and you get the point on the base mesh. From there you can map it to texture space the same way you map your texture.
I would not implement it that way.
I would render the scene to a framebuffer and, instead of rendering a texture on the mesh, color-code the texture coordinates onto the mesh.
If I click somewhere in screen space, I can simply read the pixel value from the framebuffer and get the texture coordinate directly.
Rendering to the framebuffer should be very inexpensive anyway.
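
A minimal sketch of the readback, assuming the scene was already rendered into the bound framebuffer with a shader that writes u and v into the red and green channels (the function and variable names here are mine):

```cpp
#include <GL/gl.h>

// Pick the color-coded texture coordinate under the mouse. OpenGL's window
// origin is the bottom-left corner, so the mouse y coordinate is flipped.
void pickUV(int mouseX, int mouseY, int viewportHeight, float& u, float& v)
{
    unsigned char pixel[3];
    glReadPixels(mouseX, viewportHeight - 1 - mouseY, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, pixel);
    u = pixel[0] / 255.0f;   // red channel carried u
    v = pixel[1] / 255.0f;   // green channel carried v
}
```

Note that 8-bit channels quantize the coordinate to steps of 1/255; a float render target avoids that if you need more precision.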
Assuming your terrain is a simple rectangle, you first calculate the vector between the mouse's world position and the origin of your terrain (the vertex of your terrain quad that the top-left corner of your height map is mapped to). E.g. mouse (50,25) - origin (-100,-100) = (150,125).
Now divide the x and y coordinates by the world-space width and height of your terrain quad:
150 / 200 = 0.75 and 125 / 200 = 0.625. This gives you the texture coordinates; if you need them as pixel coordinates instead, simply multiply by the size of your texture.
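
That arithmetic as a small C++ helper (the terrain layout and names are assumptions):

```cpp
struct Vec2 { float x, y; };

// Texture coordinates on a rectangular terrain: offset from the terrain's
// top-left origin divided by the quad's world-space extents. With mouse
// (50, 25), origin (-100, -100) and a 200 x 200 quad this gives (0.75, 0.625).
Vec2 terrainUV(Vec2 mouseWorld, Vec2 terrainOrigin, float width, float height)
{
    return { (mouseWorld.x - terrainOrigin.x) / width,
             (mouseWorld.y - terrainOrigin.y) / height };
}
```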
I assume the following:
The world coordinates you computed are those of the mouse pointer within the view frustum. I'll call them mouseCoord.
We also have the camera coordinates, camCoord.
The world consists of triangles.
Each triangle vertex has texture coordinates, and those are interpolated by barycentric coordinates.
If so, the solution goes like this:
Use camCoord as the origin. Compute the direction of the ray as mouseCoord - camCoord.
Compute the point of intersection with a triangle. The naive variant is to check every triangle for intersection; a more sophisticated approach rules out most triangles first by some other algorithm, like partitioning the world into cubes, tracing the ray along the cubes, and only looking at the triangles that overlap those cubes. Intersection with a triangle can be computed like on this website: http://www.lighthouse3d.com/tutorials/maths/ray-triangle-intersection/
Compute the intersection point's barycentric coordinates with respect to that triangle, like that: https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-rendering-a-triangle/barycentric-coordinates
Use the barycentric coordinates as weights for the texture coordinates of the corresponding triangle vertices. The result is the texture coordinates of the intersection point, aka what you want. (A C++ sketch of these last steps follows below.)
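
A hedged C++ sketch of the intersection and interpolation steps, using the Moeller-Trumbore test (the Vec3/Vec2 types and layout are my own, not from the linked pages):

```cpp
#include <array>
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

// Moeller-Trumbore ray/triangle test. Returns the barycentric weights
// (1-u-v, u, v) of the hit, or nothing if the ray misses the triangle.
std::optional<std::array<float, 3>>
intersect(Vec3 orig, Vec3 dir, Vec3 p0, Vec3 p1, Vec3 p2)
{
    const Vec3 e1 = sub(p1, p0), e2 = sub(p2, p0);
    const Vec3 pvec = cross(dir, e2);
    const float det = dot(e1, pvec);
    if (std::fabs(det) < 1e-8f) return std::nullopt;  // ray parallel to triangle
    const float invDet = 1.0f / det;
    const Vec3 tvec = sub(orig, p0);
    const float u = dot(tvec, pvec) * invDet;
    if (u < 0.0f || u > 1.0f) return std::nullopt;
    const Vec3 qvec = cross(tvec, e1);
    const float v = dot(dir, qvec) * invDet;
    if (v < 0.0f || u + v > 1.0f) return std::nullopt;
    if (dot(e2, qvec) * invDet < 0.0f) return std::nullopt;  // hit behind origin
    return std::array<float, 3>{1.0f - u - v, u, v};
}

// Weight the per-vertex texture coordinates by the barycentrics (last step).
Vec2 interpolateUV(const std::array<float, 3>& w, Vec2 t0, Vec2 t1, Vec2 t2)
{
    return {w[0]*t0.u + w[1]*t1.u + w[2]*t2.u,
            w[0]*t0.v + w[1]*t1.v + w[2]*t2.v};
}
```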
If I misunderstood what you wanted, please edit your question with additional information.
Another variant, specific to a height map:
Assume that the assumptions are changed like this:
The world has ground tiles over x and y.
The ground tiles have height values in their corners.
For a point within a tile, the height value is interpolated somehow, e.g. by bilinear interpolation.
The texture is interpolated in the same way, again with given texture coordinates for the corners.
A feasible (approximate) algorithm for that:
Again, compute the origin and direction.
Without loss of generality, assume that the direction has a larger change in the x-direction. If not, exchange x and y in the algorithm.
Trace the ray with a given step length in x, that is, in each step the x-coordinate changes by that step length. (Take the direction, multiply it by the step size divided by its x value, and add that to the current position, starting at the origin.)
For your current coordinate, check whether its z value is below the current height (i.e. the ray has just collided with the ground).
If so, either finish, or decrease the step size and do a finer search in that vicinity, going backwards until you are above the height again, then going forwards in even finer steps, et cetera. The result is the current x and y coordinates. (A sketch of this stepping search follows below.)
Compute the relative position of your x and y coordinates within the current tile. Use that as weights for the corner texture coordinates.
This algorithm can theoretically jump over very thin peaks; choose a small enough step size to counter that. I cannot give an exact algorithm without knowing what type of interpolation the height map uses. It might not be the worst idea to create triangles anyway, maybe out of bilinearly interpolated coordinates? In any case, the algorithm is good at finding the tile in which the ray collides.
Another variant would be to trace the ray over the points at which its x-y coordinates cross the tile grid, and then check whether the z coordinate went below the height map; then we know that it collides in this tile. This could produce a false negative if the height can be bigger inside the tile than at its edges, as certain forms of interpolation can produce, especially those that consider the neighbouring tiles. It works just fine with bilinear interpolation, though.
With bilinear interpolation, the exact intersection can be found like this: take the two (x,y) coordinates at which the grid is crossed by the ray, and compute the height at each to obtain two (x,y,z) coordinates. Create a line out of them, and compute the intersection of that line with the ray. That intersection is the intersection with the tile's height map.
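
A coarse C++ sketch of the stepping search (heightAt stands in for whatever interpolation your height map uses; all names are assumptions, and the ray is assumed to advance in +x for brevity):

```cpp
#include <functional>
#include <optional>

struct Hit { float x, y; };

// March the ray in fixed x steps until its z drops below the height field.
// A finer backward/forward search could refine the coarse hit afterwards.
std::optional<Hit> traceHeightField(
    float ox, float oy, float oz,         // ray origin
    float dx, float dy, float dz,         // ray direction, dx > 0 and dominant
    const std::function<float(float, float)>& heightAt,
    float step, float maxX)
{
    const float s = step / dx;            // scale so each step advances x by `step`
    float x = ox, y = oy, z = oz;
    while (x < maxX) {
        x += dx * s; y += dy * s; z += dz * s;
        if (z < heightAt(x, y)) return Hit{x, y};   // just went below the ground
    }
    return std::nullopt;                  // ray left the terrain without a hit
}
```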
The simplest way is to render the mesh in a pre-pass with the UVs as the colour. No screen-to-world conversion is needed; the UV is the value at the mouse position. Just be careful with mips/filtering etc.

Surface mesh generation (triangulation) from exact points on a tube surface

What would be recommended ways to generate surface meshes of a particular kind of body given the following?
The geometric body is an extruded 3D "tube" segment. The tube segment has the following properties:
At each value of X, the cross-section is always a simple polygon in the Y-Z plane
The polygons are not guaranteed to be convex
The polygons are not necessarily constant as X is traversed; they smoothly dilate and/or change shape, and the areas of the polygons smoothly vary
The centroids of each X = const polygon, if connected together with simple line segments, would form a very smooth, well behaved "thread" with at most gentle curvature, no sharp bends, folds, or loops, etc.
The surface section is capped by the planar cross-sectional polygons at X = X_start and X = X_end
Objective:
Generate a triangulated surface mesh of the tube surface, respecting the fact that it is bounded at the start and end by flat, planar cross-sectional surfaces
The mesh should be of the tube, not a convex hull of the tube
If the tube surface mesh maintains the property that there is a flat, simple polygonal cross-section formed by the vertices at X = X_start and X = X_end, then I have existing code which can mesh the end caps; the real problem I'm trying to solve is generating the 3D tube surface mesh. If the solution also generates the end caps, that's fine too. However, the end-cap surfaces need to be identifiable as such for output purposes.
Once the mesh is generated, it needs to be written in a format like OFF, which I think I can handle based on code included with CGAL, examples, etc. The point here is that I don't need to be able to further process the mesh (e.g. deformations, adding/removing points) programmatically after it is generated.
Known inputs and properties:
I have the polygonal cross-section tube surface vertices at an arbitrary number of X = const stations between X_start and X_end; I can control the spacing in the X direction as necessary when I create/import the points
The vertices lie exactly on the tube surface and are not corrupted by any noise, joggles, sampling, approximations, etc.
I do not have any guarantees about the relative position of vertices forming each cross-sectional polygon, other than that the polygon vertices are oriented clockwise
I can generate normals for the polygonal vertices in terms of their Y-Z components, but I don't have a priori information about their normal components in the X direction
I can generate any number of vertices on the end caps if necessary
Right now the vertices are 3-space floating-point coordinate values, but if it could somehow help, I could turn each cross-section into a formal CGAL 2D arrangement
Estimated number of vertices would likely be less than 1000, definitely less than say 15K. Processing time is not a concern.
Ideals:
Ideally, the surface mesh would just use the vertices I have, without removing or moving any of them, but this is not a hard constraint so long as they stay "close"
I need simple polygonal vertices at X_start and X_end so I can cap the surfaces as intended
Initially, CGAL's Poisson Surface Reconstruction method seemed promising, but in the end it seems to lead to a processing pipeline that might smear the vertices I have; additionally, I don't have full 3D normal information for the points other than on the end caps. Moreover, the method seems to have issues with the sharp, distinct cross-sectional terminal face surfaces. Maybe I could get around the latter by putting in a bunch of benignly false vertices to extend and terminate the tube, then filtering out the parts of the triangulation I don't need, but there's no guarantee that the vertices at X_start and X_end would remain, and I would have to "fix up" the triangulation crossing those planes, which seems non-trivial.
Another possibility might be to compute a full 3D volume mesh using CGAL's 3D mesh generator, but write out only the portion comprising the surface mesh. Is this reasonable? If I could retain the original input vertices, and this overall approach is reasonable, I could filter as I write out the triangulation to distinguish between the faces forming the end caps and those forming the tube surface.
I also saw the SO question Representing a LiDAR surface using the 3D Delaunay Triangulation as basis?, which seems to have some similarities (trying to retain just the input points, plus some foreknowledge of the surface properties), but in the end I think my use case is too different.

Smooth cone normals

I'm trying to calculate smooth normals for a cone. Looking around for code samples and explanations, I consistently come across directions for face normals. I've posted a couple of pictures below of what I'm doing. The first -- which basically just normalizes the vertex position -- gives me decently smooth shading, but the edges are "missing" and the bottom face isn't solid. The second has edges, but the shading is flat (face normals) and my light isn't reflecting off of them correctly.
The cone is built out of GL_TRIANGLES.
[images: the smooth-shaded and the flat-shaded versions of the cone (source: bantherewind.com)]
At any point on the surface of a cone except the apex, there are two obvious tangent vectors: one tangent to the cross-sectional circle, and one pointing up the slope. If you express the surface as a parametric equation with two parameters, you can get these tangent vectors as the two partial derivatives. Take the cross product of the tangents, and you get a normal vector. The order of the product determines whether the normal points inward or outward. Of course, the bottom face must be handled separately.
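
A hedged sketch of that recipe (the parameterization and names are mine, not from the answer): for a cone with its apex at (0, H, 0) and a base circle of radius R in the y = 0 plane, crossing the two tangents collapses to a simple closed form.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Smooth outward normal on the lateral surface at angle theta around the axis.
// Derived by crossing the circle tangent (-sin t, 0, cos t) with the slope
// tangent (-R cos t, H, -R sin t) and flipping the sign to point outward.
Vec3 coneNormal(float theta, float R, float H)
{
    Vec3 n = { H * std::cos(theta), R, H * std::sin(theta) };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}
```

Note the normal does not depend on how far up the slope the point is, which is exactly what makes smooth shading work along each slope line.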
In addition to the answer by JWWalker, I'd like to point out that a vertex is a whole tuple of attributes that, among other things, includes position and normal. So if you have different normals at a single position, you have multiple, different vertices there.
In the case of the cone this is important, because the tip of the cone is not one single vertex but a whole set of them (one tip vertex for each triangle of the cone's lateral surface). And for the base circle you have two vertices at each position: one for the triangle going to the tip, and one for the base surface.
Both the tip and the edge are discontinuities and hence call for being drawn using separate vertices.

How to create an even sphere with triangles in OpenGL?

Is there a formula that generates a set of coordinates of triangles whose vertices are located on a sphere?
I am probably looking for something that does something similar to gluSphere. Yet, I need to color the different triangles in specific colors, so it seems I can't use gluSphere.
Also: I do understand that gluSphere draws edges along lines of equal longitude and latitude, which entails the triangles being small at the poles compared to their size at the equator. Now, if such a formula would generate the triangles such that their difference in size is minimized, that would be great.
Regarding how to calculate the normals and the UV map:
Fortunately, there is an amazing trick for calculating the normals on a sphere. If you think about it, the normals on a sphere are indeed nothing more than the direction from the centre of the sphere to that point! Furthermore, if you think it through, that means the normals literally equal the points, i.e. it's the same vector! Just don't forget to normalise the length for the normal.
You can win bar bets on that one: "is there a shape where all the normals happen to be exactly equal to the vertices?" At first glance you'd think that's impossible, that no such coincidental shape could exist. But of course the answer is simply "a sphere with radius one!" Heh!
Regarding the UVs: it is relatively easy on a sphere, assuming you're projecting to 2D in the "obvious" manner, a "rectangle-style" map projection. In that case, u and v are basically just the longitude and latitude of the point, normalised to [0, 1].
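
A small C++ sketch of both tricks (the names are mine):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// On a sphere the normal is the normalized position; on a unit sphere
// it literally equals the vertex itself.
Vec3 sphereNormal(Vec3 p)
{
    float len = std::sqrt(p.x*p.x + p.y*p.y + p.z*p.z);
    return { p.x/len, p.y/len, p.z/len };
}

// Rectangle-style projection: longitude and latitude normalized to [0, 1].
Vec2 sphereUV(Vec3 n)   // n must already be normalized
{
    const float pi = 3.14159265358979f;
    return { 0.5f + std::atan2(n.z, n.x) / (2.0f * pi),
             0.5f - std::asin(n.y) / pi };
}
```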
Hope it helps!
Here's the all-time-classic web page that beautifully explains how to build an icosphere: http://blog.andreaskahler.com/2009/06/creating-icosphere-mesh-in-code.html
Start with a unit icosahedron. Then apply multiple homogeneous subdivisions of the triangles, normalizing the resulting vertices' distance to the origin.
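
One subdivision step as a hedged C++ sketch, without the shared-vertex caching the linked article adds (types and names are my own):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Tri  { Vec3 a, b, c; };

// Midpoint of an edge, pushed back onto the unit sphere by normalizing.
static Vec3 midOnSphere(Vec3 p, Vec3 q)
{
    Vec3 m = { (p.x + q.x) * 0.5f, (p.y + q.y) * 0.5f, (p.z + q.z) * 0.5f };
    float len = std::sqrt(m.x*m.x + m.y*m.y + m.z*m.z);
    return { m.x/len, m.y/len, m.z/len };
}

// Split every triangle into four; repeat for finer spheres.
std::vector<Tri> subdivide(const std::vector<Tri>& in)
{
    std::vector<Tri> out;
    out.reserve(in.size() * 4);
    for (const Tri& t : in) {
        Vec3 ab = midOnSphere(t.a, t.b);
        Vec3 bc = midOnSphere(t.b, t.c);
        Vec3 ca = midOnSphere(t.c, t.a);
        out.push_back({t.a, ab, ca});
        out.push_back({t.b, bc, ab});
        out.push_back({t.c, ca, bc});
        out.push_back({ab, bc, ca});
    }
    return out;
}
```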

OpenGL/GLSL: What is the best algorithm to render clouds/smoke out of volumetric data?

I would like to render 3D volume data: density (which can be mapped to the alpha channel) and temperature (which can be mapped to RGB).
Currently I am simulating maximum intensity projection, i.e. rendering the most dense/opaque pixel in the end. But this method loses the depth perception.
I would like to imitate an effect like a fire inside smoke.
So my question is: what are the techniques in OpenGL to generate images based on the available data?
Any idea is welcome.
Thanks, Arman.
I would try a volume ray caster first.
You can google "Volume Visualization With Ray Casting" and that should give you most of what you need. NVidia has a great sample (using OpenGL) of ray casting through a 3D texture.
For your specific implementation, you would just need to keep stepping through the volume, accumulating the temperature, until you reach the wanted density.
If your volume doesn't fit in video memory, you can do the ray casting in pieces and then do a composition step.
A quick description of ray casting:
CPU:
1) Render a six-sided cube in world space as the drawing primitive; make sure to use depth culling.
Vertex shader:
2) In the vertex shader, store off the world position of the vertices (this will be interpolated per fragment).
Fragment shader:
3) Use the interpolated position minus the camera position to get the vector of traversal through the volume.
4) Use a while loop to step through the volume from the point on the cube through to the other side. There are 3 ways to know when to end:
A) at each step, test whether the point is still inside the cube;
B) do a ray intersection with the cube and calculate the distance between the intersections;
C) do a pre-render of the cube with front-face culling, store the depths into a second texture map, and then just sample at the screen pixel to get the distance.
5) Accumulate while you loop, and set the pixel color. (A CPU-side sketch of this loop follows below.)
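
A hedged CPU-side reference of steps 3-5, with front-to-back compositing; density() and temperatureRGB() stand in for the 3D-texture lookups, and all names are assumptions:

```cpp
#include <functional>

struct Vec3  { float x, y, z; };
struct Color { float r, g, b, a; };

// March from the volume entry point along the (normalized) view ray,
// accumulating emission weighted by density until the result is opaque
// or the ray leaves the volume.
Color castRay(Vec3 entry, Vec3 dir, float rayLength, float stepLen,
              const std::function<float(Vec3)>& density,
              const std::function<Vec3(Vec3)>& temperatureRGB)
{
    Color out = {0.0f, 0.0f, 0.0f, 0.0f};
    Vec3 p = entry;
    for (float t = 0.0f; t < rayLength && out.a < 0.99f; t += stepLen) {
        float a = density(p) * stepLen;      // opacity contributed by this step
        Vec3  c = temperatureRGB(p);         // emission from the temperature map
        float w = (1.0f - out.a) * a;        // front-to-back compositing weight
        out.r += w * c.x; out.g += w * c.y; out.b += w * c.z;
        out.a += w;
        p.x += dir.x * stepLen; p.y += dir.y * stepLen; p.z += dir.z * stepLen;
    }
    return out;
}
```

The same loop translates directly into a GLSL fragment shader, with the two lambdas replaced by texture() fetches from the 3D textures.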