Which geometrical calculations can be accelerated using OpenGL - opengl

I need to accelerate some programs that do intensive calculations where surfaces resulting from the intersection of cubes, spheres and similar shapes are needed. Using CUDA I would have to specify all the formulae needed, of course, in order to analytically calculate information related to the intersections. But since I only need a good approximation of the resulting surface, I read that OpenGL can calculate or estimate such surfaces. I wonder if you could give me your opinion or point me to relevant references.

If you just need to render those objects, you could use the stencil buffer to evaluate whatever boolean operations you need: http://www.opengl.org/resources/code/samples/advanced/advanced97/notes/node11.html
Any quantity that can be computed from a perspective or orthographic projection of the intersection surface can be deduced from such a rendering together with its depth buffer. If you need to extract the whole intersection, you can try combining depth peeling with stencilled CSG to extract a layered representation of the complete intersection, though it can be very inaccurate on the parts of the surface that are parallel to the viewing direction, and you will need to do some extra work to stitch the layers back together:
http://developer.download.nvidia.com/SDK/10/opengl/src/dual_depth_peeling/doc/DualDepthPeeling.pdf
EDIT: This will work for arbitrary, free-form surfaces and is a fairly standard technique. But it does have its limitations: the accuracy you get will be fairly poor, and you may have to project onto multiple views in order to get adequate coverage of your object. As an example, here is an application to collision detection: http://www.cs.ucl.ac.uk/staff/b.spanlang/ISBCICSOWH.pdf
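To make the "deduce quantities from the rendering plus its depth buffer" idea concrete, here is a minimal sketch (not taken from the linked samples) that reads back the depth buffer of an orthographic rendering of the stencilled intersection and derives its projected area and mean visible depth; the viewport size, ortho extents and function names are assumptions.

```cpp
// Minimal sketch: after rendering the stencilled intersection with an
// orthographic projection, read back the depth buffer and derive simple
// projected quantities. Assumes a current GL context, a W x H viewport
// covering width x height world units, and a clear depth of 1.0.
#include <GL/gl.h>
#include <vector>

struct ProjectedStats { double area; double meanDepth; };  // area in world units^2, depth in [0,1]

ProjectedStats statsFromDepthBuffer(int W, int H, double width, double height)
{
    std::vector<float> depth(static_cast<std::size_t>(W) * H);
    glReadPixels(0, 0, W, H, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());

    const double pixelArea = (width / W) * (height / H);
    double area = 0.0, depthSum = 0.0;
    std::size_t covered = 0;
    for (float d : depth) {
        if (d < 1.0f) {              // pixel covered by the intersection surface
            area += pixelArea;
            depthSum += d;
            ++covered;
        }
    }
    return { area, covered ? depthSum / covered : 0.0 };
}
```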

OpenGL is of even less use here than CUDA or OpenCL, since it's primarily targeted at drawing triangulated meshes. Of course you can do sophisticated geometric computations in the various shader stages of modern OpenGL. The problem is that the result of all those computations is a pixel-based picture. There is a feedback mechanism (transform feedback) to retrieve the processed vertex data, but that only gives you a mesh.
Intersections of anything planar and/or spheres are actually quite easy and can be done analytically. The really hard stuff is intersecting freeform curved surfaces (Bézier or NURBS). Those usually don't have a closed-form solution, so what you need to do is numerically approximate a trim curve that best fits the intersection.
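As an illustration of the "easy, analytic" case, here is a small self-contained sketch that computes the intersection circle of two spheres; the struct and function names are made up for the example.

```cpp
// The intersection of two spheres is a circle (or a point, or empty),
// and it can be computed in closed form.
#include <algorithm>
#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };
struct Sphere { Vec3 c; double r; };
struct Circle3D { Vec3 center; Vec3 normal; double radius; };

std::optional<Circle3D> intersect(const Sphere& a, const Sphere& b)
{
    Vec3 d{b.c.x - a.c.x, b.c.y - a.c.y, b.c.z - a.c.z};
    double dist = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    if (dist > a.r + b.r || dist < std::fabs(a.r - b.r) || dist == 0.0)
        return std::nullopt;                       // separate, contained, or concentric

    // Distance from a's center to the plane of the intersection circle.
    double h = (dist*dist + a.r*a.r - b.r*b.r) / (2.0 * dist);
    double radius = std::sqrt(std::max(0.0, a.r*a.r - h*h));

    Vec3 n{d.x/dist, d.y/dist, d.z/dist};          // plane normal = direction a -> b
    Vec3 center{a.c.x + n.x*h, a.c.y + n.y*h, a.c.z + n.z*h};
    return Circle3D{center, n, radius};
}
```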

Related

Efficiently providing geometry for terrain physics

I have been researching different approaches to terrain systems in game engines for a bit now, trying to familiarize myself with the work. A number of the details seem straightforward, but I am getting hung up on a single detail.
For performance reasons many terrain solutions utilize shaders to generate parts or all of the geometry, such as vertex shaders to generate positions or tessellation shaders for LoD. At first I figured those approaches were used exclusively by renderers that weren't concerned with physics simulations.
The reason I say that is because, as I understand shaders at the moment, the results of a shader computation are generally discarded at the end of the frame. So if you rely heavily on shaders, the geometry information will be gone before you can access it and send it off to another system (such as physics running on the CPU).
So, am I wrong about shaders? Can you store the results of them generating geometry to be accessed by other systems? Or am I forced to keep the terrain geometry on CPU and leave the shaders to the other details?
Shaders
You understand that part of shaders correctly: after a frame, the data is stored only as the final composed image in the backbuffer.
BUT: using transform feedback it is possible to capture transformed geometry into a vertex buffer and reuse it. Transform feedback happens AFTER the vertex/geometry/tessellation shaders, so you could use the geometry shader to generate the terrain (or the visible parts of it) once, push it through transform feedback and store it.
This way you could potentially use CPU collision detection with your terrain! You can even combine this with tessellation.
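A hedged sketch of what that capture step might look like in OpenGL 3.x, assuming a current context, a loader such as GLEW, and a program whose vertex or geometry stage writes a vec3 varying named outPosition (declared via glTransformFeedbackVaryings before the final link); buffer sizes and names are placeholders, not a definitive implementation.

```cpp
#include <GL/glew.h>   // any loader will do; a current GL 3.x context is assumed
#include <vector>

// 'prog' is assumed to have been linked after declaring "outPosition" with
// glTransformFeedbackVaryings(prog, 1, ..., GL_INTERLEAVED_ATTRIBS).
std::vector<GLfloat> captureTerrain(GLuint prog, GLsizei cellCount, GLsizei maxVertices)
{
    GLuint tfBuffer;
    glGenBuffers(1, &tfBuffer);
    glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tfBuffer);
    glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, maxVertices * 3 * sizeof(GLfloat),
                 nullptr, GL_STATIC_READ);

    glUseProgram(prog);
    glEnable(GL_RASTERIZER_DISCARD);                 // only the geometry is wanted
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfBuffer);
    glBeginTransformFeedback(GL_TRIANGLES);          // geometry shader emits triangles
    glDrawArrays(GL_POINTS, 0, cellCount);           // one point per terrain cell
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);

    // Read the captured vertices back for CPU collision; alternatively keep
    // the buffer on the GPU and render from it directly in later frames.
    std::vector<GLfloat> positions(static_cast<std::size_t>(maxVertices) * 3);
    glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0,
                       positions.size() * sizeof(GLfloat), positions.data());
    glDeleteBuffers(1, &tfBuffer);
    return positions;
}
```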
You will love this: A Framework for Real-Time, Deformable Terrain.
For the LOD and tessellation: LOD is not a prerequisite of tessellation. You can use tessellation for more sophisticated effects, such as adding detail by recursive subdivision of rough geometry. Linking it with LOD is simply a very good optimization that avoids keeping LOD mesh levels in RAM, since you just have your "base mesh" and subdivide it (although this will be an unsatisfying optimization imho).
Now some deeper info on GPU-only and CPU-only terrain.
GPU Generated Terrain (Procedural)
As written in the NVidia article Generating Complex Procedural Terrains Using the GPU:
1.2 Marching Cubes and the Density Function
Conceptually, the terrain surface can be completely described by a single function, called the density function. For any point in 3D space (x, y, z), the function produces a single floating-point value. These values vary over space—sometimes positive, sometimes negative. If the value is positive, then that point in space is inside the solid terrain. If the value is negative, then that point is located in empty space (such as air or water). The boundary between positive and negative values—where the density value is zero—is the surface of the terrain. It is along this surface that we wish to construct a polygonal mesh.
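A toy version of such a density function, just to make the sign convention concrete; the hill term is a stand-in for whatever noise functions a real implementation would use.

```cpp
// Positive below ground, negative above, zero exactly at the surface.
#include <cmath>

float density(float x, float y, float z)
{
    float ground = -y;                                              // flat ground plane at y = 0
    float hills  = 2.0f * std::sin(0.3f * x) * std::cos(0.3f * z);  // placeholder "noise"
    return ground + hills;                                          // > 0 inside terrain, < 0 in air
}
```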
Using Shaders
The density function used for generating the terrain must be available to the collision-detection shader, and you have to fill an output buffer containing the collision locations, if any...
CUDA
See: https://www.youtube.com/watch?v=kYzxf3ugcg0
Here someone used CUDA, based on the NVidia article, and it implies the same thing: to perform collision detection in CUDA, the density function must be shared.
This will, however, make the transform feedback technique a little harder to implement.
Both shaders and CUDA imply resampling/recalculating the density at at least one location, just for the collision detection of a single object.
CPU Terrain
Usually this implies a set of geometry stored in RAM as vertex/index buffer pairs, which is regularly processed by the shader pipeline. Since you have the data available here, you will most likely also have a collision mesh, a simplified representation of your terrain against which you perform collision tests.
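As a sketch of what "performing collision against the collision mesh" could look like on the CPU, here is a standard Möller–Trumbore ray/triangle test; the brute-force use over a whole mesh and all names are illustrative (a real engine would cull with a BVH or grid first).

```cpp
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns the distance along the ray to the hit point, or nothing on a miss.
std::optional<float> rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2)
{
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return std::nullopt;   // ray parallel to the triangle
    float inv = 1.0f / det;
    Vec3 t = sub(orig, v0);
    float u = dot(t, p) * inv;
    if (u < 0.0f || u > 1.0f) return std::nullopt;
    Vec3 q = cross(t, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return std::nullopt;
    float dist = dot(e2, q) * inv;
    return dist >= 0.0f ? std::optional<float>(dist) : std::nullopt;
}
```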
Alternatively, you could give your terrain a set of colliders marking the allowed paths, which is imho what the early PS1 Final Fantasy games did (they don't really have terrain in the sense we understand it today).
This short answer is neither extensively deep nor complete. I just tried to give you some insight into some concepts used in dozens of solutions.
Some more reading: http://prideout.net/blog/?tag=opengl-transform-feedback.

Calculate vertex normals in triangulated geometry with edge detection

This is not a duplicate of "Most efficient algorithm to calculate vertex normals from set of triangles for Gouraud shading", as the edge-detection issue is not discussed there.
How do I computationally calculate normals for every vertex in a triangulated geometry, to be used in a Gouraud shader for a nice display, while keeping track of edges? Is there a free, fast and performant standard solution for this?
I have been assigned the above-mentioned task to fix a routine that produces visible artefacts. The normals should be input data to a simple Gouraud shader to "smooth" the displayed geometry across coherent faces. The routine should also be able to find edges, so that they can be used by some other part of the software later on and are not "smoothed over".
The data is read from .stl files that do not contain any normal information, so all face normals must be calculated using the triangles' coordinates.
This is how the geometry looks without interpolation:
This is what the interpolation algorithm does so far:
The rounded planes look quite good, but the interpolation fails badly at places where the detected edge next to a flat surface is not strong enough to trigger the edge-detection algorithm, yet not weak enough to be invisible. The consequence is a misplaced normal that is propagated through the whole triangle(s).
I wonder if there is a standard solution for this, as this problem should occur quite often when working with this type of geometry. And even if there isn't, does anyone of you know common problems with this task and how to avoid them to get some decent results?
Any help would be much appreciated!
Edit: On the algorithm:
The edge detection is not just about the triangle normal. Please consider the following example (the problem is essentially the same in 3D):
All vertices share the same angle, i.e. 30°. (The angles are not exactly correct, but you get the idea...) However, only the outer two of them should be recognised as edges, so there has to be another measure relevant to this question. So far, I have tried a triangle's
circumcircle radius
longest edge
GTS triangle quality measure
that can modify the "minimum angle" at which two triangles are considered to share an edge. The longest edge method looks most promising (although far from perfect), but I think there is still something I overlooked...
There is a publicly available publication about this from Caltech, in the form of a technical report called "Discrete Differential-Geometry Operators for Triangulated 2-Manifolds".
The algorithms proposed there offer
a unified derivation that ensures accuracy and tight error bounds, leading to simple formulae that are straightforward to implement.
In the report the algorithms are used for curvature calculations, but the curvature calculation involves an accurate mean-curvature normal calculation. This lets you incorporate feature-edge detection; the authors have used it for exactly that purpose on noisy meshes. The mean-curvature normal can then also be used for shading.
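For comparison with the report's curvature-based machinery, the conventional baseline the question starts from can be sketched as area-weighted vertex normals with a crease-angle threshold, so that faces meeting at a sharp edge keep separate normals; the data layout and the 30° default are assumptions, not the asker's actual routine.

```cpp
#include <array>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(Vec3 v) { float l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

// Produces one normal per corner (vertex-of-face) so edges can stay hard.
std::vector<Vec3> cornerNormals(const std::vector<Vec3>& verts,
                                const std::vector<std::array<int,3>>& tris,
                                float creaseAngleDeg = 30.0f)
{
    const float cosCrease = std::cos(creaseAngleDeg * 3.14159265f / 180.0f);

    // Unnormalized cross products double as area weights when accumulated.
    std::vector<Vec3> faceN(tris.size());
    for (std::size_t f = 0; f < tris.size(); ++f) {
        auto [a, b, c] = tris[f];
        faceN[f] = cross(sub(verts[b], verts[a]), sub(verts[c], verts[a]));
    }

    std::vector<std::vector<int>> facesOfVertex(verts.size());
    for (std::size_t f = 0; f < tris.size(); ++f)
        for (int v : tris[f]) facesOfVertex[v].push_back(static_cast<int>(f));

    // Average only the incident faces within the crease angle of this face.
    std::vector<Vec3> result(tris.size() * 3);
    for (std::size_t f = 0; f < tris.size(); ++f) {
        Vec3 nf = normalize(faceN[f]);
        for (int corner = 0; corner < 3; ++corner) {
            Vec3 sum{0, 0, 0};
            for (int g : facesOfVertex[tris[f][corner]]) {
                if (dot(nf, normalize(faceN[g])) >= cosCrease) {
                    sum.x += faceN[g].x; sum.y += faceN[g].y; sum.z += faceN[g].z;
                }
            }
            result[f * 3 + corner] = normalize(sum);
        }
    }
    return result;
}
```

This is exactly the kind of per-triangle-angle criterion the asker found insufficient on its own, which is why the report's mean-curvature normal is the more robust alternative.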

Programming my own triangle rasterization for OpenGL?

I am trying to render rounded triangles to increase performance. To illustrate what I mean, see the picture below:
I tried it on the CPU; now, is there a way to move this algorithm to the GPU somehow? Can I change the code of the method that calls the fragment shader?
By the way, if I can do it, which programming language do I need to rewrite it in?
I am using an OpenGL 2.1 GPU with just 20–30 GB/s of memory bandwidth.
Read the paper Resolution Independent Curve Rendering using Programmable Graphics Hardware by Charles Loop and Jim Blinn.
Short version: assuming you have an efficient inside/outside test for your curve, render the enclosing hull shape as triangle(s) and use a fragment shader to discard the pixels outside the curve.
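The heart of that test for a quadratic Bézier segment, written here as a plain C++ predicate rather than GLSL to keep the snippet self-contained: the control triangle gets the texture coordinates (0,0), (1/2,0), (1,1), and the fragment shader keeps or discards a pixel depending on the sign of u^2 - v. Which side counts as "inside" depends on the curve's orientation, so treat the comparison direction as an assumption.

```cpp
// (u, v) are the interpolated texture coordinates of the current fragment.
struct Frag { float u, v; };

bool insideQuadraticCurve(const Frag& f)
{
    // On the curve itself: u^2 - v == 0. One side is filled, the other is
    // what the fragment shader would discard.
    return f.u * f.u - f.v < 0.0f;
}
```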
I second Aeluned's concern that transferring the algorithm to the GPU won't automatically make it faster.
I'm not sure exactly what you're up to, but it seems a bit dubious. You can actually end up hurting performance trying to do some of these custom calculations in a shader to render a circle or ellipse.
Modern GPU hardware can push billions of triangles a second. You're probably splitting hairs here.
In any case, if you want to 'fake' the geometry, this may be of interest to you: https://alfonse.bitbucket.io/oldtut/Illumination/Tutorial%2013.html
Well, on OpenGL 2.1 you do not have geometry shaders (they require 3.2+), so you can forget about the GPU.
You cannot improve the rasterization performance of convex shapes with your curved triangles:
the complexity of rasterizing any convex polygon is the same as for any triangle of the same area;
the difference lies only in the number of vertices passed (memory transfer), which with a geometry shader would actually be better with your curved triangles,
and in the number of boundary lines that have to be rasterized for filling, which will be worse with your curved triangles (you need to join more triangles instead of a single convex polygon outline).
So it is not a good idea to implement this for better performance in your case.
The only thing I can think of to use this for is to ease manual generation of shapes.
In that case just write a function glRoundedTriangle(....)
which generates the correct vertices, colors, normals and texture coordinates from the given input parameters.
What it would look like is unknown, because you did not specify the rounded triangle's geometry/shape and/or input parameters (for example 3 points + 3 signed curve radii?).
To improve performance in OpenGL use VBO/VAO
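A hedged sketch of the VBO path on OpenGL 2.1 (which has no core VAOs): upload whatever vertices your hypothetical glRoundedTriangle()-style generator produces once, then draw them every frame without re-sending the data. The interleaved x,y,z layout and function names are assumptions, and on Windows the buffer functions need an extension loader.

```cpp
#include <GL/gl.h>     // assumes GL 1.5+ entry points are available
#include <vector>

GLuint uploadShape(const std::vector<GLfloat>& xyz)      // x0,y0,z0, x1,y1,z1, ...
{
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, xyz.size() * sizeof(GLfloat),
                 xyz.data(), GL_STATIC_DRAW);
    return vbo;
}

void drawShape(GLuint vbo, GLsizei vertexCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, nullptr);             // data comes from the bound VBO
    glDrawArrays(GL_TRIANGLE_FAN, 0, vertexCount);        // a fan works for convex outlines
    glDisableClientState(GL_VERTEX_ARRAY);
}
```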

ExtrudeCut in OpenGl

Hi
How can I extrude-cut a 3D model (like in SolidWorks)?
Is there an easy way, or do I have to do some complex calculations?
What you want to do is part of a discipline called Constructive Solid Geometry (CSG) and it's about one of the trickiest subjects of 3D graphics and processing. There are several approaches how to tackle the problem:
If you're just interested in rendering CSG in a raytracer, things actually get quite easy: at every ray/surface intersection you increment/decrement a counter. CSG combinations can also be transformed into a surface count. By comparing the ray intersection counter with the CSG surface count you can apply the CSG operations on the traced ray (see the interval sketch after this list of approaches).
If you're interested on doing CSG on triangulated models, the most common approach is to build BSP trees from the geometry and apply the CSG operations on the BSP. Then from the resulting BSP you recreate the mesh. This is how it's implemented in mesh based modellers (take a look at Blender's source code, which does exactly this)
CSG on analytical surfaces is extremely difficult. There are no closed-form solutions for the intersection of curves or curved surfaces. The best approach is to numerically find a number of sampling points on the intersection and fit a curve along it. This can get numerically unstable.
Tessellation-phase processing (this is what I implemented, or maybe even invented, for my 3D engine): when rendering curves or curved patches on 3D hardware, you usually must tessellate them into triangular meshes first. In this tessellation phase you can test whether the edges of a newly created triangle intersect with curves/curved surfaces; use a few iterations of a Newton zero-crossing solver to find the point of intersection of both curves/surfaces and store this as a sampling point on the edge for both patches involved (so that the tessellation of the other surface shares its vertex positions with the first surface). After the first tessellation stage, use a relaxation method (basically apply a Laplacian) on the vertices while constraining them to the surface (remember that your surfaces are mathematically exact, so it is very easy to fiddle with the surface parameters, using the resulting positions as the metric). It works very well as long as intersections with ordinary triangulated meshes don't have to be considered (each triangle of the mesh would have to be turned into a surface patch, slowing down the method).
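To make the raytracing counter idea from the first approach concrete, here is a small self-contained sketch that reduces it to interval arithmetic along a single ray, for two sphere primitives and the intersection (A AND B) operation only; all names are illustrative.

```cpp
#include <algorithm>
#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };
struct Sphere { Vec3 c; double r; };
struct Interval { double tEnter, tExit; };        // parametric span where the ray is inside

static std::optional<Interval> raySphere(Vec3 o, Vec3 d, const Sphere& s)
{
    // Solve |o + t*d - c|^2 = r^2, with d assumed to be normalized.
    Vec3 m{o.x - s.c.x, o.y - s.c.y, o.z - s.c.z};
    double b = m.x*d.x + m.y*d.y + m.z*d.z;
    double c = m.x*m.x + m.y*m.y + m.z*m.z - s.r*s.r;
    double disc = b*b - c;
    if (disc < 0.0) return std::nullopt;
    double sq = std::sqrt(disc);
    return Interval{-b - sq, -b + sq};
}

// CSG intersection: the ray is inside (A AND B) exactly where both intervals overlap.
std::optional<Interval> csgIntersect(Vec3 o, Vec3 d, const Sphere& a, const Sphere& b)
{
    auto ia = raySphere(o, d, a), ib = raySphere(o, d, b);
    if (!ia || !ib) return std::nullopt;
    Interval out{std::max(ia->tEnter, ib->tEnter), std::min(ia->tExit, ib->tExit)};
    return out.tEnter <= out.tExit ? std::optional<Interval>(out) : std::nullopt;
}
```

Union and difference work the same way, just with different interval combinations, which is what "CSG combinations can be transformed into a surface count" boils down to.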
You tagged this OpenGL, so to get this straight: OpenGL can't help you there, as OpenGL is just drawing triangles, not processing complex geometry.
Citing the OpenGL FAQ:
What is OpenGL?
OpenGL stands for Open Graphics Library. It is an API for doing 3D graphics.
In more specific terms, it is an API that is used to "draw triangles on your scene". In this age of GPUs, it is about talking to the GPU so that it does the job of drawing. It does not deal with file formats. It does not open bmp, png and any image format. It does not open 3d object formats like obj, max, maya. It does not do animation. It does not handle keyboard, mouse and any input devices. It does not create a window, and so on.
All that stuff should be handled by an external library (GLUT is one example that is used for creating and destroying a window and handling mouse and keyboard).
GL has gone through a number of versions.
So the answer is no. Things like extrude-cut are complex operations. You have to implement them on your own, or use third-party libraries.

OpenGL, applying texture from image to isosurface

I have a program in which I need to apply a 2-dimensional texture (simple image) to a surface generated using the marching-cubes algorithm. I have access to the geometry and can add texture coordinates with relative ease, but the best way to generate the coordinates is eluding me.
Each point in the volume represents a single unit of data, and each unit of data may have different properties. To simplify things, I'm looking at sorting them into "types" and assigning each type a texture (or portion of a single large texture atlas).
My problem is I have no idea how to generate the appropriate coordinates. I can store the location of the type's texture in the type class and use that, but then seams will be horribly stretched (if two neighboring points use different parts of the atlas). If possible, I'd like to blend the textures on seams, but I'm not sure the best manner to do that. Blending is optional, but I need to texture the vertices in some fashion. It's possible, but undesirable, to split the geometry into parts for each type, or to duplicate vertices for texturing purposes.
I'd like to avoid using shaders if possible, but if necessary I can use a vertex and/or fragment shader to do the texture blending. If I do use shaders, what would be the most efficient way of telling them which texture or portion to sample? It seems like passing the type through a parameter would be the simplest way, but possibly slow.
My volumes are relatively small, 8-16 points in each dimension (I'm keeping them smaller to speed up generation, but there are many on-screen at a given time). I briefly considered making the isosurface twice the resolution of the volume, so each point has more vertices (8, in theory), which may simplify texturing. It doesn't seem like that would make blending any easier, though.
To build the surfaces, I'm using the Visualization Library for OpenGL and its marching cubes and volume system. I have the geometry generated fine, just need to figure out how to texture it.
Is there a way to do this efficiently, and if so what? If not, does anyone have an idea of a better way to handle texturing a volume?
Edit: Just to note, the texture isn't simply a gradient of colors. It's actually a texture, usually with patterns. Hence the difficulty in mapping it, a gradient would've been trivial.
Edit 2: To help clarify the problem, I'm going to add some examples. They may just confuse things, so consider everything above definite fact and these just as help if they can.
My geometry is in cubes, always (loaded, generated and saved in cubes). If shape influences possible solutions, that's it.
I need to apply textures, consisting of patterns and/or colors (unique ones depending on the point's "type") to the geometry, in a technique similar to the splatting done for terrain (this isn't terrain, however, so I don't know if the same techniques could be used).
Shaders are a quick and easy solution, although I'd like to avoid them if possible, as I mentioned before. Something usable in a fixed-function pipeline is preferable, mostly for the minor increase in compatibility and development time. Since it's only a minor increase, I will go with shaders and multipass rendering if necessary.
Not sure if any other clarification is necessary, but I'll update the question as needed.
On the texture combination part of the question:
Have you looked into 3D textures? As we're talking marching cubes, I should probably say immediately that I'm explicitly not talking about volumetric textures. Instead you stack all your 2D textures into a 3D texture. You then encode each texture coordinate as the 2D position it would have, plus the texture it would reference as the third coordinate. It works best if your textures are generally of the type where, logically, to transition from one type of pattern to another you have to go through the intermediaries.
An obvious use example is texture mapping to a simple height map — you might have a snow texture on top, a rocky texture below that, a grassy texture below that and a water texture at the bottom. If a vertex that references the water is next to one that references the snow then it is acceptable for the geometry fill to transition through the rock and grass texture.
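A hedged sketch of the stacking step, assuming all layers share the same size and an RGBA8 format; the layer source is a placeholder and the function name is made up.

```cpp
#include <GL/gl.h>     // glTexImage3D needs GL 1.2+ entry points / a loader
#include <cstdint>
#include <vector>

GLuint buildStackedTexture(const std::vector<const std::uint8_t*>& layers,
                           GLsizei w, GLsizei h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, w, h,
                 static_cast<GLsizei>(layers.size()), 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    for (std::size_t i = 0; i < layers.size(); ++i)
        glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, static_cast<GLint>(i),
                        w, h, 1, GL_RGBA, GL_UNSIGNED_BYTE, layers[i]);

    // Linear filtering along R blends neighbouring layers, which is what
    // gives the cheap transition between "adjacent" material types.
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    return tex;
}
```

Per vertex you would then emit (s, t, layerIndex / (layerCount - 1)) as the 3D texture coordinate, with the third component chosen from the point's type.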
An alternative is to do it in multiple passes using additive blending. For each texture, draw every face that uses that texture and draw a fade to transparent extending across any faces that switch from one texture to another.
You'll probably want to prep the depth buffer with a complete draw (with the colour masks all set to reject changes to the colour buffer) then switch to a GL_EQUAL depth test and draw again with writing to the depth buffer disabled. Drawing exactly the same geometry through exactly the same transformation should produce exactly the same depth values irrespective of issues of accuracy and precision. Use glPolygonOffset if you have issues.
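A hedged outline of that pass ordering in fixed-function GL; drawTerrainWithTexture() is a stand-in for however the geometry is actually submitted.

```cpp
#include <GL/gl.h>

void drawTerrainWithTexture(int textureIndex);   // app-specific; assumed to exist

void multipassBlend(int textureCount)
{
    // Pass 0: depth only, colour writes masked off.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    drawTerrainWithTexture(0);              // colour output is discarded anyway

    // Subsequent passes: identical geometry and transform, additive blending,
    // depth test set to GL_EQUAL and the depth buffer made read-only.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);
    glDepthFunc(GL_EQUAL);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    for (int t = 0; t < textureCount; ++t)
        drawTerrainWithTexture(t);          // faces using texture t, plus the fades

    glDisable(GL_BLEND);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
}
```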
On the coordinates part:
Popular and easy mappings are cylindrical, box and spherical. Conceptualise that your shape is bounded by a cylinder, box or sphere with a well defined mapping from surface points to texture locations. Then for each vertex in your shape, start at it and follow the normal out until you strike the bounding geometry. Then grab the texture location that would be at that position on the bounding geometry.
I guess there's a potential problem that normals tend not to be brilliant after marching cubes, but I'll wager you know more about that problem than I do.
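A minimal sketch of the spherical variant: walk from the vertex along its normal until it hits a bounding sphere and convert the hit point to (u, v). The bounding sphere and the equirectangular-style mapping are assumptions; cylinder and box mappings follow the same pattern with a different conversion at the end.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
struct UV { float u, v; };

UV sphericalProject(Vec3 p, Vec3 n, Vec3 sphereCenter, float sphereRadius)
{
    // Solve |(p - c) + t*n| = R for the positive root t (n assumed normalized,
    // vertex assumed inside the bounding sphere so the root exists).
    Vec3 m{p.x - sphereCenter.x, p.y - sphereCenter.y, p.z - sphereCenter.z};
    float b = m.x*n.x + m.y*n.y + m.z*n.z;
    float c = m.x*m.x + m.y*m.y + m.z*m.z - sphereRadius*sphereRadius;
    float t = -b + std::sqrt(std::max(0.0f, b*b - c));

    Vec3 q{m.x + t*n.x, m.y + t*n.y, m.z + t*n.z};        // hit point relative to the center
    const float pi = 3.14159265f;
    return { 0.5f + std::atan2(q.z, q.x) / (2.0f * pi),   // longitude -> u
             0.5f - std::asin(q.y / sphereRadius) / pi }; // latitude  -> v
}
```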
This is a hard and interesting problem.
The simplest way is to avoid the issue completely by using 3D texture maps, especially if you just want to add some random surface detail to your isosurface geometry. Perlin noise based procedural textures implemented in a shader work very well for this.
The difficult way is to look into various algorithms for conformal texture mapping (also known as conformal surface parametrization), which aim to produce a mapping between 2D texture space and the surface of the 3D geometry which is in some sense optimal (least distorting). This paper has some good pictures. Be aware that the topology of the geometry is very important; it's easy to generate a conformal mapping to map a texture onto a closed surface like a brain, considerably more complex for higher genus objects where it's necessary to introduce cuts/tears/joins.
You might want to try making a UV Map of a mesh in a tool like Blender to see how they do it. If I understand your problem, you have a 3D field which defines a solid volume as well as a (continuous) color. You've created a mesh from the volume, and now you need to UV-map the mesh to a 2D texture with texels extracted from the continuous color space. In a tool you would define "seams" in the 3D mesh which you could cut apart so that the whole mesh could be laid flat to make a UV map. There may be aliasing in your texture at the seams, so when you render the mesh it will also be discontinuous at those seams (ie a triangle strip can't cross over the seam because it's a discontinuity in the texture).
I don't know any formal methods for flattening the mesh, but you could imagine cutting it along the seams and then treating the whole thing as a spring/constraint system that you drop onto a flat surface. I'm all about solving things the hard way. ;-)
Due to the issues with texturing and some of the constraints I have, I've chosen to write a different algorithm to build the geometry and handle texturing directly in that as it produces surfaces. It's somewhat less smooth than the marching cubes, but allows me to apply the texcoords in a way that works for my project (and is a bit faster).
For anyone interested in texturing marching cubes, or just blending textures, Tommy's answer is a very interesting technique and the links timday posted are excellent resources on flattening meshes for texturing. Thanks to both of them for their answers, hopefully they can be of use to others. :)