Creating and implementing a metaball skin system in OpenGL - C++

I'd like to create or find an open-source metaball skin system that uses metaballs and voxelization to create meshes. I have 3 years of experience in C/C++ and OpenGL, yet I have no idea how to tackle this system. What I want is a way to implement, or at least a way to structure, a system like this. Any ideas/thoughts?
Here's a link to the liner notes of someone who was working on an engine that uses this system:
http://chrishecker.com/My_liner_notes_for_spore

I have 3 years of experience in C/C++ and OpenGL, yet I have no idea how to tackle this system.
I find this very hard to believe.
What you need:
The marching cubes algorithm, or marching tetrahedra if you're worried about patents.
A metaball formula.
A meta-particle system consists of several "metaballs", which "emit" "energy".
The "energy" emitted by a metaball can be positive or negative, and normally falls off with distance from the metaball. The falloff is similar to that of a point light, except that instead of a point, any mathematical body can be used (as long as you can calculate the distance to it).
For any given point in space, the "energy" of the metaball system can be, for example, the sum of the energies (they fall off with distance, remember?) of all meta bodies in the system.
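As a rough illustration, here is a minimal sketch of that summed falloff in C++. The Metaball struct, the inverse-square falloff, and the +1 in the denominator are assumptions made for this example; any formula that decreases with distance will do.

#include <vector>

// Hypothetical metaball: a center and a strength (which may be negative).
struct Metaball {
    float x, y, z;
    float strength; // positive adds energy, negative removes it
};

// Energy contributed by one ball at point (px, py, pz); this example
// falls off with the square of the distance, like a point light.
float ballEnergy(const Metaball& b, float px, float py, float pz)
{
    float dx = px - b.x, dy = py - b.y, dz = pz - b.z;
    float distSq = dx * dx + dy * dy + dz * dz;
    return b.strength / (1.0f + distSq); // +1 avoids division by zero
}

// Total energy of the metaball system at a point: the sum over all balls.
float fieldEnergy(const std::vector<Metaball>& balls,
                  float px, float py, float pz)
{
    float e = 0.0f;
    for (const Metaball& b : balls)
        e += ballEnergy(b, px, py, pz);
    return e;
}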
Now, given this "energy field", you visualize metabody system as if you would visualize iso surface (or any other procedural surface).
For any point of space, if "energy" is below certain thresold, this point is outside of "mesh" formed by meta particle.
If "energy" is above that thresold, point is inside the "mesh" formed by meta particles.
Two points give you segment, and knowing line segment, you can calculate point at which segment crosses "surface" of the mesh.
Now, just walk through certain region of space using "marching cubes" algorithm, and you'll get a mesh.
Normals are calculated by sampling the energy level at small offsets (x/y/z ± 0.01, for example) from a point that lies on the surface of the mesh.
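For instance, a central-difference sketch of that sampling, reusing the hypothetical fieldEnergy() helper from the sketch above (the 0.01 offset is just the example value):

#include <cmath>
#include <vector>

// The normal is the normalized negative gradient of the energy field,
// since energy decreases as you move outward through the surface.
// Assumes the Metaball/fieldEnergy() sketch above.
void surfaceNormal(const std::vector<Metaball>& balls,
                   float px, float py, float pz,
                   float& nx, float& ny, float& nz)
{
    const float eps = 0.01f;
    nx = fieldEnergy(balls, px - eps, py, pz) - fieldEnergy(balls, px + eps, py, pz);
    ny = fieldEnergy(balls, px, py - eps, pz) - fieldEnergy(balls, px, py + eps, pz);
    nz = fieldEnergy(balls, px, py, pz - eps) - fieldEnergy(balls, px, py, pz + eps);
    float len = std::sqrt(nx * nx + ny * ny + nz * nz);
    if (len > 0.0f) { nx /= len; ny /= len; nz /= len; }
}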
If this explanation is hard to follow, I think the NVIDIA SDK had examples of the marching cubes algorithm. Read an example and apply the same ideas to meta particles.

Related

B-Spline for any number of control points

I am currently working on a soft-body system using numeric spring physics, and I have finally got that working. My issue is that everything is currently in straight lines.
I am aiming to replicate something similar to the game "The Floor is Jelly", and everything works except the smooth corners and deformation, which are currently straight and angular.
I have tried using cubic Bézier equations, but that just means a new curve every 3 nodes. Is there an equation for Bézier splines that takes in any number n of control points and works with a loop of vec2's (so that node[0] is both the first and last control point)?
Sorry, I don't have any code to show for this, but I'm completely stumped and googling is bringing up nothing.
Simply google "B-spline library" will give you many references. Having said this, B-spline is not your only choice. You can use cubic Hermite spline (which is defined by a series of points and derivatives) (see link for details) as well.
On the other hand, you can also keep using straight lines in your system and create a curve that interpolates the straight-line vertices just for display purposes. To create an interpolating curve through a series of data points, the Catmull-Rom spline is a good choice and is easy to implement. This approach is likely to perform better than actually using a B-spline curve in your system.
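For illustration, here is a minimal uniform Catmull-Rom sketch for a closed loop of 2D nodes; Vec2 and the function names are placeholders, not from any particular library:

#include <vector>

struct Vec2 { float x, y; };

// Uniform Catmull-Rom interpolation between p1 and p2, with p0 and p3
// as their neighbors; t runs from 0 to 1.
Vec2 catmullRom(const Vec2& p0, const Vec2& p1,
                const Vec2& p2, const Vec2& p3, float t)
{
    float t2 = t * t, t3 = t2 * t;
    auto blend = [&](float a, float b, float c, float d) {
        return 0.5f * (2.0f * b + (-a + c) * t
                     + (2.0f * a - 5.0f * b + 4.0f * c - d) * t2
                     + (-a + 3.0f * b - 3.0f * c + d) * t3);
    };
    return { blend(p0.x, p1.x, p2.x, p3.x),
             blend(p0.y, p1.y, p2.y, p3.y) };
}

// For a closed loop, wrap the neighbor indices so node[0] acts as both
// the first and the last control point, as the question asks.
Vec2 sampleLoop(const std::vector<Vec2>& node, int i, float t)
{
    int n = static_cast<int>(node.size());
    return catmullRom(node[(i - 1 + n) % n], node[i % n],
                      node[(i + 1) % n], node[(i + 2) % n], t);
}

Since the curve passes through every node, the existing spring-physics nodes can stay exactly where they are; the spline is purely cosmetic.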
I would use B-splines for this problem, since they can represent smooth curves with a minimal number of control points. In addition, finding the approximating smooth curve for a given data set is a simple linear-algebra problem.
I have written a simple B-spline C++ library (it includes Bézier curves as well) that I am using for scientific computations, here:
https://github.com/feevos/bsplines
It can accept an arbitrary number of control points/multiplicities and give you back a basis. However, creating the B-spline curve that fits your data is something you have to do yourself.
A great implementation of B-splines (but no Bézier curves) also exists in GNU GSL (https://www.gnu.org/software/gsl/manual/html_node/Basis-Splines.html). Again, here you have to make the control points 2D/3D for the given basis and fix the boundary conditions to fit your data.
More information on open/closed curves and B-splines here:
https://www.cs.mtu.edu/~shene/COURSES/cs3621/NOTES/index.html

How do you store voxel data?

I've been looking online and I'm impressed by the capabilities of voxel data, especially for terrain building and manipulation. The problem is that voxels are never clearly explained on any site I visited, nor is how to use/implement them. All I find is that voxels are volumetric data. Please provide a more complete answer: what is volumetric data? It may seem like a simple question, but I'm still unsure.
Also, how would you implement voxel data? (I aim to implement this in a C++ program.) What sort of data type would you use to store the voxel data so that the contents can be modified at run time as fast as possible? I have looked online and couldn't find anything that explained how to store the data. Lists of objects, arrays, etc.?
How do you use voxels?
EDIT:
Since I'm just beginning with voxels, I'll probably start by using them only to model simple objects, but I will eventually be using them for rendering terrain and world objects.
In essence, voxels are a three-dimensional extension of pixels ("volumetric pixels"), and they can indeed be used to represent volumetric data.
What is volumetric data
Mathematically, volumetric data can be seen as a three-dimensional function F(x,y,z). In many applications this function is a scalar function, i.e., it has one scalar value at each point (x,y,z) in space. For instance, in medical applications this could be the density of certain tissues. To represent this digitally, one common approach is to simply make slices of the data: imagine images in the (X,Y)-plane, with the z-value shifted to produce a stack of images. If the slices are close to each other, the images can be displayed as a video sequence, as seen for instance on the wiki page for MRI scans (https://upload.wikimedia.org/wikipedia/commons/transcoded/4/44/Structural_MRI_animation.ogv/Structural_MRI_animation.ogv.360p.webm). As you can see, each point in space has one scalar value, which is represented as a gray level.
Instead of slices or a video, one can also represent this data using voxels. Instead of dividing a 2D plane into a regular grid of pixels, we now divide a 3D volume into a regular grid of voxels. Again, a scalar value can be assigned to each voxel. However, visualizing this is not as trivial: whereas we could just give a gray value to pixels, this does not work for voxels (we would only see the colors of the box itself, not of its interior). In fact, this problem is caused by the fact that we live in a 3D world: we can look at a 2D image from a third dimension and observe it completely, but we cannot look at a 3D voxel space and observe it completely, as we have no 4th dimension to look from (unless you count time as a 4th dimension, i.e., creating a video).
So we can only look at parts of the data. One way, as indicated above, is to make slices. Another way is to look at so-called "iso-surfaces": we create surfaces in the 3D space on which every point has the same scalar value. For a medical scan, this allows you to extract, for instance, the brain from the volumetric data (not just as a slice, but as a 3D model).
Finally, note that surfaces (meshes, terrains, ...) are not volumetric; they are 2D shapes bent, twisted, stretched, and deformed to be embedded in 3D space. Ideally they represent the border of a volumetric object, but not necessarily (e.g., terrain data will probably not be a closed mesh). A way to represent surfaces using volumetric data is to make sure the surface is again an iso-surface of some function. As an example, F(x,y,z) = x^2 + y^2 + z^2 - R^2 can represent a sphere of radius R centered at the origin. For all points (x',y',z') on the sphere, F(x',y',z') = 0. Moreover, for points inside the sphere F < 0, and for points outside the sphere F > 0.
A way to "construct" such a function is by creating a distance map, i.e., creating volumetric data such that every point F(x,y,z) indicates the distance to the surface. Of course, the surface is the collection of all the points for which the distance is 0 (so, again, the iso-surface with value 0 just as with the sphere above).
How to implement
As mentioned by others, this indeed depends on the usage. In essence, the data can be stored in a 3D matrix. However, this is huge! If you want the resolution doubled, you need 8x as much storage, so in general this is not an efficient solution. It will work for smaller examples, but it does not scale well.
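For those smaller examples, the 3D matrix can be as simple as a flat array with computed indices; a minimal sketch (the struct and the x-fastest layout are just one common choice):

#include <vector>

// Dense voxel grid: one scalar per cell in a flat array, indexed as
// x + y*nx + z*nx*ny. Simple and fast to access, but memory grows with
// the cube of the resolution.
struct VoxelGrid {
    int nx, ny, nz;
    std::vector<float> data;

    VoxelGrid(int nx_, int ny_, int nz_)
        : nx(nx_), ny(ny_), nz(nz_),
          data(static_cast<size_t>(nx_) * ny_ * nz_, 0.0f) {}

    float& at(int x, int y, int z)
    {
        return data[static_cast<size_t>(x)
                    + static_cast<size_t>(y) * nx
                    + static_cast<size_t>(z) * nx * ny];
    }
};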
An octree is, as far as I know, the most common structure used to store this. Many implementations and optimizations for octrees exist, so have a look at what can be (re)used. As pointed out by Andreas Kahler, sparse voxel octrees are a more recent approach.
Octrees allow easier navigation to neighbouring cells, parent cells, child cells, and so on (I am assuming the concept of octrees, or quadtrees in 2D, is known). However, if many leaf cells sit at the finest resolution, this data structure comes with huge overhead! So, is it better than a 3D array? That depends on what volumetric data you want to work with and what operations you want to perform.
If the data is used to represent surfaces, octrees will in general be much better: as stated before, surfaces are not really volumetric and hence do not require many voxels with relevant data (hence "sparse" octrees). Referring back to the distance maps, the only relevant data are the points having value 0. The other points can have any value, but they do not matter (in some cases the sign is still kept, to distinguish "interior" from "exterior", but the value itself is not required if only the surface is needed).
How to use
If by "use", you are wondering how to render them, then you can have a look at "marching cubes" and its optimizations. MC will create a triangle mesh from volumetric data, to be rendered in any classical way. Instead of translating to triangles, you can also look at volume rendering to render a "3D sampled data set" (i.e., voxels) as such (https://en.wikipedia.org/wiki/Volume_rendering). I have to admit that I am not that familiar with volume rendering, so I'll leave it at just the wiki-link for now.
Voxels are just 3D pixels, i.e. 3D space regularly subdivided into blocks.
How do you use them? It really depends on what you are trying to do. A ray casting terrain game engine? A medical volume renderer? Something completely different?
Plain 3D arrays might be best for you, but they are memory-intensive. As BWG pointed out, an octree is another popular alternative. Search for "sparse voxel octrees" for a more recent approach.
In popular usage during the '90s and '00s, "voxel" could mean somewhat different things, which is probably one reason you have been finding it hard to find consistent information. In technical imaging literature, it means a 3D volume element. Often, though, it is used to describe what is more clearly termed a high-detail raycasting engine (as opposed to the low-detail raycasting engines in Doom or Wolfenstein). A popular multi-part tutorial lives in the Flipcode archives. Also check out this brief one by Jacco.
There are many old demos you can find out there that should run under emulation. They are good for inspiration and dissection, but tend to use a lot of assembly code.
You should think carefully about what you want to support with your engine: car-racing, flying, 3D objects, planets, etc., as these constraints can change the implementation of your engine. Oftentimes, there is not a data structure, per se, but the terrain heightfield is represented procedurally by functions. Otherwise, you can use an image as a heightfield. For performance, when rendering to the screen, think about level-of-detail, in other words, how many actual pixels will be taken up by the rendered element. This will determine how much sampling you do of the heightfield. Once you get something working, you can think about ways you can blend pixels over time and screen space to make them look better, while doing as little rendering as possible.

Ray-mesh intersection or AABB tree implementation in C++ with little overhead?

Can you recommend me...
either a proven lightweight C / C++ implementation of an AABB tree?
or, alternatively, another efficient data-structure, plus a lightweight C / C++ implementation, to solve the problem of intersecting a large number of rays with a large number of triangles?
"Large number" means several 100k for both rays and triangles.
I am aware that AABB trees are part of the CGAL library and probably of game physics libraries like Bullet. However, I don't want the overhead of an enormous additional library in my project. Ideally, I'd like a small, float-type-templated, header-only implementation. I would also go for something with a bunch of CPP files, as long as it integrates easily into my project. Dependency on Boost is ok.
Yes, I have googled, but without success.
I should mention that my application context is mesh processing, and not rendering. In a nutshell, I'm transferring the topology of a reference mesh to the geometry of a mesh from a 3D scan. I'm shooting rays from vertices and along the normals of the reference mesh towards the 3D scan, and I need to recover the intersection of these rays with the scan.
Edit
Several answers/comments pointed to nearest-neighbor data structures. I have created a small illustration of the problems that arise when ray-mesh intersection is approached with nearest-neighbor methods. They can be used as heuristics that work in many cases, but I'm not convinced that they actually solve the problem systematically, the way AABB trees do.
While this code is a bit old and uses the 3DS Max SDK, it provides a fairly good tree system for object-object collision deformations in C++. I can't tell at a glance whether it is a quadtree, an AABB tree, or even an OBB tree (the comments are a bit skimpy, too).
http://www.max3dstuff.com/max4/objectDeform/help.html
It will require translation from Max to your own system but it may be worth the effort.
Try the ANN library:
http://www.cs.umd.edu/~mount/ANN/
It's "Approximate Nearest Neighbors". I know, you're looking for something slightly different, but here's how you can use this to speed up your data processing:
Feed points into ANN.
Query a user-selectable (think of this as a "per-mesh knob") radius around each vertex that you want to ray-cast from and find out the mesh vertices that are within range.
Select only the triangles that are within that range, and ray trace along the normal to find the one you want.
By judiciously choosing the search radius, you will definitely get a sizable speed-up without compromising on accuracy.
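If it helps, here is a rough sketch of that fixed-radius query using ANN's documented annkFRSearch; the surrounding function and its parameter names are placeholders:

#include <ANN/ANN.h>

// Find the data points within 'radius' of 'query'. ANN works with
// squared distances, and annkFRSearch returns the total count in range
// (which may exceed maxNeighbors, the number of indices written out).
void radiusQuery(ANNpointArray points, int nPts, int dim,
                 ANNpoint query, double radius, int maxNeighbors)
{
    ANNkd_tree tree(points, nPts, dim);
    ANNidxArray  idx  = new ANNidx[maxNeighbors];
    ANNdistArray dist = new ANNdist[maxNeighbors];

    int found = tree.annkFRSearch(query, radius * radius,
                                  maxNeighbors, idx, dist);
    // ... use idx[0 .. min(found, maxNeighbors) - 1] to pick the nearby
    // vertices, then gather their incident triangles ...
    delete[] idx;
    delete[] dist;
}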
If there are no real-time requirements, I'd first try brute force.
1M x 1M ray-triangle tests shouldn't take much more than a few minutes to run (on the CPU).
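For reference, the per-pair test could be the standard Möller-Trumbore ray/triangle intersection; a self-contained sketch (Vec3 and the helpers are placeholders):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Möller-Trumbore: returns true and the distance t along the ray if
// the ray orig + t*dir hits triangle (v0, v1, v2).
bool intersectTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t)
{
    const float EPS = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;   // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > EPS;                           // hit in front of the origin
}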
If that's a problem, the second-best thing to do would be to restrict the search area by calculating an adjacency graph/relation between the triangles/polygons in the target mesh; after an initial guess fails, one can try the adjacent triangles. This of course relies on a lack of self-occlusion / multiple hit points (which I think is one interpretation of "visibility doesn't apply to this problem").
Also, depending on how pathological the topologies are, one could try environment-mapping the target mesh onto a unit cube (each pixel would consist of a list of triangles projected onto it) and testing the initial candidate with a single ray-AABB test + lookup.
Given the feedback, there's one more simple option to consider: space partitioning into a simple 3D grid, where each dimension can be subdivided by the histogram of the x/y/z locations, or even regularly.
A 100x100x100 grid has a very manageable size of 1e6 entries.
The maximum number of cells to visit is proportional to the diameter (at most 300).
There are ~60,000 extreme cells, which suggests on the order of 10 triangles per cell.
One caveat: triangles must be placed in every cell they occupy -- a conservative algorithm will also place them in cells they don't strictly belong to; large triangles will probably require clipping and reassembly.
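A sketch of the point-to-cell mapping such a regular grid needs; the names and the clamping policy are one choice among several:

#include <algorithm>

// Map one coordinate to its cell index in a grid of n cells spanning
// [lo, hi]; clamping keeps boundary points inside the grid.
int cellOf(float v, float lo, float hi, int n)
{
    int i = static_cast<int>((v - lo) / (hi - lo) * n);
    return std::min(std::max(i, 0), n - 1);
}

// A point's flat cell index is then, e.g.:
//   cellOf(p.x, ...) + cellOf(p.y, ...) * nx + cellOf(p.z, ...) * nx * ny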

Library for beam tracing (beam intersection) on a 3D polygon model

I want to simulate a laser scanner which emits a laser beam onto a 3D model to measure distance or other features of the model. The 3D model consists of vertices in xyz coordinates and faces; each vertex also has some user-defined features.
The method should be simple: I define a viewpoint and a view vector (i.e., the laser beam). What I need to do is find the first vertex or face intersected by the view vector; then I can measure the distance and evaluate features from the nearest vertices.
Is there any available library or tools to do that?
What you are talking about is, in a very literal sense, ray tracing. The maths and code behind doing this is not particularly complicated, especially if you don't have to consider reflections. There's a tutorial for doing exactly this in C++ here; triangle intersection is almost as simple as sphere intersection, and you can completely ignore the surface properties. If you don't want to write your own code (but seriously, it's maybe a hundred lines to do what you're looking for), there's a hint as to how to get Povray to do what you're after here.
EDIT: More maths, including triangle intersection, is here.
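If you do write it yourself, the core is just a nearest-hit scan over the triangles. A minimal sketch, assuming some ray/triangle test such as the Möller-Trumbore one sketched earlier on this page (all types and names are placeholders):

#include <limits>
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 v0, v1, v2; };

// Some ray/triangle test, e.g. Möller-Trumbore (see the earlier sketch).
bool intersectTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t);

// Scan all triangles and keep the nearest hit along the beam; returns
// the triangle index, or -1 on a miss. The measured distance is bestT
// times the length of dir.
int nearestHit(const std::vector<Triangle>& tris, Vec3 orig, Vec3 dir, float& bestT)
{
    int best = -1;
    bestT = std::numeric_limits<float>::max();
    for (int i = 0; i < static_cast<int>(tris.size()); ++i) {
        float t;
        if (intersectTriangle(orig, dir, tris[i].v0, tris[i].v1, tris[i].v2, t)
            && t < bestT) {
            bestT = t;
            best = i;
        }
    }
    return best;
}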

Implementing the Marching Cubes Algorithm?

From my last question: Marching Cube Question
However, I am still unclear on:
how to create an imaginary cube/voxel to check whether a vertex is below the isosurface?
how do I know which vertex is below the isosurface?
how does each cube/voxel determine which cube index/surface to use?
how to draw the surface using the data in triTable?
Let's say I have point-cloud data of an apple.
How do I proceed?
Can anybody who is familiar with marching cubes help me?
I only know C++ and OpenGL (C is a little bit out of my hands).
First of all, the isosurface can be represented in two ways. One way is to have the isovalue and per-point scalars as a dataset from an external source; that's how MRI scans work. The second approach is to make an implicit function F() which takes a point/vertex as its parameter and returns a new scalar. Consider this function:
// Distance from the point v to the origin (needs <cmath>).
float computeScalar(const Vector3<float>& v)
{
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}
This computes, for every point in your scalar field, the distance from that point to the origin. If the isovalue is the radius, you have just figured out a way to represent a sphere.
This is because |v| <= R holds for all points inside the sphere or on its surface. Just figure out which vertices are inside the sphere and which are outside. You want to use the less-than or greater-than operators, because the volume divides space in two. When you know which points of your cube are classified as inside and which as outside, you also know which edges the isosurface intersects. You can end up with anything from zero to five triangles. The positions of the mesh vertices can be computed by interpolating along the intersected edges to find the actual intersection points.
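A minimal sketch of that edge interpolation; the struct and names are placeholders:

struct Vec3f { float x, y, z; };

// Linear interpolation along an intersected cube edge: p1/p2 are the
// corner positions, v1/v2 their scalar values, iso the isovalue. The
// mesh vertex lies where the interpolated scalar equals iso.
Vec3f interpEdge(Vec3f p1, Vec3f p2, float v1, float v2, float iso)
{
    float t = (iso - v1) / (v2 - v1); // safe: the edge is only intersected
                                      // when v1 and v2 straddle iso
    return { p1.x + t * (p2.x - p1.x),
             p1.y + t * (p2.y - p1.y),
             p1.z + t * (p2.z - p1.z) };
}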
If you want to represent say an apple with scalar fields, you would either need to get the source data set to plug in to your application, or use a pretty complex implicit function. I recommend getting simple geometric primitives like spheres and tori to work first, and then expand from there.
1) It depends on your implementation. You'll need a data structure in which you can look up the values at each corner (vertex) of the voxel/cube. This can be a 3D image (i.e., a 3D texture in OpenGL), a custom array data structure, or any other format you wish.
2) You need to check the vertices of the cube. There are different optimizations for this, but in general, start with the first corner and just check the values of all 8 corners of the cube.
3) Most (fast) implementations create a bitmask to use as a lookup index into a static array of options; there are only so many possible configurations (see the sketch after this list).
4) Once you've built the triangles from triTable, you can render them with OpenGL.
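As a sketch of the bitmask from 3), assuming the common convention that bit i is set when corner i is below the isovalue (the corner ordering must match whatever triTable you use, e.g. Paul Bourke's):

// The resulting 0-255 cubeIndex selects a row in the edge and triangle
// lookup tables.
int computeCubeIndex(const float corner[8], float iso)
{
    int cubeIndex = 0;
    for (int i = 0; i < 8; ++i)
        if (corner[i] < iso)
            cubeIndex |= 1 << i;
    return cubeIndex;
}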
Let's say I have point-cloud data of an apple. How do I proceed?
This isn't going to work with marching cubes directly. Marching cubes requires voxel data, so you'd need some algorithm to resample the point cloud into a cubic volume; Gaussian splatting is an option here.
Normally, if you are working from a point cloud, and want to see the surface, you should look at surface reconstruction algorithms instead of marching cubes.
If you want to learn more, I'd highly recommend reading some books on visualization techniques. A good one is from the Kitware folks - The Visualization Toolkit.
You might want to take a look at VTK. It has a C++ implementation of Marching Cubes, and is fully open sourced.
As requested, here is some sample code implementing the Marching Cubes algorithm (using JavaScript/Three.js for the graphics):
http://stemkoski.github.com/Three.js/Marching-Cubes.html
For more details on the theory, you should check out the article at
http://paulbourke.net/geometry/polygonise/