From my last question: Marching Cube Question
However, I am still unclear on the following:
1) How do I create an imaginary cube/voxel to check whether a vertex is below the isosurface?
2) How do I know which vertex is below the isosurface?
3) How does each cube/voxel determine which cube index/surface to use?
4) How do I draw the surface using the data in triTable?
Let's say I have point cloud data of an apple. How do I proceed?
Can anybody who is familiar with Marching Cubes help me? I only know C++ and OpenGL (C is a little bit beyond me).
First of all, the isosurface can be represented in two ways. One way is to have the isovalue and per-point scalars as a dataset from an external source; that's how MRI scans work. The second approach is to define an implicit function F() which takes a point/vertex as its parameter and returns a scalar. Consider this function:
#include <cmath>

// Distance from point v to the origin. With the radius R as the isovalue,
// the isosurface of this scalar field is a sphere of radius R.
float computeScalar(const Vector3<float>& v)
{
    return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
}
This computes the distance from each point in your scalar field to the origin. If the isovalue is the radius, you have just found a way to represent a sphere.
This is because |v| <= R holds for all points inside the sphere or on its surface. Just figure out which vertices are inside the sphere and which are outside. You want to use the less-than or greater-than operators because the surface divides space in two. Once you know which corners of your cube are classified as inside and outside, you also know which edges the isosurface intersects. You can end up with anything from zero to five triangles per cube. The positions of the mesh vertices are computed by interpolating along the intersected edges to find the actual intersection points.
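For example, a minimal interpolation helper in the spirit of Paul Bourke's reference code (the Vector3 constructor used here is an assumption; adapt it to your vector type):

// Linearly interpolate the position where the isosurface cuts the edge
// between two cube corners p1 and p2 with scalar values v1 and v2.
Vector3<float> interpolateEdge(float isovalue,
                               const Vector3<float>& p1, const Vector3<float>& p2,
                               float v1, float v2)
{
    float t = (isovalue - v1) / (v2 - v1); // safe: a cut edge has v1 != v2
    return Vector3<float>(p1.x + t * (p2.x - p1.x),
                          p1.y + t * (p2.y - p1.y),
                          p1.z + t * (p2.z - p1.z));
}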
If you want to represent, say, an apple with scalar fields, you would either need a source dataset to plug into your application, or a fairly complex implicit function. I recommend getting simple geometric primitives like spheres and tori working first, and expanding from there.
1) It depends on your implementation. You'll need a data structure where you can look up the value at each corner (vertex) of the voxel/cube. This can be a 3D image (i.e., a 3D texture in OpenGL), a custom array data structure, or any other format you wish.
2) You need to check the vertices of the cube. There are various optimizations, but in general, start with the first corner and test the values of all 8 corners against the isovalue.
3) Most (fast) implementations pack those 8 inside/outside tests into a bitmask and use it as an index into a static lookup table; there are only 256 possible configurations. See the sketch after this list.
4) Once you've built the triangles from triTable, you can render them with OpenGL.
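A minimal sketch of the per-cube step for points 2-4, assuming Paul Bourke's triTable array; cornerValue(), intersectionOnEdge(), and the Triangle type are hypothetical names standing in for your own lookup and interpolation code:

// Classify the 8 corners against the isovalue to build the 8-bit cube index.
int cubeindex = 0;
for (int i = 0; i < 8; ++i)
    if (cornerValue(cube, i) < isovalue)
        cubeindex |= (1 << i);

// triTable[cubeindex] lists the cut edges in groups of three, terminated by -1.
// Each group of three edge intersections forms one triangle (up to five per cube).
for (int i = 0; triTable[cubeindex][i] != -1; i += 3) {
    Triangle tri;
    tri.a = intersectionOnEdge(cube, triTable[cubeindex][i]);
    tri.b = intersectionOnEdge(cube, triTable[cubeindex][i + 1]);
    tri.c = intersectionOnEdge(cube, triTable[cubeindex][i + 2]);
    triangles.push_back(tri); // later handed to OpenGL as a vertex buffer
}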
Let's say I have point cloud data of an apple. How do I proceed?
This isn't going to work with marching cubes directly. Marching cubes requires voxel data, so you'd need some algorithm to resample the point cloud onto a regular voxel grid; Gaussian splatting is an option here.
Normally, if you are working from a point cloud and want to see the surface, you should look at surface reconstruction algorithms instead of marching cubes.
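That said, here is a minimal sketch of the splatting idea, in the classic volume-resampling sense (not the recent real-time rendering technique of the same name); all names and parameters here are illustrative:

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Point3 { float x, y, z; };

// Splat a point cloud (assumed normalized to [0,1]^3) into an N*N*N scalar grid.
// Each point deposits a Gaussian bump; thresholding the grid with an isovalue
// afterwards yields a volume that marching cubes can polygonise.
std::vector<float> splatToGrid(const std::vector<Point3>& cloud, int N, float sigma)
{
    std::vector<float> grid(static_cast<std::size_t>(N) * N * N, 0.0f);
    const float cell = 1.0f / N;
    const int radius = static_cast<int>(std::ceil(3.0f * sigma / cell)); // 3-sigma support
    for (const Point3& p : cloud) {
        int cx = static_cast<int>(p.x * N);
        int cy = static_cast<int>(p.y * N);
        int cz = static_cast<int>(p.z * N);
        for (int z = std::max(cz - radius, 0); z <= std::min(cz + radius, N - 1); ++z)
        for (int y = std::max(cy - radius, 0); y <= std::min(cy + radius, N - 1); ++y)
        for (int x = std::max(cx - radius, 0); x <= std::min(cx + radius, N - 1); ++x) {
            float dx = (x + 0.5f) * cell - p.x;
            float dy = (y + 0.5f) * cell - p.y;
            float dz = (z + 0.5f) * cell - p.z;
            grid[(static_cast<std::size_t>(z) * N + y) * N + x] +=
                std::exp(-(dx*dx + dy*dy + dz*dz) / (2.0f * sigma * sigma));
        }
    }
    return grid;
}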
If you want to learn more, I'd highly recommend reading some books on visualization techniques. A good one is from the Kitware folks - The Visualization Toolkit.
You might want to take a look at VTK. It has a C++ implementation of Marching Cubes and is fully open source.
As requested, here is some sample code implementing the Marching Cubes algorithm (using JavaScript/Three.js for the graphics):
http://stemkoski.github.com/Three.js/Marching-Cubes.html
For more details on the theory, you should check out the article at
http://paulbourke.net/geometry/polygonise/
Related
I've been looking online and I'm impressed by the capabilities of voxel data, especially for terrain building and manipulation. The problem is that voxels are never clearly explained on any site I visited, nor is how to use or implement them. All I find is that voxels are volumetric data. Please provide a more complete answer: what is volumetric data? It may seem like a simple question, but I'm still unsure.
Also, how would you implement voxel data? (I aim to implement this in a C++ program.) What sort of data type would you use to store the voxels so that the contents can be modified at run time as fast as possible? I have looked online and couldn't find anything explaining how to store the data: lists of objects, arrays, etc.
How do you use voxels?
EDIT:
Since I'm just beginning with voxels, I'll probably start by using them to model simple objects, but I will eventually use them for rendering terrain and world objects.
In essence, voxels are a three-dimensional extension of pixels ("volumetric pixels"), and they can indeed be used to represent volumetric data.
What is volumetric data
Mathematically, volumetric data can be seen as a three-dimensional function F(x,y,z). In many applications this is a scalar function, i.e., it has one scalar value at each point (x,y,z) in space. In medical applications, for instance, this could be the density of certain tissues. One common way to represent this digitally is to slice the data: imagine images in the (X,Y)-plane, and step through z-values to get a stack of images. If the slices are close to each other, the images can be displayed as a video sequence, as seen for instance on the wiki page for MRI scans (https://upload.wikimedia.org/wikipedia/commons/transcoded/4/44/Structural_MRI_animation.ogv/Structural_MRI_animation.ogv.360p.webm). As you can see, each point in space has one scalar value, represented as a grayscale.
Instead of slices or a video, one can also represent this data using voxels. Instead of dividing a 2D plane into a regular grid of pixels, we now divide a 3D region into a regular grid of voxels. Again, a scalar value can be assigned to each voxel. However, visualizing this is not as trivial: whereas we could just give a gray value to pixels, this does not work for voxels (we would only see the outside of the volume, not its interior). In fact, this problem is caused by the fact that we live in a 3D world: we can look at a 2D image from the third dimension and observe it completely, but we cannot look at a 3D voxel space and observe it completely, as we have no 4th dimension to look from (unless you count time as a 4th dimension, i.e., making a video).
So we can only look at parts of the data. One way, as indicated above, is to make slices. Another is to look at so-called iso-surfaces: surfaces in the 3D space on which every point has the same scalar value. For a medical scan, this allows you to extract, for instance, the brain from the volumetric data (not just as a slice, but as a 3D model).
Finally, note that surfaces (meshes, terrains, ...) are not volumetric; they are 2D shapes bent, twisted, stretched and deformed to be embedded in 3D space. Ideally they represent the border of a volumetric object, but not necessarily (e.g., terrain data will probably not be a closed mesh). A way to represent surfaces using volumetric data is to make the surface an iso-surface of some function. As an example: F(x,y,z) = x^2 + y^2 + z^2 - R^2 can represent a sphere with radius R, centered on the origin. For all points (x',y',z') on the sphere, F(x',y',z') = 0; moreover, F < 0 for points inside the sphere and F > 0 for points outside it.
A way to construct such a function is to create a distance map, i.e., volumetric data in which every value F(x,y,z) is the distance to the surface. The surface is then the collection of all points where the distance is 0 (so, again, the iso-surface with value 0, just as with the sphere above).
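As a concrete illustration, here is a minimal sketch that samples the sphere function above into a voxel grid; the iso-surface with value 0 is then the sphere's boundary:

#include <cstddef>
#include <vector>

// Sample F(x,y,z) = x^2 + y^2 + z^2 - R^2 on an N*N*N grid spanning [-1,1]^3.
// Negative entries lie inside the sphere, positive outside, zero on the surface.
std::vector<float> sampleSphere(int N, float R)
{
    std::vector<float> volume(static_cast<std::size_t>(N) * N * N);
    for (int k = 0; k < N; ++k)
        for (int j = 0; j < N; ++j)
            for (int i = 0; i < N; ++i) {
                float x = -1.0f + 2.0f * i / (N - 1);
                float y = -1.0f + 2.0f * j / (N - 1);
                float z = -1.0f + 2.0f * k / (N - 1);
                volume[(static_cast<std::size_t>(k) * N + j) * N + i] =
                    x * x + y * y + z * z - R * R;
            }
    return volume;
}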
How to implement
As mentioned by others, this indeed depends on the usage. In essence, the data can be stored in a 3D matrix. However, this is huge! If you double the resolution, you need 8x as much storage, so in general this is not an efficient solution. It works for smaller examples, but does not scale well.
An octree structure is, afaik, the most common structure to store this. Many implementations and optimizations for octrees exist, so have a look at what can be (re)used. As pointed out by Andreas Kahler, sparse voxel octrees are a recent approach.
Octrees allow easy navigation to neighbouring cells, parent cells, child cells, and so on (I am assuming the concept of octrees, or quadtrees in 2D, is known). However, if many leaf cells sit at the finest resolution, this data structure comes with a huge overhead! So whether it is better than a 3D array depends on what volumetric data you want to work with and what operations you want to perform.
If the data is used to represent surfaces, octrees will in general be much better: as stated before, surfaces are not really volumetric, hence few voxels carry relevant data (hence "sparse" octrees). Referring back to the distance maps, the only relevant data are the points with value 0. The other points can have any value; they do not matter (in some cases the sign is still kept to distinguish interior from exterior, but the value itself is not required if only the surface is needed).
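For illustration, a bare-bones sparse octree node might look like the sketch below; production octrees usually add pooled allocation, locational codes, and similar optimizations:

#include <array>
#include <memory>

// A node either stores a value (leaf) or eight children (internal).
// Children that would cover empty space stay null, which is what
// makes the tree "sparse".
struct OctreeNode {
    float value = 0.0f;                                   // payload for leaves
    std::array<std::unique_ptr<OctreeNode>, 8> children;  // all null for a leaf

    bool isLeaf() const {
        for (const auto& c : children)
            if (c) return false;
        return true;
    }
};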
How to use
If by "use", you are wondering how to render them, then you can have a look at "marching cubes" and its optimizations. MC will create a triangle mesh from volumetric data, to be rendered in any classical way. Instead of translating to triangles, you can also look at volume rendering to render a "3D sampled data set" (i.e., voxels) as such (https://en.wikipedia.org/wiki/Volume_rendering). I have to admit that I am not that familiar with volume rendering, so I'll leave it at just the wiki-link for now.
Voxels are just 3D pixels, i.e. 3D space regularly subdivided into blocks.
How do you use them? It really depends on what you are trying to do. A ray casting terrain game engine? A medical volume renderer? Something completely different?
Plain 3D arrays might be the best for you, but they are memory-intensive. As BWG pointed out, octrees are another popular alternative. Search for Sparse Voxel Octrees for a more recent approach.
In popular usage during the '90s and '00s, 'voxel' could mean somewhat different things, which is probably one reason you have found it hard to get consistent information. In technical imaging literature, it means a 3D volume element. Often, though, it is used to describe what is somewhat more clearly termed a high-detail raycasting engine (as opposed to the low-detail raycasting engine in Doom or Wolfenstein). A popular multi-part tutorial lives in the Flipcode archives. Also check out this brief one by Jacco.
There are many old demos you can find out there that should run under emulation. They are good for inspiration and dissection, but tend to use a lot of assembly code.
You should think carefully about what you want to support with your engine: car-racing, flying, 3D objects, planets, etc., as these constraints can change the implementation of your engine. Oftentimes, there is not a data structure, per se, but the terrain heightfield is represented procedurally by functions. Otherwise, you can use an image as a heightfield. For performance, when rendering to the screen, think about level-of-detail, in other words, how many actual pixels will be taken up by the rendered element. This will determine how much sampling you do of the heightfield. Once you get something working, you can think about ways you can blend pixels over time and screen space to make them look better, while doing as little rendering as possible.
I'm working on a 3D building app. The building is done on a 3D grid (like a Rubik's Cube), and each cell of the grid is either a solid cube or a 45-degree slope. To illustrate, here's a picture of a chamfered cube I pulled off Google Images:
Ignore the image on the right; the focus is the one on the left. Currently, in the building phase, I draw each face of each cell separately. When it comes to exporting, though, I'd like to simplify. So in the cube above, I'd like each of the up/down/left/right/back/front faces to be composed of a single quad (two triangles), and the chamfered edges to be reduced from two quads to one each.
What I've been trying to do most recently is the following:
Iterate through the shape layer by layer, from all directions, and for each layer figure out a good simplification (remove overlapping edges to create a single polygon, then split the polygon to avoid holes, and use ear clipping to triangulate).
I'm clearly overcomplicating things (at least I hope I am). If I've got a list of vertices, normals, and indices (currently with lots of duplicate vertices), is there some tidy way to simplify? The constraint is that indices can't be shared between faces (because I need the normals pointing in different directions), but otherwise I don't mind if it's not the fastest or most optimal solution; I'd rather it be easy to implement and maintain.
EDIT: Just to further clarify, I've already performed hidden face removal, that's not an issue. And secondly, it's of utmost importance that there is no degradation in quality, only simplification of the faces themselves (I need to retain the sharp edges).
Thanks goes to Roger Rowland for the great tips! If anyone else stumbles upon this question, here's a short summary of what I did:
First thing to tackle: ensure that the mesh you are attempting to simplify is a manifold mesh! This is a requirement for traversing halfedge data structures. One instance where I had issues with this was overlapping quads and triangles; I initially resolved to leave the quads whole rather than splitting them into triangles, because it was easier, but that resulted in edges that broke the halfedge mesh.
Once the mesh is manifold, create a halfedge mesh out of the vertices and faces.
With that done, decimate the mesh. I did it via edge collapsing, choosing which edges to collapse by normal deviation (in my case, if the faces resulting from a collapse had normals different from their original values, the collapse was not performed).
I tried my own implementation first, but ran into frustrating bugs, so I opted to use OpenMesh instead (it's very easy to get started with).
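For reference, the OpenMesh route looks roughly like the sketch below, pairing a quadric priority module with a normal-deviation constraint; this is an outline based on OpenMesh's decimation framework, so verify the module names and signatures against the version you use:

#include <OpenMesh/Core/Mesh/TriMesh_ArrayKernelT.hh>
#include <OpenMesh/Tools/Decimater/DecimaterT.hh>
#include <OpenMesh/Tools/Decimater/ModNormalDeviationT.hh>
#include <OpenMesh/Tools/Decimater/ModQuadricT.hh>

typedef OpenMesh::TriMesh_ArrayKernelT<> Mesh;

void decimatePreservingSharpEdges(Mesh& mesh)
{
    mesh.request_face_normals();
    mesh.update_face_normals();

    OpenMesh::Decimater::DecimaterT<Mesh> decimater(mesh);

    // Priority module: quadric error decides the collapse order.
    OpenMesh::Decimater::ModQuadricT<Mesh>::Handle modQuadric;
    decimater.add(modQuadric);

    // Binary constraint: forbid collapses that tilt any face normal by more
    // than ~1 degree, which keeps the sharp cube/slope edges intact.
    OpenMesh::Decimater::ModNormalDeviationT<Mesh>::Handle modNormal;
    decimater.add(modNormal);
    decimater.module(modNormal).set_normal_deviation(1.0f);

    decimater.initialize();
    decimater.decimate();       // collapse as many edges as the modules allow
    mesh.garbage_collection();  // physically remove the deleted elements
}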
There's still one issue I have yet to resolve: if two cubes touch diagonally, the result is an edge with four faces connected to it: a complex (non-manifold) edge! I suspect it'd be straightforward to iterate through the edges checking the number of connected faces, and resolve such cases by duplicating the appropriate vertices. But with that said, it's not something I'm going to invest time in fixing unless it becomes a critical issue later on.
I am giving a theoretical answer.
For the figure on the left: find all edge-sharing triangles with the same normal direction. Compare unit normals, since scaling a normal by a positive factor does not change its direction. Merge such triangles into one polygon, then re-triangulate it, favouring triangles with good aspect ratios; that gives the result you want.
Here is another simple approach to mesh simplification.
Take the normals and divide each by its magnitude (the square root of the sum of the squared components) to get unit normal vectors. Then take adjacent triangles and compute the dot product of their unit normals (multiply the x, y, z components pairwise and add). This gives the cosine of the angle between the normals, and hence between the triangles. Pick a range (like 0.99-1), merge all adjacent triangles whose cosine with the reference triangle falls in that range, and re-triangulate. Small triangles pointing in odd directions can usually be ignored.
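In code, the cosine test might look like this (a sketch; the Vec3 type is illustrative):

#include <cmath>

struct Vec3 { float x, y, z; };

// Unit-length version of n: divide by the magnitude.
Vec3 normalize(const Vec3& n)
{
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}

// True if two faces are near enough to coplanar to merge: the dot product
// of unit normals is the cosine of the angle between them.
bool nearlyCoplanar(const Vec3& n1, const Vec3& n2, float cosThreshold = 0.99f)
{
    Vec3 a = normalize(n1), b = normalize(n2);
    return a.x * b.x + a.y * b.y + a.z * b.z >= cosThreshold;
}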
There is also a simpler mesh-reduction proposal for shapes like your left figure or buildings: define a fixed set of face directions (here 6 + 8 = 14 normals), classify every face by the nearest of these directions (again by dot product), then merge and re-triangulate.
Google "mesh simplification". You'll find that this is a huge, heavily researched problem. Take a look at these introductory resources: link (p.11 starts the good stuff) and link. CGAL has a good discussion as well: link.
Once familiar with the issues, you'll have some decisions to make when applying simplification to your problem. How fast should the simplification be? How important is accuracy? (Iterative vertex clustering is quick and dirty, but its results can be arbitrarily ugly.) Can you rely on a 3rd-party library? (e.g., CGAL? GTS no longer appears active, but there are others.)
I have various point clouds defining RT-STRUCTs, called ROIs, from DICOM files. The DICOM files come from tomographic scanners. Each ROI is a point cloud representing some 3D object.
The goal is to get the 2D curve formed by a plane cutting the ROI's point cloud. The problem is that I can't just use the points the plane happens to intersect. What I probably need is to intersect a 3D concave hull with the plane and take the resulting intersection contour.
Are there any libraries that have already implemented these operations? I've found the PCL library, which should probably be able to solve my problem, but I can't figure out how to achieve this with PCL. I can also use Matlab; we call it from C++ through its runtime.
Has anyone stumbled upon this problem already?
P.S. As mentioned above, I need to use the solution from my C++ code, so it should be a library or a Matlab solution that I'll use through the Matlab Runtime.
P.P.S. Accuracy in this kind of calculation is really important: it will be used in medical software intended for work with brain tumors, so you can imagine the consequences of an error (:
You first need to form a surface from the point set.
If it's possible to pick a 2D direction for the points (i.e., they form a convex hull in one view), you can use a simple 2D Delaunay triangulation in those two coordinates.
Otherwise, you need a full 3D surfacing algorithm (marching cubes or Poisson reconstruction).
Then, once you have the triangles, it's simple to calculate the contour line where a plane cuts them.
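A sketch of that last step (illustrative types; a robust version needs extra care for triangles that merely graze the plane):

#include <optional>
#include <utility>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Signed distance of p from the plane dot(n, x) = d.
static float signedDist(const Vec3& p, const Vec3& n, float d) { return dot(n, p) - d; }

// Point where the segment [a,b] crosses the plane (da and db have opposite signs).
static Vec3 crossPlane(const Vec3& a, const Vec3& b, float da, float db)
{
    float t = da / (da - db);
    return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), a.z + t * (b.z - a.z) };
}

// Returns the contour segment where the plane cuts the triangle, if any.
std::optional<std::pair<Vec3, Vec3>>
cutTriangle(const Vec3& v0, const Vec3& v1, const Vec3& v2, const Vec3& n, float d)
{
    float d0 = signedDist(v0, n, d), d1 = signedDist(v1, n, d), d2 = signedDist(v2, n, d);
    Vec3 pts[2];
    int found = 0;
    if (d0 * d1 < 0) pts[found++] = crossPlane(v0, v1, d0, d1);
    if (d1 * d2 < 0) pts[found++] = crossPlane(v1, v2, d1, d2);
    if (d2 * d0 < 0) pts[found++] = crossPlane(v2, v0, d2, d0);
    if (found == 2) return std::make_pair(pts[0], pts[1]);
    return std::nullopt; // the plane misses the triangle or only touches a vertex
}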
See links in Mesh generation from points with x, y and z coordinates
Perhaps you could simply discard the points that are far from the plane and project the remaining ones onto it. You'll still need to reconstruct the curve in the plane, but there are several good methods for that. See for instance http://www.cse.ohio-state.edu/~tamaldey/curverecon.htm and http://valis.cs.uiuc.edu/~sariel/research/CG/applets/Crust/Crust.html.
Okay, I understand that voxels are basically the volumetric version of a pixel.
After that, I have no idea what to even look for.
Googling doesn't turn up any tutorials, I can't find a book on it anywhere, and I can't find anything covering even the basic idea of what a voxel really is.
I know much of the C++ library, and have the basics of OpenGL down.
Can someone point me in the right direction?
EDIT: I guess I'm just confused about how to implement them. Sorry for being a pain; it's just that I can't really find anything I can easily relate to... I think I was imagining a voxel as something like a vector in which you can actually store data.
Can a voxel be represented as ANY 3D shape? For example, say I wanted the shape to be a cylinder. Is this possible, or do they have to tile together like cubes?
Minecraft is a good example of using voxels. In Minecraft each voxel is a cube.
To see a C++ example you can look at the Minecraft clone Minetest-c55. It is open source, so you can read all of the source code to see how it's done.
Being cubes is not a requirement of voxels. They could be pyramids or any other shape that can fit together.
I suspect that you are looking for information on Volume Rendering techniques (since you mention voxels and OpenGL). You can find plenty of simple rendering code in C++, and more advanced OpenGL shaders as well with a little searching on that term.
In the simplest possible implementation, a voxel space is just a 3-dimensional array. For solids you could use a single bit per voxel: 1 == filled and 0 == empty. You use implicit formulas to make shapes; e.g., a sphere is all the voxels within a given radius of the center voxel.
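A minimal sketch of that idea, one bit per voxel, filled with an implicit sphere:

#include <cstddef>
#include <vector>

// A dense N*N*N occupancy grid; std::vector<bool> packs one bit per voxel.
struct VoxelGrid {
    int N;
    std::vector<bool> filled;
    explicit VoxelGrid(int n) : N(n), filled(static_cast<std::size_t>(n) * n * n, false) {}

    std::vector<bool>::reference at(int x, int y, int z) {
        return filled[(static_cast<std::size_t>(z) * N + y) * N + x];
    }
};

// Mark every voxel whose center lies within `radius` cells of the center voxel.
void fillSphere(VoxelGrid& g, int radius)
{
    int c = g.N / 2;
    for (int z = 0; z < g.N; ++z)
        for (int y = 0; y < g.N; ++y)
            for (int x = 0; x < g.N; ++x)
                if ((x-c)*(x-c) + (y-c)*(y-c) + (z-c)*(z-c) <= radius*radius)
                    g.at(x, y, z) = true;
}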
Voxels are not really compatible with polygon-based 3d rendering, but they are widely used in image analysis, medical imaging, computer vision...
Let's Make a Voxel Engine on Google Sites might help one to get started creating a voxel based engine:
https://sites.google.com/site/letsmakeavoxelengine/
In addition to that there are presentations of the results on Youtube worth checking:
http://www.youtube.com/watch?v=nH_bHqury9Q&list=PL3899B2CEE4CD4687
Typically, a voxel is a position in some 3D space that has a volume (analogous to the area that a pixel covers).
Just as a pixel in an image holds a scalar value (grayscale) or a vector of values (like the red, green, and blue components of a color image, or its hue, saturation, and value components), a voxel can hold a scalar or a vector of values.
Natural examples of volumetric images containing voxels are 3D medical images such as CT, MRI, and 3D ultrasound.
Mathematically speaking, a 3D image is a function from some voxel space to some set of numbers.
Look for Voxlap, or try this http://www.html5code.com/gallery/voxel-rain/ or write your own code. Yes, a voxel can be reduced to a 3D coordinate (which can be implied by its position in the file structure) plus a graphical representation, which can be anything (cube, sphere, picture, color, ...), just as a pixel is a 2D coordinate with a color index.
You only need to parse your file and render the corresponding voxels. Sadly, there is no 'right' file format, although Voxlap's file formats seem pretty neat.
Good luck!
I'd like to make a game where the terrain is not flat and is based on a PNG. How is this done in theory, given the object's vec2 and its angle? If, for instance, there is a hill, the character should rotate to match the angle of the hill. Thanks.
EDIT: It's 2D, like Mario.
I think you are talking about a heightmap, which is your PNG converted to a 3D triangle mesh. You need to use information from the mesh (or the PNG color values) to calculate the height at which to place your character.
If this is a flying character, you're pretty much done here, but in your case you also need the normal vector of the triangle the character is standing on. That's straightforward with the cross product of two triangle edge vectors, (V2 - V1) x (V3 - V1). That normal should give your character's angle as well. You could average it with the normals of the surrounding triangles for a smoother result.
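For example (a sketch; the Vec3 type is illustrative):

#include <cmath>

struct Vec3 { float x, y, z; };

// Normal of the triangle (v1, v2, v3): the cross product (v2 - v1) x (v3 - v1),
// normalized to unit length so it can serve as the character's "up" direction.
Vec3 triangleNormal(const Vec3& v1, const Vec3& v2, const Vec3& v3)
{
    Vec3 a = { v2.x - v1.x, v2.y - v1.y, v2.z - v1.z };
    Vec3 b = { v3.x - v1.x, v3.y - v1.y, v3.z - v1.z };
    Vec3 n = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}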
By the way, once you have the triangle normals you can apply some basic shading to the ground as well.
Added: The OP changed the question to be a 2D problem. The above approach still works, but it's much easier in 2D.
Use the height values not as triangles but as a polyline (the silhouette) and calculate the normal of the current line segment instead. That is, create a vector v between the current height value and the next; the normal of that vector is n = <-v.y, v.x>. Use that as the angle of your character.
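A sketch of the 2D case; heights is an illustrative array of per-column terrain heights:

#include <cmath>

// Slope angle (in radians) of the terrain segment under column x.
// The vector between adjacent height samples gives the segment's direction;
// rotating the character by atan2 of that vector matches it to the slope.
float terrainAngle(const float* heights, int x)
{
    float vx = 1.0f;                          // one column to the right
    float vy = heights[x + 1] - heights[x];   // change in height (x+1 must be in range)
    return std::atan2(vy, vx);
}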
You need a mapping function that converts the PNG data to a 3D representation. This mapping function can be simple, as in interpreting grayscale values in the PNG as altitude (or with human guidance), or it can be complex, as in the shadow detection used in advanced computer vision algorithms. In either case, you would then move your character based on the data obtained through the mapping function.
In C++, I would suggest looking for a 3D game engine that supports more than just flat terrain.
You could start with this list
That is of course, if you're not trying to start from scratch, in which case you need to look for algorithms first.
Edit:
Since it's the game and not the terrain that is 2D: if you want to make your environment out of an image, you're in for quite some edge detection.