Marching Cubes, voxels, need some suggestions - C++

I'm trying to construct proper destructible terrain, just for research purposes.
Everything works, but the resolution does not satisfy me.
I have seen a lot of examples of how people implement the MC algorithm, but most of them,
as far as I understand, use functions to triangulate the final mesh, which is not
suitable for my case.
I will briefly explain how I construct my terrain, and maybe one of you
can suggest how to improve it, or how to increase the resolution of the final terrain.
1) Precalculating MC triangles.
I run a simple loop through the MC lookup tables for each case (0-255) and calculate the triangles
in the range [0,0,0] - [1,1,1].
No problems here.
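For reference, a rough sketch of that precalculation, assuming the standard published triTable (256 rows of edge indices, each row terminated by -1) and the usual unit-cube corner/edge numbering; with binary voxels, each generated vertex simply sits on the midpoint of its edge:

#include <vector>

struct Vec3 { float x, y, z; };

extern const int triTable[256][16]; // standard MC triangle table, defined elsewhere

// Which two cube corners each of the 12 edges connects, and where the corners sit.
static const int edgeCorners[12][2] = {
    {0,1},{1,2},{2,3},{3,0},{4,5},{5,6},{6,7},{7,4},{0,4},{1,5},{2,6},{3,7}
};
static const Vec3 cornerPos[8] = {
    {0,0,0},{1,0,0},{1,1,0},{0,1,0},{0,0,1},{1,0,1},{1,1,1},{0,1,1}
};

std::vector<Vec3> caseTriangles[256]; // precalculated triangle vertices per case

void PrecalculateCases()
{
    for (int c = 0; c < 256; ++c)
        for (int i = 0; triTable[c][i] != -1; ++i)
        {
            int e = triTable[c][i];
            const Vec3 &a = cornerPos[edgeCorners[e][0]];
            const Vec3 &b = cornerPos[edgeCorners[e][1]];
            Vec3 v; // binary voxels: the vertex lands on the edge midpoint
            v.x = (a.x + b.x) * 0.5f;
            v.y = (a.y + b.y) * 0.5f;
            v.z = (a.z + b.z) * 0.5f;
            caseTriangles[c].push_back(v);
        }
}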
2) Terrain
I have a terrain class which stores my voxels.
In general, it looks like this:
int size = 32;//Size of each axis.
unsigned char *voxels = new unsigned char[(size * size * size)/8];
So each axis is 32 units long, but I store the voxel information per bit,
meaning that if a bit is turned on (1), there is something there and it should be drawn.
I have a couple of functions:
TurnOn(x,y,z);
TurnOff(x,y,z);
to turn a voxel location on or off (they take care of the bit manipulation).
Once the terrain is allocated, I run Perlin noise and turn bits on or off.
My terrain class has one more function, which extracts the marching cubes case number (0-255) for an x,y,z location:
unsigned char GetCaseNumber(x,y,z);
by determining whether the neighbours of that voxel are turned on or off.
No problems here.
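For context, a minimal sketch of how that bit-packed storage and case extraction could look; the IsOn and Index helpers, the corner ordering, and the exact index math are my assumptions, not necessarily the original code:

#include <cstring>

class Terrain {
    int size;               // voxels per axis
    unsigned char *voxels;  // one bit per voxel
public:
    Terrain(int s) : size(s), voxels(new unsigned char[(s * s * s + 7) / 8]) {
        std::memset(voxels, 0, (s * s * s + 7) / 8);
    }
    ~Terrain() { delete[] voxels; }

    int Index(int x, int y, int z) const { return x + y * size + z * size * size; }

    bool IsOn(int x, int y, int z) const {
        int i = Index(x, y, z);
        return (voxels[i >> 3] >> (i & 7)) & 1;
    }
    void TurnOn(int x, int y, int z)  { int i = Index(x, y, z); voxels[i >> 3] |=  (unsigned char)(1 << (i & 7)); }
    void TurnOff(int x, int y, int z) { int i = Index(x, y, z); voxels[i >> 3] &= (unsigned char)~(1 << (i & 7)); }

    // Case number of the cell whose lowest corner is (x,y,z), for x,y,z < size-1:
    // bit i is set when corner i of that cell is solid.
    unsigned char GetCaseNumber(int x, int y, int z) const {
        static const int corner[8][3] = {
            {0,0,0},{1,0,0},{1,1,0},{0,1,0},{0,0,1},{1,0,1},{1,1,1},{0,1,1}
        };
        unsigned char c = 0;
        for (int i = 0; i < 8; ++i)
            if (IsOn(x + corner[i][0], y + corner[i][1], z + corner[i][2]))
                c |= (unsigned char)(1 << i);
        return c;
    }
};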
3) Rendering part
I loop over all three axes (every cell), extract the case number, fetch the precalculated triangles for that case,
translate them to the cell's x,y,z coordinates, and draw those triangles.
No problems here.
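Put together, the rebuild/draw pass is essentially the following sketch (reusing the hypothetical Terrain and caseTriangles from the sketches above; DrawTriangles stands in for whatever the renderer actually does):

void BuildAndDraw(const Terrain &terrain, int size)
{
    for (int z = 0; z < size - 1; ++z)
        for (int y = 0; y < size - 1; ++y)
            for (int x = 0; x < size - 1; ++x)
            {
                unsigned char c = terrain.GetCaseNumber(x, y, z);
                if (c == 0 || c == 255)
                    continue;                              // cell is fully empty or fully solid
                std::vector<Vec3> tris = caseTriangles[c]; // copy, then translate into place
                for (size_t i = 0; i < tris.size(); ++i) {
                    tris[i].x += (float)x;
                    tris[i].y += (float)y;
                    tris[i].z += (float)z;
                }
                DrawTriangles(tris);                       // hypothetical draw call
            }
}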
So the result looks like this:
But as you can see, at any single location the resolution is not comparable to, for example, this:
(source: angelfire.com)
I have seen MC examples where people use something called "iso values", which I don't understand.
Any suggestions on how to improve my work, or an explanation of what iso values are and how to implement them on a uniform grid, would be truly lovely.

The problem is that your voxels are a binary mask (just on or off).
This is great for the "default" marching cubes algorithm, but it does mean you get sharp edges in your mesh.
The smooth example is probably generated from smooth scalar data.
Imagine that your data varies smoothly between 0.0 and 1.0, and you set your threshold to 0.5. Now, after you detect which configuration a given cube falls into, you look at all the vertices generated.
Say you have a vertex on an edge between two voxels, one with value 0.4 and the other 0.7. Then you move the vertex to the position where you would get exactly 0.5 (the threshold) when interpolating between 0.4 and 0.7, so it ends up closer to the 0.4 voxel.
This way, each vertex is exactly on the interpolated iso surface and you will generate much smoother triangles.
But it does require that your input voxels are scalar (and vary smoothly). If your voxels are bi-level (all either 0 or 1), this will produce the same triangles as you got earlier.
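For concreteness, a minimal sketch of that edge interpolation (isoLevel would be the 0.5 threshold from the example; v1 and v2 are the scalar densities stored at the two corner voxels):

#include <cmath>

struct Vec3 { float x, y, z; };

// Place the vertex on the edge between corners p1 and p2 so that the
// interpolated density equals isoLevel. For v1 = 0.4 and v2 = 0.7 with
// isoLevel = 0.5, t = 1/3, i.e. the vertex ends up nearer the 0.4 corner.
Vec3 InterpolateVertex(float isoLevel, Vec3 p1, Vec3 p2, float v1, float v2)
{
    if (std::fabs(v2 - v1) < 1e-6f)
        return p1;                          // degenerate edge, avoid dividing by zero
    float t = (isoLevel - v1) / (v2 - v1);
    Vec3 p;
    p.x = p1.x + t * (p2.x - p1.x);
    p.y = p1.y + t * (p2.y - p1.y);
    p.z = p1.z + t * (p2.z - p1.z);
    return p;
}

Note that this replaces the fixed midpoint placement: the precalculated per-case triangles no longer work unchanged, because the vertex positions now depend on the actual densities of each cell.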
Another idea (not an answer to your question, but perhaps useful):
To get smoother rendering without mathematical correctness, it could be worthwhile to compute an average normal vector for each vertex and use that normal for every triangle connecting to it. This will hide the sharp edges.
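A rough sketch of that averaging, assuming an indexed triangle mesh (with a raw triangle soup you would first weld vertices that share a position, so neighbouring triangles actually reference the same vertex):

#include <vector>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Sub(Vec3 a, Vec3 b)   { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static Vec3 Cross(Vec3 a, Vec3 b) { Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; return r; }

// Sum the (unnormalized) face normals of every triangle touching a vertex,
// then normalize the sum; shared vertices end up with a smoothed normal.
std::vector<Vec3> AverageNormals(const std::vector<Vec3> &pos,
                                 const std::vector<unsigned> &indices)
{
    Vec3 zero = { 0, 0, 0 };
    std::vector<Vec3> normals(pos.size(), zero);
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        unsigned a = indices[i], b = indices[i + 1], c = indices[i + 2];
        Vec3 n = Cross(Sub(pos[b], pos[a]), Sub(pos[c], pos[a]));
        normals[a].x += n.x; normals[a].y += n.y; normals[a].z += n.z;
        normals[b].x += n.x; normals[b].y += n.y; normals[b].z += n.z;
        normals[c].x += n.x; normals[c].y += n.y; normals[c].z += n.z;
    }
    for (size_t v = 0; v < normals.size(); ++v) {
        float len = std::sqrt(normals[v].x * normals[v].x +
                              normals[v].y * normals[v].y +
                              normals[v].z * normals[v].z);
        if (len > 1e-6f) { normals[v].x /= len; normals[v].y /= len; normals[v].z /= len; }
    }
    return normals;
}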

Related

Normal of point via its location on STL mesh model

Can someone tell me the best way to estimate the normal at a point on CAD STL geometry?
This is not exactly a question on code, but rather about efficiency and approach.
I have used an approach in which I compare the point whose normal needs to be estimated against all the triangles in the mesh, and check whether it lies inside a triangle using the barycentric coordinate test (if each barycentric coordinate lies between 0 and 1, the point lies inside). This post explains it:
https://math.stackexchange.com/questions/4322/check-whether-a-point-is-within-a-3d-triangle
Then I compute the normal of that triangle to get the point normal.
The problem with my approach is that, if I have some 1,000 points and the mesh has, say, 500 triangles, that would mean doing some 500 x 1,000 = 500,000 checks. This takes a lot of time.
Is there an efficient data structure or approach I could use, to pinpoint the right triangle? Or a library that could get the work done?
A relatively easy solution is to use a grid: decompose the space into a 3D array of voxels, and for every voxel keep a list of the triangles that interfere with it.
By interfere, I mean that there is a nonempty intersection between the voxel and the bounding box of the triangle. (When you know the bounding box, it is straightforward to tell which voxels it covers.)
When you want to test a point, find the voxel it belongs to and compare only against that voxel's list of triangles. You will achieve a speedup of roughly N/M, where N is the total number of triangles and M is the average number of triangles per voxel.
The voxel size should be chosen carefully: too small results in an overly large data structure; too large makes the method ineffective. If possible, adjust it so there are "a few" triangles per voxel. (Use the average triangle size, i.e. the square root of twice the triangle area, as a starting value.)
For better efficiency, you can compute the exact intersections between the triangles and the voxels, using a 3D polygon clipping algorithm (rather than a mere bounding box test), but this is more complex to implement.
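A minimal sketch of such a grid, using the bounding-box test described above (the names and the exact point-to-voxel mapping are assumptions):

#include <vector>
#include <algorithm>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 a, b, c; };

// Every voxel stores the indices of the triangles whose bounding box
// overlaps it, so a point query only scans one short list.
class TriangleGrid {
    Vec3 origin;    // minimum corner of the grid
    float cell;     // voxel edge length
    int nx, ny, nz; // voxels per axis
    std::vector<std::vector<int> > cells;

    int Clamp(int v, int n) const { return std::max(0, std::min(v, n - 1)); }
public:
    TriangleGrid(Vec3 o, float cellSize, int x, int y, int z)
        : origin(o), cell(cellSize), nx(x), ny(y), nz(z), cells(x * y * z) {}

    int CellX(float px) const { return Clamp((int)((px - origin.x) / cell), nx); }
    int CellY(float py) const { return Clamp((int)((py - origin.y) / cell), ny); }
    int CellZ(float pz) const { return Clamp((int)((pz - origin.z) / cell), nz); }
    int Index(int i, int j, int k) const { return i + nx * (j + ny * k); }

    // Insert a triangle into every voxel its bounding box overlaps.
    void Insert(int triIndex, const Triangle &t) {
        int x0 = CellX(std::min(t.a.x, std::min(t.b.x, t.c.x)));
        int x1 = CellX(std::max(t.a.x, std::max(t.b.x, t.c.x)));
        int y0 = CellY(std::min(t.a.y, std::min(t.b.y, t.c.y)));
        int y1 = CellY(std::max(t.a.y, std::max(t.b.y, t.c.y)));
        int z0 = CellZ(std::min(t.a.z, std::min(t.b.z, t.c.z)));
        int z1 = CellZ(std::max(t.a.z, std::max(t.b.z, t.c.z)));
        for (int k = z0; k <= z1; ++k)
            for (int j = y0; j <= y1; ++j)
                for (int i = x0; i <= x1; ++i)
                    cells[Index(i, j, k)].push_back(triIndex);
    }

    // Candidate triangles for a query point: just the list of its voxel.
    const std::vector<int>& Candidates(Vec3 p) const {
        return cells[Index(CellX(p.x), CellY(p.y), CellZ(p.z))];
    }
};

The barycentric test then runs only against Candidates(p) instead of all 500 triangles.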

Computing normals for squares

I've got a model that I've loaded from a JSON file (stored as tiles, each with lots of bools for height, slope, smooth, etc.). I've then computed face normals for all of its faces and copied them to each of their vertices. What I now want to do (and have been trying to do for days) is to smooth the vertex normals in the simplest way possible. What I'm trying to do is set each vertex normal to the normalized sum of its surrounding face normals. Now, my problem is this:
The two circled vertices should end up with perfectly mirrored normals. However, the one on the left has 2 light faces and 4 dark faces. The one on the right has 1 light face and 6 dark faces. As such, they both end up with completely different normals.
What I can't work out is how to do this properly. What faces should I be summing up? Or perhaps there is a completely different method I should be using? All of my attempts so far have come up with junk and / or consisted of hundreds of (almost certainly pointless) special cases.
Thanks for any advice, James
Edit: Actually, I just had a thought about what to try next. Would adding only a fraction of each triangle's normal, weighted by its angle at the vertex, work (if that makes sense)? I mean, for the left one, clockwise: x1/8, x1/8, x1/4, x1/8, x1/8, x1/4?
And then not normalize it?
That solution worked wonderfully. Final result:
Based on the image it looks like you might want to take the average of all unique normals of all adjacent faces. This avoids double counting faces with the same normal.
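For what it's worth, a rough sketch of the angle-weighted accumulation described in the edit above, assuming an indexed triangle mesh (the helper names are mine):

#include <vector>
#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };

static Vec3 Sub(Vec3 a, Vec3 b)   { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static Vec3 Cross(Vec3 a, Vec3 b) { Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; return r; }
static float Dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 Normalize(Vec3 a)     { float l = std::sqrt(Dot(a, a)); if (l > 1e-6f) { a.x /= l; a.y /= l; a.z /= l; } return a; }

// Each face contributes its normal weighted by the corner angle it forms at
// the vertex, so the result no longer depends on how many triangles a quad
// or fan happens to be split into.
std::vector<Vec3> AngleWeightedNormals(const std::vector<Vec3> &pos,
                                       const std::vector<unsigned> &idx)
{
    Vec3 zero = { 0, 0, 0 };
    std::vector<Vec3> n(pos.size(), zero);
    for (size_t t = 0; t + 2 < idx.size(); t += 3) {
        unsigned i[3] = { idx[t], idx[t + 1], idx[t + 2] };
        Vec3 faceN = Normalize(Cross(Sub(pos[i[1]], pos[i[0]]), Sub(pos[i[2]], pos[i[0]])));
        for (int c = 0; c < 3; ++c) {
            Vec3 e0 = Normalize(Sub(pos[i[(c + 1) % 3]], pos[i[c]]));
            Vec3 e1 = Normalize(Sub(pos[i[(c + 2) % 3]], pos[i[c]]));
            float angle = std::acos(std::max(-1.0f, std::min(1.0f, Dot(e0, e1))));
            n[i[c]].x += faceN.x * angle;
            n[i[c]].y += faceN.y * angle;
            n[i[c]].z += faceN.z * angle;
        }
    }
    for (size_t v = 0; v < n.size(); ++v)
        n[v] = Normalize(n[v]);
    return n;
}

Skipping the final normalization, as the edit suggests, only changes the length of the resulting normals, not their direction.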

Interpolate color between voxels

I have a 3D texture containing voxels and I am ray tracing; every time I hit a voxel I display its color. The result is nice, but you can clearly see the individual blocks separated from one another. I would like a smooth color transition from one voxel to the next, so I was thinking of doing interpolation.
My problem is that when I hit a voxel, I am not sure which neighbouring voxels to take the colors from, because I don't know whether the voxel is part of a wall parallel to some axis, a floor, or an isolated part of the scene. Ideally I would have to fetch, for every voxel, its 26 neighbouring voxels, but that can be quite expensive. Is there any fast, approximate solution for this?
PS: I noticed that in Minecraft there are smooth shadows that form when voxels are placed near each other; maybe that uses a technique that could be adapted for this purpose?
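One common way to do this kind of smoothing is trilinear interpolation over the eight voxels surrounding the sample point (if the data really lives in a 3D texture, enabling linear filtering gives you the same thing in hardware). A CPU-side sketch, where voxelColor(x, y, z) is a hypothetical accessor into the voxel data:

#include <cmath>

struct Color { float r, g, b; };

Color voxelColor(int x, int y, int z); // hypothetical accessor into the 3D texture / voxel array

static Color Lerp(Color a, Color b, float t) {
    Color c = { a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t, a.b + (b.b - a.b) * t };
    return c;
}

// Trilinear interpolation of the 8 voxels around the sample position
// (px, py, pz), given in voxel coordinates.
Color SampleSmooth(float px, float py, float pz)
{
    int x = (int)std::floor(px), y = (int)std::floor(py), z = (int)std::floor(pz);
    float fx = px - x, fy = py - y, fz = pz - z;

    Color c00 = Lerp(voxelColor(x, y,     z    ), voxelColor(x + 1, y,     z    ), fx);
    Color c10 = Lerp(voxelColor(x, y + 1, z    ), voxelColor(x + 1, y + 1, z    ), fx);
    Color c01 = Lerp(voxelColor(x, y,     z + 1), voxelColor(x + 1, y,     z + 1), fx);
    Color c11 = Lerp(voxelColor(x, y + 1, z + 1), voxelColor(x + 1, y + 1, z + 1), fx);

    return Lerp(Lerp(c00, c10, fy), Lerp(c01, c11, fy), fz);
}

This still touches 8 neighbours per hit rather than 26, and empty neighbours need some policy (for example, reuse the hit voxel's own color) to avoid darkening at surface borders.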

How can I deterministically detect the shader fragment location in its 2x2 pixel quad?

I've been trying to utilize the techniques in Eric Penner's "Shader Amortization using Pixel Quad Message Passing" from GPU Pro 2, Chapter VI.2. The basic idea is that modern GPUs process fragment shaders in 2x2 fragment quads, and you can use ddx() and ddy() to get the value of some_var at all four fragments, as long as the following hold:
Your GPU supports high-quality derivatives
You know which fragment you're processing (top-left, top-right, bottom-left, bottom-right)
This opens up a lot of opportunities for fragment shader optimization (like distributing texture fetches over a 2x2 pixel quad) that you'd need Compute Shaders to beat.
My problem is this:
I can't deterministically detect which fragment I'm processing. Ideally, each fragment block would start at even-numbered output pixel coords like (0, 0), (2, 0), ... (1024, 1024), ..., so you'd just need to check whether the output pixel x and y coords are even or odd to know which fragment you're currently processing. The method Penner uses in the book assumes this works...but it seems to be going wrong for me.
Unfortunately, my 2x2 fragment quads appear to be starting in nondeterministic places: I've seen them start at (even, even), (even, odd), and (odd, even). I can't remember if I've seen (odd, odd) or not, but anyway, the arrangement seems to depend on a myriad of factors I don't understand, including the output resolution and shader specifics. (I'm testing on an 8800 GTS, in case anyone's wondering.)
Does anyone know what might be causing this nondeterminism or have any documentation on it? I understand there's virtually no official standardization in this area, but I'm more interested in how things work in practice on modern desktop-level GPUs, and I'm hoping there's a way to get this technique to work. If no one knows how to reason about the even/odd start behavior, does anyone know any other way of determining the current fragment's relative location in its 2x2 quad?
Thanks :)
As it turns out, the premise of my question was mostly wrong:
The 2x2 fragment quads DO almost always start on even pixel numbers...as long as the output resolution is even-numbered.
If the output resolution is odd-numbered (a possibility with the underlying program I'm working with), things can get more complicated, for obvious reasons. I don't expect there's any uniformity here across drivers/GPU's/etc. either, but my current tests (which themselves may still be buggy) appear to demonstrate 2x2 pixel quads starting at an odd pixel along the dimension with odd resolution, at least when the odd dimension is horizontal.
All of this weirdness helped obscure my bigger issue: The code I used to detect the fragment's location in the pixel quad was buggy. I tested by setting the texture coordinates equal within a pixel quad (set to the pixel quad center)...or so I thought. However, I calculated the screen coordinates based on a full-screen quad where the uv mapping has the +v axis pointing downward. The screenspace origin starts at the bottom-left, because it's based on the top-right quadrant of Cartesian coordinates, and I accidentally forgot to invert the v-coordinate of the uv offset I used to find the pixel quad center. Many of my nondeterministic observations came from failing to check my assumptions while debugging and misinterpreting things as a result, particularly in combination with odd resolutions.
This was an embarrassing mistake I should have caught a lot sooner, but I figured I'd detail it as a warning to others to always double-check the direction of your vertical axis when you're dealing with opposite-facing coordinate frames. ;)
UPDATE:
I ran across a situation where 2x2 pixel quads started on even pixel numbers even when the resolution was odd. Thanks to the nondeterminism under odd resolutions, I had to work out another solution:
1) If you're deriving your screen pixel numbers from the uv coords of a fullscreen quad (for post-processing), the fragment location derived from this is only useful for arranging/placing shared samples between fragments, etc., not for the quad-pixel communication itself. You'll need to have screen pixel numbers with respect to the screenspace origin for that. You can derive these from vertex positions, or you can use ddx().x and ddy().y on the uv-based pixel numbers to find out their screen direction and mirror the fragment position in the appropriate direction from there.
2) Calculate the fragment location based on your screen pixel numbers (with respect to the true screenspace origin) and the assumption that 2x2 pixel quads start on even pixels. (If you used uv-based pixel numbers, now is the time to mirror things.)
3) Do a ddx().x and ddy().y on the fragment location, and if they're negative in either direction, you know the pixel quad starts at an odd pixel number in that direction...so mirror in that direction.
4) If you calculate two fragment positions, one based on a uv origin and one based on a screen origin, use the uv-based one for reasoning about uv-based sample placement, and use the screen-based one for actually obtaining the values of a variable at neighboring fragments.
5) Profit.
I'll post a link to my working MIT-licensed code once I release it on Github, along with usage examples (the speedup is unfortunately not what I expected, but whatever ;)). I'm just waiting to get done with a larger shader I'll be uploading along with it.

How to render a textured polygon on top of another?

Let's say I have two textured triangles.
I want to draw one triangle over the other, so that the top one is basically lying on top of the second one.
Now technically they are on the same plane, but they do not share the same "space" (they do not intersect), though visually it is tough to tell at a certain distance.
Basically, when these triangles are very close together (in parallel) I see texture "artifacts". I should ONLY see the triangle that is on top, but the triangle in the background tends to "bleed" through.
Is there a way to alleviate this side effect, like increasing the depth precision or something? Maybe even increase the tessellation of the triangles?
* Update *
I am using vertex and index buffers. This is using OpenGL ES on iPhone.
I don't know if this picture will help or make things worse, but here it is: two triangles very close to each other along the Z-axis (but not touching). (Note: the normal vectors of these triangles point straight towards you.)
You can increase the depth precision up to 32 bits per pixel. However, if the 2 triangles are coplanar, that likely won't fix the problem. If they aren't coplanar (it's really hard to tell from your description what you're talking about), then increasing the depth precision might help. If you're using FBOs for your drawing, simply create the depth texture with 32-bits per component by using GL_DEPTH_COMPONENT32 for the internal format. There are several examples here. If you're not using FBOs, please describe how you create your context (also what OS you're on - Windows, OS X, Linux?).
You could try changing the Depth Buffer function to something more appropriate...
glDepthFunc(GL_ALWAYS) - Essentially disables depth testing
glDepthFunc(GL_GEQUAL) - Overwrites when greater OR equal
If they are too close (assuming they are parallel, not on the same plane), you will get precision errors (like banding artifacts). Try adding a small offset to the top polygon using glPolygonOffset: http://www.opengl.org/sdk/docs/man/xhtml/glPolygonOffset.xml Check this simple tutorial: http://www.felixgers.de/teaching/jogl/polygonOffset.html
EDIT: Also try increasing precision as #user1118321 says.
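For reference, a minimal sketch of the polygon offset approach under OpenGL ES (the factor/units values are a starting guess and usually need tuning per scene; the draw calls are placeholders):

// Draw the background triangle normally.
drawBackgroundTriangle();              // placeholder for your own draw call

// Pull the top triangle slightly toward the viewer in depth-buffer space so
// it reliably wins the depth test against the nearly coplanar one behind it.
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(-1.0f, -1.0f);         // negative values move fragments closer
drawTopTriangle();                     // placeholder for your own draw call
glDisable(GL_POLYGON_OFFSET_FILL);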
What you are describing is called Z-Fighting (http://en.wikipedia.org/wiki/Z-fighting).
Sadly depth buffers only have limited precision, so if the difference in depth of two polygons is smaller than the precision of the depth buffer, you can't predict which polygon will pass the depth test and be drawn.
As others have said, you can increase the precision of the depth buffer so that polygons have to be closer to each other before z-fighting artifacts occur, or you can disable the depth test so that rendered polygons won't be blocked by anything previously drawn.