Computing normals for squares - C++

I've got a model that I've loaded from a JSON file (stored per tile, with lots of bools for height, slope, smooth, etc.). I've then computed face normals for all of its faces and copied them to each of their vertices. What I now want to do (and have been trying for days) is to smooth the vertex normals, in the simplest way possible. What I'm trying to do is set each vertex normal to a normalized sum of its surrounding face normals. Now, my problem is this:
The two circled vertices should end up with perfectly mirrored normals. However, the one on the left has 2 light faces and 4 dark faces. The one on the right has 1 light face and 6 dark faces. As such, they both end up with completely different normals.
What I can't work out is how to do this properly. Which faces should I be summing up? Or perhaps there is a completely different method I should be using? All of my attempts so far have produced junk and/or consisted of hundreds of (almost certainly pointless) special cases.
Thanks for any advice, James
Edit: Actually, I just had a thought about what to try next. Would adding only a percentage of each triangle's normal, based on its angle at the vertex, work (if that makes sense)? I mean, for the left one, clockwise: x1/8, x1/8, x1/4, x1/8, x1/8, x1/4?
And then not normalize it?
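The angle-weighted idea above can be sketched roughly like this. This is a minimal sketch, not the asker's actual code: a tiny `Vec3` type and an `accumulate` helper (both names made up here) that adds each adjacent triangle's face normal into a running sum, weighted by the angle that triangle subtends at the vertex. Whether to normalize the final sum is the reader's choice, as discussed above.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 a) {
    float l = std::sqrt(dot(a, a));
    return {a.x / l, a.y / l, a.z / l};
}

// Angle of the corner at vertex v in triangle (v, a, b).
static float cornerAngle(Vec3 v, Vec3 a, Vec3 b) {
    Vec3 e0 = normalize(sub(a, v));
    Vec3 e1 = normalize(sub(b, v));
    return std::acos(dot(e0, e1));
}

// Add one adjacent triangle's contribution to the running normal sum
// for vertex v: the face normal weighted by the corner angle at v.
static Vec3 accumulate(Vec3 sum, Vec3 v, Vec3 a, Vec3 b) {
    Vec3 faceNormal = normalize(cross(sub(a, v), sub(b, v)));
    float w = cornerAngle(v, a, b);
    return {sum.x + w * faceNormal.x, sum.y + w * faceNormal.y, sum.z + w * faceNormal.z};
}
```

With this weighting, the two circled vertices get mirrored results even though one touches more faces, because several small corners on one side add up to the same total angle as fewer large corners on the other.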
That solution worked wonderfully. Final result:

Based on the image it looks like you might want to take the average of all unique normals of all adjacent faces. This avoids double counting faces with the same normal.
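A minimal sketch of that unique-normal averaging (the epsilon test for "same direction" and the function name are assumptions, not part of the answer):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 a) {
    float l = std::sqrt(dot(a, a));
    return {a.x / l, a.y / l, a.z / l};
}

// Average the adjacent face normals, counting each distinct direction
// only once so coplanar faces don't get double counted.
static Vec3 averageUnique(const std::vector<Vec3>& faceNormals, float eps = 1e-4f) {
    std::vector<Vec3> unique;
    for (Vec3 n : faceNormals) {
        bool seen = false;
        for (Vec3 u : unique)
            if (dot(n, u) > 1.0f - eps) { seen = true; break; }  // nearly identical
        if (!seen) unique.push_back(n);
    }
    Vec3 sum{0, 0, 0};
    for (Vec3 u : unique) { sum.x += u.x; sum.y += u.y; sum.z += u.z; }
    return normalize(sum);
}
```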

Related

Why do I have this strange seam between polygons?

I have two simple cube-shaped primitives that are pushed together. Where the polygons connect, there is a razor-thin seam where the polygon edges match up (see pic, red arrow).
Each face owns its own vertices; they are not shared via indexing. I have confirmed with debugging that the vertices at each end of the seam that should be occupying the same position ARE occupying the same position/normal/UV. The winding of the faces that join together is the same. I have even adjusted the code to MANUALLY COPY the positions, normals, and UVs of the vertices in question, just in case there was some floating-point error too small to display.
Can anyone explain what's going on here? Is there a way to fix it without literally joining those vertices into a single vertex and indexing it?
I've included a wireframe pic in the screenshot as well. I can tell from the wireframe that the two lines, though overlapping, are a bit off. But with all coordinates at the same value, what is it??

Algorithm to correct wrong normal direction on 3D model

I am new to 3D math, but I am facing a problem: when I import a model into a CAD program (single-sided lighting / OpenSceneGraph), a lot of faces from the meshes are black!
When I flip the normal manually for each of those faces, I get the correct material and texture.
My question is: if I know the vertex table and normal table for every mesh in the model, can I write an algorithm that will correct all the wrong normal directions automatically? I mean, it must detect the wrong normals without any help from the user and correct them.
I thought about an idea that needs image processing. I know nothing about image processing, so perhaps you can help with where I should start to achieve this:
First, I will assume every black face has a wrong normal.
Second, I will direct a light from the camera to the mesh, and if all of its pixels are black, flip the normal.
And do this for all meshes.
I think I will have a speed issue, but that's all I can think of.
Thanks.
The wrong red plane and the black one are in the same model, and both of them should be red, but as I mentioned, the black one's normals are flipped.
OSG has a visitor class in osgUtil that can calculate all the vertex normals for you, called SmoothingVisitor. You just need to point it at your model after you've loaded it, e.g.:
// recalculate normals with the smoothing visitor
osgUtil::SmoothingVisitor sv;
loadedModel->accept(sv);
This will clobber existing vertex normals, or create new ones if they don't exist. Just looking quickly at it, it looks like it assumes your geometry is triangles, not quads as you've drawn, but I could be wrong.
I suspect something is up with the indexing; if not, then it's probably better to ignore the normals table entirely. I'm not really sure about the details, but I assume it's one normal per vertex, so approaching it would involve calculating polygon normals, which you can do with:
normalize(cross_product(vert0 - vert1, vert0 - vert2));
After that, averaging the normals of the polygons sharing the vertex should do.
Calculating the dot product of the freshly calculated normal and the original normal would reveal whether the original normal is way off.
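That dot-product check can be sketched like this (a minimal sketch, not tied to any particular file format; here the face normal is recomputed from the winding as (v1 - v0) × (v2 - v0), which is the same idea as the formula above):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 a) {
    float l = std::sqrt(dot(a, a));
    return {a.x / l, a.y / l, a.z / l};
}

// Recompute the face normal from the triangle's winding, then flip the
// stored normal if it points against the recomputed one (dot < 0).
static Vec3 fixNormal(Vec3 stored, Vec3 v0, Vec3 v1, Vec3 v2) {
    Vec3 fresh = normalize(cross(sub(v1, v0), sub(v2, v0)));
    if (dot(stored, fresh) < 0.0f)
        return {-stored.x, -stored.y, -stored.z};
    return stored;
}
```

Note this only makes the stored normals consistent with the winding; if the winding itself is inconsistent across faces, that has to be fixed first.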
Anyway, StackOverflow isn't really the right place for this problem; there are other Stack Exchange sites that specialize in exactly this kind of thing. Also, there is too little information about the issue, so helping isn't easy either.

Marching Cubes, voxels, need a bit of suggestions

I'm trying to construct a proper destructible terrain, just for research purposes.
Well, everything went fine, but the resolution isn't satisfying me.
I have seen a lot of examples of how people implement the MC algorithm, but most of them,
as far as I understand, use functions to triangulate the final mesh, which is not
appropriate for me.
I will try to explain briefly how I construct my terrain, and maybe one of you
will give me a suggestion on how to improve it, or how to increase the resolution of the final terrain.
1) Precalculating MC triangles.
I run a simple loop through the MC lookup tables for each case (0-255) and calculate the triangles
in the range [0,0,0] - [1,1,1].
No problems here.
2) Terrain
I have terrain class, which stores my voxels.
In general, it looks like this:
int size = 32;//Size of each axis.
unsigned char *voxels = new unsigned char[(size * size * size)/8];
So, each axis is 32 units long, but I store the voxel information per bit,
meaning that if a bit is turned on (1), there is something there and something should be drawn.
I have a couple of functions:
TurnOn(x,y,z);
TurnOff(x,y,z);
to turn a voxel location on or off (helps to work with the bits).
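A sketch of what that bit-per-voxel storage and those helpers might look like (the flat index order `x + y*size + z*size*size` and the `IsOn` name are assumptions, not from the question):

```cpp
#include <cassert>
#include <cstring>

// One bit per voxel: 32*32*32 voxels packed into 4096 bytes,
// matching the allocation shown in the question.
struct VoxelGrid {
    static const int size = 32;  // size of each axis
    unsigned char bits[(size * size * size) / 8];

    VoxelGrid() { std::memset(bits, 0, sizeof(bits)); }

    // Assumed flat index order: x varies fastest, then y, then z.
    static int index(int x, int y, int z) { return x + y * size + z * size * size; }

    void TurnOn(int x, int y, int z) {
        int i = index(x, y, z);
        bits[i >> 3] |= (1u << (i & 7));   // set the bit
    }
    void TurnOff(int x, int y, int z) {
        int i = index(x, y, z);
        bits[i >> 3] &= ~(1u << (i & 7));  // clear the bit
    }
    bool IsOn(int x, int y, int z) const {
        int i = index(x, y, z);
        return (bits[i >> 3] >> (i & 7)) & 1;
    }
};
```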
Once the terrain is allocated, I run Perlin noise and turn bits on or off.
My terrain class has one more function, to extract the Marching Cubes case number (0-255) from an x,y,z location:
unsigned char GetCaseNumber(x,y,z);
by determining whether the neighbours of that voxel are turned on or off.
No problems here.
3) Rendering part
I loop over each axis, extract the case number, get the precalculated triangles for that case,
translate them to the x,y,z coordinates, and draw those triangles.
No problems here.
So result looks like this:
But as you can see, at any single location the resolution is not comparable to, for example, this:
(source: angelfire.com)
I have seen in MC examples that people use something called "iso values", which I don't understand.
Any suggestions on how to improve my work, or on what iso values are and how to implement them in a uniform grid, would be truly lovely.
The problem is that your voxels are a binary mask (just on or off).
This is great for the "default" marching cubes algorithm, but it does mean you get sharp edges in your mesh.
The smooth example is probably generated from smooth scalar data.
Imagine that your data varies smoothly between 0.0 and 1.0, and you set your threshold to 0.5. Now, after you detect which configuration a given cube has, you look at all the vertices generated.
Say you have a vertex on an edge between two voxels, one with value 0.4 and the other 0.7. Then you move the vertex to the position where you would get exactly 0.5 (the threshold) when interpolating between 0.4 and 0.7, so it will be closer to the 0.4 corner.
This way, each vertex is exactly on the interpolated iso surface and you will generate much smoother triangles.
But it does require that your input voxels are scalar (and vary smoothly). If your voxels are bi-level (all either 0 or 1), this will produce the same triangles as you got earlier.
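The interpolation step described above can be sketched in a few lines (the helper names are made up for illustration):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Parameter t in [0,1] along the edge where the interpolated density
// crosses the iso threshold.
static float isoT(float v0, float v1, float iso) {
    return (iso - v0) / (v1 - v0);
}

// Place the marching cubes vertex on the edge between cube corners
// p0 and p1 so that it lies exactly on the iso surface.
static Vec3 isoVertex(Vec3 p0, Vec3 p1, float v0, float v1, float iso) {
    float t = isoT(v0, v1, iso);
    return {p0.x + t * (p1.x - p0.x),
            p0.y + t * (p1.y - p0.y),
            p0.z + t * (p1.z - p0.z)};
}
```

With the example values from above (0.4 and 0.7, threshold 0.5), t comes out to 1/3, so the vertex sits a third of the way along the edge, closer to the 0.4 corner. With binary voxels every edge gives t = 0.5, which is exactly the midpoint placement that produces the blocky result.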
Another idea (not the answer to your question but perhaps useful):
To just get smoother rendering, without mathematical correctness, it could be worthwhile to compute an average normal vector for each vertex and use that normal for every triangle connecting to it. This will hide the sharp edges.

OpenGL lighting question?

Greetings all,
As seen in the image, I draw lots of contours using GL_LINE_STRIP.
But the contours look like a mess, and I'm wondering how I can make this look good (to see the depth, etc.).
I must render contours, so I have to stick with GL_LINE_STRIP. I am wondering how I can enable lighting for this?
Thanks in advance
Original image
http://oi53.tinypic.com/287je40.jpg
Lighting contours isn't going to do much good, but you could use fog or manually set the line colors based on distance (or even altitude) to give a depth effect.
Updated:
umanga, at first I thought lighting wouldn't work because lighting is based on surface normal vectors - and you have no surfaces. However, @roe pointed out that normal vectors are actually per-vertex in OpenGL, and as such, any polyline can have normals. So that would be an option.
It's not entirely clear what the normal should be for a 3D line, as @Julien said. The question is how to define normals for the contour lines such that the resulting lighting makes visual sense and helps clarify the depth.
If all the vertices in each contour are coplanar (e.g. in the XY plane), you could set the 3D normal to be the 2D normal, with 0 as the Z coordinate. The resulting lighting would give a visual sense of shape, though maybe not of depth.
If you know the slope of the surface (assuming there is a surface) at each point along the line, you could use the surface normal and do a better job of showing depth; this is essentially like a hill-shading applied only to the contour lines. The question then is why not display the whole surface?
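A minimal sketch of the coplanar case described above: take the 2D perpendicular of the local line direction (here estimated from the two neighbouring points) and use it as a 3D normal with Z = 0. Which of the two perpendiculars to pick is an arbitrary choice in this sketch:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Per-vertex normal for a contour lying in the XY plane: the 2D
// perpendicular of the direction from `prev` to `next`, with Z = 0.
static Vec3 contourNormal(Vec3 prev, Vec3 next) {
    float dx = next.x - prev.x;
    float dy = next.y - prev.y;
    float len = std::sqrt(dx * dx + dy * dy);
    return {dy / len, -dx / len, 0.0f};  // direction rotated -90 degrees
}
```

Each resulting normal would then be fed to glNormal before the corresponding vertex in the GL_LINE_STRIP.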
End of update
+1 to Ben's suggestion of setting the line colors based on altitude (is it topographic contours?) or based on distance from viewer. You could also fill the polygon surrounded by each contour with a similar color, as in http://en.wikipedia.org/wiki/File:IsraelCVFRtopography.jpg
Another way to make the lines clearer would be to have fewer of them... can you adjust the density of the contours? E.g. one contour line per 5ft height difference instead of per 1ft, or whatever the units are. Depending on what it is you're drawing contours of.
Other techniques for elucidating depth include stereoscopy, and rotating the image in 3D while the viewer is watching.
If you're looking for shading, then you would normally convert the contours to a solid. The usual way to do that is to build a mesh by setting up 4 corner points at zero height at (or beyond) the bounds, then dropping the contours into the mesh and getting the mesh to triangulate the coordinates in. Once done, you have a triangulated solid hull for which you can find the normals and smooth them over adjacent faces to create smooth terrain.
To triangulate the mesh one normally uses the Delaunay algorithm, which is a bit of a beast, but there exist libraries for doing it. The best ones I know of are based on the Guibas and Stolfi papers, since they're pretty optimal.
To generate the normals, you do a simple cross product, ensure the facing is correct, and manually renormalize them before feeding them into glNormal.
In the old days you used to make a display list out of the result, but the newer way is to make a vertex array. If you want to be extra flash, you can look for coincident planar faces and optimize the mesh down for faster redraw, but that's a bit of a black art - good for games, not so good for CAD.
(thx for bonus last time)

OpenGL texturing via vertex alphas, how to avoid following diagonal lines?

http://img136.imageshack.us/img136/3508/texturefailz.png
This is my current program. I know it's terribly ugly, I found two random textures online ('lava' and 'paper') which don't even seem to tile. That's not the problem at the moment.
I'm trying to figure out the first steps of an RPG. This is a top-down screenshot of a 10x10 heightmap (currently set to all 0s, so it's just a plane), and I texture it by making one pass per texture per quad, and each vertex has alpha values for each texture so that they blend with OpenGL.
The problem is, notice how the textures trend along diagonals. Even though I'm drawing with GL_QUADS, this is presumably because the quads are turned into sets of two triangles, and then the alpha values at the corners have more weight along the hypotenuses... But I wasn't expecting that to matter at all. By drawing quads, I was hoping that even though they were split into triangles at some low level, the vertex alphas would cause the texture to radiate in a circular outward gradient from the vertices.
How can I fix this to make it look better? Do I need to scrap this and try a whole different approach? Is there a different approach for something like this? I'd love to hear alternatives as well.
Feel free to ask questions and I'll be here refreshing until I get a valid answer, so I'll comment as fast as I can.
Thanks!!
EDIT:
Here is the kind of thing I'd like to achieve. No, I'm obviously not one of the billions of noobs out there "trying to make an MMORPG"; I'm using it as an example because it's very much like what I want:
http://img300.imageshack.us/img300/5725/runescapehowdotheytile.png
How do you think this is done? Part of it must be vertex alphas like I'm doing, because of the smooth gradients... But maybe they have a list of different triangle configurations within a tile, and each tile stores which configuration it uses? So, for example, configuration 1 is a triangle in the top-left and one in the bottom-right, 2 is the top-right and bottom-left, 3 is a quad on the top and a quad on the bottom, etc.? Can you think of any other way I'm missing? Or if you've got it all figured out, then please share how they do it!
The diagonal artefacts are caused by having all of your quads split into triangles along the same diagonal. You define points [0,1,2,3] for your quad. Each quad is split into triangles [0,1,2] and [1,2,3]. Try drawing with GL_TRIANGLES and alternating your choice of diagonal. There are probably more efficient ways of doing this using GL_TRIANGLE_STRIP or GL_QUAD_STRIP.
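The alternating-diagonal index generation suggested above could be sketched like this (the function name, checkerboard test, and corner numbering into a (w+1) x (h+1) vertex grid are assumptions for illustration):

```cpp
#include <cassert>
#include <vector>

// Emit two triangles (six indices) for grid cell (cx, cy), flipping the
// shared diagonal on a checkerboard pattern. `stride` is the number of
// vertices per row, i.e. grid width + 1. Both windings come out CCW.
static void cellTriangles(int cx, int cy, int stride, std::vector<int>& indices) {
    int i0 = cy * stride + cx;  // bottom-left corner
    int i1 = i0 + 1;            // bottom-right
    int i2 = i0 + stride;       // top-left
    int i3 = i2 + 1;            // top-right
    if ((cx + cy) % 2 == 0) {
        // diagonal from corner 0 to corner 3
        indices.insert(indices.end(), {i0, i1, i3, i0, i3, i2});
    } else {
        // diagonal from corner 1 to corner 2
        indices.insert(indices.end(), {i0, i1, i2, i1, i3, i2});
    }
}
```

Drawn with GL_TRIANGLES, adjacent cells then split along opposite diagonals, which breaks up the long runs of hypotenuses that cause the artefact.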
I think you are doing it right, but you should increase the resolution of your heightmap a lot to get finer tessellation!
For example, look at this heightmap renderer:
mdterrain
It shows the same artifacts at low resolution but gets better if you increase the iterations.
I've never done this myself, but I've read several guides (which I can't find right now), and it seems pretty straightforward; it can even be optimized using shaders.
Create a master texture to control the mixing of 4 sub-textures. Use the r,g,b,a components of the master texture as a percentage mix of each sub-texture (lava, paper, etc.). You can easily paint a master texture using Paint.NET, Photoshop, or GIMP - just paint into each color channel. You can compute the resulting texture beforehand using all 5 textures, OR you can calculate the result on the fly with a fragment shader. I don't have a good example of either, but I think you can figure it out given how far you've come.
The end result will be "pixel-perfect" blending (depending on the texture resolution and filtering), and it will avoid the vertex blending issues.
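The per-pixel mixing that the master texture drives boils down to a weighted average. A minimal sketch of that math on the CPU (each "texel" is a single float here to keep it short; a real implementation would blend RGB channels, likely in a fragment shader):

```cpp
#include <cassert>
#include <cmath>

// Blend four sub-texture samples using the RGBA channels of a master
// ("splat map") texel as per-layer weights. The weights are normalised
// so they always sum to 1, as implied by "percentage mix" above.
static float splatBlend(const float layers[4], const float weights[4]) {
    float sumW = weights[0] + weights[1] + weights[2] + weights[3];
    float out = 0.0f;
    for (int i = 0; i < 4; ++i)
        out += layers[i] * (weights[i] / sumW);
    return out;
}
```

Because the weights come from a filtered texture lookup rather than from vertex colours, the blend varies per pixel and is independent of how the quads are triangulated, which is why this approach avoids the diagonal artefacts.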