Recreating Blender's Edge Split Algorithm - opengl

I'm working on an OpenGL project. I use per-vertex normals for my lighting calculations, which causes the lighting to be automatically smoothed (I believe this is called Gouraud shading).
I'm beginning to work on some low-poly designs using hardcoded models, but they look wrong because of this automatic smoothing.
Blender has a mesh edge-split option that does exactly what I'm looking for: it turns a smooth-shaded model into one with hard, faceted edges (before/after screenshots omitted).
Instead of having to rewrite my render system to allow for both per-vertex normal smooth lighting and per-face normal hard lighting, I was wondering if anyone knows how Blender's edge-split algorithm works so I can recreate it in my hardcoded models.
Using Blender, I've compared normally exported .obj files with their edge-split counterparts, and I can't fully work out what the difference is.
Many thanks.

I don't have enough points for a comment, so here goes an answer.
In general, I agree with lfgtm: if each vertex has one normal, but the vertex is shared (reused, i.e. indexed) by several triangles, then you cannot achieve a 'flat' look or split edges, because the extra normals are missing. You would have to duplicate the vertex.
But since you mention OBJ export, have you tried ticking the "Smooth groups" option in the OBJ export options:
This will create smoothing groups according to the edges you have marked sharp (In edit mode: Mesh -> Edges -> Mark sharp). If your engine supports these smoothing groups in the OBJ file, you will keep this sharp edge effect. You can preview these sharp edges in Blender using the Edge Split modifier (tick the "Sharp Edges" option):

Blender's Edge Split modifier achieves its result by breaking the mesh apart: if you apply the modifier, you get a mesh whose faces are no longer connected.
What you want to look into is the difference between smooth and flat shading.
If you google "opengl flat shading" you will find several items like this one suggesting the use of glShadeModel(GL_FLAT);

That would be Gouraud shading. There's no geometry processing going on here by the looks of things; it's simply a different shading model.
In OpenGL, for each triangle, you need to draw all three vertices with the same normal. This normal is the (normalized) cross product of two of the triangle's edges. You will then see a mesh with the same flat shading as in your image.
If each vertex has its own normal, and these normals are not all equal within the triangle, OpenGL will interpolate the difference across the surface of the face, producing a much smoother transition of the shading from vertex to vertex. This is smooth (Gouraud) shading.
So in summary, you can achieve this by changing how you draw the normals in relation to the triangles.

Related

OpenGL - texturing mapping 3D object

I have a model of a skull loaded from an .obj file, based on this tutorial. While I understand texture mapping for a cube (make a triangle on the texture in the [0,1] range, pick one of the six sides, pick one of the two triangles on that side, and map it to your texture triangle), I can't think of any workable solution for texture mapping my skull. It has a few thousand triangles, and mapping them manually is clearly the wrong approach.
Is there any solution to this problem? I'd appreciate any piece of code, since it may tell me more than just a description of the solution.
You can generate your UV coordinates automatically, but this will probably produce bad-looking output except for very simple textures.
For detailed textures that have eyes, ears, etc., you need to create your UV coordinates by hand in a 3D modeling tool such as Blender, 3DS Max, etc. There are plenty of tutorials all over the internet on how to do that. (https://www.youtube.com/watch?v=eCGGe4jLo3M)

Working with .off in opengl with normals

How can I assign the right normals to the object? Is there a way to turn an .off file into an object that contains normals?
OFF files usually don't support per-face or per-vertex attributes and thus carry no normals.
The simplest thing you can do is simply calculate the normals yourself using one of the well-known, easy-to-implement face-normal algorithms. Here are a few examples.
You can also look at doing normals via a texture (aka bump mapping). You will basically need to generate a normal map of your model in a 3D program, use some kind of per-pixel shading, and bind your shaders to use the normal texture.
The advantage of doing it that way is that you can get higher detail without increasing the polygon count. For example, you can put a grainy, bumpy stone pattern on a single flat polygon.
Also if you generate your normals procedurally you can't mix flat and smooth shading.

OpenGL/GLSL varying vectors: How to avoid starburst around vertices?

In OpenGL 2.1, I'm passing a position and normal vector to my vertex shader. The vertex shader then sets a varying to the normal vector, so in theory it's linearly interpolating the normals across each triangle. (Which I understand to be the foundation of Phong shading.)
In the fragment shader, I use the normal with Lambert's law to calculate the diffuse reflection. This works as expected, except that the interpolation between vertices looks funny. Specifically, I'm seeing a starburst effect, wherein there are noticeable "hot spots" along the edges between vertices.
Here's an example, not from my own rendering but demonstrating the exact same effect (see the gold sphere partway down the page):
http://pages.cpsc.ucalgary.ca/~slongay/pmwiki-2.2.1/pmwiki.php?n=CPSC453W11.Lab12
Wikipedia says this is a problem with Gouraud shading. But as I understand it, by interpolating the normals and running my lighting calculation per fragment, I'm using the Phong model, not Gouraud. Is that right?
If I were to use a much finer mesh, I presume that these starbursts would be much less noticeable. But is adding more triangles the only way to solve this problem? I would think there would be a way to get smooth interpolation without the starburst effect. (I've certainly seen perfectly smooth shading on rough meshes elsewhere, such as in 3d Studio Max. But maybe they're doing something more sophisticated than just interpolating normals.)
It is not the exact same effect. What you are seeing is one of two things:
1. The result of not normalizing the normals before using them in your fragment shader.
2. An optical illusion created by the collision of linear gradients across the edges of triangles. Really.
The "Gradient Matters" section at the bottom of this page (note: in the interest of full disclosure, that's my tutorial) explains the phenomenon in detail. Simple Lambert diffuse reflectance using interpolated normals effectively creates a more-or-less linear light across a triangle. A triangle with a different set of normals will have a different gradient. It will be C0 continuous (the colors along the edges are the same), but not C1 continuous (the colors along the two gradients change at different rates).
Human vision picks up on gradient differences like these and makes them stand out. Thus, we see them as hard-edges when in fact they are not.
The only real solution here is to either tessellate the mesh further or use normal maps created from a finer version of the mesh instead of interpolated normals.
You don't show your code, so it's impossible to tell, but the most likely problem is unnormalized normals in your fragment shader. The normals calculated in your vertex shader are interpolated, which results in vectors that are not unit length, so you need to renormalize them in the fragment shader before calculating your fragment lighting.

OpenGL texturing via vertex alphas, how to avoid following diagonal lines?

http://img136.imageshack.us/img136/3508/texturefailz.png
This is my current program. I know it's terribly ugly, I found two random textures online ('lava' and 'paper') which don't even seem to tile. That's not the problem at the moment.
I'm trying to figure out the first steps of an RPG. This is a top-down screenshot of a 10x10 heightmap (currently set to all 0s, so it's just a plane), and I texture it by making one pass per texture per quad, and each vertex has alpha values for each texture so that they blend with OpenGL.
The problem is that the textures trend along diagonals. Even though I'm drawing with GL_QUADS, this is presumably because the quads are turned into pairs of triangles, and the alpha values at the corners then have more weight along the hypotenuses. But I wasn't expecting that to matter at all: by drawing quads, I was hoping that even though they were split into triangles at some low level, the vertex alphas would make each texture radiate outward from its vertex in a circular gradient.
How can I fix this to make it look better? Do I need to scrap this and try a whole different approach? Is there a different approach for something like this? I'd love to hear alternatives as well.
Feel free to ask questions and I'll be here refreshing until I get a valid answer, so I'll comment as fast as I can.
Thanks!!
EDIT:
Here is the kind of thing I'd like to achieve. No, I'm obviously not one of the billions of noobs out there "trying to make an MMORPG"; I'm using it as an example because it's very much like what I want:
http://img300.imageshack.us/img300/5725/runescapehowdotheytile.png
How do you think this is done? Part of it must be vertex alphas like I'm doing, because of the smooth gradients... But maybe they have a list of different triangle configurations within a tile, and each tile stores which configuration it uses? So, for example, configuration 1 is a triangle in the top-left and one in the bottom-right, 2 is the top-right and bottom-left, 3 is a quad on the top and a quad on the bottom, etc.? Can you think of any other way I'm missing, or if you've got it all figured out, then please share how they do it!
The diagonal artefacts are caused by all of your quads being split into triangles along the same diagonal. You define points [0,1,2,3] for your quad, and each quad is split into triangles [0,1,2] and [0,2,3]. Try drawing with GL_TRIANGLES and alternating your choice of diagonal. There are probably more efficient ways of doing this using GL_TRIANGLE_STRIP or GL_QUAD_STRIP.
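A sketch of that index generation for a grid of w×h cells over (w+1)×(h+1) vertices, flipping the split diagonal on alternating cells so the artefacts don't line up (function name is my own; winding order may need flipping depending on which way your grid's Y axis points):

```c
#include <stddef.h>

/* Fill out with 6 indices per cell (two triangles), alternating the
 * diagonal in a checkerboard pattern. out must hold w*h*6 entries.
 * Returns the number of indices written. */
size_t make_grid_indices(unsigned w, unsigned h, unsigned *out)
{
    size_t n = 0;
    for (unsigned y = 0; y < h; ++y) {
        for (unsigned x = 0; x < w; ++x) {
            unsigned i0 = y*(w+1) + x,  i1 = i0 + 1;       /* top edge    */
            unsigned i2 = i0 + (w+1),   i3 = i2 + 1;       /* bottom edge */
            if ((x + y) % 2 == 0) {     /* diagonal i0-i3 */
                out[n++] = i0; out[n++] = i2; out[n++] = i3;
                out[n++] = i0; out[n++] = i3; out[n++] = i1;
            } else {                    /* diagonal i1-i2 */
                out[n++] = i0; out[n++] = i2; out[n++] = i1;
                out[n++] = i1; out[n++] = i2; out[n++] = i3;
            }
        }
    }
    return n;
}
```

Draw the result with glDrawElements(GL_TRIANGLES, ...) instead of GL_QUADS.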
I think you are doing it right, but you should increase the resolution of your heightmap a lot to get finer tessellation!
For example, look at this heightmap renderer:
mdterrain
It shows the same artifacts at low resolution but gets better as you increase the iterations.
I've never done this myself, but I've read several guides (which I can't find right now) and it seems pretty straightforward; it can even be optimized using shaders.
Create a master texture to control the mixing of four sub-textures. Use the r, g, b, a components of the master texture as the percentage mix of each sub-texture (lava, paper, etc.). You can easily paint a master texture using Paint.NET, Photoshop, or GIMP by painting into each color channel. You can either compute the resulting texture beforehand using all five textures, or calculate the result on the fly with a fragment shader. I don't have a good example of either, but I think you can figure it out given how far you've come.
The end result will be "pixel-perfect" blending (depending on the textures' resolution and filtering) and will avoid the vertex blending issues.

Techniques to smooth face edges in OpenGL

When I light my human model in OpenGL using face normals, it is very apparent where each face is on the model. The lighting becomes considerably smoother using vertex normals, but the faceting is still noticeable. Is there any technique to smooth organic models without adding additional vertices (e.g. subdivision)?
If you are seeing individual faces with per-vertex normals, you might have forgotten to enable smooth shading:
glShadeModel(GL_SMOOTH);
If that doesn't help, make sure all vertices with the same position also have the same normal; Blender should do this automatically, however. It's also possible that your model simply has too few polygons. If that's the case, you either have to use more polygons or do what basszero said: create a normal map and use it together with a GLSL shader.
If you've got access to a higher-resolution model/mesh, or perhaps a decent 3D modeling tool that can create a high-res model via subdivision or other fancy methods...
GLSL per-pixel-lighting + normal maps
UPDATE:
http://www.blender.org/development/release-logs/blender-246/render-baking/
Yes, there are many different techniques. One is to first calculate a normal for each face (which you are probably already using for the vertices in that face) and then use these face normals to calculate new, averaged normals for each vertex.
Here's one where you calculate the average of five adjacent triangles.
Cheers!