Techniques to smooth face edges in OpenGL

When I light my human model in OpenGL using face normals, it is very apparent where each face is on the model. The lighting becomes considerably smoother using vertex normals, but the faceting is still noticeable. Is there any technique available to smooth organic models without adding additional vertices (e.g. subdivision)?

If you are seeing individual faces even with per-vertex normals, you might have forgotten to enable smooth shading:
glShadeModel(GL_SMOOTH);
If that's not helping, make sure all vertices with the same position also have the same normal. Blender should do this automatically, however. It's also possible that your model simply has too few polygons. If that's the case, you either have to use more polygons or do what basszero said: create a normal map and use it together with a GLSL shader.
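If your exporter did not take care of that, here is a minimal sketch of enforcing "same position, same normal" on an existing mesh, assuming parallel position/normal arrays (the struct and function names are just illustrative):

    #include <cmath>
    #include <map>
    #include <tuple>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Average the normals of all vertices that share a position, then write the
    // averaged (renormalized) normal back to each of them. Positions are compared
    // exactly here; a real asset pipeline would usually quantize or use an epsilon.
    void WeldNormals(const std::vector<Vec3>& positions, std::vector<Vec3>& normals)
    {
        auto key = [](const Vec3& p) { return std::make_tuple(p.x, p.y, p.z); };
        std::map<std::tuple<float, float, float>, Vec3> sum;
        for (size_t i = 0; i < positions.size(); ++i) {
            Vec3& s = sum[key(positions[i])];   // value-initialized to {0,0,0} on first use
            s.x += normals[i].x; s.y += normals[i].y; s.z += normals[i].z;
        }
        for (size_t i = 0; i < positions.size(); ++i) {
            Vec3 s = sum[key(positions[i])];
            float len = std::sqrt(s.x * s.x + s.y * s.y + s.z * s.z);
            if (len > 0.0f) normals[i] = { s.x / len, s.y / len, s.z / len };
        }
    }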

If you've got access to a higher-resolution model/mesh, or perhaps a decent 3D modeling tool that can create a high-res model via subdivision or other fancy methods, the usual approach is:
GLSL per-pixel lighting + normal maps
UPDATE:
http://www.blender.org/development/release-logs/blender-246/render-baking/

Yes, there are many different techniques. One is to first calculate a normal for each face (which you are probably already using for the vertices of that face) and then use these face normals to calculate new normals for each vertex. A common variant is to set each vertex normal to the average of the normals of its adjacent triangles.
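A minimal sketch of that averaging for an indexed triangle mesh (the struct and function names are illustrative):

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // For every vertex, accumulate the (unnormalized) face normals of all triangles
    // that use it, then normalize the sum. Leaving the cross products unnormalized
    // weights larger faces more heavily, which usually looks right.
    std::vector<Vec3> AverageVertexNormals(const std::vector<Vec3>& pos,
                                           const std::vector<unsigned>& indices)
    {
        std::vector<Vec3> normals(pos.size(), Vec3{0.0f, 0.0f, 0.0f});
        for (size_t i = 0; i + 2 < indices.size(); i += 3) {
            unsigned tri[3] = { indices[i], indices[i + 1], indices[i + 2] };
            Vec3 e1 = { pos[tri[1]].x - pos[tri[0]].x, pos[tri[1]].y - pos[tri[0]].y, pos[tri[1]].z - pos[tri[0]].z };
            Vec3 e2 = { pos[tri[2]].x - pos[tri[0]].x, pos[tri[2]].y - pos[tri[0]].y, pos[tri[2]].z - pos[tri[0]].z };
            Vec3 n  = { e1.y * e2.z - e1.z * e2.y,      // face normal = e1 x e2
                        e1.z * e2.x - e1.x * e2.z,
                        e1.x * e2.y - e1.y * e2.x };
            for (unsigned v : tri) { normals[v].x += n.x; normals[v].y += n.y; normals[v].z += n.z; }
        }
        for (Vec3& n : normals) {
            float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
            if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
        }
        return normals;
    }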
Cheers!

Related

How to draw a sphere in D3D11, given position and radius?

To draw a sphere, one does not need to know anything else but its position and radius. Thus, rendering a sphere by passing a triangle mesh sounds very inefficient unless you need per-vertex colors or other such features. Despite googling, searching the D3D11 documentation and reading Introduction to 3D Programming with DirectX 11, I failed to understand:
Is it possible to draw a sphere by passing only the position and radius of it to the GPU?
If not, what is the main principle I have misunderstood?
If yes, how to do it?
My ultimate goal is to pass more parameters later on which will be used by a shader effect.
You will need to implement a geometry shader. This shader should take the sphere center and radius as input and emit a bunch of vertices for rasterization. In general this technique is called point sprites.
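A hedged sketch of that idea, written as a GLSL geometry shader kept in a C++ string (the question targets D3D11, where the HLSL geometry shader is structurally the same; all names here are illustrative). It expands each input point, carrying a radius, into a camera-facing quad that the fragment shader can shade as a sphere impostor:

    // Sketch only: GLSL geometry shader source, embedded the way it would be
    // before compiling with glCreateShader(GL_GEOMETRY_SHADER).
    static const char* kSphereImpostorGS = R"(
    #version 150
    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;

    in  float vRadius[];     // per-sphere radius passed through by the vertex shader
    out vec2  gQuadCoord;    // -1..1 across the quad, used by the fragment shader

    uniform mat4 uProj;      // projection matrix; gl_Position arrives in view space

    void main() {
        vec4  c = gl_in[0].gl_Position;          // sphere center in view space
        float r = vRadius[0];
        vec2 corners[4] = vec2[](vec2(-1,-1), vec2(1,-1), vec2(-1,1), vec2(1,1));
        for (int i = 0; i < 4; ++i) {
            gQuadCoord  = corners[i];
            gl_Position = uProj * (c + vec4(corners[i] * r, 0.0, 0.0));
            EmitVertex();
        }
        EndPrimitive();
    }
    )";

The matching fragment shader would typically discard fragments where dot(gQuadCoord, gQuadCoord) > 1.0 and reconstruct the sphere's depth and normal analytically.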
One option would be to use tessellation.
https://en.wikipedia.org/wiki/Tessellation_(computer_graphics)
Most of the mesh will be generated on the GPU side.
Note:
In the end you still have more parameters sent to the shaders, because the sphere will be split into triangles that are each rendered individually on the screen. But the split is done on the GPU side.
While you can create a sphere from just a point and a radius on the GPU, it's generally not very efficient. With higher-end GPUs you could use hardware tessellation, but even that would be better done a different way.
The better solution is to use instancing and render many copies of the same sphere vertex/index buffer (VB/IB), scaled and translated to different positions and sizes.
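A hedged sketch of the instancing approach in OpenGL terms, since that is this thread's broader topic (in D3D11 the equivalent is a second, per-instance vertex buffer drawn with DrawIndexedInstanced); the buffer and attribute names are illustrative:

    #include <GL/glew.h>   // assumes GLEW (any GL loader works) and a GL 3.3+ context
    #include <vector>

    // Assumes a unit-sphere mesh is already uploaded and bound into `vao`
    // with `indexCount` indices in its index buffer.
    struct SphereInstance { float center[3]; float radius; };

    void DrawSpheresInstanced(GLuint vao, GLuint instanceVBO,
                              const std::vector<SphereInstance>& spheres, GLsizei indexCount)
    {
        glBindVertexArray(vao);

        // Upload per-instance data (center.xyz + radius) and advance it once per instance.
        glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
        glBufferData(GL_ARRAY_BUFFER, spheres.size() * sizeof(SphereInstance),
                     spheres.data(), GL_DYNAMIC_DRAW);
        glEnableVertexAttribArray(3);   // attribute 3 = vec4(center, radius) in the vertex shader
        glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, sizeof(SphereInstance), (void*)0);
        glVertexAttribDivisor(3, 1);

        // One draw call renders every sphere; the vertex shader computes
        // position = center + radius * unitSpherePosition.
        glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr,
                                static_cast<GLsizei>(spheres.size()));
    }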

Recreating Blender's Edge Split Algorithm

I'm working on an OpenGL project right now. I use per-vertex normals for my lighting calculations which causes them to be automatically smoothed (I think this is called gara-something shading).
I'm beginning to work on some low-poly designs using hardcoded models, but they look weird because of the auto-smoothing.
Blender has a mesh edge-split option that does exactly what I'm looking for: it turns a smooth-shaded model into one with hard, faceted edges (before/after screenshots omitted).
Instead of having to rewrite my render system to allow for both per-vertex normal smooth lighting and per-face normal hard lighting, I was wondering if anyone knows how Blender's edge-split algorithm works so I can recreate it in my hardcoded models.
Using Blender, I've compared normally exported .obj files with their edge-split counterparts, and I can't fully work out the difference.
Many thanks.
I don't have enough points for a comment, so here goes an answer..
In general, I agree with lfgtm: if each vertex has one normal, but the vertex is shared (reused, i.e. indexed) by several triangles, then you cannot achieve a 'flat' look or have split edges, because the extra normals are missing. You would have to duplicate the vertex.
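A minimal sketch of that duplication (de-indexing the mesh), assuming an indexed triangle mesh; once every triangle owns its three vertices, each copy can carry its own face normal (see the flat-shading sketch further down; names are illustrative):

    #include <vector>

    struct Vec3 { float x, y, z; };

    // Expand an indexed mesh so that no vertex is shared between triangles.
    // The output has 3 * triangleCount vertices and needs no index buffer.
    std::vector<Vec3> DeIndex(const std::vector<Vec3>& positions,
                              const std::vector<unsigned>& indices)
    {
        std::vector<Vec3> flat;
        flat.reserve(indices.size());
        for (unsigned idx : indices)
            flat.push_back(positions[idx]);   // every occurrence becomes its own vertex
        return flat;
    }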
But since you mention OBJ export, have you tried ticking the "Smooth groups" option in the OBJ export options (screenshot omitted)?
This will create smoothing groups according to the edges you have marked sharp (in Edit Mode: Mesh -> Edges -> Mark Sharp). If your engine supports these smoothing groups in the OBJ file, you will keep this sharp-edge effect. You can preview these sharp edges in Blender using the Edge Split modifier with the "Sharp Edges" option ticked (screenshot omitted).
Blender's Edge Split modifier achieves its result by breaking the mesh: if you apply the modifier, you get a mesh whose faces are no longer connected.
What you want to look into is the difference between smooth and flat shading.
If you google "opengl flat shading" you will find several resources suggesting the use of glShadeModel(GL_FLAT);
That would be Gouraud shading. There's no geometry processing going on here by the looks of things. It's simply a different shading model.
In OpenGL, to get that look you need to draw all three vertices of each triangle with the same normal; this normal is the (normalized) cross product of two of the triangle's edges. You will then see a mesh with the same flat shading as in your image.
If each vertex has its own normal, and these normals are not all equal within the triangle, OpenGL will interpolate between them across the surface of the face, producing a much smoother transition of the shading from vertex to vertex. This is smooth (Gouraud) shading.
So in summary, you can achieve this by changing how you assign normals to the triangles you draw.
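A minimal sketch of computing such a face normal, assuming the mesh has already been de-indexed so each triangle owns its three vertices (struct and function names are illustrative):

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // For each triangle (three consecutive vertices), compute n = normalize((b - a) x (c - a))
    // and store that same normal for all three vertices, which gives flat shading.
    std::vector<Vec3> FlatNormals(const std::vector<Vec3>& pos)
    {
        std::vector<Vec3> normals(pos.size());
        for (size_t i = 0; i + 2 < pos.size(); i += 3) {
            const Vec3 a = pos[i], b = pos[i + 1], c = pos[i + 2];
            Vec3 e1 = { b.x - a.x, b.y - a.y, b.z - a.z };
            Vec3 e2 = { c.x - a.x, c.y - a.y, c.z - a.z };
            Vec3 n  = { e1.y * e2.z - e1.z * e2.y,
                        e1.z * e2.x - e1.x * e2.z,
                        e1.x * e2.y - e1.y * e2.x };
            float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
            if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
            normals[i] = normals[i + 1] = normals[i + 2] = n;
        }
        return normals;
    }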

Difference between tessellation shaders and Geometry shaders

I'm trying to develop a high level understanding of the graphics pipeline. One thing that doesn't make much sense to me is why the Geometry shader exists. Both the Tessellation and Geometry shaders seem to do the same thing to me. Can someone explain to me what does the Geometry shader do different from the tessellation shader that justifies its existence?
The tessellation shader is for variable subdivision. An important part is adjacency information, so you can do smoothing correctly and not wind up with gaps. You could do some limited subdivision with a geometry shader, but that's not really what it's for.
Geometry shaders operate per-primitive. For example, if you need to do stuff for each triangle (such as this), do it in a geometry shader. I've heard of shadow volume extrusion being done. There's also "conservative rasterization" where you might extend triangle borders so every intersected pixel gets a fragment. Examples are pretty application specific.
Yes, they can also generate more geometry than the input but they do not scale well. They work great if you want to draw particles and turn points into very simple geometry. I've implemented marching cubes a number of times using geometry shaders too. Works great with transform feedback to save the resulting mesh.
Transform feedback has also been used with the geometry shader to do more compute operations. One particularly useful mechanism is that it does stream compaction for you (packs its varying amount of output tightly so there are no gaps in the resulting array).
The other very important thing a geometry shader provides is routing to layered render targets (texture arrays, faces of a cube, multiple viewports), something which must be done per-primitive. For example you can render cube shadow maps for point lights in a single pass by duplicating and projecting geometry 6 times to each of the cube's faces.
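As an illustration, a GLSL geometry shader that duplicates each triangle to all six faces of a layered cube-map framebuffer might look roughly like this (a sketch only; the uniform and string names are illustrative):

    // Sketch only: GLSL geometry shader source kept as a C++ string before compilation.
    static const char* kCubeShadowGS = R"(
    #version 150
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 18) out;   // 6 faces * 3 vertices

    uniform mat4 uFaceViewProj[6];   // one view-projection matrix per cube face

    void main() {
        for (int face = 0; face < 6; ++face) {
            gl_Layer = face;                          // route this copy to that cube-map face
            for (int i = 0; i < 3; ++i) {
                gl_Position = uFaceViewProj[face] * gl_in[i].gl_Position;  // world-space input assumed
                EmitVertex();
            }
            EndPrimitive();
        }
    }
    )";

For gl_Layer routing to work, the whole cube map has to be attached to the framebuffer with glFramebufferTexture rather than a single face.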
Not exactly a complete answer but hopefully gives the gist of the differences.
See Also:
http://rastergrid.com/blog/2010/09/history-of-hardware-tessellation/

Working with .off in OpenGL with normals

How can I assign the right normals to the object? Is there a way to transform a .off file into an object that contains normals?
OFF files usually don't support special attributes per face or per vertex and thus have no normals.
The simplest thing you can do is to calculate the normals yourself using one of the well-known and easy-to-implement face-normal algorithms; a minimal example follows.
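The OFF format itself is tiny: a magic line, a counts line, then vertex positions and face index lists. Here is a loader sketch with no error handling (names are illustrative); the resulting positions and indices can then be fed to a vertex-normal generator such as the face-normal averaging described earlier in this thread:

    #include <fstream>
    #include <string>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Minimal OFF reader: "OFF", then "nVerts nFaces nEdges", then vertex lines,
    // then faces given as "count i0 i1 i2 ...". Polygonal faces are fan-triangulated.
    bool LoadOff(const std::string& path, std::vector<Vec3>& positions,
                 std::vector<unsigned>& indices)
    {
        std::ifstream in(path);
        std::string magic;
        size_t nVerts = 0, nFaces = 0, nEdges = 0;
        if (!(in >> magic) || magic != "OFF") return false;
        in >> nVerts >> nFaces >> nEdges;

        positions.resize(nVerts);
        for (auto& p : positions) in >> p.x >> p.y >> p.z;

        for (size_t f = 0; f < nFaces; ++f) {
            size_t count; in >> count;               // number of vertices in this face
            std::vector<unsigned> face(count);
            for (auto& idx : face) in >> idx;
            for (size_t i = 1; i + 1 < count; ++i) { // fan-triangulate non-triangle faces
                indices.push_back(face[0]);
                indices.push_back(face[i]);
                indices.push_back(face[i + 1]);
            }
        }
        return static_cast<bool>(in);
    }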
You can also look at doing normals via a texture (aka bump/normal mapping). You will basically need to generate a normal map of your model in a 3D program, use some kind of per-pixel shading, and set up your shaders to sample the normal texture.
The advantage of doing it that way is that you can have higher detail without increasing the number of polygons. You can do things like put a grainy, bumpy stone pattern on a single flat polygon.
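A rough illustration of per-pixel shading driven by a normal texture, assuming for simplicity that the map stores object-space normals so no tangent basis is needed (the shader and uniform names are illustrative):

    // Sketch only: GLSL 1.20 fragment shader source kept as a C++ string.
    static const char* kNormalMapFS = R"(
    #version 120
    varying vec2 vUV;
    uniform sampler2D uNormalMap;   // RGB encodes a normal remapped from [0,1] to [-1,1]
    uniform vec3 uLightDir;         // light direction in the same (object) space as the normals

    void main() {
        vec3  n    = normalize(texture2D(uNormalMap, vUV).rgb * 2.0 - 1.0);
        float diff = max(dot(n, normalize(uLightDir)), 0.0);
        gl_FragColor = vec4(vec3(diff), 1.0);
    }
    )";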
Also, if you generate your normals procedurally, you can't easily mix flat and smooth shading on the same mesh.

OpenGL/GLSL varying vectors: How to avoid starburst around vertices?

In OpenGL 2.1, I'm passing a position and normal vector to my vertex shader. The vertex shader then sets a varying to the normal vector, so in theory it's linearly interpolating the normals across each triangle. (Which I understand to be the foundation of Phong shading.)
In the fragment shader, I use the normal with Lambert's law to calculate the diffuse reflection. This works as expected, except that the interpolation between vertices looks funny. Specifically, I'm seeing a starburst effect, wherein there are noticeable "hot spots" along the edges between vertices.
Here's an example, not from my own rendering but demonstrating the exact same effect (see the gold sphere partway down the page):
http://pages.cpsc.ucalgary.ca/~slongay/pmwiki-2.2.1/pmwiki.php?n=CPSC453W11.Lab12
Wikipedia says this is a problem with Gouraud shading. But as I understand it, by interpolating the normals and running my lighting calculation per fragment, I'm using the Phong model, not Gouraud. Is that right?
If I were to use a much finer mesh, I presume that these starbursts would be much less noticeable. But is adding more triangles the only way to solve this problem? I would think there would be a way to get smooth interpolation without the starburst effect. (I've certainly seen perfectly smooth shading on rough meshes elsewhere, such as in 3d Studio Max. But maybe they're doing something more sophisticated than just interpolating normals.)
It is not the exact same effect. What you are seeing is one of two things:
1. The result of not normalizing the normals before using them in your fragment shader.
2. An optical illusion created by the collision of linear gradients across the edges of triangles. Really.
The "Gradient Matters" section at the bottom of this page (note: in the interest of full disclosure, that's my tutorial) explains the phenomenon in detail. Simple Lambert diffuse reflectance using interpolated normals effectively creates a more-or-less linear light across a triangle. A triangle with a different set of normals will have a different gradient. It will be C0 continuous (the colors along the edges are the same), but not C1 continuous (the colors along the two gradients change at different rates).
Human vision picks up on gradient differences like these and makes them stand out. Thus, we see them as hard edges when in fact they are not.
The only real solution here is to either tessellate the mesh further or use normal maps created from a finer version of the mesh instead of interpolated normals.
You don't show your code, so it's impossible to tell, but the most likely problem would be unnormalized normals in your fragment shader. The normals calculated in your vertex shader are interpolated, which results in vectors that are not unit length, so you need to renormalize them in the fragment shader before you calculate your fragment lighting.
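Concretely, the renormalization is a one-liner in the fragment shader. A sketch in the question's OpenGL 2.1 / GLSL 1.20 terms (the variable names are illustrative):

    // Sketch only: GLSL 1.20 fragment shader source kept as a C++ string.
    static const char* kLambertFS = R"(
    #version 120
    varying vec3 vNormal;     // written by the vertex shader; no longer unit length after interpolation
    uniform vec3 uLightDir;   // direction towards the light, in the same space as vNormal

    void main() {
        vec3  n    = normalize(vNormal);                       // renormalize per fragment
        float diff = max(dot(n, normalize(uLightDir)), 0.0);   // Lambert's law
        gl_FragColor = vec4(vec3(diff), 1.0);
    }
    )";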