I can't seem to understand the OpenGL pipeline process from a vertex to a pixel.
Can anyone tell me how important vertex normals are in these two shading techniques? As far as I know, in Gouraud shading the lighting is calculated at each vertex and the resulting color is then interpolated across the polygon between the vertices (is this done in the fragment operations, before rasterizing?), while Phong shading consists of first interpolating the vertex normals and then calculating the illumination with each of these normals.
Another thing: suppose bump mapping is applied to, let's say, a plane (2 triangles) with a brick texture as the diffuse map and its respective bump map, all of this with Gouraud shading.
Bump mapping consists of altering the normals by a gradient taken from a bump map. But which normals does it alter, and when (in the fragment shader?), if there are only 4 normals (4 vertices = plane) and all 4 are the same? In Gouraud shading you interpolate the color of each vertex after the illumination calculation, but that calculation is done after altering the normals.
How does the lighting work?
Vertex normals are absolutely essential for both Gouraud and Phong shading.
In Gouraud shading the lighting is calculated per vertex and then interpolated across the triangle.
In Phong shading the normal is interpolated across the triangle and then the calculation is done per-pixel/fragment.
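To make the difference concrete, here is a minimal GLSL sketch of the Phong-shading (per-fragment) variant; the names (vNormal, lightDir, lightColor) are just illustrative and assume a single directional light:

// Fragment shader: Phong shading evaluates the lighting per fragment,
// using the normal written by the vertex shader and interpolated by the rasterizer.
in vec3 vNormal;           // interpolated vertex normal (no longer unit length)
uniform vec3 lightDir;     // direction towards the light, in the same space as vNormal
uniform vec3 lightColor;
out vec4 fragColor;

void main()
{
    vec3 n = normalize(vNormal);                 // re-normalize after interpolation
    float diffuse = max(dot(n, lightDir), 0.0);  // Lambert term
    fragColor = vec4(diffuse * lightColor, 1.0);
}

In Gouraud shading the same dot product would instead be evaluated in the vertex shader, and only the resulting color would be interpolated.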
Bump-mapping refers to a range of different technologies. When doing normal mapping (probably the most common variety these days) the normals, bi-tangent (often erroneously called bi-normal) and tangent are calculated per-vertex to build a basis matrix. This basis matrix is then interpolated across the triangle. The normal retrieved from the normal map is then transformed by this basis matrix and then the lighting is performed per pixel.
There are extensions to the normal mapping technique above that allow bumps to hide other bumps behind them. This is, usually, performed by storing a height map along with the normal map and then ray marching through the height map to find parts that are being obscured. This technique is called Relief Mapping.
There are other, older forms such as DUDV bump mapping (which was implemented in DirectX 6 as Environment-Mapped Bump Mapping, or EMBM).
You also have emboss bump mapping, which was a really early way of doing bump mapping.
Edit: In answer to your comment, emboss bump mapping CAN be performed on Gouraud-shaded triangles. Other forms of bump mapping are necessarily per-pixel, due to the fact that they work by modifying the surface normals on a per-pixel (or at least per-texel) basis. I wouldn't be surprised if there were other methods that can be performed with per-vertex lighting, but I can't think of any off the top of my head. The results will look pretty rubbish compared to doing it on a per-pixel basis, though.
Re: Tangents and Bi-Tangents are actually quite simple once you get your head round them (took me years though, tbh ;)). Any 3D coordinate frame can be defined by a set of vectors that form an orthogonal basis matrix. By setting up the normal, tangent and bi-tangent per vertex you are merely setting up the coordinate frame at each vertex. From this you have the ability to transform a world- or object-space vector into the triangle's own coordinate frame. From here you can simply transform a light vector (or position) into the coordinate frame of a given pixel on the surface of the triangle. This means that the normals in the normal map don't need to be stored in the object's space, and hence as those triangles move around (when being animated, for example) the normals are already being handled in their own local space.
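As a rough illustration of that idea, a vertex shader might build the basis and move the light direction into the triangle's frame like the sketch below. This assumes an orthonormal basis and eye-space lighting; all attribute and uniform names are just placeholders:

// Vertex shader sketch: build the per-vertex TBN frame and transform the
// eye-space light direction into tangent space for the fragment shader.
in vec3 position;
in vec3 normal;
in vec3 tangent;
in vec3 bitangent;
uniform mat4 modelView;
uniform mat4 projection;
uniform vec3 eyeLightDir;       // light direction in eye space
out vec3 tsLightDir;            // light direction in tangent space

void main()
{
    mat3 nm = mat3(modelView);  // assumes no non-uniform scaling
    vec3 N = normalize(nm * normal);
    vec3 T = normalize(nm * tangent);
    vec3 B = normalize(nm * bitangent);
    // Columns of TBN are the tangent-frame axes in eye space; because the
    // basis is orthonormal, its transpose maps eye space into tangent space.
    mat3 TBN = mat3(T, B, N);
    tsLightDir = transpose(TBN) * eyeLightDir;
    gl_Position = projection * modelView * vec4(position, 1.0);
}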
Normal mapping, one of the techniques used to simulate bumped surfaces, basically perturbs the per-pixel normals before you compute the lighting equation for that pixel.
For example, one way to implement it requires you to interpolate the surface normal and binormal (2 of the tangent-space basis vectors) and compute the third per pixel (2+1 vectors, which together form the tangent basis). This technique also requires interpolating the light vector. With those 3 (2 interpolated + 1 computed) vectors (the tangent-space basis) you have a way to transform the light vector from object space into tangent space, because these 3 vectors can be arranged as a 3x3 matrix which changes the basis of your light direction vector.
Then it is simply a matter of using that tangent-space light vector to compute the lighting equation per pixel; in its most basic form that is a dot product between the tangent-space light vector and the normal fetched from the normal map (your bump texture).
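A minimal fragment shader for this step could look like the sketch below, assuming the light vector has already been transformed into tangent space in the vertex shader; tsLightDir, uv, normalMap and diffuseMap are illustrative names:

// Fragment shader sketch: unpack the tangent-space normal from the normal map
// and evaluate a Lambert term against the tangent-space light direction.
in vec3 tsLightDir;              // interpolated tangent-space light direction
in vec2 uv;
uniform sampler2D normalMap;
uniform sampler2D diffuseMap;
out vec4 fragColor;

void main()
{
    // Channels are stored in [0,1]; remap to [-1,1] to recover the normal.
    vec3 n = normalize(texture(normalMap, uv).xyz * 2.0 - 1.0);
    vec3 l = normalize(tsLightDir);
    float diffuse = max(dot(n, l), 0.0);
    fragColor = vec4(diffuse * texture(diffuseMap, uv).rgb, 1.0);
}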
This is what a normal map looks like (the normal components are stored in the channels of the texture and are already in tangent space):
This is one way to do it; you can also compute things in view space, but the above is easier to understand.
Old bump mapping was way simpler and was also kind of a fake effect.
All bump mapping techniques operate at pixel level, as they perturb in one way or other, how the surface is rendered. Even the old emboss bump mapping did some computation per pixel.
EDIT: I added a few more clarifications, when I have some spare minutes I will try to add some math and examples. Although there are great resources out there that explain this in great detail.
First of all, you don't need to understand the whole graphics pipeline to write a simple shader :). But, of course, you should know what's going on. You could read the graphics pipeline chapter in Real-Time Rendering, 3rd Edition (Akenine-Möller, Haines, Hoffman). What you describe is per-vertex and per-fragment lighting. For both calculations the vertex normals are part of the equation. For the bump mapping shader you alter the interpolated normals. So after rasterization you have fragments whose missing data has to be calculated to determine the final pixel color.
Related
I have a question about the use of multiple shadow maps in deferred shading. I have implemented a single shadow map with forward shading.
In forward rendering, in the vertex shader of each object I calculated its position in light space and compared it to the shadow map in the fragment shader. I can see that working with multiple maps using an array of projection matrices and an array of shadow maps as uniforms.
In the case of deferred shading I was wondering what is the common practice. The way I see it there are a few options:
1. In the deferred shading pass, for each pixel I calculate its position in each light space and compare it to the corresponding shadow map. (That way I do the calculation for each fragment and each matrix, which might be too expensive?)
2. In forward rendering I calculate the position of each vertex in each light's projection and write a G-buffer output for each position. I then do the comparison in the deferred shading pass. (That way I compute the position only once per vertex instead of once per pixel, but I need a shadow map and a light-space position for each shadow, which seems suboptimal.)
3. A bit like 2, but I do the shadow test in forward rendering. That way I can store, for each shadow, a boolean saying whether the fragment is lit or not, packed into one integer texture. The problem is that I can't do soft shadows that way.
Maybe something better?
To synthesise: 1 needs many matrix multiplications but is easy to implement; 2 needs few matrix multiplications but many textures and outputs (which are limited by the graphics card); and 3 needs few outputs and few calculations per pixel, but I can't get soft shadows because the result is an array of booleans.
I am not doing this for better performance, really, but mostly to learn new things. I'm open to suggestions. Maybe I'm misunderstanding something. Is there a standard way to do it?
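For reference, here is a minimal sketch of what I imagine option 1 looking like in the deferred lighting pass. The G-buffer layout, NUM_SHADOWS and all names are just placeholders, and indexing the sampler array with a loop variable needs GL 4.x (or an unrolled loop):

// Deferred lighting pass (option 1): reconstruct the world-space position
// from the G-buffer, transform it into each light's clip space and compare
// against that light's shadow map.
#define NUM_SHADOWS 4
in vec2 uv;
uniform sampler2D gWorldPos;                     // world-space position G-buffer
uniform mat4 lightViewProj[NUM_SHADOWS];
uniform sampler2DShadow shadowMaps[NUM_SHADOWS]; // compare mode enabled on each map
out vec4 fragColor;

void main()
{
    vec3 worldPos = texture(gWorldPos, uv).xyz;
    float lit = 1.0;
    for (int i = 0; i < NUM_SHADOWS; ++i)
    {
        vec4 ls = lightViewProj[i] * vec4(worldPos, 1.0); // light clip space
        vec3 proj = ls.xyz / ls.w * 0.5 + 0.5;            // to [0,1] shadow-map space
        lit *= texture(shadowMaps[i], proj);              // hardware depth comparison
    }
    fragColor = vec4(vec3(lit), 1.0);
}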
I'm currently trying to implement bump mapping, which requires having a "tangent space". I read through some tutorials, specifically the following two:
http://www.terathon.com/code/tangent.html
http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-13-normal-mapping/
Both tutorials avoid expensive matrix computation in the fragment shader, which would be required if the shading computation happened in camera space as usual (as I'm used to, at least).
They introduce the tangent space, which might be different per vertex (or even per fragment if the surface is smoothed). If I understand it correctly, for efficient bump mapping (i.e. to minimize computations in the fragment shader), they convert everything needed for the lighting computation into this tangent space using the vertex shader. But I wonder if model space is a good alternative space to compute the lighting in.
My questions regarding this topic are:
1. For shading computation in tangent space, what exactly do I pass between the vertex and fragment shaders? Do I really need to convert the light positions into tangent space, requiring O(number of lights) varying variables? This will not work for deferred shading, for example, or if the light positions aren't known in the vertex shader for some other reason. There has to be a (still efficient) alternative, which I guess is shading computation in model space.
2. If I pass model-space varyings, is it a good idea to still perform the shading computations in tangent space, i.e. convert the light positions in the fragment shader? Or is it better to perform the shading computations in model space? Which will be faster? (In both cases I need a TBN matrix, but one case requires a model-to-tangent transform, the other a tangent-to-model transform.)
3. I currently pass per-vertex normal, tangent and bitangent (orthonormal) to the vertex shader. If I understand it correctly, the orthonormalization is only required if I want to quickly build a model-to-tangent-space matrix, which requires inverting a matrix containing the TBN vectors. If they are orthogonal, this is simply a transposition. But if I don't need vectors in tangent space, I don't need an inversion, just the original TBN vectors in a matrix, which is then the tangent-to-model matrix. Wouldn't this simplify everything?
Normal mapping is usually done in tangent space because the normal maps are given in this space. So if you pre-transform the (relatively little) input data to tangent space in the vertex shader, you don't need extra computation in the fragment shader. That requires that all input data is available, of course. I haven't done bump mapping with deferred shading, but using the model space seems to be a good idea. World space would probably be even better, because you'll need world space vectors in the end to render to the G-buffers.
If you pass model-space vectors, I would recommend performing the calculations in that space. Then the fragment shader only has to transform one normal from tangent space to model space. In the other case it would have to transform n light attributes from model space to tangent space, which should take n times longer.
If you don't need the inverse TBN matrix, a non-orthonormal coordinate system should be fine. At least I don't see any reason why it should not be.
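A sketch of the model-space variant recommended above: the fragment shader transforms the single sampled normal from tangent space into model space with the interpolated TBN vectors and lights there. All names are illustrative:

// Fragment shader sketch: tangent-to-model transform of the normal-map normal,
// then Lambert lighting entirely in model space.
in vec3 msTangent;       // interpolated model-space tangent frame
in vec3 msBitangent;
in vec3 msNormal;
in vec3 msFragPos;       // fragment position in model space
in vec2 uv;
uniform sampler2D normalMap;
uniform vec3 msLightPos; // light position already given in model space
out vec4 fragColor;

void main()
{
    // Tangent-to-model matrix built directly from the interpolated vectors; no inversion needed.
    mat3 TBN = mat3(normalize(msTangent), normalize(msBitangent), normalize(msNormal));
    vec3 n = normalize(TBN * (texture(normalMap, uv).xyz * 2.0 - 1.0));
    vec3 l = normalize(msLightPos - msFragPos);
    fragColor = vec4(vec3(max(dot(n, l), 0.0)), 1.0);
}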
If you subdivide a cylinder into an 8-sided prism, calculating vertex normals based on their position ("smooth shading"), it looks pretty good.
If you subdivide a cone into an 8-sided pyramid, calculating normals based on their position, you get stuck on the tip of the cone (technically the vertex of the cone, but let's call it the tip to avoid confusion with the mesh vertices).
For each triangular face, you want to match the normals along both edges. But because you can only specify one normal at each vertex of a triangle, you can match one edge or the other, but not both. You can compromise by choosing a tip normal that is the average of the two edges, but now none of your edges look good. Here is a detail of what choosing the average normal for each tip vertex looks like.
In a perfect world, the GPU could rasterize a true quad, not just triangles. Then we could specify each face with a degenerate quad, allowing us to specify a different normal for the two adjoining edges of each triangle. But all we have to work with are triangles... We can cut the cone into multiple "stacks", so that the edge discontinuities are only visible at the tip of the cone rather than along the whole thing, but there will still be a tip!
Anybody have any tricks for smooth-shaded low-poly cones?
I was struggling a bit with cones made up of triangles in modern OpenGL (i.e. with shaders), but then I found a surprisingly simple solution! I would say it is much better and simpler than what is suggested in the currently accepted answer.
I have an array of triangles (obviously each has 3 vertices) which form the cone surface. I did not care about the bottom face (circular base) as this is really straightforward. In all my work I use the following simple vertex structure:
position: vec3 (was automatically converted to vec4 in the shader by adding 1.0f as the last element)
normal_vector: vec3 (was kept as vec3 in the shaders as it was used for calculation dot product with the light direction)
color: vec3 (I did not use transparency)
In my vertex shader I only transformed the vertex positions (multiplying by the projection and model-view matrices) and also transformed the normal vectors (multiplying by the transposed inverse of the model-view matrix). Then the transformed positions, transformed normal vectors and untransformed colors were passed to the fragment shader, where I calculated the dot product of the light direction and the normal vector and multiplied that number with the color.
Let me start with what I did and found unsatisfactory:
Attempt #1: Each cone face (triangle) used a constant normal vector, i.e. all vertices of one triangle had the same normal vector.
This was simple but did not achieve smooth lighting; each face had a constant color because all fragments of the triangle had the same normal vector. Wrong.
Attempt #2: I calculated the normal vector for each vertex separately. This was easy for the vertices on the circular base of the cone, but what should be used for the tip of the cone? I used the normal vector of the whole triangle (i.e. the same value as in attempt #1). Well, this was better because I had smooth lighting in the part closer to the base of the cone, but it was not smooth near the tip. Wrong.
But then I found the solution:
Attempt #3: I did everything as in attempt #2, except that I assigned the zero vector vec3(0.0f, 0.0f, 0.0f) as the normal of the cone-tip vertices. This is the key to the trick! The zero normal vector is then passed to the fragment shader (i.e. between the vertex and fragment shaders it is automatically interpolated with the normal vectors of the other two vertices). Of course you then need to normalize the vector in the fragment (!) shader, because it no longer has a constant length of 1 (which I need for the dot product). So I normalize it - of course this is not possible for the very tip of the cone, where the interpolated normal has zero length, but it works for all other points. And that's it.
There is one important thing to remember: you can only normalize the normal vector in the fragment shader. You will certainly get an error if you try to normalize a zero-length vector in C++. So if for some reason you need normalization before entering the fragment shader, make sure you exclude the zero-length normal vectors (i.e. the tip of the cone), or you will get an error.
This produces smooth shading of the cone at all points except the very tip. But that point is just not important (who cares about one pixel...), or you can handle it in a special way. Another advantage is that you can use even a very simple shader; the only change is to normalize the normal vectors in the fragment shader rather than in the vertex shader or even earlier.
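In shader terms the trick only needs something along these lines (vNormal, vColor and lightDir are illustrative names; the tip vertex simply carries vec3(0.0) in its normal attribute):

// Fragment shader sketch for the zero-normal trick: normalization happens here,
// after interpolation, so the zero normal at the tip is harmless everywhere
// except at the tip pixel itself.
in vec3 vNormal;
in vec3 vColor;
uniform vec3 lightDir;
out vec4 fragColor;

void main()
{
    vec3 n = normalize(vNormal);  // interpolated normal; only exactly zero at the tip
    fragColor = vec4(max(dot(n, normalize(lightDir)), 0.0) * vColor, 1.0);
}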
Yes, it certainly is a limitation of triangles. I think showing the issue as you approach a cone from a cylinder makes the problem quite clear:
Here are some things you could try...
Use quads (as #WhitAngl says). To hell with new OpenGL, there is a use for quads after all.
Tessellate a bit more evenly. Setting the normal at the tip to a common up vector removes any harsh edges, though looks a bit strange against the unlit side. Unfortunately this goes against your question title, low polygon cone.
Make sure your cone is centred around the object-space origin (or procedurally generate it in the vertex shader), and use the fragment position to generate the normal...
in vec2 coneSlope;            // x = normal magnitude in the x/z plane, y = normal's y component
in vec3 objectSpaceFragPos;
uniform mat3 normalMatrix;

void main()
{
    // Radial direction in the x/z plane, scaled by the slope; the .xzy swizzle
    // puts the y component of the normal back on the cone's (y) axis.
    vec3 osNormal = vec3(normalize(objectSpaceFragPos.xz) * coneSlope.x, coneSlope.y).xzy;
    vec3 esNormal = normalMatrix * osNormal;
    ...
}
Maybe there are some fancy tricks you can do to reduce fragment shader ops too.
Then there's the whole balance of tessellating more vs more expensive shaders.
A cone is a fairly simple object and, while I like the challenge, in practice I can't see this being an issue unless you want lots of cones. In which case you might get into geometry shaders or instancing. Better yet you could draw the cones using quads and raycast implicit cones in the fragment shader. If the cones are all on a plane you could try normal mapping or even parallax mapping.
I'm working on a Minecraft-like engine as a hobby project to see how far the concept of voxel terrains can be pushed on modern hardware and OpenGL >= 3. So, all my geometry consists of quads, or squares to be precise.
I've built a raycaster to estimate ambient occlusion, and use the technique of "bent normals" to do the lighting. So my normals aren't perpendicular to the quad, nor do they have unit length; rather, they point roughly towards the space where least occlusion is happening, and are shorter when the quad receives less light. The advantage of this technique is that it just requires a one-time calculation of the occlusion, and is essentially free at render time.
However, I run into trouble when I try to assign different normals to different vertices of the same quad in order to get smooth lighting. Because the quad is split up into triangles, and linear interpolation happens over each triangle, the result of the interpolation clearly shows the presence of the triangles as ugly diagonal artifacts:
The problem is that OpenGL uses barycentric interpolation over each triangle, which is a weighted sum over 3 out of the 4 corners. Ideally, I'd like to use bilinear interpolation, where all 4 corners are being used in computing the result.
I can think of some workarounds:
Stuff the normals into a 2x2 RGB texture, and let the texture processor do the bilinear interpolation. This happens at the cost of a texture lookup in the fragment shader. I'd also need to pack all these mini-textures into larger ones for efficiency.
Use vertex attributes to attach all 4 normals to each vertex. Also attach some [0..1] coefficients to each vertex, much like texture coordinates, and do the bilinear interpolation in the fragment shader. This happens at the cost of passing 4 normals to the shader instead of just 1.
I think both these techniques can be made to work, but they strike me as kludges for something that should be much simpler. Maybe I could transform the normals somehow, so that OpenGL's interpolation would give a result that does not depend on the particular triangulation used.
(Note that the problem is not specific to normals; it is equally applicable to colours or any other value that needs to be smoothly interpolated across a quad.)
Any ideas how else to approach this problem? If not, which of the two techniques above would be best?
As you clearly understand, the triangle interpolation that GL does is not what you want.
So the normal data can't come directly from the vertex data.
I'm afraid the solutions you're envisioning are about the best you can achieve. And no matter which one you pick, you'll need to pass [0..1] coefficients down from the vertex shader to the fragment shader (even with 2x2 textures: you need them as texture coordinates).
There are some tricks you can do to somewhat simplify the process, though.
Using the vertex ID can help you find which quad "corner" to pass from the vertex to the fragment shader (our [0..1] values). A simple bit test on the lowest 2 bits can tell you which corner to pass down, without any actual vertex data input, as in the sketch below. If you pack the data into textures, you still need to pass an identifier inside the texture, so this may be moot.
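A rough sketch of how that could fit together with your workaround 2 (bilinear blend of the four corner normals in the fragment shader); the vertex ordering and all names are assumptions:

// --- vertex shader ---
out vec2 corner;                      // (0,0), (1,0), (0,1) or (1,1)
void main()
{
    int id = gl_VertexID & 3;         // assumes 4 consecutive vertices per quad
    corner = vec2(float(id & 1), float((id >> 1) & 1));
    // ... transform the position and pass through the four corner normals ...
}

// --- fragment shader ---
in vec2 corner;                       // interpolated [0..1] coefficients
in vec3 n00, n10, n01, n11;           // the same four corner normals at every vertex
out vec4 fragColor;
void main()
{
    vec3 n = normalize(mix(mix(n00, n10, corner.x),
                           mix(n01, n11, corner.x), corner.y)); // bilinear blend
    fragColor = vec4(n * 0.5 + 0.5, 1.0);  // visualize the smoothly varying normal
}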
If you use 2x2 textures to do the interpolation, there are (were?) some gotchas. Some texture interpolators don't necessarily give a high-precision interpolation if the source has low precision to begin with. This may require you to change the texture data type to something of higher precision to avoid banding artifacts.
Well... as you're using the bent-normals technique, the best way to improve the result is to pre-tessellate the mesh and recompute the bent normals on the more finely tessellated mesh.
Another way would be some tricks within the pixel shader... one possibility: you can actually interpolate the texture on your own (and not use the built-in interpolator) in the pixel shader, which could help you a lot. And you're not limited to just bilinear interpolation; you could do better, e.g. bicubic interpolation ;)
In OpenGL 2.1, I'm passing a position and normal vector to my vertex shader. The vertex shader then sets a varying to the normal vector, so in theory it's linearly interpolating the normals across each triangle. (Which I understand to be the foundation of Phong shading.)
In the fragment shader, I use the normal with Lambert's law to calculate the diffuse reflection. This works as expected, except that the interpolation between vertices looks funny. Specifically, I'm seeing a starburst effect, with noticeable "hot spots" along the edges between vertices.
Here's an example, not from my own rendering but demonstrating the exact same effect (see the gold sphere partway down the page):
http://pages.cpsc.ucalgary.ca/~slongay/pmwiki-2.2.1/pmwiki.php?n=CPSC453W11.Lab12
Wikipedia says this is a problem with Gouraud shading. But as I understand it, by interpolating the normals and running my lighting calculation per fragment, I'm using the Phong model, not Gouraud. Is that right?
If I were to use a much finer mesh, I presume that these starbursts would be much less noticeable. But is adding more triangles the only way to solve this problem? I would think there would be a way to get smooth interpolation without the starburst effect. (I've certainly seen perfectly smooth shading on rough meshes elsewhere, such as in 3d Studio Max. But maybe they're doing something more sophisticated than just interpolating normals.)
It is not the exact same effect. What you are seeing is one of two things:
1. The result of not normalizing the normals before using them in your fragment shader.
2. An optical illusion created by the collision of linear gradients across the edges of triangles. Really.
The "Gradient Matters" section at the bottom of this page (note: in the interest of full disclosure, that's my tutorial) explains the phenomenon in detail. Simple Lambert diffuse reflectance using interpolated normals effectively creates a more-or-less linear light across a triangle. A triangle with a different set of normals will have a different gradient. It will be C0 continuous (the colors along the edges are the same), but not C1 continuous (the colors along the two gradients change at different rates).
Human vision picks up on gradient differences like these and makes them stand out. Thus, we see them as hard-edges when in fact they are not.
The only real solution here is to either tessellate the mesh further or use normal maps created from a finer version of the mesh instead of interpolated normals.
You don't show your code, so it's impossible to tell, but the most likely problem would be unnormalized normals in your fragment shader. The normals calculated in your vertex shader are interpolated, which results in vectors that are not unit length, so you need to renormalize them in the fragment shader before you calculate your fragment lighting.