How are colour values interpolated in a Gouraud shader? - opengl

A Gouraud shader computes lighting at the corners of each triangle and linearly interpolates the resulting colours across the pixels covered by the triangle.
How would you program this interpolation in GLSL?
Do you even have to code it yourself, or does OpenGL do the interpolation internally on its own?

From https://www.opengl.org/wiki/Type_Qualifier_(GLSL) :
Interpolation qualifiers
Certain inputs and outputs can use interpolation qualifiers. These are for any values which could be interpolated as a result of rasterization. These include:
Vertex shader outputs
Tessellation control shader inputs (to match with outputs from the VS)
Tessellation evaluation shader outputs
Geometry shader inputs (to match with outputs from the TES/VS) and outputs
Fragment shader inputs
Interpolation qualifiers control how interpolation of values happens across a triangle or other primitive. There are three basic interpolation qualifiers.
flat
The value will not be interpolated. The value given to the fragment shader is the value from the Provoking Vertex for that primitive.
noperspective
The value will be linearly interpolated in window-space. This is usually not what you want, but it can have its uses.
smooth
The value will be interpolated in a perspective-correct fashion. This is the default if no qualifier is present.
Since smooth is the default, OpenGL does that interpolation for you. If you did not want the interpolation, you would declare your colour with the flat qualifier on the matching output/input pair, something like this:
flat out vec3 color;   // in the vertex shader
flat in vec3 color;    // in the fragment shader
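For completeness, here is a minimal Gouraud-style sketch of how the default smooth interpolation is typically used. It assumes a GLSL 3.30 core context, and names such as vColor, mvp and lightDir are just placeholders:

// Vertex shader: lighting is computed once per vertex (Gouraud shading).
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
uniform mat4 mvp;
uniform vec3 lightDir;              // assumed normalized and in the same space as 'normal'
smooth out vec3 vColor;             // 'smooth' is the default and could be omitted
void main() {
    float diffuse = max(dot(normalize(normal), lightDir), 0.0);
    vColor = vec3(1.0, 0.0, 0.0) * diffuse;   // a lit red surface
    gl_Position = mvp * vec4(position, 1.0);
}

// Fragment shader: vColor arrives already interpolated by the rasterizer.
#version 330 core
smooth in vec3 vColor;              // qualifier should match the vertex shader output
out vec4 fragColor;
void main() {
    fragColor = vec4(vColor, 1.0);
}

Replacing smooth with flat on both declarations would instead give every fragment the colour of the provoking vertex, i.e. flat shading.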

Related

How to deduce the triangle vertices in fragment shader

If I draw triangles using OpenGL, how do I deduce their vertices for each fragment? Sending the position from the vertex shader gets it interpolated, so the individual vertex positions are lost.
From a geometry shader you can access all three vertices of your triangle, so you can pass them to the fragment shader via in/out (a.k.a. varying) variables. To prevent them from being interpolated, just use the flat interpolation qualifier.
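A hedged sketch of that idea, assuming a GLSL 3.30 geometry shader and a vertex-shader output called vWorldPos (the names are illustrative):

// Geometry shader: forward all three triangle corners to every fragment.
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec3 vWorldPos[];                // per-vertex position from the vertex shader
flat out vec3 triVerts[3];          // 'flat': not interpolated, taken from the provoking vertex

void main() {
    for (int i = 0; i < 3; ++i) {
        // Every emitted vertex carries the same three corner positions,
        // so it does not matter which one is the provoking vertex.
        triVerts[0] = vWorldPos[0];
        triVerts[1] = vWorldPos[1];
        triVerts[2] = vWorldPos[2];
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}

The fragment shader then declares flat in vec3 triVerts[3]; and sees the unmodified corner positions for every fragment of the triangle.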

Opengl: Quadratic interpolation between vertex and fragment shader

Is it possible to implement interpolation that is of a higher order than linear when passing data from the vertex to the fragment shader? Ideally I would like some form of quadratic interpolation, but that would require access to vertices beyond the corners of the face being interpolated across.
The short answer is: no.
I do not think there is native support for interpolation other than linear when it comes to attributes passed from vertices to the fragment shader.
However, you could get non-linear interpolation with a trick: use a geometry shader and insert interpolated vertices in between. Or, if you want some particular distribution of values along the interpolated line, you can use a predefined 1D texture that contains the interpolation curve and look it up in the fragment shader.
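As a rough sketch of the 1D-texture trick (curveTex and t are assumed names): the parameter is still interpolated linearly across the primitive, but the fragment shader reshapes it through a lookup into a texture that stores the desired curve.

// Fragment shader: remap a linearly interpolated parameter through a curve texture.
#version 330 core
in float t;                         // linearly interpolated 0..1, written by the vertex shader
uniform sampler1D curveTex;         // pre-filled with the desired (e.g. quadratic) curve
out vec4 fragColor;

void main() {
    float shaped = texture(curveTex, t).r;    // non-linear value replaces the linear ramp
    fragColor = vec4(vec3(shaped), 1.0);
}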

Calculate normals for plane inside fragment shader

I have a situation where I need to do lighting. I don't have a vertex shader, so I can't interpolate normals into my fragment shader. I also have no way to pass in a normal map. Can I generate normals completely in the fragment shader, based, for example, on the fragment coordinates? The geometry is always planar in my case.
And to expand on what I am trying to do:
I am using the NV_path_rendering extension, which allows rendering pure vector graphics on the GPU. The problem is that only the fragment stage is accessible via a shader, which basically means I can't use a vertex shader with NV_path objects.
Since your shapes are flat and NV_path_rendering requires the compatibility profile, you can pass the normal through one of the built-in varyings gl_Color or gl_SecondaryColor.
The extension description says that there is some kind of interpolation:
Interpolation of per-vertex data (section 3.6.1). Path primitives have neither conventional vertices nor per-vertex data. Instead fragments generate interpolated per-fragment colors, texture coordinate sets, and fog coordinates as a linear function of object-space or eye-space path coordinate's or using the current color, texture coordinate set, or fog coordinate state directly.
http://developer.download.nvidia.com/assets/gamedev/files/GL_NV_path_rendering.txt
Here's a method which "sets the normal as the face normal", without knowing anything about vertex normals (as I understand it).
https://stackoverflow.com/a/17532576/738675
I have a three.js demo working here:
http://meetar.github.io/three.js-normal-map-0/index6.html
My implementation is getting vertex position data from the vertex shader, but it sounds like you're able to get that through other means.
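The gist of that linked approach, sketched here under the assumption that some interpolated position vPosition is available in the fragment shader, is to build the face normal from screen-space derivatives:

// Fragment shader: reconstruct a face normal without any vertex normals.
#version 330 core
in vec3 vPosition;                  // eye- or world-space position (assumed input)
out vec4 fragColor;

void main() {
    vec3 dx = dFdx(vPosition);      // position change along screen x
    vec3 dy = dFdy(vPosition);      // position change along screen y
    vec3 faceNormal = normalize(cross(dx, dy));     // constant over a planar face
    fragColor = vec4(faceNormal * 0.5 + 0.5, 1.0);  // visualize the normal
}

Because the geometry is planar, the derivatives are constant over the face, so every fragment gets the same normal (up to winding/orientation).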

Vertex shader vs Fragment Shader [duplicate]

I've read some tutorials regarding Cg, yet one thing is not quite clear to me.
What exactly is the difference between vertex and fragment shaders?
And for what situations is one better suited than the other?
A fragment shader is the same as a pixel shader.
One main difference is that a vertex shader can manipulate the attributes of vertices, which are the corner points of your polygons.
The fragment shader, on the other hand, takes care of how the pixels between the vertices look. Their values are interpolated between the defined vertices following specific rules.
For example: if you want your polygon to be completely red, you would define all vertices red. If you want specific effects, like a gradient between the vertices, you have to handle that in the fragment shader.
Put another way:
The vertex shader is part of the early steps in the graphics pipeline, somewhere between model coordinate transformation and polygon clipping, I think. At that point, nothing is really done yet.
However, the fragment/pixel shader is part of the rasterization step, where the image is calculated and the pixels between the vertices are filled in or "coloured".
Just read about the graphics pipeline here and everything will reveal itself:
http://en.wikipedia.org/wiki/Graphics_pipeline
The vertex shader runs on every vertex, while the fragment shader runs on every fragment (roughly, every covered pixel). The fragment shader is applied after the vertex shader.
Nvidia Cg Tutorial:
Vertex transformation is the first processing stage in the graphics hardware pipeline. Vertex transformation performs a sequence of math operations on each vertex. These operations include transforming the vertex position into a screen position for use by the rasterizer, generating texture coordinates for texturing, and lighting the vertex to determine its color.
The results of rasterization are a set of pixel locations as well as a set of fragments. There is no relationship between the number of vertices a primitive has and the number of fragments that are generated when it is rasterized. For example, a triangle made up of just three vertices could take up the entire screen, and therefore generate millions of fragments!
Earlier, we told you to think of a fragment as a pixel if you did not know precisely what a fragment was. At this point, however, the distinction between a fragment and a pixel becomes important. The term pixel is short for "picture element." A pixel represents the contents of the frame buffer at a specific location, such as the color, depth, and any other values associated with that location. A fragment is the state required potentially to update a particular pixel.
The term "fragment" is used because rasterization breaks up each geometric primitive, such as a triangle, into pixel-sized fragments for each pixel that the primitive covers. A fragment has an associated pixel location, a depth value, and a set of interpolated parameters such as a color, a secondary (specular) color, and one or more texture coordinate sets. These various interpolated parameters are derived from the transformed vertices that make up the particular geometric primitive used to generate the fragments. You can think of a fragment as a "potential pixel." If a fragment passes the various rasterization tests (in the raster operations stage, which is described shortly), the fragment updates a pixel in the frame buffer.
Vertex shaders and fragment shaders are both features of 3D rendering that does not use the fixed-function pipeline. In any 3D rendering, vertex shaders are applied before fragment/pixel shaders.
The vertex shader operates on each vertex. If you have a fixed polygon mesh and you want to deform it in a shader, you have to do it in the vertex shader; that is, any geometric change to the vertices is done in vertex shaders.
The fragment shader takes the output from the vertex shader and associates colours, a depth value for the pixel, etc. After these operations the fragment is sent to the framebuffer for display on the screen.
Some operations, for example lighting calculations, can be performed in the vertex shader as well as in the fragment shader, but the fragment shader gives a better (per-pixel) result.
When rendering images via 3D hardware you typically have a mesh (points, polygons, lines), and these are defined by vertices. To manipulate vertices individually, typically for motion in a model or waves in an ocean, you use vertex shaders. These vertices can have a static colour or a colour assigned by textures; to manipulate those colours you use fragment shaders. At the end of the pipeline, when the view goes to the screen, you can also use fragment shaders.
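To make that split concrete, here is a small hedged sketch (identifiers such as time, mvp and height are placeholders): the vertex shader deforms the mesh like a wave, and the fragment shader decides the colour of each covered pixel.

// Vertex shader: geometric change, only possible per vertex.
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 mvp;
uniform float time;
out float height;
void main() {
    vec3 p = position;
    p.y += 0.2 * sin(p.x * 4.0 + time);     // simple wave displacement
    height = p.y;
    gl_Position = mvp * vec4(p, 1.0);
}

// Fragment shader: colour for every fragment, based on the interpolated height.
#version 330 core
in float height;
out vec4 fragColor;
void main() {
    float k = clamp(height * 2.0 + 0.5, 0.0, 1.0);
    fragColor = mix(vec4(0.0, 0.2, 0.5, 1.0),   // trough colour
                    vec4(0.7, 0.9, 1.0, 1.0),   // crest colour
                    k);
}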

Can someone explain how this code transforms something from per vertex lighting to per pixel?

In a tutorial there was a diffuse value calculation of the type
float diffuse_value = max(dot(vertex_normal, vertex_light_position), 0.0);
...in the vertex shader.
That was supposed to produce per-vertex lighting if, later in the fragment shader, we have:
gl_FragColor = gl_Color * diffuse_value;
Then, when he moved that first line into the fragment shader (appropriately, by outputting vertex_normal and vertex_light_position from the vertex shader), it was supposed to turn the method into "per-pixel shading".
How is that so? The first method appears to be doing the diffuse_value calculation for every pixel anyway!
diffuse_value in the first case is computed in the vertex shader. So it's only done per vertex.
After the vertex shader outputs its values, the rasterizer takes those values (three per triangle for each vector) and interpolates them (in a perspective-correct manner) to provide different values for each pixel. As it happens, interpolating vectors like that (the normal and the light-direction vectors) is not quite proper, because it loses their normalized property. Many implementations will actually normalize the vectors as the first thing in the fragment shader.
But it's worse to interpolate the dot product of the two vectors (which is what the per-vertex lighting effectively does). Say, for example, that your normal is N = +Z for all your vertices, and L = norm(Z - X) on one vertex and L = norm(Z + X) on another.
N.L = 1/sqrt(2) for both vertices.
Interpolating that dot product gives you flat lighting across the triangle, whereas interpolating N and L separately and renormalizing gives you the result you'd expect: lighting that peaks exactly in the middle of the polygon (because the interpolation of norm(Z - X) and norm(Z + X) gives exactly Z once normalized).
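A hedged sketch of the per-pixel version the answer describes (the identifiers are illustrative): the vectors themselves are interpolated instead of the finished dot product, and they are re-normalized in the fragment shader.

// Vertex shader: pass the vectors on, do not compute the dot product here.
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
uniform mat4 mvp;
uniform vec3 lightPos;              // assumed to be in the same space as 'position'
out vec3 vNormal;
out vec3 vLightDir;
void main() {
    vNormal   = normal;
    vLightDir = lightPos - position;
    gl_Position = mvp * vec4(position, 1.0);
}

// Fragment shader: re-normalize the interpolated vectors and light per pixel.
#version 330 core
in vec3 vNormal;
in vec3 vLightDir;
out vec4 fragColor;
void main() {
    float diffuse_value = max(dot(normalize(vNormal), normalize(vLightDir)), 0.0);
    fragColor = vec4(vec3(diffuse_value), 1.0);   // white material for simplicity
}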
Well ... Code in a vertex shader is only evaluated per-vertex, with the input values of that vertex.
But when moved to a fragment shader, it is evaluated per-fragment, i.e. per pixel, with input values appropriately interpolated between vertices.
At least that is my understanding, I'm quite rusty with shader programming though.
If diffuse_value is computed in the vertex shader, that means it is computed per vertex. It is then linearly interpolated over the pixels of the triangle and fed into the pixel shader. (If you don't have per-pixel normals, that's all you can do.) Then, in the pixel shader, the polygon colour (interpolated too) is modulated by that diffuse_value.