If I use OpenGL 3.2+ with a compatibility context and have a fragment shader, is it necessary to have a vertex shader? I would like to know whether per-vertex lighting and other per-vertex calculations can be done by the fixed-function pipeline so that I can just use the fragment shader.
Also, what implications would this have for per-vertex attribute binding locations?
if per-vertex lighting and other per-vertex calculations
can be done by the fixed-function pipeline
They can, if you use the fixed-pipeline lights. Otherwise, part of the work (such as transformed normals, UVs, and positions) must be computed elsewhere before being passed to the fragment shader. This "elsewhere" is called the vertex shader. So yes, if you don't use the fixed-pipeline lighting system, you must use both a vertex and a fragment shader.
Also, if you use fixed-pipeline lighting you can still use shaders, and they can access the fixed-function light and material properties. But I see no point in doing so unless you wish to override the default behavior.
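For illustration, here is a minimal sketch of the fragment-shader-only setup under a compatibility profile. Fixed-function vertex processing supplies the built-in varyings (gl_Color carries the fixed-function lighting result, gl_TexCoord[0] the texture coordinate), and the fragment shader just combines them; the sampler name "tex" is an assumption.

```glsl
#version 150 compatibility

uniform sampler2D tex; // illustrative sampler name

void main()
{
    // gl_Color holds the interpolated fixed-function per-vertex
    // lighting result when GL_LIGHTING is enabled; gl_TexCoord[0]
    // holds the fixed-function texture coordinate.
    gl_FragColor = gl_Color * texture2D(tex, gl_TexCoord[0].st);
}
```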
I need to pass some variables directly from the vertex shader to the fragment shader, but my pipeline also contains a TCS, a TES, and a GS that simply do pass-through work.
I already know that the fragment shader expects to receive values for its "in" variables from the last linked shader stage of the program, in my case the geometry shader, but I don't want to do the MVP and normal calculations there.
How can I output a variable directly to the fragment shader from the vertex shader (skipping the rest of the shaders in the middle)?
Is that even possible?
How can I output a variable directly to the fragment shader from the vertex shader (skipping the rest of the shaders in the middle)?
You don't.
Each stage can only access values provided by the previous active stage in the pipeline. If you want to communicate from the VS to the FS, then every stage between them must shepherd those values through themselves. After all:
my pipeline also contains a TCS, a TES
If you're doing tessellation, then how exactly could a VS directly communicate with an FS? The fragment shader's inputs are per-fragment values generated by rasterizing the primitive being rendered. But since tessellation is active, the primitives the VS operated on don't exist anymore; only the post-tessellation primitives exist.
So if the VS's primitives are all gone, how do the tessellated primitives get values? For a vertex that didn't exist until the tessellator activated, from where would it get a vertex value to be rasterized and interpolated across the generated primitive?
The job of figuring that out is given to the TES. It will use the values output from the VS (sent through the TCS if present) and interpolate/generate them in accord with the tessellation interpolation scheme it is coded with. That is what the TES is for.
The GS is much the same. A geometry shader can take one primitive and turn it into twenty, or discard entire primitives. How could the VS possibly communicate vertex information to a fragment shader through a GS that may drop that primitive on the floor, create 30 separate ones, or convert triangles into lines?
So there's not even a conceptual way for the VS to provide values to the FS through the intervening shader stages.
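To make the shepherding concrete, here is a minimal pass-through sketch of the whole chain for one value. All variable names are illustrative, the tessellation levels are pinned to 1, and the position math is kept to the bare minimum.

```glsl
// ---- vertex shader ----
#version 400 core
layout(location = 0) in vec4 position;
layout(location = 1) in vec3 color;
out vec3 vsColor;                       // consumed by the TCS
void main() { vsColor = color; gl_Position = position; }

// ---- tessellation control shader ----
#version 400 core
layout(vertices = 3) out;
in vec3 vsColor[];
out vec3 tcsColor[];
void main()
{
    tcsColor[gl_InvocationID] = vsColor[gl_InvocationID];   // copy through
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
    if (gl_InvocationID == 0) {
        gl_TessLevelInner[0] = 1.0;     // no actual subdivision
        gl_TessLevelOuter[0] = 1.0;
        gl_TessLevelOuter[1] = 1.0;
        gl_TessLevelOuter[2] = 1.0;
    }
}

// ---- tessellation evaluation shader ----
#version 400 core
layout(triangles, equal_spacing, ccw) in;
in vec3 tcsColor[];
out vec3 tesColor;
void main()
{
    // Generate the value for the (possibly new) vertex by barycentric
    // interpolation: this is the TES doing its job.
    tesColor = gl_TessCoord.x * tcsColor[0]
             + gl_TessCoord.y * tcsColor[1]
             + gl_TessCoord.z * tcsColor[2];
    gl_Position = gl_TessCoord.x * gl_in[0].gl_Position
                + gl_TessCoord.y * gl_in[1].gl_Position
                + gl_TessCoord.z * gl_in[2].gl_Position;
}

// ---- geometry shader ----
#version 400 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
in vec3 tesColor[];
out vec3 gsColor;                       // finally visible to the FS
void main()
{
    for (int i = 0; i < 3; ++i) {
        gsColor = tesColor[i];          // copy per emitted vertex
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}

// ---- fragment shader ----
#version 400 core
in vec3 gsColor;
out vec4 fragColor;
void main() { fragColor = vec4(gsColor, 1.0); }
```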
As far as I understand, the color of a pixel is determined by the fragment shader. Why do we need a vertex shader then? Is there anything a fragment shader cannot do (or cannot easily do) but a vertex shader can do (easily)?
But I still can't quite understand why it is named a "shader".
Because that's what programs executed as part of the rendering process are called. The RenderMan interface specification defined one of the first programmable rendering pipelines, and it called all of its programmable elements "shaders", even though they didn't all compute colors.
And therefore, "shader" has become the term used for describing any such program.
Vertex shaders transform vertex data, creating a 1:1 mapping from input vertices to output vertices. Fragment shaders operate on fragments. An FS invocation has no control over where it will be executed: fragments are generated wherever the rasterizer says they go, and the FS has no way to affect this.
By contrast, a vertex shader has complete control over where the vertices will go.
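As a minimal illustration of that division of labor (names are illustrative):

```glsl
// The vertex shader decides where each vertex goes...
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 mvp;
void main() { gl_Position = mvp * vec4(position, 1.0); }

// ...while the fragment shader only decides what color a fragment gets;
// the fragment's position was already fixed by the rasterizer.
#version 330 core
out vec4 fragColor;
void main() { fragColor = vec4(1.0, 0.5, 0.0, 1.0); }
```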
In both the OpenGL and Direct3D rendering pipelines, the geometry shader is processed after the vertex shader and before the fragment/pixel shader. Now obviously processing the geometry shader after the fragment/pixel shader makes no sense, but what I'm wondering is why not put it before the vertex shader?
From a software/high-level perspective, at least, it seems to make more sense that way: first you run the geometry shader to create all the vertices you want (and dump any data only relevant to the geometry shader), then you run the vertex shader on all the vertices thus created. There's an obvious drawback in that the vertex shader now has to be run on each of the newly-created vertices, but any logic that needs to be done there would, in the current pipelines, need to be run for each vertex in the geometry shader, presumably; so there's not much of a performance hit there.
I'm assuming, since the geometry shader is in this position in both pipelines, that there's either a hardware reason, or a non-obvious pipeline reason that it makes more sense.
(I am aware that polygon linking needs to take place before running a geometry shader (possibly not if it takes single points as inputs?) but I also know it needs to run after the geometry shader as well, so wouldn't it still make sense to run the vertex shader between those stages?)
It is basically because "geometry shader" was a pretty stupid choice of words on Microsoft's part. It should have been called "primitive shader."
Geometry shaders make the primitive assembly stage programmable, and you cannot assemble primitives before you have an input stream of vertices computed. There is some overlap in functionality since you can take one input primitive type and spit out a completely different type (often requiring the calculation of extra vertices).
These extra emitted vertices do not require a trip backwards in the pipeline to the vertex shader stage - they are completely calculated during an invocation of the geometry shader. This concept should not be too foreign, because tessellation control and evaluation shaders also look very much like vertex shaders in form and function.
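For example, here is a sketch of a geometry shader that expands each input point into a quad. The four emitted vertices are computed entirely inside the invocation; nothing goes back to the vertex shader stage. The uniform "halfSize" is illustrative, and for simplicity the offsets ignore the perspective divide.

```glsl
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform vec2 halfSize; // illustrative: quad half-extent in clip space

void main()
{
    vec4 c = gl_in[0].gl_Position;
    // Emit four brand-new vertices around the input point.
    gl_Position = c + vec4(-halfSize.x, -halfSize.y, 0.0, 0.0); EmitVertex();
    gl_Position = c + vec4( halfSize.x, -halfSize.y, 0.0, 0.0); EmitVertex();
    gl_Position = c + vec4(-halfSize.x,  halfSize.y, 0.0, 0.0); EmitVertex();
    gl_Position = c + vec4( halfSize.x,  halfSize.y, 0.0, 0.0); EmitVertex();
    EndPrimitive();
}
```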
There are a lot of stages of vertex transform, and what we call vertex shaders are just the tip of the iceberg. In a modern application you can expect the output of a vertex shader to go through multiple additional stages before you have a finalized vertex for rasterization and pixel shading (which is also poorly named).
gl_ClipDistance is available as an output variable in all shader stages except the fragment shader. So which shader stage must write it? Is its value taken from the last stage that wrote it?
Also, please explain the purpose of having the value of gl_ClipDistance available in the fragment shader.
As long as you only work with vertex and fragment shaders, you write it in the vertex shader. According to the GLSL spec, geometry and tessellation shaders can write it as well.
The fragment shader can read the value. Based on the way I read the documentation, it would give you the interpolated value of the clip distance for your fragment.
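As a sketch of one possible use, a fragment shader could read the interpolated distance, for example to fade geometry out near a clip plane, assuming the previous stage wrote gl_ClipDistance[0]:

```glsl
#version 400 core
out vec4 fragColor;
void main()
{
    // gl_ClipDistance[0] is the linearly interpolated signed distance
    // written by the last vertex-processing stage; fragments with a
    // negative distance were already clipped away.
    float fade = clamp(gl_ClipDistance[0], 0.0, 1.0);
    fragColor = vec4(1.0, 1.0, 1.0, fade);
}
```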
Considering it is really only useful for clipping... you need to write it in the last stage of vertex processing in your GLSL program. Currently there is only one stage that does not fall under the category of vertex processing, so whatever comes immediately before the fragment shader needs to output this.
If you are using a geometry shader, that would be where you write it. Now, generally in a situation like this you might also write it in the vertex shader that runs before the geometry shader, passing it through. You don't have to do anything like that, but that is typical. Since it is part of gl_PerVertex, it is designed to be passed through multiple vertex processing stages that way.
Name
gl_ClipDistance — provides a forward-compatible mechanism for vertex clipping
Description
[...]
The value of gl_ClipDistance (or the gl_ClipDistance member of the gl_out[] array, in the case of the tessellation control shader) is undefined after the vertex, tessellation control, and tessellation evaluation shading stages if the corresponding shader executable does not write to gl_ClipDistance.
If you do not write to it in the final vertex processing stage, then it becomes undefined immediately before clipping occurs.
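A minimal sketch of writing it in the final vertex-processing stage, here a plain vertex shader. The uniform "clipPlane" is illustrative, and GL_CLIP_DISTANCE0 must be enabled on the API side for the clipping to take effect.

```glsl
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 mvp;
uniform vec4 clipPlane; // illustrative: plane in the same space as "position"

void main()
{
    gl_Position = mvp * vec4(position, 1.0);
    // Signed distance to the plane: anything on the negative side is
    // clipped after primitive assembly.
    gl_ClipDistance[0] = dot(vec4(position, 1.0), clipPlane);
}
```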
I introduced a geometry shader into my OpenGL application. My shaders have quite a few "varying" variables that I pass from the vertex shader to the fragment shader. Now, having introduced the geometry shader, I have to manually pass every varying value through the geometry shader for each vertex. Is there a way to avoid that and do things "automatically"?
No.
As soon as you introduce a geometry shader into your pipeline, if you want to pass variables from the vertex shader to the fragment shader you have to pass them through manually, declaring a geometry shader input that receives the vertex shader's output and a geometry shader output that feeds the fragment shader. I don't know which GLSL version you're using, but you might want to check section 4.3.4 of the GLSL 3.30 spec.
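For what it's worth, the manual pass-through itself is mechanical; here is a sketch with a couple of illustrative variable names:

```glsl
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

// Every former "varying" becomes an array input (one element per input
// vertex) and a matching output, copied before each EmitVertex().
in vec2 vsTexCoord[];
in vec3 vsNormal[];
out vec2 gsTexCoord;
out vec3 gsNormal;

void main()
{
    for (int i = 0; i < 3; ++i) {
        gsTexCoord  = vsTexCoord[i];
        gsNormal    = vsNormal[i];
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
```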
No, because there's no sensible way to do that for anything except a no-op geometry shader, and if your geometry shader isn't doing anything to the geometry, why is it enabled in the first place?
In general, a geometry shader takes a number of vertices as input and produces a (different) number of vertices as output. So which input vertices should be mapped to which output vertices "automatically"?