Why is the geometry shader processed after the vertex shader?

In both the OpenGL and Direct3D rendering pipelines, the geometry shader is processed after the vertex shader and before the fragment/pixel shader. Now obviously processing the geometry shader after the fragment/pixel shader makes no sense, but what I'm wondering is why not put it before the vertex shader?
From a software/high-level perspective, at least, it seems to make more sense that way: first you run the geometry shader to create all the vertices you want (and dump any data only relevant to the geometry shader), then you run the vertex shader on all the vertices thus created. There's an obvious drawback in that the vertex shader now has to be run on each of the newly-created vertices, but any logic that needs to be done there would, in the current pipelines, need to be run for each vertex in the geometry shader, presumably; so there's not much of a performance hit there.
I'm assuming, since the geometry shader is in this position in both pipelines, that there's either a hardware reason, or a non-obvious pipeline reason that it makes more sense.
(I am aware that polygon linking needs to take place before running a geometry shader (possibly not if it takes single points as inputs?) but I also know it needs to run after the geometry shader as well, so wouldn't it still make sense to run the vertex shader between those stages?)

It is basically because "geometry shader" was a pretty stupid choice of words on Microsoft's part. It should have been called "primitive shader."
Geometry shaders make the primitive assembly stage programmable, and you cannot assemble primitives before you have an input stream of vertices computed. There is some overlap in functionality since you can take one input primitive type and spit out a completely different type (often requiring the calculation of extra vertices).
These extra emitted vertices do not require a trip backwards in the pipeline to the vertex shader stage - they are completely calculated during an invocation of the geometry shader. This concept should not be too foreign, because tessellation control and evaluation shaders also look very much like vertex shaders in form and function.
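As a minimal sketch of that idea (the offsets here are purely illustrative), a geometry shader that takes single points as input and expands each one into a small triangle computes all of the extra vertices inside its own invocation:

    #version 150
    // Illustrative sketch: expand each input point into a small triangle.
    // The extra positions are computed right here in the GS invocation;
    // they never pass through the vertex shader.
    layout(points) in;
    layout(triangle_strip, max_vertices = 3) out;

    void main()
    {
        vec4 p = gl_in[0].gl_Position;
        gl_Position = p + vec4(-0.01, -0.01, 0.0, 0.0); EmitVertex();
        gl_Position = p + vec4( 0.01, -0.01, 0.0, 0.0); EmitVertex();
        gl_Position = p + vec4( 0.00,  0.02, 0.0, 0.0); EmitVertex();
        EndPrimitive();
    }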
There are a lot of stages of vertex transform, and what we call vertex shaders are just the tip of the iceberg. In a modern application you can expect the output of a vertex shader to go through multiple additional stages before you have a finalized vertex for rasterization and pixel shading (which is also poorly named).

Related

OpenGL Lighting Shader

I can't understand the concept of smaller shaders in OpenGL. How does it work? For example: do I need to create one shader for positioning an object in space and then another shader for lighting, or what? Could someone explain this to me? Thanks in advance.
This is a very complex topic, especially since your question isn't very specific. First, there are various shader stages (vertex shader, pixel shader, and so on). A shader program consists of different shader stages, at least a vertex and a pixel shader (except for compute shader programs, which each consist of a single compute shader). The vertex shader calculates the position of the points on screen, so this is where objects are moved. The pixel shader calculates the color of each pixel that is covered by the geometry your vertex shader produced. Now, in terms of lighting, there are different ways of doing it:
Forward Shading
This is the straightforward way, where you simply calculate the lighting in the pixel shader of the same shader program that moves the objects. This is the oldest way of calculating lighting, and the easiest one. However, its abilities are very limited.
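As a minimal sketch (the variable names are illustrative, and it assumes the vertex shader of the same program already passes down an interpolated normal and world-space position), the lighting part of a forward-shading pixel shader could look like this:

    #version 330 core
    // Forward shading sketch: light the fragment directly, in the same
    // program that transformed the geometry.
    in vec3 vNormal;        // interpolated from the vertex shader
    in vec3 vWorldPos;

    uniform vec3 uLightPos;     // illustrative single point light
    uniform vec3 uLightColor;
    uniform vec3 uBaseColor;

    out vec4 fragColor;

    void main()
    {
        vec3 n = normalize(vNormal);
        vec3 l = normalize(uLightPos - vWorldPos);
        float diffuse = max(dot(n, l), 0.0);
        fragColor = vec4(uBaseColor * uLightColor * diffuse, 1.0);
    }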
Deferred Shading
For a long time, this has been the go-to variant in games. Here, you have one shader program (vertex + pixel shader) that renders the geometry into one or more textures (so it moves the objects, but instead of saving the lit color it saves things like the base color and surface normals into the textures), and then another shader program that renders a screen-filling quad for each light you want to render. The pixel shader of this second program reads the information previously written to the textures by the first program and uses it to render the lit objects into another texture (which is then the final image). In contrast to forward shading, this allows (in theory) any number of lights in the scene, and it makes shadow maps easier to use.
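A rough sketch of what the first (geometry) pass might write (the names and G-buffer layout are illustrative; the exact layout is up to you), using multiple render targets instead of a lit color:

    #version 330 core
    // Deferred shading, first pass: write surface data into the G-buffer
    // textures attached to the framebuffer; no lighting happens here.
    in vec3 vNormal;
    in vec3 vWorldPos;

    uniform vec3 uBaseColor;

    layout(location = 0) out vec4 gBaseColor;   // attachment 0: albedo
    layout(location = 1) out vec4 gNormal;      // attachment 1: surface normal
    layout(location = 2) out vec4 gPosition;    // attachment 2: world position

    void main()
    {
        gBaseColor = vec4(uBaseColor, 1.0);
        gNormal    = vec4(normalize(vNormal), 0.0);
        gPosition  = vec4(vWorldPos, 1.0);
    }

The second program then renders one quad per light and has its pixel shader sample these textures to produce the lit result.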
Tiled/Clustered Shading
This is a rather new and very complex way of calculating lighting that can be built on top of deferred or forward shading. It basically uses compute shaders to build an acceleration structure on the GPU, which is then used to draw huge numbers of lights very quickly. This allows thousands of lights to be rendered in a scene in real time, but using shadow maps for these lights is very hard, and the algorithm is far more complex than the previous ones.
Writing smaller shaders means separating some of your shader functionality into other files. If you are writing a big shader which contains lighting algorithms, antialiasing algorithms, and other shader computations, you can split them into smaller shader files (light.glsl, fxaa.glsl, and so on) and link those files into your main shader file (the shader file which contains the void main() function), since in OpenGL a vertex array can only have one shader program (a composition of vertex shader, fragment shader, geometry shader, etc.) bound during the rendering pipeline.
How you split shaders into smaller files also depends on your rendering algorithm (forward rendering, deferred rendering, or forward+ rendering).
It's important to note that writing a lot of shaders will increase shader compilation time, and writing a big shader with a lot of uniforms will also slow things down...
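As a rough sketch of how such a split can look in practice (file and function names are made up for illustration), you can compile light.glsl as a separate fragment-shader object with no main(), declare its function in the main fragment shader, and let the program link step resolve the call once both shader objects are attached:

    // light.glsl -- compiled as its own fragment shader object; no main() here
    #version 330 core
    vec3 applyLight(vec3 baseColor, vec3 normal, vec3 lightDir)
    {
        return baseColor * max(dot(normalize(normal), normalize(lightDir)), 0.0);
    }

    // main fragment shader -- only declares the function; the linker resolves
    // it from light.glsl when both objects are attached to the same program
    #version 330 core
    in vec3 vNormal;
    uniform vec3 uLightDir;
    uniform vec3 uBaseColor;
    out vec4 fragColor;

    vec3 applyLight(vec3 baseColor, vec3 normal, vec3 lightDir);

    void main()
    {
        fragColor = vec4(applyLight(uBaseColor, vNormal, uLightDir), 1.0);
    }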

OpenGL, C++, In Out variables

I need to pass some variables directly from the vertex shader to the fragment shader, but my pipeline also contains a TCS, a TES, and a GS that simply do passthrough stuff.
I already know that the fragment shader expects to receive values for its "in" variables from the last linked shader of the program, in my case the geometry shader, but I don't want to do MVP and normal calculations there.
How can I output a variable directly to the fragment shader from vertex shader? (skipping the rest of the shaders in the middle)
Is that even possible?
How can i output a variable directly to the fragment shader from vertex shader? (skipping the rest of the shaders in the middle)
You don't.
Each stage can only access values provided by the previous active stage in the pipeline. If you want to communicate from the VS to the FS, then every stage between them must shepherd those values through themselves. After all:
my pipeline also contains a TCS a TES
If you're doing tessellation, then how exactly could a VS directly communicate with an FS? The fragment shader's inputs are per-fragment values generated by rasterizing the primitive being rendered. But since tessellation is active, the primitives the VS is operating on don't exist anymore; only the post-tessellation primitives exist.
So if the VS's primitives are all gone, how do the tessellated primitives get values? For a vertex that didn't exist until the tessellator activated, from where would it get a vertex value to be rasterized and interpolated across the generated primitive?
The job of figuring that out is given to the TES. It will use the values output from the VS (sent through the TCS if present) and interpolate/generate them in accord with the tessellation interpolation scheme it is coded with. That is what the TES is for.
The GS is very much the same way. A geometry shader can take one primitive and turn it into twenty, or discard entire primitives. How could the VS possibly communicate vertex information to a fragment shader through a GS which may just drop that primitive on the floor, create 30 separate ones, or convert triangles into lines?
So there's not even a conceptual way for the VS to provide values to the FS around the intervening shader stages.
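For what it's worth, here is a rough sketch (illustrative names, hard-coded tessellation levels) of what shepherding a per-vertex color from the VS all the way to the FS looks like when a TCS, TES, and GS sit in between:

    // vertex shader
    #version 410 core
    layout(location = 0) in vec3 aPosition;
    layout(location = 1) in vec3 aColor;
    out vec3 vColor;
    void main() { vColor = aColor; gl_Position = vec4(aPosition, 1.0); }

    // tessellation control shader: copy the value through per control point
    #version 410 core
    layout(vertices = 3) out;
    in vec3 vColor[];
    out vec3 tcColor[];
    void main()
    {
        tcColor[gl_InvocationID] = vColor[gl_InvocationID];
        gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
        if (gl_InvocationID == 0) {
            gl_TessLevelInner[0] = 4.0;
            gl_TessLevelOuter[0] = 4.0;
            gl_TessLevelOuter[1] = 4.0;
            gl_TessLevelOuter[2] = 4.0;
        }
    }

    // tessellation evaluation shader: generate a value for each new vertex
    // by interpolating the values the VS produced for the patch corners
    #version 410 core
    layout(triangles, equal_spacing, ccw) in;
    in vec3 tcColor[];
    out vec3 teColor;
    void main()
    {
        teColor = gl_TessCoord.x * tcColor[0]
                + gl_TessCoord.y * tcColor[1]
                + gl_TessCoord.z * tcColor[2];
        gl_Position = gl_TessCoord.x * gl_in[0].gl_Position
                    + gl_TessCoord.y * gl_in[1].gl_Position
                    + gl_TessCoord.z * gl_in[2].gl_Position;
    }

    // geometry shader: emit the value again with every vertex it produces
    #version 410 core
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 3) out;
    in vec3 teColor[];
    out vec3 gColor;    // this is what the fragment shader finally receives
    void main()
    {
        for (int i = 0; i < 3; ++i) {
            gColor = teColor[i];
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }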

A few questions about shaders

I am using OpenGL shaders.
Does the number of uniforms affect shader performance? If I pass 5 uniforms or 50, will it matter?
Does each shader have its own area that it works on, or can each shader draw at any point of my application?
I often create a vertex shader just to pass attributes to the fragment shader. What is the benefit of the vertex shader, and why not just pass the attributes straight to the fragment shader?
I would guess the uniform count doesn't matter (and if it does, only very slightly). But I don't have any evidence for that, so I might be wrong. This is almost certainly driver-specific.
A shader does not draw anything. A shader just processes data. In the pipeline, the rasterizer produces the fragments that are covered by your shape. And these are the fragments that you can potentially draw to. The fragment shader calculates the color (and possibly depth) and the rest of the pipeline decides what to do with the result (either updating the frame buffer, blending, or discarding it altogether). Each draw call can potentially produce a framebuffer update everywhere, not just at some specific locations.
This is perfectly fine if the application requires it. The main difference is that vertex shaders process vertices and fragment shaders process fragments. Usually, there are many more fragments than vertices, so the fragment shader is called more often than the vertex shader. Therefore, you should do as much work in the vertex shader as possible. Of course, there are things that you just cannot calculate in a vertex shader.
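As a small illustration of that last point (the names are made up), a per-vertex diffuse term computed once in the vertex shader is simply interpolated and reused by the far more frequently invoked fragment shader:

    // vertex shader: do the lighting math once per vertex
    #version 330 core
    layout(location = 0) in vec3 aPosition;
    layout(location = 1) in vec3 aNormal;
    uniform mat4 uMVP;
    uniform vec3 uLightDir;
    out float vDiffuse;
    void main()
    {
        vDiffuse = max(dot(normalize(aNormal), normalize(uLightDir)), 0.0);
        gl_Position = uMVP * vec4(aPosition, 1.0);
    }

    // fragment shader: runs for every covered fragment, but only reuses
    // the value already interpolated from the vertex shader outputs
    #version 330 core
    in float vDiffuse;
    uniform vec3 uBaseColor;
    out vec4 fragColor;
    void main() { fragColor = vec4(uBaseColor * vDiffuse, 1.0); }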

Why do we need a vertex shader in OpenGL?

As far as I understand, the color of a pixel is determined by the fragment shader. Why do we need a vertex shader then? Is there anything a fragment shader cannot do (or cannot easily do) but a vertex shader can do (easily)?
But I still can't quite understand why it is named a "shader".
Because that's what programs executed as part of the rendering process are called. The RenderMan Interface Specification described one of the first programmable rendering processes, and it called all of its programmable elements "shaders", even though they didn't all compute colors.
And therefore, "shader" has become the term used for describing any such program.
Vertex shaders convert vertex data, creating a 1:1 mapping from input vertices to output vertices. Fragment shaders operate on fragments. An FS invocation has no control over where it will be executed. They are generated in the location that the rasterizer says they go, and the FS has no way to affect this.
By contrast, a vertex shader has complete control over where the vertices will go.
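A minimal sketch of that control (the uniform name is illustrative): the vertex shader alone decides where each input vertex ends up on screen.

    #version 330 core
    // The vertex shader has full control over the output position;
    // the fragment shader later has no say in where fragments are generated.
    layout(location = 0) in vec3 aPosition;
    uniform mat4 uModelViewProjection;
    void main()
    {
        gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
    }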

which shader stage must write gl_ClipDistance[x]

It is available as an output variable in all shaders except the fragment shader. So which shader stage must write it? Is its value taken from the last shader stage that wrote it?
Also, please explain the purpose of having the value of gl_ClipDistance available in the fragment shader.
As long as you only work with vertex and fragment shaders, you write it in the vertex shader. According to the GLSL spec, geometry and tessellation shaders can write it as well.
The fragment shader can read the value. Based on the way I read the documentation, it would give you the interpolated value of the clip distance for your fragment.
Considering it is really only useful for clipping... you need to write it in the last stage of vertex processing in your GLSL program. Currently there is only one stage that does not fall under the category of vertex processing, so whatever comes immediately before the fragment shader needs to output this.
If you are using a geometry shader, that would be where you write it. Now, generally in a situation like this you might also write it in the vertex shader that runs before the geometry shader, passing it through. You don't have to do anything like that, but that is typical. Since it is part of gl_PerVertex, it is designed to be passed through multiple vertex processing stages that way.
Name
gl_ClipDistance — provides a forward-compatible mechanism for vertex clipping
Description
[...]
The value of gl_ClipDistance (or the gl_ClipDistance member of the gl_out[] array, in the case of the tessellation control shader) is undefined after the vertex, tessellation control, and tessellation evaluation shading stages if the corresponding shader executable does not write to gl_ClipDistance.
If you do not write to it in the final vertex processing stage, then it becomes undefined immediately before clipping occurs.
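As a hedged example (the plane uniform is illustrative, and GL_CLIP_DISTANCE0 has to be enabled on the API side), writing a single clip distance in a plain vertex + fragment pipeline looks like this:

    #version 330 core
    // Clip against one user-defined plane by writing a clip distance in the
    // last (and here only) vertex processing stage, the vertex shader.
    layout(location = 0) in vec3 aPosition;
    uniform mat4 uMVP;
    uniform vec4 uClipPlane;    // plane equation, illustrative

    void main()
    {
        gl_Position = uMVP * vec4(aPosition, 1.0);
        // a negative distance means this vertex lies on the clipped side;
        // indexing with a constant implicitly sizes gl_ClipDistance
        gl_ClipDistance[0] = dot(uClipPlane, vec4(aPosition, 1.0));
    }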