As far as I understand, the color of a pixel is determined by the fragment shader. Why do we need a vertex shader then? Is there anything a fragment shader cannot do (or cannot easily do) but a vertex shader can do (easily)?
But I still can't quite understand why it is named a "shader".
Because that's what programs executed as part of the rendering process are called. The RenderMan Interface Specification was one of the first programmable rendering processes, and they called all of their programmable elements "shaders", even though they didn't all compute colors.
And therefore, "shader" has become the term used for describing any such program.
Vertex shaders convert vertex data, creating a 1:1 mapping from input vertices to output vertices. Fragment shaders operate on fragments. An FS invocation has no control over where it will be executed. They are generated in the location that the rasterizer says they go, and the FS has no way to affect this.
By contrast, a vertex shader has complete control over where the vertices will go.
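For a concrete picture of that split, here is a minimal GLSL pair; the attribute and uniform names (`aPosition`, `uMVP`, `uColor`) are just placeholders:

```glsl
// Vertex shader: decides WHERE each vertex goes (1:1, input vertex -> output vertex).
#version 330 core
layout(location = 0) in vec3 aPosition;   // illustrative attribute name
uniform mat4 uMVP;                        // illustrative uniform name
void main()
{
    gl_Position = uMVP * vec4(aPosition, 1.0); // the VS controls the output position
}
```

```glsl
// Fragment shader: decides what COLOR a fragment gets; it cannot move the fragment.
#version 330 core
uniform vec4 uColor;                      // illustrative uniform name
out vec4 fragColor;
void main()
{
    fragColor = uColor;                   // color only; the rasterizer chose the location
}
```

The rasterizer sits between the two: it takes the positions the vertex shader produced and decides which fragments the fragment shader will be asked to color.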
I need to pass some variables directly from the vertex shader to the fragment shader, but my pipeline also contains a TCS, a TES, and a GS that simply do pass-through stuff.
I already know that the fragment shader expects to receive values for its "in" variables from the last linked shader of the program, in my case the Geometry Shader, but I don't want to do MVP and normal calculations there.
How can I output a variable directly to the fragment shader from the vertex shader? (skipping the rest of the shaders in the middle)
Is that even possible?
How can I output a variable directly to the fragment shader from the vertex shader? (skipping the rest of the shaders in the middle)
You don't.
Each stage can only access values provided by the previous active stage in the pipeline. If you want to communicate from the VS to the FS, then every stage between them must shepherd those values through themselves. After all:
my pipeline also contains a TCS, a TES
If you're doing tessellation, then how exactly could a VS directly communicate with an FS? The fragment shader's inputs are per-fragment values generated by doing rasterization on the primitive being rendered. But since tessellation is active, the primitives the VS is operating on don't exist anymore; only the post-tessellation primitives exist.
So if the VS's primitives are all gone, how do the tessellated primitives get values? For a vertex that didn't exist until the tessellator activated, from where would it get a vertex value to be rasterized and interpolated across the generated primitive?
The job of figuring that out is given to the TES. It will use the values output from the VS (sent through the TCS if present) and interpolate/generate them in accord with the tessellation interpolation scheme it is coded with. That is what the TES is for.
The GS is very much the same way. A geometry shader can take one primitive and turn it into twenty. It can discard entire primitives. How could the VS possibly communicate vertex information to a fragment shader through a GS which may just drop that primitive on the floor or create 30 separate ones? Or convert triangles into lines?
So there's not even a conceptual way for the VS to provide values to the FS through the other shader stages.
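To illustrate the "shepherding" described above, here is a hedged sketch of a pass-through geometry shader; `vNormal`/`gNormal` are made-up names, and the same re-declaration and forwarding would also be needed in a TCS/TES if those stages were present:

```glsl
// Geometry shader sketch: a value the VS wrote must be re-declared and forwarded
// here (and in the TCS/TES, if present) before the FS can see it.
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in  vec3 vNormal[];   // illustrative name; per-vertex value received from the previous stage
out vec3 gNormal;     // what the fragment shader will actually read

void main()
{
    for (int i = 0; i < 3; ++i)
    {
        gl_Position = gl_in[i].gl_Position;
        gNormal     = vNormal[i];   // explicitly shepherd the value through
        EmitVertex();
    }
    EndPrimitive();
}
```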
I am using OpenGL shaders.
Does the number of uniforms affect shader performance? If I pass 5 uniforms or 50, will it matter?
Does each shader have its own area that it works on, or can each shader draw at any point of my application?
I often create a vertex shader just to pass attributes to the fragment shader. What is the benefit of the vertex shader, and why not just pass attributes to the fragment shader directly?
I would guess it doesn't (and if it does, only a very minor one). But I don't have any evidence for that, so I might be wrong. This is almost certainly driver-specific.
A shader does not draw anything. A shader just processes data. In the pipeline, the rasterizer produces the fragments that are covered by your shape. And these are the fragments that you can potentially draw to. The fragment shader calculates the color (and possibly depth) and the rest of the pipeline decides what to do with the result (either updating the frame buffer, blending, or discarding it altogether). Each draw call can potentially produce a framebuffer update everywhere, not just at some specific locations.
This is perfectly fine if the application requires it. The main difference is that vertex shaders process vertices and fragment shaders process fragments. Usually, there are many more fragments than vertices, so the fragment shader is called more often than the vertex shader. Therefore, you should do as much work in the vertex shader as possible. Of course, there are things that you just cannot calculate in a vertex shader.
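As a sketch of what "do as much work in the vertex shader as possible" can look like, here is a hedged diffuse-lighting example; names like `uNormalMatrix` and `uLightDir` are placeholders, not anything from the question:

```glsl
// Vertex shader: runs once per vertex, so per-vertex work is cheap.
#version 330 core
layout(location = 0) in vec3 aPosition;   // illustrative names
layout(location = 1) in vec3 aNormal;
uniform mat4 uMVP;
uniform mat3 uNormalMatrix;
out vec3 vNormal;                          // interpolated across the triangle for free

void main()
{
    vNormal     = uNormalMatrix * aNormal; // computed ~3 times per triangle...
    gl_Position = uMVP * vec4(aPosition, 1.0);
}
```

```glsl
// Fragment shader: runs once per covered fragment, typically far more often,
// so only do here what genuinely varies per fragment.
#version 330 core
in  vec3 vNormal;
uniform vec3 uLightDir;                    // illustrative name
out vec4 fragColor;

void main()
{
    float diffuse = max(dot(normalize(vNormal), normalize(uLightDir)), 0.0);
    fragColor = vec4(vec3(diffuse), 1.0);  // ...consumed thousands of times via interpolation
}
```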
In both the OpenGL and Direct3D rendering pipelines, the geometry shader is processed after the vertex shader and before the fragment/pixel shader. Now obviously processing the geometry shader after the fragment/pixel shader makes no sense, but what I'm wondering is why not put it before the vertex shader?
From a software/high-level perspective, at least, it seems to make more sense that way: first you run the geometry shader to create all the vertices you want (and dump any data only relevant to the geometry shader), then you run the vertex shader on all the vertices thus created. There's an obvious drawback in that the vertex shader now has to be run on each of the newly-created vertices, but any logic that needs to be done there would, in the current pipelines, need to be run for each vertex in the geometry shader, presumably; so there's not much of a performance hit there.
I'm assuming, since the geometry shader is in this position in both pipelines, that there's either a hardware reason, or a non-obvious pipeline reason that it makes more sense.
(I am aware that polygon linking needs to take place before running a geometry shader (possibly not if it takes single points as inputs?) but I also know it needs to run after the geometry shader as well, so wouldn't it still make sense to run the vertex shader between those stages?)
It is basically because "geometry shader" was a pretty stupid choice of words on Microsoft's part. It should have been called "primitive shader."
Geometry shaders make the primitive assembly stage programmable, and you cannot assemble primitives before you have an input stream of vertices computed. There is some overlap in functionality since you can take one input primitive type and spit out a completely different type (often requiring the calculation of extra vertices).
These extra emitted vertices do not require a trip backwards in the pipeline to the vertex shader stage - they are completely calculated during an invocation of the geometry shader. This concept should not be too foreign, because tessellation control and evaluation shaders also look very much like vertex shaders in form and function.
There are a lot of stages of vertex transform, and what we call vertex shaders are just the tip of the iceberg. In a modern application you can expect the output of a vertex shader to go through multiple additional stages before you have a finalized vertex for rasterization and pixel shading (which is also poorly named).
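As a small illustrative sketch of the point about extra emitted vertices (uHalfSize is a made-up uniform): a geometry shader can expand one input point into a quad, and the three extra vertices are computed entirely inside the geometry shader invocation, with no trip back to the vertex shader:

```glsl
// Geometry shader sketch: each input point is expanded into a small clip-space quad.
// The four emitted vertices never pass through the vertex shader.
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform vec2 uHalfSize;   // half-extent of the quad in clip space (illustrative)

void main()
{
    vec4 c = gl_in[0].gl_Position;
    gl_Position = c + vec4(-uHalfSize.x, -uHalfSize.y, 0.0, 0.0); EmitVertex();
    gl_Position = c + vec4( uHalfSize.x, -uHalfSize.y, 0.0, 0.0); EmitVertex();
    gl_Position = c + vec4(-uHalfSize.x,  uHalfSize.y, 0.0, 0.0); EmitVertex();
    gl_Position = c + vec4( uHalfSize.x,  uHalfSize.y, 0.0, 0.0); EmitVertex();
    EndPrimitive();
}
```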
gl_ClipDistance is available as an output variable in all shaders except the fragment shader. So which shader stage must write it? Is its value taken from the last shader stage that wrote it?
Also, please explain: what is the purpose of having the value of gl_ClipDistance in the fragment shader?
As long as you only work with vertex and fragment shaders, you write it in the vertex shader. According to the GLSL spec, geometry and tessellation shaders can write it as well.
The fragment shader can read the value. Based on the way I read the documentation, it would give you the interpolated value of the clip distance for your fragment.
Considering it is really only useful for clipping... you need to write it in the last stage of vertex processing in your GLSL program. Currently there is only one stage that does not fall under the category of vertex processing, so whatever comes immediately before the fragment shader needs to output this.
If you are using a geometry shader, that would be where you write it. Now, generally in a situation like this you might also write it in the vertex shader that runs before the geometry shader, passing it through. You don't have to do anything like that, but that is typical. Since it is part of gl_PerVertex, it is designed to be passed through multiple vertex processing stages that way.
Name
gl_ClipDistance — provides a forward-compatible mechanism for vertex clipping
Description
[...]
The value of gl_ClipDistance (or the gl_ClipDistance member of the gl_out[] array, in the case of the tessellation control shader) is undefined after the vertex, tessellation control, and tessellation evaluation shading stages if the corresponding shader executable does not write to gl_ClipDistance.
If you do not write to it in the final vertex processing stage, then it becomes undefined immediately before clipping occurs.
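As a hedged sketch of "write it in the last vertex processing stage", assuming a pipeline with only a vertex and fragment shader: the vertex shader writes a signed distance to a user-defined plane. The uniform names are illustrative, and the host side must also enable GL_CLIP_DISTANCE0 for the value to have any effect.

```glsl
// Vertex shader sketch (no GS/tessellation, so this is the last vertex-processing stage).
#version 330 core
layout(location = 0) in vec3 aPosition;
uniform mat4 uModel;       // illustrative uniform names
uniform mat4 uViewProj;
uniform vec4 uClipPlane;   // plane as (A, B, C, D): Ax + By + Cz + D = 0

void main()
{
    vec4 worldPos      = uModel * vec4(aPosition, 1.0);
    gl_ClipDistance[0] = dot(worldPos, uClipPlane); // negative => vertex is on the clipped side
    gl_Position        = uViewProj * worldPos;
}
```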
Taking the standard OpenGL 4.0+ functions & specifications into consideration, I've seen that geometries and shapes can be created in either of two ways:
making use of VAOs & VBOs.
using shader programs.
Which one is the standard way of creating shapes? Are they consistent with each other, or are they two different ways of creating geometry and shapes?
Geometry is loaded into the GPU with VAOs & VBOs.
Geometry shaders produce new geometry based on the uploaded geometry. Use them to make special effects like particles or shadows (shadow volumes) in a more efficient way.
Tessellation shaders serve to subdivide geometry for effects like displacement mapping.
I strongly (like really strongly) recommend reading this: http://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-graphics-pipeline-2011-index/
VAOs and VBOs are about what geometry to draw (specifying per-vertex data). Shader programs are about how to draw it (which program gets applied to each provided vertex, each fragment, and so on).
Let's lay out the full facts.
Shaders need input. Without input that changes, every shader invocation will produce exactly the same values. That's how shaders work. When you issue a draw call, a number of shader invocations are launched. The only variables that will change from invocation to invocation within this draw call are in variables. So unless you use some sort of input, every shader will produce the same outputs.
However, that doesn't mean you absolutely need a VAO that actually contains things. It is perfectly legal (though there are some drivers that don't support it) to render with a VAO that doesn't have any attributes enabled (though you have to use array rendering, not indexed rendering). In which case, all user-defined inputs to the vertex shader (if any) will be filled in with context state, which will be constant.
The vertex shader does have some other, built-in per-vertex inputs generated by the system. Namely gl_VertexID. This is the index used by OpenGL to uniquely identify this particular vertex. It will be different for every vertex.
So you could, for example, fetch geometry data yourself based on this index through uniform buffers, buffer textures, or some other mechanism. Or you can procedurally generate vertex data based on the index. Or something else. You could pass that data along to tessellation shaders for them to tessellate the generated data. Or to geometry shaders to do whatever it is you want with those. However you want to turn that index into real data is up to you.
Here's an example from my tutorial series that generates vertex data from nothing more than an index.
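That tutorial code is not reproduced here, but a common minimal sketch of the same idea, a full-screen triangle generated from nothing but gl_VertexID and an attribute-less VAO, looks like this:

```glsl
// Attribute-less vertex shader sketch: the only per-vertex input is gl_VertexID.
// Drawn with glDrawArrays(GL_TRIANGLES, 0, 3), it produces one oversized triangle
// that covers the whole screen.
#version 330 core
void main()
{
    // Map IDs 0, 1, 2 to (-1,-1), (3,-1), (-1,3) in clip space.
    vec2 pos = vec2(float((gl_VertexID & 1) << 2) - 1.0,
                    float((gl_VertexID & 2) << 1) - 1.0);
    gl_Position = vec4(pos, 0.0, 1.0);
}
```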
I've seen that geometries and shapes can be created in either of two ways:
Not either. In modern OpenGL 4 you need both data and programs.
VBOs and VAOs do contain the raw geometry data. Shaders are the programs (usually executed on the GPU) that turn the raw data into pixels on the screen.
Vertex shaders can be used to displace vertices, or to generate them from a built-in formula and the vertex index, which is available as a built-in attribute in later OpenGL versions.
The difference between vertex and geometry shaders is that a vertex shader is a 1:1 mapping, while a geometry shader can create more vertices. This can be utilized in automatic level-of-detail generation, e.g. for NURBS or Perlin-noise-based terrains.
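As a hedged sketch of vertex displacement (the uniform names and the height-map lookup are illustrative assumptions, not taken from the answer above):

```glsl
// Vertex shader displacement sketch: push each vertex along its normal by a
// height sampled from a texture.
#version 330 core
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;
layout(location = 2) in vec2 aTexCoord;
uniform sampler2D uHeightMap;   // illustrative uniform names
uniform mat4  uMVP;
uniform float uDisplace;

void main()
{
    float h   = textureLod(uHeightMap, aTexCoord, 0.0).r;  // vertex texture fetch
    vec3  pos = aPosition + aNormal * h * uDisplace;        // displace along the normal
    gl_Position = uMVP * vec4(pos, 1.0);
}
```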