How do vertex and fragment shaders communicate in OpenGL?

I really do not understand how the fragment shader works.
I know that
the vertex shader runs once per vertex
the fragment shader runs once per fragment
Since the fragment shader works per fragment rather than per vertex, how can the vertex shader send data to it? The number of vertices and the number of fragments are not equal.
How is it decided which fragment belongs to which vertex?

To make sense of this, you'll need to consider the whole render pipeline. The outputs of the vertex shader (besides the special output gl_Position) are passed along as "associated data" of the vertex to the next stages in the pipeline.
While the vertex shader works on a single vertex at a time, not caring about primitives at all, further stages of the pipeline do take the primitive type (and the vertex connectivity info) into account. That's what is typically called "primitive assembly". Now, we still have the single vertices with the associated data produced by the VS, but we also know which vertices are grouped together to define a basic primitive like a point (1 vertex), a line (2 vertices) or a triangle (3 vertices).
During rasterization, fragments are generated for every pixel location in the output pixel raster which belongs to the primitive. In doing so, the associated data of the vertices defining the primitive can be interpolated across the whole primitive. For a line, this is rather simple: a linear interpolation is done. Let's call the endpoints A and B, each with some associated output vector v, so that we have v_A and v_B. Across the line, we get the interpolated value v(x) = (1-x) * v_A + x * v_B, where x ranges from 0 (at point A) to 1 (at point B). For a triangle, barycentric interpolation between the data of all 3 vertices is used. So while there is no 1:1 mapping between vertices and fragments, the outputs of the VS still define the values of the corresponding inputs of the FS, just not in a direct way, but indirectly through the interpolation across the primitive type used.
The formulas I have given so far are a bit simplified. Actually, by default, a perspective correction is applied, effectively modifying the formula in such a way that the distortion effects of the perspective are taken into account. This simply means that the interpolation should act as if it were applied linearly in object space (before the distortion of the projection was applied). For example, if you have a perspective projection and some primitive which is not parallel to the image plane, moving 1 pixel to the right in screen space means moving a variable distance on the real object, depending on the distance of the actual point to the camera plane.
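For the line example above, the perspective-correct interpolation (this is the form used in the GL specification) divides each attribute by the clip-space w of its vertex, interpolates linearly in screen space, and divides the result by the interpolated 1/w. With w_A and w_B denoting the clip-space w coordinates of the endpoints:
v(x) = ( (1-x) * v_A / w_A + x * v_B / w_B ) / ( (1-x) / w_A + x / w_B )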
You can disable the perspective correction by using the noperspective qualifier for the in/out variables in GLSL. Then, the linear/barycentric interpolation is used as I described it.
You can also use the flat qualifier, which will disable the interpolation entirely. In that case, the value of just one vertex (the so-called "provoking vertex") is used for all fragments of the whole primitive. Integer data can never be automatically interpolated by the GL and has to be qualified as flat when sent to the fragment shader.
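As a minimal GLSL sketch of these qualifiers (the attribute, uniform and varying names are purely illustrative), the vertex shader outputs and the matching fragment shader inputs could look like this:

// vertex shader
#version 330 core
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aColor;
layout(location = 2) in int  aMaterialId;

uniform mat4 uMVP;

out vec3 vColor;                 // default: perspective-correct interpolation
noperspective out vec3 vColorNp; // linear interpolation in screen space
flat out int vMaterialId;        // integers must be flat; the provoking vertex wins

void main()
{
    vColor      = aColor;
    vColorNp    = aColor;
    vMaterialId = aMaterialId;
    gl_Position = uMVP * vec4(aPosition, 1.0);
}

// fragment shader (the in declarations must repeat the same qualifiers)
#version 330 core
in vec3 vColor;
noperspective in vec3 vColorNp;
flat in int vMaterialId;

out vec4 fragColor;

void main()
{
    fragColor = vec4(vMaterialId == 0 ? vColor : vColorNp, 1.0);
}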

The answer is that they don't -- at least not directly. There's an additional thing called "the rasterizer" that sits between the vertex processor and the fragment processor in the pipeline. The rasterizer is responsible for collecting the vertices that come out of the vertex shader, reassembling them into primitives (usually triangles), breaking up those triangles into "rasters" of (partially) covered pixels, and sending these fragments to the fragment shader.
This is a (mostly) fixed-function piece of hardware that you don't program directly. There are some configuration tweaks you can do that affect what it treats as a primitive and what it produces as fragments, but for the most part it's just there between the vertex shader and fragment shader, doing its thing.

Related

Patch and Tessellating primitive in OpenGL [duplicate]

When not using tessellation shaders, you can pass a primitive type (GL_TRIANGLES, GL_TRIANGLE_STRIP, etc.) to let OpenGL know how the vertex stream represents geometry faces.
Using tessellation shaders, GL_PATCHES replaces these primitive type enums. This makes sense in my head when processing patches of size 3 or 4, and setting the corresponding layout in the TES to triangles or quads.
But if I have a patch of size 16 (some tutorials do this), how does the TPG know which 3 or 4 vertices form which faces? I've read several times that the order in your vertex buffer does not matter when GL_PATCHES is used, but surely there must be a point where a specific set of 3 vertices is considered a triangle (to pass to e.g. the geometry shader). How is this decided?
The tessellation hardware doesn't actually use your patch data. The tessellation unit, the part of the pipeline that actually generates triangles (or lines), operates on an abstract patch. If your TES specifies that the abstract patch is a quad, then the tessellator will perform tessellation on a unit quad.
How many patch vertices you have is completely irrelevant to this process.
The TES is a bit like a vertex shader; it gets called once per vertex. But not per vertex of the GL_PATCHES primitive -- once per vertex of the tessellated primitive. So, if you tessellate a quad, and your outer levels are (4, 4, 4, 4) and the inner levels are (4, 4), that will generate 25 vertices. The TES will be called that many times (at least; it can be called multiple times for the same vertex), and it will be told where within the abstract patch that particular vertex is.
The job of the TES is to decide how to use the abstract patch coordinate and the real patch vertex data to generate the actual per-vertex data to be used by the rendering pipeline. For example, if you're creating a Bezier patch, you would have 16 patch vertices. The TES's job would be to take the abstract patch coordinate and use bicubic Bezier interpolation to interpolate the 16 positions on the patch to that particular location in the abstract patch. That becomes the output position. Normals, texture coordinates, and other data can be computed similarly.
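A sketch of such a TES in GLSL might look like the following; the control-point input tcPosition (written by the TCS) and the uniform uMVP are illustrative names, not anything mandated by the API:

#version 400 core

layout(quads, equal_spacing, ccw) in;

in vec3 tcPosition[];               // 16 control points per patch, from the TCS

uniform mat4 uMVP;

// cubic Bernstein basis functions evaluated at t
vec4 bernstein(float t)
{
    float s = 1.0 - t;
    return vec4(s * s * s, 3.0 * s * s * t, 3.0 * s * t * t, t * t * t);
}

void main()
{
    // gl_TessCoord.xy is where this generated vertex sits inside the
    // abstract quad patch, in [0,1] x [0,1]
    vec4 bu = bernstein(gl_TessCoord.x);
    vec4 bv = bernstein(gl_TessCoord.y);

    // bicubic Bezier interpolation of the 16 real patch vertices
    vec3 p = vec3(0.0);
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            p += bu[i] * bv[j] * tcPosition[4 * i + j];

    gl_Position = uMVP * vec4(p, 1.0);
}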
The short answer is: The TPG (Tessellation Primitive Generator) doesn't know/care at all about the patch size.
The TPG only cares about the specified input layout of the tessellation evaluation shader (triangle, quad, isoline) and the tessellation levels set by the control shader. It does not have access to the patch itself but generates the tessellation coordinates only based on the type and level used in this tessellation.
The user is then responsible, in the tessellation evaluation shader, for establishing the relation between the per-control-point parameters (passed from the tessellation control shader, declared with in/out, somewhat similar to varyings) and gl_TessCoord (coming from the TPG).
Note that the number of per-control-point parameters and the number of tessellation coordinates are not necessarily the same.
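As an illustration, here is a minimal quad TES in GLSL that establishes this relation by plain bilinear interpolation of a 4-vertex patch; the input name tcPosition, the uniform uMVP and the assumed counter-clockwise control-point ordering are just conventions of this sketch:

#version 400 core

layout(quads, equal_spacing, ccw) in;

in vec3 tcPosition[];   // 4 control points per patch, from the control shader

uniform mat4 uMVP;

void main()
{
    // The TPG only supplies gl_TessCoord; relating it to the patch data
    // is entirely up to this shader. Here: bilinear interpolation,
    // assuming the control points are ordered counter-clockwise.
    vec3 bottom = mix(tcPosition[0], tcPosition[1], gl_TessCoord.x);
    vec3 top    = mix(tcPosition[3], tcPosition[2], gl_TessCoord.x);
    vec3 p      = mix(bottom, top, gl_TessCoord.y);

    gl_Position = uMVP * vec4(p, 1.0);
}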
For further reading, the following articles might help:
Primitive Processing in OpenGL, especially Figure 8.1

Optimize operation between fragment and vertex shader

I am learning to make a graphics engine with OpenGL. I wanted to know: should repetitive operations be moved from the fragment shader to the vertex shader, since from what I understood the vertex shader only runs once per vertex?
For instance, when normalizing a vector for the light direction: since this light is the same for every vertex, should the normalization be moved to the vertex shader instead of calculating it for every pixel? Is there a particular reason to keep it in the fragment shader?
If the calculation is exactly the same: yes, it should usually be more efficient to do it in the vertex shader than the fragment shader. Some situations where it might not be more efficient:
when drawing geometry that results in fewer shaded pixels than vertices to transform -- either due to dense geometry or extreme discards/occlusion. If this is the case, you would usually want to address it by switching to lower level-of-detail geometry or smarter geometry culling.
when doing the calculation in the vertex shader requires you to send more data to the fragment shader in order to use the calculation's results. Sending more data can be slower because it requires more memory manipulation and because the rasterizer needs to interpolate more "varying" values across each polygon.
For light calculations, specifically, be mindful that moving calculations from the fragment shader to the vertex shader can affect the quality of your rendering. Particularly, normalized direction vectors at each vertex can become shorter after "varying" interpolation, which can slightly darken triangle interiors if used directly without renormalization. And, of course, moving the entire lighting calculation to the vertex shader has even more drastic effects.
But how visible these effects are depends on the frequency of textures, the resolution of geometry, the size on screen, how far away the lights are, etc. -- in some cases, the quality/performance tradeoff may make sense.
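A rough GLSL sketch of that tradeoff for a point light (all names are illustrative): the light direction is computed once per vertex, passed along as a varying, and renormalized in the fragment shader to compensate for the shortening described above.

// vertex shader
#version 330 core
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;

uniform mat4 uModel;
uniform mat4 uMVP;
uniform vec3 uLightPosition;    // world-space light position

out vec3 vNormal;
out vec3 vLightDir;             // computed per vertex instead of per pixel

void main()
{
    vec3 worldPos = vec3(uModel * vec4(aPosition, 1.0));
    vNormal   = mat3(uModel) * aNormal;   // assumes uniform scaling; otherwise use the normal matrix
    vLightDir = normalize(uLightPosition - worldPos);
    gl_Position = uMVP * vec4(aPosition, 1.0);
}

// fragment shader
#version 330 core
in vec3 vNormal;
in vec3 vLightDir;

out vec4 fragColor;

void main()
{
    // interpolated unit vectors get shorter across the triangle, so renormalize
    vec3 n = normalize(vNormal);
    vec3 l = normalize(vLightDir);
    fragColor = vec4(vec3(max(dot(n, l), 0.0)), 1.0);
}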

Access world-space primitive size in fragment shader

It is essential for my fragment shader to know the world-space size of the primitive it belongs to. It is intended to be used solely for rendering rectangles (= pairs of triangles).
Naturally, I can compute its width and height on the CPU and pass them as uniform values, but using such a shader can be awkward in the long run - one has to remember what to compute and how, or search the documentation. Is there any "automated" way of finding the primitive size?
I have the idea of using a kind of pass-through geometry shader for doing this (since it is the only part of the pipeline I know of that has access to the whole primitive), but would that be a good idea?
Is there any "automated" way of finding primitive size?
No, because the concept of "primitive size" depends entirely on you, namely on how your shaders work. As far as OpenGL is concerned, there's not even such a thing as world space. There's just clip space and NDC space, and your vertex shaders take a formless bunch of data, called vertex attributes, and throw it out into clip space.
Since the vertex shader operates on a per-vertex basis and doesn't see the other vertices (unless you pass them in as additional vertex attributes with a shifted index; if you do it that way, the output varying must be flat, and the computation has to be skipped for vertices where vertex_ID % 3 != 0 in the case of triangles), the only viable shader stages for fully automating this are the geometry or tessellation shaders, which can do that preparatory calculation and emit the result as an extra per-vertex output.
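A rough sketch of such a pass-through geometry shader in GLSL, assuming the vertex shader forwards a world-space position in a varying called vWorldPos (all names are illustrative, and the width/height reading assumes vertex 0 is the corner where the rectangle's two edges meet):

#version 330 core

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec3 vWorldPos[];       // world-space position forwarded by the vertex shader

flat out vec2 gPrimSize;   // constant over the primitive, hence flat

void main()
{
    // lengths of the two triangle edges meeting at vertex 0
    vec2 size = vec2(length(vWorldPos[1] - vWorldPos[0]),
                     length(vWorldPos[2] - vWorldPos[0]));

    for (int i = 0; i < 3; ++i)
    {
        gl_Position = gl_in[i].gl_Position;   // pass the clip-space position through unchanged
        gPrimSize   = size;
        EmitVertex();
    }
    EndPrimitive();
}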

Vertex shader vs Fragment Shader [duplicate]

This question already has answers here:
What are Vertex and Pixel shaders?
(6 answers)
Closed 5 years ago.
I've read some tutorials regarding Cg, yet one thing is not quite clear to me.
What exactly is the difference between vertex and fragment shaders?
And for what situations is one better suited than the other?
A fragment shader is the same as a pixel shader.
One main difference is that a vertex shader can manipulate the attributes of vertices, which are the corner points of your polygons.
The fragment shader, on the other hand, takes care of how the pixels between the vertices look. Its inputs are interpolated between the defined vertices following specific rules.
For example: if you want your polygon to be completely red, you would define all vertices red. If you want specific effects like a gradient between the vertices, you have to do that in the fragment shader.
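As a tiny GLSL illustration (attribute and varying names are made up): assign a per-vertex colour in the vertex shader, and the fragment shader already receives the interpolated value, which gives you the gradient.

// vertex shader
#version 330 core
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aColor;

out vec3 vColor;

void main()
{
    vColor = aColor;
    gl_Position = vec4(aPosition, 1.0);
}

// fragment shader
#version 330 core
in vec3 vColor;    // arrives interpolated between the triangle's vertices

out vec4 fragColor;

void main()
{
    fragColor = vec4(vColor, 1.0);
}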
Put another way:
The vertex shader is part of the early steps in the graphic pipeline, somewhere between model coordinate transformation and polygon clipping I think. At that point, nothing is really done yet.
However, the fragment/pixel shader is part of the rasterization step, where the image is calculated and the pixels between the vertices are filled in or "coloured".
Just read about the graphics pipeline here and everything will reveal itself:
http://en.wikipedia.org/wiki/Graphics_pipeline
The vertex shader runs on every vertex, while the fragment shader runs on every fragment (roughly, every covered pixel). The fragment shader is applied after the vertex shader.
Nvidia Cg Tutorial:
Vertex transformation is the first processing stage in the graphics hardware pipeline. Vertex transformation performs a sequence of math operations on each vertex. These operations include transforming the vertex position into a screen position for use by the rasterizer, generating texture coordinates for texturing, and lighting the vertex to determine its color.
The results of rasterization are a set of pixel locations as well as a set of fragments. There is no relationship between the number of vertices a primitive has and the number of fragments that are generated when it is rasterized. For example, a triangle made up of just three vertices could take up the entire screen, and therefore generate millions of fragments!
Earlier, we told you to think of a fragment as a pixel if you did not know precisely what a fragment was. At this point, however, the distinction between a fragment and a pixel becomes important. The term pixel is short for "picture element." A pixel represents the contents of the frame buffer at a specific location, such as the color, depth, and any other values associated with that location. A fragment is the state required potentially to update a particular pixel.
The term "fragment" is used because rasterization breaks up each geometric primitive, such as a triangle, into pixel-sized fragments for each pixel that the primitive covers. A fragment has an associated pixel location, a depth value, and a set of interpolated parameters such as a color, a secondary (specular) color, and one or more texture coordinate sets. These various interpolated parameters are derived from the transformed vertices that make up the particular geometric primitive used to generate the fragments. You can think of a fragment as a "potential pixel." If a fragment passes the various rasterization tests (in the raster operations stage, which is described shortly), the fragment updates a pixel in the frame buffer.
Vertex shaders and fragment shaders are both features of 3D implementations that do not use fixed-pipeline rendering. In any 3D rendering, vertex shaders are applied before fragment/pixel shaders.
The vertex shader operates on each vertex. If you have a fixed polygon mesh and you want to deform it in a shader, you have to implement that in the vertex shader. I.e. any physical change in vertex appearance can be done in vertex shaders.
The fragment shader takes the output from the vertex shader and associates colors, the depth value of a pixel, etc. After these operations the fragment is sent to the framebuffer for display on the screen.
Some operations, for example lighting calculations, can be performed in the vertex shader as well as the fragment shader, but the fragment shader generally provides better results.
In rendering images via 3D hardware you typically have a mesh (points, polygons, lines); these are defined by vertices. To manipulate vertices individually, typically for motion in a model or waves in an ocean, you can use vertex shaders. These vertices can have a static colour or a colour assigned by textures; to manipulate vertex colours you use fragment shaders. At the end of the pipeline, when the view goes to the screen, you can also use fragment shaders.
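A minimal GLSL sketch of such a per-vertex deformation, assuming the application feeds a uTime uniform (names are illustrative):

#version 330 core
layout(location = 0) in vec3 aPosition;

uniform float uTime;    // advanced by the application every frame
uniform mat4  uMVP;

void main()
{
    vec3 p = aPosition;
    // simple sine displacement along Y; a real ocean would sum several waves
    p.y += 0.2 * sin(4.0 * p.x + uTime) * cos(4.0 * p.z + uTime);
    gl_Position = uMVP * vec4(p, 1.0);
}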

C++ shader optimization question

Could someone explain to me the basics of pixel and vertex shader interaction?
The obvious things are that vertex shaders receive basic vertex properties and then pass some of them on to the actual pixel shader.
But how does the actual vertex->pixel transition happen? I know that obviously all types of pipelines include the rasterizer stage, which is capable of interpolating the vertex parameters and can apply textures based on certain texture coordinates.
And as far as I understand those are also interpolated (not quite sure about this point; I've heard something about complex UV derivative math, but I assume that we can say that they are being interpolated).
So, here are some "targeted" questions.
How does the pixel shader operate? I mean that the pixel shader obviously does some actions "per pixel", but due to the unobvious vertex->pixel transition this raises some questions.
Can I assume that if I evaluate a matrix-vector product once in my pixel shader, it would be evaluated once when the image is rasterized? Or would it be better to evaluate everything that's possible in my vertex shader and then pass it to the pixel shader?
Also, if someone could point articles / abstracts on this topic, I would really appreciate that.
Thank you.
UPDATE
I thought it actually doesn't matter, because the interaction should be pretty same everywhere. I'm developing visualization applications and games for desktops, using HLSL / GLSL / Nvidia CG for shaders and mostly C++ as the base language.
The vertex shader is executed once for every vertex. It allows you to transform the vertex from world-space coordinates (or whichever other coordinate system it might be in) into screen-space coordinates.
That is, if you have a triangle, each vertex is transformed, so it ends up with a position on the screen.
And given these positions, the rasterizer determines which pixels are covered by the triangle spanned by those three vertices.
And then, for each pixel inside the triangle, the pixel shader is invoked. The output from the vertex shader is usually interpolated for each pixel, so pixels close to vertex v0 will receive values very close to those computed by the vertex shader for v0.
And this means that everything you do in the pixel shader is executed once per pixel covered by the primitive being rasterized.
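So, as a GLSL sketch (illustrative names): if a matrix-vector product is the same for the whole draw call, or varies linearly across the primitive, evaluate it per vertex and let the rasterizer interpolate the result, instead of redoing it for every fragment.

// vertex shader
#version 330 core
layout(location = 0) in vec3 aPosition;

uniform mat4 uMVP;
uniform mat4 uLightMatrix;   // same value for the whole draw call

out vec4 vLightSpacePos;     // interpolated for each covered pixel

void main()
{
    vLightSpacePos = uLightMatrix * vec4(aPosition, 1.0);
    gl_Position    = uMVP * vec4(aPosition, 1.0);
}

// pixel/fragment shader
#version 330 core
in vec4 vLightSpacePos;

out vec4 fragColor;

void main()
{
    // just reads the already-interpolated value; no per-fragment matrix multiply
    fragColor = vec4(vLightSpacePos.xyz, 1.0);
}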