My fragment shader needs to know the world-space size of the primitive it belongs to. The shader is intended solely for rendering rectangles (i.e. pairs of triangles).
Naturally, I can compute the width and height on the CPU and pass them as uniform values, but such a shader can be awkward to use in the long run - one has to remember what to compute and how, or search the documentation. Is there any "automated" way of finding the primitive size?
My idea is to use a kind of pass-through geometry shader to do this (since it is the only part of the pipeline I know of that has access to the whole primitive), but would that be a good idea?
Is there any "automated" way of finding the primitive size?
No, because the concept of "primitive size" depends entirely on you, namely on how your shaders work. As far as OpenGL is concerned, there's not even such a thing as world space. There's just clip space and NDC space, and your vertex shader takes a formless bunch of data, called vertex attributes, and throws it out into clip space.
Since the vertex shader operates on a per-vertex basis and doesn't see the other vertices (unless you pass them in as additional vertex attributes with a shifted index; in that case the output varying must be flat, and for triangles the computation can be skipped whenever vertex_ID % 3 != 0), the only viable shader stages for fully automating this are the geometry and tessellation shaders: they can do that preparatory calculation and emit the result as an additional per-vertex output.
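If you do go the geometry shader route, a minimal pass-through sketch could look like the following. This is only an illustration: the world_pos input is an assumed world-space position forwarded by the vertex shader, and the width/height logic only makes sense for the rectangle-as-two-triangles case described in the question.

#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

// Assumed per-vertex world-space position passed through from the vertex shader.
in vec3 world_pos[];

// Per-primitive size; flat so it is not interpolated.
flat out vec2 prim_size;

void main() {
    // For a rectangle split into two triangles, two edges of each triangle
    // are the rectangle's sides and the third is the diagonal, so the two
    // shortest edges give width and height.
    float e01 = length(world_pos[1] - world_pos[0]);
    float e12 = length(world_pos[2] - world_pos[1]);
    float e20 = length(world_pos[0] - world_pos[2]);
    float diagonal = max(e01, max(e12, e20));
    vec2 size;
    if (diagonal == e01)      size = vec2(e12, e20);
    else if (diagonal == e12) size = vec2(e01, e20);
    else                      size = vec2(e01, e12);

    // Pass the triangle through unchanged, attaching the computed size.
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        prim_size   = size;
        EmitVertex();
    }
    EndPrimitive();
}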
Related
I am learning to make a graphical engine with OpenGL. I wanted to know: should repetitive operations be moved from the fragment shader to the vertex shader, since from what I understand the vertex shader is only run once per vertex?
For instance, when normalizing a vector for the light direction: since the light is the same for every vertex, should this be moved to the vertex shader instead of calculating it for every pixel? Is there a particular reason to keep it in the fragment shader?
If the calculation is exactly the same: yes, it should usually be more efficient to do it in the vertex shader than the fragment shader. Some situations where it might not be more efficient:
when drawing geometry that results in fewer shaded pixels than transformed vertices -- either due to dense geometry or extreme discards/occlusion. If this is the case, usually you would want to address it by switching to lower level-of-detail geometry or smarter geometry culling.
when doing the calculation in the vertex shader requires you to send more data to the fragment shader in order to use the calculation's results. Sending more data can be slower because it requires more memory manipulation and because the rasterizer needs to interpolate more "varying" values across each polygon.
For light calculations, specifically, be mindful that moving calculations from the fragment shader to the vertex shader can affect the quality of your rendering. Particularly, normalized direction vectors at each vertex can become shorter after "varying" interpolation, which can slightly darken triangle interiors if used directly without renormalization. And, of course, moving the entire lighting calculation to the vertex shader has even more drastic effects.
But how visible these effects are depends on the frequency of textures, the resolution of geometry, the size on screen, how far away the lights are, etc. -- in some cases, the quality/performance tradeoff may make sense.
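As a concrete illustration of the renormalization point, a common compromise is to compute the direction once per vertex and renormalize the interpolated result in the fragment shader. This is only a sketch; the attribute layout, uniform names and the placeholder shading are illustrative.

// Vertex shader: compute the light direction once per vertex.
#version 330 core
layout(location = 0) in vec3 position;   // assumed attribute layout
uniform mat4 mvp;                        // assumed combined transform
uniform vec3 light_pos;                  // assumed light position, same space as position
out vec3 light_dir;                      // interpolated across the triangle
void main() {
    gl_Position = mvp * vec4(position, 1.0);
    light_dir = normalize(light_pos - position);
}

// Fragment shader: the interpolated direction can be shorter than unit
// length inside the triangle, so renormalize before using it.
#version 330 core
in vec3 light_dir;
out vec4 frag_color;
void main() {
    vec3 l = normalize(light_dir);
    frag_color = vec4(l * 0.5 + 0.5, 1.0);   // placeholder: visualize the direction
}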
As far as I understand, the color of a pixel is determined by the fragment shader. Why do we need a vertex shader then? Is there anything a fragment shader cannot do (or cannot easily do) but a vertex shader can do (easily)?
But I still can't quite understand why it is named a "shader".
Because that's what programs executed as part of the rendering process are called. The RenderMan Interface Specification described one of the first programmable rendering processes, and it called all of its programmable elements "shaders", even though they didn't all compute colors.
And therefore, "shader" has become the term used for describing any such program.
Vertex shaders convert vertex data, creating a 1:1 mapping from input vertices to output vertices. Fragment shaders operate on fragments. An FS invocation has no control over where it will be executed: fragments are generated at whatever location the rasterizer says they go, and the FS has no way to affect this.
By contrast, a vertex shader has complete control over where the vertices will go.
I really do not understand how fragment shader works.
I know that
the vertex shader runs once per vertex
the fragment shader runs once per fragment
Since the fragment shader does not work per vertex but per fragment, how can the vertex shader send data to it? The number of vertices and the number of fragments are not equal.
How does it decide which fragment belongs to which vertex?
To make sense of this, you'll need to consider the whole render pipeline. The outputs of the vertex shader (besides the special output gl_Position) are passed along as "associated data" of the vertex to the next stages in the pipeline.
While the vertex shader works on a single vertex at a time, not caring about primitives at all, further stages of the pipeline do take the primitive type (and the vertex connectivity info) into account. That's what is typically called "primitive assembly". Now, we still have the single vertices with the associated data produced by the VS, but we also know which vertices are grouped together to define a basic primitive like a point (1 vertex), a line (2 vertices) or a triangle (3 vertices).
During rasterization, fragments are generated for every pixel location in the output pixel raster which belongs to the primitive. In doing so, the associated data of the vertices defining the primitive is interpolated across the whole primitive. For a line, this is rather simple: a linear interpolation is done. Call the endpoints A and B, each with some associated output vector v, so that we have v_A and v_B. Along the line, we get the interpolated value v(x) = (1-x) * v_A + x * v_B, where x ranges from 0 (at point A) to 1 (at point B). For a triangle, barycentric interpolation between the data of all 3 vertices is used. So while there is no 1:1 mapping between vertices and fragments, the outputs of the VS still define the values of the corresponding inputs of the FS - just not directly, but indirectly through the interpolation across the primitive.
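Written as a purely conceptual GLSL-style function (this is not something you would write in a real shader; the rasterizer does it for you), the triangle case looks roughly like this, where b holds the fragment's barycentric coordinates:

// Conceptual sketch of how a fragment's input value is derived from the
// three vertex outputs of a triangle; b.x + b.y + b.z == 1.
vec3 interpolate_across_triangle(vec3 v0, vec3 v1, vec3 v2, vec3 b) {
    return b.x * v0 + b.y * v1 + b.z * v2;
}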
The formulas I have given so far are a bit simplified. Actually, by default, a perspective correction is applied, effectively modifying the formula so that the distortion effects of the perspective projection are taken into account. This simply means that the interpolation should act as if it were applied linearly in object space (before the distortion by the projection was applied). For example, if you have a perspective projection and some primitive which is not parallel to the image plane, moving 1 pixel to the right in screen space corresponds to moving a variable distance on the real object, depending on the distance of the actual point from the camera plane.
You can disable the perspective correction by using the noperspective qualifier for the in/out variables in GLSL. Then, the linear/barycentric interpolation is used as I described it.
You can also use the flat qualifier, which will disable the interpolation entirely. In that case, the value of just one vertex (the so-called "provoking vertex") is used for all fragments of the whole primitive. Integer data can never be automatically interpolated by the GL and has to be qualified as flat when sent to the fragment shader.
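In GLSL, this choice is made with qualifiers on the matching out/in variables. A small vertex-shader sketch (names and attribute locations are illustrative; the fragment shader has to declare the same variables as inputs with the same qualifiers):

#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 color_in;
layout(location = 2) in int  material_in;   // integer attribute (set up with glVertexAttribIPointer)

out vec3 color;                    // default: perspective-correct interpolation
noperspective out vec3 color_lin;  // plain linear/barycentric interpolation in screen space
flat out int material_id;          // no interpolation: value taken from the provoking vertex

void main() {
    gl_Position = vec4(position, 1.0);
    color       = color_in;
    color_lin   = color_in;
    material_id = material_in;
}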
The answer is that they don't - at least not directly. There's an additional stage called "the rasterizer" that sits between the vertex processor and the fragment processor in the pipeline. The rasterizer is responsible for collecting the vertices that come out of the vertex shader, reassembling them into primitives (usually triangles), breaking up those triangles into "rasters" of (partially) covered pixels, and sending these fragments to the fragment shader.
This is a (mostly) fixed-function piece of hardware that you don't program directly. There are some configuration tweaks you can do that affect what it treats as a primitive and what it produces as fragments, but for the most part it's just there between the vertex shader and fragment shader, doing its thing.
I am starting to learn OpenGL (3.3+), and now I am trying to write an algorithm that draws 10000 points randomly on the screen.
The problem is that I don't know exactly where to do this. Since the points are random, I can't declare them in a VBO (or can I?), so I was thinking of passing a uniform value to the vertex shader with the varying position (I would loop, changing the uniform value each time). Then I would do the operation 10000 times. I would also pass a random color value to the shader.
Here is roughly my thought:
#version 330 core

uniform vec3 random_position;
uniform vec3 random_color;

out vec3 Color;

void main() {
    gl_Position = vec4(random_position, 1.0);  // gl_Position is a vec4
    Color = random_color;
}
In this way I would do the calculations outside the shaders and just pass them in through the uniforms, but I think a better way would be to do these calculations inside the vertex shader. Would that be right?
The vertex shader will be called for every vertex you pass to the vertex shader stage. The uniforms are the same for each of these calls. Hence you shouldn't pass the vertices - be they random or not - as uniforms. If you have global transformations (e.g. a camera rotation, a model matrix, etc.), those would go into the uniforms.
Your vertices should be passed as a vertex buffer object. Just generate them randomly in your host application and draw them. They will automatically become the in variables of your shader.
You can change the array in every iteration, but it might be a good idea to keep its size constant. For this it's sometimes useful to pass your 3D vectors as 4D vectors, with the fourth component being 1 if the vertex is used and 0 otherwise. This way you can simply check whether a vertex should be drawn or not.
Then just clear the GL_COLOR_BUFFER_BIT and draw the arrays before updating the screen.
In your shader just set gl_Position with your in variables (i.e. the vertices) and pass the color on to the fragment shader - it will not be applied in the vertex shader yet.
In the fragment shader, just take the variable you passed from the vertex shader and write it to the output color, e.g. gl_FragColor.
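Putting that together, a minimal shader pair could look like the following sketch. Attribute locations and names are illustrative, and note that in a 3.3 core profile you declare your own fragment output instead of writing to gl_FragColor:

// Vertex shader
#version 330 core
layout(location = 0) in vec3 position;   // random point from the VBO
layout(location = 1) in vec3 color_in;   // random color from the VBO
out vec3 color;
void main() {
    gl_Position = vec4(position, 1.0);   // no global transform assumed here
    color = color_in;
}

// Fragment shader
#version 330 core
in vec3 color;
out vec4 frag_color;                     // takes the role of gl_FragColor in core profile
void main() {
    frag_color = vec4(color, 1.0);
}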
By the way, if you draw something as GL_POINTS, it will result in little squares. There are lots of tricks to make them actually round; the easiest is probably this simple if in the fragment shader. However, you should then configure them as point sprites (glEnable(GL_POINT_SPRITE)).
if (dot(gl_PointCoord - vec2(0.5, 0.5), gl_PointCoord - vec2(0.5, 0.5)) > 0.25)
    discard;
I suggest you read up a little on what the fragment and vertex shaders do, what vertices and fragments are, and what their respective in/out/uniform variables represent.
Since programs with full vertex buffer objects, shader programs etc. get quite large, you can also start out with glBegin() and glEnd() to draw vertices directly. However, this should only be a very early starting point to understand what you are drawing where and how the different shaders affect it.
The lighthouse3d tutorials (http://www.lighthouse3d.com/tutorials/) usually are a good start, though they might be a bit outdated. Also a good reference is the glsl wiki (http://www.opengl.org/wiki/Vertex_Shader) which is up to date in most cases - but it might be a bit technical.
Whether you are working with C++, Java, or another language, the concepts of OpenGL are usually the same, so almost all tutorials will do well.
Could someone explain to me the basics of pixel and vertex shader interaction?
The obvious part is that vertex shaders receive basic vertex properties and then pass some of them on to the actual pixel shader.
But how does the actual vertex->pixel transition happen? I know that all types of pipelines include a rasterizer stage, which is capable of interpolating the vertex parameters and can apply textures based on certain texture coordinates.
And as far as I understand, those are also interpolated (not quite sure about this; I've heard something about complex UV derivative math, but I assume we can say they are interpolated).
So, here are some "targeted" questions.
How does the pixel shader operate? I mean, the pixel shader obviously does some actions "per pixel", but because the vertex->pixel transition is not obvious, this raises some questions.
Can I assume that if I evaluate a matrix-vector product once in my pixel shader, it will be evaluated only once when the image is rasterized? Or would it be better to evaluate everything possible in my vertex shader and then pass the results to the pixel shader?
Also, if someone could point articles / abstracts on this topic, I would really appreciate that.
Thank you.
UPDATE
I thought it actually doesn't matter, because the interaction should be pretty much the same everywhere. I'm developing visualization applications and games for desktops, using HLSL / GLSL / Nvidia CG for shaders and mostly C++ as the base language.
The vertex shader is executed once for every vertex. It allows you to transform the vertex from world-space coordinates (or whichever other coordinate system it might be in) into screen-space coordinates.
That is, if you have a triangle, each vertex is transformed, so it ends up with a position on the screen.
And given these positions, the rasterizer determines which pixels are covered by the triangle spanned by those three vertices.
And then, for each pixel inside the triangle, the pixel shader is invoked. The output from the vertex shader is usually interpolated for each pixel, so pixels close to vertex v0 will receive values very close to those computed by the vertex shader for v0.
And this means that everything you do in the pixel shader is executed once per pixel covered by the primitive being rasterized.
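So, for the specific question: a matrix-vector product written in the pixel shader is evaluated once for every covered pixel, not once per primitive. If its inputs only change per vertex (or per draw call), it can be moved to the vertex shader and the interpolated result consumed in the pixel shader. A sketch, with illustrative names:

// Vertex shader: do the matrix-vector product once per vertex.
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 model;        // per-object transform
uniform mat4 view_proj;    // camera transform
out vec3 world_pos;        // interpolated, then reused by every covered pixel
void main() {
    vec4 wp = model * vec4(position, 1.0);
    world_pos = wp.xyz;
    gl_Position = view_proj * wp;
}

// Pixel (fragment) shader: no matrix math left, just use the interpolated value.
#version 330 core
in vec3 world_pos;
out vec4 frag_color;
void main() {
    frag_color = vec4(fract(world_pos), 1.0);   // placeholder use of the value
}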