Can the rendering for a pixel be terminated in a vertex shader? For example, if a vertex does not meet a certain requirement, can the rendering of that vertex be cancelled?
I'll assume you meant "rendering for a vertex be terminated". And no, you can't; OpenGL is very strict about the 1:1 ratio of input vertices to outputs for a VS. It also wouldn't really mean what you want it to, since vertices don't get rendered. Primitives do, and a primitive can be composed of more than one vertex. What would it mean to discard a vertex in the middle of a triangle strip, for example?
This is why Geometry Shaders have the ability to "cull" primitives; they deal specifically with a primitive, not merely a single vertex. This is done by simply not emitting any vertices; a GS must explicitly emit the vertices it wants to output.
Vertex shaders now have the ability to cull primitives. This is done using the "cull distance" feature of OpenGL 4.5. It works like gl_ClipDistance, only instead of clipping, it culls the entire primitive when every one of its vertices has a negative cull distance for the same index.
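As a rough illustration (not from the original answer; the uniform names and the test itself are invented), a GLSL 4.50 vertex shader might write the built-in gl_CullDistance like this:

#version 450 core
layout(location = 0) in vec4 aPosition;
uniform mat4 uMVP;           // hypothetical model-view-projection matrix
uniform float uHeightCutoff; // hypothetical threshold used for the test

void main()
{
    gl_Position = uMVP * aPosition;
    // If this value is negative for ALL vertices of a primitive,
    // the entire primitive is culled before rasterization.
    gl_CullDistance[0] = aPosition.y - uHeightCutoff;
}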
In theory, you can use a vertex shader to produce a degenerate (zero-area) primitive. A primitive with zero area should not result in anything rasterized, and thus no fragment will be rendered. It is not particularly intuitive, however, especially if you are using primitives that share vertices.
But no, canceling a vertex is almost meaningless. It is the fundamental unit upon which primitives are constructed. If you simply remove a single vertex, then you will alter the rasterized output in undefined ways.
Put simply, vertices are not what create pixels on screen. It is the connectivity between vertices, which creates primitives, that ultimately leads to pixels. Geometry Shaders operate on a primitive-by-primitive basis, so they are generally where you would cancel rasterization and fragment shading in a programmatic fashion.
UPDATE:
It has come to my attention that you are using GL_POINTS as your primitive type. In this special case, all you have to do to prevent your vertex from going further down the pipeline is set its position somewhere outside of your camera's viewing volume. The vertex will be clipped and no rasterization or fragment shading will occur.
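For instance (a minimal sketch with an invented per-point attribute and threshold), a point-rendering vertex shader could reject points like this:

#version 330 core
layout(location = 0) in vec4 aPosition;
layout(location = 1) in float aBrightness; // hypothetical per-point value to test
uniform mat4 uMVP;                         // hypothetical transform
uniform float uMinBrightness;              // hypothetical rejection threshold

void main()
{
    if (aBrightness < uMinBrightness) {
        // Park the point well outside the clip volume; it is clipped away
        // and never reaches rasterization or the fragment shader.
        gl_Position = vec4(2.0, 2.0, 2.0, 1.0);
    } else {
        gl_Position = uMVP * aPosition;
    }
}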
This is a much more efficient solution than testing for some condition in a fragment shader and then discarding, because you skip rasterization and do not have to execute a fragment shader at all. Not to mention, discard usually winds up working as a post-shader execution flag that tells the GPU to discard the result - the GPU is often forced to execute the entire shader no matter where in the shader you issue the discard instruction. Thus discard rarely gives a performance benefit, and in many cases it can disable other potentially more useful hardware optimizations such as early depth testing. This is the nature of the way GPUs schedule their shader workload, unfortunately.
The cheapest fragment is the one you never have to process :)
You can't terminate rendering of a pixel in a vertex shader (it doesn't deal with pixels), but you can in the fragment shader using the discard instruction.
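A minimal sketch of discard in a fragment shader (the condition and variable names are only illustrative):

#version 330 core
in vec4 vColor;
out vec4 fragColor;

void main()
{
    // A discarded fragment produces no color and no depth write.
    if (vColor.a < 0.01)
        discard;
    fragColor = vColor;
}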
I am elaborating on Andon M. Coleman's answer, which IMHO deserves to be marked as the right one.
Even though the OpenGL specification is adamant that you cannot skip the fragment shader stage (unless you actually remove the whole primitive in the geometry shader, as Nicol Bolas correctly pointed out, which is a bit overkill IMHO), you can do it in practice by letting OpenGL cull the whole geometry, since modern GPUs have early fragment rejection optimizations which will likely produce the same effect.
And, for the record, making the whole geometry get discarded is really, really easy: just write the vertex position outside the (-1,-1,-1) to (1,1,1) cube,
gl_Position = vec4(2.0, 2.0, 2.0, 1.0);
...and off you go!
Hope this helps
You can make alterations to the vertex stream, including the removal of vertices, but the place to do that would be in a geometry shader. If you look into geometry shaders, you may find the solution you're looking for in simply failing to 'emit' a vertex.
EDIT: If rendering a triangle strip, you would probably also want to take care to start a new primitive when a vertex is removed; you'll see why if you investigate geometry shaders. With GL_POINTS it would be less of an issue.
And yes, if you send a triangle strip of only 2 vertices, for instance, then indeed you fail to render anything - just as you would if you passed in such a degenerate strip in the first place. That does not mean the vertex stream can't be altered on the GL side of things, however.
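As a rough sketch (assuming GL_POINTS and an invented per-vertex flag forwarded from the vertex shader), a geometry shader that drops unwanted points might look like this:

#version 330 core
layout(points) in;
layout(points, max_vertices = 1) out;

in float vKeep[];   // hypothetical flag written by the vertex shader

void main()
{
    // Emitting nothing culls this point entirely: no rasterization,
    // no fragment shading.
    if (vKeep[0] > 0.5) {
        gl_Position = gl_in[0].gl_Position;
        EmitVertex();
        EndPrimitive();
    }
}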
Hope that helps >Tom
Set the position outside of NDC (normalized device coordinates),
or
set a flag, pass it to the fragment shader, and discard there according to the flag (a sketch of this option follows).
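A hedged sketch of the flag approach (all names invented); note the earlier caveat that discard is usually more expensive than clipping the vertex away:

// vertex shader
#version 330 core
layout(location = 0) in vec4 aPosition;
layout(location = 1) in float aDiscardFlag; // hypothetical per-vertex flag
uniform mat4 uMVP;
flat out float vDiscardFlag;

void main()
{
    vDiscardFlag = aDiscardFlag;
    gl_Position  = uMVP * aPosition;
}

// fragment shader
#version 330 core
flat in float vDiscardFlag;
out vec4 fragColor;

void main()
{
    if (vDiscardFlag > 0.5)
        discard;
    fragColor = vec4(1.0);
}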
Related
I've been wondering what happens when binding a depth-only FBO (only GL_DEPTH_ATTACHMENT is attached and glDrawBuffer(GL_NONE) is called) as far as the fragment shader stage is concerned. Since any color output is discarded:
does OpenGL simply process vertices the regular way, call the rasterizer, and apply the fragment shader to the rasterized fragments, but discard any result,
or does it do something smarter, like processing vertices up to the optional geometry shader, then skipping the fragment shader stage and using a dummy fragment shader in order to avoid useless color computations?
Because of vendor-specific implementation details, I guess it might vary, but I'd like to have a better idea about the topic.
In my experience, the fragment shader will still run even if it has no outputs. This can be used for example to draw shadow maps with punch-through alpha textures, using discard.
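For example (a minimal sketch of that shadow-map use case; the texture name is invented), such a fragment shader declares no color outputs at all:

#version 330 core
in vec2 vUV;
uniform sampler2D uAlphaTex; // hypothetical punch-through alpha texture

void main()
{
    // No color outputs; only the depth value survives this pass.
    if (texture(uAlphaTex, vUV).a < 0.5)
        discard;             // punch holes into the shadow map
}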
If it does have outputs (or more outputs than are bound), then they should just be ignored. I'd imagine that a smart driver could easily skip the fragment shader entirely if it doesn't contain any discard statements.
Also perhaps look into Separate Shader Objects (https://www.opengl.org/registry/specs/ARB/separate_shader_objects.txt). It allows you to disable the stages manually.
I've read (though never personally tested) that a complete lack of a color buffer causes strange undefined behavior, as each OpenGL implementation had to answer the question in reverse: "What should we make it do when there's no color buffer?", and there is no official answer used consistently across implementations.
The official documentation carefully avoids mentioning this situation generally.
As such, the usual recommendation is to simply not do that, and instead always have a color buffer, even if you don't use it.
I'm having a little bit of trouble conceptualizing the workflow used in a shader-based OpenGL program. While I've never really done any major projects using either the fixed-function or shader-based pipelines, I've started learning and experimenting, and it's become quite clear to me that shaders are the way to go.
However, the fixed-function pipeline makes much more sense to me from an intuitive perspective. Rendering a scene with that method is simple and procedural—like painting a picture. If I want to draw a box, I tell the graphics card to draw a box. If I want a lot of boxes, I draw my box in a loop. The fixed-function pipeline fits well with my established programming tendencies.
These all seem to go out the window with shaders, and this is where I'm hitting a block. A lot of shader-based tutorials show how to, for example, draw a triangle or a cube on the screen, which works fine. However, they don't seem to cover at all how I would apply these concepts in, for example, a game. If I wanted to draw three procedurally generated triangles, would I need three shaders? Obviously not, since that would be infeasible. Still, it's clearly not as simple as just sticking the drawing code in a loop that runs three times.
Therefore, I'm wondering what the "best practices" are for using shaders in game development environments. How many shaders should I have for a simple game? How do I switch between them and use them to render a real scene?
I'm not looking for specifics, just a general understanding. For example, if I had a shader that rendered a circle, how would I reuse that shader to draw different sized circles at different points on the screen? If I want each circle to be a different color, how can I pass some information to the fragment shader for each individual circle?
There is really no conceptual difference between the fixed-function pipeline and the programmable pipeline. The only thing shaders introduce is the ability to program certain stages of the pipeline.
On current hardware you have (for the most part) control over the vertex, tessellation, geometry and fragment stages. Some operations that occur in between and after these stages are still fixed-function, such as depth/stencil testing, blending, perspective divide, etc.
Because shaders are actually nothing more than programs that you drop in to define the input and output of a particular stage, you should think of the input to a fragment shader as coming from the output of one of the previous stages. Vertex outputs are interpolated during rasterization, and those interpolated values are often what you're dealing with when you have an in variable in a fragment shader.
You can also have program-wide variables, known as uniforms. These variables can be accessed by any stage simply by using the same name in each stage. They do not vary across invocations of a shader, hence the name uniform.
Now you should have enough information to figure out the circle example: you can use a uniform to scale your circle (likely a simple scaling matrix), and you can either rely on per-vertex color or on a uniform that defines the color.
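A hedged sketch of that idea (uniform names invented): one shader pair reused for every circle, with the per-circle differences supplied through uniforms that you update between draw calls via glUniform*:

// vertex shader
#version 330 core
layout(location = 0) in vec2 aPosition; // vertices of a unit circle
uniform vec2 uCenter;   // hypothetical per-circle position
uniform float uRadius;  // hypothetical per-circle scale

void main()
{
    gl_Position = vec4(aPosition * uRadius + uCenter, 0.0, 1.0);
}

// fragment shader
#version 330 core
uniform vec4 uColor;    // hypothetical per-circle color
out vec4 fragColor;

void main()
{
    fragColor = uColor;
}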
You don't have shaders that draw circles (OK, you could with the right tricks, but let's forget that for now, because it is misleading and has very rare and specific uses). Shaders are little programs you write to take care of certain stages of the graphics pipeline, and they operate at a lower level than "drawing a circle".
Generally speaking, every time you make a draw call you have to tell OpenGL which shaders to use (with a call to glUseProgram). You have to use at least a Vertex Shader and a Fragment Shader. The resulting pipeline will be something like the following (a minimal example pair is sketched after the list):
Vertex Shader: the code that is going to be executed for each of the vertices you send to OpenGL. It will be executed for each index you sent in the element array, and it will use as input data the corresponding vertex attributes, such as the vertex position, its normal, its uv coordinates, maybe its tangent (if you are doing normal mapping), or whatever else you are sending to it. Generally you want to do your geometric calculations here. You can also access the uniform variables you set up for your draw call, which are global variables that are not going to change per vertex. A typical uniform variable you might want to use in a vertex shader is the PVM matrix. If you don't use tessellation, the vertex shader will write gl_Position, the position the rasterizer is going to use to create fragments. You can also have the vertex shader output other things (such as the uv coordinates, and the normals after you have dealt with their geometry), hand them to the rasterizer, and use them later.
Rasterization
Fragment Shader: the code that is going to be executed for each fragment (for each pixel, if that is clearer). Generally you do texture sampling and lighting calculations here. You will use the data coming from the vertex shader and the rasterizer, such as the normals (to evaluate the diffuse and specular terms) and the uv coordinates (to fetch the right colors from the textures). The textures are going to be uniforms, and probably also the parameters of the lights you are evaluating.
Depth test, stencil test (which you can move before the fragment shader with the early fragment test optimization: http://www.opengl.org/wiki/Early_Fragment_Test).
Blending.
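A minimal sketch of such a shader pair, with invented names for the attributes and uniforms (a PVM matrix, a normal matrix, a diffuse texture, and a directional light):

// vertex shader
#version 330 core
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;
layout(location = 2) in vec2 aUV;
uniform mat4 uPVM;       // the PVM matrix mentioned above
uniform mat3 uNormalMat; // hypothetical normal matrix
out vec3 vNormal;
out vec2 vUV;

void main()
{
    vNormal = uNormalMat * aNormal; // geometric work happens here
    vUV = aUV;
    gl_Position = uPVM * vec4(aPosition, 1.0);
}

// fragment shader
#version 330 core
in vec3 vNormal;
in vec2 vUV;
uniform sampler2D uDiffuseTex; // hypothetical texture uniform
uniform vec3 uLightDir;        // hypothetical light direction
out vec4 fragColor;

void main()
{
    float diffuse = max(dot(normalize(vNormal), normalize(-uLightDir)), 0.0);
    fragColor = vec4(texture(uDiffuseTex, vUV).rgb * diffuse, 1.0);
}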
I suggest you look at this nice program for developing simple shaders, http://sourceforge.net/projects/quickshader/ , which has very good examples, including some more advanced things you won't find in every tutorial.
Criteria: I’m using OpenGL with shaders (GLSL) and trying to stay with modern techniques (e.g., trying to stay away from deprecated concepts).
My questions, in a very general sense (see below for more detail), are as follows:
Do shaders allow you to do custom blending that help eliminate z-order transparency issues found when using GL_BLEND?
Is there a way for a shader to know what type of primitive is being drawn without “manually” passing it some sort of flag?
Is there a way for a shader to “ignore” or “discard” a vertex (especially when drawing points)?
Background: My application draws points connected with lines in an ortho projection (vertices have varying depth in the projection). I've only recently started using shaders in the project (trying to get away from deprecated concepts). I understand that standard blending has ordering issues when combined with depth testing: basically, if a "translucent" pixel at a higher z level is drawn first (thus blending with whatever colors were already drawn to that pixel at a lower z level), and an opaque object is then drawn at that pixel but at a lower z level, depth testing prevents changing the pixel that was already drawn for the "higher" z level, thus causing blending issues. To overcome this, you need to draw opaque items first, then translucent items in ascending z order. My gut feeling is that shaders wouldn't provide an (efficient) way to change this behavior - am I wrong?
Further, for speed and convenience, I pass information for each vertex (along with a couple of uniform variables) to the shaders, and they use the information to find a subset of the vertices that need special attention. Without doing a similar set of logic in the app itself (and slowing things down) I can't know a priori what subset of vertices that is. Thus I send all vertices to the shader. However, when I draw "points" I'd like the shader to ignore all the vertices that aren't in the subset it determines. I think I can get the effect by setting alpha to zero and using an alpha function in the GL context that will prevent drawing anything with alpha less than, say, 0.01. However, is there a better or more "correct" GLSL way for a shader to say "just ignore this vertex"?
Do shaders allow you to do custom blending that help eliminate z-order transparency issues found when using GL_BLEND?
Sort of. If you have access to GL 4.x-class hardware (Radeon HD 5xxx or better, or GeForce 4xx or better), then you can perform order-independent transparency. Earlier hardware has techniques like depth peeling available, but they're quite expensive.
The GL 4.x-class version essentially builds a "linked list" of transparent samples per pixel, which you then resolve into the final sample color with a full-screen pass. It's not free of course, but it isn't as expensive as other OIT methods. How expensive it would be in your case is uncertain; the cost is proportional to how many overlapping transparent pixels you have.
You still have to draw opaque stuff first, and you have to draw transparent stuff using special shader code.
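Very roughly, the "append" pass of that technique looks like the sketch below (this is only an illustration of the idea, not code from any particular sample: the buffer and image names are invented, a real version also checks nodeIndex against the buffer size, and a separate full-screen resolve pass sorts and blends the lists):

#version 430 core
layout(early_fragment_tests) in;

layout(binding = 0, r32ui) uniform coherent uimage2D uHeadPointers;  // per-pixel list heads
layout(binding = 0) uniform atomic_uint uNodeCounter;                // next free node index
layout(binding = 1, rgba32ui) uniform writeonly uimageBuffer uNodes; // color/depth/next per node

in vec4 vColor;

void main()
{
    uint nodeIndex = atomicCounterIncrement(uNodeCounter);
    // Make this node the new head of this pixel's list, remembering the old head.
    uint prevHead = imageAtomicExchange(uHeadPointers, ivec2(gl_FragCoord.xy), nodeIndex);
    // Pack color, depth and the "next" pointer into one node.
    imageStore(uNodes, int(nodeIndex),
               uvec4(packUnorm4x8(vColor), floatBitsToUint(gl_FragCoord.z), prevHead, 0u));
}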
Is there a way for a shader to know what type of primitive is being drawn without “manually” passing it some sort of flag?
No.
Is there a way for a shader to “ignore” or “discard” a vertex (especially when drawing points)?
No in general, but yes for points. A Geometry shader can conditionally emit vertices, thus allowing you to discard any vertex for arbitrary reasons.
Discarding a vertex in non-point primitives is possible, but it will also affect the interpretation of that primitive. The reason it's simple for points is because a vertex is a primitive, while a vertex in a triangle isn't a whole primitive. You can discard lines, but discarding a vertex within a line is... of dubious value.
That being said, your explanation for why you want to do this is of dubious merit. You want to update vertex data with essentially a boolean value that says "do stuff with me" or not. That means that, every frame, you have to modify your data to say which points should be rendered and which shouldn't.
The simplest and most efficient way to do this is to simply not render with them. That is, arrange your data so that the only thing on the GPU are the points you want to render. Thus, there's no need to do anything special at all. If you're going to be constantly updating your vertex data, then you're already condemned to dealing with streaming vertex data. So you may as well stream it in a way that makes rendering efficient.
Is it possible to achieve flat shading in OpenGL when using glDrawElements to draw objects, and if so how? The ideal way would be to calculate a normal for each triangle only once, if possible.
The solution must only use the programmable pipeline (core profile).
There are indeed ways around this without duplicating vertices, with some limitations for each one (at least those I can think of with my limited OpenGL experience).
I can see two solutions that would give you a constant value for the normal over each triangle:
declare the input as flat in your shader and pick which vertex gives its value via glProvokingVertex; this is fast, but you'll get the normal of one vertex as the normal for the whole triangle, which might not look right (a sketch of this option follows the list)
use a geometry shader taking triangles and outputting triangles to calculate a single normal per face. This is the most flexible way, allowing you to control the resulting effect, but it might be slower (and requires geometry-shader-capable hardware, obviously)
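A hedged sketch of the first option (variable names invented). On the API side you would additionally call glProvokingVertex if you want a convention other than the default last-vertex one:

// vertex shader
#version 330 core
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;
uniform mat4 uMVP;
flat out vec3 vNormal; // "flat": no interpolation, the provoking vertex's value is used

void main()
{
    vNormal = aNormal;
    gl_Position = uMVP * vec4(aPosition, 1.0);
}

// fragment shader
#version 330 core
flat in vec3 vNormal;  // constant across the whole triangle
out vec4 fragColor;

void main()
{
    fragColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0); // visualize the flat normal
}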
Sadly, the only way to do that is to duplicate all your vertices, since attributes are per-vertex and not per-triangle
When you think about it, this is what we did in immediate mode...
I'm trying to do bilinear color interpolation on a quad. I succeeded with the help of my previous question here, but the solution has bad performance because it requires me to repeat glBegin()/glEnd() and make four glUniform() calls before each glBegin().
The question is: is it at all possible to apply bilinear color interpolation on a quad like this:
glBegin(GL_QUADS);
glColor4f(...); glVertexAttrib2f(uv, 0, 0); glTexCoord2f(...); glVertex3f(...);
glColor4f(...); glVertexAttrib2f(uv, 1, 0); glTexCoord2f(...); glVertex3f(...);
glColor4f(...); glVertexAttrib2f(uv, 1, 1); glTexCoord2f(...); glVertex3f(...);
glColor4f(...); glVertexAttrib2f(uv, 0, 1); glTexCoord2f(...); glVertex3f(...);
... // here can be any amount of quads without repeating glBegin()/glEnd()
glEnd();
To do this, I think I would somehow need to access the neighboring vertex colors, but how? Or is there some other solution for this?
I need this to work this way so I can easily switch between different interpolation shaders.
Any other solution that works with a single glBegin() command is good too, but sending all the corner colors with every vertex isn't acceptable, unless that's the only solution here?
Edit: The example code uses immediate mode for clarity only. Even with vertex arrays/buffers the problem would be the same: I would have to split the rendering into chunks of 4 vertices, which is what causes the slowdown here!
Long story short: You cannot do this with a vertex shader.
The interpolator (or rasterizer) is one of the components of the graphics pipeline that is not programmable. Given how the graphics pipe works, neither a vertex shader nor a fragment shader is allowed access to anything but its own vertex (or fragment, respectively), for reasons of speed, simplicity, and parallelism.
The workaround is to use a texture lookup, which has already been noted in previous answers.
In newer versions of OpenGL (core since 3.2) there is the concept of a geometry shader. Geometry shaders are more complicated to implement than the relatively simple vertex and fragment shaders, but geometry shaders are given topological information. That is, they execute on a primitive (triangle, line, etc.) rather than a single point. With that information, they could create additional geometry in order to implement your alternate color interpolation method.
However, that's far more complicated than necessary. I'd stick with a 4-texel texture map and implement your logic in the fragment shader's texture lookup.
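A hedged sketch of that approach (names invented; the quad-local "uv" attribute from the question is assumed to arrive as vQuadUV, and the four corner colors are assumed to live in a 2x2 texture bound for the quad):

#version 330 core
in vec2 vQuadUV;                 // the per-vertex "uv" attribute, interpolated
uniform sampler2D uCornerColors; // hypothetical 2x2 texture holding the corner colors
out vec4 fragColor;

void main()
{
    vec4 c00 = texelFetch(uCornerColors, ivec2(0, 0), 0);
    vec4 c10 = texelFetch(uCornerColors, ivec2(1, 0), 0);
    vec4 c01 = texelFetch(uCornerColors, ivec2(0, 1), 0);
    vec4 c11 = texelFetch(uCornerColors, ivec2(1, 1), 0);
    // True bilinear blend over the quad, independent of its triangulation.
    fragColor = mix(mix(c00, c10, vQuadUV.x),
                    mix(c01, c11, vQuadUV.x), vQuadUV.y);
}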
Under the hood, OpenGL (and all the hardware that it drives) will do everything as triangles, so if you choose to blend colors via vertex interpolation, it will be triangular interpolation because the hardware doesn't work any other way.
If you want "quad" interpolation, you should put your colors into a texture, because in hardware a texture is always "quad" shaped.
If you really think it's the number of draw calls that causes your performance drop, you can try instancing (using glDrawArraysInstanced + glVertexAttribDivisor; the former is core in GL 3.1, the latter in GL 3.3).
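A hedged sketch of the vertex-shader side of instanced quads (attribute names invented; on the CPU side the per-instance attributes would be configured with glVertexAttribDivisor(attrib, 1)):

#version 330 core
layout(location = 0) in vec2 aCorner;         // shared unit-quad corner (divisor 0)
layout(location = 1) in vec2 aInstanceOffset; // per-instance position (divisor 1)
layout(location = 2) in vec4 aInstanceColor;  // per-instance color (divisor 1)
uniform mat4 uProjection;
out vec4 vColor;
out vec2 vQuadUV;

void main()
{
    vColor  = aInstanceColor;
    vQuadUV = aCorner;            // 0..1 across the quad
    gl_Position = uProjection * vec4(aInstanceOffset + aCorner, 0.0, 1.0);
}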
An alternative might be point sprites, depending on your usage model (mostly, the maximum size of your quads, and whether they are always facing the view). Those have been available since GL 2.0 core.
Linear interpolation with colours specified per vertex can be set up efficiently using glColorPointer. Similarly, you should use glTexCoordPointer/glVertexAttribPointer/glVertexPointer to replace all those individual per-vertex calls with a single call referencing the data in an array. Then render all your quads with a single (or at most a handful of) glDrawArrays or glDrawElements calls. You'll see a huge improvement from this even without VBOs (which just change where the arrays are stored).
You mention you want to change shaders (between ShaderA and ShaderB, say) on a quad-by-quad basis. You should either:
Arrange things so you can batch all of the ShaderA quads together and all the ShaderB quads together, and render each batch with a single call. Changing shaders is generally quite expensive, so you want to minimise the number of changes.
or
Implement all the different shader logic you want in a single "unified" shader, selected by another vertex attribute that switches between the different codepaths (a sketch follows below). Whether this is anywhere near as efficient as the batching approach (which is preferable) depends on whether each "tile" of SIMD shader invocations tends to run a mixture of paths or just one.
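A hedged sketch of that "unified" fragment shader (names invented; the selector is assumed to come from a per-vertex attribute forwarded as a flat varying):

#version 330 core
flat in int vShaderSelect;       // 0 = plain vertex-color path, 1 = texture-based path
in vec4 vColor;
in vec2 vQuadUV;
uniform sampler2D uCornerColors; // hypothetical texture used by the second path
out vec4 fragColor;

void main()
{
    if (vShaderSelect == 0) {
        fragColor = vColor;                            // triangular interpolation
    } else {
        fragColor = texture(uCornerColors, vQuadUV);   // quad-style blend from a texture
    }
}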