Can I somehow affect the rasterization step in WebGL? (custom draw mode or something) - glsl

Can I somehow write a custom rasterization algorithm for WebGL? What I mean:
By default in WebGL you can choose a draw mode, like TRIANGLES, LINES, POINTS and so on. These modes decide which pixels are handled by the fragment shader. The question is: can I set these rules for which pixels are handled by the fragment shader myself?

Related

How does the rasterizer create fragments?

Does the rasterizer (in OpenGL) create one fragment for each pixel the triangle is mapped to? So if we have 4 triangles, each triangle covers the whole screen (each triangle has a different z value), and my resolution is 1080*720, are 1080*720*4 fragments then created?
I got confused about this concept because I haven't seen it stated clearly anywhere. And will the fragment shader then process all these fragments, or are they discarded based on the depth function settings before shading?
I'm assuming there is no multisampling.
That's pretty much the crux of it. The only complication in this case may be thrown up by depth testing, which may discard fragments if the Z test fails. So assuming each triangle is rendered in front of the preceding triangle, then yes, you'll have 1080*720*4 fragments.

GLSL shader: occlusion order and culling

I have a GLSL shader that draws a 3D curve given a set of Bezier curves (3D coordinates of points). The drawing itself works as I want, except that the occlusion does not work correctly, i.e., under certain viewpoints the curve that is supposed to be at the very front appears to still be occluded, and the reverse: the part of a curve that is supposed to be occluded is still visible.
To illustrate, here are a couple of example screenshots:
Colored curve is closer to the camera, so it is rendered correctly here.
Colored curve is supposed to be behind the gray curve, yet it is rendered on top.
I'm new to GLSL and might not know the right term for this kind of effect, but I assume it is occlusion culling (update: it actually indicates a problem with the depth buffer; the terminology confused me!).
My question is: How do I deal with occlusions when using GLSL shaders?
Do I have to treat them inside the shader program, or somewhere else?
Regarding my code, it's a bit long (plus I use OpenGL wrapper library), but the main steps are:
In the vertex shader, I calculate gl_Position = ModelViewProjectionMatrix * Vertex; and pass further the color info to the geometry shader.
In the geometry shader, I take 4 control points (lines_adjacency) and their corresponding colors and produce a triangle strip that follows a Bezier curve (I use some basic color interpolation between the Bezier segments).
The fragment shader is also simple: gl_FragColor = VertexIn.mColor;.
Regarding the OpenGL settings, I enable GL_DEPTH_TEST, but it does not seem to do what I need. Also, if I put any other non-shader geometry in the scene (e.g. a quad), the curves are always rendered on top of it regardless of the viewpoint.
Any insights and tips on how to resolve it and why it is happening are appreciated.
Update solution
So the initial problem, as I learned, was not about finding a culling algorithm, but that I did not handle the calculation of the z-values correctly (see the accepted answer). I also learned that, given the right depth buffer set-up, OpenGL handles the occlusions correctly by itself, so I do not need to reinvent the wheel.
I searched through my GLSL program and found that I basically set the z-values to zero in my geometry shader when translating the vertex coordinates to screen coordinates (vec2( vertex.xy / vertex.w ) * Viewport;). I fixed it by calculating the z-values (vertex.z/vertex.w) separately and assigning them to the emitted points (gl_Position = vec4( screenCoords[i], zValues[i], 1.0 );). That solved my problem.
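In case it helps, here is a stripped-down sketch of the relevant part of my geometry shader after the fix; the Bezier evaluation, the color interpolation and the real emit loop are left out, Viewport is the viewport scale mentioned above, and the helper function exists only for illustration:

    #version 150
    layout(lines_adjacency) in;
    layout(triangle_strip, max_vertices = 128) out;

    uniform vec2 Viewport;                    // viewport scale used for the screen-space conversion

    void emitScreenPoint(vec4 clipPos)        // clipPos is the clip-space position from the vertex shader
    {
        vec2 screen = (clipPos.xy / clipPos.w) * Viewport;  // what I already had
        float zNdc  = clipPos.z / clipPos.w;                // the value I used to set to zero
        gl_Position = vec4(screen, zNdc, 1.0);              // w = 1: xy and z are already divided
        EmitVertex();
    }

    void main()
    {
        // The actual Bezier evaluation and strip construction are omitted here;
        // each point of the strip goes through emitScreenPoint().
        for (int i = 0; i < 4; ++i)
            emitScreenPoint(gl_in[i].gl_Position);
        EndPrimitive();
    }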
Regarding the depth buffer settings, I didn't have to specify them explicitly, since the library I use sets them up by default the way I need.
If you don't use the depth buffer, then the most recently rendered object will always be on top.
You should enable it with glEnable(GL_DEPTH_TEST), set the function to your liking (glDepthFunc(GL_LEQUAL)), and make sure you clear it every frame with everything else (glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)).
Then make sure your vertex shader is properly setting the Z value of the final vertex. It looks like the simplest way for you is to set the "Model" portion of ModelViewProjectionMatrix on the CPU side to have a depth value before it gets passed into the shader.
As long as you're using an orthographic projection matrix, rendering should not be affected (besides making the draw order correct).

Difference between tessellation shaders and Geometry shaders

I'm trying to develop a high-level understanding of the graphics pipeline. One thing that doesn't make much sense to me is why the geometry shader exists. The tessellation and geometry shaders seem to do the same thing to me. Can someone explain what the geometry shader does differently from the tessellation shader that justifies its existence?
The tessellation shader is for variable subdivision. An important part is adjacency information so you can do smoothing correctly and not wind up with gaps. You could do some limited subdivision with a geometry shader, but that's not really what it's for.
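For a rough picture of what the tessellation stages look like, here is a minimal sketch (GLSL 4.00): the control shader picks subdivision levels per patch and the evaluation shader places the vertices the tessellator generates. The constant levels are placeholders; in practice you would compute them, e.g. from distance to the camera.

    // Tessellation control shader: decide how finely to subdivide each patch.
    #version 400
    layout(vertices = 3) out;
    void main()
    {
        gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
        if (gl_InvocationID == 0) {
            gl_TessLevelOuter[0] = 8.0;   // placeholder levels; normally computed per patch
            gl_TessLevelOuter[1] = 8.0;
            gl_TessLevelOuter[2] = 8.0;
            gl_TessLevelInner[0] = 8.0;
        }
    }

    // Tessellation evaluation shader: position the vertices the tessellator generated.
    #version 400
    layout(triangles, equal_spacing, ccw) in;
    void main()
    {
        // gl_TessCoord is barycentric for triangle patches; a real shader would
        // displace or smooth here (e.g. evaluate a curved patch) instead of
        // producing a flat triangle.
        gl_Position = gl_TessCoord.x * gl_in[0].gl_Position
                    + gl_TessCoord.y * gl_in[1].gl_Position
                    + gl_TessCoord.z * gl_in[2].gl_Position;
    }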
Geometry shaders operate per-primitive. For example, if you need to do stuff for each triangle (such as this), do it in a geometry shader. I've heard of shadow volume extrusion being done. There's also "conservative rasterization" where you might extend triangle borders so every intersected pixel gets a fragment. Examples are pretty application specific.
Yes, they can also generate more geometry than the input but they do not scale well. They work great if you want to draw particles and turn points into very simple geometry. I've implemented marching cubes a number of times using geometry shaders too. Works great with transform feedback to save the resulting mesh.
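To make the particle case concrete, a minimal sketch of a geometry shader that expands each incoming point into a small screen-aligned quad; pointSize is an illustrative uniform in clip-space units:

    #version 150
    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;

    uniform float pointSize;   // half-width of the quad in clip space (illustrative name)

    void main()
    {
        vec4 c = gl_in[0].gl_Position;   // incoming point, already in clip space
        gl_Position = c + vec4(-pointSize, -pointSize, 0.0, 0.0); EmitVertex();
        gl_Position = c + vec4( pointSize, -pointSize, 0.0, 0.0); EmitVertex();
        gl_Position = c + vec4(-pointSize,  pointSize, 0.0, 0.0); EmitVertex();
        gl_Position = c + vec4( pointSize,  pointSize, 0.0, 0.0); EmitVertex();
        EndPrimitive();
    }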
Transform feedback has also been used with the geometry shader to do more compute operations. One particularly useful mechanism is that it does stream compaction for you (packs its varying amount of output tightly so there are no gaps in the resulting array).
The other very important thing a geometry shader provides is routing to layered render targets (texture arrays, faces of a cube, multiple viewports), something which must be done per-primitive. For example you can render cube shadow maps for point lights in a single pass by duplicating and projecting geometry 6 times to each of the cube's faces.
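A sketch of the cube map case, assuming the framebuffer has a layered cube map attachment, the incoming positions are still in world space, and faceMatrices is a hypothetical uniform array holding the six face view-projection matrices:

    #version 150
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 18) out;

    uniform mat4 faceMatrices[6];   // assumed: one view-projection matrix per cube face

    void main()
    {
        for (int face = 0; face < 6; ++face) {
            gl_Layer = face;        // routes this copy of the triangle to the matching cube face
            for (int i = 0; i < 3; ++i) {
                gl_Position = faceMatrices[face] * gl_in[i].gl_Position;
                EmitVertex();
            }
            EndPrimitive();
        }
    }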
Not exactly a complete answer but hopefully gives the gist of the differences.
See Also:
http://rastergrid.com/blog/2010/09/history-of-hardware-tessellation/

Multisampling in pipeline

In multisampling, during rasterization there is more than one sample point in each pixel, and it is decided which of those sample points are covered by the primitive.
Which attributes are the same for each sample in a pixel? I read somewhere that the color and texture values are the same, but that the depth and stencil values for the samples in a pixel are different. But if the fragment shader is executed for each sample point, then they should all be different.
Also, when in the pipeline are the multiple samples resolved: after the fragment shader? And are they simply averaged?
First you have to realize how multisampling works and why it was created.
I am going to approach this from the perspective of anti-aliasing, because that was the primary use-case for multisampling up until multisample textures were introduced in GL3.
When something is multisampled, this means that each pixel contains multiple sample points, each storing its own sample. These samples may be identical to one another if a primitive has relatively uniform characteristics (e.g. has the same depth everywhere), and smart GL/hardware implementations are capable of identifying such situations and reducing memory bandwidth by reading/writing shared samples intelligently (similar to color/depth buffer compression). However, the cost in terms of required storage for a 4x MSAA framebuffer is the same as for 4x SSAA, because GL has to accommodate the worst-case scenario, where each of the 4 samples is unique.
Which attributes are the same for each sample in a pixel?
Each fragment may cover multiple sample points for attributes such as color, texture coordinates, etc. Rather than invoking the fragment shader 4x as frequently to achieve 4x anti-aliasing, a trick was devised where each attribute would be sampled at the fragment center (this is the default) and then a single output written to each of the covered sample locations. The default behavior is somewhat lacking in the situation where the center of a fragment is not part of the actual coverage area - for this, centroid sampling was introduced... vertex attributes will be interpolated at the center of a primitive's coverage area within a fragment rather than the center of the fragment itself.
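In GLSL, opting into this is just a matter of adding the centroid qualifier to the varying in question; a minimal sketch (names are illustrative):

    // Vertex shader: mark the varying for centroid interpolation.
    #version 150
    in vec4 position;
    in vec2 uv;
    centroid out vec2 texCoord;
    void main() { texCoord = uv; gl_Position = position; }

    // Fragment shader: with 'centroid', texCoord is interpolated at a covered
    // location inside the pixel rather than at the pixel center.
    #version 150
    centroid in vec2 texCoord;
    uniform sampler2D tex;
    out vec4 fragColor;
    void main() { fragColor = texture(tex, texCoord); }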
Later, when it comes time to write a color to the framebuffer, all of these samples need to be averaged to produce a single pixel; we call this multisample resolve. This works well for some things, but it does not address issues of aliasing that occurs during fragment shading itself.
Texturing occurs during the execution of a fragment shader, and this means that the sample frequency for texturing remains the same, so MSAA generally does not help with texture aliasing. Thus, while supersample anti-aliasing improves both aliasing at geometric edges and texture / shader aliasing (things like specular highlights), multisampling generally only reduces "jaggies."
I read somewhere that the color and texture values are the same, but that the depth and stencil values for the samples in a pixel are different.
In short, anything that is computed in the fragment shader will be the same for all covered samples. Anything that can be determined before fragment shading (e.g. depth) may vary.
Fragment tests such as depth/stencil are evaluated for each sub-sample. But multisampled depth buffers carry some restrictions. Up until D3D 10.1, hardware was not required to support multisampled depth textures so you could not sample multisampled depth buffers in a fragment shader.
But if the fragment shader is executed for each sample point, then they should all be different.
There is a feature called sample shading, which can force an implementation of MSAA to work more like SSAA by improving the ratio between fragments shaded and samples generated during rasterization. But by default, the behavior you are describing is not how multisampling works.
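On the GLSL side (4.00+), a fragment shader can opt into per-sample execution either by using a sample-qualified input or by reading gl_SampleID; a hedged sketch:

    #version 400
    sample in vec2 texCoord;   // 'sample'-qualified inputs are interpolated per sample,
                               // which forces the shader to run once per covered sample
    uniform sampler2D tex;
    out vec4 fragColor;
    void main()
    {
        // Statically reading gl_SampleID has the same effect; the application can
        // also request it with glEnable(GL_SAMPLE_SHADING) + glMinSampleShading().
        fragColor = texture(tex, texCoord);
    }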
When in the pipeline are the multiple samples resolved: after the fragment shader? And are they simply averaged?
Multisample resolution occurs after fragment shading, any time you have to write the contents of a multisampled buffer into a single-sampled buffer. This includes things like glBlitFramebuffer (...). You can also implement multisample resolution manually in the fragment shader yourself, if you use multisampled textures.
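As a rough example of the manual route (assuming the multisampled color attachment is bound as a sampler2DMS and the sample count is passed in as a uniform), a simple box-filter resolve looks like this:

    #version 150
    uniform sampler2DMS msColor;   // the multisampled color attachment
    uniform int sampleCount;       // e.g. 4 for 4x MSAA
    out vec4 fragColor;

    void main()
    {
        ivec2 texel = ivec2(gl_FragCoord.xy);
        vec4 sum = vec4(0.0);
        for (int i = 0; i < sampleCount; ++i)
            sum += texelFetch(msColor, texel, i);   // third argument selects the sample
        fragColor = sum / float(sampleCount);       // plain average, i.e. a box filter
    }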
Finally, regarding the process used for multisample resolution, that is implementation-specific, as is the sample layout. If you ever open your display driver's control panel and look at the myriad of anti-aliasing options available, you will see multiple options for sample layout and MSAA resolve algorithm.
I would highly suggest you take a look at this article. While it is related to D3D10+ and not OpenGL, the general rules apply to both APIs (D3D9 has slightly different rules) and the quality of the diagrams is phenomenal.
In particular, pay special attention to the section on MSAA rasterization rules for triangles, which states:
For a triangle, a coverage test is performed for each sample location (not for a pixel center). If more than one sample location is covered, a pixel shader runs once with attributes interpolated at the pixel center. The result is stored (replicated) for each covered sample location in the pixel that passes the depth/stencil test.

Understanding the shader workflow in OpenGL?

I'm having a little bit of trouble conceptualizing the workflow used in a shader-based OpenGL program. While I've never really done any major projects using either the fixed-function or shader-based pipelines, I've started learning and experimenting, and it's become quite clear to me that shaders are the way to go.
However, the fixed-function pipeline makes much more sense to me from an intuitive perspective. Rendering a scene with that method is simple and procedural—like painting a picture. If I want to draw a box, I tell the graphics card to draw a box. If I want a lot of boxes, I draw my box in a loop. The fixed-function pipeline fits well with my established programming tendencies.
These all seem to go out the window with shaders, and this is where I'm hitting a block. A lot of shader-based tutorials show how to, for example, draw a triangle or a cube on the screen, which works fine. However, they don't seem to go into how I would apply these concepts in, for example, a game. If I wanted to draw three procedurally generated triangles, would I need three shaders? Obviously not, since that would be infeasible. Still, it's clearly not as simple as just sticking the drawing code in a loop that runs three times.
Therefore, I'm wondering what the "best practices" are for using shaders in game development environments. How many shaders should I have for a simple game? How do I switch between them and use them to render a real scene?
I'm not looking for specifics, just a general understanding. For example, if I had a shader that rendered a circle, how would I reuse that shader to draw different sized circles at different points on the screen? If I want each circle to be a different color, how can I pass some information to the fragment shader for each individual circle?
There is really no conceptual difference between the fixed-function pipeline and the programmable pipeline. The only thing shaders introduce is the ability to program certain stages of the pipeline.
On current hardware you have (for the most part) control over the vertex, primitive assembly, tessellation and fragment stages. Some operations that occur in between and after these stages are still fixed-function, such as depth/stencil testing, blending, perspective divide, etc.
Because shaders are really nothing more than programs that you drop in to define the input and output of a particular stage, you should think of input to a fragment shader as coming from the output of one of the previous stages. Vertex outputs are interpolated during rasterization, and these are often what you're dealing with when you have an in variable in a fragment shader.
You can also have program-wide variables, known as uniforms. These variables can be accessed by any stage simply by using the same name in each stage. They do not vary across invocations of a shader, hence the name uniform.
Now you should have enough information to figure out this circle example... you can use a uniform to scale your circle (likely a simple scaling matrix) and you can either rely on per-vertex color or a uniform that defines the color.
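To make that concrete, here is one minimal way it could look, assuming the circle geometry is a pre-built unit-circle vertex buffer; center, radius and circleColor are illustrative uniform names you would set (e.g. with glUniform*) before each draw call:

    // Vertex shader: one unit-circle mesh, repositioned and scaled per draw call.
    #version 150
    in vec2 position;      // vertices of a unit circle (a triangle fan, for example)
    uniform vec2 center;   // updated between draw calls
    uniform float radius;
    void main() { gl_Position = vec4(position * radius + center, 0.0, 1.0); }

    // Fragment shader: the color is just another uniform.
    #version 150
    uniform vec4 circleColor;
    out vec4 fragColor;
    void main() { fragColor = circleColor; }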
You don't have shaders that draw circles (OK, you can with the right tricks, but let's forget that for now, because it is misleading and has very rare and specific uses). Shaders are little programs you write to take care of certain stages of the graphics pipeline, and they are more specific than "drawing a circle".
Generally speaking, every time you make a draw call, you have to tell OpenGL which shaders to use (with a call to glUseProgram). You have to use at least a vertex shader and a fragment shader. The resulting pipeline will be something like:
Vertex Shader: the code that is executed for each of the vertices you send to OpenGL. It runs once for each index you send in the element array, and it takes as input the corresponding vertex attributes, such as the vertex position, its normal, its UV coordinates, maybe its tangent (if you are doing normal mapping), or whatever else you are sending to it. Generally you want to do your geometric calculations here. You can also access uniform variables you set up for your draw call, which are global variables that do not change per vertex. A typical uniform variable you might want to use in a vertex shader is the PVM matrix. If you don't use tessellation, the vertex shader writes gl_Position, the position the rasterizer uses to create fragments. You can also have the vertex shader output other things (such as the UV coordinates and the normals once you have dealt with their geometry), pass them to the rasterizer, and use them later (a minimal vertex/fragment pair along these lines is sketched right after this outline).
Rasterization
Fragment Shader: the code that is executed for each fragment (for each pixel, if that is clearer). Generally you do texture sampling and lighting calculations here. You will use the data coming from the vertex shader and the rasterizer, such as the normals (to evaluate diffuse and specular terms) and the UV coordinates (to fetch the right colors from the textures). The textures are going to be uniforms, and probably so are the parameters of the lights you are evaluating.
Depth test, stencil test (which you can move before the fragment shader with the early fragment test optimization: http://www.opengl.org/wiki/Early_Fragment_Test ).
Blending.
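As mentioned above, here is a minimal sketch of such a vertex/fragment shader pair (PVM, diffuseTex and lightDir are illustrative names; the lighting is deliberately simplistic):

    // Vertex shader: transform each vertex and hand data to the rasterizer.
    #version 150
    uniform mat4 PVM;              // projection * view * model, set once per draw call
    in vec3 position;
    in vec3 normal;
    in vec2 uv;
    out vec3 vNormal;
    out vec2 vUv;
    void main()
    {
        vNormal = normal;          // a real shader would use a proper normal matrix here
        vUv = uv;
        gl_Position = PVM * vec4(position, 1.0);
    }

    // Fragment shader: runs per fragment on the interpolated outputs above.
    #version 150
    uniform sampler2D diffuseTex;
    uniform vec3 lightDir;         // direction the light travels, in the same space as vNormal
    in vec3 vNormal;
    in vec2 vUv;
    out vec4 fragColor;
    void main()
    {
        float diffuse = max(dot(normalize(vNormal), -lightDir), 0.0);
        fragColor = vec4(texture(diffuseTex, vUv).rgb * diffuse, 1.0);
    }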
I suggest you look at this nice program for developing simple shaders, http://sourceforge.net/projects/quickshader/ , which has very good examples, including some more advanced things you won't find in every tutorial.