Varyings from all vertices in fragment shader with no interpolation. Why not?

If we pass a varying from any of the geometry-processing stages (vertex, geometry or tessellation shader) to the fragment shader, we always lose some information. Basically, we lose it in one of two ways:
By interpolation: smooth, noperspective or centroid - it does not matter. If we pass 3 floats (one per vertex) in the geometry stage, we get only one blended float in the fragment stage.
By discarding. With flat interpolation, the hardware discards all values except the one from the provoking vertex.
Why does OpenGL not allow functionality like this:
Vertex shader:
// nointerp is an interpolation qualifier I would like to have
// along with smooth or flat.
nointerp out float val;
void main()
{
    val = whatever;
}
Fragment shader:
nointerp in float val[3];
// val[0] might contain the value from the provoking vertex,
// and the remaining val[] elements the values from the other vertices in winding order.
void main()
{
    // some code
}
In GLSL 330 I have to resort to integer-indexing tricks or to dividing by barycentric coordinates in the fragment shader if I want the values from all vertices.
Is it hard to implement in hardware, or is it not widely requested by shader coders? Or am I not aware of it?

Is it hard to implement in hardware, or is it not widely requested by shader coders?
It is usually just not needed by typical shading algorithms, so traditionally there has only been the (more or less) automatic interpolation for each fragment. It is probably not too hard to implement on current-generation hardware, because modern desktop GPUs typically use "pull-model interpolation" (see Fabian Giesen's blog article) anyway, meaning the actual interpolation is already done in the shader; the fixed-function hardware just provides the interpolation coefficients. But this is hidden from you by the driver.
Or am I not aware of it?
Well, in unextended GL, there is currently (GL 4.6) no such feature. However, there are two related GL extensions:
GL_AMD_shader_explicit_vertex_parameter
GL_NV_fragment_shader_barycentric
which basically provide the features you are asking for.
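With GL_NV_fragment_shader_barycentric, for example, the fragment shader gets access to un-interpolated per-vertex values and to the barycentric weights. A rough sketch of what that looks like (check the extension spec for the exact declaration rules; the variable names are illustrative):
Fragment shader:
#version 450
#extension GL_NV_fragment_shader_barycentric : require

// per-vertex, un-interpolated values, indexed 0..2 for a triangle
pervertexNV in float val[3];

out vec4 fragColor;

void main()
{
    // the built-in barycentric weights let you reconstruct the usual
    // smooth interpolation by hand, or do something entirely different
    float mixed = gl_BaryCoordNV.x * val[0]
                + gl_BaryCoordNV.y * val[1]
                + gl_BaryCoordNV.z * val[2];
    fragColor = vec4(val[0], val[1], val[2], mixed);
}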

Related

Handling per-primitive normals when there are fewer vertices than primitives in OpenGL 4.5

I've had a bit of trouble coming up with a way to pass the correct normal for each triangle primitive to the fragment shader in OpenGL 4.5, so that I can use per-triangle normals while doing indexed triangle rendering. (I want to use an IBO.)
My current solution, which works for some models, is to treat the first vertex of each primitive as the provoking vertex and store the primitive's normal as that vertex's normal (of course adding the flat qualifier to the normal attribute in the shaders).
This should work for most models but I've realized that it just doesn't work when there are more triangle primitives than vertices in a model. The simplest example I can come up with is a triangular bipyramid.
Is there a typical way this is done in industry for OpenGL? Are models in industry simply so large that per-vertex normals are easier to implement and look better?
As others mentioned in the comments, "in the industry" one would often duplicate vertices that have discontinuous normals. This is unavoidable when only parts of your geometry are flat shaded and parts are smooth, or there are creases in it.
If your geometry is entirely flat shaded, an alternative is to use gl_PrimitiveID to fetch the per-primitive normal from an SSBO in the fragment shader:
layout(std430, binding = 0) buffer NormalsBuffer {
    vec4 NORMALS[];
};

void main() {
    vec3 normal = NORMALS[gl_PrimitiveID].xyz;
    // ...
}
You can also use the unpackSnorm2x16 or similar functions to read normals stored in smaller datatypes and thus reduce the bandwidth, much like with vertex array attributes.
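For instance, if the host side packs each normal's x/y and z components with packSnorm2x16-style packing into a uvec2 (this layout is just an assumption for the sketch), the fragment shader could look like this:
layout(std430, binding = 0) buffer PackedNormalsBuffer {
    uvec2 PACKED_NORMALS[]; // x/y in the first uint, z (plus padding) in the second
};

void main() {
    uvec2 p = PACKED_NORMALS[gl_PrimitiveID];
    vec3 normal = normalize(vec3(unpackSnorm2x16(p.x), unpackSnorm2x16(p.y).x));
    // ...
}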

Calculating surface normals of dynamic mesh (without geometry shader)

I have a mesh whose vertex positions are generated dynamically by the vertex shader. I've been using https://www.khronos.org/opengl/wiki/Calculating_a_Surface_Normal to calculate the surface normal for each primitive in the geometry shader, which seems to work fine.
Unfortunately, I'm planning on switching to an environment where using a geometry shader is not possible. I'm looking for alternative ways to calculate surface normals. I've considered:
Using compute shaders in two passes. One to generate the vertex positions, another (using the generated vertex positions) to calculate the surface normals, and then passing that data into the shader pipeline.
Using ARB_shader_image_load_store (or related) to write the vertex positions to a texture (in the vertex shader), which can then be read from the fragment shader. The fragment shader should be able to safely access the vertex positions (since it will only ever access the vertices used to invoke the fragment), and can then calculate the surface normal per fragment.
I believe both of these methods should work, but I'm wondering if there is a less complicated way of doing this, especially considering that this seems like a fairly common task. I'm also wondering if there are any problems with either of the ideas I've proposed, as I've had little experience with both compute shaders and image_load_store.
See Diffuse light with OpenGL GLSL. If you just want the face normals, you can use the partial derivative functions dFdx and dFdy. The screen-space derivatives of the interpolated position are tangent to the triangle's surface, so their cross product gives the face normal. A basic fragment shader that calculates the normal vector (N) in the same space as the position:
in vec3 position;

void main()
{
    vec3 dx = dFdx(position);
    vec3 dy = dFdy(position);
    vec3 N = normalize(cross(dx, dy));
    // [...]
}
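For completeness, the matching vertex shader only has to forward the position in a consistent space; a minimal sketch (the uniform names and the choice of world space are assumptions):
#version 330 core

layout(location = 0) in vec3 in_position;

uniform mat4 modelMatrix;          // assumed uniforms
uniform mat4 viewProjectionMatrix;

out vec3 position; // consumed by the fragment shader above

void main()
{
    vec4 worldPos = modelMatrix * vec4(in_position, 1.0);
    position = worldPos.xyz;
    gl_Position = viewProjectionMatrix * worldPos;
}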

OpenGL: Passing random positions to the Vertex Shader

I am starting to learn OpenGL (3.3+), and now I am trying to write an algorithm that draws 10000 points at random positions on the screen.
The problem is that I don't know exactly where to implement the algorithm. Since the points are random, I can't declare them in a VBO (or can I?), so I was thinking of passing a uniform value to the vertex shader with the varying position (I would do a loop, changing the uniform value each iteration). Then I would repeat the operation 10000 times. I would also pass a random color value to the shader.
Here is roughly my thought:
#version 330 core

uniform vec3 random_position;
uniform vec3 random_color;

out vec3 Color;

void main() {
    gl_Position = vec4(random_position, 1.0); // gl_Position is a vec4
    Color = random_color;
}
In this way I would do the calculations outside the shaders and just pass them in through the uniforms, but I think a better way would be to do these calculations inside the vertex shader. Would that be right?
The vertex shader will be called for every vertex you pass to the vertex shader stage. The uniforms are the same for each of these calls. Hence you shouldn't pass the vertices - random or not - as uniforms. If you have global transformations (e.g. a camera rotation, a model matrix, etc.), those go into uniforms.
Your vertices should be passed as a vertex buffer object. Just generate them randomly in your host application and draw them. They will automatically become the in variables of your shader.
You can change the array in every iteration; however, it might be a good idea to keep its size constant. For this it is sometimes useful to pass the 3D vector as a 4-component vector, with the extra component being 1 if the vertex is used and 0 otherwise. This way you can simply check whether a vertex should be drawn or not.
Then just clear the GL_COLOR_BUFFER_BIT and draw the arrays before updating the screen.
In your vertex shader, just set gl_Position from your in variables (i.e. the vertices) and pass the color on to the fragment shader - it is not applied in the vertex shader yet.
In the fragment shader, the color is what you write to the fragment output. So just take the variable you passed in from the vertex shader and assign it to your own out variable (or gl_FragColor in older/compatibility GLSL).
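Put together, a minimal shader pair along these lines might look as follows (attribute locations and variable names are just illustrative):
Vertex shader:
#version 330 core

layout(location = 0) in vec3 in_position; // per-vertex data from the VBO
layout(location = 1) in vec3 in_color;

out vec3 Color;

void main()
{
    gl_Position = vec4(in_position, 1.0);
    Color = in_color;
}
Fragment shader:
#version 330 core

in vec3 Color;

out vec4 FragColor;

void main()
{
    FragColor = vec4(Color, 1.0);
}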
By the way, if you draw something as GL_POINTS, the points will show up as little squares. There are various tricks to make them actually round; the easiest is probably this simple if in the fragment shader, based on gl_PointCoord. In the compatibility profile you have to enable point sprites for this (glEnable(GL_POINT_SPRITE)); in the core profile gl_PointCoord is always available.
if (dot(gl_PointCoord - vec2(0.5, 0.5), gl_PointCoord - vec2(0.5, 0.5)) > 0.25)
    discard;
I suggest you read up a little on what the fragment and vertex shaders do, what vertices and fragments are, and what their respective in/out/uniform variables represent.
Since programs with full vertex buffer objects, shader programs etc. get quite large, you can also start out with glBegin() and glEnd() to draw vertices directly (compatibility profile only). However, this should only be a very early starting point to understand what you are drawing where and how the different shaders affect it.
The lighthouse3d tutorials (http://www.lighthouse3d.com/tutorials/) usually are a good start, though they might be a bit outdated. Also a good reference is the glsl wiki (http://www.opengl.org/wiki/Vertex_Shader) which is up to date in most cases - but it might be a bit technical.
Whether or not you are working with C++, Java, or other languages - the concepts for OpenGL are usually the same, so almost all tutorials will do well.

Why is gl_Color not a built-in variable for the fragment shader?

The vertex shader is expected to output vertices positions in clip space:
Vertex shaders, as the name implies, operate on vertices.
Specifically, each invocation of a vertex shader operates on a single
vertex. These shaders must output, among any other user-defined
outputs, a clip-space position for that vertex. (source: Learning Modern 3D Graphics Programming, by Jason L. McKesson)
It has a built-in variable named gl_Position for that.
Similarly, the fragment shader is expected to output colors:
A fragment shader is used to compute the output color(s) of a
fragment. [...] After the fragment shader executes, the fragment
output color is written to the output image. (source: Learning
Modern 3D Graphics Programming, by Jason L. McKesson)
but there is no gl_Color built-in variable defined for that as stated here: opengl44-quick-reference-card.pdf
Why that (apparent) inconsistency in the OpenGL API?
That is because the OpenGL pipeline uses gl_Position for several tasks. The manual says: "The value written to gl_Position will be used by primitive assembly, clipping, culling and other fixed functionality operations, if present, that operate on primitives after vertex processing has occurred."
In contrast, the pipeline logic does not depend on the final pixel color.
The accepted answer does not adequately explain the real situation:
gl_Color was indeed used once upon a time, but it was always defined as an input variable.
In compatibility GLSL, gl_Color is the per-vertex color attribute in vertex shaders, and in fragment shaders it takes on the value of gl_FrontColor or gl_BackColor depending on which side of the polygon you are shading.
However, none of this behavior exists in newer versions of GLSL. You must supply your own vertex attributes, your own varyings and you pick between colors using the value of gl_FrontFacing. I actually explained this in more detail in a different question related to OpenGL ES 2.0, but the basic principle is the same.
In fact, since gl_Color was already used as an input variable this is why the output of a fragment shader is called gl_FragColor instead. You cannot have a variable serve both as an input and an output in the same stage. The existence of an inout storage qualifier may lead you to believe otherwise, but that is for function parameters only.
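For illustration, the modern replacement for the old two-sided gl_Color behaviour might look roughly like this in a core-profile fragment shader (the variable names are, of course, up to you):
#version 330 core

in vec4 frontColor; // user-defined varyings instead of gl_FrontColor/gl_BackColor
in vec4 backColor;

out vec4 fragColor; // user-defined output instead of gl_FragColor

void main()
{
    fragColor = gl_FrontFacing ? frontColor : backColor;
}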

GLSL shader for each situation

In my game I want to create separate GLSL shaders for each situation. For example, if I had 3 models - a character, a shiny sword and a blurry ghost - I would like to assign renderShader, animationShader and lightingShader to the character, then renderShader, lightingShader and specularShader to the shiny sword, and finally renderShader, lightingShader and blurShader to the blurry ghost.
The renderShader should multiply the vertex positions by the projection, world and other matrices, and its fragment shader should simply apply the texture to the model.
animationShader should transform vertices by given bone transforms.
lightingShader should do the lighting and specularShader the specular lighting.
blurShader should do the blur effect.
Now, first of all, how can I do multiple vertex transforms in different shaders? The animationShader should calculate the animated vertex positions, and then the renderShader should take those positions and transform them by some matrices.
Secondly, how can I change the color of fragments in different shaders?
The basic idea is that I want to be able to use different shaders for each situation/effect, and I don't know how to achieve that.
I need to know how I should use these shaders in OpenGL, and how I should use GLSL, so that the shaders complement each other and no shader cares whether another shader is used or not.
What you're asking for is decidedly non-trivial, and is probably extreme overkill for the relatively limited number of "shader" types you describe.
Doing what you want will require developing what is effectively your own shading language. It may be a highly #defined version of GLSL, but the shaders you write would not be pure GLSL. They would have specialized hooks and be written in ways that code could be expected to flow into other code.
You'll need to have your own way of specifying the inputs and outputs of your language. When you want to connect shaders together, you have to say which shader's outputs go to which shader's inputs. Some inputs can come from actual shader stage inputs, while others come from other shaders. Some outputs written by a shader will be actual shader stage outputs, while others will feed other shaders.
Therefore, a shader that needs an input from another shader must execute after that other shader. Your system will have to work out the dependency graph.
Once you've figured out all of the inputs and outputs for a specific sequence of shaders, you have to take all of those shader text files and compile them into GLSL, as appropriate. Obviously, this is a non-trivial process.
Your shader language might look like this:
INPUT vec4 modelSpacePosition;
OUTPUT vec4 clipSpacePosition;
uniform mat4 modelToClipMatrix;
void main()
{
    clipSpacePosition = modelToClipMatrix * modelSpacePosition;
}
Your "compiler" will need to do textual transformations on this, converting references to modelSpacePosition into an actual vertex shader input or a variable written by another shader, as appropriate. Similarly, if clipSpacePosition is to be written to gl_Position, you will need to convert all uses of clipSpacePosition to gl_Position. Also, you will need to remove the explicit output declaration.
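For the snippet above, the generated vertex shader might come out looking roughly like this (assuming modelSpacePosition maps to a vertex attribute and clipSpacePosition to gl_Position):
#version 330

in vec4 modelSpacePosition; // the INPUT became a real vertex shader input

uniform mat4 modelToClipMatrix;

void main()
{
    // every use of clipSpacePosition was rewritten to gl_Position
    gl_Position = modelToClipMatrix * modelSpacePosition;
}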
In short, this will be a lot of work.
If you're going to do this, I would strongly urge you to avoid trying to merge the concepts of vertex and fragment shaders. Keep this shader system working within the well-defined shader stages. So your "lightingShader" would need to be either a vertex shader or a fragment shader. If it's a fragment shader, then one of the shader components in the vertex shader stage that feeds into it will need to provide a normal in some way, or you'll need the fragment shader component to compute the normal via some mechanism.
Effectively, for every combination of shader stages you'll have to create an individual shader program. To save work and avoid redundancy, you'd use some caching structure to create a program for each requested combination only once and reuse it whenever it is requested.
You can do something similar with the shader stages themselves. However, a shader stage cannot be linked from several compilation units (yet; this is an ongoing effort in OpenGL development, and the separable shaders of OpenGL 4 are a stepping stone there). But you can compile a shader from several source strings. So you'd write the functions for each desired effect into a separate source and then combine them at compile time, and again use a caching structure to map source-module combinations to shader objects.
Update due to comment
Let's say you want some modularity. For this we can exploit the fact that glShaderSource accepts multiple source strings and simply concatenates them. You write a number of shader modules, one of them doing the per-vertex illumination calculations:
uniform vec3 light_positions[N_LIGHT_SOURCES];
out vec3 light_directions[N_LIGHT_SOURCES];
out vec3 light_halfdirections[N_LIGHT_SOURCES];
void illum_calculation()
{
    for (int i = 0; i < N_LIGHT_SOURCES; i++) {
        light_directions[i] = ...;
        light_halfdirections[i] = ...;
    }
}
You put this into illum_calculation.vs.glslmod (the filenames and extensions are arbitrary). Next you have a small module that does bone animation:
uniform vec4 armature_pose[N_ARMATURE_BONES];
uniform vec3 armature_bones[N_ARMATURE_BONES];
in vec3 vertex_position;
void skeletal_animation()
{
    /* ... */
}
Put this into illum_skeletal_anim.vs.glslmod. Then you have some common header:
#version 330
uniform ...;
in ...;
and a common tail, which contains the main function that invokes all the different modules:
void main() {
    skeletal_animation();
    illum_calculation();
}
and so on. Now you can load all those modules, in the right order, into a single shader stage. You can do the same for all shader stages. The fragment shader is special, since it can write to several framebuffer targets at the same time (in sufficiently recent OpenGL versions). Technically you can also pass a lot of varyings between the stages, so you could pass a separate set of varyings between shader stages for each framebuffer target. However, the geometry and the transformed vertex positions are common to all of them.
You have to provide a different shader program for each model you want to render.
You can switch between different shader combinations using the glUseProgram function.
So before rendering your character, shiny sword or whatever, you have to set up the appropriate shader attributes and uniforms.
So it is really a question of how you design your game's code: you need to provide all the uniforms to the shader, for example light information and texture samplers, and you must enable all necessary vertex attributes of the shader in order to assign position, color and so on.
These attributes can differ between shaders, and your client-side models can also have different kinds of vertex attribute structures.
That means your code's data model directly influences the assigned shader and depends on it.
If you want to share common code between different shader programs, e.g. an illuminateDiffuse function, you have to factor this function out and provide it to your shaders by simply inserting the string that contains the function into your shader's source code, which is itself nothing more than a string. This way you can achieve a kind of modularity or include-like behavior through string manipulation of your shader code, for example as shown below.
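Such a shared piece of code could be as simple as this hypothetical illuminateDiffuse module, which you prepend to every shader source that needs it:
// diffuse.glslmod - shared between shader programs by string concatenation
vec3 illuminateDiffuse(vec3 normal, vec3 lightDir, vec3 lightColor, vec3 albedo)
{
    float ndotl = max(dot(normalize(normal), normalize(lightDir)), 0.0);
    return albedo * lightColor * ndotl;
}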
In any case, the shader compiler will tell you what's wrong.
Best regards