I am sorry to post a question that could easily be tested, but I don't have OGL4+ hardware at the moment and I have to make some design decisions beforehand, so I wanted a clear answer.
Suppose I have a variable produced in the vertex shader that I will not need until the fragment shader. If I also include the tessellation shaders, can I do something like:
//// Vertex shader
out vec3 foo;
// Ignore foo in tessellation control and eval shader
//// Fragment shader
in vec3 foo;
Or do I necessarily have to do something like:
//// Vertex shader
out vec3 fooCS;
// TCS
in vec3 fooCS;
out vec3 fooES;
//TES
in vec3 fooES;
out vec3 foo;
//// Fragment shader
in vec3 foo;
And in the latter case, should I use the [] qualifier to pass the variables?
Every shader stage that actually exists in your pipeline is responsible for feeding all values to the next one. So if you want to pass a variable from the vertex shader to the fragment shader, every intervening stage must provide that variable.
And this makes sense. Take tessellation. The whole point of tessellation is that new vertices get created. How could OpenGL automatically map the vertex shader output to arbitrary tessellation functions? Indeed, doing that mapping is why the TES exists; that's what it's for.
Same thing with the GS. It can generate new primitives and vertices.
As for using [], you will have to use it as is appropriate for the inputs and outputs of that particular stage.
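A minimal pass-through might look like this, assuming triangle patches (the fixed tessellation levels are only there to make the sketch complete):
//// Tessellation control shader
#version 400

layout(vertices = 3) out;

in  vec3 fooCS[];   // from the vertex shader, one entry per patch vertex
out vec3 fooES[];   // to the evaluation shader, one entry per output vertex

void main()
{
    fooES[gl_InvocationID] = fooCS[gl_InvocationID];
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;

    if (gl_InvocationID == 0) {
        gl_TessLevelInner[0] = 4.0;
        gl_TessLevelOuter[0] = 4.0;
        gl_TessLevelOuter[1] = 4.0;
        gl_TessLevelOuter[2] = 4.0;
    }
}

//// Tessellation evaluation shader
#version 400

layout(triangles, equal_spacing, ccw) in;

in  vec3 fooES[];   // per-patch-vertex input from the control shader
out vec3 foo;       // single interpolated value for the fragment shader

void main()
{
    foo = gl_TessCoord.x * fooES[0]
        + gl_TessCoord.y * fooES[1]
        + gl_TessCoord.z * fooES[2];
    gl_Position = gl_TessCoord.x * gl_in[0].gl_Position
                + gl_TessCoord.y * gl_in[1].gl_Position
                + gl_TessCoord.z * gl_in[2].gl_Position;
}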
I want to draw some objects using shaders. I name these objects obj_1, obj_2, ... obj_n.
Each object's data is stored in its own VBO. I name these VBOs vbo_1, vbo_2, ... vbo_n.
I need to use a vertex shader, a geometry shader, and a fragment shader for every object.
Every vertex shader just multiplies the vertex by a ModelViewProjection matrix.
Every fragment shader just sets the color.
However, every geometry shader is different.
Here is my plan A:
I create one program. The vertex shader has a uniform variable named ModelViewProjection. The fragment shader has a uniform variable named color.
In the loop over all objects, I set the vertex shader's ModelViewProjection uniform and the fragment shader's color uniform.
Then I change the geometry shader to the corresponding GLSL code.
However, I cannot find a way to replace the geometry shader in a program while keeping the vertex shader and fragment shader unchanged.
So I have a plan B:
I create as many programs as there are objects. That is feasible.
However, it means that the vertex shader and fragment shader are duplicated once per object (roughly the sketch below). It is a waste of space and it's hard to extend.
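For reference, plan B would look roughly like this (compileShader and the source arrays are placeholder names for however the sources are compiled and stored):

/* Plan B: one program per object; only the geometry shader source differs. */
for (int i = 0; i < objectCount; ++i) {
    GLuint vs = compileShader(GL_VERTEX_SHADER,   vertexSource);      /* same source every time */
    GLuint gs = compileShader(GL_GEOMETRY_SHADER, geometrySource[i]); /* per-object source */
    GLuint fs = compileShader(GL_FRAGMENT_SHADER, fragmentSource);    /* same source every time */

    programs[i] = glCreateProgram();
    glAttachShader(programs[i], vs);
    glAttachShader(programs[i], gs);
    glAttachShader(programs[i], fs);
    glLinkProgram(programs[i]);
}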
So my question is: is there a plan C for my case? Or is there a way to change the geometry shader within a program?
I am starting to learn OpenGL (3.3+), and now I am trying to write an algorithm that draws 10000 points at random positions on the screen.
The problem is that I don't know exactly where to implement the algorithm. Since the points are random, I can't declare them in a VBO (or can I?), so I was thinking of passing a uniform value to the vertex shader with the varying position (I would loop, changing the uniform value), and doing the operation 10000 times. I would also pass a random color value to the shader.
Here is roughly what I had in mind:
#version 330 core
uniform vec3 random_position;
uniform vec3 random_color;
out vec3 Color;
void main() {
    gl_Position = vec4(random_position, 1.0);
    Color = random_color;
}
In this way I would do the calculations outside the shaders and just pass them in through the uniforms, but I think a better way would be to do these calculations inside the vertex shader. Would that be right?
The vertex shader will be called for every vertex you pass to the vertex shader stage. The uniforms are the same for each of these calls. Hence you shouldn't pass the vertices - be they random or not - as uniforms. If you have global transformations (i.e. a camera rotation, a model matrix, etc.), those would go into the uniforms.
Your vertices should be passed as a vertex buffer object. Just generate them randomly in your host application and draw them. They will automatically become the in variables of your shader.
You can change the array in every iteration, but it might be a good idea to keep its size constant. For this it's sometimes useful to store the 3D position in a 4-component vector, with the fourth component being 1 if the vertex is used and 0 otherwise. This way you can simply check whether a vertex should be drawn or not.
Then just clear the GL_COLOR_BUFFER_BIT and draw the arrays before updating the screen.
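A rough host-side sketch of that (C-style; randf is a placeholder helper, and a vertex array object is assumed to be bound already, as the core profile requires):

#define POINT_COUNT 10000

/* Generate random positions and colors on the CPU: x,y,z + r,g,b per point. */
static GLfloat data[POINT_COUNT * 6];
for (int i = 0; i < POINT_COUNT; ++i) {
    data[i * 6 + 0] = randf(-1.0f, 1.0f);   /* clip-space x */
    data[i * 6 + 1] = randf(-1.0f, 1.0f);   /* clip-space y */
    data[i * 6 + 2] = 0.0f;                 /* z */
    data[i * 6 + 3] = randf(0.0f, 1.0f);    /* r */
    data[i * 6 + 4] = randf(0.0f, 1.0f);    /* g */
    data[i * 6 + 5] = randf(0.0f, 1.0f);    /* b */
}

/* Upload to a VBO and describe the two attributes (locations 0 and 1). */
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_DYNAMIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (void*)0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (void*)(3 * sizeof(GLfloat)));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);

/* Each frame: clear, then draw all points. */
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_POINTS, 0, POINT_COUNT);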
In your shader just set gl_Position from your in variables (i.e. the vertices) and pass the color on to the fragment shader - it is not applied in the vertex shader yet.
In the fragment shader, the color you write to the output variable (e.g. gl_FragColor) is the fragment's color. So just assign the color passed in from the vertex shader to it.
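Put together, the shader pair might look roughly like this (the attribute locations 0 and 1 are an assumption and have to match your glVertexAttribPointer calls):
//// Vertex shader
#version 330 core

layout(location = 0) in vec3 position;  // one random point per vertex, from the VBO
layout(location = 1) in vec3 color;     // per-vertex random color, from the VBO

out vec3 Color;

void main() {
    gl_Position = vec4(position, 1.0);
    Color = color;
}

//// Fragment shader
#version 330 core

in vec3 Color;
out vec4 FragColor;

void main() {
    FragColor = vec4(Color, 1.0);
}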
By the way, if you draw something as GL_POINTS it will show up as little squares. There are lots of tricks to make the points actually round; probably the easiest is this simple if in the fragment shader. Note that in the compatibility profile you have to enable point sprites (glEnable(GL_POINT_SPRITE)) for gl_PointCoord to work, while in the 3.2+ core profile points always behave as point sprites.
if(dot(gl_PointCoord - vec2(0.5,0.5), gl_PointCoord - vec2(0.5,0.5)) > 0.25)
    discard;
I suggest you to read up a little on what the fragment and vertex shader do, what vertices and fragments are and what their respective in/out/uniform variables represent.
Since programs with full vertex buffer objects, shader programs etc. get quite large, you can also start out with glBegin() and glEnd() to draw vertices directly (immediate mode, which requires a compatibility context). However, this should only be a very early starting point to understand what you are drawing where and how the different shaders affect it.
The lighthouse3d tutorials (http://www.lighthouse3d.com/tutorials/) usually are a good start, though they might be a bit outdated. Also a good reference is the glsl wiki (http://www.opengl.org/wiki/Vertex_Shader) which is up to date in most cases - but it might be a bit technical.
Whether or not you are working with C++, Java, or other languages - the concepts for OpenGL are usually the same, so almost all tutorials will do well.
In my game I want to create separate GLSL shaders for each situation. For example, if I had three models - a character, a shiny sword and a blurry ghost - I would like to assign renderShader, animationShader and lightingShader to the character; renderShader, lightingShader and specularShader to the shiny sword; and renderShader, lightingShader and blurShader to the blurry ghost.
The renderShader should multiply the vertex positions by the projection, world and other matrices, and its fragment shader should simply apply the texture to the model.
animationShader should transform vertices by the given bone transforms.
lightingShader should do the lighting and specularShader should do the specular lighting.
blurShader should do the blur effect.
Now, first of all, how can I do multiple vertex transforms in different shaders? The animationShader should calculate the animated positions of the vertices, and then renderShader should take those positions and transform them by some matrices.
Secondly, how can I change the color of fragments in different shaders?
The basic idea is that I want to be able to use different shaders for each situation/effect, and I don't know how to achieve that.
I need to know how I should use these shaders in OpenGL, and how I should use GLSL so that all the shaders complement each other and no shader cares whether another shader is used or not.
What you're asking for is decidedly non-trivial, and is probably extreme overkill for the relatively limited number of "shader" types you describe.
Doing what you want will require developing what is effectively your own shading language. It may be a highly #defined version of GLSL, but the shaders you write would not be pure GLSL. They would have specialized hooks and be written in ways that code could be expected to flow into other code.
You'll need to have your own way of specifying the inputs and outputs of your language. When you want to connect shaders together, you have to say whose outputs go to which shader's inputs. Some inputs can come from actual shader stage inputs, while others come from other shaders. Some outputs written by a shader will be actual shader stage outputs, while others will feed other shaders.
Therefore, a shader who needs an input from another shader must execute after that other shader. Your system will have to work out the dependency graph.
Once you've figured out all of the inputs and outputs for a specific sequence of shaders, you have to take all of those shader text files and compile them into GLSL, as appropriate. Obviously, this is a non-trivial process.
Your shader language might look like this:
INPUT vec4 modelSpacePosition;
OUTPUT vec4 clipSpacePosition;
uniform mat4 modelToClipMatrix;
void main()
{
    clipSpacePosition = modelToClipMatrix * modelSpacePosition;
}
Your "compiler" will need to do textual transformations on this, converting references to modelSpacePosition into an actual vertex shader input or a variable written by another shader, as appropriate. Similarly, if clipSpacePosition is to be written to gl_Position, you will need to convert all uses of clipSpacePosition to gl_Position. Also, you will need to remove the explicit output declaration.
In short, this will be a lot of work.
If you're going to do this, I would strongly urge you to avoid trying to merge the concept of vertex and fragment shaders. Keep this shader system working within the well-defined shader stages. So your "lightingShader" would need to be either a vertex shader or a fragment shader. If it's a fragment shader, then one of the shaders in the vertex stage that feeds into it will need to provide a normal in some way, or the fragment-shader component will need to compute the normal via some mechanism.
Effectively, for every combination of the shader stages you'll have to create an individual shader program. To save work and redundancy you'd use some caching structure that creates the program for each requested combination only once and reuses it whenever it is requested again.
You can do something similar with the shader stages themselves. However, a shader stage cannot be linked from several compilation units (yet; this is an ongoing effort in OpenGL development, and the separable shaders of OpenGL 4 are a stepping stone there). But you can compile a shader from several sources. So you'd write the functions for each desired effect into separate sources and combine them at compile time, and again use a caching structure to map source-module combinations to shader objects.
Update due to comment
Let's say you want to have some modularity. For this we can exploit the fact that glShaderSource accepts multiple source strings; it simply concatenates them. You write a number of shader modules. One does the per-vertex illumination calculations:
uniform vec3 light_positions[N_LIGHT_SOURCES];
out vec3 light_directions[N_LIGHT_SOURCES];
out vec3 light_halfdirections[N_LIGHT_SOURCES];
void illum_calculation()
{
    for(int i = 0; i < N_LIGHT_SOURCES; i++) {
        light_directions[i] = ...;
        light_halfdirections[i] = ...;
    }
}
you put this into illum_calculation.vs.glslmod (the filename and extensions are arbitrary). Next you have a small module that does bone animation
uniform vec4 armature_pose[N_ARMATURE_BONES];
uniform vec3 armature_bones[N_ARMATURE_BONES];
in vec3 vertex_position;
void skeletal_animation()
{
/* ...*/
}
put this into illum_skeletal_anim.vs.glslmod. Then you have some common header
#version 330
uniform ...;
in ...;
and some common tail which contains the main function, which invokes all the different stages
void main() {
    skeletal_animation();
    illum_calculation();
}
and so on. Now you can load all those modules, in the right order, into a single shader stage, and you can do the same for every shader stage. The fragment shader is special, since it can write to several framebuffer targets at the same time (in recent enough OpenGL versions). And technically you can pass a lot of varyings between the stages, so you could pass your own set of varyings between shader stages for each framebuffer target. However, the geometry and the transformed vertex positions are common to all of them.
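As a concrete sketch of that concatenation (loadFile and the header/tail file names are placeholders for however you store and read the modules):

/* Concatenate the modules in order: common header, effect modules, common tail. */
const GLchar *sources[] = {
    loadFile("common_header.vs.glslmod"),        /* #version, shared uniforms/ins */
    loadFile("illum_skeletal_anim.vs.glslmod"),  /* skeletal_animation() */
    loadFile("illum_calculation.vs.glslmod"),    /* illum_calculation() */
    loadFile("common_tail.vs.glslmod")           /* main() calling the module functions */
};

GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 4, sources, NULL);  /* NULL lengths: the strings are NUL-terminated */
glCompileShader(vs);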
You have to provide a different shader program for each model you want to render.
You can switch between different shader combinations using the glUseProgram function.
So before rendering your character or shiny sword or whatever, you have to initialize the appropriate shader attributes and uniforms.
So it is just a question of how you design the code of your game, because you need to provide all the uniforms to the shader (for example light information and texture samplers), and you must enable all necessary vertex attributes of the shader in order to assign position, color and so on.
These attributes can differ between the shaders, and your client-side models can also have different kinds of vertex attribute structures.
That means your model code directly influences the assigned shader and depends on it.
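In practice the per-model switching might look roughly like this (the member names and the drawModel helper are placeholders):

for (int i = 0; i < modelCount; ++i) {
    glUseProgram(models[i].program);

    /* Uniforms must be set for the program that is currently in use. */
    glUniformMatrix4fv(glGetUniformLocation(models[i].program, "mvpMatrix"),
                       1, GL_FALSE, models[i].mvpMatrix);
    glUniform3fv(glGetUniformLocation(models[i].program, "lightPosition"),
                 1, lightPosition);

    drawModel(&models[i]);  /* bind the model's VAO and textures, then issue the draw call */
}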
If you want to share common code between different shader programs, e.g. an illuminateDiffuse function, you have to factor this function out and provide it to your shaders by simply inserting the string literal that contains the function into your shader code, which is itself nothing more than a string. So you can achieve a kind of modularity, or include behaviour, through string manipulation of your shader code.
In any case, the shader compiler will tell you what's wrong.
I'm having some trouble understanding one line in the most basic (flat) shader example while reading the OpenGL SuperBible.
In chapter 6, Listings 6.4 and 6.5 introduce the following two very basic shaders.
6.4 Vertex Shader:
// Flat Shader
// Vertex Shader
// Richard S. Wright Jr.
// OpenGL SuperBible
#version 130
// Transformation Matrix
uniform mat4 mvpMatrix;
// Incoming per vertex
in vec4 vVertex;
void main(void)
{
// This is pretty much it, transform the geometry
gl_Position = mvpMatrix * vVertex;
}
6.5 Fragment Shader:
// Flat Shader
// Fragment Shader
// Richard S. Wright Jr.
// OpenGL SuperBible
#version 130
// Make geometry solid
uniform vec4 vColorValue;
// Output fragment color
out vec4 vFragColor;
void main(void)
{
gl_FragColor = vColorValue;
}
My confusion is that it says vFragColor in the out declaration while using gl_FragColor in main().
On the other hand, in the code from the website, it has been corrected to vFragColor = vColorValue; in main().
My question is: other than this being a typo in the book, what is the rule for naming the out values of shaders? Do they have to follow specific names?
On OpenGL.org I've found that gl_Position is required as the output of the vertex shader. Is there any such thing for the fragment shader? Or is it just that if there is only one output, then it will be the color written to the buffer?
What happens when there is more than one out in a fragment shader? How does the GLSL compiler know which one to use for the buffer?
As stated in the GLSL specification for version 1.3, the use of gl_FragColor in the fragment shader is deprecated. Instead, you should use a user-defined output variable like the vFragColor variable declared in your fragment shader. As you said, it's a typo.
What is the rule for naming out values of shaders?
The variable name can be anything you like, as long as it doesn't collide with any existing names.
What happens when there is more than one out in a fragment shader? How does the GLSL compiler know which one to use for the buffer?
When there is more than one out in the fragment shader, you need to assign slots to the fragment shader outputs by calling glBindFragDataLocation. You can then specify which slot renders to which render target by calling glDrawBuffers.
The specification states that if you have only one output variable defined in the fragment shader, it will be assigned to location 0. For more information, I recommend you take a look at the specification yourself.
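As a sketch of the multiple-output case (the names and attachment indices are illustrative):

#version 130

// Two user-defined fragment outputs instead of gl_FragColor
out vec4 fragColor;   // meant for color attachment 0
out vec4 fragGlow;    // meant for color attachment 1

void main(void)
{
    fragColor = vec4(1.0, 0.0, 0.0, 1.0);
    fragGlow  = vec4(0.0);
}

Before linking you would then call glBindFragDataLocation(program, 0, "fragColor") and glBindFragDataLocation(program, 1, "fragGlow"), and route those color numbers to the framebuffer's attachments with glDrawBuffers.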
gl_FragColor was the original output variable in early versions of GLSL. This was the color of the fragment that was to be drawn.
Your initial confusion is justified, as there's no reason to declare that out variable and then write to gl_FragColor.
In later versions it became customizable, such that you could give arbitrary names to your output variables. You can map these arbitrary outputs to specific buffers with the command glBindFragDataLocation.
I'm not 100% positive, but I believe if you don't call this function before linking, then your output variables will be randomly assigned to buffers. If you only have one output, then it should always be assigned to buffer 0.
I'm trying to write a shader using cg (for ogre3d). I can't seem to parse a working shader that I'd like to use as a starting point for my own code.
Here's the declaration for the shader:
void main
(
float2 iTexCoord0 : TEXCOORD0,
out float4 oColor : COLOR,
uniform sampler2D covMap1,
uniform sampler2D covMap2,
uniform sampler2D splat1,
uniform sampler2D splat2,
uniform sampler2D splat3,
uniform sampler2D splat4,
uniform sampler2D splat5,
uniform sampler2D splat6,
uniform float splatScaleX,
uniform float splatScaleZ
)
{...}
My questions:
iTexCoord0 is obviously an input parameter. Why is it not declared uniform?
(oColor is obviously an output parameter. No question)
covMap1 - splat6 are textures. Are these parameters or something loaded into the graphics card memory (like globals)? The ogre definition for the shader program doesn't list them as parameters.
Are splatScaleX and splatScaleZ also parameters? The ogre definition for the shader program also doesn't list these as parameters.
Does the order of declaration mean anything when sending values from an external program?
I'd like to pass in an array of floats (the height map). I assume that would be
uniform float splatScaleZ,
uniform float heightmap[1024]
)
{...}
If I don't pass one of the parameters will the shader just not be executed (and my object will be invisible because it has no texture)?
Is there a better way to debug these than just hit/miss and guess?
iTexCoord0 is obviously an input parameter. Why is it not declared uniform?
Uniforms are not the same thing as input parameters. iTexCoord0 is a varying input, which is to say that it can have a unique value for every vertex. This is set with commands like glVertexAttribPointer. Things like vertex coordinates, normals, texcoords and vertex colors are examples of what you might use a varying input for.
Uniforms, on the other hand, are intended to be static for an entire draw call, or potentially for the entire frame or the lifetime of the program. They are set with the glUniform* commands. A uniform might be something like the modelview matrix for an object, or the position of the sun for a lighting calculation. They don't change very often.
[edit] These specific commands actually work with GLSL, I think, but the theory should be the same for Cg. Look up a Cg-specific tutorial to figure out the exact commands to set varyings and uniforms.
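In GLSL/GL terms the difference on the application side looks roughly like this (the attribute location, VBO, program handle and matrix are assumed to exist):

/* Varying (per-vertex) input: one value per vertex, sourced from a buffer. */
glBindBuffer(GL_ARRAY_BUFFER, texcoordVbo);
glVertexAttribPointer(texcoordLocation, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(texcoordLocation);

/* Uniforms: one value for the whole draw call. */
glUniform1f(glGetUniformLocation(program, "splatScaleX"), 8.0f);
glUniformMatrix4fv(glGetUniformLocation(program, "modelviewMatrix"), 1, GL_FALSE, modelview);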
covMap1 - splat6 are textures. Are these parameters or something loaded into the graphics card memory (like globals)? The ogre definition for the shader program doesn't list them as parameters.
My Cg is a little rusty, but if it's the same as GLSL, then sampler2D is a uniform that takes an index representing which texture unit you want to sample from. When you do something like glActiveTexture(GL_TEXTURE3), glBindTexture(GL_TEXTURE_2D, texture), and then set the sampler uniform to 3, you can sample from that texture.
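For example (covMap1 is the sampler name from the shader above; the texture handle and program are assumed):

/* Bind a texture to texture unit 3 ... */
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D, covMapTexture);

/* ... and tell the sampler uniform to read from unit 3. */
glUniform1i(glGetUniformLocation(program, "covMap1"), 3);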
Does the order of declaration mean anything when sending values from an external program?
No, the variables are referred to in the external program by their string variable names.
If I don't pass one of the parameters will the shader just not be executed (and my object will be invisible because it has no texture)?
Unknown. They will likely have "some" initial value, though whether that makes any sense and will generate anything visible is hard to guess.
Is there a better way to debug these than just hit/miss and guess?
http://http.developer.nvidia.com/CgTutorial/cg_tutorial_appendix_b.html
See section B.3.8