OpenSceneGraph and GLSL 330 light and shadows - c++

I've written plenty of
#version 330 core
GLSL shaders that I'd like to reuse with the OpenSceneGraph (OSG) 3.2.0 framework, and I'm trying to figure out how to get the OSG state I need passed in via uniforms, how to set those uniforms without having to change well-tested shader code, and how to populate arbitrarily named attributes.
This example (version 140, OpenGL 3.1)
http://trac.openscenegraph.org/projects/osg/browser/OpenSceneGraph/trunk/examples/osgsimplegl3/osgsimplegl3.cpp
and this one (version 400)
http://trac.openscenegraph.org/projects/osg/browser/OpenSceneGraph/trunk/examples/osgtessellationshaders/osgtessellationshaders.cpp
suggest aliasing certain attribute and uniform names to "osg_" equivalents, but I'd like to use arbitrary names for the uniforms,
uniform mat4 uMVMatrix;
/*...*/
and to refer, or let OSG refer, to the attributes by their numbers only, so something like this
/*...*/
layout(location = 0) in vec4 aPosition;
layout(location = 1) in vec3 aNormal;
layout(location = 2) in vec2 aST;
as used in my legacy shaders. I'd like the OSG framework to populate these from the VBOs it already maintains for its Drawables, or at least to have an API call so I can do it myself.
In addition, I'd like to populate uniforms for lights and especially shadow maps by means of the scene graph and its visitors; somewhere in the scene graph, light and shadow information must already be aggregated for default shading, so I'd simply like to take this data and tailor it to fit my custom shaders.
So the fundamental question is
How do I populate arbitrary GLSL 330 shaders with data from within OSG without having to resort to redundant uniform assignment - providing my "u[..]Matrix" manually in addition to the "osg_[...]" uniforms set by OSG - and without changing attribute names in the shader sources?

I just stumbled upon this: it turns out you can use your own names after all, if you just specify the layout location. (So far I have only tried it for the vertex position, so you may have to take care to use the layout locations OSG expects, i.e. vertex position at 0; under OSG's default attribute aliasing the normal goes to location 2 and the first texture coordinate set to 8, which the example behind the link does not do, though.)
layout (location = 0) in vec3 vertex;
this is enough to use the variable named vertex in the shader.
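If you'd rather keep the shader source free of layout qualifiers, OSG can also bind your attribute names to explicit locations before the program links, via osg::Program::addBindAttribLocation. A minimal sketch, assuming vertex attribute aliasing is enabled as in the osgsimplegl3 example, and using OSG's default aliasing indices (vertex = 0, normal = 2, first texcoord set = 8); geometry, vertSrc and fragSrc stand in for your own geometry and shader sources:

#include <osg/Program>
#include <osg/Shader>
#include <osg/Geometry>

// Given an osg::Geometry* geometry and shader sources vertSrc/fragSrc:
osg::ref_ptr<osg::Program> program = new osg::Program;
program->addShader(new osg::Shader(osg::Shader::VERTEX,   vertSrc));
program->addShader(new osg::Shader(osg::Shader::FRAGMENT, fragSrc));

// Bind the shader's attribute names to the indices OSG feeds its
// built-in arrays through when aliasing is on.
program->addBindAttribLocation("aPosition", 0);
program->addBindAttribLocation("aNormal",   2);
program->addBindAttribLocation("aST",       8);

geometry->getOrCreateStateSet()->setAttributeAndModes(program.get());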
The link also shows how to use custom names for matrices: you create an osg::Uniform::Callback subclass that uploads the matrix to the uniform.
When you create the osg::Uniform object, you give it the name of your choosing and attach the callback.
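A minimal sketch of that callback pattern, modeled on the osgtessellationshaders example (the class name is mine; uMVMatrix is the uniform name from the question; viewer and stateSet are your osgViewer::Viewer and the drawable's state set):

#include <osg/Uniform>
#include <osg/Camera>
#include <osg/Transform>

// Recomputes the model-view matrix each update traversal and uploads it
// under a custom name instead of osg_ModelViewMatrix.
class ModelViewMatrixCallback : public osg::Uniform::Callback
{
public:
    ModelViewMatrixCallback(osg::Camera* camera) : _camera(camera) {}

    virtual void operator()(osg::Uniform* uniform, osg::NodeVisitor* nv)
    {
        osg::Matrixd model = osg::computeLocalToWorld(nv->getNodePath());
        uniform->set(osg::Matrixf(model * _camera->getViewMatrix()));
    }

private:
    osg::Camera* _camera;
};

// Usage: the uniform carries whatever name your shader expects.
osg::ref_ptr<osg::Uniform> mv =
    new osg::Uniform(osg::Uniform::FLOAT_MAT4, "uMVMatrix");
mv->setUpdateCallback(new ModelViewMatrixCallback(viewer.getCamera()));
stateSet->addUniform(mv.get());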

Related

The difference between a color attribute and using gl_Color

Most GLSL shaders use an attribute for the color in the vertex shader, which is forwarded as a varying to the fragment shader. Like this:
attribute vec4 position;
attribute vec4 color;
uniform mat4 mvp;
varying vec4 destinationColor;
void main(){
    destinationColor = color;
    gl_Position = mvp * position;
}
Setting the color can be done with glVertexAttribPointer() to pass one color per vertex, or with glVertexAttrib4fv() to pass a global color for all vertices. I am trying to understand the difference from the predefined variable gl_Color in the vertex shader (if there is any difference at all), i.e.
attribute vec4 position;
uniform mat4 mvp;
varying vec4 destinationColor;
void main(){
    destinationColor = gl_Color;
    gl_Position = mvp * position;
}
and using glColorPointer() to pass one color per vertex or glColor4fv() to use a global color for all vertices. To me the second shader looks better (= more efficient?), because it uses fewer attributes. But all tutorials and online resources use the first approach - so I wonder if I missed anything, or if there is no difference at all.
What is better practice when writing GLSL shaders?
To me the second shader looks better (= more efficient?), because it uses less attributes.
It does not use fewer attributes. It just uses fewer explicit attribute declarations. All of the work needed to get that color value to OpenGL is still there. It's still being done. The hardware is still fetching data from a buffer object or getting it from the glColor context value or whatever.
You just don't see it in your shader's text. But just because you don't see it doesn't mean that it happens for free.
User-defined attributes are preferred for the following reasons:
User-defined attributes make it clear how many resources your shaders are using. If you want to know how many attributes you need to provide to a shader, just look at the global declarations. But with predefined attributes, you can't do this; you have to scan through the entire vertex shader for any gl_* names that name a predefined attribute.
User-defined attributes can do more things. If you want to pass integer values as integers to the vertex shader, you must use a user-defined attribute. If you need to pass a double-precision float to the vertex shader, again, a predefined attribute cannot help you.
Predefined attributes were removed from core OpenGL contexts. OSX, for example, does not allow the compatibility profile. You can still use OpenGL 2.1, but if you want to use any OpenGL version 3.2 or greater on OSX, you cannot use removed functionality. And the built-in vertex attributes were removed in OpenGL 3.1.
Predefined attributes were never a part of OpenGL ES 2.0+. So if you want to write shaders that can work in OpenGL ES, you again cannot use them.
So basically, there's no reason to use them these days.
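For reference, here is a minimal sketch of feeding the user-defined color attribute from the first shader; prog and vbo are assumed to be an already linked program and a filled buffer object, and Vertex is a hypothetical interleaved vertex struct:

GLint colorLoc = glGetAttribLocation(prog, "color");

// One color per vertex, sourced from the buffer object:
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(colorLoc);
glVertexAttribPointer(colorLoc, 4, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), (void*)offsetof(Vertex, color));

// ...or one constant color for all vertices:
glDisableVertexAttribArray(colorLoc);
const GLfloat red[4] = {1.0f, 0.0f, 0.0f, 1.0f};
glVertexAttrib4fv(colorLoc, red);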
If I remember correctly, gl_Color is a deprecated remnant of the old-style API without VAO/VBO, using glBegin() ... glEnd(). If you go to the core profile there is no gl_Color anymore ... so I assume you are using an old OpenGL version or the compatibility profile.
If you try to use gl_Color in the core profile (for example 4.00) you get:
0(35) : error C7616: global variable gl_Color is removed after version 140
which means gl_Color was removed after GLSL 1.40.
So it is not entirely a matter of performance, but of a change in the graphics rendering software architecture, or in the hierarchy of the GL calls if you want.

Usage of custom and generic vertex shader attributes in OpenGL and OpenGL ES

Since generic vertex attributes are deprecated in OpenGL, I tried to rewrite my vertex shader using only custom attributes. And it didn't work for me. Here is the vertex shader:
uniform mat4 uProjectionMatrix;
uniform mat4 uWorldViewMatrix;

attribute vec3 aPosition;
attribute vec3 aNormal;

varying vec4 vColor;

vec4 calculateLight(vec4 normal) {
    // ...
}

void main(void) {
    gl_Position = uProjectionMatrix * uWorldViewMatrix * vec4(aPosition, 1);
    vec4 rotatedNormal = normalize(uWorldViewMatrix * vec4(aNormal, 0));
    vColor = calculateLight(rotatedNormal);
}
This works perfectly in OpenGL ES 2.0. However, when I try to use it with OpenGL I see a black screen. If I change aNormal to the generic gl_Normal everything works fine as well (note that aPosition works fine in both contexts and I don't have to use gl_Vertex).
What am I doing wrong?
I use RenderMonkey to test shaders, and I've set up stream mapping in it with the appropriate attribute names (aPosition and aNormal). Maybe it has something to do with attribute indices, because I have all of them set to 0? Also, here's what the RenderMonkey documentation says about setting custom attribute names in "Stream Mapping":
The “Attribute Name” field displays the default name that can be used in the shader editor to refer to that stream. In an OpenGL ES effect, the changed name should be used to reference the stream; however, in a DirectX or OpenGL effect, the new name has no affect in the shader editor
I wonder is this issue specific to RenderMonkey or OpenGL itself? And why aPosition still works then?
Attribute indices should be unique. It is possible to tell OpenGL to use specific indices via glBindAttribLocation before linking the program; otherwise, the normal way is to query the index with glGetAttribLocation after linking. It sounds like RenderMonkey lets you choose, in which case have you tried giving them separate indices?
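For reference, both routes in plain GL calls (program is assumed to be a program object with both shaders attached, not yet linked):

// Route 1: pin the indices yourself before linking.
glBindAttribLocation(program, 0, "aPosition");
glBindAttribLocation(program, 1, "aNormal");
glLinkProgram(program);

// Route 2: let the linker assign indices, then query them afterwards.
GLint posLoc  = glGetAttribLocation(program, "aPosition");
GLint normLoc = glGetAttribLocation(program, "aNormal");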
I've seen fixed-function rendering cross over to vertex attributes before, where glVertexPointer can wind up binding to the first attribute if it's left unbound (I don't know if this is reproducible any more).
I also see some strange things when experimenting with attributes and fixed-function names. Without calling glBindAttribLocation, I compile the following shader:
attribute vec4 a;
attribute vec4 b;
void main()
{
    gl_Position = gl_Vertex + vec4(gl_Normal, 0) + a + b;
}
and I get the following locations (via glGetActiveAttrib):
a: 1
b: 3
gl_Vertex: -1
gl_Normal: -1
When experimenting, it seems the use of gl_Vertex takes up index 0 and gl_Normal takes index 2 (even if it's not reported). I wonder if throwing in a padding attribute between aPosition and aNormal (don't forget to use it in the output or it'll be compiled away) would make it work.
In this case it's possible the position data is simply bound to location zero last. However, the black screen with aNormal points to nothing being bound (in which case it will always be {0, 0, 0}). This is a little less consistent - if the normal were bound to the same data as the position, you'd expect some colour, if not the correct colour, as the normal would carry the position data.
Applications are allowed to bind more than one user-defined attribute variable to the same generic vertex attribute index. This is called aliasing, and it is allowed only if just one of the aliased attributes is active in the executable program, or if no path through the shader consumes more than one attribute of a set of attributes aliased to the same location.
My feeling is then that RenderMonkey is using just glVertexPointer/glNormalPointer instead of attributes, which I would have thought would bind both normal and position to either the normal or the position data, since you say both indices are zero.
in a DirectX or OpenGL effect, the new name has no affect in the shader editor
Maybe this means "named streams" are simply not available in the non-ES OpenGL version?
This is unrelated, but in more recent OpenGL/GLSL versions, a #version number is needed and attributes use the keyword in.

Two layouts with the same location in GLSL 4.4

Is it possible to have two layouts with the same location equate to two different input variables of different types in a shader? Currently, my program does not explicitly assign a location for the vertex, texture, or normal vertex arrays. But in my shader, when I select location 0 for both my vertex position and texture coords, I get a perfect output. I wanted to know if this is just a coincidence, or whether it really is possible to assign two variables to the same location. Here is the definition of the input variables in the vertex shader:
#version 440
layout (location = 0) in vec4 VertexPosition;
layout (location = 2) in vec4 VertexNormal;
layout (location = 0) in vec2 VertexTexCoord;
Technically... yes, you can. For Vertex Shader inputs (and only for vertex shader inputs), you can assign two variables to the same location. However, you may not attempt to read from both of them. You can dynamically select which one to read from, but it's undefined behavior if your shader takes a path that reads from both variables.
The relevant quote from the standard is:
The one exception where component aliasing is permitted is for two input variables (not block members) to a vertex shader, which are allowed to have component aliasing. This vertex-variable component aliasing is intended only to support vertex shaders where each execution path accesses at most one input per each aliased component.
But this is stupid and pointless. Don't do it.

cg shader parameters

I'm trying to write a shader using Cg (for Ogre3D). I can't quite make sense of a working shader that I'd like to use as a starting point for my own code.
Here's the declaration for the shader:
void main
(
    float2 iTexCoord0 : TEXCOORD0,
    out float4 oColor : COLOR,
    uniform sampler2D covMap1,
    uniform sampler2D covMap2,
    uniform sampler2D splat1,
    uniform sampler2D splat2,
    uniform sampler2D splat3,
    uniform sampler2D splat4,
    uniform sampler2D splat5,
    uniform sampler2D splat6,
    uniform float splatScaleX,
    uniform float splatScaleZ
)
{...}
My questions:
iTexCoord0 is obviously an input parameter. Why is it not declared uniform?
(oColor is obviously an output parameter. No question)
covMap1 - splat6 are textures. Are these parameters or something loaded into the graphics card memory (like globals)? The ogre definition for the shader program doesn't list them as parameters.
Are splatScaleX and splatScaleZ also parameters? The ogre definition for the shader program also doesn't list these as parameters.
Does the order of declaration mean anything when sending values from an external program?
I'd like to pass in an array of floats (the height map). I assume that would be
uniform float splatScaleZ,
uniform float heightmap[1024]
)
{...}
If I don't pass one of the parameters will the shader just not be executed (and my object will be invisible because it has no texture)?
Is there a better way to debug these than just hit/miss and guess?
iTexCoord0 is obviously an input parameter. Why is it not declared uniform?
Uniforms are not the same thing as input parameters. iTexCoord0 is a varying input, which is to say that for every vertex it can have a unique value. This is set with commands like glVertexAttribPointer. Things like vertex coordinates, normals, texcoords, and vertex colors are examples of what you might use a varying input for.
Uniforms, on the other hand, are intended to be static for an entire draw call, or potentially for the entire frame or the life of the program. They are set with the glUniform* commands. A uniform might be something like the modelview matrix for an object, or the position of the sun for a lighting calculation. They don't change very often.
[edit] These specific commands, I think, actually apply to GLSL, but the theory should be the same for Cg. Look up a Cg-specific tutorial to find the exact commands for setting varyings and uniforms.
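Staying on the GLSL/GL side this answer refers to, a quick sketch of the contrast (prog is assumed to be a linked program; the names are illustrative):

// Varying input: one value per vertex, fetched from the bound buffer.
GLint texLoc = glGetAttribLocation(prog, "iTexCoord0");
glEnableVertexAttribArray(texLoc);
glVertexAttribPointer(texLoc, 2, GL_FLOAT, GL_FALSE, 0, 0);

// Uniform: one value for the entire draw call.
GLint scaleLoc = glGetUniformLocation(prog, "splatScaleX");
glUniform1f(scaleLoc, 4.0f);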
covMap1 - splat6 are textures. Are these parameters or something loaded into the graphics card memory (like globals)? The ogre definition for the shader program doesn't list them as parameters.
My Cg is a little rusty, but if it's the same as GLSL, then sampler2D is a uniform that takes an index representing the texture unit you want to sample from. When you do something like glActiveTexture(GL_TEXTURE3) and glBindTexture(...), and then set the sampler uniform to 3, you can sample from that texture.
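In GL calls that sequence looks roughly like this (covMap1Tex is a hypothetical texture object; the uniform name matches the shader above):

glActiveTexture(GL_TEXTURE3);             // select texture unit 3
glBindTexture(GL_TEXTURE_2D, covMap1Tex); // bind the texture object to it
GLint samplerLoc = glGetUniformLocation(prog, "covMap1");
glUniform1i(samplerLoc, 3);               // point the sampler at unit 3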
Does the order of declaration mean anything when sending values from an external program?
No, the variables are referred to in the external program by their string variable names.
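Since this shader runs under Ogre3D, that by-name lookup goes through the pass's parameter object. A minimal sketch with illustrative values, assuming pass is the Ogre::Pass that uses this fragment program:

Ogre::GpuProgramParametersSharedPtr params =
    pass->getFragmentProgramParameters();
params->setNamedConstant("splatScaleX", 4.0f);
params->setNamedConstant("splatScaleZ", 4.0f);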
If I don't pass one of the parameters will the shader just not be executed (and my object will be invisible because it has no texture)?
Unknown. They will likely have some initial value, though whether that makes any sense and will produce anything visible is hard to guess.
Is there a better way to debug these than just hit/miss and guess?
http://http.developer.nvidia.com/CgTutorial/cg_tutorial_appendix_b.html
See section B.3.8

Does RenderMonkey have a bug in TEXCOORD stream mapping for GLSL?

For clarity, I start with my question:
Is it possible to use (in the shader code) the custom attribute name which I set for the TEXCOORD usage in the (OpenGL) stream mapping in RenderMonkey 1.82 or do I have to use gl_MultiTexCoord0?
(The question might be valid for the NORMAL usage too, i.e. custom name or gl_Normal.)
Background:
Using RenderMonkey version 1.82.
I have successfully used the stream mapping to map the general vertex attribute "position" (and maybe "normal"), but the texture coordinates do not seem to be forwarded correctly.
For the shader code, I use #version 330 and the "in" qualifier in GLSL, which should be OK since RM does not compile the shaders itself (the OpenGL driver does).
I have tried both .obj and .3ds files (exported from blender), and when checking the wavefront .obj-file, all texture coordinate information is there, as well as the vertex positions and normals.
If it is not possible, the stream mapping is broken and there is no point in naming the variables in the stream mapping editor (aside from the vertex position stream, which works), since one has to use the built-in variables anyway.
Update:
If using the deprecated built-in variables, one has to use the compatibility profile in the shader, e.g.
#version 330 compatibility
out vec2 vTexCoord;
and, in the main function:
vTexCoord = vec2(gl_MultiTexCoord0);
(Now I'm not sure about the stream mapping of normals either. As soon as I got the texture coordinates working, I ran into problems with the normals and had to revert to gl_Normal.)
Here is a picture of a working solution, but with built-in variables. (And yes, the commented texcoord variable in the picture does not have the same name as in the stream mapping dialog, but it had the same name when I tried to use it, so it's OK.)
You could try to use the generic vertex attributes; see http://open.gl, it's a great tutorial ;)
(but I think it implies you'd have to rewrite the code to handle the transformations manually...)
#version 330
layout(location = 0) in vec3 bla_bla_bla_Vertex;
layout(location = 2) in vec3 bla_bla_bla_Normal;
layout(location = 8) in vec3 bla_bla_bla_TexCoord0;
This is a working solution for RM 1.82