Is it possible to have two layouts with the same location equate to two different input variables of different types in a shader? Currently, my program does not explicitly assign a location to the vertex, texture, or normal vertex arrays. But in my shader, when I assign location 0 to both my vertex position and texture coordinates, I get a perfect output. I wanted to know whether this is just a coincidence or whether it is really possible to assign two variables to the same location. Here is my definition of the input variables in the vertex shader:
#version 440
layout (location = 0) in vec4 VertexPosition;
layout (location = 2) in vec4 VertexNormal;
layout (location = 0) in vec2 VertexTexCoord;
Technically... yes, you can. For Vertex Shader inputs (and only for vertex shader inputs), you can assign two variables to the same location. However, you may not attempt to read from both of them. You can dynamically select which one to read from, but it's undefined behavior if your shader takes a path that reads from both variables.
The relevant quote from the standard is:
The one exception where component aliasing is permitted is for two input variables (not block members) to a vertex shader, which are allowed to have component aliasing. This vertex-variable component aliasing is intended only to support vertex shaders where each execution path accesses at most one input per each aliased component.
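To illustrate, here is a minimal sketch of what that exception permits (the selector uniform and its name are hypothetical); note that each execution path reads at most one of the two aliased inputs:

#version 440

layout (location = 0) in vec4 VertexPosition; // components 0-3 of location 0
layout (location = 0) in vec2 VertexTexCoord; // aliases components 0-1 of location 0

uniform bool UseTexCoord; // hypothetical switch selecting which aliased input to read

void main()
{
    if (UseTexCoord)
        gl_Position = vec4(VertexTexCoord, 0.0, 1.0); // this path reads only VertexTexCoord
    else
        gl_Position = VertexPosition; // this path reads only VertexPosition
}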
But this is stupid and pointless. Don't do it.
Related
In the vertex shader we can specify two out variables.
layout(location = 0) out float v_transparency;
layout(location = 1) out vec2 v_texcoord;
In the fragment shader we can receive them like this.
layout(location = 0) in float v_transparency;
layout(location = 1) in vec2 v_texcoord;
The same can also be done without specifying the layout. Is there any advantage to specifying the layouts?
If you link shader objects together into a program containing multiple shader stages, then the interfaces between the stages must match exactly. That is, for every input in one stage, there must be a corresponding output in the previous stage, and vice versa. Otherwise, a linker error occurs.
If you use separate programs however, this is not necessary. It is possible for a shader stage output to not be listed among the next shader stage's inputs. Inputs not filled by an output have unspecified values (and therefore this is only useful if you don't read from it); outputs not received by an input are ignored.
However, this rule only applies to interface variables declared with layout(location = #) qualifiers. For variables that match via name (as well as input/output interface blocks), every input must go to an output and vice-versa.
Also, for interface variables with location qualifiers, the types do not have to exactly match. If the output and input types are different vector sizes (or scalar) of the same basic type (vec2 and vec3, uvec4 and uint, but not uvec3 and ivec2), so long as the input has fewer components than the output, the match will still work. So the output can be a vec4 while the input is a vec2; the input will only get the first 2 values.
Note that this is also a function of separate programs, not just the use of location qualifiers.
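As a sketch of that relaxed matching (the variable name is made up), the following pair would link fine in separable programs, with the input receiving only the first two components of the output:

// vertex shader
layout(location = 0) out vec4 v_data; // writes 4 components

// fragment shader
layout(location = 0) in vec2 v_data; // receives only v_data.xy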
I know that each time a vertex shader runs, it basically accesses part of the buffer (VBO) being drawn; when drawing vertex number 7, for example, it indexes to the 7th vertex in that VBO, based on the vertex attributes and so on.
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec3 texCoords; // This may be running on the 7th vertex for example.
What I want to do is access an earlier part of the VBO. For example, when it's drawing the 7th vertex, I would like to have access to vertex number 1, so that I can interpolate with it.
Seeing that at the time the shader runs it's already indexing into the VBO, I would think this is possible, but I don't know how to do it.
Thank you.
As you can see in the documentation, vertex attributes are expected to change on every shader run. So no, you can't access attributes defined for other vertices in a vertex shader.
You can probably do this:
Define a uniform array and pass in the values you need. But keep in mind that you are using more memory this way, you need to pass more data, etc.
As @Reaper said, you can use a uniform buffer, which can be accessed freely. But the GPU doesn't like random access; it's usually more efficient to stream the data.
You can also solve this by just adding the data for later/earlier vertices into the array, because in C++ all vertices are at your disposal (see the shader-side sketch after the example below).
For example if this is the "normal" array:
{
vertex1_x, vertex1_y, vertex1_z, normal1_x, normal1_y, normal1_z, texCoord1_x, texCoord1_y,
...
}
Then you could extend it with data for the other vertex to interpolate with:
{
vertex1_x, vertex1_y, vertex1_z, normal1_x, normal1_y, normal1_z, texCoord1_x, texCoord1_y, // vertex 1's own data ...
vertex2_x, vertex2_y, vertex2_z, normal2_x, normal2_y, normal2_z, texCoord2_x, texCoord2_y, // ... followed by vertex 2's data, duplicated into the same record
...
}
Actually you can pass any data per vertex. Just make sure that the stride and offsets are adjusted accordingly in the glVertexAttribPointer calls.
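On the shader side, the duplicated data then simply appears as extra input attributes. A sketch (the attribute names, locations, and the interpolation factor are made up, and the stride passed to glVertexAttribPointer must cover the doubled record):

#version 330

layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec2 texCoords;
layout (location = 3) in vec3 otherPosition; // duplicated data of the vertex to interpolate with
layout (location = 4) in vec3 otherNormal;
layout (location = 5) in vec2 otherTexCoords;

uniform float blend; // hypothetical interpolation factor

void main()
{
    // interpolate between this vertex and its duplicated neighbor
    vec3 p = mix(position, otherPosition, blend);
    gl_Position = vec4(p, 1.0);
}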
So I was reading "The Official OpenGL Guide", and in a section where they taught material lighting, they suddenly used the "flat" qualifier for an input variable in the fragment shader.
I googled the matter, and all I came up with was "flat shading" and "smooth shading" and the differences between them, and I cannot understand how that relates to a simple "MatIndex" variable.
Here's the example code from the book:
struct MaterialProperties {
vec3 emission;
vec3 ambient;
vec3 diffuse;
vec3 specular;
float shininess;
};
// a set of materials to select between, per shader invocation
const int NumMaterials = 14;
uniform MaterialProperties Material[NumMaterials];
flat in int MatIndex; // input material index from vertex shader
What is this all about?
In the general case, there is not a 1:1 mapping between a vertex and a fragment. By default, the associated data per vertex is interpolated across the primitive to generate the corresponding associated data per fragment, which is what smooth shading does.
Using the flat keyword, no interpolation is done, so every fragment generated during the rasterization of that particular primitive will get the same data. Since primitives are usually defined by more than one vertex, this means that the data from only one vertex is used in that case. This is called the provoking vertex in OpenGL.
Also note that integer types are never interpolated. You must declare them as flat in any case.
In your specific example, the code means that each primitive can only have one material, which is defined by the material ID of the provoking vertex of each primitive.
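Concretely, the vertex shader side of that interface could look like this sketch (the attribute name and its location are made up); the flat qualifier has to appear on both sides of the interface, since integer inputs cannot be interpolated:

// vertex shader
#version 330

layout (location = 0) in vec4 VertexPosition;
layout (location = 3) in int VertexMatIndex; // per-vertex material ID

flat out int MatIndex;

void main()
{
    MatIndex = VertexMatIndex; // fragments receive the provoking vertex's value unchanged
    gl_Position = VertexPosition;
}

// fragment shader
flat in int MatIndex; // must be declared flat here as well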
It's part of how the attribute is interpolated for the fragment shader; the default is perspective-correct interpolation.
From the GLSL spec, section 4.5:
A variable qualified as flat will not be interpolated. Instead, it will have the same value for every fragment within a triangle. This value will come from a single provoking vertex, as described by the OpenGL Graphics System Specification. A variable may be qualified as flat can also be qualified as centroid or sample, which will mean the same thing as qualifying it only as flat.
Integral attributes need to be flat.
You can find the table of which vertex is the provoking vertex in the OpenGL spec, table 13.2 in section 13.4.
I've written plenty of
#version 330 core
GLSL shaders that I'd like to reuse with the OpenSceneGraph (OSG) 3.2.0 framework, and I'm trying to figure out how to get the state from OSG that I need to pass in via uniforms, how to set those uniforms without having to change well-tested shader code, and how to populate arbitrarily named attributes.
This (version 140, OpenGL 3.1)
http://trac.openscenegraph.org/projects/osg/browser/OpenSceneGraph/trunk/examples/osgsimplegl3/osgsimplegl3.cpp
and this (version 400)
http://trac.openscenegraph.org/projects/osg/browser/OpenSceneGraph/trunk/examples/osgtessellationshaders/osgtessellationshaders.cpp
examples give rise to the notion of aliasing certain attribute and uniform names to "osg_"-prefixed ones, but I'd like to use arbitrary names for the uniforms,
uniform mat4 uMVMatrix;
/*...*/
and to refer, or let OSG refer, to the attributes by their numbers only, so something like this
/*...*/
layout(location = 0) in vec4 aPosition;
layout(location = 1) in vec3 aNormal;
layout(location = 2) in vec2 aST;
as used in my legacy shaders. These I'd like the OSG framework to populate from the VBOs it already maintains for the Drawables, or, at least, to do it myself via an API call.
In addition, I'd like to populate uniforms for lights and shadow maps by means of the scene graph and the visitors; "somewhere" and "somehow" in the SG, light and especially shadow information must be aggregated for default shading, so I'd simply like to use this data and tailor it to fit my custom shaders.
So the fundamental question is
How to populate arbitrary GLSL 330 shaders with data from within OSG without having to resort to redundant uniform assignment - providing my "u[..]Matrix" manually in addition to the "osg_[...]" uniforms set by OSG - or changing attribute names in the shader sources?
I just stumbled upon this. It turns out you can just use your own names after all, if you specify the layout location. (So far I have only tried it for the vertex position, so you might have to take care to use the layout locations as OSG would assign them, i.e. vertex position at 0, normal at 1 - which is not done in the example in the link, though.)
layout (location = 0) in vec3 vertex;
This is enough to use the variable named vertex in the shader.
The link also provides an example of using custom names for matrices: you create an osg::Uniform::Callback class that uploads the matrix to the uniform.
When you create the osg::Uniform object, you specify the name of your choosing and add the callback.
For clarity, I start with my question:
Is it possible to use (in the shader code) the custom attribute name which I set for the TEXCOORD usage in the (OpenGL) stream mapping in RenderMonkey 1.82 or do I have to use gl_MultiTexCoord0?
(The question might be valid for the NORMAL usage too, i.e. custom name or gl_Normal.)
Background:
Using RenderMonkey version 1.82.
I have successfully used the stream mapping to map the general vertex attribute "position" (and maybe "normal"), but the texture coordinates do not seem to be forwarded correctly.
For the shader code, I use #version 330 and the "in" qualifier in GLSL, which should be OK since RM does not compile the shaders itself (the OpenGL driver does).
I have tried both .obj and .3ds files (exported from blender), and when checking the wavefront .obj-file, all texture coordinate information is there, as well as the vertex positions and normals.
If it is not possible, the stream mapping is broken and there is no point in naming the variables in the stream mapping editor (except for the vertex position stream, which works), since one has to use the built-in variables anyway.
Update:
If using the deprecated built-in variables, one has to use the compatibility profile in the shader, e.g.
#version 330 compatibility
out vec2 vTexCoord;
and, in the main function:
vTexCoord = vec2(gl_MultiTexCoord0);
(Now I'm not sure about the stream mapping of normals either. As soon as I got the texture coordinates working, I had problems with the normals and had to revert to gl_Normal.)
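For reference, a minimal complete vertex shader along those lines (compatibility profile, built-in attributes and matrices only):

#version 330 compatibility

out vec2 vTexCoord;

void main()
{
    vTexCoord = vec2(gl_MultiTexCoord0); // built-in texture coordinate set 0
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; // built-in matrix and position
}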
Here is a picture of a working solution, but with built-in variables (and yes, the commented texcoord variable in the picture does not have the same name as in the stream mapping dialog, but it had the same name when I tried to use it, so it's OK.):
You could try to use the generic vertex attributes; see http://open.gl, it's a great tutorial ;)
(but I think it implies you'd have to rewrite the code to manually handle the transformations...)
#version 330
layout(location = 0) in vec3 bla_bla_bla_Vertex;
layout(location = 2) in vec3 bla_bla_bla_Normal;
layout(location = 8) in vec3 bla_bla_bla_TexCoord0;
This is a working solution for RM 1.82