OpenGL transform feedback definition completely inside the shader - C++

I'm trying to get my transform feedback running. I want to specify my buffer layout completely from the shaders, using OpenGL 4.4 core or the GL_ARB_enhanced_layouts extension with layout (xfb_offset=xx) qualifiers. I assumed that after declaring these in a vertex shader I can call
GLint iTransformFeedbackVars;
glGetProgramiv(m_uProgramID, GL_TRANSFORM_FEEDBACK_VARYINGS, &iTransformFeedbackVars);
to get the number of potential variables that want to be written to a transform feedback buffer. But OpenGL keeps returning 0 for iTransformFeedbackVars. I tried calling the above command both BEFORE and AFTER linking the program.
Am I missing something here, or is it even possible to let the shader specify the variables it wants to write and have my code create the buffer(s) according to the shader's declarations?
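For reference, the kind of declaration I mean looks something like this (a sketch; the variable names and offsets are just illustrative):
#version 440
// Capture these outputs into transform feedback buffer 0; offsets are in bytes.
layout(xfb_buffer = 0, xfb_offset = 0) out vec4 outValue;
layout(xfb_buffer = 0, xfb_offset = 16) out vec3 outNormal;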


Tessellation Shaders

I am trying to learn tessellation shaders in OpenGL 4.1. I understand most of it, but I have one question.
What is gl_InvocationID?
Can anybody please explain it in a simple way?
gl_InvocationID has two current uses, but it represents the same concept in both.
In Geometry Shaders, you can have GL run your geometry shader multiple times per-primitive. This is useful in scenarios where you want to draw the same thing from several perspectives. Each time the shader runs on the same set of data, gl_InvocationID is incremented.
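As a sketch of what that looks like (the layered-rendering use here is just one possibility, and assumes a layered framebuffer is bound):
#version 400 core
layout(triangles, invocations = 4) in; // run 4 times per input triangle
layout(triangle_strip, max_vertices = 3) out;

void main()
{
    for (int i = 0; i < 3; ++i) {
        gl_Layer = gl_InvocationID; // each invocation renders into its own layer
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}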
The common theme between Geometry and Tessellation Shaders is that each invocation shares the same input data. A Tessellation Control Shader can read every single vertex in the input patch primitive, and you actually need gl_InvocationID to make sense of which data point you are supposed to be processing.
This is why you generally see Tessellation Control Shaders written something like this:
gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
gl_in and gl_out are potentially very large arrays in Tessellation Control Shaders (gl_in has one element per vertex in the input patch, as set by glPatchParameteri(GL_PATCH_VERTICES, ...)), and you have to know which vertex you are interested in.
Also, keep in mind that you are not allowed to write to any index other than gl_out[gl_InvocationID] from a Tessellation Control Shader. That restriction keeps running Tessellation Control Shader invocations in parallel sane (it avoids order dependencies and prevents overwriting data that a different invocation already wrote).
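Putting that together, a minimal Tessellation Control Shader sketch (the patch size and tessellation levels here are arbitrary) typically looks like:
#version 410 core
layout(vertices = 3) out; // shader runs once per output control point: gl_InvocationID = 0..2

void main()
{
    // Each invocation copies exactly one control point.
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;

    // Per-patch outputs only need to be written by one invocation.
    if (gl_InvocationID == 0) {
        gl_TessLevelInner[0] = 4.0;
        gl_TessLevelOuter[0] = 4.0;
        gl_TessLevelOuter[1] = 4.0;
        gl_TessLevelOuter[2] = 4.0;
    }
}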

VBO with and without shaders OpenGL C++

I'm trying to implement modern OpenGL, but the problem is: most tutorials are based on 3.3+ and talk about GLSL 330, while I only have GLSL 130. Many things are apparently different, because my VBOs do not work.
Could you give me general hints or a tutorial that explains how to use GLSL 130 with VBOs? In my case, I have the VBO loaded, but when I use my shader program, only vertices submitted with glVertex get rendered; it's as if the VBO gets ignored (no input). How can I solve this?
And can you use VBOs without shaders? I tried to do that, but it crashed...
Yes, VBOs can still be used with GLSL 130, and can even be used without shaders. The purpose of the VBO is to hold the vertex attributes for drawing. Most up-to-date tutorials I've seen have you use the layout location qualifier to indicate how to address the different attributes in your shader, i.e.
layout(location = 0) in vec3 Position;
This isn't supported in GLSL 130, so you have to relate the attribute to the VBO another way. It's pretty simple: you can use glBindAttribLocation or glGetAttribLocation. Calling glGetAttribLocation gives you the identifier you need to pass to glVertexAttribPointer to associate the VBO data with the particular attribute; you can call it at any time after the program has been linked. Alternatively, you can call glBindAttribLocation to set the identifier that will be associated with a given attribute name yourself, provided you call it after you've created the program object but before you link it. This is handy because it lets you decide for yourself what the location should be, just as you would with the layout qualifier.
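A minimal sketch of the second approach (the handle names and the attribute name "position" are placeholders):
GLuint prog = glCreateProgram();
glAttachShader(prog, vertexShader);
glAttachShader(prog, fragmentShader);
glBindAttribLocation(prog, 0, "position"); // must happen before linking
glLinkProgram(prog);

// Later, when setting up vertex data:
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(0); // the location we bound above
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);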
Finally, if you want to use a VBO without a shader at all, then you still have to associate the data in the VBO with the various inputs the fixed-function pipeline expects. This is done with the now-deprecated client-state functions glEnableClientState() and glVertexPointer(), which together let you tell OpenGL which fixed-function attribute you're going to populate and how to find the data in the VBO.
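A sketch of that, too (vbo and vertexCount are placeholders, and this needs a context where the fixed-function pipeline is still available):
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void*)0); // offset into the bound VBO, not a client pointer
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);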

Transform Feedback varyings

From some examples of using OpenGL transform feedback I see that glTransformFeedbackVaryings is called after program compilation and before linking. Is this required in all OpenGL versions? Can't a layout qualifier be used to set the indices, just as it can for vertex attributes? I am asking because in my code the shader program creation process is abstracted away from other routines, and before splitting it into separate compile/link methods I would like to know if there's a way around this.
Update:
How is it done when using separable shader objects? There is no explicit link step.
UPDATE:
It is still not clear to me how to set glTransformFeedbackVaryings when using separate shader objects.
This explanation is completely unclear to me:
If separable program objects are in use, the set of attributes
captured is taken from the program object active on the last shader
stage processing the primitives captured by transform feedback. The
set of attributes to capture in transform feedback mode for any other
program active on a previous shader stage is ignored.
I actually thought I could activate a pipeline object and do the query, but it seems to have no effect; my transform feedback writes nothing. Then I found this discussion in the Transform Feedback docs:
Can you output varyings from a seperate shader program created
with glCreateShaderProgramEXT?
RESOLVED: No.
glTransformFeedbackVaryings requires a re-link to take effect on a
program. glCreateShaderProgramEXT detaches and deletes the shader
object use to create the program so a glLinkProgram will fail.
You can still create a vertex or geometry shader program
with the standard GLSL creation process where you could use
glTransformFeedbackVaryings and glLinkProgram.
This is unclear too. Does the answer mean that to set transform feedback varyings one should use regular (non-separable) shader programs only? I don't get it.
What you are asking for is possible using 4.4.2.1 Transform Feedback Layout Qualifiers; unfortunately it is an OpenGL 4.4 feature. It is available in extension form through GL_ARB_enhanced_layouts, but this is a relatively new extension and adoption is sparse at the moment.
It is considerably more complicated than any of the more traditional layout qualifiers in GLSL, so your best bet for the foreseeable future is to manage the varyings from the GL API instead of in your shader.
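A minimal sketch of the API-side approach (prog and the varying names are placeholders):
const char* varyings[] = { "outValue", "outNormal" };
glTransformFeedbackVaryings(prog, 2, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog); // the varyings only take effect after a (re-)link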
As far as varyings in SSO (separable shader object) programs, the OpenGL specification states the following:
OpenGL 4.4 (Core Profile) - 13.2 Transform Feedback - p. 392
If separable program objects are in use, the set of attributes captured is taken
from the program object active on the last shader stage processing the primitives
captured by transform feedback. The set of attributes to capture in transform feedback mode for any other program active on a previous shader stage is ignored.
Ordinarily, linking identifies the varyings (declared in/out in modern GLSL) that are actually used between stages and establishes the set of active uniforms for a GLSL program. Linking trims the dead weight that is not shared across stages and performs static interface validation between stages; it is also when binding locations for any remaining varyings or uniforms are assigned. Since each program object can consist of a single stage when using SSOs, the linker will not reduce the number of inputs/outputs (varyings), and you can ignore a lot of language in the specification that says this must occur before or after linking.
Since there is no link step spanning the whole pipeline when using separate shader objects, your transform feedback has to be relative to a single stage (which can mean a different program object depending on which stage that is). OpenGL uses the program providing the final vertex processing stage enabled in your pipeline for this purpose: the geometry shader if present, otherwise the tessellation evaluation shader, otherwise the vertex shader. Whichever program provides that final vertex processing stage is the one your transform feedback varyings are relative to.
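Note that the quoted issue only rules out glCreateShaderProgramEXT-style creation; per that same text you can still build a separable program with the standard creation process and call glTransformFeedbackVaryings before linking. A sketch (the shader source and handle names are placeholders):
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vsSource, NULL);
glCompileShader(vs);

GLuint prog = glCreateProgram();
glProgramParameteri(prog, GL_PROGRAM_SEPARABLE, GL_TRUE);
glAttachShader(prog, vs);

const char* varyings[] = { "outValue" };
glTransformFeedbackVaryings(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog); // link with the varyings in place

GLuint pipeline;
glGenProgramPipelines(1, &pipeline);
glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT, prog);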

OpenGL 4.1 GL_ARB_separate_program_objects usefulness

I have been reading this OpenGL 4.1 new features review. I don't really understand the idea behind GL_ARB_separate_program_objects usage, at least based on how the post author puts it:
It allows to independently use shader stages without changing others
shader stages. I see two mains reasons for it: Direct3D, Cg and even
the old OpenGL ARB program does it but more importantly it brings some
software design flexibilities allowing to see the graphics pipeline at
a lower granularity. For example, my best enemy the VAO, is a
container object that links buffer data, vertex layout data and GLSL
program input data. Without a dedicated software design, this means
that when I change the material of an object (a new fragment shader),
I need different VAO... It's fortunately possible to keep the same VAO
and only change the program by defining a convention on how to
communicate between the C++ program and the GLSL program. It works
well even if some drawbacks remains.
Now, this line:
For example, my best enemy the VAO, is a container object that links buffer data, vertex layout data and GLSL program input data. Without a dedicated software design, this means that when I change the material of an object (a new fragment shader), I need different VAO...
makes me wonder. In my OpenGL programs I use VAOs and I can switch between different shader programs without making any change to the VAO itself. So, have I misunderstood the whole idea? Maybe he means we can switch shaders for the same program without re-linking?
I'm breaking this answer up into multiple parts.
What the purpose of ARB_separate_shader_objects is
The purpose of this functionality is to be able to easily mix-and-match between vertex/fragment/geometry/tessellation shaders.
Currently, you have to link all shader stages into one monolithic program. So I could be using the same vertex shader code with two different fragment shaders. But this results in two different programs.
Each program has its own set of uniforms and other state. Which means that if I want to change some uniform data in the vertex shader, I have to change it in both programs. I have to use glGetUniformLocation on each (since they could have different locations). I then have to set the value on each one individually.
That's a big pain, and it's highly unnecessary. With separate shaders, you don't have to. You have a program that contains just the vertex shader, and two programs that contain the two fragment shaders. Changing vertex shader uniforms doesn't require two glGetUniformLocation calls. Indeed, it's easier to cache the data, since there's only one vertex shader.
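With separate programs you can also set uniforms without binding anything, via the glProgramUniform* calls that ARB_separate_shader_objects added. A sketch (vertProg, the uniform name, and matrixData are placeholders):
GLint loc = glGetUniformLocation(vertProg, "modelToCamera");
glProgramUniformMatrix4fv(vertProg, loc, 1, GL_FALSE, matrixData); // no glUseProgram needed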
Also, it deals with the combinatorial explosion of shader combinations.
Let's say you have a vertex shader that does simple rigid transformations: it takes a model-to-camera matrix and a camera-to-clip matrix. Maybe a matrix for normals too. And you have a fragment shader that will sample from some texture, do some lighting computations based on the normal, and return a color.
Now let's say you add another fragment shader that takes extra lighting and material parameters. It doesn't have any new inputs from the vertex shaders (no new texture coordinates or anything), just new uniforms. Maybe it's for projective lighting, which the vertex shader isn't involved with. Whatever.
Now let's say we add a new vertex shader that does vertex weighted skinning. It provides the same outputs as the old vertex shader, but it has a bunch of uniforms and input weights for skinning.
That gives us 2 vertex shaders and 2 fragment shaders. A total of 4 program combinations.
What happens when we add 2 more compatible fragment shaders? We get 8 combinations. If we have 3 vertex and 10 fragment shaders, we have 30 total program combinations.
With separate shaders, 3 vertex and 10 fragment shaders need at most 30 program pipeline objects, but only 13 program objects. That's over 50% fewer program objects than the non-separate case.
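The mixing and matching itself is done with a program pipeline object; a sketch (the program handles are placeholders):
GLuint pipeline;
glGenProgramPipelines(1, &pipeline);
glBindProgramPipeline(pipeline);
glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT, vertProg);
glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fragProgA);
// ...draw...
glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fragProgB); // swap only the fragment stage
// ...draw...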
Why the quoted text is wrong
Now, this line [...] makes me wonder.
It should make you wonder; it's wrong in several ways. For example:
the VAO, is a container object that links buffer data, vertex layout data and GLSL program input data.
No, it does not. It ties buffer objects that provide vertex data to the vertex formats for that data, and it specifies which vertex attribute indices that data goes to. But how tightly coupled this is to "GLSL program input data" is entirely up to you.
Without a dedicated software design, this means that when I change the material of an object (a new fragment shader), I need different VAO...
Unless this line equates "a dedicated software design" with "reasonable programming practice", this is pure nonsense.
Here's what I mean. You'll see example code online that does things like this when they set up their vertex data:
glBindBuffer(GL_ARRAY_BUFFER, buffer_object);
glEnableVertexAttribArray(glGetAttribLocation(prog, "position"));
glVertexAttribPointer(glGetAttribLocation(prog, "position"), ...);
There is a technical term for this: terrible code. The only reason to do this is if the shader specified by prog is somehow not under your direct control. And if that's the case... how do you know that prog has an attribute named "position" at all?
Reasonable programming practice for shaders is to use conventions. That's how you know prog has an attribute named "position". But if you know that every program is going to have an attribute named "position", why not take it one step further? When it comes time to link a program, do this:
GLuint prog = glCreateProgram();
glAttachShader(prog, ...); //Repeat as needed.
glBindAttribLocation(prog, 0, "position");
After all, you know that this program must have an attribute named "position"; you're going to assume that when you get its location later. So cut out the middleman and tell OpenGL what location to use.
This way, you don't have to use glGetAttribLocation; just use 0 when you mean "position".
Even if prog doesn't have an attribute named "position", this will still link successfully. OpenGL doesn't mind if you bind attribute locations that don't exist. So you can just apply a series of glBindAttribLocation calls to every program you create, without problems. Indeed, you can have multiple conventions for your attribute names, and as long as you stick to one set or the other, you'll be fine.
Even better, stick it in the shader and don't bother with the glBindAttribLocation solution at all:
#version 330
layout(location = 0) in vec4 position;
In short: always use conventions for your attribute locations. If you see glGetAttribLocation in a program, consider that a code smell. That way, you can use any VAO for any program, since the VAO is simply written against the convention.
I don't see how having a convention equates to "dedicated software design", but hey, I didn't write that line either.
I can switch between different shader programs
Yes, but you have to replace whole programs altogether. Separate shader objects allow you to replace only one stage (e.g. only vertex shader).
If you have, for example, N vertex shaders and M fragment shaders, conventional linking gives you N * M program objects (to cover all possible combinations). Using separate shader objects, the stages are independent of each other, so you need to keep only N + M program objects. That's a significant improvement in complex scenarios.

how to apply shader to specific object

I have several objects in my scene. I want to apply my shader to only one of them. Environment: OpenGL 2.0, C++, GLUT, GLEW.
The shader program is only in effect for as long as it is installed. Only the draw calls you make while the program is installed will use the shader. You must install your shader, draw your object, and then uninstall the shader.
Edit: By "install" the shader I mean use glUseProgram with your shader's handle. By "uninstall" I mean either installing another shader or calling glUseProgram with an argument of 0. See glUseProgram. My "install/uninstall" terminology comes from there.
In your draw call, draw that object with that shader and draw the other ones without it; it can't really be any simpler than that ;P You could use enums in your object class to specify which shaders are enabled for that object, and only pass it through a shader when it is supposed to be. Of course, if it's a fullscreen pixel shader then you're in trouble, as it processes every pixel and renders a new image to the display, unless you have a way of passing the object as a parameter and an algorithm to apply the alterations only at the location of that object.