VBO with and without shaders (OpenGL, C++)

I'm trying to move to modern OpenGL, but the problem is: most tutorials are based on 3.3+ and talk about GLSL 330, while I only have GLSL 130. Apparently many things are different, since my VBOs do not work.
Could you give me general hints or a tutorial that explains how to use GLSL 130 with VBOs? In my case, the VBO is loaded, but when I use my shader program, only vertices issued with glVertex get rendered; it's as if the VBO were ignored (no input). How can I solve this?
And can you use VBOs without shaders? I tried that, but it crashed...

Yes, VBOs can still be used with GLSL 130, and they can even be used without shaders at all. The purpose of a VBO is to hold the vertex attributes for drawing. Most up-to-date tutorials I've seen have you use the layout location qualifier to indicate how to address the different attributes in your shader, i.e.
layout(location = 0) in vec3 Position;
This isn't supported in GLSL 130, so you need another way of relating the attribute to the VBO. It's pretty simple: you can use glBindAttribLocation or glGetAttribLocation. Calling glGetAttribLocation gives you the identifier you need to pass to glVertexAttribPointer to associate the VBO data with the particular attribute; you can call it at any time after the program has been linked. Alternatively, glBindAttribLocation lets you set the identifier that will be associated with a given attribute name, provided you call it after you've created the program object but before you link it. This is handy because it lets you decide for yourself what the location should be, just as you would with the layout qualifier.
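To make that concrete, here is a minimal sketch of the GLSL 130 path; the attribute name "position" and the handles prog, vbo and vertexCount are placeholders for your own objects:
// Either query the location the linker picked...
GLint posLoc = glGetAttribLocation(prog, "position");
// ...or force the location yourself, and then you already know it:
//   glBindAttribLocation(prog, 0, "position");  // must come BEFORE glLinkProgram
//   GLint posLoc = 0;

// Hook the VBO data up to that attribute and draw:
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(posLoc);
glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glUseProgram(prog);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);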
Finally, if you want to use a VBO without a shader at all, you still have to associate the data in the VBO with the inputs the fixed-function pipeline expects. This is done with the now-deprecated functions glEnableClientState() and glVertexPointer(), which together tell OpenGL which fixed-function attribute you're going to populate and how to find the data in the VBO.
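A sketch of the shaderless path (vbo and vertexCount are placeholders). Note that the pointer argument is an offset into the bound VBO, not a client memory address; forgetting to bind the buffer first is a classic cause of crashes like the one you describe:
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);        // tell GL we supply positions
glVertexPointer(3, GL_FLOAT, 0, (void*)0);   // 3 floats per vertex, tightly packed
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
glDisableClientState(GL_VERTEX_ARRAY);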

Related

Custom Vertex Attributes GLSL

I want to make a couple of vec4 vertex attributes in my shaders. I've done quite a bit of googling, but I can't seem to find consistent information for specifically what I want to do.
My goal here is to move skinning to the GPU, so I need a list of bones and weights per vertex, hence why I want to use vertex attributes. I have 2 arrays of floats that represent this data. Basically this:
weightsBuffer = new float[vSize*4];
indexesBuffer = new int[vSize*4];
The part that I can't consistently find is how to upload these and use them in the shader. To be clear, I don't want to upload all the position, normal and texture coordinate data, I'm already using display lists and have decided to keep using them for a few reasons that aren't relevant. How can I create the buffers and bind them properly so I can use them?
Thanks.
Binding your bone weights and indices is no different a process from binding your position data. Assuming the data is generated properly in your buffers, you use glBindAttribLocation to bind the attribute index in your vertex stream to your shader variable, and glVertexAttribPointer to define your vertex array (and don't forget glEnableVertexAttribArray).
The exact code may vary depending on whether you're using VAOs and VBOs (or just client-side arrays). If you want a more specific answer, you should post your code and shader.
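As a minimal sketch: assume the shader declares attributes named boneWeights and boneIndices, the two VBOs (weightsVBO, indicesVBO) already hold your arrays, and attribute indices 4 and 5 are free; all of these names and numbers are placeholders.
glBindAttribLocation(prog, 4, "boneWeights");  // call before glLinkProgram
glBindAttribLocation(prog, 5, "boneIndices");
glLinkProgram(prog);

glBindBuffer(GL_ARRAY_BUFFER, weightsVBO);
glEnableVertexAttribArray(4);
glVertexAttribPointer(4, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);

glBindBuffer(GL_ARRAY_BUFFER, indicesVBO);
glEnableVertexAttribArray(5);
// If the shader declares the indices as ivec4 (GL 3.0+), use the integer variant;
// plain glVertexAttribPointer with GL_INT would convert the values to float.
glVertexAttribIPointer(5, 4, GL_INT, 0, (void*)0);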

OpenGL transform feedback definition completely inside shader

I'm trying to get my transform feedback running. I want to specify my buffer layout entirely from the shaders, using core 4.4 or the GL_ARB_enhanced_layouts extension with layout (xfb_offset=xx) qualifiers. I assumed that after declaring these in a vertex shader I could call
GLint iTransformFeedbackVars;
glGetProgramiv(m_uProgramID, GL_TRANSFORM_FEEDBACK_VARYINGS, &iTransformFeedbackVars);
to get the number of variables that want to be written to a transform feedback buffer. But OpenGL keeps returning 0 in "iTransformFeedbackVars". I tried calling the above command both BEFORE and AFTER linking the program.
Am I missing something here, or is it even possible to let the shader specify the variables it wants to write, with my code creating the buffer(s) according to the shader's wishes?

OpenGL 4.1 GL_ARB_separate_program_objects usefulness

I have been reading this OpenGL 4.1 new features review. I don't really understand the idea behind GL_ARB_separate_program_objects usage, at least based on how the post author puts it:
It allows to independently use shader stages without changing others shader stages. I see two mains reasons for it: Direct3D, Cg and even the old OpenGL ARB program does it but more importantly it brings some software design flexibilities allowing to see the graphics pipeline at a lower granularity. For example, my best enemy the VAO, is a container object that links buffer data, vertex layout data and GLSL program input data. Without a dedicated software design, this means that when I change the material of an object (a new fragment shader), I need different VAO... It's fortunately possible to keep the same VAO and only change the program by defining a convention on how to communicate between the C++ program and the GLSL program. It works well even if some drawbacks remains.
Now, this line:
For example, my best enemy the VAO, is a container object that links buffer data, vertex layout data and GLSL program input data.Without a dedicated software design, this means that when I change the material of an object (a new fragment shader), I need different VAO...
makes me wonder. In my OpenGL programs I use VAOs and I can switch between different shader programs without making any change to the VAO itself. So, have I misunderstood the whole idea? Or maybe he means we can switch shaders for the same program without re-linking?
I'm breaking this answer up into multiple parts.
What the purpose of ARB_separate_shader_objects is
The purpose of this functionality is to be able to easily mix-and-match between vertex/fragment/geometry/tessellation shaders.
Currently, you have to link all shader stages into one monolithic program. So I could be using the same vertex shader code with two different fragment shaders. But this results in two different programs.
Each program has its own set of uniforms and other state. Which means that if I want to change some uniform data in the vertex shader, I have to change it in both programs. I have to use glGetUniformLocation on each (since they could have different locations). I then have to set the value on each one individually.
That's a big pain, and it's highly unnecessary. With separate shaders, you don't have to. You have a program that contains just the vertex shader, and two programs that contain the two fragment shaders. Changing vertex shader uniforms doesn't require two glGetUniformLocation calls. Indeed, it's easier to cache the data, since there's only one vertex shader.
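As an illustration, a minimal sketch of that arrangement using the separate-shader-objects API; the source strings, the uniform name "modelToClip" and mvpMatrix are assumptions:
GLuint vertProg  = glCreateShaderProgramv(GL_VERTEX_SHADER,   1, &vsSource);
GLuint fragProgA = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fsSourceA);
GLuint fragProgB = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fsSourceB);

// The vertex-shader uniform now lives in exactly one program, so set it once:
GLint mvpLoc = glGetUniformLocation(vertProg, "modelToClip");
glProgramUniformMatrix4fv(vertProg, mvpLoc, 1, GL_FALSE, mvpMatrix);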
Also, it deals with the combinatorial explosion of shader combinations.
Let's say you have a vertex shader that does simple rigid transformations: it takes a model-to-camera matrix and a camera-to-clip matrix. Maybe a matrix for normals too. And you have a fragment shader that will sample from some texture, do some lighting computations based on the normal, and return a color.
Now let's say you add another fragment shader that takes extra lighting and material parameters. It doesn't have any new inputs from the vertex shaders (no new texture coordinates or anything), just new uniforms. Maybe it's for projective lighting, which the vertex shader isn't involved with. Whatever.
Now let's say we add a new vertex shader that does vertex weighted skinning. It provides the same outputs as the old vertex shader, but it has a bunch of uniforms and input weights for skinning.
That gives us 2 vertex shaders and 2 fragment shaders. A total of 4 program combinations.
What happens when we add 2 more compatible fragment shaders? We get 8 combinations. If we have 3 vertex and 10 fragment shaders, we have 30 total program combinations.
With separate shaders, 3 vertex and 10 fragment shaders need 30 program pipeline objects, but only 13 program objects. That's more than 50% fewer program objects than in the non-separate case.
Why the quoted text is wrong
Now, this line [...] makes me wonder.
It should make you wonder; it's wrong in several ways. For example:
the VAO, is a container object that links buffer data, vertex layout data and GLSL program input data.
No, it does not. It ties buffer objects that provide vertex data to the vertex formats for that data, and it specifies which vertex attribute indices that data goes to. But how tightly coupled this is to "GLSL program input data" is entirely up to you.
Without a dedicated software design, this means that when I change the material of an object (a new fragment shader), I need different VAO...
Unless this line equates "a dedicated software design" with "reasonable programming practice", this is pure nonsense.
Here's what I mean. You'll see example code online that does things like this when they set up their vertex data:
glBindBuffer(GL_ARRAY_BUFFER, buffer_object);
glEnableVertexAttribArray(glGetAttribLocation(prog, "position"));
glVertexAttribPointer(glGetAttribLocation(prog, "position"), ...);
There is a technical term for this: terrible code. The only reason to do this is if the shader specified by prog is somehow not under your direct control. And if that's the case... how do you know that prog has an attribute named "position" at all?
Reasonable programming practice for shaders is to use conventions. That's how you know prog has an attribute named "position". But if you know that every program is going to have an attribute named "position", why not take it one step further? When it comes time to link a program, do this:
GLuint prog = glCreateProgram();
glAttachShader(prog, ...); //Repeat as needed.
glBindAttribLocation(prog, 0, "position");
After all, you know that this program must have an attribute named "position"; you're going to assume that when you get its location later. So cut out the middle man and tell OpenGL what location to use.
This way, you don't have to use glGetAttribLocation; just use 0 when you mean "position".
Even if prog doesn't have an attribute named "position", this will still link successfully. OpenGL doesn't mind if you bind attribute locations that don't exist. So you can just apply a series of glBindAttribLocation calls to every program you create, without problems. Indeed, you can have multiple conventions for your attribute names, and as long as you stick to one set or the other, you'll be fine.
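For instance, a hypothetical convention table might be applied to every program like this (the names and indices are just examples):
static const struct { GLuint loc; const char* name; } attribConvention[] = {
    { 0, "position" }, { 1, "normal" }, { 2, "texCoord" },
};
for (const auto& a : attribConvention)
    glBindAttribLocation(prog, a.loc, a.name);  // harmless if the name is absent
glLinkProgram(prog);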
Even better, stick it in the shader and don't bother with the glBindAttribLocation solution at all:
#version 330
layout(location = 0) in vec4 position;
In short: always use conventions for your attribute locations. If you see glGetAttribLocation in a program, consider that a code smell. That way, you can use any VAO for any program, since the VAO is simply written against the convention.
I don't see how having a convention equates to "dedicated software design", but hey, I didn't write that line either.
I can switch between different shader programs
Yes, but you have to replace whole programs altogether. Separate shader objects allow you to replace only one stage (e.g. only the vertex shader).
If you have, for example, N vertex shaders and M fragment shaders, with conventional linking you would need N * M program objects (to cover all possible combinations). Using separate shader objects, the stages are independent of each other, so you only need to keep N + M shader objects around. That's a significant improvement in complex scenarios.
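For illustration, with separable programs the mixing happens in a program pipeline object at draw time; a sketch, where vertProg and fragProgB stand for any of your N + M separable programs:
GLuint pipeline;
glGenProgramPipelines(1, &pipeline);
glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT,   vertProg);   // any of the N
glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fragProgB);  // any of the M
glBindProgramPipeline(pipeline);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);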

OpenGL 3.3, GLSL 1.5: How to setup a Texture Buffer Object containing various texture2D?

I've been wondering whether it is possible to have an array of sampler2D in a GLSL 1.5 vertex shader.
I need to access 30 different 2d-textures from my vertex shader. I read that it is not possible to have a structure like:
uniform sampler2D texture[30];
However, having 30 different uniforms is a bit exaggerated and fairly hard to manage...
So, that brought me to the idea of a texture buffer object. TBOs have been supported since OpenGL 3.0. However, I couldn't find a good tutorial or example that shows how to initialize a TBO with more than one texture.
This website shows an example on how to initialize a TBO with a single texture. No big deal at all. I think the most important method is
void createTBO(GLuint* tbo, GLuint* tex)
By executing the method
glTexBufferEXT(GL_TEXTURE_BUFFER_EXT, GL_RGBA32F_ARB, *tbo);
one can actually attach the buffer to the texture. This is also mentioned here. I assume calling glTexBuffer 30 times in a row wouldn't do the trick.
So, I've been thinking if there might be another way of getting the very same result. I came up with two ideas:
Adding the 30 2D textures to a 3D texture and attaching that directly to the vertex shader. However, that would be a big waste of memory, since most of the 3D texture's layers wouldn't be used.
Using a structure called sampler2DArray. It is mentioned in the specs, but I searched the web and couldn't find any valuable information about how to implement it.
So, my questions are:
How do I setup a TBO containing more than only 1 texture?
Is that possible at all?
Do you know sources where I could find information about adding 2d-textures to a 3d-texture?
Do you know websites where I could find information about the initializing, binding and usage of sampler2DArray?
I'd be grateful if you could advise me. I'm really a newbie in terms of OpenGL.
Thanks
Walter
I believe you misunderstood what a Texture Buffer Object (TBO) is. A TBO is used to access a buffer object inside a shader, nothing more. It doesn't hold multiple textures or anything like that.
If the textures are of the same size, you can use a 3D texture or a Texture array. A TBO is no use for your problem.
You could use sampler2DArray uniforms. You can use them to pass multiple textures to your shader.
http://www.opengl.org/registry/specs/EXT/texture_array.txt
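For what it's worth, here is a rough sketch of setting up a texture array on the C++ side, assuming all 30 textures share the same size (width, height and texData are placeholders); in the shader you would declare a sampler2DArray and select the layer with the third texture coordinate:
GLuint texArray;
glGenTextures(1, &texArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, texArray);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, width, height, 30,  // 30 layers
             0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);                  // allocate only
for (int layer = 0; layer < 30; ++layer)
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                    width, height, 1, GL_RGBA, GL_UNSIGNED_BYTE, texData[layer]);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);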
An alternative solution would be to use a very large TBO and store all the textures within it. A texture can be as large as 11585x11585 texels (2^27 texels in total).
http://www.opengl.org/registry/specs/ARB/texture_buffer_object.txt
As Jotschi suggested, use one very large texture and adjust texture coordinates accordingly (or write a shader that maps standard coordinates to the right place).
This is what's called a texture atlas. I'd guess you can find some implementations by searching for that term.
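As a rough sketch, the coordinate adjustment for a hypothetical atlas laid out as a uniform grid could look like this (tilesPerRow and the tile indexing scheme are assumptions):
// tile: index of the sub-texture; (u, v): coordinates within that sub-texture
void atlasUV(int tile, int tilesPerRow, float u, float v, float* au, float* av)
{
    float tileSize = 1.0f / tilesPerRow;
    *au = (tile % tilesPerRow + u) * tileSize;
    *av = (tile / tilesPerRow + v) * tileSize;
}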

Vertex shader attribute mapping in GLSL

I'm coding a small rendering engine with GLSL shaders:
Each Mesh (well, submesh) has a number of vertex streams (eg. position,normal,texture,tangent,etc) into one big VBO and a MaterialID.
Each Material has a set of textures and properties (eg. specular-color, diffuse-color, color-texture, normal-map,etc)
Then I have a GLSL shader, with its uniforms and attributes. Let's say:
uniform vec3 DiffuseColor;
uniform sampler2D NormalMapTexture;
attribute vec3 Position;
attribute vec2 TexCoord;
I'm a little bit stuck in trying to design a way for the GLSL shader to define the stream mappings (semantics) for the attributes and uniforms, and then bind the vertex streams to the appropriate attributes.
Something along the lines of saying to the mesh: "put your position stream in attribute "Position" and your tex coordinates in "TexCoord". Also put your material's diffuse color in "DiffuseColor" and your material's second texture in "NormalMapTexture"."
At the moment I am using hard-coded names for the attributes (ie. vertex pos is always "Position" ,etc) and checking each uniform and attribute name to understand what the shader is using it for.
I guess I'm looking for some way of creating a "vertex declaration", but including uniforms and textures too.
So I'm just wondering how people do this in large-scale rendering engines.
Edit:
Recap of suggested methods:
1. Attribute/Uniform semantics are given by the name of the variable
(what I'm doing now)
Using pre-defined names for each possible attribute. The GLSL binder queries the name of each attribute and links the vertex array based on the variable's name:
//global static variable
semantics (name, normalize, offset) = {"Position", false, 0}, {"Normal", true, 1}, {"TextureUV", false, 2}
...when linking
for (int index = 0; index < allAttribs; index++)
{
    glGetActiveAttrib(program, index, bufSize, &length, &size[index], &type[index], name);
    semantics[index] = GetSemanticsFromGlobalHardCodedList(name);
}
... when binding vertex arrays for render
for (int index = 0; index < allAttribs; index++)
{
    glVertexAttribPointer(index, size[index], type[index], semantics[index]->normalized,
                          bufferStride, semantics[index]->offset);
}
2. Predefined locations for each semantic
The GLSL binder always binds the vertex arrays to the same locations. It is up to the shader to use the appropriate names to match. (This seems awfully similar to method 1, but unless I misunderstood, it implies binding ALL available vertex data, even if the shader does not consume it.)
.. when linking the program...
glBindAttribLocation(prog, 0, "mg_Position");
glBindAttribLocation(prog, 1, "mg_Color");
glBindAttribLocation(prog, 2, "mg_Normal");
3. Dictionary of available attributes from Material, Engine globals, Renderer and Mesh
Maintain a list of available attributes published by the active Material, the Engine globals, the current Renderer and the current Scene Node.
eg:
Material has (uniformName,value) = {"ambientColor", (1.0,1.0,1.0)}, {"diffuseColor",(0.2,0.2,0.2)}
Mesh has (attributeName,offset) = {"Position",0,},{"Normals",1},{"BumpBlendUV",2}
then in shader:
uniform vec3 ambientColor, diffuseColor;
attribute vec3 Position;
When binding the vertex data to the shader, the GLSL binder loops over the attribs and binds each one to the entry found (or not?) in the dictionary:
for (int index = 0; index < allAttribs; index++)
{
    glGetActiveAttrib(program, index, bufSize, &length, &size[index], &type[index], name);
    semantics[index] = Mesh->GetAttributeSemantics(name);
}
and the same with uniforms, except that the active Material and the globals are queried as well.
Attributes:
Your mesh has a number of data streams. For each stream you can keep the following info: (name, type, data).
Upon linking, you can query the GLSL program for active attributes and form an attribute dictionary for this program. Each element here is just (name, type).
When you draw a mesh with a specified GLSL program, you go through the program's attribute dictionary and bind the corresponding mesh streams (or report an error in case of inconsistency).
Uniforms:
Let the shader parameter dictionary be the set of (name, type, data link). Typically, you can have the following dictionaries:
Material (diffuse,specular,shininess,etc) - taken from the material
Engine (camera, model, lights, timers, etc) - taken from engine singleton (global)
Render (custom parameters related to the shader creator: SSAO radius, blur amount, etc) - provided exclusively by the shader creator class (render)
After linking, the GLSL program is given a set of parameter dictionaries in order to populate its own dictionary, whose elements have the format (location, type, data link). The population is done by querying the list of active uniforms and matching each (name, type) pair with one in the dictionaries.
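A sketch of that population step might look like the following, where BindParameter and LookUpDictionaries stand in for whatever your engine's dictionary machinery provides:
GLint count;
glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);
for (GLint i = 0; i < count; ++i) {
    char name[256]; GLsizei len; GLint size; GLenum type;
    glGetActiveUniform(program, i, sizeof(name), &len, &size, &type, name);
    GLint location = glGetUniformLocation(program, name);
    // (location, type, data link): the data link comes from whichever
    // dictionary (material / engine / render) owns this (name, type) pair.
    BindParameter(location, type, LookUpDictionaries(name, type));
}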
Conclusion:
This method allows for any custom vertex attributes and shader uniforms to be passed, without hard-coded names/semantics in the engine. Basically only the loader and render know about particular semantics:
Loader fills out the mesh data streams declarations and materials dictionaries.
Render uses a shader that is aware of the names, provides additional parameters and selects proper meshes to be drawn with.
In my experience, OpenGL does not define the concept of attribute or uniform semantics.
All you can do is define your own way of mapping your semantics to OpenGL variables, using the only parameter you can control about these variables: their location.
If you're not constrained by platform issues, you could try the 'new' GL_ARB_explicit_attrib_location extension (core in OpenGL 3.3, if I'm not mistaken), which allows shaders to explicitly state which location is intended for which attribute. This way, you can hardcode (or configure) which data you want to bind to which attribute location, and query the shaders' locations after they're compiled. This feature does not yet seem mature, though, and may be subject to bugs in various drivers.
The other way around is to bind the locations of your attributes using glBindAttribLocation. For this, you have to know the names of the attributes that you want to bind, and the locations you want to assign them.
To find out the names used in a shader, you can:
query the shader for active attributes
parse the shader source code to find them yourself
I would not recommend the GLSL parsing route (although it may suit your needs in simple enough contexts): the parser can easily be defeated by the preprocessor. Once your shader code becomes somewhat complex, you may want to start using #includes, #defines, #ifdefs, etc., and robust parsing presupposes a robust preprocessor, which can become quite a heavy lift to set up.
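For the first option, a minimal sketch of the query route (note it only sees attributes that are active, i.e. that survived compilation and linking):
GLint count;
glGetProgramiv(prog, GL_ACTIVE_ATTRIBUTES, &count);
for (GLint i = 0; i < count; ++i) {
    char name[256]; GLsizei len; GLint size; GLenum type;
    glGetActiveAttrib(prog, i, sizeof(name), &len, &size, &type, name);
    printf("active attribute: %s\n", name);
}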
Anyway, once you have your active attribute names, you have to assign them locations (and/or semantics); for this, you're on your own with your use case.
In our engine, we happily hardcode locations of predefined names to specific values, such as:
glBindAttribLocation(prog, 0, "mg_Position");
glBindAttribLocation(prog, 1, "mg_Color");
glBindAttribLocation(prog, 2, "mg_Normal");
...
After that, it's up to the shader writer to conform to the predefined semantics of the attributes.
AFAIK this is the most common way of doing things; OGRE uses it, for example. It's not rocket science, but it works well in practice.
If you want to add some control, you could provide an API to define the semantics on a shader basis, perhaps even having this description in an additional file, easily parsable, living near the shader source code.
I won't get into uniforms, where the situation is almost the same, except that the newer extensions allow you to force GLSL uniform blocks into a memory layout compatible with your application.
I'm not satisfied by all this myself, so I'll be happy to have some contradictory information :)
You may want to consider actually parsing the GLSL itself.
The uniform/attribute declaration syntax is pretty simple. You can come up with a small manual parser that looks for lines starting with uniform or attribute, extracts the type and name, and then exposes some C++ API using strings. This will save you the trouble of hard-coded names. If you don't want to get your hands dirty with manual parsing, a couple of lines of Spirit would do the trick.
You probably won't want to fully parse GLSL, so you'll need to make sure you don't do anything funny in the declarations that might alter their actual meaning. One complication that comes to mind is conditional compilation using macros in GLSL.