Why does OpenGL drawing fail when vertex attrib array zero is disabled?

I was having extreme trouble getting a vertex shader of mine to run under OpenGL 3.3 core on an ATI driver:
#version 150
uniform mat4 graph_matrix, view_matrix, proj_matrix;
uniform bool align_origin;
attribute vec2 graph_position;
attribute vec2 screen_position;
attribute vec2 texcoord0;
attribute vec4 color;
varying vec2 texcoord0_px;
varying vec4 color_px;
void main() {
    // Pick the position or the annotation position
    vec2 pos = graph_position;
    // Transform the coordinates
    pos = vec2(graph_matrix * vec4(pos, 0.0, 1.0));
    if( align_origin )
        pos = floor(pos + vec2(0.5, 0.5)) + vec2(0.5, 0.5);
    gl_Position = proj_matrix * view_matrix * vec4(pos + screen_position, 0.0, 1.0);
    texcoord0_px = texcoord0;
    color_px = color;
}
I used glVertexAttrib4f to specify the color attribute, and turned the attribute array off. According to page 33 of the 3.3 core spec, that should work:
If an array corresponding to a generic attribute required by a vertex shader is not enabled, then the corresponding element is taken from the current generic attribute state (see section 2.7).
But (most of the time, depending on the profile and driver) the shader either didn't run at all or used black if I accessed the disabled color attribute. Replacing it with a constant got it to run.
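For reference, the setup was roughly the following (a simplified sketch; program and vertex_count stand in for the real objects in my code):
GLint color_loc = glGetAttribLocation(program, "color");
glDisableVertexAttribArray(color_loc);               // no per-vertex array for "color"
glVertexAttrib4f(color_loc, 1.0f, 1.0f, 1.0f, 1.0f); // set the current generic attribute value instead
// ... the other attributes stay array-enabled, then:
glDrawArrays(GL_TRIANGLE_STRIP, 0, vertex_count);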
Much searching yielded this page of tips regarding WebGL, which had the following to say:
Always have vertex attrib 0 array enabled. If you draw with vertex attrib 0 array disabled, you will force the browser to do complicated emulation when running on desktop OpenGL (e.g. on Mac OSX). This is because in desktop OpenGL, nothing gets drawn if vertex attrib 0 is not array-enabled. You can use bindAttribLocation() to force a vertex attribute to use location 0, and use enableVertexAttribArray() to make it array-enabled.
Sure enough, not only was the color attribute assigned to index zero, but if I force-bound a different, array-enabled attribute to zero, the code ran and produced the right color.
I can't find any other mention of this rule anywhere, and certainly not on ATI hardware. Does anyone know where this rule comes from? Or is this a bug in the implementation that the Mozilla folks noticed and warned about?

tl;dr: this is a driver bug. Core OpenGL 3.3 should allow you to not use attribute 0, but the compatibility profile does not, and some drivers don't implement that switch correctly. Just make sure to use attribute 0 to avoid any problems.
Actual Content:
Let's have a little history lesson in how the OpenGL specification came to be.
In the most ancient days of OpenGL, there was exactly one way to render: immediate mode (ie: glBegin/glVertex/glColor/glEtc/glEnd). Display lists existed, but they were always defined as simply sending the captured commands again. So while implementations didn't actually make all of those function calls, implementations would still behave as if they did.
In OpenGL 1.1, client-side vertex arrays were added to the specification. Now remember: the specification is a document that specifies behavior, not implementation. Therefore, the ARB simply defined that client-side arrays worked exactly like making immediate mode calls, using the appropriate accesses to the current array pointers. Obviously implementations wouldn't actually do that, but they behaved as if they did.
Buffer-object-based vertex arrays were defined in the same way, though with language slightly complicated by pulling from server storage instead of client storage.
Then something happened: ARB_vertex_program (not ARB_vertex_shader. I'm talking about assembly programs).
See, once you have shaders, you want to start being able to define your own attributes instead of using the built-in ones. And that all made sense. However, there was one problem.
Immediate mode works like this:
glBegin(...);
glTexCoord(...);
glColor(...);
glVertex(...);
glTexCoord(...);
glColor(...);
glVertex(...);
glTexCoord(...);
glColor(...);
glVertex(...);
glEnd();
Every time you call glVertex, this causes all of the current attribute state to be used for a single vertex. All of the other immediate mode functions simply set values into the context; this function actually sends the vertex to OpenGL to be processed. That's very important in immediate mode. And since every vertex must have a position in fixed-function land, it made sense to use this function to decide when a vertex should be processed.
Once you're no longer using OpenGL's fixed-function vertex semantics, you have a problem in immediate mode. Namely, how do you decide when to actually send the vertex?
By convention, they stuck this onto attribute 0. Therefore, all immediate mode rendering must use either attribute 0 or glVertex to send a vertex.
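Concretely, the convention looks like this in legacy immediate mode (a sketch with made-up values; attribute 1 here is just some other generic attribute):
glBegin(GL_TRIANGLES);
glVertexAttrib4f(1, 1.0f, 0.0f, 0.0f, 1.0f);    // just updates the current value of attribute 1
glVertexAttrib4f(0, -0.5f, -0.5f, 0.0f, 1.0f);  // setting attribute 0 is what actually emits the vertex
glVertexAttrib4f(1, 0.0f, 1.0f, 0.0f, 1.0f);
glVertexAttrib4f(0, 0.5f, -0.5f, 0.0f, 1.0f);
glVertexAttrib4f(1, 0.0f, 0.0f, 1.0f, 1.0f);
glVertexAttrib4f(0, 0.0f, 0.5f, 0.0f, 1.0f);
glEnd();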
However, because all other rendering is based on the language of immediate mode rendering, all other rendering has the same limitations of immediate mode rendering. Immediate mode requires attribute 0 or glVertex, and therefore so too do client-side arrays and so forth. Even though it doesn't make sense for them to, they need it because of how the specification defines their behavior.
Then OpenGL 3.0 came around. They deprecated immediate mode. Deprecated does not mean removed; the specification still had those functions in it, and all vertex array rendering was still defined in terms of them.
OpenGL 3.1 actually ripped out the old stuff. And that posed a bit of a language problem. After all, every array drawing command was always defined in terms of immediate mode. But once immediate mode no longer exists... how do you define it?
So they had to come up with new language for core OpenGL 3.1+. While doing so, they removed the pointless restriction on needing to use attribute 0.
But the compatibility profile did not.
Therefore, the rule for OpenGL 3.2+ is this. If you have a core OpenGL profile, then you do not have to use attribute 0. If you have a compatibility OpenGL profile, you must use attribute 0 (or glVertex). That's what the specification says.
But that's not what implementations implement.
In general, NVIDIA never cared much for the "must use attribute 0" rule and just does it how you would expect, even in compatibility profiles, thus violating the letter of the specification. AMD is generally more likely to stick to the specification; however, they forgot to implement the core behavior correctly. So NVIDIA is too permissive on compatibility, and AMD is too restrictive on core.
To work around these driver bugs, simply always use attribute 0.
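In practice that just means pinning an attribute that always has an enabled array to location 0 before linking, for example (a sketch using the attribute names from the shader in the question):
glBindAttribLocation(program, 0, "graph_position"); // this one always has an enabled array
glLinkProgram(program);
// "color" now lands on a non-zero location, so glVertexAttrib4f with a disabled array is safe(r).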
BTW, if you're wondering, NVIDIA won. In OpenGL 4.3, the compatibility profile uses the same wording for its array rendering commands as core. Thus, you're allowed to not use attribute 0 on both core and compatibility.

Related

How Many Shader Programs Do I Really Need?

Let's say I have a shader set up to use 3 textures, and that I need to render some polygon that needs all the same shader attributes except that it requires only 1 texture. I have noticed on my own graphics card that I can simply call glDisableVertexAttribArray() to disable the other two textures, and that doing so apparently causes the disabled texture data received by the fragment shader to be all white (1.0f). In other words, if I have a fragment shader instruction (pseudo-code)...
final_red = tex0.red * tex1.red * tex2.red
...the operation produces the desired final value regardless whether I have 1, 2, or 3 textures enabled. From this comes a number of questions:
1. Is it legit to disable expected textures like this, or is it a coincidence that my particular graphics card has this apparent mathematical safeguard?
2. Is the "best practice" to create a separate shader program that only expects a single texture for single-texture rendering?
3. If either approach is valid, is there a benefit to creating a second shader program? I'm thinking it would cost less time to make 2 glDisableVertexAttribArray() calls than to make a glUseProgram() + 5-6 glGetUniformLocation() calls, but maybe #4 addresses that issue.
4. When changing the active shader program with glUseProgram(), do I need to call glGetUniformLocation() every time to re-establish the location of each uniform in the program, or is the location of each expected to stay the same until the shader program is deallocated?
Disabling vertex attributes would not really disable your textures; it would just give you undefined texture coordinates. That might produce an effect similar to disabling a certain texture, but to do this properly you should use a uniform or possibly subroutines (if you have dozens of variations of the same shader).
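For example, a sketch of the uniform-based approach (the u_useTex1/u_useTex2 names and the shader logic are assumptions, not anything from your code):
GLint loc_useTex1 = glGetUniformLocation(program, "u_useTex1");
GLint loc_useTex2 = glGetUniformLocation(program, "u_useTex2");
glUseProgram(program);
glUniform1i(loc_useTex1, 0); // skip tex1 for this draw
glUniform1i(loc_useTex2, 0); // skip tex2 for this draw
// Fragment shader side (sketch): if (u_useTex1) colour *= texture2D(tex1, uv);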
As far as the time taken to disable a vertex array state, that's probably going to be slower than changing a uniform value. Setting uniform values doesn't really affect the render pipeline state; they're just small changes to memory. Likewise, constantly swapping the current GLSL program does things like invalidate the shader cache, so that's also significantly more expensive than setting a uniform value.
If you're on a modern GL implementation (GL 4.1+ or one that implements GL_ARB_separate_shader_objects) you can even set uniform values without binding a GLSL program at all, simply by calling glProgramUniform* (...)
I am most concerned with the fact that you think you need to call glGetUniformLocation (...) each time you set a uniform's value. The only time the location of a uniform in a GLSL program changes is when you link it. Assuming you don't constantly re-link your GLSL program, you only need to query those locations once and store them persistently.
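A minimal sketch of that pattern (the uniform names are placeholders):
// Query once, right after linking:
GLint loc_mvp  = glGetUniformLocation(prog, "u_mvp");
GLint loc_tex0 = glGetUniformLocation(prog, "u_tex0");
// Per frame, no further lookups needed:
glUseProgram(prog);
glUniformMatrix4fv(loc_mvp, 1, GL_FALSE, mvp);
glUniform1i(loc_tex0, 0);
// Or, with GL 4.1 / ARB_separate_shader_objects, without binding the program first:
glProgramUniformMatrix4fv(prog, loc_mvp, 1, GL_FALSE, mvp);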

OpenGL version to start with (as of late 2014)

I know nothing about OpenGL, but it turns out it might be needed for something that I'm working on (figured this out simply because I want some visual effects that need fast drawing - hardware accelerated).
So I'm very much troubled about which version of OpenGL to start coding for as of the current date? (September 2014 for future readers).
As of now, I have read OpenGL 3.x is the most available (read as: supported) version, even on very old hardware (i.e. designed for Windows XP). But, what if I develop for OpenGL 4.x, will my application be backward-compatible with 3.x supported hardware? Or does it just break there and only run on 4.x (and future?) supported hardware?
My use of OpenGL is not for a videogame or anything similar, so I might never need advanced functionality (as of now I can't name any to give you an example of what I may never use; I hope experienced OpenGL people will understand my point). However, I don't think this is going to lower the driver support required in any way, am I right?
Please note that I might have used the wrong terminology and I want to start learning OpenGL which puts me in a position not so good to understand whether I might need this or that feature and/or possible requirements - I am in the need of real advice to get started and figure my way out.
I'm sure this answer skips many, many details and exceptions (so take it with a grain of salt, and if you disagree or have something to add, leave a comment :P), but... Each OpenGL version mostly just adds features, so if you start using a new feature it might limit the hardware you can run your app on (and so the question attracted a downvote, when it should instead have attracted an explanation).
Basic drawing pretty much remains a constant, with one very big exception: all the stuff that was deprecated in version 3 and removed in 3.1.
Old, < GL 3:
We used to be able to do things like:
glEnable(GL_LIGHTING); //fixed-pipeline lighting. replaced with shaders
glEnable(GL_TEXTURE_2D); //fixed texturing
glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glTranslatef(...); //builtin matrices
glBegin(GL_TRIANGLES); glVertex3f(...) ...; glEnd(); //immediate mode with per-vertex calls
glVertexPointer(..., myArray); glDrawArrays(...); //client-side vertex arrays (fixed-function, no VBO)
But even then we still had GLSL shaders and VBOs. There's a fair amount of overlap and the GL3 deprecation didn't instantly replace everything:
glBufferData(...); //buffer mesh data to bound VBO
...
glVertexAttribPointer(...); //attach data to myCustomVar (either from a VBO or a client-side array)
...
attribute vec3 myCustomVar;
varying vec3 myVarying;
...
gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
...
gl_FragColor = vec4(1, 0, 0, 1);
New, >= GL 3:
The new way of drawing removes immediate mode and a lot of these built-in features, so you do a lot from scratch; it also adds a few features and some syntax changes. You have to use VBOs and shaders:
glBufferData(...); //buffer mesh data
...
glVertexAttribPointer(...); //attach VBOs, but have to set your own position etc too
...
#version 150 //set GLSL version number
out vec3 myVarying; //replaces "varying"
gl_Position = myProjection * myModelview * myPosition;
...
in vec3 myVarying;
out vec4 fragColour; //have to define your own fragment shader output
...
fragColour = vec4(1, 0, 0, 1);
To answer your question, and this is completely my opinion, I'd start learning GLES 2.0 (afaik pretty much a subset of GL3) because it's so well supported right now. GL2/immediate mode certainly doesn't hurt as a learning step, and there's still quite a lot that isn't deprecated. You can always just use GL4 features if you find you need them, but as noted, the basics really only changed between the versions above. Even if you use deprecated features, you have to go out of your way to get a pure GL 3.2 (for example) core context that disallows certain calls/features. I'm not advocating it, but it's entirely possible to mix very old with very new, and most drivers seem to allow it.
TLDR: 1. use VBOs, 2. use shaders. Just for learning, don't bother with an explicit version context.
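To make the "use VBOs, use shaders" advice concrete, the modern drawing path boils down to something like this (heavily abbreviated sketch; program, vertices and vertexCount are assumed to exist already):
// One-time setup:
GLuint vao, vbo;
glGenVertexArrays(1, &vao);                 // a VAO is mandatory in a core context
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
GLint pos = glGetAttribLocation(program, "myPosition");   // name matches the snippets above
glEnableVertexAttribArray(pos);
glVertexAttribPointer(pos, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
// Every frame:
glUseProgram(program);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);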
See Also:
http://www.opengl.org/wiki/History_of_OpenGL
http://www.opengl.org/wiki/Core_And_Compatibility_in_Contexts
List of deprecated OpenGL functionalities

Fixed pipeline to glsl gl_Normal / vnormal / v_normal

The more I read, the more confused I become. I am trying to learn how to get from the old OpenGL 1 fixed pipeline to modern GL. I have learned a lot already, but there is one thing I am still unsure about. In old tutorials it's just used as gl_Normal; in newer ones it's often referred to as vnormal or v_normal.
In older versions I didn't have to take care of that; in the fixed pipeline it seems to be provided automatically. So where do I get this, or rather, how do I calculate it? Must it be done in C++, or can it be done in the vertex or fragment shader as well, from the vertex position (referred to as gl_Vertex in old tutorials)?
A sample or pseudo code would be nice.
Normals never came automatically. Even with the fixed pipeline, you had to provide them yourself.
gl_Normal was a pre-defined vertex shader attribute that came from glNormalPointer (or glNormal* calls). These were deprecated in GL 3.0 and removed from the core profile in 3.1, so now all attributes have to come from glVertexAttribPointer: there are no predefined attributes, and the programmer has to bind every array to an attribute location themselves.
So the normal, or whatever it is called, is just a named attribute. You have to get its location (with glGetAttribLocation) and assign an array containing the vertex normals (normals to the surface at the position of each vertex) to that location.
As for calculating normals: it is trivial for flat surfaces (just a cross product of two triangle edges), but for smooth shading the normals have to be averaged across the neighbouring faces. That is usually done in a 3D mesh editor and simply exported to a file.
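Since the question asked for sample code, here is a rough sketch of both halves: computing a flat normal on the CPU (using GLM-style vectors) and binding the resulting array as an attribute. The attribute name and the variable names are placeholders:
// Flat (per-face) normal from one triangle's vertices v0, v1, v2:
glm::vec3 e1 = v1 - v0;
glm::vec3 e2 = v2 - v0;
glm::vec3 n  = glm::normalize(glm::cross(e1, e2)); // store the same normal for all three vertices
// Then bind the normal array like any other attribute:
GLint loc = glGetAttribLocation(program, "v_normal");
glEnableVertexAttribArray(loc);
glVertexAttribPointer(loc, 3, GL_FLOAT, GL_FALSE, 0, normals); // client-side array or VBO offset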

Usage of custom and generic vertex shader attributes in OpenGL and OpenGL ES

Since the built-in vertex attributes (gl_Vertex, gl_Normal, etc.) are deprecated in OpenGL, I tried to rewrite my vertex shader using only custom attributes. And it didn't work for me. Here is the vertex shader:
uniform mat4 uProjectionMatrix;
uniform mat4 uWorldViewMatrix;
attribute vec3 aPosition;
attribute vec3 aNormal;
varying vec4 vColor;
vec4 calculateLight(vec4 normal) {
    // ...
}
void main(void) {
    gl_Position = uProjectionMatrix * uWorldViewMatrix * vec4(aPosition, 1);
    vec4 rotatedNormal = normalize(uWorldViewMatrix * vec4(aNormal, 0));
    vColor = calculateLight(rotatedNormal);
}
This works perfectly in OpenGL ES 2.0. However, when I try to use it with OpenGL I see a black screen. If I change aNormal to the built-in gl_Normal everything works fine as well (note that aPosition works fine in both contexts and I don't have to use gl_Vertex).
What am I doing wrong?
I use RenderMonkey to test shaders, and I've set up stream mapping in it with the appropriate attribute names (aPosition and aNormal). Maybe it has something to do with attribute indices, because I have all of them set to 0? Also, here's what the RenderMonkey documentation says about setting custom attribute names in "Stream Mapping":
The “Attribute Name” field displays the default name that can be used in the shader editor to refer to that stream. In an OpenGL ES effect, the changed name should be used to reference the stream; however, in a DirectX or OpenGL effect, the new name has no affect in the shader editor.
I wonder whether this issue is specific to RenderMonkey or to OpenGL itself? And why does aPosition still work then?
Attribute indices should be unique. It is possible to tell OpenGL to use specific indices via glBindAttribLocation before linking the program. Either way, the usual approach is to query the index with glGetAttribLocation. It sounds like RenderMonkey lets you choose the indices, in which case have you tried making them distinct?
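For example, a sketch of both options (program here is your linked program object and normalData your normal array):
// Option 1: pick the indices yourself, before linking:
glBindAttribLocation(program, 0, "aPosition");
glBindAttribLocation(program, 1, "aNormal");
glLinkProgram(program);
// Option 2: let the linker choose, then ask where they ended up:
GLint posLoc  = glGetAttribLocation(program, "aPosition");
GLint normLoc = glGetAttribLocation(program, "aNormal");
glEnableVertexAttribArray(normLoc);
glVertexAttribPointer(normLoc, 3, GL_FLOAT, GL_FALSE, 0, normalData);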
I've seen fixed-function rendering cross over to vertex attributes before, where glVertexPointer can wind up binding to the first attribute if it's left unbound (I don't know if this is reproducible any more).
I also see some strange things when experimenting with attributes and fixed function names. Without calling glBindAttribLocation, I compile the following shader:
attribute vec4 a;
attribute vec4 b;
void main()
{
    gl_Position = gl_Vertex + vec4(gl_Normal, 0) + a + b;
}
and I get the following locations (via glGetAttribLocation):
a: 1
b: 3
gl_Vertex: -1
gl_Normal: -1
When experimenting, it seems the use of gl_Vertex takes up index 0 and gl_Normal takes index 2 (even if it's not reported). I wonder if throwing in a padding attribute between aPosition and aNormal (don't forget to use it in the output or it'll be compiled away) makes it work.
In this case it's possible the position data is simply bound to location zero last. However, the black screen with aNormal points to nothing being bound, in which case it will always read {0, 0, 0}. This is a little less consistent: if the normal were bound to the same data as the position, you'd expect some colour, even if not the correct colour, since the normal would contain the position data.
Applications are allowed to bind more than one user-defined attribute variable to the same generic vertex attribute index. This is called aliasing, and it is allowed only if just one of the aliased attributes is active in the executable program, or if no path through the shader consumes more than one attribute of a set of attributes aliased to the same location.
My feeling is then that RenderMonkey is using just glVertexPointer/glNormalPointer instead of generic attributes, which I would have thought would bind both normal and position to either the normal or position data, since you say both indices are zero.
in a DirectX or OpenGL effect, the new name has no affect in the shader editor
Maybe this means "named streams" are simply not available in the non-ES OpenGL version?
This is unrelated, but in the more recent OpenGL-GLSL versions, a #version number is needed and attributes use the keyword in.

Fixed-Function Vs. Shaders - help understand the conceptual differences

My background: I first started experimenting with OpenGL some months ago, for no particular purpose, just fun. I started reading the OpenGL redbook, and got as far as making a planetary system with a lot of different lighting. That lasted for a month, and my interest for openGL went away. It awoke again a week or so ago, and as I gathered from some SO posts, the redbook is outdated and the OpenGL Superbible is a better source for learning. So I started reading it. I like the concept of shaders but there's a real mess going on in my brain because of transition from my old memories of the fixed pipeline and the new concept of shaders.
Question: I would like to write some statements which I think are true and I am asking OpenGL experts to verify them (i.e. whether I am understanding correctly, not quite correctly or absolutely incorrectly). So...
1) If we don't use any shader program, nothing changes. We have current color, current normal, current transformation matrix, current everything, and as soon as we call glVertex**(...) these current values are taken and the vertex is fed to ... I don't know what. The fact is that it's transformed with the current matrix, the current color and normal are applied to it etc.
2) As soon as we use a shader program, all the above stops working. That is, glColor, glRotate etc. make no sense (do they?). I mean, glColor still does set the current color, glRotate still multiplies the current matrix by the rotation matrix, but these aren't used at all. Instead, we feed vertex attributes by glVertexAttrib. Which attribute means what is totally dependent on our vertex shader and the in variable binding. We also find and set the values of the uniforms and then call glVertex, and the shader is executed (I don't know whether immediately or after glEnd() is called). The actual vertex and fragment processing is done entirely manually in the shader program.
3) Shaders don't add anything to depth testing. That is, I don't need to take care of it in a shader. I just call glEnable(GL_DEPTH_TEST). Neither is face culling affected.
4) Alpha blending and antialiasing need not be taken care of in shaders. glEnable calls will suffice.
5) Is it a good idea to use gluPerspective, glRotate, glPushMatrix and other matrix functions, and then retrieve the current matrix and feed it as a uniform to a shader? Thus there won't be any need in using a 3rd party matrix library.
1) It depends on what version of OpenGL you're talking about. Up through OpenGL 3.0, all the fixed functionality is still present, so yes, if you decide to just use fixed functionality it continues to work like it always did. Starting from 3.0, quite a bit of the fixed pipeline was deprecated, and as of 3.1 it disappears completely from the core profile. With those versions, you no longer really have the option to just use the fixed pipeline.
2) Again, it depends. For example, up through OpenGL 3.0, glColor is still supported, even when you use a shader. The difference is that instead of automatically being applied to what gets drawn, it's supplied to your shader, which can use it unchanged, modify it as it sees fit, or ignore it completely. So your vertex shader reads the current color as gl_Color and passes it on through gl_FrontColor/gl_BackColor, and the fragment shader writes the actual fragment color to gl_FragColor. If you're using OpenGL 3.1 or newer, however, glColor (for example) just no longer exists -- a color will be just another value you supply to your shader like you could/would anything else.
3) That's correct: shaders don't add anything to depth testing, and you still just call glEnable(GL_DEPTH_TEST). Face culling is likewise unaffected by shaders. (A fragment shader can optionally write gl_FragDepth to override the depth value used by the test, but that's opt-in; compute shaders, which arrived in 4.3, play no part in depth testing.)
4) Yes, you can still use built-in alpha blending; glEnable(GL_BLEND) plus glBlendFunc works as before. Depending on your hardware, you may also want to consider using the GL_ARB_draw_buffers_blend extension (which is core as of OpenGL 4.0, if I recall correctly).
5) Yet again, it depends on the version of OpenGL you're talking about. Current core-profile OpenGL completely eliminates the built-in matrix stack, so you have no choice but to use some other matrix library. Older versions supplied things like gl_ModelViewMatrix and gl_NormalMatrix to your shader as built-in uniforms, so you could go that route if you chose.
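For completeness, the compatibility-profile route the question describes would look roughly like this (a sketch; the uniform names are placeholders):
// Build the matrices with the old fixed-function calls...
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(angle, 0.0f, 1.0f, 0.0f);
// ...read them back...
GLfloat modelview[16], projection[16];
glGetFloatv(GL_MODELVIEW_MATRIX, modelview);
glGetFloatv(GL_PROJECTION_MATRIX, projection);
// ...and hand them to the shader as ordinary uniforms:
glUseProgram(program);
glUniformMatrix4fv(glGetUniformLocation(program, "u_modelview"), 1, GL_FALSE, modelview);
glUniformMatrix4fv(glGetUniformLocation(program, "u_projection"), 1, GL_FALSE, projection);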
2) In modern OpenGL, there is no glColor, glBegin, glVertex, glRotate etc. so they don't make sense.
5) In modern OpenGL there are no built-in matrices, so you have to use a 3rd party library or write your own. So to answer your question, no, it's not a good idea.