I've been having trouble getting the gl_VertexID built-in vertex index, declared with in, to work in Three.js.
I'm not sure why, as the documentation says it works in all versions of OpenGL:
http://www.opengl.org/sdk/docs/manglsl/xhtml/gl_VertexID.xml
I'm using this vertex shader:
uniform int freqData[64];
uniform int fftSize;
in int gl_VertexID;
void main() {
    vec3 norm = normalize(position);
    int modFFT = mod(gl_VertexID, fftSize);
    vec3 newPosition = norm * float(freqData[modFFT]);
    gl_Position = projectionMatrix * modelViewMatrix * vec4( newPosition, 1.0 );
}
The error I receive is:
ERROR: 0:68: 'in' : syntax error
It seems to have a problem with the in declaration, and it doesn't complain about anything else (the error console is able to detect multiple compilation errors).
I'd very much appreciate your help; I'm working with yesterday's Three.js build.
As far as I'm aware, in WebGL you can't access built-in vertex indices.
However, you should be able to emulate it by providing your own custom attribute stream, set to values equivalent to the built-in index stream.
In three.js there are no integer custom attributes implemented yet, so you would need to use a float attribute.
Check the "displacement" attribute in this example:
http://mrdoob.github.com/three.js/examples/webgl_custom_attributes.html
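As a rough sketch of that emulation (the attribute name aVertexIndex is my own invention, and the exact three.js attribute setup depends on your build), the shader from the question would become:
uniform int freqData[64];
uniform int fftSize;
// Custom float attribute, filled on the CPU side with 0.0, 1.0, 2.0, ...
// (one value per vertex), standing in for the missing gl_VertexID.
attribute float aVertexIndex;
void main() {
    vec3 norm = normalize(position);
    // mod() operates on floats in GLSL ES 1.00, so wrap around in float
    int modFFT = int(mod(aVertexIndex, float(fftSize)));
    vec3 newPosition = norm * float(freqData[modFFT]);
    gl_Position = projectionMatrix * modelViewMatrix * vec4(newPosition, 1.0);
}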
First, there's no need to declare gl_VertexID; either it is present (thanks to an extension, because WebGL, which is built on OpenGL ES 2.0, does not provide it as a core feature) or it is not.
Second, WebGL does not use the in and out syntax; that's for desktop OpenGL 3.0 and above. GLSL ES uses the older attribute syntax. So even if you needed to declare gl_VertexID (and again, you do not), you would have to call it an attribute, not an in.
Related
Most GLSL shaders use an attribute for the color in the vertex shader, which is forwarded as a varying to the fragment shader. Like this:
attribute vec4 position;
attribute vec4 color;
uniform mat4 mvp;
varying vec4 destinationColor;
void main(){
    destinationColor = color;
    gl_Position = mvp * position;
}
Setting the color can be done with glVertexAttribPointer() to pass one color per vertex or with glVertexAttrib4fv() to pass a global color for all vertices. I am trying to understand the difference from the predefined variable gl_Color in the vertex shader (if there is any difference at all), i.e.
attribute vec4 position;
uniform mat4 mvp;
varying vec4 destinationColor;
void main(){
    destinationColor = gl_Color;
    gl_Position = mvp * position;
}
and using glColorPointer() to pass one color per vertex or glColor4fv() to use a global color for all vertices. To me the second shader looks better (= more efficient?) because it uses fewer attributes. But all tutorials and online resources use the first approach, so I wonder if I missed anything or if there is no difference at all.
What is better practice when writing GLSL shaders?
To me the second shader looks better (= more efficient?) because it uses fewer attributes.
It does not use fewer attributes. It just uses fewer explicit attribute declarations. All of the work needed to get that color value to OpenGL is still there. It's still being done. The hardware is still fetching data from a buffer object or getting it from the glColor context value or whatever.
You just don't see it in your shader's text. But just because you don't see it doesn't mean that it happens for free.
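For instance, the two "global color" paths below end up doing equivalent work (colorLoc is a hypothetical location, assumed to have been queried earlier with glGetAttribLocation):
const GLfloat red[4] = {1.0f, 0.0f, 0.0f, 1.0f};

/* Built-in path (compatibility profile only): sets the current gl_Color. */
glColor4fv(red);

/* Generic path: sets the current value of a user-defined attribute. */
glVertexAttrib4fv(colorLoc, red);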
User-defined attributes are preferred for the following reasons:
User-defined attributes make it clear how many resources your shaders are using. If you want to know how many attributes you need to provide to a shader, just look at the global declarations. But with predefined attributes, you can't do this; you have to scan through the entire vertex shader for any gl_* names that name a predefined attribute.
User-defined attributes can do more things. If you want to pass integer values as integers to the vertex shader, you must use a user-defined attribute. If you need to pass a double-precision float to the vertex shader, again, a predefined attribute cannot help you. (See the sketch after this list.)
Predefined attributes were removed from core OpenGL contexts. OSX, for example, does not allow the compatibility profile. You can still use OpenGL 2.1, but if you want to use any OpenGL version 3.2 or greater on OSX, you cannot use removed functionality. And the built-in vertex attributes were removed in OpenGL 3.1.
Predefined attributes were never a part of OpenGL ES 2.0+. So if you want to write shaders that can work in OpenGL ES, you again cannot use them.
So basically, there's no reason to use them these days.
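As a sketch of the integer point above (boneIndices is a made-up attribute name), a user-defined attribute can keep integers as integers via glVertexAttribIPointer, which has no predefined-attribute equivalent:
/* GLSL 3.30 side:  in ivec4 boneIndices; */
GLint loc = glGetAttribLocation(program, "boneIndices");
glEnableVertexAttribArray(loc);
/* Note the I in the name: values are passed through as integers rather
   than being converted to floats. */
glVertexAttribIPointer(loc, 4, GL_INT, 0, (void *)0);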
If I remember correctly, gl_Color is a deprecated remnant of the old-style API without VAO/VBO, using glBegin() ... glEnd(). If you go to the core profile there is no gl_Color anymore, so I assume you are using an old OpenGL version or the compatibility profile.
If you try to use gl_Color in a core profile (for example 4.00) you get:
0(35) : error C7616: global variable gl_Color is removed after version 140
which means gl_Color was removed as of GLSL 1.40.
It is not entirely a matter of performance; it is a change in the graphics rendering software architecture, or in the hierarchy of GL calls if you like.
I want to organize depth for objects in OpenGL ES 2.0 so that I can specify one object to be in front of other object(s) (just by sending a uniform variable). If I want to draw two lines separately, I just specify which one is closer.
I work with Qt and its OpenGL support. When I try to use gl_FragDepth, I get a linking error saying that gl_FragDepth is an undeclared identifier.
I also tried, in the vertex shader, something like gl_Position.z = depthAttr;, where depthAttr is a uniform variable for the object being rendered.
Can someone tell me what else I can do, or what I did wrong? Is there a preferred way?
The pre-defined fragment shader output variable gl_FragDepth is not supported in ES 2.0. It is only available in full OpenGL, and in ES 3.0 or later.
If you really want to specify the depth with a uniform variable, you need to have the uniform variable in the vertex shader, and use it to calculate gl_Position. This approach from your question looks fine:
uniform float depthAttr;
...
gl_Position = ...;
gl_Position.z = depthAttr;
A much more standard approach is to make the desired depth part of your position coordinates. If you currently use 2D coordinates for drawing your lines, simply add the desired depth as a third coordinate, and change the position attribute in the vertex shader from vec2 to vec3. The depth will then arrive in the shader as part of the attribute, and there's no need for additional uniform variables.
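A minimal sketch of that approach (assuming a single combined matrix uniform called mvp):
attribute vec3 position;   // x, y as before; z carries the desired depth
uniform mat4 mvp;
void main() {
    gl_Position = mvp * vec4(position, 1.0);
}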
OpenGL ES 2.0 supports an optional extension, EXT_frag_depth, which allows the fragment shader to assign gl_FragDepthEXT.
See the specification, but of course, be prepared to fall back when it's not available.
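If the extension is reported in the GL_EXTENSIONS string, the fragment shader side looks roughly like this (uDepth is a made-up uniform name):
#extension GL_EXT_frag_depth : require
precision mediump float;
uniform float uDepth;
void main() {
    gl_FragColor = vec4(1.0);
    gl_FragDepthEXT = uDepth;   // output provided by EXT_frag_depth
}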
Since the built-in vertex attributes are deprecated in OpenGL, I tried to rewrite my vertex shader using only custom attributes. And it didn't work for me. Here is the vertex shader:
uniform mat4 uProjectionMatrix;
uniform mat4 uWorldViewMatrix;
attribute vec3 aPosition;
attribute vec3 aNormal;
varying vec4 vColor;
vec4 calculateLight(vec4 normal) {
    // ...
}
void main(void) {
    gl_Position = uProjectionMatrix * uWorldViewMatrix * vec4(aPosition, 1.0);
    vec4 rotatedNormal = normalize(uWorldViewMatrix * vec4(aNormal, 0.0));
    vColor = calculateLight(rotatedNormal);
}
This works perfectly in OpenGL ES 2.0. However, when I try to use it with desktop OpenGL I see a black screen. If I change aNormal to the built-in gl_Normal everything works fine as well (note that aPosition works fine in both contexts and I don't have to use gl_Vertex).
What am I doing wrong?
I use RenderMonkey to test shaders, and I've set up stream mapping in it with the appropriate attribute names (aPosition and aNormal). Maybe it has something to do with attribute indices, because I have all of them set to 0? Also, here's what the RenderMonkey documentation says about setting custom attribute names in "Stream Mapping":
The “Attribute Name” field displays the default name that can be used in the shader editor to refer to that stream. In an OpenGL ES effect, the changed name should be used to reference the stream; however, in a DirectX or OpenGL effect, the new name has no effect in the shader editor.
I wonder whether this issue is specific to RenderMonkey or to OpenGL itself? And why does aPosition still work, then?
Attribute indices should be unique. It is possible to tell OpenGL to use specific indices via glBindAttribLocation before linking the program; otherwise, the usual approach is to query the index with glGetAttribLocation after linking. It sounds like RenderMonkey lets you choose, in which case have you tried making them separate?
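Outside RenderMonkey, giving each attribute its own index looks roughly like this (using the names from the question):
/* Assign explicit, unique indices before linking... */
glBindAttribLocation(program, 0, "aPosition");
glBindAttribLocation(program, 1, "aNormal");
glLinkProgram(program);

/* ...or let the linker choose and query the result afterwards. */
GLint normalLoc = glGetAttribLocation(program, "aNormal");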
I've seen fixed-function rendering cross over to vertex attributes before, where glVertexPointer can wind up binding to the first attribute if it's left unbound (I don't know if this is reproducible any more).
I also see some strange things when experimenting with attributes and fixed-function names. Without calling glBindAttribLocation, I compile the following shader:
attribute vec4 a;
attribute vec4 b;
void main()
{
    gl_Position = gl_Vertex + vec4(gl_Normal, 0) + a + b;
}
and I get the following locations (via glGetActiveAttrib):
a: 1
b: 3
gl_Vertex: -1
gl_Normal: -1
When experimenting, it seems the use of gl_Vertex takes up index 0 and gl_Normal takes index 2 (even if it's not reported). I wonder if throwing in a padding attribute between aPosition and aNormal (don't forget to use it in the output or it'll be compiled away) makes it work.
In this case it's possible the position data is simply bound to location zero last. However, the black screen with aNormal points to nothing being bound (in which case it will always be {0, 0, 0}). This is a little less consistent: if the normal was bound to the same data as the position you'd expect some colour, if not the correct colour, since the normal would carry the position data.
Applications are allowed to bind more than one user-defined attribute variable to the same generic vertex attribute index. This is called aliasing, and it is allowed only if just one of the aliased attributes is active in the executable program, or if no path through the shader consumes more than one attribute of a set of attributes aliased to the same location.
My feeling, then, is that RenderMonkey is using just glVertexPointer/glNormalPointer instead of generic attributes, which I would have thought would bind both the normal and position to either the normal or position data, since you say both indices are zero.
in a DirectX or OpenGL effect, the new name has no effect in the shader editor
Maybe this means "named streams" are simply not available in the non-ES OpenGL version?
This is unrelated, but in more recent OpenGL/GLSL versions, a #version number is needed and attributes use the keyword in.
I was having extreme trouble getting a vertex shader of mine to run under OpenGL 3.3 core on an ATI driver:
#version 150
uniform mat4 graph_matrix, view_matrix, proj_matrix;
uniform bool align_origin;
attribute vec2 graph_position;
attribute vec2 screen_position;
attribute vec2 texcoord0;
attribute vec4 color;
varying vec2 texcoord0_px;
varying vec4 color_px;
void main() {
    // Pick the position or the annotation position
    vec2 pos = graph_position;
    // Transform the coordinates
    pos = vec2(graph_matrix * vec4(pos, 0.0, 1.0));
    if( align_origin )
        pos = floor(pos + vec2(0.5, 0.5)) + vec2(0.5, 0.5);
    gl_Position = proj_matrix * view_matrix * vec4(pos + screen_position, 0.0, 1.0);
    texcoord0_px = texcoord0;
    color_px = color;
}
I used glVertexAttrib4f to specify the color attribute, and turned the attribute array off. According to page 33 of the 3.3 core spec, that should work:
If an array corresponding to a generic attribute required by a vertex shader is not enabled, then the corresponding element is taken from the current generic attribute state (see section 2.7).
But (most of the time, depending on the profile and driver) the shader either didn't run at all or used black if I accessed the disabled color attribute. Replacing it with a constant got it to run.
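In code, the disabled-array setup amounted to roughly this (a sketch reconstructed from the description above; the actual color values don't matter):
GLint colorLoc = glGetAttribLocation(program, "color");
glDisableVertexAttribArray(colorLoc);                 /* no array for this attribute */
glVertexAttrib4f(colorLoc, 1.0f, 1.0f, 1.0f, 1.0f);  /* current value should be used instead */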
Much searching yielded this page of tips regarding WebGL, which had the following to say:
Always have vertex attrib 0 array enabled. If you draw with vertex attrib 0 array disabled, you will force the browser to do complicated emulation when running on desktop OpenGL (e.g. on Mac OSX). This is because in desktop OpenGL, nothing gets drawn if vertex attrib 0 is not array-enabled. You can use bindAttribLocation() to force a vertex attribute to use location 0, and use enableVertexAttribArray() to make it array-enabled.
Sure enough, not only was the color attribute assigned to index zero, but if I force-bound a different, array-enabled attribute to zero, the code ran and produced the right color.
I can't find any other mention of this rule anywhere, and certainly not on ATI hardware. Does anyone know where this rule comes from? Or is this a bug in the implementation that the Mozilla folks noticed and warned about?
tl;dr: this is a driver bug. Core OpenGL 3.3 should allow you to not use attribute 0, but the compatibility profile does not, and some drivers don't implement that switch correctly. Just make sure to use attribute 0 to avoid any problems.
Actual Content:
Let's have a little history lesson in how the OpenGL specification came to be.
In the most ancient days of OpenGL, there was exactly one way to render: immediate mode (i.e., glBegin/glVertex/glColor/glEtc/glEnd). Display lists existed, but they were always defined as simply sending the captured commands again. So while implementations didn't actually make all of those function calls, they behaved as if they did.
In OpenGL 1.1, client-side vertex arrays were added to the specification. Now remember: the specification is a document that specifies behavior, not implementation. Therefore, the ARB simply defined that client-side arrays worked exactly like making immediate mode calls, using the appropriate accesses to the current array pointers. Obviously implementations wouldn't actually do that, but they behaved as if they did.
Buffer-object-based vertex arrays were defined in the same way, though with language slightly complicated by pulling from server storage instead of client storage.
Then something happened: ARB_vertex_program (not ARB_vertex_shader. I'm talking about assembly programs).
See, once you have shaders, you want to start being able to define your own attributes instead of using the built-in ones. And that all made sense. However, there was one problem.
Immediate mode works like this:
glBegin(...);
    glTexCoord(...);
    glColor(...);
    glVertex(...);
    glTexCoord(...);
    glColor(...);
    glVertex(...);
    glTexCoord(...);
    glColor(...);
    glVertex(...);
glEnd();
Every time you call glVertex, this causes all of the current attribute state to be used for a single vertex. All of the other immediate mode functions simply set values into the context; this function actually sends the vertex to OpenGL to be processed. That's very important in immediate mode. And since every vertex must have a position in fixed-function land, it made sense to use this function to decide when a vertex should be processed.
Once you're no longer using OpenGL's fixed-function vertex semantics, you have a problem in immediate mode. Namely, how do you decide when to actually send the vertex?
By convention, they stuck this onto attribute 0. Therefore, all immediate mode rendering must use either attribute 0 or glVertex to send a vertex.
However, because all other rendering is based on the language of immediate mode rendering, all other rendering has the same limitations of immediate mode rendering. Immediate mode requires attribute 0 or glVertex, and therefore so too do client-side arrays and so forth. Even though it doesn't make sense for them to, they need it because of how the specification defines their behavior.
Then OpenGL 3.0 came around. They deprecated immediate mode. Deprecated does not mean removed; the specification still had those functions in it, and all vertex array rendering was still defined in terms of them.
OpenGL 3.1 actually ripped out the old stuff. And that posed a bit of a language problem. After all, every array drawing command was always defined in terms of immediate mode. But once immediate mode no longer exists... how do you define it?
So they had to come up with new language for core OpenGL 3.1+. While doing so, they removed the pointless restriction on needing to use attribute 0.
But the compatibility profile did not.
Therefore, the rule for OpenGL 3.2+ is this: if you have a core OpenGL profile, you do not have to use attribute 0; if you have a compatibility OpenGL profile, you must use attribute 0 (or glVertex). That's what the specification says.
But that's not what implementations implement.
In general, NVIDIA never cared much for the "must use attribute 0" rule and just does it how you would expect, even in compatibility profiles, thus violating the letter of the specification. AMD is generally more likely to stick to the specification; however, they forgot to implement the core behavior correctly. So NVIDIA is too permissive on compatibility, and AMD is too restrictive on core.
To work around these driver bugs, simply always use attribute 0.
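In this case that means pinning a real, array-enabled attribute (from the shader above) to index 0 before linking:
glBindAttribLocation(program, 0, "graph_position");
glLinkProgram(program);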
BTW, if you're wondering, NVIDIA won. In OpenGL 4.3, the compatibility profile uses the same wording for its array rendering commands as core. Thus, you're allowed to not use attribute 0 on both core and compatibility.
I started moving one of my projects away from the fixed pipeline, so to try things out I wrote a shader that would simply take the OpenGL matrices and transform the vertex with them; once I knew that worked, I would start calculating my own matrices.
I started out with this shader for normal fixed pipeline:
void main(void)
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
I then changed it to this:
uniform mat4 model_matrix;
uniform mat4 projection_matrix;
void main(void)
{
    gl_Position = model_matrix * projection_matrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
I then retrieve the OpenGL matrices and pass them to the shader with this code:
[material.shader bindShader];
GLfloat modelmat[16];
GLfloat projectionmat[16];
glGetFloatv(GL_MODELVIEW_MATRIX, modelmat);
glGetFloatv(GL_PROJECTION_MATRIX, projectionmat);
glUniformMatrix4fv([material.shader getUniformLocation:"model_matrix"], 1, GL_FALSE, modelmat);
glUniformMatrix4fv([material.shader getUniformLocation:"projection_matrix"], 1, GL_FALSE, projectionmat );
... Draw Stuff
For some reason this does not draw anything. (I am 95% positive those matrices are correct before I pass them, by the way.) Any ideas?
The problem was that my order of matrix multiplication was wrong. I was not aware that matrix multiplication is not commutative.
The correct order should be:
projection * modelview * vertex
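Applied to the shader above, the fix is just swapping the two matrices:
uniform mat4 model_matrix;
uniform mat4 projection_matrix;
void main(void)
{
    // Matrix multiplication is not commutative: apply model-view first, projection last.
    gl_Position = projection_matrix * model_matrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}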
Thanks to ltjax and doug65536
For the matrix math, try using an external library, such as GLM. They also have some basic examples on how to create the necessary matrices and do the projection * view * model transform.
Use OpenGL 3.3's shading language. OpenGL 3.3 is roughly comparable to DirectX10, hardware-wise.
Don't use the deprecated functionality. Almost everything in your first void main example is deprecated. You must explicitly declare your inputs and outputs if you expect to use the high-performance code path of the drivers. Deprecated functionality is also far more likely to be full of driver bugs.
Use the newer, more explicit style of declaring inputs and outputs and set them in your code. It really isn't bad. I thought this would be ugly but it actually was pretty easy (I wish I had just done it earlier).
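A sketch of that explicit style for the shader above (the locations and the v_texcoord name are illustrative):
#version 330 core
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 texcoord;
uniform mat4 projection_matrix;
uniform mat4 model_matrix;
out vec2 v_texcoord;
void main()
{
    gl_Position = projection_matrix * model_matrix * position;
    v_texcoord = texcoord;
}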
FYI, the last time I looked at a lowest common denominator for OpenGL (2012), it was OpenGL 3.3. Practically all video cards from AMD and NVidia that have any gaming capability will have OpenGL 3.3. And they have for a while, so any code you write now for OpenGL 3.3 will work on a typical low-end or better GPU.