When I compile GLSL shaders on my desktop computer, which contains an NVIDIA graphics card, the line numbers in the error messages are wrong. They always seem to be the actual line number plus three.
On my laptop computer, which contains an ATI graphics card, the line numbers are correct.
Take the following simple GLSL fragment shader for example. It contains an error in line 6, where texture2D is written with a double t.
uniform sampler2D dirt;
varying vec3 pos;
void main()
{
vec3 d = ttexture2D(dirt, pos.xy).rgb;
gl_FragColor = vec4(d, 1.0);
}
The NVIDIA compiler gives me the following error message:
0(9) : error C1008: undefined variable "ttexture2D"
The ATI compiler displays the correct line numbers:
Fragment shader failed to compile with the following errors:
ERROR: 1:6: error(#202) No matching overloaded function found ttexture2D
ERROR: 1:6: error(#216) Vector field selection out of range 'rgb'
ERROR: error(#273) 2 compilation errors. No code generated
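One way to probe how a compiler counts lines is the #line directive (a sketch; note that whether the line following #line 100 is reported as 100 or 101 itself varies between GLSL spec versions):
#line 100 // subsequent lines are reported relative to 100
vec3 d = ttexture2D(dirt, pos.xy).rgb; // the number reported for this error exposes the compiler's offset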
Is there a reason why the NVIDIA compiler gives me wrong line numbers?
EDIT: I am using NVIDIA driver version 331.58
I ported this sample from g-truc to JOGL and it works, everything fine, everything nice.
But now I am trying to understand exactly what the stream parameter of glDrawTransformFeedbackStream refers to.
Basically, a vec4 position input gets transformed; the varyings to capture are declared with
String[] strings = {"gl_Position", "Block.color"};
gl4.glTransformFeedbackVaryings(transformProgramName, 2, strings, GL_INTERLEAVED_ATTRIBS);
and written as follows:
void main()
{
gl_Position = mvp * position;
outBlock.color = vec4(clamp(vec2(position), 0.0, 1.0), 0.0, 1.0);
}
transform-stream.vert, transform-stream.geom
And then I simply render the transformed objects with glDrawTransformFeedbackStream
feedback-stream.vert, feedback-stream.frag
Now, the docs say:
Specifies the index of the transform feedback stream from which to
retrieve a primitive count.
Cool, so if I bind my feedbackArrayBufferName to index 0 here
gl4.glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, feedbackName[0]);
gl4.glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, feedbackArrayBufferName[0]);
gl4.glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, 0);
I guess the stream argument refers to that binding index (0).
Also the geometry shader outputs (only) the color to index 0. What about the positions? Are they assumed to be already on stream 0? How? From glTransformFeedbackVaryings?
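For reference, the output declarations in the geometry shader look roughly like this (my summary; stream 0 is the default when no stream qualifier is given, if I understand correctly):
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
out Block
{
    vec4 color; // no stream qualifier, so this defaults to stream 0
} outBlock;     // gl_Position is built in and also lives on stream 0 by default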
Therefore, I tried switching all the references to this stream to 1, to check whether they are consistent and really refer to the same index.
So I modified
gl4.glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 1, feedbackArrayBufferName[0]);
and
gl4.glDrawTransformFeedbackStream(GL_TRIANGLES, feedbackName[0], 1);
and also inside the geometry shader
out Block
{
layout(stream = 1) vec4 color;
} outBlock;
But when I run it, I get:
Program link failed: 1
Link info
---------
error: Transform feedback can't capture varyings belonging to different vertex streams in a single buffer.
OpenGL Error(GL_INVALID_OPERATION): initProgram
GlDebugOutput.messageSent(): GLDebugEvent[ id 0x502
type Error
severity High: dangerous undefined behavior
source GL API
msg GL_INVALID_OPERATION error generated. <program> object is not successfully linked, or is not a program object.
when 1455183474230
source 4.5 (Core profile, arb, debug, compat[ES2, ES3, ES31, ES32], FBO, hardware) - 4.5.0 NVIDIA 361.43 - hash 0x225c78a9]
GlDebugOutput.messageSent(): GLDebugEvent[ id 0x502
type Error
severity High: dangerous undefined behavior
source GL API
msg GL_INVALID_OPERATION error generated. <program> has not been linked, or is not a program object.
when 1455183474232
source 4.5 (Core profile, arb, debug, compat[ES2, ES3, ES31, ES32], FBO, hardware) - 4.5.0 NVIDIA 361.43 - hash 0x225c78a9]
Trying to figure out what's going on, I found this:
Output variables in the Geometry Shader can be declared to go to a particular stream. This is controlled via an in-shader specification, but there are certain limitations that affect advanced component interleaving.
No two outputs that go to different streams can be captured by the same buffer. Attempting to do so will result in a linker error. So using multiple streams with interleaved writing requires using advanced interleaving to route attributes to different buffers.
Is that what is happening to me? The position going to index 0 and the color to index 1?
I'd simply like to know whether my hypothesis is correct, and if yes, I want to prove it by changing the stream index.
Therefore I'd also like to know how I can put the position on stream 1 together with the color after my changes. Shall I modify the output of the geometry shader in this way: layout(triangle_strip, max_vertices = 3, xfb_buffer = 1) out;?
Because it complains
Shader status invalid: 0(11) : error C7548: 'layout(xfb_buffer)' requires "#extension GL_ARB_enhanced_layouts : enable" before use
Then I add it and I get
error: Transform feedback can't capture varyings belonging to different vertex streams in a single buffer.
But now they should both be on stream 1, so what am I missing?
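From more reading, my understanding is that xfb_buffer selects the capture buffer while stream selects the vertex stream, and that gl_Position stays on stream 0 unless the built-in block is redeclared. So, to put both outputs on stream 1, my guess is something like the following (an unverified sketch; I am assuming a stream qualifier is accepted on the gl_PerVertex redeclaration):
layout(stream = 1) out gl_PerVertex // move the built-in outputs to stream 1
{
    vec4 gl_Position;
};
out Block
{
    layout(stream = 1) vec4 color; // same stream as gl_Position
} outBlock;
together with EmitStreamVertex(1) / EndStreamPrimitive(1) instead of EmitVertex() / EndPrimitive(). Though if I read the spec correctly, non-zero streams are only allowed when the output primitive type is points, so the triangle_strip output might be yet another problem.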
Moreover, what is the definition of a stream?
I have a shader program with a for loop in the geometry shader. The program links (and operates) fine when the for loop length is small enough. If I increase the length then I get a link error (with empty log). The shaders compile fine in both cases. Here is the geometry shader code (with everything I thought relevant):
#version 330
layout (points) in;
layout (triangle_strip, max_vertices = 256) out;
...
void main()
{
...
for(int i = 0 ; i < 22 ; ++i) // <-- Works with 22, not with 23.
{
...
EmitVertex();
...
EmitVertex();
...
EmitVertex();
...
EmitVertex();
EndPrimitive();
}
}
The specs state: "non-terminating loops are allowed. The consequences of very long or non-terminating loops are platform dependent." Could this be a platform dependent situation (GeForce GT 640)? As the shader code evolved, the max length of the for loop changed (more code -> smaller max), leading me to suspect it has something to do with loop unrolling. Can anyone give me any more info on this issue? (Let me know if you need more code/description.)
One possible reason for failure to link programs containing geometry shaders is the GL_MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS limit. Section 11.3.4.5 "Geometry Shader Outputs" of the OpenGL 4.5 core profile specification states (my emphasis):
There are two implementation-dependent limits on the value of GEOMETRY_VERTICES_OUT; it may not exceed the value of MAX_GEOMETRY_OUTPUT_VERTICES, and the product of the total number of vertices and the sum of all components of all active output variables may not exceed the value of MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS. LinkProgram will fail if it determines that the total component limit would be violated.
The GL guarantees that this total component limit is at least 1024.
You did not paste the full code of your shaders, so it is unclear how many components per vertex you are using, but this might be a reason for the link failure.
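As a rough illustration with made-up numbers (I do not know the actual output declarations here): suppose the shader writes gl_Position plus one vec3 per vertex, i.e. 7 components:
layout(points) in;
layout(triangle_strip, max_vertices = 256) out;
out vec3 color; // 3 components, plus 4 for gl_Position = 7 per vertex
// Declared total = max_vertices * components per vertex = 256 * 7 = 1792,
// which already exceeds the guaranteed minimum of 1024 for
// gl_MaxGeometryTotalOutputComponents, so LinkProgram may legitimately
// fail on implementations that only provide the minimum.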
If I increase the length then I get a link error (with empty log).
The spec does not require any linker or compiler messages at all. However, Nvidia usually provides quite good log messages. If you can reproduce the "link failure without log message" scenario in the most current driver version, it might be worth filing a bug report.
I have a GLSL shader that works fine, until I add the following lines to it.
"vec4 coord = texture2D(Coordinates,gl_TexCoord[0].st);"
"mat4x3 theMatrix;"
"theMatrix[1][1] = 1.0;"
"vec3 CampProj = theMatrix * coord;"
When I check the error log, I am told:
ERROR: 0:2: '=' : cannot convert from '4-component vector of float' to '3-component vector of float'
If I make CampProj a vec4, this compiles fine, but I am very confused as to how a 4-column, 3-row matrix multiplied by a 4-component vector is going to result in a 4-component vector.
Is this a bug, or is it possible that the 4x3 matrix is really just a 4x4 under the hood, with a 0,0,0,1 final row? If not, can someone explain to me why the compiler is insisting on returning a vec4?
I'm using C++ via VS Express 2013, Windows 7, Mobile Intel® 4 Series Express Chipset Family.
UPDATE:
Reto's answer is what I expect to be the case: that it is a bug in the compiler. Both because that's the only thing that makes sense in a linear algebra context, and because the linear algebra definition is what the GLSL documentation references for matrix/matrix and matrix/vector multiplication. However, even after updating my video chipset drivers, the compiler shows me the same behavior. Could one other person confirm the behavior Reto describes?
@Reto: if nobody has confirmed by 12-05-14, I'll accept your answer as correct, as it seems the only real logical possibility.
This looks like a bug in your GLSL compiler. This should compile successfully:
mat4x3 mat;
vec3 res = mat * vec4(1.0);
and this should give an error:
mat4x3 mat;
vec4 res = mat * vec4(1.0);
I tested this on 3 configurations, and all of them confirmed this behavior:
Windows 8.1 with Intel HD Graphics 4600.
Windows 8.1 with NVIDIA GeForce GT 740M.
Mac OS Yosemite with Intel Iris Pro.
This also matches my understanding of the specs. In the GLSL 3.30 spec document, mat4x3 is described as:
a floating-point matrix with 4 columns and 3 rows
and multiplication is defined by (emphasis added):
The operator is multiply (*), where both operands are matrices or one operand is a vector and the other a matrix. A right vector operand is treated as a column vector and a left vector operand as a row vector. In all these cases, it is required that the number of columns of the left operand is equal to the number of rows of the right operand. Then, the multiply (*) operation does a linear algebraic multiply, yielding an object that has the same number of rows as the left operand and the same number of columns as the right operand.
In this example, the "number of columns of the left operand" is 4, which means that the vector needs to have 4 "rows", which is a column vector with 4 elements. Then, since the left operand has 3 rows, the resulting vector has 3 "rows", which is a column vector with 3 elements.
It's also the only thing that makes sense based on standard linear algebra.
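To spell out what the compiler should be doing: in GLSL, m[i] is the i-th column of the matrix (a vec3 for a mat4x3), and a matrix-vector product is a linear combination of the columns. A quick sketch (names mine):
mat4x3 m;
vec4 v = vec4(1.0);
// m * v expands to a combination of the four vec3 columns,
// so the result necessarily has 3 components:
vec3 res = m[0] * v.x + m[1] * v.y + m[2] * v.z + m[3] * v.w;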
That's because the component count of both your vec4 and your mat4x3 is four. You cannot expect to receive a vec3 out of that multiplication.
Why don't you just use a homogeneous matrix and a vec4? You'll get a vec4, and if you really must, you can convert it to a vec3 by doing
yourVec4.xyz
Hope this helps, my first answer!
I have completely re-written this first post to better show what the problem is.
I am using ps v1.4 (the highest version I have support for) and keep getting an error.
It happens any time I use any function such as cos, dot, distance, sqrt, normalize, etc. on something that was passed into the pixel shader.
For example, I need to do "normalize(LightPosition - PixelPosition)" to use a point light in my pixel shader, but normalize gives me an error.
Some things to note:
I can use things like pow, abs, and radians with no error.
There is only an error if the function is applied to something passed from the vertex shader. (For example, I could take the sqrt of a local pixel shader variable with no error.)
I get the error from calling a function on ANY variable passed in, even tex coords, color, etc.
Inside the vertex shader I can apply all of these functions to any variables passed in with no errors; it's only in the pixel shader that I get an error.
All the values passing from the vertex to the pixel shader are correct, because if I use software processing rather than hardware I get no error and a perfectly lit scene.
Since normalizing the vector is essentially where my error comes from, I tried creating my own normalizing function.
I call Norm(LightPosition - PixelPosition), and "Norm" looks like this:
float3 Norm(float3 v)
{
return v / sqrt(dot(v, v));
}
I still get the error, because I guess technically I'm still trying to take a sqrt inside the pixel shader.
The error isn't anything specific; it just says "error in application" on the line where I load my .fx file in C#.
I'm thinking it could actually be a compile error, because I have to use such old versions (vs 1.1 and ps 1.4).
When I debug using fxc.exe, it tells me "can not map instruction to pixel shader instruction set".
Old GPUs didn't support every instruction, especially in the pixel shader.
You might get away with a sqrt in the vertex shader, but for such an old version (1.1!!) the fragment shader might be extremely limited.
I.e., this might not be a bug.
The workaround could be to skip the HLSL and write your own assembly (but you might stumble onto the same problem there) and simulate the sqrt (say, with a texture lookup and/or interpolations, if you can have 2 textures in 1.0 :-p).
You can of course try to write a sqrt lookup/interpolation in HLSL, but it might be too big too (I don't remember, but IIRC 1.1 doesn't let you write very long shaders).
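As a rough sketch of the texture-lookup idea in HLSL (the RsqrtLUT texture and its encoding are hypothetical, the mapping of dot(v, v) into the [0, 1] texture-coordinate range is omitted, and it is not certain fxc can map even this dependent read to ps_1_4):
sampler RsqrtLUT; // 1D texture precomputed so the texel at x holds 1/sqrt(x)
float3 Norm(float3 v)
{
    float d = dot(v, v);            // squared length; dot itself is fine in ps 1.4
    float r = tex1D(RsqrtLUT, d).r; // fetch 1/sqrt(d) instead of computing it
    return v * r;                   // v / |v| without a sqrt instruction
}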
I have tried so many different strategies to get a usable noise function and none of them work. So, how do you implement Perlin noise on an ATI graphics card in GLSL?
Here are the methods I have tried:
I have tried putting the permutation and gradient data into a GL_RGBA 1D texture and calling the texture1D function. However, one call to this noise implementation leads to 12 texture calls and kills the framerate.
I have tried uploading the permutation and gradient data into a uniform vec4 array, but the compiler won't let me get an element in the array unless the index is a constant. For example:
int i = 10;
vec4 a = noise_data[i];
gives this compiler error:
ERROR: 0:43: Not supported when use temporary array indirect index.
Meaning I can only retrieve the data like this:
vec4 a = noise_data[10];
I also tried programming the array directly into the shader, but I got the same index issue. I hear NVIDIA graphics cards will actually allow this method, but ATI will not.
I tried making a function that returned a specific hard coded data point depending on the input index, but the function, being called 12 times and having 64 if statements, made the linking time unbearable.
ATI does not support the "built-in" noise functions for GLSL, and I can't just precompute the noise and import it as a texture, because I am dealing with fractals. This means I need the infinite precision of calculating the noise at run time.
So the overarching question is...
How?
For better distribution of random values I suggest these very good articles:
Pseudo Random Number Generator in GLSL
Lumina noise GLSL tutorial
Have random fun !!!
There is a project on GitHub with GLSL noise functions. It has both the "classic" and newer noise functions in 2, 3, and 4D.
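Assuming this refers to the commonly used webgl-noise repository, usage is just pasting the functions into your shader and calling them; the simplex variants follow an snoise(vec2/vec3/vec4) signature returning values in roughly [-1, 1]. A sketch (octave count and scales are arbitrary):
// snoise() pasted/included from the repository above
float fbm(vec2 uv) // uv: whatever 2D coordinate you want noise over
{
    float sum = 0.0;
    float amp = 0.5;
    for (int i = 0; i < 5; ++i) // a few octaves give the fractal look
    {
        sum += amp * snoise(uv * exp2(float(i)));
        amp *= 0.5;
    }
    return sum;
}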
iOS does have the noise function implemented.
noise() is well-known for not being implemented...
Roll your own:
// Linear congruential generator; example constants are from Wikipedia's LCG article.
const int a = 1103515245; // multiplier
const int m = 2147483647; // modulus (2^31 - 1, fits a signed int)
int c;  // increment, seeded per pixel
int Xn; // current state
void srand(int x, int y, int width) { // seed from the pixel position
    c = x + y * width;
    Xn = c;
}
int rand() {
    Xn = (a * Xn + c) % m; // a * Xn may wrap on overflow; good enough for noise
    return Xn;
}
For a and m values, see Wikipedia.
It's not perfect, but often good enough.
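To feed the result into a color you would typically map the state to [0, 1); a tiny helper under the same assumptions as above:
float frand()
{
    // abs() guards against overflow driving the state negative
    return abs(float(rand())) / float(m);
}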
This simplex noise stuff might do what you want.
Try adding #version 150 to the top of your shader.