Parsing GLSL error messages - opengl

When I compile a broken GLSL shader, the NVIDIA driver gives me error messages like this:
0(102) : error C1008: undefined variable "vec"
I know the number inside the brackets is the line number. I wonder what the 0 at the beginning of the error message means. I hoped it would be the index in the sources array which is passed to glShaderSource but that's not the case. It's always 0. Does someone know what this first number means?
And is there some official standard for the error message format so that I can parse the line number from it, or do other OpenGL implementations use different formats? I only have access to NVIDIA hardware, so I can't check what the error messages look like on AMD or Intel hardware.

It is a file name. Since you can't specify one via the GL API, it is always 0.
You can set it with a #line num filename preprocessor directive right inside the shader code. This can be helpful if your shader is assembled from many files with #includes by an external preprocessor (before the source is passed to GL).
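A hedged sketch of that approach (loadFile() and lighting.glsl are made-up names; the file-name form of #line is assumed to be vendor-specific, since core GLSL only accepts an integer source-string number there):
#include <fstream>
#include <sstream>
#include <string>

// Hypothetical helper: read a whole file into a string.
static std::string loadFile(const std::string& path)
{
    std::ifstream in(path);
    std::ostringstream ss;
    ss << in.rdbuf();
    return ss.str();
}

static std::string buildShaderSource()
{
    std::string source = "#version 330 core\n";
    // Tag the following lines with a file name so the driver's messages read
    // "lighting.glsl(42)" instead of "0(42)". The string argument is assumed
    // to be a vendor extension; core GLSL's #line takes only integers.
    source += "#line 1 \"lighting.glsl\"\n";
    source += loadFile("lighting.glsl");
    return source;
}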
There is no standard for the messages; every implementation does whatever it wants.
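If you need to extract line numbers anyway, the parser has to target the vendor formats you actually encounter. A minimal, hedged sketch for the NVIDIA pattern quoted above (parseNvidiaLogLine is just an illustrative name; other drivers will need their own patterns):
#include <regex>
#include <string>

// Pulls the source-string index, line number and message out of an
// NVIDIA-style info-log line such as:
//     0(102) : error C1008: undefined variable "vec"
// Returns false if the line does not match this particular format.
static bool parseNvidiaLogLine(const std::string& line,
                               int& sourceIndex, int& lineNumber, std::string& message)
{
    static const std::regex pattern(R"((\d+)\((\d+)\)\s*:\s*(.*))");
    std::smatch m;
    if (!std::regex_match(line, m, pattern))
        return false;
    sourceIndex = std::stoi(m[1].str());
    lineNumber  = std::stoi(m[2].str());
    message     = m[3].str();
    return true;
}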

glShaderSource accepts an array of source strings. The first number is the index into that array.

Related

How do I set the filename for OpenGL shader compile errors returned by glGetShaderInfoLog?

When compiling shaders with OpenGL, you can retrieve log lines by calling glGetShaderInfoLog. At least in the driver I'm using (Nvidia), this will return results like:
0(14) : error C1035: assignment of incompatible types
The leading 0 is the index of the string passed to glShaderSource and the number in parentheses is the line number.
Can I set that value to the actual filename so the compiler errors are more useful? Alternatively, is there a consistent way to parse the driver output?

How to find name of error ID for NVIDIA OpenGL drivers?

I have an error message (which is mostly a warning, not so much an actual error).
Using the debug output callback (glDebugMessageCallback), the error ID returned in decimal is 131186 (the ID is in the same class of enumerants as GL_NO_ERROR, GL_INVALID_ENUM, ...).
I want to read the documentation for this value, but I can't seem to find it by searching. It's not an official OpenGL enumerant, so I assume it is driver-specific (NVIDIA).
EDIT:
The full message is:
Source: GL_DEBUG_SOURCE_API
Type: GL_DEBUG_TYPE_PERFORMANCE
ID: 0x20072
Severity: GL_DEBUG_SEVERITY_MEDIUM
Message:
Buffer performance warning: Buffer object "SSBO" (bound to
GL_SHADER_STORAGE_BUFFER, and GL_SHADER_STORAGE_BUFFER (3), usage hint is
GL_DYNAMIC_DRAW) is being copied/moved from VIDEO memory to HOST memory.
Does anyone know what this error code means or how to find its documentation?
This warning simply means that OpenGL does not have total control over the SSBO. Because of that, it has to either block or copy the SSBO's data before OpenGL can use it properly. This is slightly inefficient, which is why the driver is warning you about it.
As for the documentation, I haven't really found any. But, I did find this other question which referenced a very similar problem with OpenGL and OpenCL: OpenCL Host Copying Performance Warning
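For reference, a hedged sketch of how such messages can be captured in the first place (assumes a debug-capable 4.3+/KHR_debug context that is already current; the glad include is only an assumption, use whatever loader you have):
#include <glad/glad.h>   // assumption: GL declarations come from glad
#include <cstdio>

// Prints the same fields quoted above: source, type, id (e.g. 0x20072),
// severity and the human-readable message text.
static void APIENTRY debugCallback(GLenum source, GLenum type, GLuint id,
                                   GLenum severity, GLsizei /*length*/,
                                   const GLchar* message, const void* /*userParam*/)
{
    std::fprintf(stderr, "source=0x%X type=0x%X id=0x%X severity=0x%X\n  %s\n",
                 source, type, id, severity, message);
}

static void installDebugCallback()
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);   // deliver messages on the offending call's thread
    glDebugMessageCallback(debugCallback, nullptr);
}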

Scilab compilation "cannot allocate this quantity of memory"

I am facing issues with memory allocation in Scilab after compiling.
I am compiling on Red Hat on ppc64 (POWER8). Stack limits are already set to unlimited (ulimit -s unlimited). The ./configure script (with several options I am not showing here) runs successfully, but make all fails and stops. When it stops, it is stuck at the Scilab command prompt with this message:
./bin/scilab-cli -ns -noatomsautoload -f modules/functions/scripts/buildmacros/buildmacros.sce
stacksize(5000000);
!--error 10001
stacksize: Cannot allocate memory.
%s: Cannot allocate this quantity of memory.
at line 27 of exec file called by :
exec('modules/functions/scripts/buildmacros/buildmacros.sce',-1)
-->
I have investigated a bit, and that error message is of course triggered at line 27 of buildmacros.sce, where the function stacksize(5000000) is called.
This function is defined in:
scilab-5.5.1/modules/core/sci_gateway/c/sci_stacksize.c
I found a version of the file at this page: http://doxygen.scilab.org/master_wg/d5/dfb/sci__stacksize_8c_source.html.
The condition that fails and triggers the message appears to be at line 295.
Inside that file, you can see that the error is displayed whenever the stack size given as input is LARGER than the value returned by get_max_memory_for_scilab_stack(), defined in:
scilab-5.5.1/modules/core/src/c/stackinfo.c
Again I found a version online at the following page:
http://doxygen.scilab.org/master_wg/dd/dfb/stackinfo_8h.html#afbd65a57df45bed9445a7393a4558395
The function is declared starting at line 109.
It uses a variable called MAXLONG which, at first sight, is NEVER explicitly declared! As you can see, it appears several times (lines 19, 35, 43, 50), but all of those lines are commented. [Correction: the lines are NOT commented; it was my false understanding of # being a comment sign, which it is not.]
So my guess is: MAXLONG is not defined, so the function returns 0 (or no meaningful value), and the error message is therefore triggered because the requested stack size is larger than 0 or NULL or N/A.
My questions are then:
Why are all lines commented where MAXLONG is defined?
Where does MAXLONG originate from? Is it something passed from the kernel?
How can I solve the problem?
Thanks!
PS - I tried commenting out the stacksize line in buildmacros, and it compiled and installed without issues. However, when I started scilab-cli, it displayed the same message again.
Edit after further investigation:
After further investigation, I found out that what I thought were comments are in fact preprocessor directives... but I have kept those errors of mine above so that the answer to my question remains understandable.
Here are my new points.
In Scilab I noticed that, when an out-of-bounds stack size is given, the same function get_max_memory_for_scilab_stack() is invoked to get the upper bound. The lower bound, as far as I can see, is defined by default.
-->stacksize(1)
!--error 1504
stacksize: Out of bounds value. Not in [180000,268435454].
The current stacksize also seems fine:
-->stacksize()
ans =
7999994. 332.
However, when I try a value in between those bounds, it fails as well.
It seems to invoke a variable called MAXLONG
It's not a variable, but a pre-processor macro.
Why are all lines commented where MAXLONG is defined?
You should ask the person who commented the lines out; they're not commented in the scilab-5.5.1 sources that are online.
Where does MAXLONG originate from? Is it something passed from the kernel?
It's defined in the file scilab-5.5.1/modules/core/src/c/stackinfo.c. It's defined to the same value as LONG_MAX, which is defined by the standard C library (<limits.h> header). If the macro is not supplied by the standard library, then it's defined to some other, platform-specific value.
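A hedged reconstruction of that pattern (not a verbatim copy of stackinfo.c; the actual fallback values and platform branches differ) looks roughly like this:
#include <limits.h>

/* Use the C library's LONG_MAX when it is available; otherwise fall back to a
 * platform-specific constant. This is preprocessor logic, not runtime code,
 * which is why the '#' lines are not comments. */
#ifdef LONG_MAX
#define MAXLONG LONG_MAX
#else
#define MAXLONG 2147483647L   /* illustrative 32-bit fallback only */
#endif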
How can I solve the problem?
If your problem originates from the lack of a definition for MAXLONG, then you must define it. One way of going about it is to uncomment the lines that define it, or re-download the original sources, since yours don't appear to match the official ones.

Igraph eigenvector centrality Run-Time error c++

I'm writing a program in C++ that needs to generate graphs and calculate some measures. I'm working with Visual Studio 2013 and the igraph C library. At this point I can create graphs from custom data and calculate some metrics like betweenness and closeness centrality, but when I try to calculate eigenvector centrality, the program crashes and shows me this message:
"Run-Time Check Failure #3 - The variable 'tgetv0' is being used without being initialized."
The tgetv0 variable is used inside dgetv.c from the igraph source.
Here is my code:
void GraphObject::calcEigen()
{
    igraph_arpack_options_t options;
    igraph_real_t value;
    igraph_vector_t weights;
    igraph_vector_init(&weights, igraph_ecount(&cGraph));   // cGraph is already created
    igraph_vector_init(&eigenRes, igraph_vcount(&cGraph));  // all ..Res igraph_vector_t are declared in the header
    igraph_vector_init(&betweennesRes, 0);
    igraph_vector_init(&closenessRes, 0);
    igraph_arpack_options_init(&options);
    igraph_betweenness(&cGraph, &betweennesRes, igraph_vss_all(), 0, 0, 1);
    igraph_closeness(&cGraph, &closenessRes, igraph_vss_all(), IGRAPH_ALL, 0, 1);
    igraph_eigenvector_centrality(&cGraph, &eigenRes, &value, 0, 1, &weights, &options);
}
Closeness and betweenness are correctly calculated and "couted", but it crashes on the eigenvector function.
After a lot of research in the documentation, on the internet and with the debugger, I can't figure out what the problem is, especially since I tried the example code from the documentation http://igraph.org/c/doc/igraph-Structural.html#igraph_eigenvector_centrality (copy/paste) and it does the same. Is this a library or example issue, or am I missing something?
When I init the weights vector and then call igraph_null(&weights), it works, but the resulting centrality for every vertex is 1, which is incorrect. What am I doing wrong?
Let us assume that Visual Studio is right and we indeed have a variable named tgetv0 that is being used uninitialized. I scanned igraph's source code and it looks like there are two places where it could indeed be the case. One of them is in src/lapack/dnaupd.c, the other one is in src/lapack/dsaupd.c. Both of these files were converted from Fortran using f2c so it is hard to tell whether the issue was present in the original Fortran code or whether this was introduced during the conversion. Either way, you can probably fix this easily by looking up the lines where tgetv0 is declared in src/lapack/dnaupd.c and src/lapack/dsaupd.c and initializing it to a value of 0. In my version, the lines to change are line 486 in src/lapack/dnaupd.c and line 482 in src/lapack/dsaupd.c.
Please add a comment to confirm whether the solution works for you or not - if it works, I'll commit a patch to the igraph source tree.
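To illustrate the kind of one-line change meant above (a hedged sketch only; the real declarations live in igraph's f2c-converted src/lapack/dnaupd.c and dsaupd.c, and the exact types and line numbers depend on the igraph version):
typedef long int logical;          /* f2c's boolean type, normally supplied by f2c.h */

/* Before (what the run-time check complains about):
 *     static logical tgetv0;
 * After: give the converted Fortran SAVE variable a defined initial value. */
static logical tgetv0 = 0;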

glBindFramebuffer causes an "invalid operation" GL error when using the GL_DRAW_FRAMEBUFFER target

I'm using OpenGL 3.3 on a GeForce 9800 GTX. The reference pages for 3.3 say that an invalid operation with glBindFramebuffer indicates a framebuffer ID that was not returned from glGenFramebuffers. Yet, I output the ID returned by glGenFramebuffers and the ID I send later to glBindFramebuffer and they are the same.
The GL error goes away, however, when I change the target parameter in glBindFramebuffer from GL_DRAW_FRAMEBUFFER to GL_FRAMEBUFFER. The documentation says I should be able to use GL_DRAW_FRAMEBUFFER. Is there any case in which you can't bind to GL_DRAW_FRAMEBUFFER? Is there any harm from using GL_FRAMEBUFFER instead of GL_DRAW_FRAMEBUFFER? Is this a symptom of a larger problem?
If glBindFramebuffer(GL_FRAMEBUFFER) works when glBindFramebuffer(GL_DRAW_FRAMEBUFFER) does not, and we're not talking about the EXT version of these functions and enums (note the lack of "EXT" suffixes), then it's likely that you have done something wrong. GL_INVALID_OPERATION is the error you get when a combination of parameters conflicts with some other piece of current state. If it were just an unsupported enum, you would get GL_INVALID_ENUM.
Of course, it could just be a driver bug too. But there's no way to know without knowing what your code looks like.
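One way to narrow it down is to attribute the error to the exact call (a hedged sketch; assumes a GL 3.3 context is current, and the glad include is only an assumption, use whatever loader you have):
#include <glad/glad.h>   // assumption: GL declarations come from glad
#include <cstdio>

// Drain any stale errors first, then check immediately after the bind so a
// GL_INVALID_OPERATION can be blamed on this exact call and nothing else.
static void bindDrawFramebufferChecked(GLuint fbo)
{
    while (glGetError() != GL_NO_ERROR) { /* discard unrelated errors */ }

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);

    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
        std::fprintf(stderr, "glBindFramebuffer(GL_DRAW_FRAMEBUFFER, %u) -> 0x%X\n", fbo, err);
}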