"invalid operation" on shader loading and compiling - opengl

Problem
Given this vertex shader in a.vs:
#version 330
in vec2 vPosition;
void main() {
    gl_Position = vec4(vPosition, 0.0, 1.0);
}
and given:
import qualified Graphics.GLUtil as GLU
import qualified Graphics.Rendering.OpenGL as GL
this line:
vs <- GLU.loadShader GL.VertexShader $ shaderPath </> "a.vs"
causes:
GL: Error InvalidOperation "invalid operation"
at runtime.
Details
I'm running on Mac OS X 10.10.2. The OpenGL context is set via GLFW with:
GLFW.windowHint $ GLFW.WindowHint'OpenGLDebugContext True
GLFW.windowHint $ GLFW.WindowHint'ContextVersionMajor 3
GLFW.windowHint $ GLFW.WindowHint'ContextVersionMinor 3
GLFW.windowHint $ GLFW.WindowHint'OpenGLForwardCompat True
GLFW.windowHint $ GLFW.WindowHint'OpenGLProfile GLFW.OpenGLProfile'Core
giving an OpenGL 3.3 context.
The full context of the code can be found in this repository (link to the specific commit), specifically in Main.hs.
Question
What can I do to fix this issue, or to get more debugging information?

I ran your code under gDebugger, and it made the problem plain:
GL.matrixMode $= GL.Projection
GL.loadIdentity
GL.ortho2D 0 (realToFrac w) (realToFrac h) 0
This leftover piece of fixed-function code was triggering the error state; the matrix stack (matrixMode, loadIdentity, ortho2D) does not exist in a 3.3 core profile, so the calls fail with:
Error-Code: GL_INVALID_OPERATION
Error-Description:
The specified operation is not allowed in the current state. The offending function is ignored, having no side effect other than to set the error flag.
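To narrow such errors down without a debugger, you can drain the GL error queue after each suspect call. A minimal sketch using the same Graphics.Rendering.OpenGL binding (the helper name checkGLError is made up here):
import qualified Graphics.Rendering.OpenGL as GL

-- Report (and thereby clear) any pending GL errors, tagged with the
-- call site, so the offending call can be pinpointed.
checkGLError :: String -> IO ()
checkGLError tag = do
  errs <- GL.get GL.errors   -- reading the queue also empties it
  mapM_ (\e -> putStrLn (tag ++ ": " ++ show e)) errs
Sprinkling checkGLError "after ortho2D" and friends between the calls above pinpoints the offender much the same way gDebugger does.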
As a side note, shader compilation itself can never raise INVALID_OPERATION; you only get that error later, if you try to render with a broken pipeline. Compilation errors are reported through the shader's compile status and info log, which you can query directly.
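With the Graphics.Rendering.OpenGL binding from the question, that check looks roughly like this (a hedged sketch; GLU.loadShader presumably performs a similar check internally):
import Control.Monad (unless)
import qualified Data.ByteString as BS
import Graphics.Rendering.OpenGL (($=), get)
import qualified Graphics.Rendering.OpenGL as GL

-- Compile a shader from a file and print the info log on failure.
compileFromFile :: GL.ShaderType -> FilePath -> IO GL.Shader
compileFromFile ty path = do
  src    <- BS.readFile path
  shader <- GL.createShader ty
  GL.shaderSourceBS shader $= src
  GL.compileShader shader
  ok <- get (GL.compileStatus shader)
  unless ok $ do
    msg <- get (GL.shaderInfoLog shader)   -- the actual compiler message
    putStrLn ("compile failed: " ++ msg)
  return shader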

Related

Check OpenGL Version in Swift 3 beta 6

I would like to check which OpenGL version is available on the computer before I execute any OpenGL commands. I wrote a small function to get the NSOpenGL version:
import OpenGL
func getNSOpenGLVersion() -> String {
    var major: GLint = 0
    var minor: GLint = 0
    NSOpenGLGetVersion(&major, &minor)
    return "\(major),\(minor)"
}
But this function always returns "1,2", whether on Yosemite or El Capitan, on an older Mac Pro 3,1 or a new Mac Pro 6,1 or a Mac Mini. Always the same.
What I want is the actual OpenGL version, i.e. "3,3" or "4,1" or the like, of whatever is currently installed. I tried the following:
let v = glGetString(GL_VERSION)
But this does not compile; the error says "Cannot convert value of type Int32 to expected type GLenum".
When I instead convert the GL_EXTENSIONS constant to GLenum with:
let v = glGetString(GLenum(GL_EXTENSIONS))
It compiles, but I get an exception when running the code. Is there some initialization needed before calling glGetString?
The question is: I need a small function that returns the OpenGL version as a string, not the NSOpenGL framework version but the actual OpenGL version of my current hardware and context. Can anybody help with this?

What is the limit on work item (shader instance) memory in WebGL?

I declare an array in my WebGL vertex shader:
attribute vec2 position;
void main() {
    #define length 1024
    float arr[length];
    // use arr so that it doesn't get optimized away (body completed for illustration)
    float sum = 0.0;
    for (int i = 0; i < length; i++) { arr[i] = position.x; sum += arr[i]; }
    gl_Position = vec4(position, 0.0, sum);
}
This works, but if I increase length to 2048 then gl.drawArrays does nothing. There are no errors: the shaders compile, the program links, and it passes gl.validateProgram. I'm guessing that I tried to use too much memory on the stack. Is there a better, programmatic way to discover this limit? Am I doing something else wrong?
There are no errors: the shaders compile, the program links, and it passes gl.validateProgram.
As guaranteed by the spec!
Section 2.10: "Vertex Shaders", page 42:
A shader should not fail to compile, and a program object should not fail to
link due to lack of instruction space or lack of temporary variables.
The GLSL spec helpfully notes:
Appendix A, section 3: "Usage of Temporary Variables":
The maximum number of variables is defined by the conformance tests.
You can get your very own copy of the conformance tests for the low, low price of $14,000-$19,000.
However, you can at least detect this situation (Section 2.10, page 41):
It is not always possible to determine at link time if a program object actually will execute. Therefore validation is done when the first rendering command (DrawArrays or DrawElements) is issued, to determine if the currently active program object can be executed. If it cannot be executed then no fragments will be rendered, and the rendering command will generate the error INVALID_OPERATION.
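That guarantee can be turned into a runtime check: issue the first draw, then poll the error queue. A hedged sketch against desktop GL, in the Haskell binding used elsewhere on this page (in WebGL itself the analogous call is gl.getError() right after gl.drawArrays):
import Control.Monad (unless)
import qualified Graphics.Rendering.OpenGL as GL

-- A shader that exceeds resource limits surfaces here as InvalidOperation.
drawChecked :: GL.NumArrayIndices -> IO ()
drawChecked n = do
  GL.drawArrays GL.Triangles 0 n
  errs <- GL.get GL.errors
  unless (null errs) $
    putStrLn ("first draw failed: " ++ show errs)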

GLSL 4.2 - Syntax error: "layout" parse error

So, I recently found an interesting shader and tried to compile it.
But, the GLSL compiler threw the following error:
ERROR: 0:50: error(#132) Syntax error: "layout" parse error
// (fragment shader)
#version 420
...
uint ImageAtomic_Average_RGBA8(layout (r32ui) volatile uimage3D Img, ivec3 Coords, vec4 NewVal)
{ ... }
Details:
Card: AMD Radeon HD 7870 (it supports OpenGL 4.2)
I tried both the 4.2 driver and the 4.3 beta driver.
A layout qualifier cannot be part of the function's signature. Section 6.1.1 of the GLSL 4.40 Specification defines the following grammar for a function prototype:
function-prototype:
    precision-qualifier type function-name(parameter-qualifiers precision-qualifier type name array-specifier, ...)
Now, a parameter-qualifier can be one of:
const
in
out
inout
precise
a memory qualifier (volatile, ...)
a precision qualifier (lowp, ...)
Consistent with this, section 4.10 explicitly states:
Layout qualifiers cannot be used on formal function parameters [..]
If you drop the layout qualifier, declaring the parameter as just volatile uimage3D Img, you should be fine. If not, it's a driver bug.

Delete an existing shader or program (or get its Id to do so)

I have a compiled shader or program (not sure of the correct term) and I need to delete it.
How do I find the Id of compiled programs and/or shaders to do so?
I know it exists because the debugger tells me that I am trying to redefine it, and that it cannot be compiled again because of this:
ERROR: 0:1: error(#198) Redefinition at_coord_Y
ERROR: 1:1: error(#248) Function already has a body main
The first line of the shader's source is:
"in float at_coord_Y;"
Can I somehow use this to find the Id?
EDIT 1: To clarify a bit: the shader fails to compile because, as far as I can tell, it already exists.
GLint compiled = UNDEFINED_VALUE;
const GLchar* shaderSrc[] = {
"in float at_coord_Y;",
"void main()",
"{",
// Dont mind the empty space
"}"
};
GLuint shaderId = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(shaderId, glNumberOfLines(shaderSrc), shaderSrc, NULL);
glCompileShader(shaderId); // Fail to compile because it already exists. Redefinition error.
glGetShaderiv(shaderId, GL_COMPILE_STATUS, &compiled); // Compile status GL_FALSE
But how can I find the Id of an existing shader (or program)?
You are completely misunderstanding the error. OpenGL isn't saying that the shader object (what you get with glCreateShader) is already defined. It's saying that there is a problem in your shader's text (what you passed with glShaderSource).
There are many problems with your shader loading. I have no idea where you got this loading code from, but I strongly advise avoiding that place.
glShaderSource takes multiple strings, yes. But that doesn't mean you throw every line into a separate string; the feature is meant for prepending "headers" and the like. The compiler simply concatenates all of the strings, with no newlines inserted between them, so your source collapses into one long line.
In general, unless you're using the extra strings as headers to prefix onto your main shader, just pass an array of one string. Save yourself the pain.
Also, you didn't use a #version directive. Without one you're forced to use GLSL 1.10, and in GLSL 1.10, in float at_coord_Y; is not a legal definition (that version spells it attribute float at_coord_Y;).
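For comparison, the single-string approach in the Haskell OpenGL binding used elsewhere on this page (a hedged sketch; shaderSourceBS takes exactly one ByteString, so the newline/concatenation pitfall cannot arise, and the GLSL body here is made up):
import qualified Data.ByteString.Char8 as BS
import Graphics.Rendering.OpenGL (($=))
import qualified Graphics.Rendering.OpenGL as GL

-- One source string, explicit newlines, #version up front.
setSource :: GL.Shader -> IO ()
setSource shader =
  GL.shaderSourceBS shader $= BS.pack
    "#version 330\n\
    \in float at_coord_Y;\n\
    \void main() { gl_Position = vec4(0.0, at_coord_Y, 0.0, 1.0); }\n"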

Can't comprehend "unknown OpenGL extension entry" error triggered by Haskell OpenGL program

I wrote the following program on Windows XP using GHC 7.4.1 (Haskell Platform 2012.2.0.0):
import Foreign (castPtr, nullPtr, peek, with)
import Foreign.C.String (withCString)
import Graphics.Rendering.OpenGL.Raw   -- OpenGLRaw; vertexShader is the GLSL source, defined elsewhere

mkVertexShader :: IO Bool   -- True if compilation failed
mkVertexShader = do
  shader <- glCreateShader gl_VERTEX_SHADER
  -- glShaderSource wants a pointer to an array of string pointers,
  -- hence the extra 'with' around the CString
  withCString vertexShader $ \ptr ->
    with (castPtr ptr) $ \srcArray ->
      glShaderSource shader 1 srcArray nullPtr
  glCompileShader shader
  status <- with 0 $ \ptr -> do
    glGetShaderiv shader gl_COMPILE_STATUS ptr
    peek ptr
  return $ status == fromIntegral gl_FALSE
When run, the program aborts with
*** Exception: user error (unknown OpenGL extension entry glCreateShader, check for OpenGL 3.1)
I'm not sure what this error means, or how to address it. Can anyone help?
You don't have OpenGL 3.1 support on your computer. You have imported the function from Core31, while you might want the one from Core21 [1] or ARB.ShaderObjects [2]. You need to check whether your graphics card supports the required versions/extensions when starting the application (see the sketch after the footnotes), and especially make sure you aren't requesting an OpenGL profile that your hardware doesn't support.
If you use the Haskell OpenGL library instead of OpenGLRaw, this distinction is taken care of for you automatically.
[1] Well, the function hasn't changed between Core21 and Core31, so using the old version won't help.
[2] You should never use ARB_shader_objects.
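As a concrete startup check with the high-level library, a hedged sketch (glVersion is a string query from Graphics.Rendering.OpenGL and needs a current context):
import Data.List (isPrefixOf)
import qualified Graphics.Rendering.OpenGL as GL

-- Refuse to run the shader path on a pre-2.0 context.
checkShaderSupport :: IO Bool
checkShaderSupport = do
  version <- GL.get GL.glVersion   -- e.g. "3.3.0 NVIDIA 331.113"
  putStrLn ("GL version: " ++ version)
  return (not ("1." `isPrefixOf` version))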