I am using the GL_ARB_debug_output extension. The driver gives me this warning:
Program/shader state performance warning: Vertex shader in program 16 is being recompiled based on GL state.
So, I tried setting various GL state before compiling my shader, including GL_BLEND, GL_CULL_FACE, GL_DEPTH_TEST, the polygon offset, and the blend function.
But it still recompiles upon first draw.
What are the typical pieces of state that could cause the driver to recompile a vertex shader?
I've run across this problem recently. It turned out that unbinding Vertex Array Objects was causing this warning to appear. After deleting the lines with glBindVertexArray(0), the warning message disappeared.
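For illustration, a minimal sketch of that kind of change (vaoQuad and vaoNext are placeholder names):

// Before: unbinding to VAO 0 after each draw triggered the recompile warning
glBindVertexArray(vaoQuad);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindVertexArray(0);            // removing this call silenced the warning

// After: leave the VAO bound, or bind the next object's VAO directly
glBindVertexArray(vaoQuad);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindVertexArray(vaoNext);      // next object's VAO instead of 0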
Is there any way to automatically compile OpenGL shaders for Vulkan? The problem is with the uniforms.
'non-opaque uniforms outside a block' : not allowed when using GLSL for Vulkan
I have tried compiling for OpenGL, then decompiling with spirv-cross with --vulkan-semantics, but it still has non-opaque uniforms.
spirv-cross only seems to have options for converting Vulkan-style shaders back to OpenGL, for example:
--glsl-emit-push-constant-as-ubo
--glsl-emit-ubo-as-plain-uniforms
A shader meant for OpenGL consumption will not work on Vulkan. Even ignoring the difference in how they handle uniforms, they have very different resource models. Vulkan uses descriptor sets and binding points, and all kinds of resources share the same space of binding indices (set + binding). By contrast, OpenGL gives each kind of resource its own unique set of indices. So a GLSL shader meant for OpenGL consumption might assign a texture uniform and a uniform block to the same binding index. But you can't do that in a GLSL shader meant for Vulkan, not unless the two resources are in different descriptor sets.
If you want to share shaders, you're going to need to employ pre-processor trickery to make sure that the shader assigns resources (including how it apportions uniforms) for the specific target that the shader is being compiled for.
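For example, a minimal sketch of that kind of pre-processor switch in the shared GLSL source (the VULKAN_TARGET macro and the resource names are placeholders; how the macro gets defined depends on your shader toolchain):

#if defined(VULKAN_TARGET)
layout(set = 0, binding = 0) uniform sampler2D uColorTex;
layout(set = 0, binding = 1) uniform Params { vec4 uTint; };  // no loose uniforms in Vulkan GLSL
#else
layout(binding = 0) uniform sampler2D uColorTex;
uniform vec4 uTint;  // a plain non-opaque uniform is fine for OpenGL
#endif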
Is glDrawElements() supposed to be working in Emscripten's current release? (v1.37.1) Because no matter what I do, calling glDrawElements() gives me Error 1282 and of course, nothing is rendered in the browser.
Important: program runs perfectly after compiling with VS for PC, even with the shaders written for WebGL. Everything works as expected, and no errors are produced. But on the web: Error 1282.
Main loop:
glClear(GL_COLOR_BUFFER_BIT);
glBindVertexArray(VaoId);
glGetError(); // Clear any previous errors
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
int error = glGetError(); if (error != 0) printf("Error: %i\n", error);
glBindVertexArray(0);
glfwPollEvents();
glfwSwapBuffers(m_Instance);
I'm only trying to render a quad as well, 1 VBO in a VAO, with indices and positions both stored in one VBO. Indices first, starting at 0. The vertex attribute pointers are set correctly. The shaders compile for the web browser without errors. Literally the only time glGetError() produces an error code is straight after the glDrawElements() call.
Is this an emscripten bug, or a WebGL bug?
[EDIT]
Compiling using:
em++ -std=c++11 -s USE_GLFW=3 -s FULL_ES3=1 -s ALLOW_MEMORY_GROWTH=1 --emrun main.cpp -o t.html
Is this an emscripten bug, or a WebGL bug?
It's probably a bug in your code.
I find NVIDIA's drivers to be more forgiving than AMD's drivers, leading them to execute code which the latter would violently reject. It may be that something similar is happening here: your code works fine as a native application, but in a browser environment the bug becomes a critical issue. This could be explained, for example, by Chrome's use of ANGLE, which implements OpenGL with Direct3D (on Windows), which could lead to some differences from your native graphics driver. Of course, this is just speculation; nevertheless, it's unlikely that a function as essential as glDrawElements would be broken.
Error code 1282 is GL_INVALID_OPERATION, which for glDrawElements "is [in this case most likely] generated if a non-zero buffer object name is bound to an enabled array or the element array and the buffer object's data store is currently mapped" (source).
Alternatively, it is possible that your shader might be causing the error, in which case, it would be useful if you shared its source with us.
For anyone who might be having issues with the glDrawElements() call specifically on the Web:
My problem turned out to be the fact that I was storing the indices in one buffer together with all the vertex attributes. This worked fine on PC, but for the web you need to create two separate buffers: one for the indices, and another for all the positions/uvs/normals etc., then set the vertex attribute pointers appropriately for that VBO. In the browser, a buffer cannot be bound as a vertex attribute array and also as an element array buffer, which is what desktop OpenGL will let you do with no errors/warnings.
Make sure to bind both the VBO and the IBO to the VAO upon initialization. Then, just rebind the VAO, draw, unbind VAO, go render next object - job done.
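A minimal sketch of that setup (the buffer names, attribute layout, and data arrays are placeholders):

GLuint vao, vbo, ibo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glGenBuffers(1, &ibo);

glBindVertexArray(vao);

// Positions (and any other vertex attributes) live in their own buffer...
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

// ...and the indices live in a separate buffer, bound while the VAO is bound
// so the VAO records the element array binding.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

glBindVertexArray(0);

// Per frame: rebind the VAO and draw.
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (void*)0);
glBindVertexArray(0);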
I am trying to create something like an effect system for OpenGL, and I want to be able to define a number of shaders in the same file. But I discovered the following problem. Say I have two shaders: A and B. Shader A uses texA and shader B uses texB. Then, despite the fact that neither shader A uses texB nor shader B uses texA, both textures will be enumerated in both programs (I am using separate programs, so every shader corresponds to one program). One consequence is that I cannot have many textures defined in one file, since the shader will fail to link (it compiles successfully, but the linker then complains that the number of texture samplers exceeds the HW limit). The other problem is that I am doing automatic resource binding, and my shaders have lots of false resource dependencies.
So is there a way to tell the shader compiler/linker to remove all unused resources from the separate program?
Shader sampler uniforms are not there to select textures, but to pass texture units to the shader. The textures themselves are bound to the texture units. So the selection of which texture to use should not be done in the shader, but in the host program.
Or you could use bindless textures, if your OpenGL implementation (i.e. GPU driver) supports them.
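A minimal sketch of doing the selection on the host side (the program handle, the uniform name uTex, and the texture handles are placeholders):

// The sampler uniform holds a texture *unit* index, not a texture object.
GLint loc = glGetUniformLocation(program, "uTex");
glUseProgram(program);
glUniform1i(loc, 0);                  // the sampler reads from texture unit 0

// Which texture that unit sees is decided by the host program:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texA);   // bind texA before drawing objects that need it
// ...draw...
glBindTexture(GL_TEXTURE_2D, texB);   // later, point unit 0 at texB instead
// ...draw...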
From some examples of using OpenGL transform feedback, I see that glTransformFeedbackVaryings is called after the program's shaders are compiled and before the program is linked. Is this order enforced for all OpenGL versions? Can't a layout qualifier be used to set the indices, just like for vertex arrays? I am asking because in my code the shader program creation process is abstracted away from other routines, and before splitting it into controllable compile/link methods I would like to know if there's a way around this.
Update:
How is it done when using separable shader objects? There is no explicit linking step.
UPDATE:
It is still not clear to me how to set glTransformFeedbackVaryings when using separate shader objects.
This explanation is completely unclear to me:
If separable program objects are in use, the set of attributes captured is taken from the program object active on the last shader stage processing the primitives captured by transform feedback. The set of attributes to capture in transform feedback mode for any other program active on a previous shader stage is ignored.
I actually thought I could activate a pipeline object and do the query, but it seems to have no effect; my transform feedback writes nothing. Then I found this discussion in the Transform Feedback docs:
Can you output varyings from a separate shader program created with glCreateShaderProgramEXT?

RESOLVED: No.

glTransformFeedbackVaryings requires a re-link to take effect on a program. glCreateShaderProgramEXT detaches and deletes the shader object used to create the program, so a glLinkProgram will fail.

You can still create a vertex or geometry shader program with the standard GLSL creation process, where you could use glTransformFeedbackVaryings and glLinkProgram.
This is unclear too. Does the answer mean that, to set transform feedback varyings, one should use regular shader programs only? I don't get it.
What you are asking is possible using 4.4.2.1 Transform Feedback Layout Qualifiers; unfortunately, it is an OpenGL 4.4 feature. It is available in extension form through GL_ARB_enhanced_layouts, but this is a relatively new extension and adoption is sparse at the moment.
It is considerably more complicated than any of the more traditional layout qualifiers in GLSL, so your best bet for the foreseeable future is to manage the varyings from the GL API instead of in your shader.
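A minimal sketch of the traditional API-side approach (the vertexShader handle and the varying name outValue are placeholders):

GLuint program = glCreateProgram();
glAttachShader(program, vertexShader);

// Declare which outputs to capture *before* linking;
// the list only takes effect on the next link.
const char* varyings[] = { "outValue" };
glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);

glLinkProgram(program);  // the captured varyings are resolved here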
As far as varyings in SSO (separable shader object) programs, the OpenGL specification states the following:
OpenGL 4.4 (Core Profile) - 13.2 Transform Feedback - pp. 392
If separable program objects are in use, the set of attributes captured is taken from the program object active on the last shader stage processing the primitives captured by transform feedback. The set of attributes to capture in transform feedback mode for any other program active on a previous shader stage is ignored.
Ordinarily, linking identifies the varyings (denoted in/out in modern GLSL) that are actually used between stages and establishes the set of "active" uniforms for a GLSL program. Linking trims the dead weight that is not shared across multiple stages, performs static interface validation between stages, and is also when binding locations for any remaining varyings or uniforms are set. Since each program object can be a single stage when using SSOs, the linker is not going to reduce the number of inputs/outputs (varyings), and you can ignore a lot of language in the specification that says this must occur before or after linking.
Since linking the whole pipeline together is not a step when using separate shader objects, your transform feedback has to be relative to a single stage (which can mean a different program object depending on which stage you select). OpenGL uses the program associated with the final vertex processing stage enabled in your pipeline for this purpose; this could be a vertex shader, tessellation evaluation shader, or geometry shader (listed in increasing order of precedence). Whichever program provides the final vertex processing stage for your pipeline is the program object that transform feedback varyings are relative to.
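A minimal sketch of the workaround the quoted issue describes: build the last vertex-processing stage as a separable program through the standard create/attach/link path, so the varyings can be declared before the link (the shader handle, varying name, and pipeline object are placeholders):

GLuint prog = glCreateProgram();
glProgramParameteri(prog, GL_PROGRAM_SEPARABLE, GL_TRUE);
glAttachShader(prog, vertexShader);

const char* varyings[] = { "outPosition" };
glTransformFeedbackVaryings(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog);
glDetachShader(prog, vertexShader);

// The separable program can then be attached to a pipeline object as usual.
glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT, prog);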
In my game there is a render module that handles shaders, framebuffers and drawing. Now I want to encapsulate the logic of these three tasks separately. The idea is to split up the render module into three modules. I am doing this to reduce code complexity and to make live shader reload easy to implement, but that isn't important for my question.
The drawing module would create empty program objects with glCreateProgram() and store them globally along with the path to the source file. The shader module would check for them and create the actual shader by loading the source file, compiling and linking.
But with this concept, a case could occur where the render module already wants to draw, but the shader module hasn't created the actual shader yet. So my question is: is it valid to draw with an empty shader program? It is completely acceptable for me that the screen would be black when this happens. Creating the shaders should be done very soon after, so the delay might be unnoticeable.
Is it valid to draw with an empty shader program? How can I implement the idea of elsewhere loaded shaders otherwise?
How do you define "valid" and "empty"?
If the program object's last link was not successful (or if it has never been linked), then calling glUseProgram on it is a GL_INVALID_OPERATION error. Because glUseProgram fails, the current program will not change, so all subsequent glUniform calls will refer to the previously current program and not the new one; if that program is zero, you get more GL_INVALID_OPERATION errors.
If there is no current program (ie, the program is 0), then attempting to render will produce undefined behavior.
Is undefined behavior "valid" for your needs? If you're not going to show the user that frame (by not calling swap buffers), then what you render won't matter. Is having all of those errors in the OpenGL error queue "valid" for you?
Two relevant passages in the documentation:
glAttachShader
All operations that can be performed on a shader object are valid whether or not the shader object is attached to a program object. It is permissible to attach a shader object to a program object before source code has been loaded into the shader object or before the shader object has been compiled.
glUseProgram
If program is zero, then the current rendering state refers to an invalid program object and the results of shader execution are undefined. However, this is not an error.
If program does not contain shader objects of type GL_FRAGMENT_SHADER, an executable will be installed on the vertex, and possibly geometry processors, but the results of fragment shader execution will be undefined.
So it seems it is possible to do that, but you might not get the same result on all machines.
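If you want to avoid that undefined behaviour entirely, a minimal sketch of guarding the draw on the program's link status (the program, vao, and indexCount names are placeholders):

GLint linked = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linked);

if (linked) {
    glUseProgram(program);
    glBindVertexArray(vao);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, (void*)0);
} else {
    // The shader module hasn't finished compiling/linking yet:
    // skip this object, or draw it with a known-good fallback program.
}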