Recompile and relink a vertex shader in a shader program - c++

I have in place a system for detecting changes made to my shader files. When a shader changes, say the vertex shader, I want to recompile just that one and swap it in for the old version (while keeping the shader's inputs/outputs the same, so the existing VAO, VBO, etc. can be reused).
std::pair<bool, std::string> Shader::recompile() {
    std::string vertex_src = load_shader_source(vertex_filepath);

    glDetachShader(gl_program, vertex_shader); // gl_program is a GLuint for the shader program
    glDeleteShader(vertex_shader);

    vertex_shader = glCreateShader(GL_VERTEX_SHADER);
    auto raw_str = vertex_src.c_str();
    glShaderSource(vertex_shader, 1, &raw_str, NULL);
    glCompileShader(vertex_shader);

    GLint vertex_shader_status = 0;
    glGetShaderiv(vertex_shader, GL_COMPILE_STATUS, &vertex_shader_status);

    if (vertex_shader_status == GL_TRUE) {
        glAttachShader(gl_program, vertex_shader);
        // glLinkProgram(gl_program);

        // Logging - mostly irrelevant to the question
        GLint is_linked = 0;
        glGetProgramiv(gl_program, GL_LINK_STATUS, &is_linked);
        SDL_Log("Shader relinking is success: %i", is_linked == GL_TRUE);

        std::string err_log;
        GLint max_log_lng = 0;
        glGetProgramiv(gl_program, GL_INFO_LOG_LENGTH, &max_log_lng);
        err_log.resize(max_log_lng); // resize (not reserve) so the buffer is actually writable
        glGetProgramInfoLog(gl_program, max_log_lng, NULL, &err_log[0]);
        SDL_Log("%s", err_log.c_str());

        return {true, ""};
    }
    return {false, "Vertex shader failed to compile"};
}
With this approach nothing changes on screen, but as soon as I actually link the program (inside the if-statement) everything disappears. I assume this is because linking changes all the bindings and invalidates the existing ones, so nothing is rendered at all.
So, is it even possible to hot-swap a specific shader in a shader program like this?

Linking does not invalidate any attribute bindings, since they are part of the VAO state and not of the program. There are two things that can happen when relinking the program:
The index of an attribute might change. This can be prevented either by fixing the locations in both shaders (layout(location = ...)) or by fixing them between compiling and linking (glBindAttribLocation). The same thing can happen for uniforms and can be handled in the same way.
The values of uniforms are lost. This cannot easily be prevented, but setting them every frame fixes the problem.
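Putting both points together: pin the attribute locations before the relink and re-upload the uniforms afterwards, and the existing VAO/VBO setup keeps working. A minimal sketch, reusing the question's gl_program member; the attribute and uniform names ("position", "normal", "u_mvp") and the mvp_matrix variable are only placeholders:
bool Shader::relink_keeping_bindings() {
    // Pin attribute indices before linking so the VAO's attribute setup stays valid.
    // Not needed if both shaders already use layout(location = ...).
    glBindAttribLocation(gl_program, 0, "position");
    glBindAttribLocation(gl_program, 1, "normal");

    glLinkProgram(gl_program);

    GLint linked = GL_FALSE;
    glGetProgramiv(gl_program, GL_LINK_STATUS, &linked);
    if (linked != GL_TRUE) {
        return false;
    }

    // Relinking resets all uniform values, so re-upload them
    // (or simply keep setting them every frame, as suggested above).
    glUseProgram(gl_program);
    glUniformMatrix4fv(glGetUniformLocation(gl_program, "u_mvp"), 1, GL_FALSE, mvp_matrix);
    return true;
}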

Related

GL_INVALID_ENUM with glMapBuffer()

I'm completely new to OpenGL, so I assume I'm probably doing something stupid here. Basically I'm just trying to memory map a buffer object I've created, but glMapBuffer() is returning NULL and giving an error code of GL_INVALID_ENUM. Here's the relevant code that's failing:
glGenVertexArrays(1, &m_vao);
glBindVertexArray(m_vao);

glGenBuffers(1, &m_vertex_buffer);
glNamedBufferStorage(m_vertex_buffer,
                     BUFFER_SIZE,
                     NULL,
                     GL_MAP_READ_BIT | GL_MAP_WRITE_BIT | GL_DYNAMIC_STORAGE_BIT);
glBindBuffer(GL_ARRAY_BUFFER, m_vertex_buffer);

void* vertex_buffer = glMapBuffer(GL_ARRAY_BUFFER, GL_READ_ONLY);
if (!vertex_buffer)
{
    GLenum error = glGetError();
    fprintf(stderr, "Buffer map failed! %d (%s)\n", error, gluErrorString(error));
    return;
}

glUnmapBuffer(GL_ARRAY_BUFFER);
glBindBuffer(GL_ARRAY_BUFFER, 0);
This is printing:
Buffer map failed! 1280 (invalid enumerant)
According to the docs, that error gets returned if the target is not one of the available target enums. That said, GL_ARRAY_BUFFER is definitely listed as available.
Am I just doing something wrong here?
In case it helps anyone else, I had multiple issues:
glGetError() returns the first value from a queue of errors. The GL_INVALID_ENUM I thought I was getting was actually from a previous (unrelated) call.
Per this thread, glGenBuffers() allocates a new buffer name, but the buffer is not actually created until it gets bound to a context using glBindBuffer(). I was instead calling glNamedBufferStorage() immediately, which resulted in a GL_INVALID_OPERATION since the buffer didn't actually exist yet. So basically I should always use glCreate*() instead of glGen*() when using DSA.
I believe that glNamedBufferStorage() is for immutable buffers, and glNamedBufferData() is for buffers that you can modify, although I'm not entirely clear on that from the documentation. In any case, I'm using glNamedBufferData() now.
I'm now able to successfully map and write to my buffers after the following setup:
glCreateBuffers(1, &m_vertex_buffer);
glNamedBufferData(m_vertex_buffer, BUFFER_SIZE, NULL, GL_DYNAMIC_DRAW);
void* vertex_buffer_ptr = glMapNamedBuffer(m_vertex_buffer, GL_WRITE_ONLY);
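For completeness, the immutable-storage path also maps fine once the buffer is created up front with glCreateBuffers() and the map bits are requested at creation time; a minimal DSA sketch (reusing the question's BUFFER_SIZE, everything else illustrative):
GLuint vbo = 0;
glCreateBuffers(1, &vbo); // creates the buffer object immediately, no bind required
glNamedBufferStorage(vbo, BUFFER_SIZE, NULL, GL_MAP_READ_BIT | GL_MAP_WRITE_BIT);

void* ptr = glMapNamedBuffer(vbo, GL_WRITE_ONLY);
if (ptr)
{
    // ... write vertex data through ptr ...
    glUnmapNamedBuffer(vbo);
}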

GLSL Shaders run perfect on Intel's integrated GPU but nothing on NVIDIA

I'm using Geometry Shaders for Geometry Amplification.
The code runs perfectly with Intel graphics both in Windows and OS X.
I changed the config to use the dedicated NVIDIA GPU on my Windows machine and... nothing.
This code:
void testError(std::string src) {
    GLenum err = glGetError();
    if (err != GL_NO_ERROR) {
        printf("(%s) Error: %s %d\n", src.c_str(), gluErrorString(err), err);
    }
}
...
printf("glIsProgram: %s\n", glIsProgram(shaderProgram) ? "True" : "false");
glUseProgram(shaderProgram);
testError("GOGO 111");

GLint isLinked = 0;
glGetProgramiv(shaderProgram, GL_LINK_STATUS, (int *)&isLinked);
if (isLinked == GL_FALSE)
{
    GLint maxLength = 0;
    glGetProgramiv(shaderProgram, GL_INFO_LOG_LENGTH, &maxLength);

    // The maxLength includes the NULL character
    std::vector<GLchar> infoLog(maxLength);
    glGetProgramInfoLog(shaderProgram, maxLength, &maxLength, &infoLog[0]);
    // Note: passing the vector object itself to %s is what produces the garbage
    // output below; it should be &infoLog[0] (see the EDIT further down).
    printf("Program Not Linked %d:\n %s\n", maxLength, infoLog);

    // We don't need the program anymore.
    glDeleteProgram(shaderProgram);
    // Use the infoLog as you see fit.
    // In this simple program, we'll just leave.
    return 0;
}
Outputs:
glIsProgram: True
(GOGO 111) Error: invalid operation 1282
Program Not Linked 116:
­Ð
The log also behaves strangely: it prints nothing, even though the reported length is 116.
Thank you.
EDIT
This:
glGetProgramiv(shaderProgram, GL_INFO_LOG_LENGTH, &maxLength);
char* infoLog = new GLchar[maxLength];
glGetProgramInfoLog(shaderProgram, maxLength, NULL, infoLog);
Printed out the result.
Program Not Linked 116:
Geometry info
-------------
(0) : error C6033: Hardware limitation reached, can only emit 128 vertices of this size
Which comes from:
const GLchar* geometryShaderSrc = GLSL(
layout(points) in;
layout(triangle_strip, max_vertices = 256) out;
...
It just seems weird that Intel integrated GPUs have fewer hardware (memory?) limitations than an NVIDIA GPU.
Is there any way to work around this without decreasing the number of vertices?
It looks like you're exceeding the MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS limit.
In the OpenGL 4.4 Spec - Section 11.3.4.5 - page 388
The product of the total number of vertices and the sum of all
components of all active output variables may not exceed the value of MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS. LinkProgram will fail if it determines
that the total component limit would be violated.
i.e. max_vertices cannot exceed MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS / number_of_components.
The minimum requirements are detailed in Table 23.60 - page 585:
MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS 1024
It seems like you have 8 components per vertex, so you can only emit 128 vertices. You must either decrease the number of components or decrease the number of vertices.
Check the value of MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS on each device to make sure.
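For example, you can query the actual limits on each device at runtime and pick max_vertices accordingly; a small sketch:
GLint maxTotalComponents = 0, maxOutputVertices = 0, maxOutputComponents = 0;
glGetIntegerv(GL_MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS, &maxTotalComponents);
glGetIntegerv(GL_MAX_GEOMETRY_OUTPUT_VERTICES, &maxOutputVertices);
glGetIntegerv(GL_MAX_GEOMETRY_OUTPUT_COMPONENTS, &maxOutputComponents);
printf("total output components: %d, max output vertices: %d, components per vertex: %d\n",
       maxTotalComponents, maxOutputVertices, maxOutputComponents);
// The usable max_vertices is bounded both by maxOutputVertices and by
// maxTotalComponents / (components emitted per vertex).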

Why can't I add an int member to my GLSL shader input/output block?

I'm getting an "invalid operation" error when attempting to call glUseProgram against the fragment shader below. The error only occurs when I try to add an int member to the block definition. Note that I am keeping the block definition the same in both the vertex and fragment shaders. I don't even have to access it! Merely adding that field to the vertex and fragment shader copies of the block definition causes the program to fail.
#version 450
...
in VSOutput // and of course "out" in the vertex shader
{
    vec4 color;
    vec4 normal;
    vec2 texCoord;
    //int foo; // uncommenting this line causes "invalid operation"
} vs_output;
I also get the same issue when trying to use free-standing in/out variables of the same type, though in those cases the issue only occurs if I access those variables directly; if I ignore them, I assume the compiler optimizes them away and thus the error doesn't occur. It's almost like I'm only allowed to pass around vectors and matrices...
What am I missing here? I haven't been able to find anything in the documentation that would indicate that this should be an issue.
EDIT: padding it out with float[2] to force the int member onto the next 16-byte boundary did not work either.
EDIT: solved, as per the answer below. Turns out I could have figured this out much more quickly if I'd checked the shader program's info log. Here's my code to do that:
bool checkProgramLinkStatus(GLuint programId)
{
    auto log = logger("Shaders");
    GLint status;
    glGetProgramiv(programId, GL_LINK_STATUS, &status);
    if (status == GL_TRUE)
    {
        log << "Program link successful." << endlog;
        return true;
    }
    return false;
}

bool checkProgramInfoLog(GLuint programId)
{
    auto log = logger("Shaders");
    GLint infoLogLength;
    glGetProgramiv(programId, GL_INFO_LOG_LENGTH, &infoLogLength);
    GLchar* strInfoLog = new GLchar[infoLogLength + 1];
    glGetProgramInfoLog(programId, infoLogLength, NULL, strInfoLog);
    if (infoLogLength == 0)
    {
        log << "No error message was provided" << endlog;
    }
    else
    {
        log << "Program link error: " << std::string(strInfoLog) << endlog;
    }
    delete[] strInfoLog; // free the log buffer
    return false;
}
(As already pointed out in the comments): The GL will never interpolate integer types. To quote the GLSL spec (Version 4.5) section 4.3.4 "input variables":
Fragment shader inputs that are signed or unsigned integers, integer vectors, or any double-precision
floating-point type must be qualified with the interpolation qualifier flat.
This of course also applies to the corresponding outputs in the previous stage.
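So the fix is to qualify the integer member with flat in both the vertex shader's out block and the fragment shader's in block; a sketch based on the question's block:
in VSOutput // and "out" plus the same flat qualifier in the vertex shader
{
    vec4 color;
    vec4 normal;
    vec2 texCoord;
    flat int foo; // integer members must not be interpolated
} vs_output;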

Why is glMapBuffer returning NULL?

I'm not trying to stream or anything, I just want to speed up my file loading code by loading vertex and index data directly into OpenGL's buffer instead of having to put it in an intermediate buffer first. Here's the code that grabs the pointer:
void* VertexArray::beginIndexLoad(GLenum indexFormat, unsigned int indexCount)
{
    if (vao == 0)
        return NULL;

    bindArray();

    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexSize(indexFormat) * indexCount, NULL, GL_STATIC_DRAW);
    iformat = indexFormat;
    icount = indexCount;

    GLenum err = glGetError();
    printf("%i\n", err);

    void* ptr = glMapBuffer(GL_ELEMENT_ARRAY_BUFFER, GL_WRITE_ONLY);

    err = glGetError();
    printf("%i\n", err);

    unbindArray();
    return ptr;
}
Problem is, this returns NULL. What's more, just before this I do something similar with GL_ARRAY_BUFFER and get a perfectly valid pointer. Why does this fail while the other succeeds?
The first glGetError returns 1280 (GL_INVALID_ENUM). The second returns 1285 (GL_OUT_OF_MEMORY). I know it's not actually out of memory, because uploading the exact same data normally via glBufferData works fine.
Maybe I'm just handling vertex arrays wrong?
(ps. I asked this on gamedev stack exchange and got nothing. Re-posting here to try to figure it out)
First and foremost your error checking code is wrong. You must call glGetError in a loop until it returns GL_NO_ERROR.
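Otherwise a leftover error from some earlier call can be mistaken for the error of the call you are actually checking. A small helper along these lines drains the whole queue (the name drainGLErrors is just for illustration):
void drainGLErrors(const char* where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
    {
        fprintf(stderr, "GL error at %s: 0x%x\n", where, err);
    }
}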
Regarding the GL_OUT_OF_MEMORY error code: It can also mean out of address space, which can easily happen if a large contiguous area of virtual address space is requested from the OS, but the process' address space is so much fragmented that no chunk that size is available (even if the total amount of free address space would suffice).
This has become the bane of 32 bit systems. A simple remedy is to use a 64 bit system. If you're stuck with a 32 bit platform you'll have to defragment your address space (which is not trivial).
If I were you I would try the following:
Replace GL_STATIC_DRAW with GL_DYNAMIC_DRAW
Make sure that indexSize(indexFormat) * indexCount produces the size you are expecting
Try using glMapBufferRange instead of glMapBuffer, something along the line of glMapBufferRange(GL_ELEMENT_ARRAY_BUFFER, 0, yourBufferSize, GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
Check that ibo is of type GLuint
EDIT: fwiw, I would get a gDEBugger and set a breakpoint to break when there is an OpenGL error.
I solved the problem. I was passing in indexSize(header.indexFormat) when I should have been passing in header.indexFormat. I feel like an idiot now, and I'm sorry for wasting everyone's time.

Possible bug in glGetUniformLocation with AMD drivers

Using a Radeon 3870HD, I've run into strange behavior with AMD's drivers (updated yesterday to the newest version available).
First of all, I'd like to note that the whole code runs without problems on an NVIDIA GeForce 540M, and that glGetUniformLocation isn't failing all the time.
My problem is that glGetUniformLocation returns strange values for one of the shader programs used in my app, while the other shader program doesn't have this flaw. I switch between these shaders every frame, so I'm sure the issue isn't temporary and is tied to that particular shader. By strange values I mean something like 17-26, while I have only 9 uniforms present. My shader interface then queries the GLSL type of the variable at the location it just obtained, and as a side effect also queries its name. For all of those 17-26 locations, the returned name was empty and so was the type. I then debugged into the interface (a separate library) and changed those values to what I'd expect: 0-8. With those indices the correct variable names were returned, and the types were correct as well.
My question is: how can code that always works on NVIDIA, and works with the other shader on the Radeon, fail for another shader that is treated in exactly the same way?
Here is the related part of the interface:
// this fails to return a correct value
m_location = glGetUniformLocation(m_program.getGlID(), m_name.c_str());
printGLError();
if (m_location == -1) {
    std::cerr << "ERROR: Uniform " << m_name << " doesn't exist in program" << std::endl;
    return FAILURE;
}

GLsizei charSize = m_name.size() + 1, size = 0, length = 0;
GLenum type = 0;
GLchar* name = new GLchar[charSize];
name[charSize - 1] = '\0';
glGetActiveUniform(m_program.getGlID(), m_location, charSize, &length, &size, &type, name);
delete[] name; name = 0; // array delete to match new[]
if (!TypeResolver::resolve(type, m_type))
    return FAILURE;

m_prepared = true;
m_applied = false;
The index you pass to glGetActiveUniform is not supposed to be a uniform location. Uniform locations are only used with glUniform calls; nothing else.
The index you pass to glGetActiveUniform is just an index between 0 and the value returned by glGetProgramiv(GL_ACTIVE_UNIFORMS). It is used to ask which uniforms exist and to inspect the properties of those uniforms.
Your code works on NVIDIA only because you got lucky. The OpenGL specification doesn't guarantee that the order of uniform locations is the same as the order of active uniform indices. AMD's drivers don't work that way, so your code doesn't work.
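In other words: enumerate uniforms by their active-uniform index, and look up locations separately by name. A sketch, assuming program holds the linked program object:
GLint uniformCount = 0;
glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &uniformCount);
for (GLint i = 0; i < uniformCount; ++i)
{
    GLchar name[256];
    GLsizei length = 0;
    GLint size = 0;
    GLenum type = 0;
    glGetActiveUniform(program, (GLuint)i, sizeof(name), &length, &size, &type, name);
    GLint location = glGetUniformLocation(program, name); // only this value is used with glUniform*
    printf("uniform #%d: %s (type 0x%x, size %d) -> location %d\n", i, name, type, size, location);
}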