http://docs.gl/gl4/glCreateShaderProgram
Pseudo code:
const GLuint shader = glCreateShader(type);
if (shader) {
    glShaderSource(shader, count, strings, NULL);
    glCompileShader(shader);
    const GLuint program = glCreateProgram();
    if (program) {
        GLint compiled = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
        glProgramParameteri(program, GL_PROGRAM_SEPARABLE, GL_TRUE);
        if (compiled) {
            glAttachShader(program, shader);
            glLinkProgram(program);
            glDetachShader(program, shader);
        }
        /* append-shader-info-log-to-program-info-log */
    }
    glDeleteShader(shader);
    return program;
} else {
    return 0;
}
I didn't know that it is possible to compile several shaders at once. The problem that I have is that the documentation doesn't tell me how to call this.
What type should I specify if I want to compile a vertex + fragment shader? Initially I tried glCreateShaderProgram(GL_VERTEX_SHADER | GL_FRAGMENT_SHADER, ...); but then it complained about two main functions. OpenGL probably interpreted the two shaders as one shader.
glCreateShaderProgramv() only compiles and links a single shader. The last argument is an array of strings, just like the corresponding argument of glShaderSource(), which likewise specifies the source of a single shader. Accepting multiple strings is merely a convenience in both calls: if you already have your shader code as an array of strings (say, one per line), you can pass it in directly, without having to concatenate the pieces first.
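As an illustration (the source strings here are made up), both of the following calls hand the implementation exactly the same fragment shader source:

```cpp
// Source split across several strings, e.g. one per line; the
// implementation concatenates them in order.
const char* lines[] = {
    "#version 400 core\n",
    "out vec4 color;\n",
    "void main() { color = vec4(1.0); }\n"
};
GLuint progA = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 3, lines);

// Equivalent: the same three lines pre-concatenated into one string.
const char* whole =
    "#version 400 core\n"
    "out vec4 color;\n"
    "void main() { color = vec4(1.0); }\n";
GLuint progB = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &whole);
```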
glCreateShaderProgramv() is intended to be used in conjunction with program pipeline objects. This mechanism lets you combine shaders from several programs without having to relink them. A typical call sequence looks like this:
GLuint vertProgId = glCreateShaderProgramv(GL_VERTEX_SHADER, 1, &vertSrc);
GLuint fragProgId = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fragSrc);
GLuint pipelineId = 0;
glGenProgramPipelines(1, &pipelineId);
glUseProgramStages(pipelineId, GL_VERTEX_SHADER_BIT, vertProgId);
glUseProgramStages(pipelineId, GL_FRAGMENT_SHADER_BIT, fragProgId);
glBindProgramPipeline(pipelineId);
From the linked documentation:
glCreateShaderProgram creates a program object containing compiled and linked shaders for a single stage specified by type.
[emphasis mine]
What it means is that you cannot use it with a list of shaders targeting different parts of the rendering pipeline.
However, you could adapt the sample implementation to cover that need. It would require changing the function signature, replacing the first parameter with an array of count GLenum values, and turning the body into a loop over each pair (types[i], strings[i]): compile a shader of the given type from the given source, then attach it to the generated program.
Once each shader has been compiled, the code would link the whole set into one program, release all the shader ids, and return the program id (actual implementation left as an exercise).
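A minimal sketch of that exercise, modeled on the spec's pseudocode (function and parameter names are invented here, and all error reporting is omitted):

```cpp
// Hypothetical multi-stage variant: compile `count` shaders of possibly
// different stages and link them into one program.
GLuint CreateMultiStageProgram(GLsizei count, const GLenum* types,
                               const GLchar* const* strings)
{
    const GLuint program = glCreateProgram();
    if (!program)
        return 0;

    for (GLsizei i = 0; i < count; ++i) {
        const GLuint shader = glCreateShader(types[i]);
        if (!shader)
            continue;
        glShaderSource(shader, 1, &strings[i], NULL);
        glCompileShader(shader);

        GLint compiled = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
        if (compiled)
            glAttachShader(program, shader);

        // Flag for deletion; actually freed once detached below.
        glDeleteShader(shader);
    }

    glLinkProgram(program);

    // Detach everything so the flagged shader objects are released.
    GLuint attached[16];
    GLsizei n = 0;
    glGetAttachedShaders(program, 16, &n, attached);
    for (GLsizei i = 0; i < n; ++i)
        glDetachShader(program, attached[i]);

    return program;
}
```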
I don't think it's possible to create multiple shaders at once. glCreateShaderProgram just does everything needed to create a shader program for you (create, compile, link).
If you look at the documentation at opengl.org, you will see that you can only create one shader at a time. You can only specify one of the following GLenums: GL_COMPUTE_SHADER, GL_VERTEX_SHADER, GL_TESS_CONTROL_SHADER, GL_TESS_EVALUATION_SHADER, GL_GEOMETRY_SHADER, or GL_FRAGMENT_SHADER.
I think your suspicion is right: OpenGL interpreted the two sources as a single shader, and that is not possible.
You can do this:
glCreateShader(GL_VERTEX_SHADER);
glCreateShader(GL_FRAGMENT_SHADER);
Look at the Description field of the source I have provided. It will explain what further to do with the shader objects.
I am not sure why you can pass an array of strings; it could be useful if you keep multiple small snippets of shader code in different source files.
Source: https://www.opengl.org/sdk/docs/man/html/glCreateShader.xhtml
I'm trying to create a single compute shader program, computeProgram, and attach two shader objects to it. Here is my code:
unsigned int computeProgram = glCreateProgram();
glAttachShader(computeProgram, MyFirstComputeShaderSourceCode);
glAttachShader(computeProgram, MySecondComputeShaderSourceCode);
glLinkProgram(computeProgram);
glGetProgramiv(computeProgram, GL_LINK_STATUS, &success);
if (!success) {
glGetProgramInfoLog(computeProgram, 512, NULL, infoLog);
std::cout << "ERROR::SHADER::COMPUTE_PROGRAM::LINKING_FAILED\n" << infoLog << std::endl;
exit(1);
}
I get this type of linking error information:
ERROR::SHADER::COMPUTE_PROGRAM::LINKING_FAILED
ERROR: Duplicate function definitions for "main"; prototype: "main()" found.
I do have a main function in both shader sources, and I understand why this won't work, because only one main function is expected per program. But here comes my question: if I link a vertex shader source and a fragment shader source into a single program, say renderProgram, there are also two main functions, one in the vertex shader and one in the fragment shader. Yet linking those two somehow works fine.
Why does this difference happen? And if I want to use these two compute shaders, am I supposed to create two compute programs in order to avoid duplication of the main function?
Any help is appreciated!!
Why does this difference happen?
When you link a vertex shader and a fragment shader to the same shader program, then those two (as their names imply) are in different shader stages. Every shader stage expects exactly one definition of the main() function.
When you attach two shaders that are in the same shader stage, such as your two compute shader objects, then those get linked into the same shader stage (compute). And that does not work.
And if I want to use these two compute shaders, am I supposed to create two compute programs in order to avoid duplication of the main function?
Yes. When you have two compute shaders that each define their own functionality in terms of a main() function, creating two shader programs, each with one of the shader objects linked to it, will work. This is especially natural when your two shaders have completely different interfaces with the host, such as SSBOs or samplers/images.
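For illustration, the split might look like this (a sketch; the dispatch sizes, barrier bit, and source variable names are placeholders to adjust to your actual resources):

```cpp
// One program per compute shader, each with its own main().
GLuint MakeComputeProgram(const char* src)
{
    GLuint shader = glCreateShader(GL_COMPUTE_SHADER);
    glShaderSource(shader, 1, &src, NULL);
    glCompileShader(shader);

    GLuint program = glCreateProgram();
    glAttachShader(program, shader);
    glLinkProgram(program);
    glDeleteShader(shader);
    return program;
}

GLuint passA = MakeComputeProgram(MyFirstComputeShaderSource);
GLuint passB = MakeComputeProgram(MySecondComputeShaderSource);

glUseProgram(passA);
glDispatchCompute(64, 1, 1);
// Needed only if passB reads buffers that passA wrote:
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
glUseProgram(passB);
glDispatchCompute(64, 1, 1);
```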
I have been spending a lot of time tweaking my shaders, and I want to quickly reload a shader without recompiling my program. What is the official way to hot-reload shaders in OpenGL 4.1?
Here is my current (non-functional) method:
Create a program pipeline, create one separable program per shader (right now it's just one vertex and one fragment) and use it in the pipeline. Load uniforms into each program that needs it (is there a way to mimic the old style of one uniform shared across shaders?)
glGenProgramPipelines(1, &programPipeline);
glBindProgramPipeline(programPipeline); //is this necessary?
auto handle = glCreateShader(type);
const char* src = source.c_str();
glShaderSource(handle, 1, &src, NULL);
glCompileShader(handle);
program[idx] = glCreateProgram();
glProgramParameteri(program[idx], GL_PROGRAM_SEPARABLE, GL_TRUE); //Xcode auto-corrected this to the __glew-prefixed symbol for me
glAttachShader(program[idx], handle);
glLinkProgram(program[idx]);
glDetachShader(program[idx], handle);
glUseProgramStages(programPipeline, shaderStage, program[idx]);
The part I am stuck at is what to do with attributes. Enabling a vertex attrib array fails with GL_INVALID_OPERATION, while with normal non-separable programs it works fine. I have generated and bound a VAO and fetched the attrib location before trying to enable the vertex attrib array. The online resources I've found only mention uniforms in conjunction with separable programs, nothing regarding attributes. Can they even be used with separable programs? What would be the alternative?
What is the easiest way to attach more shaders (sources) to a GLSL program? Normally, when a vertex shader and a fragment shader are attached to a program, one does something like this:
vertex = glCreateShader(GL_VERTEX_SHADER);
fragment = glCreateShader(GL_FRAGMENT_SHADER);
programa = glCreateProgram();
char *vsFuente = LEE_SHADER(rutaVert);
char *fsFuente = LEE_SHADER(rutaFrag);
if(vsFuente == NULL)return GL_FALSE;
if(fsFuente == NULL)return GL_FALSE;
const char *vs = vsFuente;
const char *fs = fsFuente;
glShaderSource(vertex ,1,&vs,NULL);
glShaderSource(fragment,1,&fs,NULL);
delete [] vsFuente;
delete [] fsFuente;
glCompileShader(vertex);
glCompileShader(fragment);
glAttachShader(programa,vertex);
glAttachShader(programa,fragment);
glLinkProgram(programa);
Well, if I want to attach another pair of shaders (vertex and fragment) to the same program, how do I do it? I am using OpenGL 2.0.
Exactly the same way you added the shaders you already have in your code. You create more shaders, call glCompileShader() and glAttachShader() for them, and then call glLinkProgram().
You need to be aware of what this is doing, though. While it is perfectly legal to add multiple shaders of the same type to a program, only one of the shaders of each type can have a main() function.
The typical setup I have seen when multiple shaders of the same type are attached to a program (which is not very common) is that some of the shaders contain collections of "utility" functions, and then there's one shader of each type that contains the main() function and implements the overall shader logic. This setup can be useful for complex shaders, since the shaders with the utility functions can be compiled once and then attached to multiple different programs without having to compile them each time.
I'm not sure if this is what you're after. If you really just want to use different sets of shaders, the much more common setup is that you create a program for each pair of vertex/fragment shaders, and then switch between them during your rendering by calling glUseProgram() with the program you need for each part of the rendering.
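A sketch of the utility-shader setup described above (the sources here are illustrative only): the utility shader is compiled once and can be attached to any number of programs, each of which supplies its own main().

```cpp
// A "utility" fragment shader: function definitions, no main().
const char* utilSrc =
    "#version 110\n"
    "vec3 gamma(vec3 c) { return pow(c, vec3(1.0 / 2.2)); }\n";

// A per-program fragment shader holding the one main(); it declares a
// prototype and relies on the linker to find the definition in utilSrc.
const char* mainSrc =
    "#version 110\n"
    "vec3 gamma(vec3 c);\n"
    "void main() { gl_FragColor = vec4(gamma(vec3(0.5)), 1.0); }\n";

GLuint util = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(util, 1, &utilSrc, NULL);
glCompileShader(util); // compiled once, reusable across programs

GLuint frag = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(frag, 1, &mainSrc, NULL);
glCompileShader(frag);

GLuint prog = glCreateProgram();
glAttachShader(prog, util);  // utility functions
glAttachShader(prog, frag);  // the one main()
glLinkProgram(prog);
```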
Two questions:
I am rendering elements in a large VBO with different shaders. If I am correct, GLSL 1.20 is the most current version on OS X, and it does not support explicit attribute locations, which I assume means your attributes end up wherever the compiler decides. Is there any way around this? For instance, since my VBO is laid out with interleaved (x, y, z, nx, ny, nz, texU, texV) data, I need multiple shaders to access these attributes at the same locations every time. I am finding, however, that the compiler gives them different locations, leading to, say, the position attribute reading the normals, and so on. The locations need to be consistent with my VBO's attribute layout.
I just got my first GLSL rendering working, and it looks exactly as if I forgot to enable the depth test: various polygons are rendered on top of one another. I enabled depth testing with:
glEnable(GL_DEPTH_TEST);
And the problem persists. Is there a different way to enable it with shaders? I thought the depth buffer took care of this?
Problem 2 solved. It turned out to be an SFML issue: I needed to specify the OpenGL context settings when creating the window.
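For reference, the fix might look like this (assuming SFML 2.x; window size and title are placeholders):

```cpp
// Request a depth buffer explicitly; the default ContextSettings may
// create a context with 0 depth bits, which makes GL_DEPTH_TEST a no-op.
sf::ContextSettings settings;
settings.depthBits = 24;

sf::RenderWindow window(sf::VideoMode(800, 600), "OpenGL",
                        sf::Style::Default, settings);
```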
Attribute locations are specified in one of 3 places, in order from highest priority to lowest:
Through the use of the GLSL 3.30 (or better) or ARB_explicit_attrib_location extension syntax layout(location = #), where # is the attribute index. So if I have an input called position, I would give it index 3 like this:
layout(location = 3) in vec4 position;
This is my preferred method of handling this. Explicit_attrib_location is available on pretty much any hardware that is still being supported (that isn't Intel).
Explicit association via glBindAttribLocation. You call this function before linking the program. To do the above, we would do this:
GLuint program = glCreateProgram();
glAttachShader(program, some_shader);
glBindAttribLocation(program, 3, "position");
glLinkProgram(program);
You can set multiple attributes. Indeed, you can set multiple attribute names to the same index. The idea with that is to be able to just set a bunch of mappings automatically and let OpenGL figure out which one works with the actual shader code. So you could have "position" and "axis" map to index 3, and as long as you don't put a shader into this system that has both of those inputs, you'll be fine.
Note that you can also set attributes that don't exist. You could give "normal" an attribute that isn't specified in the shader. That is fine; the linker will only care about attributes that actually exist. So you can establish a complex convention for this sort of thing, and just run every program on it before linking:
void AttribConvention(GLuint prog)
{
    glBindAttribLocation(prog, 0, "position");
    glBindAttribLocation(prog, 1, "color");
    glBindAttribLocation(prog, 2, "normal");
    glBindAttribLocation(prog, 3, "tangent");
    glBindAttribLocation(prog, 4, "bitangent");
    glBindAttribLocation(prog, 5, "texCoord");
}
GLuint program = glCreateProgram();
glAttachShader(program, some_shader);
AttribConvention(program);
glLinkProgram(program);
Even if a particular shader doesn't have all of these attributes, it will still work.
Let OpenGL assign it. If you don't assign an attribute index to an attribute in one of the other ways, the GLSL linker will assign it for you. You can fetch the attribute post-linking with glGetAttribLocation.
I really don't advise this, because OpenGL will assign the indices arbitrarily. So every shader that uses an attribute named position may have the position in a different index. I don't think it's a good idea. So if you can't explicitly set it in the shader, then at least explicitly set it in your OpenGL code before linking. That way, you can have a convention about what attribute index 0 means, what index 1 means, etc.
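If you do let the linker assign the indices, the post-link query for the interleaved (x, y, z, nx, ny, nz, texU, texV) layout from the question might look like this (a sketch; the attribute names are assumptions about the shader):

```cpp
// Stride of one interleaved vertex: 3 pos + 3 normal + 2 texcoord floats.
const GLsizei stride = 8 * sizeof(GLfloat);

glLinkProgram(program);
GLint posLoc  = glGetAttribLocation(program, "position");
GLint normLoc = glGetAttribLocation(program, "normal");
GLint texLoc  = glGetAttribLocation(program, "texCoord");

if (posLoc >= 0) {
    glEnableVertexAttribArray(posLoc);
    glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, stride,
                          (const GLvoid*)0);
}
if (normLoc >= 0) {
    glEnableVertexAttribArray(normLoc);
    glVertexAttribPointer(normLoc, 3, GL_FLOAT, GL_FALSE, stride,
                          (const GLvoid*)(3 * sizeof(GLfloat)));
}
if (texLoc >= 0) {
    glEnableVertexAttribArray(texLoc);
    glVertexAttribPointer(texLoc, 2, GL_FLOAT, GL_FALSE, stride,
                          (const GLvoid*)(6 * sizeof(GLfloat)));
}
```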
On OpenGL 3.3+ you have VAOs; when you use them, you bind VBOs to them and can define attributes in a custom order: http://www.opengl.org/sdk/docs/man3/xhtml/glEnableVertexAttribArray.xml (remember that attributes must be contiguous).
A nice/easy implementation of this can be found on XNA : VertexDeclaration, you might want to see all the Vertex* types as well.
Some hint on getting v3 to work with SFML :
http://en.sfml-dev.org/forums/index.php?topic=6314.0
An example on how to create and use VAOs : http://www.opentk.com/files/issues/HelloGL3.cs
(It's C# but I guess you'll get it)
Update: on v2.1 you also have glEnableVertexAttribArray (http://www.opengl.org/sdk/docs/man/xhtml/glEnableVertexAttribArray.xml), though you can't create VAOs. Almost the same functionality can be achieved, but you will have to set up the attribute bindings every time, since there is no VAO to store them in.
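A minimal VAO setup on 3.0+, for reference (the vertex data and attribute index are made up):

```cpp
// A VAO records the buffer binding and attribute layout once;
// binding the VAO later restores all of it in a single call.
GLfloat verts[] = { 0.0f, 0.0f, 0.0f,
                    1.0f, 0.0f, 0.0f,
                    0.0f, 1.0f, 0.0f };

GLuint vao = 0, vbo = 0;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

glEnableVertexAttribArray(0);  // attribute index 0 = position
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)0);

glBindVertexArray(0);
// At draw time: glBindVertexArray(vao); glDrawArrays(GL_TRIANGLES, 0, 3);
```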
I'm switching from HLSL to GLSL
When defining the vertex attributes of a vertex buffer, one has to call
glVertexAttribPointer( GLuint index,
GLint size,
GLenum type,
GLboolean normalized,
GLsizei stride,
const GLvoid * pointer);
and pass an index. But how do I specify which index maps to which semantic in the shader?
For example, gl_Normal: how can I specify that, when using gl_Normal in a vertex shader, I want this to be the generic vertex attribute with index 1?
There is no such thing as a "semantic" in GLSL. There are just attribute indices and vertex shader inputs.
There are two kinds of vertex shader inputs. The kind that were removed in 3.1 (the ones that start with "gl_") and the user-defined kind. The removed kind cannot be set with glVertexAttribPointer; each of these variables had its own special function. gl_Normal had glNormalPointer, gl_Color had glColorPointer, etc. But those functions aren't around in core OpenGL anymore.
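In modern GLSL you would instead declare a user-defined input and pin it to index 1 yourself, taking over the role gl_Normal used to play. A sketch (shader source, stride, and offset are placeholders):

```cpp
// GLSL 3.30 vertex shader with user-defined inputs at explicit indices;
// "normal" at index 1 replaces the removed gl_Normal built-in.
const char* vsSrc =
    "#version 330 core\n"
    "layout(location = 0) in vec3 position;\n"
    "layout(location = 1) in vec3 normal;\n"
    "out vec3 vNormal;\n"
    "void main() { vNormal = normal; gl_Position = vec4(position, 1.0); }\n";

// Application side: index 1 is what you hand to the attribute calls.
// (stride and normalOffset depend on your vertex layout.)
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, normalOffset);
```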
User-defined vertex shader inputs are associated with an attribute index. Each named input is assigned an index in one of the following ways, in order from most overriding to the default:
Through the use of the GLSL 3.30 or ARB_explicit_attrib_location extension syntax layout(location = #), where # is the attribute index. So if I have an input called position, I would give it index 3 like this:
layout(location = 3) in vec4 position;
This is my preferred method of handling this. Explicit_attrib_location is available on pretty much any hardware that is still being supported (that isn't Intel).
Explicit association via glBindAttribLocation. You call this function before linking the program. To do the above, we would do this:
GLuint program = glCreateProgram();
glAttachShader(program, some_shader);
glBindAttribLocation(program, 3, "position");
glLinkProgram(program);
You can set multiple attributes. Indeed, you can set multiple attribute names to the same index. The idea with that is to be able to just set a bunch of mappings automatically and let OpenGL figure out which one works with the actual shader code. So you could have "position" and "axis" map to index 3, and as long as you don't put a shader into this system that has both of those inputs, you'll be fine.
Let OpenGL assign it. If you don't assign an attribute index to an attribute in one of the other ways, the GLSL linker will assign it for you. You can fetch the attribute post-linking with glGetAttribLocation.
I really don't advise this, because OpenGL will assign the indices arbitrarily. So every shader that uses an attribute named position may have the position in a different index. I don't think it's a good idea. So if you can't explicitly set it in the shader, then at least explicitly set it in your OpenGL code before linking. That way, you can have a convention about what attribute index 0 means, what index 1 means, etc.