EDIT: clarification: I'm not checking whether the compile failed, I'm just looking for logs. I check COMPILE_STATUS later in the code (see the GLint isCompiled that is set but not otherwise used in this snippet).
Why does this return 1 when it's supposed to be 0?
glGetShaderiv(compiled, GL_INFO_LOG_LENGTH, &infoLogLength);
GLuint compiled = glCreateShader(shader->Type);
GLchar const *shader_source = code.c_str();
GLint const shader_length = code.size();
glCheck(glShaderSource(compiled, 1, &shader_source, &shader_length));
glCheck(glCompileShader(compiled));
GLint isCompiled = 0;
char msg[512];
// Check if everything went ok
glGetShaderiv(compiled, GL_COMPILE_STATUS, &isCompiled);
// Getting information about the compile
GLsizei infoLogLength = 0;
glGetShaderiv(compiled, GL_INFO_LOG_LENGTH, &infoLogLength);
if (infoLogLength > 0)
{
glGetShaderInfoLog(compiled, 512, &infoLogLength, msg);
printf("Shader [%s:%s] error when compiling[%d]: \n%s", shader->Name.c_str(), GetShaderTypeAsString(shader->Type).c_str(), infoLogLength, msg);
}
Output:
Shader [dust_particle_VS.glsl:Vertex Sader] error when compiling[1]:
The shader seems to be working fine and the game plays without problems.
I'm just wondering whether the log contains some warning that would be useful to know about.
Implementations are allowed to give you an info log even if the shader successfully compiled. Or more to the point, the info log is not required to be empty upon a successful shader compilation. From the specification:
A string that contains information about the last compilation attempt on a shader object, last link or validation attempt on a program object, or last validation attempt on a program pipeline object, called the info log, can be obtained...
Note that it does not say "last failed compilation attempt" or anything of that nature. So it doesn't matter if the info log length is 0, 1, or anything else; the info log's length cannot tell you if compilation succeeded or failed. Indeed, some implementations have been known to give you warnings in an info log even on successful compilations.
Checking the compile status is how you determine if compilation succeeded or not, not the info log.
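To make that concrete, here is a minimal sketch (not the asker's exact code; it assumes a valid GL context and the compiled shader object from the question) that keeps the two concerns separate: the status decides success, the log is only printed as diagnostics.
GLint isCompiled = GL_FALSE;
glGetShaderiv(compiled, GL_COMPILE_STATUS, &isCompiled);
GLsizei infoLogLength = 0;
glGetShaderiv(compiled, GL_INFO_LOG_LENGTH, &infoLogLength);
// Some drivers report 1 here for an "empty" log (just the null terminator),
// so only treat lengths above 1 as something worth printing.
if (infoLogLength > 1)
{
    std::vector<char> msg(infoLogLength);
    glGetShaderInfoLog(compiled, infoLogLength, nullptr, msg.data());
    printf("Shader log:\n%s\n", msg.data());
}
if (isCompiled == GL_FALSE)
{
    // Compilation actually failed; the log above should say why.
}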
The OpenGL 4.6 specification says in section 7.13:
void GetShaderiv( uint shader, enum pname, int *params );
...
If pname is INFO_LOG_LENGTH, the length of the info log, including a null terminator, is returned. If there is an empty info log, zero is returned.
And later:
void GetShaderInfoLog( uint shader, sizei bufSize, sizei *length, char *infoLog );
...
These commands return an info log string for the corresponding type of object in infoLog. This string will be null-terminated even if the INFO_LOG_LENGTH query returns zero.
As I understand these sentences, the INFO_LOG_LENGTH query must return zero for an empty info log string, and if you retrieve that string it will contain at least a null character.
My guess is that the driver you use counts (in some cases) that null terminator even for an empty log.
In other words, it seems to be a driver bug.
Not a big one, because, as @NicolBolas said, you shouldn't use the info log as a failure check, only as extra information in case of failure; and in a real failure the driver will most likely produce a string longer than 1 character.
Related
I am currently attempting to write a small program in C++ to learn a specific feature of OpenGL (transform feedback). I have a small function in this program that is supposed to load a shader and provide a GLuint to the user of that function.
Function in header:
namespace glutils{
GLuint loadShader(const std::string &path, GLenum type);
}
Function implementation:
GLuint glutils::loadShader(const std::string &path, GLenum type) {
GLuint shader = glCreateShader(type);
std::string shaderSource = utils::loadFile(path);
const char *shaderSourceCString = shaderSource.c_str();
glShaderSource(shader, 1, &shaderSourceCString, nullptr);
glCompileShader(shader);
GLint success;
glGetShaderiv(shader, GL_COMPILE_STATUS, &success);
if(!success){
GLchar infoLog[1024];
glGetShaderInfoLog(shader, 1024, nullptr, infoLog);
throw std::runtime_error("Failed to compile shader " + path + ":\n" + infoLog);
}
return shader;
}
Function use:
GLuint shader = glutils::loadShader("sqrt.glsl", GL_VERTEX_SHADER);
When I first ran my program, it crashed with a segmentation fault. I used my debugger (the LLDB wrapper in CLion) to determine where: it is the call to glCreateShader on the second line of my function. The debugger also told me that the parameter path is a string full of garbage, and that the parameter type contains a value different from that of GL_VERTEX_SHADER.
When looking at the function, I noticed that I had initially forgotten to add a return at the end, which is undefined behavior and thus could cause this. Correcting that error, however, did not fix the garbage arguments and the associated segmentation fault. I tried searching for similar issues with my preferred search engine, to no avail.
What is the issue? How can I fix it?
The following GLSL fragment shader compiles and works as expected:
#version 330 core
out vec3 color;
in float U;
in vec4 vertexNormal_worldSpace;
uniform sampler1D TextureSampler;
uniform vec4 LightPos;
void main()
{
float cosT = dot(normalize(vertexNormal_worldSpace.xyz),normalize(LightPos.xyz));
color = cosT * texture(TextureSampler,U).rgb;
}
However, when I change line 9 to clamp the value of "cosT" between 0 and 1:
float cosT = clamp(dot(normalize(vertexNormal_worldSpace.xyz),normalize(LightPos.xyz)),0.0,1.0);
I get the errors:
0(1) : error C0000: syntax error, unexpected integer constant, expecting "::" at token "<int-const>"
0(10) : error C7532: global function texture requires "#version 130" or later
This appears to say the error is on the first line, but nothing has changed there at all. Furthermore, the second error suggests there is an issue with the GLSL version I am using; however, #version 330 core is a later version than the #version 130 the error message asks for.
EDIT:
This is my code for loading in shaders:
static GLuint LoadShaders(const char* vertex_file_path, const char* frag_file_path){
GLuint VertID = glCreateShader(GL_VERTEX_SHADER);
GLuint FragID = glCreateShader(GL_FRAGMENT_SHADER);
char const* VertPointer = ReadShaderFile(vertex_file_path);
char const* FragPointer = ReadShaderFile(frag_file_path);
glShaderSource(VertID,1,&VertPointer,NULL);
glCompileShader(VertID);
GLint Result = GL_FALSE;
int InfoLogLength;
glGetShaderiv(VertID, GL_COMPILE_STATUS, &Result);
glGetShaderiv(VertID, GL_INFO_LOG_LENGTH, &InfoLogLength);
if ( InfoLogLength > 0 ){
std::vector<char> VertexShaderErrorMessage(InfoLogLength+1);
glGetShaderInfoLog(VertID, InfoLogLength, NULL, &VertexShaderErrorMessage[0]);
printf("%s\n", &VertexShaderErrorMessage[0]);
}
glShaderSource(FragID,1,&FragPointer,NULL);
glCompileShader(FragID);
glGetShaderiv(FragID, GL_COMPILE_STATUS, &Result);
glGetShaderiv(FragID, GL_INFO_LOG_LENGTH, &InfoLogLength);
if ( InfoLogLength > 0 ){
std::vector<char> FragmentShaderErrorMessage(InfoLogLength+1);
glGetShaderInfoLog(FragID, InfoLogLength, NULL, &FragmentShaderErrorMessage[0]);
printf("%s\n", &FragmentShaderErrorMessage[0]);
}
GLuint ProgramID = glCreateProgram();
glAttachShader(ProgramID,VertID);
glAttachShader(ProgramID,FragID);
glLinkProgram(ProgramID);
glGetProgramiv(ProgramID, GL_LINK_STATUS, &Result);
glGetProgramiv(ProgramID, GL_INFO_LOG_LENGTH, &InfoLogLength);
if ( InfoLogLength > 0 ){
std::vector<char> ProgramErrorMessage(InfoLogLength+1);
glGetProgramInfoLog(ProgramID, InfoLogLength, NULL, &ProgramErrorMessage[0]);
printf("%s\n", &ProgramErrorMessage[0]);
}
return ProgramID;
}
static char const* ReadShaderFile(const char* path){
std::string ShaderCode;
std::ifstream ShaderStream(path,std::ios::in);
if(ShaderStream.is_open()){
std::string line = "";
while (std::getline(ShaderStream,line)){
ShaderCode +="\n"+line;
}
ShaderStream.close();
return ShaderCode.c_str();
}else{
return 0;}
}
This is basically straight from the tutorial I am following (link); the only change is moving the file reading into the ReadShaderFile function.
EDIT 2:
My OpenGL context is created with the following version:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR,3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR,3);
The problem is indeed in the shader loader. With this statement
ShaderCode +="\n"+line;
you are adding a newline character (\n) before each line, which moves everything one line down compared to your input.
Since the version statement has to be in the first line of a shader and you move it to the second, the statement seems to be ignored by your driver. My NVIDIA driver, for example, states:
error C0204: version directive must be first statement and may not be repeated
A simple fix would be to add the newline character after each line instead of before it, but I would strongly encourage you not to read whole files line by line, since this gives terrible performance. For example, the ShaderCode variable is resized for each line, which means a memory allocation and a copy of the whole string so far. Have a look at this question for how to read complete files.
Edit:
Another problem is that you're returning the c_str() pointer of a local variable. When the ReadShaderFile method ends, the std::string ShaderCode variable goes out of scope (and thus frees its contents), so the returned pointer points to an invalid memory address.
Solution: return the std::string object instead of the const char pointer to its contents.
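A minimal sketch of what both fixes could look like with the names from the question (the caller keeps the returned string alive while it uses the pointer):
#include <fstream>
#include <sstream>
#include <string>
static std::string ReadShaderFile(const char* path){
    std::ifstream ShaderStream(path, std::ios::in);
    if(!ShaderStream.is_open())
        return std::string();          // or signal the error to the caller
    std::stringstream buffer;
    buffer << ShaderStream.rdbuf();    // read the whole file in one go
    return buffer.str();               // return by value: no dangling pointer
}
// In LoadShaders:
std::string VertexShaderCode = ReadShaderFile(vertex_file_path);
char const* VertPointer = VertexShaderCode.c_str(); // valid while VertexShaderCode lives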
The problem is that shaders (pretty simple ones, as I'm learning OpenGL) fail to compile in a seemingly random manner (and give random error messages).
The same shaders, however, compile after about 3 or 4 tries.
Here is the code:
Shader::Shader(GLenum Type,std::string filename)
{
shader_type = Type;
std::ifstream ifs(filename);
if(!ifs)
throw(std::runtime_error("File:"+filename+" not opened."));
std::ostringstream stream;
stream<<ifs.rdbuf();
const GLchar* data = stream.str().c_str();
handle = glCreateShader(shader_type);
glShaderSource(handle,1,static_cast<const GLchar**>(&data),0);
glCompileShader(handle);
int status;
glGetShaderiv(handle,GL_COMPILE_STATUS,&status);
if(status == GL_FALSE)
{
int loglength;
glGetShaderiv(handle,GL_INFO_LOG_LENGTH,&loglength);
auto data = new char[loglength];
glGetShaderInfoLog(handle,loglength,&loglength,data);
std::string strdata(data);
delete [] data;
throw(std::runtime_error(strdata));
}
}
Note that the shaders aren't missing newlines at the end, have an extra space after the last semicolon, and use tabs instead of spaces (as suggested in various old posts around the internet!).
Here are two error messages produced by the same vertex shader (shown below), not at the same time:
#version 330
in vec2 Position;
uniform mat4 transform;
void main()
{
gl_Position = transform*vec4(Position,0.0f,1.0f);
}
Errors:
0(1) : error C0000: syntax error, unexpected $undefined at token "<undefined>"
0(6) : error C0000: syntax error, unexpected '!', expecting ',' or ')' at token "!"
And sometimes it just works!
Is it a problem with my drivers?
(I'm using the recent 302.x stable NVIDIA binary drivers on Arch Linux 64-bit, with an aged 9600 GSO card.)
P.S.: The code works as expected whenever the shader compiles correctly, so I think it should be correct.
I'll be happy to post a working(sometimes !) example as a zip file if the problem can't be found from this, and someone wants to take a look.
const GLchar* data = stream.str().c_str();
This is bad. If you want the string's data, you need to store it. str will return a copy of the buffer, which you then get a pointer to with c_str. Once that temporary is destroyed (at the end of this line), that pointer will point to memory you no longer have access to.
The correct code is this:
std::string dataString = stream.str();
const GLchar *data = dataString.c_str();
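And this is how it might slot into the constructor from the question; note that glShaderSource copies the source into the shader object, so dataString only needs to outlive that one call:
std::string dataString = stream.str();
const GLchar* data = dataString.c_str();
handle = glCreateShader(shader_type);
glShaderSource(handle, 1, &data, 0);   // GL copies the string here
glCompileShader(handle);               // dataString may safely go out of scope later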
I'm working on a project that uses OpenGL 4.0 shaders.
I have to supply the call to glShaderSource() with an array of char arrays, which represents the source of the shader.
The shader compilation is failing, with the following errors:
(0) : error C0206: invalid token "<null atom>" in version line
(0) : error C0000: syntax error, unexpected $end at token "<EOF>"
Here's my (hello world) shader - straight from OpenGL 4.0 shading language cookbook
#version 400
in vec3 VertexPosition;
in vec3 VertexColor;
out vec3 Color;
void main()
{
Color = VertexColor;
gl_Position = vec4( VertexColor, 1.0 );
}
And here's my code to read the shader file into my C++ code, and compile the shader at runtime:
const int nMaxLineSize = 1024;
char sLineBuffer[nMaxLineSize];
ifstream stream;
vector<string> vsLines;
GLchar** ppSrc;
GLint* pnSrcLineLen;
int nNumLines;
stream.open( m_sShaderFile.c_str(), std::ios::in );
while( (stream.good()) && (stream.getline(sLineBuffer, nMaxLineSize)) )
{
if( strlen(sLineBuffer) > 0 )
vsLines.push_back( string(sLineBuffer) );
}
stream.close();
nNumLines = vsLines.size();
pnSrcLineLen = new GLint[nNumLines];
ppSrc = new GLchar*[nNumLines];
for( int n = 0; n < nNumLines; n ++ )
{
string & sLine = vsLines.at(n);
int nLineLen = sLine.length();
char * pNext = new char[nLineLen+1];
memcpy( (void*)pNext, sLine.c_str(), nLineLen );
pNext[nLineLen] = '\0';
ppSrc[n] = pNext;
pnSrcLineLen[n] = nLineLen+1;
}
vsLines.clear();
// just for debugging purposes (lines print out just fine..)
for( int n = 0; n < nNumLines; n ++ )
ATLTRACE( "line %d: %s\r\n", n, ppSrc[n] );
// Create the shader
m_nShaderId = glCreateShader( m_nShaderType );
// Compile the shader
glShaderSource( m_nShaderId, nNumLines, (const GLchar**)ppSrc, (GLint*) pnSrcLineLen );
glCompileShader( m_nShaderId );
// Determine compile status
GLint nResult = GL_FALSE;
glGetShaderiv( m_nShaderId, GL_COMPILE_STATUS, &nResult );
The C++ code executes as expected, but the shader compilation fails. Can anyone spot what I might be doing wrong?
I have a feeling that this may be to do with end of line characters somehow, but as this is my first attempt at shader compilation, I'm stuck!
I've read other SO answers on shader compilation, but they seem specific to Java / other languages, not C++. If it helps, I'm on the win32 platform.
You have made a mistake that others have made. This is the definition of glShaderSource:
void glShaderSource(GLuint shader, GLsizei count, const GLchar **string, const GLint *length);
The string parameter is an array of strings. It is not intended to be an array of lines in your shader. The compiler will interpret this array of strings by concatenating them together, one after another. Without newlines.
Since stream.getline will not put the \n character in the string, each of the shader strings you generate will not have a newline at the end. Therefore, when glShaderSource goes to compile them, your shader will look like this:
#version 400in vec3 VertexPosition;in vec3 VertexColor;out vec3 Color;...
That's not legal GLSL.
The proper way to do this is to load the file as a string.
std::ifstream shaderFile(m_sShaderFile.c_str());
if(!shaderFile)
//Error out here.
std::stringstream shaderData;
shaderData << shaderFile.rdbuf(); //Loads the entire string into a string stream.
shaderFile.close();
const std::string &shaderString = shaderData.str(); //Get the string stream as a std::string.
Then you can just pass that along to glShaderSource easily enough:
m_nShaderId = glCreateShader( m_nShaderType );
const char *strShaderVar = shaderString.c_str();
GLint iShaderLen = shaderString.size();
glShaderSource( m_nShaderId, 1, (const GLchar**)&strShaderVar, (GLint*)&iShaderLen );
glCompileShader( m_nShaderId );
If you copied this loading code from somewhere, then I strongly suggest you find a different place to learn about OpenGL. Because that's terrible coding.
glShaderSource( m_nShaderId, nNumLines, (const GLchar**)ppSrc, (GLint*) pnSrcLineLen );
I know the signature of glShaderSource looks tempting for sending each line of the shader separately. But that's not what it's meant for. The point of being able to send multiple arrays is so that one can mix multiple primitive shader sources into a single shader, kind of like include files. Understanding this makes it much simpler to read in a shader file – and avoids such nasty bugs.
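To illustrate, here is a hypothetical snippet (shaderId and shaderBody are placeholders, not the asker's code): the multi-string form is handy for prepending shared snippets to a shader body, which the compiler then concatenates in order.
const GLchar* sources[] = {
    "#version 400\n",                   // must still end up first in the result
    "#define MAX_LIGHTS 4\n",           // shared "include"-style block
    shaderBody.c_str()                  // the actual shader code, loaded elsewhere
};
glShaderSource(shaderId, 3, sources, NULL); // NULL lengths: strings are null-terminated
glCompileShader(shaderId);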
Using C++ you can do it much nicer and cleaner. I already wrote the following in Getting garbage chars when reading GLSL files:
You're using C++, so I suggest you leverage that. Instead of reading into a self allocated char array I suggest you read into a std::string:
#include <string>
#include <fstream>
std::string loadFileToString(char const * const fname)
{
std::ifstream ifile(fname);
std::string filetext;
while( ifile.good() ) {
std::string line;
std::getline(ifile, line);
filetext.append(line + "\n");
}
return filetext;
}
That automatically takes care of all memory allocation and proper delimiting -- the keyword is RAII: Resource Acquisition Is Initialization. Later on you can upload the shader source with something like
void glcppShaderSource(GLuint shader, std::string const &shader_string)
{
GLchar const *shader_source = shader_string.c_str();
GLint const shader_length = shader_string.size();
glShaderSource(shader, 1, &shader_source, &shader_length);
}
You can use those two functions together like this:
void load_shader(GLuint shaderobject, char * const shadersourcefilename)
{
glcppShaderSource(shaderobject, loadFileToString(shadersourcefilename));
}
Just a quick hunch:
Have you tried calling glShaderSource with NULL as length parameter? In that case OpenGL will assume your code to be null-terminated.
(Edited because of stupidity)
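For reference, a quick sketch of that suggestion, reusing the variable names from the answer above (shaderString is assumed to hold the whole source):
const char *strShaderVar = shaderString.c_str();
glShaderSource( m_nShaderId, 1, (const GLchar**)&strShaderVar, NULL ); // NULL: read up to '\0'
glCompileShader( m_nShaderId );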
I am writing my first program using OpenGL, and I have gotten to the point where I am trying to get it to compile my extremely simple shader program. I always get the error that it failed to compile the shader. I use the following code to compile the shader:
struct Shader
{
const char* filename;
GLenum type;
GLchar* source;
};
...
static char* readShaderSource(const char* shaderFile)
{
FILE* fp = fopen(shaderFile, "r");
if ( fp == NULL ) { return NULL; }
fseek(fp, 0L, SEEK_END);
long size = ftell(fp);
fseek(fp, 0L, SEEK_SET);
char* buf = new char[size + 1];
fread(buf, 1, size, fp);
buf[size] = '\0';
fclose(fp);
return buf;
}
...
Shader s;
s.filename = "<name of shader file>";
s.type = GL_VERTEX_SHADER;
s.source = readShaderSource( s.filename );
GLuint shader = glCreateShader( s.type );
glShaderSource( shader, 1, (const GLchar**) &s.source, NULL );
glCompileShader( shader );
And my shader file source is as follows:
#version 150
in vec4 vPosition;
void main()
{
gl_Position = vPosition;
}
I have also tried replacing "in" with "attribute" as well as deleting the version line. Nothing compiles.
Note: My actual C program compiles and runs. The shader program that runs on the GPU is what is failing to compile.
I have also made sure to download my graphics card's latest driver. I have an NVIDIA 8800 GTS 512.
Any ideas on how to get my shader program (written in GLSL) to compile?
As noted in the comments, does compiling the shader output anything to the console? To my surprise, while I was using ATI I got a message that the shader program compiled successfully, but when I started using NVIDIA I was staring at the screen the first time because nothing was output – yet the shaders were working. So maybe you are compiling successfully and just not noticing it?
If the shaders don't work in the context where you try to use the shader program and nothing happens, I think you're missing the linking of the shader (it may be further along in your code, however). Google has some good answers on how to correctly perform every step; you can compare your code to this example. I also made an interface for working with shaders – you can take a look at my UniShader. The project lacks English documentation and is mainly used for GPGPU, but you can easily load any shader, and the code itself uses English naming, so it should be quite comfortable. Look in the UniShader folder in that zip for the source code. There are also a few examples; the one named Ukazkovy program na GPGPU has its source included, so you can see how to use those classes. Good luck!
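If it turns out the shader does compile but nothing shows up, a minimal sketch of the missing link step might look like this (placeholder names, assuming shader is the id you called glCompileShader on):
GLuint program = glCreateProgram();
glAttachShader(program, shader);
glLinkProgram(program);
GLint linked = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linked);
if (linked == GL_FALSE) {
    char log[1024];
    glGetProgramInfoLog(program, sizeof(log), NULL, log);
    fprintf(stderr, "link log:\n%s\n", log);
}
glUseProgram(program); // the program must be current when drawing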