I'm editing shaders for Minecraft PE on my phone over SSH, using a KDE terminal on my computer.
Syntax highlighting works well, but it does not catch even the simplest errors.
For example, the code below has at least two errors, mostly lexical:
vec4 normalColor;
void main(){
nomralColor = vec3(1.0,1.0,1.0,1.0);//vec3 != vec4
gl_FragColor = nomralColor;//normalColor != nomralColor
}
Is there any way to check GLSL code for the simplest (at least lexical) errors using one of the available command-line editors (nano/vim/micro)?
I suggest making a little tool you can use to check. The following code just prints the errors out in the console window. You could be coding in C++ or Java for Android; I'll give you my C++ example because the OpenGL calls are the same in Java.
It's pretty short, really: you just use glCreateShader() and glCompileShader() to try to compile it, followed by glGetShaderInfoLog() to give you the error message(s).
//tell openGL to do all the work for us
//(needs <string>, <vector>, <cstdio> and a GL loader header such as GLEW)
bool compileShader(GLuint shaderID, std::string shaderCode) {
// Compile Shader
const char* sourcePointer = shaderCode.c_str();
glShaderSource(shaderID, 1, &sourcePointer, NULL);
glCompileShader(shaderID);
// Check Shader
GLint Result = GL_FALSE;
int InfoLogLength;
glGetShaderiv(shaderID, GL_COMPILE_STATUS, &Result);
glGetShaderiv(shaderID, GL_INFO_LOG_LENGTH, &InfoLogLength);
// Do we have an info log (errors or warnings)?
if (InfoLogLength > 0){
std::vector<char> shaderErrorMessage(InfoLogLength + 1);
// Get the message text
glGetShaderInfoLog(shaderID, InfoLogLength, NULL, &shaderErrorMessage[0]);
#ifdef DBG_SHADER
printf("Compilation log:\n%s", &shaderErrorMessage[0]);
#endif
}
// Report success from the compile status (the log may contain mere warnings)
return Result == GL_TRUE;
}
And the usage of that function, given a string of code called FragmentShaderCode, is:
GLuint FragmentShaderID = glCreateShader(GL_FRAGMENT_SHADER);
bool succeeded = compileShader(FragmentShaderID, FragmentShaderCode);
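To turn that into the command-line checker the question asks for, here is a minimal sketch of a main() around compileShader(). It assumes GLFW 3 and GLEW are available to provide an OpenGL context, that DBG_SHADER is defined so the log gets printed, and that the file being checked is a fragment shader; none of that comes from the original answer.
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <cstdio>
#include <fstream>
#include <sstream>
#include <string>

// bool compileShader(GLuint shaderID, std::string shaderCode);  // from above

int main(int argc, char** argv)
{
    if (argc < 2) { std::printf("usage: %s <fragment-shader-file>\n", argv[0]); return 1; }

    // The shader compiler lives in the driver, so we need a current GL context;
    // an invisible window is enough for an offline syntax check.
    if (!glfwInit()) return 1;
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
    GLFWwindow* window = glfwCreateWindow(64, 64, "glsl-check", NULL, NULL);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);
    glewInit();

    // Slurp the whole shader file into a string.
    std::ifstream in(argv[1]);
    std::stringstream buffer;
    buffer << in.rdbuf();

    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);   // assumed: a fragment shader
    bool ok = compileShader(shader, buffer.str());
    std::printf("%s: %s\n", argv[1], ok ? "OK" : "compile error (see log above)");

    glDeleteShader(shader);
    glfwTerminate();
    return ok ? 0 : 2;
}
You would run it over SSH like any other command-line tool, pointing it at the shader file you are editing.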
I wonder what the best way is to reuse the same shader multiple times.
I have different objects which use the same shader. Is it okay to compile and link the same shader again for every object, or is it better to compile/link it just once?
The question is: does OpenGL cache compiled shaders, or would they be compiled again?
And what about linking: should I use one program multiple times for different objects, or is it also okay to link multiple programs from equal shaders?
/** Option 1: Using same equal shaders multiple times **/
int vertexShader1 = loadShader(GL_VERTEX_SHADER, vertexShaderCode);
int vertexShader2 = loadShader(GL_VERTEX_SHADER, vertexShaderCode);
int vertexShader3 = loadShader(GL_VERTEX_SHADER, vertexShaderCode);
int program1 = glCreateProgram();
int program2 = glCreateProgram();
int program3 = glCreateProgram();
glAttachShader(program1, vertexShader1);
glAttachShader(program2, vertexShader2);
glAttachShader(program3, vertexShader3);
glLinkProgram(program1);
glLinkProgram(program2);
glLinkProgram(program3);
/** Option 2: Using same shader multiple times **/
int vertexShader = loadShader(GL_VERTEX_SHADER, vertexShaderCode);
int program1 = glCreateProgram();
int program2 = glCreateProgram();
int program3 = glCreateProgram();
glAttachShader(program1, vertexShader);
glAttachShader(program2, vertexShader);
glAttachShader(program3, vertexShader);
glLinkProgram(program1);
glLinkProgram(program2);
glLinkProgram(program3);
/** Option 3: Reuse program **/
int vertexShader = loadShader(GL_VERTEX_SHADER, vertexShaderCode);
int program = glCreateProgram();
glAttachShader(program, vertexShader);
glLinkProgram(program);
If in doubt: minimize the number of objects and reuse them. Switching shaders is among the more costly operations in OpenGL, and relying on the driver to deduplicate identical shaders is bad practice.
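A minimal sketch of that reuse pattern (Option 3 above, extended with a fragment shader): the program is compiled and linked once, and only per-object state changes between draw calls. Object, objects, drawObject and the uniform name "u_model" are illustrative assumptions, not part of the question's code.
// Compile and link the shared program once, up front.
GLuint vertexShader   = loadShader(GL_VERTEX_SHADER,   vertexShaderCode);
GLuint fragmentShader = loadShader(GL_FRAGMENT_SHADER, fragmentShaderCode);
GLuint program = glCreateProgram();
glAttachShader(program, vertexShader);
glAttachShader(program, fragmentShader);
glLinkProgram(program);

// Per frame: bind the one program and draw every object that uses it,
// changing only per-object uniforms between draw calls.
glUseProgram(program);
GLint modelLoc = glGetUniformLocation(program, "u_model"); // assumed uniform name
for (const Object& obj : objects) {
    glUniformMatrix4fv(modelLoc, 1, GL_FALSE, obj.modelMatrix);
    drawObject(obj);   // hypothetical helper that issues the actual draw call
}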
I'm creating functions to load shaders, create meshes, and the like, so I started a simple program to test the functionality I was adding, one piece at a time, and I found a problem with this bit:
const char* vertexShader = prm.LoadShader ( "simple_vs.glsl" ).c_str ();
const char* fragmentShader = prm.LoadShader ( "simple_fs.glsl" ).c_str ();
GLuint vs = glCreateShader ( GL_VERTEX_SHADER );
glShaderSource ( vs, 1, &vertexShader, NULL );
glCompileShader ( vs );
GLuint fs = glCreateShader ( GL_FRAGMENT_SHADER );
glShaderSource ( fs, 1, &fragmentShader, NULL );
glCompileShader ( fs );
If I tried to compile it, I would get no errors, but there would be a black screen. If I removed the fragment shader, it would display a triangle, as it was meant to, without any colors. If I switched the two declarations, as in:
const char* fragmentShader = prm.LoadShader ( "simple_fs.glsl" ).c_str ();
const char* vertexShader = prm.LoadShader ( "simple_vs.glsl" ).c_str ();
I would get an error, and my program would crash:
Error code #3: Shader info for shader 1: WARNING: 0:1: '#version' :
version number deprecated in OGL 3.0 forward compatible context driver
ERROR: 0:1: '#extension' : 'GL_ARB_separate_shader_objects' is not
supported
However, if I put it like this:
const char* vertexShader = prm.LoadShader ( "simple_vs.glsl" ).c_str ();
GLuint vs = glCreateShader ( GL_VERTEX_SHADER );
glShaderSource ( vs, 1, &vertexShader, NULL );
glCompileShader ( vs );
const char* fragmentShader = prm.LoadShader ( "simple_fs.glsl" ).c_str ();
GLuint fs = glCreateShader ( GL_FRAGMENT_SHADER );
glShaderSource ( fs, 1, &fragmentShader, NULL );
glCompileShader ( fs );
It works perfectly fine. I am completely clueless as to why this is the case, as I ran the original code with no issues in prior versions of my program. I already checked the prm.LoadShader function; it works fine and returns the expected value. None of the changes I have made to the program deal with shaders, so I am confused by this bizarrely particular behaviour. Can someone with more experience explain why exactly this is happening?
Presumably prm.LoadShader returns a std::string by value. Calling c_str() gives you a pointer to the internal character storage of that std::string, which only lives as long as the std::string does. By the end of each of the LoadShader lines, the std::string that was returned has been destroyed because it was a temporary object, so the pointers you've stored no longer point at valid character arrays.
You can easily get around this by storing local copies of the returned strings:
std::string vertexShader = prm.LoadShader ( "simple_vs.glsl" );
const char* cVertexShader = vertexShader.c_str();
std::string fragmentShader = prm.LoadShader ( "simple_fs.glsl" );
const char* cFragmentShader = fragmentShader.c_str();
GLuint vs = glCreateShader ( GL_VERTEX_SHADER );
glShaderSource ( vs, 1, &cVertexShader, NULL );
glCompileShader ( vs );
GLuint fs = glCreateShader ( GL_FRAGMENT_SHADER );
glShaderSource ( fs, 1, &cFragmentShader, NULL );
glCompileShader ( fs );
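Alternatively (just a sketch, using a hypothetical compileFromString() helper), you can make sure the c_str() pointer never outlives the std::string that owns it by taking the pointer inside a function while the string argument is still alive:
// Compile a shader of the given type directly from a std::string.
// The c_str() pointer is only used while 'source' is alive, so nothing dangles.
GLuint compileFromString(GLenum type, const std::string& source)
{
    GLuint id = glCreateShader(type);
    const char* ptr = source.c_str();
    glShaderSource(id, 1, &ptr, NULL);
    glCompileShader(id);
    return id;
}

// Usage: the temporaries returned by LoadShader live until the end of each
// full expression, which is long enough for glShaderSource to copy the source.
GLuint vs = compileFromString(GL_VERTEX_SHADER,   prm.LoadShader("simple_vs.glsl"));
GLuint fs = compileFromString(GL_FRAGMENT_SHADER, prm.LoadShader("simple_fs.glsl"));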
I'm trying to step into using shaders with OpenGL, but it seems I can't compile the shader. More frustratingly, the error log also appears to be empty. I've searched through this site extensively and checked many different tutorials, but there doesn't seem to be an explanation for why it fails. I even bought a book dealing with shaders, but it doesn't account for an empty info log.
My feeling is that there must be an error with how I am linking the shader source.
//Create shader
GLuint vertObject;
vertObject = glCreateShader(GL_VERTEX_SHADER);
//Stream
ifstream vertfile;
//Try for open - vertex
try
{
//Open
vertfile.open(vertex.c_str());
if(!vertfile)
{
// file couldn't be opened
cout << "Open " << vertex << " failed.";
}
else
{
getline(vertfile, verttext, '\0');
//Test file read worked
cout << verttext;
}
//Close
vertfile.close();
}
catch(std::ifstream::failure e)
{
cout << "Failed to open" << vertfile;
}
//Link source
GLint const shader_length = verttext.size();
GLchar const *shader_source = verttext.c_str();
glShaderSource(vertObject, 1, &shader_source, &shader_length);
//Compile
glCompileShader(vertObject);
//Check for compilation result
GLint isCompiled = 0;
glGetShaderiv(vertObject, GL_COMPILE_STATUS, &isCompiled);
if (!isCompiled)
{
//Did not compile, why?
std::cout << "The shader could not be compiled\n" << std::endl;
char errorlog[500] = {'\0'};
glGetShaderInfoLog(vertObject, 500, 0, errorlog);
string failed(errorlog);
printf("Log size is %d", failed.size());
}
printf("Compiled state = %d", isCompiled);
The shader code is as trivial as can be.
void main()
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
I can't get either my fragment or my vertex shader to compile. If I can get the error log to show, then at least I will be able to start error checking my own work. For now, though, I can't even get the errors.
It seems that the reason glCompileShader was failing without producing a log in this case is that I attempted to compile before the OpenGL context had been initialised (glutCreateWindow etc.).
If anybody else runs into this problem in future, try printing your OpenGL version just before you create any GLSL objects:
printf("OpenGL version is (%s)\n", glGetString(GL_VERSION));
If you get "OpenGL version is (null)", then you don't have a valid context to create your shaders with. Find where you create your context, and make sure your shader creation comes afterwards.
So I am making a sort of tower defense game. I shared a build with a friend so I could check whether everything performs as it should on another host.
What actually happens is that while everything renders perfectly on my side (both on my Mac/Xcode and Windows/Visual Studio 2012 builds), on my friend's side the geometry appears to be messed up. Each object on my screen is represented by a VBO, which I reuse to render at different locations. But it seems like my VBOs contain geometry imported from all of the models (hence the tower with the tree on the side).
Here's the result:
[Screenshots: my computer vs. my friend's computer]
By now I've managed to debug this issue up to a certain point. I can tell it's not the way I'm importing my models, because I write a debug.txt file with all the vertices before I send them to the GPU as VBOs, and it contains the same values on both computers. So the vertex data is not getting corrupted by memory issues or anything like that, which makes me think it is the way I am setting up or rendering my VBOs.
What strikes me the most, though, is why things work on my computer but not on my friend's. One difference I know for sure is that my computer is a developer station (whatever that implies), while my friend's computer is not.
This is my VBO loading function and my VBO drawing function:
I use GLFW to create my window and context, and I include the GLEW headers to enable some of the newer OpenGL functions.
void G4::Renderer::LoadObject(
G4::TILE_TYPES aType,
std::vector<float> &v_data,
std::vector<float> &n_data,
std::vector<float> &t_data,
float scale,
bool has_texture,
unsigned int texture_id
)
{
unsigned int vertices_id, vertices_size, normals_id, texturecoordinates_id;
vertices_size = static_cast<unsigned int>(v_data.size());
glGenBuffers(1, &vertices_id);
glGenBuffers(1, &normals_id);
//::->Vertex array buffer upload.
glBindBuffer(GL_ARRAY_BUFFER, vertices_id);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*v_data.size(), &v_data.front(), GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
//::->Normal Array buffer upload.
glBindBuffer(GL_ARRAY_BUFFER, normals_id);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*n_data.size(), &n_data.front(), GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
if (has_texture)
{
glGenBuffers(1, &texturecoordinates_id);
glBindBuffer(GL_ARRAY_BUFFER, texturecoordinates_id);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*t_data.size(), &(t_data[0]), GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
this->vbos[aType].Update(vertices_id, vertices_size, normals_id, texture_id, texturecoordinates_id, scale, has_texture);
}
Draw code:
void G4::Renderer::DrawUnit(G4::VBO aVBO, bool drawWithColor, float r, float g, float b, float a)
{
bool model_has_texture = aVBO.HasTexture();
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
if (model_has_texture && !drawWithColor) {
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
}
if (drawWithColor)
{
glColor4f(r, g, b, a);
}
glScalef(aVBO.GetScaleValue(), aVBO.GetScaleValue(), aVBO.GetScaleValue());
glBindBuffer(GL_ARRAY_BUFFER, aVBO.GetVerticesID());
glVertexPointer(3, GL_FLOAT, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ARRAY_BUFFER, aVBO.GetNormalsID());
glNormalPointer(GL_FLOAT, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
if (model_has_texture && !drawWithColor)
{
glBindTexture(GL_TEXTURE_2D, aVBO.GetTextureID());
glBindBuffer(GL_ARRAY_BUFFER, aVBO.GetTextureCoordsID());
glTexCoordPointer(2, GL_FLOAT, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
glDrawArrays(GL_TRIANGLES, 0, aVBO.GetVerticesSize());
if (model_has_texture && !drawWithColor) {
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);
}
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
}
I'm out of ideas; I hope someone can direct me on how to debug this further.
The OpenGL spec does not define the exact behaviour when you issue a draw call with more vertices than your buffer stores. The reason this works correctly on one machine and not on another comes down to implementation: each vendor is free to do whatever it wants in this situation, so the rendering artifacts might show up on AMD hardware but not at all on NVIDIA or Intel. Making matters worse, no error state is generated by a call to glDrawArrays (...) when it is asked to draw too many vertices. You definitely need to test your software on hardware from multiple vendors to catch these sorts of errors; which vendor made the GPU in your computer, and the driver version, are just as important as the operating system and compiler.
Nevertheless, there are ways to catch these silly mistakes. gDEBugger is one, and there is also a newer OpenGL extension I will discuss below. I prefer the extension because, in my experience, in addition to deprecated API calls and errors (which gDEBugger can be configured to monitor), it can also warn you about inefficiently aligned data structures and various other portability and performance issues.
I wanted to add the code I use to enable OpenGL Debug Output in my software, since this is an example of errant behaviour that does not actually generate an error you can catch with glGetError (...). Sometimes you can catch these mistakes with Debug Output (though I just tested it, and this is not one of those situations). You will need an OpenGL debug context for this to work (the process of setting this up is highly platform dependent), but it is a context flag just like forward/backward compatible and core (GLFW should make this easy for you).
Automatic breakpoint macro for x86 platforms
// Breakpoints that should ALWAYS trigger (EVEN IN RELEASE BUILDS) [x86]!
#ifdef _MSC_VER
# define eTB_CriticalBreakPoint() if (IsDebuggerPresent ()) __debugbreak ();
#else
# define eTB_CriticalBreakPoint() asm (" int $3");
#endif
Enable OpenGL Debug Output (requires a Debug Context and a relatively recent driver, OpenGL 4.x era)
// SUPER VERBOSE DEBUGGING!
if (glDebugMessageControlARB != NULL) {
glEnable (GL_DEBUG_OUTPUT_SYNCHRONOUS_ARB);
glDebugMessageControlARB (GL_DONT_CARE, GL_DONT_CARE, GL_DONT_CARE, 0, NULL, GL_TRUE);
glDebugMessageCallbackARB ((GLDEBUGPROCARB)ETB_GL_ERROR_CALLBACK, NULL);
}
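A side note, not from the original code: on an OpenGL 4.3+ context the same functionality is available in core, without the ARB suffixes, so the setup could also be sketched as follows, reusing the callback defined further down.
// Core-profile equivalent (OpenGL 4.3+): no ARB suffixes needed.
glEnable (GL_DEBUG_OUTPUT);
glEnable (GL_DEBUG_OUTPUT_SYNCHRONOUS);
glDebugMessageControl (GL_DONT_CARE, GL_DONT_CARE, GL_DONT_CARE, 0, NULL, GL_TRUE);
glDebugMessageCallback ((GLDEBUGPROC)ETB_GL_ERROR_CALLBACK, NULL);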
Some important utility functions to replace enumerant values with more meaningful text
const char*
ETB_GL_DEBUG_SOURCE_STR (GLenum source)
{
static const char* sources [] = {
"API", "Window System", "Shader Compiler", "Third Party", "Application",
"Other", "Unknown"
};
int str_idx =
min ( source - GL_DEBUG_SOURCE_API,
sizeof (sources) / sizeof (const char *) - 1 ); // clamp to the "Unknown" entry
return sources [str_idx];
}
const char*
ETB_GL_DEBUG_TYPE_STR (GLenum type)
{
static const char* types [] = {
"Error", "Deprecated Behavior", "Undefined Behavior", "Portability",
"Performance", "Other", "Unknown"
};
int str_idx =
min ( type - GL_DEBUG_TYPE_ERROR,
sizeof (types) / sizeof (const char *) - 1 ); // clamp to the "Unknown" entry
return types [str_idx];
}
const char*
ETB_GL_DEBUG_SEVERITY_STR (GLenum severity)
{
static const char* severities [] = {
"High", "Medium", "Low", "Unknown"
};
int str_idx =
min ( severity - GL_DEBUG_SEVERITY_HIGH,
sizeof (severities) / sizeof (const char *) - 1 ); // clamp to the "Unknown" entry
return severities [str_idx];
}
DWORD
ETB_GL_DEBUG_SEVERITY_COLOR (GLenum severity)
{
static DWORD severities [] = {
0xff0000ff, // High (Red)
0xff00ffff, // Med (Yellow)
0xff00ff00, // Low (Green)
0xffffffff // ??? (White)
};
int col_idx =
min ( severity - GL_DEBUG_SEVERITY_HIGH,
sizeof (severities) / sizeof (DWORD) - 1 ); // clamp to the "???" entry
return severities [col_idx];
}
My Debug Output Callback (somewhat messy, because it prints each field in a different color in my software)
void
ETB_GL_ERROR_CALLBACK (GLenum source,
GLenum type,
GLuint id,
GLenum severity,
GLsizei length,
const GLchar* message,
GLvoid* userParam)
{
eTB_ColorPrintf (0xff00ffff, "OpenGL Error:\n");
eTB_ColorPrintf (0xff808080, "=============\n");
eTB_ColorPrintf (0xff6060ff, " Object ID: ");
eTB_ColorPrintf (0xff0080ff, "%d\n", id);
eTB_ColorPrintf (0xff60ff60, " Severity: ");
eTB_ColorPrintf ( ETB_GL_DEBUG_SEVERITY_COLOR (severity),
"%s\n",
ETB_GL_DEBUG_SEVERITY_STR (severity) );
eTB_ColorPrintf (0xffddff80, " Type: ");
eTB_ColorPrintf (0xffccaa80, "%s\n", ETB_GL_DEBUG_TYPE_STR (type));
eTB_ColorPrintf (0xffddff80, " Source: ");
eTB_ColorPrintf (0xffccaa80, "%s\n", ETB_GL_DEBUG_SOURCE_STR (source));
eTB_ColorPrintf (0xffff6060, " Message: ");
eTB_ColorPrintf (0xff0000ff, "%s\n\n", message);
// Force the console to flush its contents before executing a breakpoint
eTB_FlushConsole ();
// Trigger a breakpoint in gDEBugger...
glFinish ();
// Trigger a breakpoint in traditional debuggers...
eTB_CriticalBreakPoint ();
}
Since I could not actually get your situation to trigger a debug output event, I figured I would at least show an example of an event I was able to trigger. This is not an error that you can catch with glGetError (...), or an error at all for that matter. But it is certainly a draw call issue that you might be completely oblivious to for the duration of your project without using this extension :)
OpenGL Error:
=============
Object ID: 102
Severity: Medium
Type: Performance
Source: API
Message: glDrawElements uses element index type 'GL_UNSIGNED_BYTE' that is not optimal for the current hardware configuration; consider using 'GL_UNSIGNED_SHORT' instead.
After further debugging sessions with my friends and a lot of trial and error, I managed to find the problem. It took me two solid days to figure out, and really it was just a silly mistake.
glDrawArrays(GL_TRIANGLES, 0, aVBO.GetVerticesSize());
The call above does not pass the number of vertices (points) but the total number of floats stored in the buffer, so everything is multiplied by 3. Dividing by 3 solved it.
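For reference, the corrected call looks like the line below (a sketch, assuming GetVerticesSize() keeps returning the float count):
// GetVerticesSize() returns a float count; three floats make one vertex
glDrawArrays(GL_TRIANGLES, 0, aVBO.GetVerticesSize() / 3);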
So I assume that, since the vertex count was three times too large, the VBO "stole" data from other VBOs stored on the GPU (hence the tree model stuck to my tower).
What I can't figure out yet, though, and would like an answer to, is why everything rendered fine on my computer but not on other computers. As I state in my original question, a hint might be that my computer is a developer station while my friend's is not.
If anyone is kind enough to explain why this effect doesn't reproduce on my machine, I will gladly accept their answer as the solution to my problem.
Thank you
I must admit this is my first time implementing shaders; previously I have only worked with the fixed-function pipeline. However, even though I am fairly sure everything I did is correct, there must be an error.
glLinkProgram(program) returns GL_FALSE when queried for GL_LINK_STATUS. In addition, the info log is empty (when I query the log length it is 1, which is just the null terminator per the docs, so that checks out). So: linker errors, and no logs. I have also just discovered that the linker problems occur as soon as I make any use of the gl_Position variable in the vertex shader, whether I assign to it or use it in calculations. I tried all sorts of shader variations; it errors out but fails to produce a log, and it seems to return GL_FALSE any time gl_Position is touched. Interestingly enough, the fragment shader doesn't cause any problems.
Both the fragment and vertex shaders compile fine with no errors. When I introduce syntax errors, they are detected, printed, and the process is aborted prior to program creation, so that path seems to work. I debugged and made sure the files get loaded properly, the source is null-terminated, the sizes are correct, and the number of attached shaders after attachment is 2 (correct, lol). Error checking is removed below for clarity; I have a checkForErrors() method that checks and prints OpenGL errors, and none are detected.
I am stumped, please someone help! I've been losing sleep over this for 2 days now...
This is the code to load the shader:
FILE *file = fopen(fileName.c_str(), "rb");
if(file == NULL)
{
Log::error("file does not exist: \"" + fileName + "\"");
return NULL;
}
// Calculate file size
fseek(file , 0 , SEEK_END);
int size = ftell(file);
rewind(file);
// If file size is 0
if(size == 0)
{
Log::error("file is empty: \"" + fileName + "\"");
return NULL;
}
char **source = new char*[1];
source[0] = new char[size+1];
source[0][size] = '\0';
// If we weren't able to read the entire file into memory
int readSize = fread(source[0], sizeof(char), size, file);
if(size != readSize)
{
int fileError = ferror(file);
// Free the source, log an error and return false
delete [] source[0];
delete [] source;
fclose(file);
Log::error("couldn't load file into memory [ferror(" + toString<int>(fileError) + ")]: \"" + fileName + "\" (Size: " + toString<int>(readSize) + "/" + toString<int>(size) + " bytes)");
return NULL;
}
// Close the file
fclose(file);
// Create the shader object
// shaderType is GLenum that is set based on the file extension. I assure you it is correctly set to either GL_VERTEX_SHADER or GL_FRAGMENT_SHADER
GLuint shaderID = glCreateShader(shaderType);
// If we could not create the shader object, check for OpenGL errors and return false
if(shaderID == 0)
{
checkForErrors();
Log::error("error creating shader \"" + name);
delete [] source[0];
delete [] source;
return NULL;
}
// Load shader source and compile it
glShaderSource(shaderID, 1, (const GLchar**)source, NULL);
glCompileShader(shaderID);
GLint success;
glGetShaderiv(shaderID, GL_COMPILE_STATUS, &success);
if(!success)
{
GLchar error[1024];
glGetShaderInfoLog(shaderID, 1024, NULL, error);
Log::error("error compiling shader \"" + name + "\"\n Log:\n" + error);
delete [] source[0];
delete [] source;
return NULL;
}
else
{
Log::debug("success! Loaded shader \"" + name + "\"");
}
// Clean up
delete [] source[0];
delete [] source;
Quick note about glShaderSource: I cast to (const GLchar**) because GCC complains about converting char** to const char**; otherwise, I believe I'm entirely compliant.
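As an aside, a sketch of a cast-free alternative: keep a separate const pointer to the buffer and pass its address, so no char** to const GLchar** conversion is needed.
// Cast-free variant: take a const pointer to the buffer and pass its address.
const GLchar* sourcePtr = source[0];
glShaderSource(shaderID, 1, &sourcePtr, NULL);
glCompileShader(shaderID);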
No errors there, by the way. The shaders (below) compile without any errors.
Vertex Shader:
void main()
{
// If I comment the line below, the link status is GL_TRUE.
gl_Position = vec4(1.0, 1.0, 1.0, 1.0);
}
Fragment Shader:
void main()
{
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
Below is the code that creates the shader program, attaches objects and links etc.:
// Create a new shader program
GLuint program = glCreateProgram();
if(program == 0)
{
Log::error("RenderSystem::loadShaderProgram() - could not create OpenGL shader program; This one is fatal, sorry.");
getContext()->fatalErrorIn("RenderSystem::loadShaderProgram(shader1, shader2)", "OpenGL failed to create object using glCreateProgram()");
return NULL;
}
// Attach shader objects to program
glAttachShader(program, vertexShaderID);
glAttachShader(program, fragmentShaderID);
// Link the program
GLint success = GL_FALSE;
glLinkProgram(program);
checkForErrors();
glGetProgramiv(program, GL_LINK_STATUS, &success);
if(success == GL_FALSE)
{
GLchar errorLog[1024] = {0};
glGetProgramInfoLog(program, 1024, NULL, errorLog);
Log::error(std::string() + "error linking program: " + errorLog);
return NULL;
}
success = GL_FALSE;
glValidateProgram(program);
glGetProgramiv(program, GL_VALIDATE_STATUS, &success);
if(success == GL_FALSE)
{
GLchar errorLog[1024] = {0};
glGetProgramInfoLog(program, 1024, NULL, errorLog);
checkForErrors();
Log::error(std::string() + "error validating shader program; Details: " + errorLog);
return NULL;
}
Usually it doesn't even get as far as validating the program... I'm so frustrated with this, it's very hard not to be vulgar.
Your help is needed and any assistance is genuinely appreciated!
EDIT: I'm running everything on Intel HD 3000 (with support for OpenGL up to 3.1). My target OpenGL version is 2.0.
EDIT2: I would also like to note that I had some issues reading the shaders from text files if I set the "r" or "rt" flags in fopen: the read size was smaller than the actual size by about 10 bytes (consistently, on all files) and feof() would return true, presumably because of CRLF-to-LF newline translation in text mode. When I switched to reading in binary ("rb"), the problem went away and the files were read in fully. I did try several alternative implementations and they all produced the same error during linking (and I print the shader source to the console right after reading the file to ensure it looks correct; it does).
OK, I knew when I was posting that this was going to be embarrassing, but it is quite bad: the Intel graphics configuration application that usually accompanies an Intel driver has a 3D settings tab. The "custom settings" check box was unchecked, but the grayed-out option under the "Vertex Processing" setting spelled out "Enable Software Processing". Even though it was unchecked and everything was grayed out, I went into the custom settings and set everything to "Application Settings".
That fixed it! Everything links like it should. I don't know who thought of that option; why would it ever be useful?! It's the only option that looks stunningly out of place.
I went through several driver re-installs, intense debugging, and configuration research; I had to speculate and second-guess absolutely everything. Terrible. I wouldn't wish it on my worst enemies.