OpenGL Compiled Shader was corrupt

I'm trying to compile a shader program in OpenGL 3.2, but I'm getting a strange linking error.
After creating the vertex and fragment shaders, compiling them, and attaching them, I try to link them into a program, but I get the following info log error:
ERROR: Compiled vertex shader was corrupt.
ERROR: Compiled fragment shader was corrupt.
I have absolutely no idea what it means, and the only thing I could find on Google was to ignore it. However, when I glUseProgram() the program I get GL_INVALID_OPERATION, so I can't just ignore this error.
Moreover, I just updated to Xcode 5, and the very same code/shader source was working before the update. I don't know how that could be related, though.
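For context, a minimal sketch of querying the link status and program info log when glUseProgram() raises GL_INVALID_OPERATION; the program handle name and the use of <vector>/<iostream> are assumptions:
GLint linked = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linked);
if (linked != GL_TRUE) {
    GLint len = 0;
    glGetProgramiv(program, GL_INFO_LOG_LENGTH, &len);
    std::vector<GLchar> log(len > 1 ? len : 1);
    glGetProgramInfoLog(program, (GLsizei)log.size(), NULL, log.data());
    std::cout << log.data() << std::endl;   // this is where the "corrupt" messages show up
}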
Edit: shader source
Vertex:
#version 150
in vec3 position;
uniform mat4 worldMatrix;
uniform float time;
out vec3 outPos;
void main(){
    gl_Position = worldMatrix * vec4(position, 1.0);
    outPos = position;
}
Fragment:
#version 150
out vec4 outColor;
uniform float time;
uniform float red;
uniform float green;
uniform float blue;
void main(){
    outColor = vec4(red, green, blue, 1.0);
}

Got it to work.
At first I rewrote the shaders in another editor (TextMate), and then it worked some of the time. Then I made sure the source was properly null terminated, and it worked every time.
Maybe there were somehow non-printing characters in the file, like Andon M. Coleman suggested.
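A defensive sketch along those lines, assuming the source has already been read into a std::string (shaderCode and shader are illustrative names): passing an explicit length to glShaderSource means the driver never scans past the end of the buffer, so a missing null terminator or stray trailing bytes stop mattering.
const GLchar* src = shaderCode.c_str();
GLint len = static_cast<GLint>(shaderCode.size());
glShaderSource(shader, 1, &src, &len);   // explicit length: no reliance on a trailing '\0'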

I had the same issue, and discovered that if you use a std::stringstream buffer to read the file, as many code examples on the web do, and then call .str().c_str() to get the pointer needed for glShaderSource, the temporary string is destroyed and the pointer dangles, meaning you get seemingly random linker errors. Here is the workaround I created...
int shaderFromFile(const std::string& filePath, GLenum shaderType) {
    // open file
    std::ifstream f;
    f.open(filePath.c_str(), std::ios::in);
    if (!f.is_open()) {
        throw std::runtime_error(std::string("Failed to open file: ") + filePath);
    }

    // read whole file into stringstream buffer
    std::stringstream buffer;
    buffer << f.rdbuf();
    buffer << "\0";
    f.close();

    // need to copy, as the pointer is deleted when the call is finished
    std::string shaderCode = buffer.str().c_str();

    // create new shader
    int ShaderID = glCreateShader(shaderType);

    // set the source code
    const GLchar* code = (const GLchar *) shaderCode.c_str();
    glShaderSource(ShaderID, 1, &code, NULL);

    // compile
    glCompileShader(ShaderID);

    // throw exception if a compile error occurred
    GLint status;
    glGetShaderiv(ShaderID, GL_COMPILE_STATUS, &status);
    std::cout << "Status from compile:" << status << "\r\n";
    if (status == GL_FALSE) {
        std::string msg("Compile failure in shader:\n");
        GLint infoLogLength;
        glGetShaderiv(ShaderID, GL_INFO_LOG_LENGTH, &infoLogLength);
        char* strInfoLog = new char[infoLogLength + 1];
        glGetShaderInfoLog(ShaderID, infoLogLength, NULL, strInfoLog);
        msg += strInfoLog;
        delete[] strInfoLog;
        glDeleteShader(ShaderID);
        ShaderID = 0;
        throw std::runtime_error(msg);
    }
    return ShaderID;
}
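A minimal usage sketch under the same assumptions (a current GL context, the GL headers, and placeholder file paths); link-status checking is elided here:
GLuint vs = shaderFromFile("shader.vert", GL_VERTEX_SHADER);
GLuint fs = shaderFromFile("shader.frag", GL_FRAGMENT_SHADER);
GLuint program = glCreateProgram();
glAttachShader(program, vs);
glAttachShader(program, fs);
glLinkProgram(program);   // check GL_LINK_STATUS here as well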

Related

GLSL program fails to link depending on their files' size

I have a weird problem with GLSL shaders. My program either fails or succeeds at launch depending on the size of my shader files. If I only write a few lines of code in the shader files, the program works fine. For example, the program launches successfully with this much code:
vertex.vs:
#version 460 core
layout (location = 0) in vec2 coords;
uniform float u_time;
out vec3 result_color;
void main()
{
    vec3 colorA = vec3(0.514f, 0.179f, 0.900f);
    vec3 colorB = vec3(0.895f, 0.359f, 0.488f);
    gl_Position = vec4(coords, 0.0f, 1.0f);
    result_color = colorA;
}
fragment.fs:
#version 460 core
in vec3 result_color;
out vec4 color;
void main()
{
color = vec4(result_color, 1.0f);
}
If I add more code to any of my shader files it only launches about 50% of the time:
vertex.vs:
#version 460 core
layout (location = 0) in vec2 coords;
uniform float u_time;
out vec3 result_color;
// float wave(float x)
// {
//     return min(2.0f - 2.0f * fract(x / 2.0f),
//                2.0f * fract(x / 2.0f));
// }
void main()
{
    vec3 colorA = vec3(0.514f, 0.179f, 0.900f);
    vec3 colorB = vec3(0.895f, 0.359f, 0.488f);
    // float frequency = 15.0f;
    // float y = wave(coords.x * frequency + cos(u_time) * 5.0f);
    // float fr_y = fract(coords.y * frequency + u_time);
    gl_Position = vec4(coords, 0.0f, 1.0f);
    result_color = colorA;
}
Note that the code I added is commented out, yet the error still occurs. And the more code I add, the lower the chance that it works.
Here's the class I use to read shaders from files and create a shader program:
#include <iostream>
#include <fstream>
#include <sstream>
#include <glad/glad.h>
#include "shader_program.h"
ShaderProgram::ShaderProgram(std::string vertexShaderPath, std::string fragmentShaderPath)
{
    std::ifstream vShaderFile;
    std::ifstream fShaderFile;
    std::stringstream vShaderStream;
    std::stringstream fShaderStream;

    vShaderFile.exceptions(std::ifstream::failbit | std::ifstream::badbit);
    fShaderFile.exceptions(std::ifstream::failbit | std::ifstream::badbit);

    try
    {
        vShaderFile.open(vertexShaderPath);
        vShaderStream << vShaderFile.rdbuf();
    }
    catch(std::ifstream::failure e)
    {
        std::cout << "ERROR: FAILED TO READ SHADER FILE: VERTEX SHADER" << std::endl;
    }

    try
    {
        fShaderFile.open(fragmentShaderPath);
        fShaderStream << fShaderFile.rdbuf();
    }
    catch(std::ifstream::failure e)
    {
        std::cout << "ERROR: FAILED TO READ SHADER FILE: FRAGMENT SHADER" << std::endl;
    }

    const char* vertexShaderSource = vShaderStream.str().c_str();
    const char* fragmentShaderSource = fShaderStream.str().c_str();
    CreateShaderProgram(vertexShaderSource, fragmentShaderSource);
}

unsigned int ShaderProgram::GetProgram()
{
    return program;
}

void ShaderProgram::CreateShaderProgram(const char* vertexShaderSource, const char* fragmentShaderSource)
{
    program = glCreateProgram();
    unsigned int vertexShader = CompileShader(GL_VERTEX_SHADER, vertexShaderSource);
    unsigned int fragmentShader = CompileShader(GL_FRAGMENT_SHADER, fragmentShaderSource);
    glAttachShader(program, vertexShader);
    glAttachShader(program, fragmentShader);
    glLinkProgram(program);

    int linkingResult;
    glGetProgramiv(program, GL_LINK_STATUS, &linkingResult);
    if (linkingResult == GL_FALSE)
    {
        char infoLog[512];
        glGetProgramInfoLog(program, 512, NULL, infoLog);
        std::cout << "ERROR: FAILED TO LINK PROGRAM\n" << infoLog << std::endl;
    }
}

unsigned int ShaderProgram::CompileShader(GLuint type, const char* source)
{
    unsigned int shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);

    int compilationResult;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &compilationResult);
    if (compilationResult == GL_FALSE)
    {
        char infoLog[512];
        glGetShaderInfoLog(shader, 512, NULL, infoLog);
        std::cout << "ERROR: FAILED TO COMPILE SHADER: "
                  << (type == GL_VERTEX_SHADER ? "VERTEX" : "FRAGMENT")
                  << " SHADER\n" << infoLog << std::endl;
    }
    return shader;
}
The error message I get sometimes seems random. Most of the time it's this:
ERROR: FAILED TO LINK PROGRAM
Vertex shader(s) failed to link, fragment shader(s) failed to link.
Vertex link error: INVALID_OPERATION.
ERROR: error(#97) No program main found
fragment link error: INVALID_OPERATION.
ERROR: error(#97) No program main found
But sometimes it's about a syntax error which is nowhere to be found really:
ERROR: FAILED TO COMPILE SHADER: VERTEX SHADER
Vertex shader failed to compile with the following errors:
ERROR: 0:1: error(#132) Syntax error: "0" parse error
ERROR: error(#273) 1 compilation errors. No code generated
ERROR: FAILED TO LINK PROGRAM
Vertex shader(s) were not successfully compiled before glLinkProgram() was called. Link failed.
I use GLFW 3.3.8 and Glad. I started with OpenGL version 3.3 and tried switching to 4.6, which had no effect. I also tried updating my GPU drivers, with no success. I really have no idea where this behaviour may be coming from.
You have undefined behaviour.
vShaderStream.str().c_str();
str() returns the string by value, so this temporary string object is destroyed at the end of the full expression, and the pointer obtained from c_str() is left dangling. As a solution you could do:
CreateShaderProgram(vShaderStream.str().c_str(), fShaderStream.str().c_str());
In the call above, the string temporaries are still alive when CreateShaderProgram is called.
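An equivalent sketch that avoids the dangling pointer by keeping named copies alive for as long as they are needed (variable names are illustrative):
std::string vertexCode = vShaderStream.str();    // named copies outlive the call below
std::string fragmentCode = fShaderStream.str();
CreateShaderProgram(vertexCode.c_str(), fragmentCode.c_str());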

Why is the glsl '#version' definition giving me an error?

I have a simple shader creation pipeline. Nothing special. The shader source reads in properly from what I can tell. Originally, I had the shader code in files with a suffix of '.glsl' just because I wanted to, but they are now plain '.txt' files.
After reading in the shader code, I compile and link the shaders and check the compile status and info log for errors. The actual shader code below doesn't look to me like it has an error, considering it's now basically a carbon copy of a simple tutorial you can find online, which I'll link if you need it.
I do get an error though, which is "0(1) : error C0104: Unknown pre-processor directive #version330".
This is the same for both the vertex and the fragment shaders, which have an almost identical creation process.
My driver version is [4.6.0 NVIDIA 445.87] as well, so I should be able to use #version 330.
I did also have them both set to '#version 330 core', but I was getting the same error.
This is my program and shader creation process.
struct s_Program
{
    uint32_t program, vert, frag;

    s_Program()
    {
        vert = glCreateShader(GL_VERTEX_SHADER);
        frag = glCreateShader(GL_FRAGMENT_SHADER);
        program = glCreateProgram();
    }

    bool genVert(const char* fpath)
    {
        std::fstream file(fpath);
        if (!file)
        {
            logf("[FILE_ERROR] : Failed to read shader file.");
            return(false);
        }
        logf("Reading vert shader.");

        const char* source;
        std::string newSource, line;
        while (file >> line)
        {
            newSource += line;
        }
        source = newSource.c_str();

        glShaderSource(vert, 1, &source, 0);
        glCompileShader(vert);

        int comp;
        glGetShaderiv(vert, GL_COMPILE_STATUS, &comp);
        if (comp == GL_FALSE)
        {
            int maxLength = 0;
            glGetShaderiv(vert, GL_INFO_LOG_LENGTH, &maxLength);
            std::vector<char> errorLog(maxLength);
            glGetShaderInfoLog(vert, maxLength, &maxLength, &errorLog[0]);
            logf(&errorLog[0]);
            glDeleteShader(vert);
            return(false);
        }
        glAttachShader(program, vert);
        return(true);
    }

    bool genFrag(const char* fpath)
    {
        std::fstream file(fpath);
        if (!file)
        {
            logf("[FILE_ERROR] : Failed to read shader file.");
            return(false);
        }

        const char* source;
        std::string newSource = "", line;
        while (file >> line)
        {
            newSource += line;
        }
        source = newSource.c_str();

        glShaderSource(frag, 1, &source, 0);
        glCompileShader(frag);

        int comp;
        glGetShaderiv(frag, GL_COMPILE_STATUS, &comp);
        if (comp == GL_FALSE)
        {
            int maxLength = 0;
            glGetShaderiv(frag, GL_INFO_LOG_LENGTH, &maxLength);
            std::vector<char> errorLog(maxLength);
            glGetShaderInfoLog(frag, maxLength, &maxLength, &errorLog[0]);
            logf(&errorLog[0]);
            glDeleteShader(frag);
            return(false);
        }
        glAttachShader(program, frag);
        return(true);
    }

    void linkProgram()
    {
        glLinkProgram(program);
        glDetachShader(program, vert);
        glDetachShader(program, frag);
        glDeleteShader(vert);
        glDeleteShader(frag);
    }

    void bindProgram()
    { glUseProgram(program); }

    void unbindProgram()
    { glUseProgram(0); }
};
These are my shaders.
// Vertex
#version 330
in vec4 _pos;
out vec4 aColor;
void main()
{
_pos = vec4(_pos, 1.0);
gl_Position = _pos;
}
// Fragment
#version 330
out vec4 FragColor;
in vec4 _color;
void main()
{
_color = vec4(0.5, 0.5, 0.1, 1.0);
FragColor = _color;
}
This is the implementation in the main file.
s_Program mainShader;
if (!mainShader.genVert("data/shaders/standardVert.txt"))
{ logf("[ERR] : Failed to generate Vertex Shader."); }
if (mainShader.genFrag("data/shaders/standardFrag.txt"))
{ logf("[ERR] : Failed to generate Fragment Shader."); }
mainShader.linkProgram();
I then call my function "bindProgram" in my loop, which sets GL to use that program.
I'm clearly missing something, I just want to know what.
Your while (file >> line) loop reads the file word by word. Appending each word to a string effectively strips all whitespace from your shader, so #version 330 becomes #version330, which is exactly the unknown pre-processor directive the compiler is complaining about.
Instead, do this to read the file as-is into a std::string:
std::stringstream src_ss;
file >> src_ss.rdbuf();
std::string newSource{ src_ss.str() };
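An alternative sketch that also preserves whitespace, reading the whole file through stream iterators (assuming file is the open std::fstream from the question and <iterator> is included):
std::string newSource((std::istreambuf_iterator<char>(file)),
                      std::istreambuf_iterator<char>());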

Why does my GLSL shader fail compilation with no error message?

I'm building a game using OpenGL and C++. I'm using GLFW and GLAD. I'm currently in the process of setting up simple shaders, but I'm completely roadblocked by a compilation problem. In a nutshell, shader compilation fails with no error message.
Here's my vertex shader (it's meant to draw 2D images and text):
#version 330 core
layout (location = 0) in vec2 vPosition;
layout (location = 1) in vec2 vTexCoords;
layout (location = 2) in vec4 vColor;
out vec4 fColor;
out vec2 fTexCoords;
uniform mat4 mvp;
void main()
{
    vec4 position = mvp * vec4(vPosition, 0, 1);
    position.y *= -1;
    gl_Position = position;
    fColor = vColor;
    fTexCoords = vTexCoords;
}
And here's the relevant code to create the shader, load the shader source, compile the shader, and check for errors.
GLuint shaderId = glCreateShader(GL_VERTEX_SHADER);
std::string source = FileUtilities::ReadAllText(Paths::Shaders + filename);
GLchar const* file = source.c_str();
GLint length = static_cast<GLint>(source.size());
glShaderSource(shaderId, 0, &file, &length);
glCompileShader(shaderId);
GLint status;
glGetShaderiv(shaderId, GL_COMPILE_STATUS, &status);
if (status == GL_FALSE)
{
    GLint logSize = 0;
    glGetShaderiv(shaderId, GL_INFO_LOG_LENGTH, &logSize);
    std::vector<GLchar> message = std::vector<GLchar>(logSize);
    glGetShaderInfoLog(shaderId, logSize, nullptr, &message[0]);
    glDeleteShader(shaderId);
    std::cout << std::string(message.begin(), message.end());
}
Using that code, logSize is returned as 1, meaning that I'm unable to access the error message provided by GL. From what I can tell, the message doesn't exist at all. I've already seen the question posted here, in which the issue was a missing call to glCompileShader. As you can see, my code does call that function.
In attempting to solve this problem, I've already confirmed a few things.
My shader source (a string) is being read correctly. The source is a single string that, as far as I can tell, exactly matches the actual shader file.
There are no casting issues from the string source to GLchar const* or GLint (the variables file and length). Both look correct.
If I artificially inflate the value of logSize (to, say, 1000), the resulting message is nothing but zeroes. No error message exists.
I am calling glfwInit() and related functions before reaching this point in the code. Querying glGetString(GL_VERSION) does correctly return the target version (3.3.0).
Does anyone know how to fix this? As I said, my progress is completely blocked since I can't render anything (2D or 3D) without working shaders.
Thank you!
The problem is that you never upload any shader source to the shader.
The second parameter in this line:
glShaderSource(shaderId, 0, &file, &length);
tells OpenGL to load 0 code strings to the shader (nothing). Change this to 1, and it should work.
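The corrected call, with everything else unchanged:
glShaderSource(shaderId, 1, &file, &length);   // upload 1 source string, with its explicit length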

GLSL Attribute Location Returning -1

I am clearly misunderstanding something pretty simple here to do with GLSL. All Google results point to the obvious answer: that I'm not using the variable I'm trying to find, so it has been optimised out. However, I am using the variable in question. Consider the following very basic shaders:
Vertex shader
attribute vec2 TexCoord;
varying vec2 TexCoordA;
void main(){
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
TexCoordA = TexCoord;
}
Fragment shader
varying vec2 TexCoordA;
void main(){
gl_FragColor = vec3(TexCoordA.x, TexCoordA.y, 0);
}
They compile and link fine; no errors. However, "glGetAttribLocation" returns -1 when I try to find the location of "TexCoord". If I use TexCoordA for another purpose (such as a call to "texture2D()") then I am able to find the location of TexCoord correctly.
Why does this matter, you're probably asking (because why else would you use UV coords for anything other than a texture call)? I am trying to render one pixel into a frame buffer for all the UV coordinates and then read them back again on a second pass; this is the only way I can guarantee the results I'm looking for.
TL;DR
Why does "glGetAttribLocation" return -1 for the above shaders given they compile and link without a problem?
Requested information about the code surrounding the problem area follows (I am loading about 20-25 other shaders the same way, so I'm confident the problem isn't here):
Problem lines:
mPassOneProgram = LoadShader("PCT_UV_CORRECTION_PASS_1.vert", "PCT_UV_CORRECTION_PASS_1.frag");
mPassOneUVLocation = glGetAttribLocation(mPassOneProgram, "TexCoord");
Shader loader code:
GLuint LoadShader(const char *vertex_path, const char *fragment_path) {
    GLuint vertShader = glCreateShader(GL_VERTEX_SHADER);
    GLuint fragShader = glCreateShader(GL_FRAGMENT_SHADER);

    // Read shaders
    std::string vertShaderStr = readFile(vertex_path);
    std::string fragShaderStr = readFile(fragment_path);
    const char *vertShaderSrc = vertShaderStr.c_str();
    const char *fragShaderSrc = fragShaderStr.c_str();
    GLint result = GL_FALSE;
    int logLength;

    // Compile vertex shader
    std::cout << "Compiling vertex shader." << std::endl;
    glShaderSource(vertShader, 1, &vertShaderSrc, NULL);
    glCompileShader(vertShader);

    // Check vertex shader
    glGetShaderiv(vertShader, GL_COMPILE_STATUS, &result);
    glGetShaderiv(vertShader, GL_INFO_LOG_LENGTH, &logLength);
    std::vector<char> vertShaderError(logLength);
    glGetShaderInfoLog(vertShader, logLength, NULL, &vertShaderError[0]);
    std::cout << &vertShaderError[0] << std::endl;
    OutputDebugString(&vertShaderError[0]);

    // Compile fragment shader
    std::cout << "Compiling fragment shader." << std::endl;
    glShaderSource(fragShader, 1, &fragShaderSrc, NULL);
    glCompileShader(fragShader);

    // Check fragment shader
    glGetShaderiv(fragShader, GL_COMPILE_STATUS, &result);
    glGetShaderiv(fragShader, GL_INFO_LOG_LENGTH, &logLength);
    std::vector<char> fragShaderError(logLength);
    glGetShaderInfoLog(fragShader, logLength, NULL, &fragShaderError[0]);
    std::cout << &fragShaderError[0] << std::endl;
    OutputDebugString(&vertShaderError[0]);

    std::cout << "Linking program" << std::endl;
    GLuint program = glCreateProgram();
    glAttachShader(program, vertShader);
    glAttachShader(program, fragShader);
    glLinkProgram(program);

    glGetProgramiv(program, GL_LINK_STATUS, &result);
    glGetProgramiv(program, GL_INFO_LOG_LENGTH, &logLength);
    std::vector<char> programError((logLength > 1) ? logLength : 1);
    glGetProgramInfoLog(program, logLength, NULL, &programError[0]);
    std::cout << &programError[0] << std::endl;
    OutputDebugString(&vertShaderError[0]);

    glDeleteShader(vertShader);
    glDeleteShader(fragShader);
    return program;
}
I managed to solve this by doing
gl_FrontColor = vec3(TexCoord.x, TexCoord.y, 0)
in the Vertex shader and
gl_FragColor = gl_Color;
in the Fragment shader.
This is essentially the same thing, and I still don't understand why it wasn't working before. I'm going to put this one down to a bug in the compiler, as nobody else seems to be able to find a problem.
glGetShaderiv(vertShader, GL_COMPILE_STATUS, &result);
...
glGetShaderiv(fragShader, GL_COMPILE_STATUS, &result);
...
glGetProgramiv(program, GL_LINK_STATUS, &result);
Each of these should be followed by a check to ensure that result is equal to GL_TRUE, otherwise the shader hasn't properly compiled. See here for a complete shader / program set of classes.
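A minimal sketch of that check for the vertex-shader case (the same pattern applies to the fragment shader and the program link; variable names are taken from the LoadShader code above):
glGetShaderiv(vertShader, GL_COMPILE_STATUS, &result);
if (result != GL_TRUE) {
    // stop here instead of linking a broken shader
    std::cout << "Vertex shader failed to compile" << std::endl;
    return 0;
}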

OpenGL - Problems getting shaders to compile

I finally isolated the issue of my shader not being usable: it is failing to compile.
Here is my shader loading routine. The first part reads in the shader:
void GLSLShader::LoadFromFile(GLenum whichShader, const string filename)
{
    ifstream fp;

    // Attempt to open the shader
    fp.open(filename.c_str(), ios_base::in);

    // If the file exists, load it
    if(fp)
    {
        // Copy the shader into the buffer
        string buffer(std::istreambuf_iterator<char>(fp), (std::istreambuf_iterator<char>()));

        // Debug output to show full text of shader
        errorLog.writeSuccess("Shader debug: %s", buffer.c_str());

        LoadFromString(whichShader, buffer);
    }
    else
    {
        errorLog.writeError("Could not load the shader %s", filename.c_str());
    }
}
After loading the file into a string, it sends the source on to be compiled:
void GLSLShader::LoadFromString(GLenum type, const string source)
{
    // Create the shader
    GLuint shader = glCreateShader(type);

    // Convert the string
    const char * ptmp = source.c_str();
    glShaderSource(shader, 1, &ptmp, NULL);

    // Compile the shader
    glCompileShader(shader);

    // Check to see if the shader has loaded
    GLint status;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status == GL_FALSE) {
        GLint infoLogLength;
        glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLogLength);
        GLchar *infoLog = new GLchar[infoLogLength];
        glGetShaderInfoLog(shader, infoLogLength, NULL, infoLog);
        errorLog.writeError("could not compile: %s", infoLog);
        delete [] infoLog;
    }
    _shaders[_totalShaders++] = shader;
}
I am getting the debug output "could not compile: " and it seems that the error buffer is empty. This has been driving me crazy for the past 3 days. Does anyone see my error?
Here are the very simple shaders:
#version 330
layout (location = 0) in vec4 position;
layout (location = 1) in vec4 color;
smooth out vec4 theColor;
void main()
{
gl_Position = position;
theColor = color;
}
#version 330
smooth in vec4 theColor;
out vec4 outputColor;
void main()
{
outputColor = theColor;
}
UPDATE
It looks like, for some reason, there is garbage being appended to the end of the shader in memory. I am not sure why. I have tried reading it into both a char* and a string. Here is the output:
<-!-> Shader: #version 330
layout (location = 0) in vec4 position;
layout (location = 1) in vec4 color;
smooth out vec4 theColor;
void main()
{
gl_Position = position;
theColor = color;
}
n
<-!-> Shader: #version 330
smooth in vec4 theColor;
out vec4 outputColor;
void main()
{
outputColor = theColor;
}
nts/Resources
Notice the 'n' at the end of the first shader and the 'nts/Resources' at the end of the second one. Any idea why?
ANOTHER UPDATE
The garbage at the end was caused by an extra line at the end of the file. I removed it, and the output shows the correct shader text again. Still no luck with compiling, though.
I am at a loss here. The Superbible has its extensive libraries, which are unoptimized and which I don't need. There doesn't seem to be a decent shader compiler for Mac. I really don't mind how simple it is; it just needs to work with shaders. Does anyone have an example?
You're calling glShaderSource without supplying the length of each string in the sources array. Hence the string supplied is assumed to be terminated by a null byte ('\0'). The way you're reading the file into memory does not terminate the shader source with an additional null byte. So the GLSL compiler will read beyond the end of the shader source into random memory, where it finds… garbage.
Solution: Either add a terminating null byte, or supply the shader text lengths parameter.
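A sketch of the second option, applied to the LoadFromString code above (same variable names), so the compiler never scans past the end of the source:
const char * ptmp = source.c_str();
GLint len = static_cast<GLint>(source.size());
glShaderSource(shader, 1, &ptmp, &len);   // explicit length instead of relying on a trailing '\0'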