I am developing an application in C++ with OpenGL and CMake. The structure of my directories is:
tanx
    CMakeLists.txt
    ext (contains GLFW and glad)
    src
        TanX.cpp
        TanX.h
        math (contains a couple of math methods; not very relevant for my problem)
        Shader
            Shader.h
            vertex.glsl
            fragment.glsl
            CMakeLists.txt
Shader.h is mainly supposed to read the vertex and fragment shader sources from files, compile them, and link them into a shader program. It looks like this:
#ifndef _SHADER_H
#define _SHADER_H
#include <string>
#include <exception>
#include <fstream>
#include <glad.h>
class shader_exception : std::exception {
private:
    const char* text;
public:
    shader_exception(const char* text)
        : text(text) {}
    virtual const char* what() const {
        return text;
    }
};
class Shader {
private:
    std::string* vertex_source = new std::string();
    std::string* fragment_source = new std::string();
    unsigned int vao, vbo, vertex_shader, fragment_shader, program;
public:
    Shader(const char* vertex_source_path, const char* fragment_source_path) {
        std::ifstream file_reader;
        file_reader.open(vertex_source_path);
        std::string line;
        if (file_reader.is_open()) {
            while (getline(file_reader, line)) {
                vertex_source->append(line + "\n");
            }
            file_reader.close();
        }
        else
            throw shader_exception("Could not open vertex shader source file");
        file_reader.open(fragment_source_path); // this is where I get an unhandled exception dialog box
        if (file_reader.is_open()) {
            while (getline(file_reader, line)) {
                fragment_source->append(line + "\n");
            }
            file_reader.close();
        }
        else
            throw shader_exception("Could not open fragment shader file");
        const char** vertex_source_c = (const char**)malloc(vertex_source->size());
        *vertex_source_c = vertex_source->c_str();
        const char** fragment_source_c = (const char**)malloc(fragment_source->size());
        *fragment_source_c = fragment_source->c_str();
        vertex_shader = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vertex_shader, 1, vertex_source_c, NULL);
        glCompileShader(vertex_shader);
        int success;
        glGetShaderiv(vertex_shader, GL_COMPILE_STATUS, &success);
        char infoLog[512];
        if (!success) {
            glGetShaderInfoLog(vertex_shader, 512, NULL, infoLog);
            throw shader_exception(infoLog);
        }
        glShaderSource(fragment_shader, 1, fragment_source_c, NULL);
        glCompileShader(fragment_shader);
        glGetShaderiv(fragment_shader, GL_COMPILE_STATUS, &success);
        if (!success) {
            glGetShaderInfoLog(fragment_shader, 512, NULL, infoLog);
            throw shader_exception(infoLog);
        }
        program = glCreateProgram();
        glAttachShader(program, vertex_shader);
        glAttachShader(program, fragment_shader);
        glLinkProgram(program);
        glGetProgramiv(program, GL_LINK_STATUS, &success);
        if (!success) {
            glGetProgramInfoLog(program, 512, NULL, infoLog);
            throw shader_exception(infoLog);
        }
        glDeleteShader(vertex_shader);
        glDeleteShader(fragment_shader);
    }
    Shader() = delete;
    void use()
    {
        glUseProgram(program);
    }
    ~Shader() {
        glDeleteProgram(program);
    }
};
#endif
In TanX.cpp, I try to create a Shader object like this:
Shader shader("vertex.glsl", "fragment.glsl");
As you can see, the shader source files I'd like to use are vertex.glsl and fragment.glsl, which are located in the Shader folder. In order to make them accessible to ifstream, the CMakeLists.txt file in the Shader folder looks like this (this is one of the options mentioned here for making files available to the program):
configure_file(${CMAKE_CURRENT_SOURCE_DIR}/vertex.glsl ${CMAKE_CURRENT_BINARY_DIR} COPYONLY)
configure_file(${CMAKE_CURRENT_SOURCE_DIR}/fragment.glsl ${CMAKE_CURRENT_BINARY_DIR} COPYONLY)
The top-level CMakeLists.txt file also contains
add_subdirectory("TanX/src/Shader")
so that the CMakeLists.txt file in Shader is executed.
The problem is that when I try to run my code in Visual Studio, I get an unhandled exception in Shader.h at line 45 (where it tries to open fragment.glsl). It is a shader_exception, so apparently opening vertex.glsl failed. The details in the "Exception Unhandled" dialog box are: "Unhandled exception at 0x00007FFF43E4A388 in TanX.exe: Microsoft C++ exception: shader_exception at memory location 0x00000028896FF608. occurred". Now I have two questions:
1) What do I need to do to get ifstream to open vertex.glsl (and then fragment.glsl after that)?
2) Why do I even get an unhandled exception error? Shouldn't construction of the Shader object fail as soon as the exception is thrown, so that the program never even reaches line 45?
Right off the bat, this ...
Shader* shader;
try {
*shader = Shader("vertex.glsl", "fragment.glsl");
}
... is undefined behavior, because you are dereferencing an uninitialized pointer (i.e., you haven't allocated any storage for the Shader object). This may not be the only problem but it is a critical one that must be fixed before you can debug further (because the behavior of the program afterward is unpredictable).
You probably meant to write:
shader = new Shader(...);
(And really you should use std::unique_ptr for this rather than a bare pointer, but this is veering out of the scope of the original question. Among other benefits, it makes this kind of error easier to spot and avoid.)
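For illustration, a minimal sketch of the smart-pointer version, assuming the Shader and shader_exception types from the question and a C++14 compiler for std::make_unique:
#include <memory>
// Construct the Shader on the heap; the unique_ptr owns it, so there is no
// uninitialized raw pointer to dereference and no manual delete needed.
std::unique_ptr<Shader> shader;
try {
    shader = std::make_unique<Shader>("vertex.glsl", "fragment.glsl");
}
catch (const shader_exception& e) {
    // handle or log e.what() here
}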
As for other problems, have you checked that the program is run in the directory where the .glsl files are located? Relative paths passed to file streams are interpreted relative to the program's current working directory, not relative to the location of the original source file.
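If in doubt, a quick way to check is to print the working directory at startup. A minimal sketch, assuming a C++17 compiler (the question does not say which standard is in use):
// At the top of the file:
#include <filesystem>
#include <iostream>
// Near the top of main(), before constructing the Shader:
std::cout << "cwd: " << std::filesystem::current_path() << std::endl;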
By my count, line 45 is ...
}
... so something seems to be misaligned. It would be helpful to post the actual program output and exception information (or dialog box contents) along with line numbers or an annotation showing the line the debugger is indicating.
The problem was where I put the files. I had put them into the bin folder of the Shader folder, but the way the paths are used in the header, the correct place is the bin folder of the top-level folder.
Related
I am trying to create an OpenGL framework for a game. It includes a shader function that parses a vertex shader and a fragment shader.
I use the following OpenGL call wrapper to isolate errors and trigger a debug break at the offending line:
#define ASSERT(x) if (!(x)) __debugbreak();
#define GLCall(x) GLClearError();\
x;\
ASSERT(GLLogCall(#x, __FILE__, __LINE__))
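GLClearError and GLLogCall are not shown in this post. For context, a rough sketch of how such helpers typically look with this macro pattern (an assumption, not the poster's exact code; it presumes an OpenGL loader header is already included):
#include <iostream>
// Drain any error flags left over from earlier calls.
static void GLClearError()
{
    while (glGetError() != GL_NO_ERROR);
}
// Report any error raised by the wrapped call; returning false makes ASSERT break.
static bool GLLogCall(const char* function, const char* file, int line)
{
    if (GLenum error = glGetError())
    {
        std::cout << "[OPENGL Error]: <" << error << "> " << function
                  << " " << file << ":" << line << std::endl;
        return false;
    }
    return true;
}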
When the debug break fires and "[OPENGL Error]: <1281>" is reported (1281 is GL_INVALID_VALUE), it points at this call:
GLCall(glAttachShader(program, vs));
which is used in this function:
static unsigned int CreateShader(const std::string& vertexShader, const std::string& fragmentShader)
{
    GLCall(unsigned int program = glCreateProgram());
    GLCall(unsigned int vs = CompileShader(GL_VERTEX_SHADER, vertexShader));
    GLCall(unsigned int fs = CompileShader(GL_FRAGMENT_SHADER, fragmentShader));
    GLCall(glAttachShader(program, vs));
    GLCall(glAttachShader(program, fs));
    GLCall(glLinkProgram(program));
    GLCall(glValidateProgram(program));
    GLCall(glDeleteShader(vs));
    GLCall(glDeleteShader(fs));
    return program;
}
On the console window, it displays:
"Failed to compile shader! vertex"
ERROR: 0:7 'position' : undeclared identifier
ERROR: 0:7 'assign' : cannot convert from 'float' to 'Position 4-component vector of float'
However, it still produces a blue quad on my screen.
Make sure that your shader is being properly updated whenever you're building your code. Open the build directory and look for the shader file that you're using, and ensure that changes you're making to the source shader are reflected in the build version. A common error with shaders is that they are copied when the files are first created and then not updated.
I am following the same series by TheCherno on YouTube as you, and I've been using CMake as my build system. It copied over the shader file when I first created it, but hasn't since. By adding the below line to my CMakeLists.txt I ensure that if there are any changes to the shader file that CMake will update the file.
configure_file(${PROJECT_SOURCE_DIR}/res/shaders/shader.shader ${PROJECT_BINARY_DIR}/res/shaders/shader.shader COPYONLY)
I'm having an issue with compiling GLSL code. When I try to print whether my shader was compiled correctly by using glGetShaderiv(), my program sometimes prints out the wrong result. For example, with this shader (test.vert):
#version 410
void main()
{
}
and using the following code:
#include <GL\glew.h>
#include <GLFW\glfw3.h>
#include <iostream>
#include <fstream>
#include <string>
int main() {
    glfwInit();
    GLFWwindow* window = glfwCreateWindow(200, 200, "OpenGL", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    glewInit();
    std::string fileText = "";
    std::string textBuffer = "";
    std::ifstream fileStream{ "test.vert" };
    while (fileStream.good()) {
        getline(fileStream, textBuffer);
        fileText += textBuffer;
    }
    GLuint vertShaderID = glCreateShader(GL_VERTEX_SHADER);
    const char* vertShaderText = fileText.c_str();
    glShaderSource(vertShaderID, 1, &vertShaderText, NULL);
    glCompileShader(vertShaderID);
    GLint vertCompiled;
    glGetShaderiv(vertShaderID, GL_COMPILE_STATUS, &vertCompiled);
    if (vertCompiled != GL_TRUE) {
        std::cerr << "vert shader did not compile." << std::endl;
    }
    glfwTerminate();
    system("PAUSE");
    return 0;
}
the program outputs that the shader did not compile, although I believe that it should have. I have tested many other shader programs, for example by putting a random 'a' or another letter in the middle of a word in the shader code, and I'm still getting incorrect outputs (this test had no error output).
I have also tried printing out the value of 'fileText' and it was correct (the same as in test.vert). What am I doing wrong?
I'm using a 64-bit Windows system; the supported OpenGL version is 4.40.
getline clips off the \n. That means that your entire file will not have any line breaks. It's all on one line, and therefore looks like this:
#version 410 void main() { }
That's not legal GLSL.
Please stop reading files line-by-line. If you want to read an entire file, then read the entire file.
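For example, a minimal sketch of reading a whole file into a std::string in one pass (the helper name readWholeFile is just for illustration):
#include <fstream>
#include <sstream>
#include <string>

std::string readWholeFile(const char* path)
{
    std::ifstream file(path);
    std::stringstream buffer;
    buffer << file.rdbuf();   // slurp the entire file, newlines included
    return buffer.str();
}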
I'm fairly noobish with C++ so it's probably a bit too early to get into this sorta thing, but anyway. I've been working on setting up a simple triangle in OpenGL and SDL but I have some weird things happening already. The first thing is that when I compile it I get one of three errors:
Line 194: ERROR: Compiled vertex shader was corrupt.
Line 156: ERROR: 0:1: '' : #version required and missing. ERROR: 0:1: '<' : syntax error syntax error
Line 126: ERROR: 0:1: '' : #version required and missing.
ERROR: 0:1: 'lour' : syntax error syntax error
Like literally I just repeatedly hit the build button and it seems completely random whether it displays one or the other of the messages. Ok, so that's weird but what's weirder is that when I run it in debug mode (i.e I put a breakpoint and step through everything) it works perfectly, for no apparent reason.
So, I've come to the conclusion that I've done something stupid with memory, but in any case here is the relevant code (ask if you want to see more):
Read File Function
std::string readFile(std::string name) {
    std::ifstream t1(name.c_str());
    if (t1.fail()) {
        std::cout << "Error loading stream" << "\n";
    }
    std::stringstream buffer;
    buffer << t1.rdbuf();
    std::string src = buffer.str();
    std::cout << src << std::endl;
    return src;
}
Compile Shaders
const char *vSource = readFile("vertexShader.gl").c_str();
// Read fragment shader
const char *fSource = readFile("fragmentShader.gl").c_str();
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vSource, NULL);
glCompileShader(vs);
GLint isCompiled = 0;
glGetShaderiv(vs, GL_COMPILE_STATUS, &isCompiled);
if(isCompiled == GL_FALSE) { // ... Error stuff here...
N.B. there's a fragment shader compiling bit that looks exactly the same. Lines 126 and 156 are in the error-handling part of the vertex and fragment shader blocks respectively.
Link Shaders
shader_programme = glCreateProgram();
glAttachShader(shader_programme, vs);
glAttachShader(shader_programme, fs);
glLinkProgram(shader_programme);
GLint isLinked = 0;
glGetProgramiv(shader_programme, GL_LINK_STATUS, (int *)&isLinked);
if(isLinked == GL_FALSE) { // ... Error stuff ... }
glDetachShader(shader_programme, vs);
glDetachShader(shader_programme, fs);
You can see the shaders if you want, but they work (as in the fragment shader shows the correct colour in debug mode), so I don't think that they are the problem.
I'm on OSX using SDL2, OpenGL v3.2, GLSL v1.50 and Xcode 4 (and yes I did the text file thing if you know what I mean).
Sorry for posting a lot of code - if anyone has any tips on how to debug memory leaks in Xcode that might help, but thanks anyway :)
You throw away the strings as soon as you read them, so vSource ends up referencing freed memory (undefined behavior): the temporary std::string returned by readFile is destroyed at the end of the expression, leaving the pointer from c_str() dangling. Instead, keep the sources in named std::string variables for as long as you need the char* to remain valid:
std::string vSource = readFile("vertexShader.gl");
const char* vSourcecharp = vSource.c_str();
// Read fragment shader
std::string fSource = readFile("fragmentShader.gl");
const char* fSourcecharp = fSource.c_str();
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vSourcecharp, NULL);
glCompileShader(vs);
I am finding that QGLShaderProgram is consistently failing to compile any shader and providing no error log. Here are the symptoms:
QGLShaderProgram reports that it failed to compile but produces an empty error log. If I try to bind the shader an exception is thrown.
I can compile a shader using glCompileShader without problem. However, the first time I try to compile this way after QGLShaderProgram has failed, it fails with this error log:
ERROR: error(#270) Internal error: Wrong symbol table level
ERROR: 0:2: error(#232) Function declarations cannot occur inside of functions:
main
ERROR: error(#273) 2 compilation errors. No code generated
Following that one failure, the next time I try to compile using glCompileShader, it works fine.
The problem has arisen only since upgrading from Qt 4.8 to 5.2. Nothing else has changed on this machine.
I have tested on two PCs, one with an ATI Radeon HD 5700, the other with an AMD FirePro V7900. The problem only appears on the Radeon PC.
Here is my test code demonstrating the problem:
main.cpp
#include <QApplication>
#include "Test.h"
int main(int argc, char* argv[])
{
    QApplication* app = new QApplication(argc, argv);
    Drawer* drawer = new Drawer;
    return app->exec();
}
Test.h
#pragma once
#include <qobject>
#include <QTimer>
#include <QWindow>
#include <QOpenGLContext>
#include <QOpenGLFunctions>
class Drawer : public QWindow, protected QOpenGLFunctions
{
    Q_OBJECT;
public:
    Drawer();
    QTimer* mTimer;
    QOpenGLContext* mContext;
    int frame;
public Q_SLOTS:
    void draw();
};
Test.cpp
#include "Test.h"
#include <QGLShaderProgram>
#include <iostream>
#include <ostream>
using namespace std;
Drawer::Drawer()
    : mTimer(new QTimer)
    , mContext(new QOpenGLContext)
    , frame(0)
{
    mContext->create();
    setSurfaceType(OpenGLSurface);
    mTimer->setInterval(40);
    connect(mTimer, SIGNAL(timeout()), this, SLOT(draw()));
    mTimer->start();
    show();
}
const char* vertex = "#version 110 \n void main() { gl_Position = gl_Vertex; }";
const char* fragment = "#version 110 \n void main() { gl_FragColor = vec4(0.0,0.0,0.0,0.0); }";
void Drawer::draw()
{
    mContext->makeCurrent(this);
    if (frame == 0) {
        initializeOpenGLFunctions();
    }
    // Compile using QGLShaderProgram. This always fails
    if (frame < 5)
    {
        QGLShaderProgram* prog = new QGLShaderProgram;
        bool f = prog->addShaderFromSourceCode(QGLShader::Fragment, fragment);
        cout << "fragment " << f << endl;
        bool v = prog->addShaderFromSourceCode(QGLShader::Vertex, vertex);
        cout << "vertex " << v << endl;
        bool link = prog->link();
        cout << "link " << link << endl;
    }
    // Manual compile using OpenGL direct. This works except for the first time it
    // follows the above block
    {
        GLuint prog = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(prog, 1, &fragment, 0);
        glCompileShader(prog);
        GLint success = 0;
        glGetShaderiv(prog, GL_COMPILE_STATUS, &success);
        GLint logSize = 0;
        glGetShaderiv(prog, GL_INFO_LOG_LENGTH, &logSize);
        GLchar* log = new char[8192];
        glGetShaderInfoLog(prog, 8192, 0, log);
        cout << "manual compile " << success << endl << log << endl;
        delete[] log;
    }
    glClearColor(1, 1, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    mContext->swapBuffers(this);
    frame++;
}
Elsewhere, I have tested using QGLWidget, and on a project that uses GLEW instead of QOpenGLFunctions with exactly the same results.
The version of Qt I'm linking against was built with the following configuration:
configure -developer-build -opensource -nomake examples -nomake tests -mp -opengl desktop -icu -confirm-license
Any suggestions? Or shall I just send this in as a bug report?
Update
In response to peppe's comments:
1) What does QOpenGLDebugLogger says?
The only thing I can get from QOpenGLDebugLogger is
QWindowsGLContext::getProcAddress: Unable to resolve 'glGetPointerv'
This is printed when I initialize it (and not as a debug event firing, but just to console). It happens even though mContext->hasExtension(QByteArrayLiteral("GL_KHR_debug")) returns true and I'm initializing it within the first frame's draw() function.
2) Can you print the compile log of the QOGLShaders even if they compile successfully?
I cannot successfully compile QOpenGLShader or QGLShader at any point so I'm not able to test this. However, when compiling successfully using plain GL functions, the log returns blank.
3) Which GL version did you get from the context? (Check with QSurfaceFormat).
I've tried with versions 3.0, 3.2, 4.2, all with the same result.
4) Please set the same QSurfaceFormat on both the context and the window before creating them
5) Remember to create() the window
I've implemented both of these now and the result is the same.
I've just tested on a third PC and that has no issues. So it is this specific computer which, incidentally, happens to be a Mac Pro running Windows in bootcamp. It has had absolutely no trouble in any other context running the latest ATI drivers but I can only really conclude that there is a bug somewhere between the ATI drivers, this computer's graphics chip and QOpenGLShaderProgram.
I think I'm unlikely to find a solution, so giving up. Thank you for all your input!
I'm having a problem with my shader loading code. The bizarre thing that's confusing me is that it works maybe once in 5 times, but then only sort of works. For instance, it'll load the frag shader, but then texturing won't work properly (it'll draw a strange semblance of the texture over the geometry instead). I think the problem is with the loading code, so that's what my question is about. Can anyone spot an error I haven't found in the code below?
char* vs, * fs;
vertexShaderHandle = glCreateShader(GL_VERTEX_SHADER);
fragmentShaderHandle = glCreateShader(GL_FRAGMENT_SHADER);
long sizeOfVShaderFile = getSizeOfFile(VERTEX_SHADER_FILE_NAME);
long sizeOfFShaderFile = getSizeOfFile(FRAGMENT_SHADER_FILE_NAME);
if(sizeOfVShaderFile == -1)
{
    cerr << VERTEX_SHADER_FILE_NAME << " is null! Exiting..." << endl;
    return;
}
if(sizeOfFShaderFile == -1)
{
    cerr << FRAGMENT_SHADER_FILE_NAME << " is null! Exiting..." << endl;
    return;
}
vs = readFile(VERTEX_SHADER_FILE_NAME);
fs = readFile(FRAGMENT_SHADER_FILE_NAME);
const char* vv = vs, *ff = fs;
glShaderSource(vertexShaderHandle , 1, &vv, NULL);
cout << "DEBUGGING SHADERS" << endl;
cout << "VERTEX SHADER: ";
printShaderInfoLog(vertexShaderHandle);
cout << endl;
glShaderSource(fragmentShaderHandle, 1, &ff, NULL);
cout << "FRAGMENT SHADER: ";
printShaderInfoLog(fragmentShaderHandle);
cout << endl;
glCompileShader(vertexShaderHandle);
cout << "VERTEX SHADER: ";
printShaderInfoLog(vertexShaderHandle);
cout << endl;
glCompileShader(fragmentShaderHandle);
cout << "FRAGMENT SHADER: ";
printShaderInfoLog(fragmentShaderHandle);
cout << endl;
programHandle = glCreateProgram();
cout << "DEBUGGING PROGRAM" << endl;
glAttachShader(programHandle, vertexShaderHandle);
printProgramInfoLog(programHandle);
glAttachShader(programHandle, fragmentShaderHandle);
printProgramInfoLog(programHandle);
glLinkProgram(programHandle);
printProgramInfoLog(programHandle);
glUseProgram(programHandle);
printProgramInfoLog(programHandle);
delete[] vs; delete[] fs;
Here's the readFile function:
char* readFile(const char* path)
{
    unsigned int fileSize = getSizeOfFile(path);
    char* file_data = new char[fileSize];
    ifstream input_stream;
    input_stream.open(path, ios::binary);
    input_stream.read(file_data, fileSize);
    input_stream.close();
    //this is deleted at the end of the shader code
    return file_data;
}
All of the below messages are from the exact same executable (no rebuild).
Here's the first possible error message:
BallGLWidget::initializeGL called
DEBUGGING SHADERS
VERTEX SHADER:
FRAGMENT SHADER:
VERTEX SHADER: ERROR: 0:17: '<' : syntax error syntax error
FRAGMENT SHADER:
DEBUGGING PROGRAM
ERROR: One or more attached shaders not successfully compiled
ERROR: One or more attached shaders not successfully compiled
glGetError enum value: GL_NO_ERROR
Another possible error message:
BallGLWidget::initializeGL called
DEBUGGING SHADERS
VERTEX SHADER:
FRAGMENT SHADER:
VERTEX SHADER: ERROR: 0:17: 'tt' : syntax error syntax error
FRAGMENT SHADER: ERROR: 0:33: '?' : syntax error syntax error
DEBUGGING PROGRAM
ERROR: One or more attached shaders not successfully compiled
ERROR: One or more attached shaders not successfully compiled
Here's the output when it works (maybe 1 in 5 or 6 times)
BallGLWidget::initializeGL called
DEBUGGING SHADERS
VERTEX SHADER:
FRAGMENT SHADER:
VERTEX SHADER:
FRAGMENT SHADER:
DEBUGGING PROGRAM
Image format is GL_RGB
Checking textures...
glGetError enum value: GL_NO_ERROR
I seriously doubt it's the shaders themselves since they do work sometimes... and the reported errors are garbage.
If any more information would be helpful I'll gladly provide it.
EDIT: Here's the shaders
The vertex shader:
attribute vec2 a_v_position;
attribute vec2 a_tex_position;
varying vec2 tex_coord_output;
void main()
{
    tex_coord_output = a_tex_position;
    gl_Position = vec4(a_v_position, 0.0, 1.0);
}
The fragment shader:
varying vec2 tex_coord_output;
uniform sampler2D ballsampler;
void main()
{
    gl_FragColor = texture2D(ballsampler, tex_coord_output);
}
Your question is a duplicate of Getting garbage chars when reading GLSL files and here's my answer to it:
You're using C++, so I suggest you leverage that. Instead of reading into a self-allocated char array, I suggest you read into a std::string:
#include <string>
#include <fstream>
std::string loadFileToString(char const * const fname)
{
    std::ifstream ifile(fname);
    std::string filetext;
    while( ifile.good() ) {
        std::string line;
        std::getline(ifile, line);
        filetext.append(line + "\n");
    }
    return filetext;
}
That automatically takes care of all memory allocation and proper delimiting -- the keyword is RAII: Resource Acquisition Is Initialization. Later on you can upload the shader source with something like
void glcppShaderSource(GLuint shader, std::string const &shader_string)
{
    GLchar const *shader_source = shader_string.c_str();
    GLint const shader_length = shader_string.size();
    glShaderSource(shader, 1, &shader_source, &shader_length);
}
void load_shader(GLuint shaderobject, char * const shadersourcefilename)
{
    glcppShaderSource(shaderobject, loadFileToString(shadersourcefilename));
}
You are reading the files, but as far as I can see you are not zero-terminating the text. Try allocating filesize+1 bytes and setting the last char to zero.
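For illustration, here is the question's readFile with that fix applied; the names (getSizeOfFile and the delete[] convention) are taken from the question, and the exact form of the fix is a suggested sketch rather than a definitive implementation:
char* readFile(const char* path)
{
    unsigned int fileSize = getSizeOfFile(path);
    char* file_data = new char[fileSize + 1];   // one extra byte for the terminator
    std::ifstream input_stream(path, std::ios::binary);
    input_stream.read(file_data, fileSize);
    input_stream.close();
    file_data[fileSize] = '\0';                 // glShaderSource with a NULL length expects a C string
    return file_data;                           // caller still owns the buffer and must delete[] it
}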